Sample records for Latin hypercube sampling (LHS)

  1. A user's guide to Sandia's Latin hypercube sampling software: LHS UNIX library/standalone version.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Wyss, Gregory Dane

    2004-07-01

    This document is a reference guide for the UNIX Library/Standalone version of the Latin Hypercube Sampling Software. This software has been developed to generate Latin hypercube multivariate samples. This version runs on Linux or UNIX platforms. This manual covers the use of the LHS code in a UNIX environment, run either as a standalone program or as a callable library. The underlying code in the UNIX Library/Standalone version of LHS is almost identical to the updated Windows version of LHS released in 1998 (SAND98-0210). However, some modifications were made to customize it for a UNIX environment and as a library that is called from the DAKOTA environment. This manual covers the use of the LHS code as a library and in the standalone mode under UNIX.

  2. Monitoring and identification of spatiotemporal landscape changes in multiple remote sensing images by using a stratified conditional Latin hypercube sampling approach and geostatistical simulation.

    PubMed

    Lin, Yu-Pin; Chu, Hone-Jay; Huang, Yu-Long; Tang, Chia-Hsi; Rouhani, Shahrokh

    2011-06-01

    This study develops a stratified conditional Latin hypercube sampling (scLHS) approach for multiple, remotely sensed, normalized difference vegetation index (NDVI) images. The objective is to sample, monitor, and delineate spatiotemporal landscape changes, including spatial heterogeneity and variability, in a given area. The scLHS approach, which is based on the variance quadtree technique (VQT) and the conditional Latin hypercube sampling (cLHS) method, selects samples in order to delineate landscape changes from multiple NDVI images. The images are then mapped for calibration and validation by using sequential Gaussian simulation (SGS) with the scLHS selected samples. Spatial statistical results indicate that in terms of their statistical distribution, spatial distribution, and spatial variation, the statistics and variograms of the scLHS samples resemble those of multiple NDVI images more closely than those of cLHS and VQT samples. Moreover, the accuracy of simulated NDVI images based on SGS with scLHS samples is significantly better than that of simulated NDVI images based on SGS with cLHS samples and VQT samples. Furthermore, the proposed approach efficiently monitors the spatial characteristics of landscape changes, including the statistics, spatial variability, and heterogeneity of NDVI images. In addition, SGS with the scLHS samples effectively reproduces spatial patterns and landscape changes in multiple NDVI images.

  3. Latin Hypercube Sampling (LHS) UNIX Library/Standalone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2004-05-13

    The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS). Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions; LHS UNIX Library/Standalone thus provides a way to generate multivariate samples. The LHS samples can be generated either through a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. LHS is a constrained Monte Carlo sampling scheme. In LHS, the range of each variable is divided into non-overlapping intervals on the basis of equal probability, and a sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, the n values obtained for each variable are paired in a random manner with the n values of the other variables; in some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty and sensitivity analysis, random values are drawn from the input parameter distributions and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
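
    As a concrete illustration of the scheme described above, here is a minimal sketch in Python/NumPy (not the Sandia LHS library itself; the two example marginal distributions are arbitrary):

    ```python
    import numpy as np
    from scipy import stats

    def latin_hypercube(n_samples, n_vars, rng=None):
        """Draw an (n_samples x n_vars) Latin hypercube sample on [0, 1)."""
        rng = np.random.default_rng(rng)
        # One random point inside each of the n equal-probability strata.
        u = (np.arange(n_samples)[:, None]
             + rng.random((n_samples, n_vars))) / n_samples
        # Pair the per-variable values randomly by permuting each column.
        for j in range(n_vars):
            u[:, j] = rng.permutation(u[:, j])
        return u

    # Map the uniform sample onto arbitrary marginals via inverse CDFs,
    # e.g. a normal and a lognormal input (hypothetical distributions).
    u = latin_hypercube(10, 2, rng=42)
    x = np.column_stack([stats.norm(10.0, 2.0).ppf(u[:, 0]),
                         stats.lognorm(0.5).ppf(u[:, 1])])
    ```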

  4. Extension of Latin hypercube samples with correlated variables.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hora, Stephen Curtis; Helton, Jon Craig; Sallaberry, Cedric J.

    2006-11-01

    A procedure for extending the size of a Latin hypercube sample (LHS) with rank correlated variables is described and illustrated. The extension procedure starts with an LHS of size m and associated rank correlation matrix C and constructs a new LHS of size 2m that contains the elements of the original LHS and has a rank correlation matrix that is close to the original rank correlation matrix C. The procedure is intended for use in conjunction with uncertainty and sensitivity analysis of computationally demanding models in which it is important to make efficient use of a necessarily limited number of model evaluations.
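
    The stratum-refinement idea behind such an extension can be sketched as follows. This is a simplified illustration that doubles a sample by filling the vacant half-strata; it omits the rank-correlation restoration (e.g. Iman-Conover style re-pairing) that the report's procedure performs:

    ```python
    import numpy as np

    def extend_lhs(u, rng=None):
        """Double an m-point LHS on [0, 1)^k: each original stratum is
        split in two, and a new point goes in whichever half the old
        point left empty. Correlation restoration is omitted here."""
        rng = np.random.default_rng(rng)
        m, k = u.shape
        new = np.empty_like(u)
        for j in range(k):
            coarse = np.floor(u[:, j] * m).astype(int)   # old stratum index
            in_upper = (u[:, j] * m - coarse) >= 0.5     # half the old point uses
            empty = 2 * coarse + (~in_upper)             # the vacant fine stratum
            new[:, j] = (empty + rng.random(m)) / (2 * m)
            new[:, j] = rng.permutation(new[:, j])       # random re-pairing
        return np.vstack([u, new])                       # a 2m-point LHS

    # Demo: build a tiny 2-D LHS, then double it.
    rng = np.random.default_rng(0)
    u = (np.arange(8)[:, None] + rng.random((8, 2))) / 8
    for j in range(2):
        u[:, j] = rng.permutation(u[:, j])
    u2 = extend_lhs(u, rng=1)        # 16 points, still a Latin hypercube
    ```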

  5. Latin Hypercube Sampling (LHS) at variable resolutions for enhanced watershed scale Soil Sampling and Digital Soil Mapping.

    NASA Astrophysics Data System (ADS)

    Hamalainen, Sampsa; Geng, Xiaoyuan; He, Juanxia

    2017-04-01

    The Latin Hypercube Sampling (LHS) approach to assisting Digital Soil Mapping has been under development for some time; the purpose of this work was to complement LHS with the use of multiple spatial resolutions of covariate datasets and with variability in the number of sampling points produced. This allowed specific sets of LHS points to be produced to fulfil the needs of various partners from multiple projects working in the Ontario and Prince Edward Island provinces of Canada. Secondary soil and environmental attributes are critical inputs required in the development of sampling points by LHS. These include a required Digital Elevation Model (DEM) and subsequent covariate datasets produced by digital terrain analysis performed on the DEM. These additional covariates often include, but are not limited to, Topographic Wetness Index (TWI), Length-Slope (LS) Factor, and Slope, which are continuous data. The number of points created by LHS ranged from 50 to 200, depending on the size of the watershed and, more importantly, on the number of soil types found within it. The spatial resolution of covariates included within the work ranged from 5 to 30 m. The iterations within the LHS sampling were run at a level at which the LHS model provided a good spatial representation of the environmental attributes within the watershed. Additional covariates that are categorical in nature, such as external surficial geology data, were also included in the LHS approach. Initial results include the use of 1000 iterations within the LHS model; 1000 iterations consistently proved a reasonable value for producing sampling points that gave a good spatial representation of the environmental attributes. When working at the same spatial resolution of covariates but modifying only the desired number of sampling points, the changes in point locations showed a strong geospatial relationship for continuous data. Access to agricultural fields and adjacent land uses is often cited as the greatest deterrent to performing soil sampling for both soil survey and soil attribute validation work; the lack of access can result from poor road access and/or geographical conditions that are difficult for field staff to navigate. This is a simple yet persistent issue for the scientific community and, in particular, for soils professionals. A future contribution to the LHS approach will be to ease access to the sampling points: by removing inaccessible locations from the DEM at the outset, the LHS model can be restricted to locations reachable from an adjacent road or trail. To further the approach, a road-network geospatial dataset can be included within Geographic Information Systems (GIS) applications to reach already-produced points by a shortest-distance network method.

  6. Efficiency enhancement of optimized Latin hypercube sampling strategies: Application to Monte Carlo uncertainty analysis and meta-modeling

    NASA Astrophysics Data System (ADS)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans

    2015-02-01

    The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then, by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this relative improvement decreases with increasing number of sample points and input parameter dimensions. Since the computational time and efforts for generating the sample designs in the two approaches are identical, the use of midpoint LHS as the initial design in OLHS is thus recommended.
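
    The two initialization choices differ only in where each point sits inside its stratum; everything else, including the subsequent space-filling optimization (omitted here), is identical. A minimal sketch, with a maximin criterion as one example of what the optimizer would then improve:

    ```python
    import numpy as np

    def initial_lhs(n, k, midpoint=True, rng=None):
        """Initial OLHS design: stratum midpoints (midpoint LHS) or
        uniform draws within strata (random LHS), columns permuted
        independently to pair the variables."""
        rng = np.random.default_rng(rng)
        offset = np.full((n, k), 0.5) if midpoint else rng.random((n, k))
        u = (np.arange(n)[:, None] + offset) / n
        for j in range(k):
            u[:, j] = rng.permutation(u[:, j])
        return u

    def maximin(u):
        """Minimum pairwise distance, a typical space-filling criterion
        that the OLHS optimizer would maximize."""
        d = np.linalg.norm(u[:, None, :] - u[None, :, :], axis=-1)
        return d[np.triu_indices(len(u), k=1)].min()

    print(maximin(initial_lhs(20, 2, midpoint=True, rng=0)),
          maximin(initial_lhs(20, 2, midpoint=False, rng=0)))
    ```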

  7. Latin hypercube approach to estimate uncertainty in ground water vulnerability

    USGS Publications Warehouse

    Gurdak, J.J.; McCray, J.E.; Thyne, G.; Qi, S.L.

    2007-01-01

    A methodology is proposed to quantify prediction uncertainty associated with ground water vulnerability models that were developed through an approach that coupled multivariate logistic regression with a geographic information system (GIS). This method uses Latin hypercube sampling (LHS) to illustrate the propagation of input error and estimate uncertainty associated with the logistic regression predictions of ground water vulnerability. Central to the proposed method is the assumption that prediction uncertainty in ground water vulnerability models is a function of input error propagation from uncertainty in the estimated logistic regression model coefficients (model error) and the values of explanatory variables represented in the GIS (data error). Input probability distributions that represent both model and data error sources of uncertainty were simultaneously sampled using a Latin hypercube approach with logistic regression calculations of probability of elevated nonpoint source contaminants in ground water. The resulting probability distribution represents the prediction intervals and associated uncertainty of the ground water vulnerability predictions. The method is illustrated through a ground water vulnerability assessment of the High Plains regional aquifer. Results of the LHS simulations reveal significant prediction uncertainties that vary spatially across the regional aquifer. Additionally, the proposed method enables a spatial deconstruction of the prediction uncertainty that can lead to improved prediction of ground water vulnerability. © 2007 National Ground Water Association.
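
    Schematically, the propagation step looks like the following sketch, in which the logistic-regression coefficients, their standard errors, and the explanatory-variable uncertainty are hypothetical placeholders rather than the study's fitted values:

    ```python
    import numpy as np
    from scipy import stats

    def lhs_uniform(n, k, rng):
        u = (np.arange(n)[:, None] + rng.random((n, k))) / n
        for j in range(k):
            u[:, j] = rng.permutation(u[:, j])
        return u

    rng = np.random.default_rng(0)
    u = lhs_uniform(1000, 3, rng)
    # Model error: logistic-regression coefficients with standard errors
    # (the values below are made up for illustration).
    b0 = stats.norm(-2.0, 0.30).ppf(u[:, 0])
    b1 = stats.norm(0.80, 0.15).ppf(u[:, 1])
    # Data error: an uncertain explanatory variable at one grid cell.
    x1 = stats.norm(1.5, 0.20).ppf(u[:, 2])
    # Probability of elevated nonpoint-source contamination at that cell.
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x1)))
    print(np.percentile(p, [5, 50, 95]))   # prediction interval
    ```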

  8. Approximation of the exponential integral (well function) using sampling methods

    NASA Astrophysics Data System (ADS)

    Baalousha, Husam Musa

    2015-04-01

    The exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximating the exponential integral, based on sampling methods. Three different sampling methods have been used to approximate the function: Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH). Different argument values, covering a wide range, have been used. The results of the sampling methods were compared with results obtained by the Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was on the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology such as the leaky aquifer integral.
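
    The paper's exact formulations are not reproduced here, but the underlying idea, recasting the integral as an expectation and estimating it from a stratified sample, can be sketched. Substituting t = u + s gives W(u) = E1(u) = exp(-u) E[1/(u + S)] with S ~ Exp(1), so an LHS estimate is:

    ```python
    import numpy as np
    from scipy.special import exp1     # benchmark, as Mathematica was used

    def well_function_lhs(u, n=10_000, rng=None):
        """LHS estimate of the well function W(u) = E1(u), using the
        identity E1(u) = exp(-u) * E[1/(u + S)] with S ~ Exp(1)."""
        rng = np.random.default_rng(rng)
        q = (np.arange(n) + rng.random(n)) / n   # one draw per stratum
        s = -np.log1p(-q)                        # inverse CDF of Exp(1)
        return np.exp(-u) * np.mean(1.0 / (u + s))

    print(well_function_lhs(0.5, rng=1), exp1(0.5))  # should agree closely
    ```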

  9. The study of combining Latin Hypercube Sampling method and LU decomposition method (LULHS method) for constructing spatial random field

    NASA Astrophysics Data System (ADS)

    WANG, P. T.

    2015-12-01

    Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Due to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. Hydrogeological properties are assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed. Statistical sampling therefore plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure from LHS with simulation by LU decomposition to form LULHS. Both conditional and unconditional simulations of LULHS were developed. The simulation efficiency and spatial correlation of LULHS are compared to three other simulation methods. The results show that for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort: fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
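
    A minimal sketch of the combination, using a Cholesky factor as the lower-triangular decomposition and an exponential covariance on a small 1-D grid (an unconditional illustration under assumed parameters, not the authors' code):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    # Small 1-D grid with exponential covariance C(h) = exp(-|h|/10).
    x = np.arange(50, dtype=float)
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0)
    L = np.linalg.cholesky(C)            # lower-triangular factor, C = L L^T

    # LHS draws of independent standard normals: across the n_real
    # realizations, each grid node's deviates are stratified.
    n_real = 20
    q = (np.arange(n_real)[:, None] + rng.random((n_real, x.size))) / n_real
    for j in range(x.size):
        q[:, j] = rng.permutation(q[:, j])
    z = stats.norm.ppf(q)                # n_real x 50 N(0,1) deviates
    fields = z @ L.T                     # each row: one correlated realization
    ```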

  10. Comparison of Drainmod Based Watershed Scale Models

    Treesearch

    Glenn P. Fernandez; George M. Chescheir; R. Wayne Skaggs; Devendra M. Amatya

    2004-01-01

    Watershed scale hydrology and water quality models (DRAINMOD-DUFLOW, DRAINMOD-W, DRAINMOD-GIS and WATGIS) that describe the nitrogen loadings at the outlet of poorly drained watersheds were examined with respect to their accuracy and uncertainty in model predictions. Latin Hypercube Sampling (LHS) was applied to determine the impact of uncertainty in estimating field...

  11. Progressive Sampling Technique for Efficient and Robust Uncertainty and Sensitivity Analysis of Environmental Systems Models: Stability and Convergence

    NASA Astrophysics Data System (ADS)

    Sheikholeslami, R.; Hosseini, N.; Razavi, S.

    2016-12-01

    Modern earth and environmental models are usually characterized by a large parameter space and high computational cost. These two features prevent effective implementation of sampling-based analysis such as sensitivity and uncertainty analysis, which require running these computationally expensive models several times to adequately explore the parameter/problem space. Therefore, developing efficient sampling techniques that scale with the size of the problem, computational budget, and users' needs is essential. In this presentation, we propose an efficient sequential sampling strategy, called Progressive Latin Hypercube Sampling (PLHS), which provides an increasingly improved coverage of the parameter space, while satisfying pre-defined requirements. The original Latin hypercube sampling (LHS) approach generates the entire sample set in one stage; on the contrary, PLHS generates a series of smaller sub-sets (also called 'slices') such that: (1) each sub-set is a Latin hypercube and achieves maximum stratification in any one-dimensional projection; (2) the progressive addition of sub-sets remains a Latin hypercube; and thus (3) the entire sample set is a Latin hypercube. Therefore, it has the capability to preserve the intended sampling properties throughout the sampling procedure. PLHS is deemed advantageous over the existing methods, particularly because it largely avoids over- or under-sampling. Through different case studies, we show that PLHS has multiple advantages over the one-stage sampling approaches, including improved convergence and stability of the analysis results with fewer model runs. In addition, PLHS can help to minimize the total simulation time by only running the simulations necessary to achieve the desired level of quality (e.g., accuracy, and convergence rate).
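
    The full PLHS construction is an optimized algorithm, but its defining property, that each slice is a Latin hypercube while the union remains one, can be illustrated with a simple two-slice construction (a toy under stated assumptions, not the authors' method): in every dimension, each coarse stratum donates one of its two fine strata to each slice.

    ```python
    import numpy as np

    def two_slice_lhs(m, k, rng=None):
        """Two m-point designs, each a Latin hypercube, whose union is
        a 2m-point Latin hypercube: per dimension, every coarse stratum
        gives one of its two fine strata to each slice."""
        rng = np.random.default_rng(rng)
        slices = [np.empty((m, k)) for _ in range(2)]
        for j in range(k):
            half = rng.integers(0, 2, size=m)       # fine half owned by slice 0
            for s, own in ((0, half), (1, 1 - half)):
                coarse = rng.permutation(m)         # independent row order
                fine = 2 * coarse + own[coarse]
                slices[s][:, j] = (fine + rng.random(m)) / (2 * m)
        return slices

    s0, s1 = two_slice_lhs(5, 2, rng=0)
    u = np.vstack([s0, s1])   # ten points, one in each of the 10 fine strata
    ```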

  12. Hybrid real-code ant colony optimisation for constrained mechanical design

    NASA Astrophysics Data System (ADS)

    Pholdee, Nantiwat; Bureerat, Sujin

    2016-01-01

    This paper proposes a hybrid meta-heuristic based on integrating a local search simplex downhill (SDH) method into the search procedure of real-code ant colony optimisation (ACOR). This hybridisation leads to five hybrid algorithms where a Monte Carlo technique, a Latin hypercube sampling technique (LHS) and a translational propagation Latin hypercube design (TPLHD) algorithm are used to generate an initial population. Also, two numerical schemes for selecting an initial simplex are investigated. The original ACOR and its hybrid versions along with a variety of established meta-heuristics are implemented to solve 17 constrained test problems where a fuzzy set theory penalty function technique is used to handle design constraints. The comparative results show that the hybrid algorithms are the top performers. Using the TPLHD technique gives better results than the other sampling techniques. The hybrid optimisers are a powerful design tool for constrained mechanical design problems.

  13. An optimization based sampling approach for multiple metrics uncertainty analysis using generalized likelihood uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng

    2016-09-01

    This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach, with the aim of improving sampling efficiency for multiple metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple metrics performance, parameter uncertainty and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) it performs more effectively and efficiently than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is roughly nine times shorter; (2) the Pareto tradeoffs between metrics are demonstrated clearly with the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, which means better forecasting accuracy of ɛ-NSGAII parameter sets; (3) the parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in the appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) the forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.

  14. Variability And Uncertainty Analysis Of Contaminant Transport Model Using Fuzzy Latin Hypercube Sampling Technique

    NASA Astrophysics Data System (ADS)

    Kumar, V.; Nayagum, D.; Thornton, S.; Banwart, S.; Schuhmacher, M.; Lerner, D.

    2006-12-01

    Characterization of uncertainty associated with groundwater quality models is often of critical importance, for example where environmental models are employed in risk assessment. Insufficient data, inherent variability and estimation errors of environmental model parameters introduce uncertainty into model predictions. However, uncertainty analysis using conventional methods such as standard Monte Carlo sampling (MCS) may not be efficient, or even suitable, for complex, computationally demanding models involving parametric variability and uncertainty of different natures. General MCS, or variants of MCS such as Latin Hypercube Sampling (LHS), treat variability and uncertainty as a single random entity, and the generated samples are treated as crisp values, with vagueness assumed to be randomness. Also, when the models are used as purely predictive tools, uncertainty and variability lead to the need for assessment of the plausible range of model outputs. An improved, systematic variability and uncertainty analysis can provide insight into the level of confidence in model estimates, and can aid in assessing how various possible model estimates should be weighed. The present study introduces Fuzzy Latin Hypercube Sampling (FLHS), a hybrid approach incorporating cognitive and noncognitive uncertainties. Noncognitive uncertainty, such as physical randomness or statistical uncertainty due to limited information, can be described by its own probability density function (PDF), whereas cognitive uncertainty, such as estimation error, can be described by a membership function for its fuzziness, with confidence intervals given by α-cuts. An important property of this theory is its ability to merge the inexact generated data of the LHS approach to increase the quality of information. The FLHS technique ensures that the entire range of each variable is sampled with proper incorporation of uncertainty and variability. A fuzzified statistical summary of the model results will produce indices of sensitivity and uncertainty that relate the effects of heterogeneity and uncertainty of input variables to model predictions. The feasibility of the method is validated by assessing uncertainty propagation of parameter values in estimating the contamination level of a drinking water supply well due to transport of dissolved phenolics from a contaminated site in the UK.
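
    A toy sketch of how the two kinds of uncertainty might be combined (the lognormal distribution, the triangular fuzzy number, and the monotone one-line "model" are all invented for illustration and are not the study's FLHS algorithm):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    n = 500

    # Noncognitive input sampled by LHS from a PDF (hypothetical lognormal).
    q = rng.permutation((np.arange(n) + rng.random(n)) / n)
    k_perm = stats.lognorm(0.4, scale=1e-6).ppf(q)   # e.g. permeability

    # Cognitive input: a triangular fuzzy number (low, mode, high); each
    # alpha-cut yields an interval of plausible values.
    lo, mode, hi = 1.0, 2.0, 4.0
    def alpha_cut(a):
        return lo + a * (mode - lo), hi - a * (hi - mode)

    def model(k, R):
        return k / R    # placeholder transport model, monotone in R

    # Fuzzy summary: at each alpha level, an interval for the mean output.
    for a in (0.0, 0.5, 1.0):
        r_lo, r_hi = alpha_cut(a)
        print(f"alpha={a}: mean in [{model(k_perm, r_hi).mean():.3e}, "
              f"{model(k_perm, r_lo).mean():.3e}]")
    ```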

  15. Soil organic carbon content assessment in a heterogeneous landscape: comparison of digital soil mapping and visible and near Infrared spectroscopy approaches

    NASA Astrophysics Data System (ADS)

    Michot, Didier; Fouad, Youssef; Pascal, Pichelin; Viaud, Valérie; Soltani, Inès; Walter, Christian

    2017-04-01

    The aims of this study are: i) to assess SOC content distribution according to the global soil map (GSM) project recommendations in a heterogeneous landscape; ii) to compare the prediction performance of digital soil mapping (DSM) and visible and near-infrared (Vis-NIR) spectroscopy approaches. The study area of 140 ha, located at Plancoët, surrounds the only mineral water spring of Brittany (Western France). It is a hillock characterized by a heterogeneous landscape mosaic with different types of forest, permanent pastures and wetlands along a small coastal river. We acquired two independent datasets: j) 50 points selected using conditioned Latin hypercube sampling (cLHS); jj) 254 points corresponding to the GSM grid. Soil samples were collected in three layers (0-5, 20-25 and 40-50 cm) for both sampling strategies. SOC content was only measured in cLHS soil samples, while Vis-NIR spectra were measured on all the collected samples. For the DSM approach, a machine-learning algorithm (Cubist) was applied to the cLHS calibration data to build rule-based models linking soil carbon content in the different layers with environmental covariates derived from a digital elevation model, geological variables, land use data and existing large-scale soil maps. For the spectroscopy approach, we used two calibration datasets: k) the local cLHS; kk) a subset selected from the regional spectral database of Brittany after a PCA with hierarchical clustering analysis, spiked with local cLHS spectra. The PLS regression algorithm with "leave-one-out" cross validation was performed for both calibration datasets. SOC contents for the 3 layers of the GSM grid were predicted using the different approaches and compared with each other. Prediction performance was evaluated by the following parameters: R2, RMSE and RPD. Both approaches led to satisfactory predictions of SOC content, with an advantage for the spectral approach, particularly as regards the pertinence of the variation range.

  16. An uncertainty analysis of the flood-stage upstream from a bridge.

    PubMed

    Sowiński, M

    2006-01-01

    The paper begins with the formulation of the problem in the form of a general performance function. Next, the Latin hypercube sampling (LHS) technique -- a modified version of the Monte Carlo method -- is briefly described. The uncertainty analysis of the flood stage upstream from a bridge proper starts with a description of the hydraulic model. This model concept is based on the HEC-RAS model developed for subcritical flow under a bridge without piers, in which the energy equation is applied. The next section contains the characteristics of the basic variables, including a specification of their statistics (means and variances). Next, the problem of correlated variables is discussed and assumptions concerning correlation among basic variables are formulated. The analysis of results is based on LHS ranking lists obtained from the computer package UNCSAM. Results for two examples are given: one for independent and the other for correlated variables.

  17. SOARCA Peach Bottom Atomic Power Station Long-Term Station Blackout Uncertainty Analysis: Convergence of the Uncertainty Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bixler, Nathan E.; Osborn, Douglas M.; Sallaberry, Cedric Jean-Marie

    2014-02-01

    This paper describes the convergence of MELCOR Accident Consequence Code System, Version 2 (MACCS2) probabilistic results of offsite consequences for the uncertainty analysis of the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout scenario at the Peach Bottom Atomic Power Station. The consequence metrics evaluated are individual latent-cancer fatality (LCF) risk and individual early fatality risk. Consequence results are presented as conditional risk (i.e., assuming the accident occurs, risk per event) to individuals of the public as a result of the accident. In order to verify convergence for this uncertainty analysis, as recommended by the Nuclear Regulatory Commission’s Advisory Committee on Reactor Safeguards, a ‘high’ source term from the original population of Monte Carlo runs has been selected to be used for: (1) a study of the distribution of consequence results stemming solely from epistemic uncertainty in the MACCS2 parameters (i.e., separating the effect from the source term uncertainty), and (2) a comparison between Simple Random Sampling (SRS) and Latin Hypercube Sampling (LHS) in order to validate the original results obtained with LHS. Three replicates (each using a different random seed) of size 1,000 each using LHS and another set of three replicates of size 1,000 using SRS are analyzed. The results show that the LCF risk results are well converged with either LHS or SRS sampling. The early fatality risk results are less well converged at radial distances beyond 2 miles, and this is expected due to the sparse data (predominance of “zero” results).
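
    The replicate-based convergence check is easy to emulate on a toy response (standing in for the consequence model, which is obviously not reproduced here): draw several independent LHS and SRS replicates and compare the spread of the resulting estimates.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def lhs_uniform(n, k, rng):
        u = (np.arange(n)[:, None] + rng.random((n, k))) / n
        for j in range(k):
            u[:, j] = rng.permutation(u[:, j])
        return u

    def response(u):                 # toy stand-in for the consequence model
        return np.exp(u[:, 0]) + 3.0 * u[:, 1] ** 2

    for name, draw in (("SRS", lambda: rng.random((1000, 2))),
                       ("LHS", lambda: lhs_uniform(1000, 2, rng))):
        means = [response(draw()).mean() for _ in range(3)]  # three replicates
        print(name, np.round(means, 4))  # LHS replicates agree more closely
    ```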

  18. Improvement of Simulation Method in Validation of Software of the Coordinate Measuring Systems

    NASA Astrophysics Data System (ADS)

    Nieciąg, Halina

    2015-10-01

    Software is used in order to accomplish various tasks at each stage of the functioning of modern measuring systems. Before metrological confirmation of measuring equipment, the system has to be validated. This paper discusses the method for conducting validation studies of a fragment of software to calculate the values of measurands. Due to the number and nature of the variables affecting the coordinate measurement results and the complex character and multi-dimensionality of measurands, the study used the Monte Carlo method of numerical simulation. The article presents an attempt at possible improvement of results obtained by classic Monte Carlo tools. The LHS (Latin Hypercube Sampling) algorithm was implemented as an alternative to the simple sampling scheme of the classic algorithm.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tom Elicson; Bentley Harwood; Jim Bouchard

    Over a 12 month period, a fire PRA was developed for a DOE facility using the NUREG/CR-6850 EPRI/NRC fire PRA methodology. The fire PRA modeling included calculation of fire severity factors (SFs) and fire non-suppression probabilities (PNS) for each safe shutdown (SSD) component considered in the fire PRA model. The SFs were developed by performing detailed fire modeling through a combination of CFAST fire zone model calculations and Latin Hypercube Sampling (LHS). Component damage times and automatic fire suppression system actuation times calculated in the CFAST LHS analyses were then input to a time-dependent model of fire non-suppression probability. The fire non-suppression probability model is based on the modeling approach outlined in NUREG/CR-6850 and is supplemented with plant specific data. This paper presents the methodology used in the DOE facility fire PRA for modeling fire-induced SSD component failures and includes discussions of modeling techniques for: • Development of time-dependent fire heat release rate profiles (required as input to CFAST), • Calculation of fire severity factors based on CFAST detailed fire modeling, and • Calculation of fire non-suppression probabilities.

  20. Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions

    NASA Astrophysics Data System (ADS)

    Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter

    2017-11-01

    Amagat and Dalton mixing-models were studied to compare their thermodynamic prediction of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamic code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6). Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.

  21. System health monitoring using multiple-model adaptive estimation techniques

    NASA Astrophysics Data System (ADS)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary. Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates using a simple moving average window to filter out noise. The system can be tuned to match the desired performance goals by making adjustments to parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.

  22. Probabilistic Analysis and Density Parameter Estimation Within Nessus

    NASA Astrophysics Data System (ADS)

    Godines, Cody R.; Manteufel, Randall D.

    2002-12-01

    This NASA educational grant has the goal of promoting probabilistic analysis methods to undergraduate and graduate UTSA engineering students. Two undergraduate-level and one graduate-level course were offered at UTSA providing a large number of students exposure to and experience in probabilistic techniques. The grant provided two research engineers from Southwest Research Institute the opportunity to teach these courses at UTSA, thereby exposing a large number of students to practical applications of probabilistic methods and state-of-the-art computational methods. In classroom activities, students were introduced to the NESSUS computer program, which embodies many algorithms in probabilistic simulation and reliability analysis. Because the NESSUS program is used at UTSA in both student research projects and selected courses, a student version of a NESSUS manual has been revised and improved, with additional example problems being added to expand the scope of the example application problems. This report documents two research accomplishments in the integration of a new sampling algorithm into NESSUS and in the testing of the new algorithm. The new Latin Hypercube Sampling (LHS) subroutines use the latest NESSUS input file format and specific files for writing output. The LHS subroutines are called out early in the program so that no unnecessary calculations are performed. Proper correlation between sets of multidimensional coordinates can be obtained by using NESSUS' LHS capabilities. Finally, two types of correlation are written to the appropriate output file. The program enhancement was tested by repeatedly estimating the mean, standard deviation, and 99th percentile of four different responses using Monte Carlo (MC) and LHS. These test cases, put forth by the Society of Automotive Engineers, are used to compare probabilistic methods. For all test cases, it is shown that LHS has a lower estimation error than MC when used to estimate the mean, standard deviation, and 99th percentile of the four responses at the 50 percent confidence level and using the same number of response evaluations for each method. In addition, LHS requires fewer calculations than MC in order to be 99.7 percent confident that a single mean, standard deviation, or 99th percentile estimate will be within at most 3 percent of the true value of each parameter. Again, this is shown for all of the test cases studied. For that reason it can be said that NESSUS is an important reliability tool that has a variety of sound probabilistic methods a user can employ; furthermore, the newest LHS module is a valuable new enhancement of the program.

  23. [Uncertainty characterization approaches for ecological risk assessment of polycyclic aromatic hydrocarbon in Taihu Lake].

    PubMed

    Guo, Guang-Hui; Wu, Feng-Chang; He, Hong-Ping; Feng, Cheng-Lian; Zhang, Rui-Qing; Li, Hui-Xian

    2012-04-01

    Probabilistic approaches, such as Monte Carlo Sampling (MCS) and Latin Hypercube Sampling (LHS), and non-probabilistic approaches, such as interval analysis, fuzzy set theory and variance propagation, were used to characterize uncertainties associated with the risk assessment of ΣPAH8 in the surface water of Taihu Lake. The results from MCS and LHS were represented by probability distributions of hazard quotients of ΣPAH8 in the surface waters of Taihu Lake. Based on probabilistic theory, the confidence intervals of the hazard quotient at the 90% confidence level were 0.00018-0.89 and 0.00017-0.92, with means of 0.37 and 0.35, respectively. In addition, the probabilities that the hazard quotients from MCS and LHS exceed the threshold of 1 were 9.71% and 9.68%, respectively. The sensitivity analysis suggested that the toxicity data contributed the most to the resulting distribution of quotients. The hazard quotient of ΣPAH8 to aquatic organisms ranged from 0.00017 to 0.99 using interval analysis. The confidence interval at the 90% confidence level was (0.0015, 0.0163) calculated using fuzzy set theory, and (0.00016, 0.88) based on variance propagation. These results indicated that the ecological risk of ΣPAH8 to aquatic organisms was low. Each method has its own advantages and limitations, rooted in its underlying theory; the appropriate method should therefore be selected case by case to quantify the effects of uncertainties on the ecological risk assessment. The approach based on probabilistic theory was selected as the most appropriate method to assess the risk of ΣPAH8 in the surface water of Taihu Lake, providing an important scientific foundation for risk management and control of organic pollutants in water.
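
    Schematically, the probabilistic branch of such an assessment reduces to the following sketch, where the lognormal exposure and effect distributions are illustrative stand-ins, not the paper's fitted values:

    ```python
    import numpy as np
    from scipy import stats

    def lhs_uniform(n, k, rng):
        u = (np.arange(n)[:, None] + rng.random((n, k))) / n
        for j in range(k):
            u[:, j] = rng.permutation(u[:, j])
        return u

    rng = np.random.default_rng(5)
    u = lhs_uniform(10_000, 2, rng)
    # Hypothetical exposure and toxicity distributions (not the paper's fits).
    conc = stats.lognorm(1.0, scale=50.0).ppf(u[:, 0])    # concentration, ng/L
    pnec = stats.lognorm(0.8, scale=400.0).ppf(u[:, 1])   # effect threshold, ng/L
    hq = conc / pnec                                      # hazard quotient
    print(np.percentile(hq, [5, 95]), (hq > 1).mean())    # 90% interval, P(HQ > 1)
    ```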

  24. Probabilistic Modeling of Intracranial Pressure Effects on Optic Nerve Biomechanics

    NASA Technical Reports Server (NTRS)

    Ethier, C. R.; Feola, Andrew J.; Raykin, Julia; Myers, Jerry G.; Nelson, Emily S.; Samuels, Brian C.

    2016-01-01

    Altered intracranial pressure (ICP) is implicated in several ocular conditions: papilledema, glaucoma and Visual Impairment and Intracranial Pressure (VIIP) syndrome. The biomechanical effects of altered ICP on optic nerve head (ONH) tissues in these conditions are uncertain but likely important. We have quantified ICP-induced deformations of ONH tissues, using finite element (FE) modeling and probabilistic modeling (Latin hypercube simulations, LHS) to consider a range of tissue properties and relevant pressures.

  25. Assessing Degree of Susceptibility to Landslide Hazard

    NASA Astrophysics Data System (ADS)

    Sheridan, M. F.; Cordoba, G. A.; Delgado, H.; Stefanescu, R.

    2013-05-01

    The modeling of hazardous mass flows, both dry and water saturated, is currently an area of active research, and several stable models have now emerged that have differing degrees of physical and mathematical fidelity. Models based on the early work of Savage and Hutter (1989) assume that very large dense granular flows can be modeled as incompressible continua governed by a Coulomb failure criterion. Based on this concept, Patra et al. (2005) developed a code for dry avalanches, which proposes a thin-layer mathematical model similar to the shallow-water equations. This concept was implemented in the widely-used TITAN2D program, which integrates the shock-capturing Godunov solution methodology for the equation system. We propose a method to assess the susceptibility of specific locations to landslides following heavy tephra fall using the TITAN2D code. Successful application requires framing the range of several uncertainties in the selection of model input data: 1) initial conditions, such as volume and location of origin of the landslide, 2) bed and internal friction parameters and 3) digital elevation model (DEM) uncertainties. Among the possible ways of coping with these uncertainties, we chose to use Latin Hypercube Sampling (LHS). This statistical technique reduces a computationally intractable problem to such an extent that it is possible to apply it even with current personal computers. LHS requires that there is only one sample in each row and each column of the sampling matrix, where each (multi-dimensional) row corresponds to one uncertainty. LHS requires less than 10% of the sample runs needed by Monte Carlo approaches to achieve a stable solution. In our application, the LHS output provides model sampling for 4 input parameters: initial random volumes, UTM location (x and y), and bed friction. We developed a simple Octave script to link the output of LHS with TITAN2D. In this way, TITAN2D can run several times with successively different initial conditions provided by the LHS routine. Finally, the set of results from TITAN2D is processed to obtain the distribution of maximum exceedance probability given that a landslide occurs at a place of interest. We apply this method to find sectors least prone to be affected by landslides in a region along the Panamerican Highway in the southern part of Colombia. The goal of such a study is to help decision makers improve their assessments regarding permissions for development along the highway.

  26. Erosion Modeling in Central China - Soil Data Acquisition by Conditioned Latin Hypercube Sampling and Incorporation of Legacy Data

    NASA Astrophysics Data System (ADS)

    Stumpf, Felix; Schönbrodt-Stitt, Sarah; Schmidt, Karsten; Behrens, Thorsten; Scholten, Thomas

    2013-04-01

    The Three Gorges Dam at the Yangtze River in Central China is a prominent example of human-induced environmental impacts. Throughout one year, the water table at the main river fluctuates by about 30 m due to impoundment and drainage activities. The dynamic water table implicates a range of georisks such as soil erosion, mass movements, sediment transport and diffuse matter inputs into the reservoir. Within the framework of the joint Sino-German project YANGTZE GEO, the subproject "Soil Erosion" deals with soil erosion risks and sediment transport pathways into the reservoir. The study site is a small catchment (4.8 km²) in Badong, approximately 100 km upstream of the dam. It is characterized by scattered plots of agricultural land use and resettlements in a largely wooded, steeply sloping and mountainous area. Our research is focused on data acquisition and processing to develop a process-oriented erosion model. Here, area-covering knowledge of specific soil properties in the catchment is an intrinsic input parameter. This is acquired by means of digital soil mapping (DSM), whereby soil properties are estimated from covariates and the estimating functions are calibrated with soil property samples. The DSM approach is based on an appropriate sample design, which reflects the heterogeneity of the catchment with regard to the covariates that influence the relevant soil properties. In this approach the covariates, processed by a digital terrain analysis, are slope, altitude, profile curvature, plan curvature, and aspect. For the development of the sample design, we chose the conditioned Latin hypercube sampling (cLHS) procedure (Minasny and McBratney, 2006). It provides an efficient method of sampling variables from their multivariate distribution: a sample of size n is drawn from multiple variables such that for each variable the sample is marginally maximally stratified. The method ensures maximal stratification by two features: first, the number of strata equals the sample size n; second, the probability of falling in each of the strata is 1/n (McKay et al., 1979). We extended the classical cLHS-with-extremes approach (Schmidt et al., 2012) by incorporating legacy data of previous field campaigns. Instead of identifying precise sample locations by cLHS, we demarcate the multivariate attribute space of the samples based on the histogram borders of each stratum. This widens the spatial scope of the actual cLHS sample locations and allows the incorporation of legacy data lying within that scope. Furthermore, this approach extends the potential regarding the accessibility of sample sites in the field.
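
    A simplified version of the cLHS selection itself can be sketched as follows; this uses a greedy random-swap search over candidate locations rather than the simulated annealing of Minasny and McBratney, and omits the extremes and legacy-data extensions described above:

    ```python
    import numpy as np

    def clhs(X, n, n_iter=5000, rng=None):
        """Pick n rows of the covariate matrix X (N x k) so that each
        covariate's sample falls as evenly as possible into its n
        equal-probability strata (simplified random-swap search)."""
        rng = np.random.default_rng(rng)
        N, k = X.shape
        edges = np.quantile(X, np.linspace(0, 1, n + 1), axis=0)

        def cost(idx):
            # Ideal cLHS: exactly one sample per stratum per covariate.
            return sum(np.abs(np.histogram(X[idx, j],
                                           bins=edges[:, j])[0] - 1).sum()
                       for j in range(k))

        idx = rng.choice(N, n, replace=False)
        best = cost(idx)
        for _ in range(n_iter):
            trial = idx.copy()
            trial[rng.integers(n)] = rng.integers(N)  # swap one site at random
            if len(set(trial)) < n:
                continue                              # keep sites distinct
            c = cost(trial)
            if c <= best:
                idx, best = trial, c
        return idx

    rng = np.random.default_rng(0)
    X = rng.random((1000, 3))          # e.g. slope, TWI, curvature per cell
    sample_idx = clhs(X, n=20, rng=1)
    ```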

  27. Validation of heat transfer, thermal decomposition, and container pressurization of polyurethane foam using mean value and Latin hypercube sampling approaches

    DOE PAGES

    Scott, Sarah N.; Dodd, Amanda B.; Larsen, Marvin E.; ...

    2014-12-09

    In this study, polymer foam encapsulants provide mechanical, electrical, and thermal isolation in engineered systems. It can be advantageous to surround objects of interest, such as electronics, with foams in a hermetically sealed container in order to protect them from hostile environments or from accidents such as fire. In fire environments, gas pressure from thermal decomposition of foams can cause mechanical failure of sealed systems. In this work, a detailed uncertainty quantification study of polymeric methylene diisocyanate (PMDI)-polyether-polyol based polyurethane foam is presented and compared to experimental results to assess the validity of a 3-D finite element model of the heat transfer and degradation processes. In this series of experiments, 320 kg/m³ PMDI foam in a 0.2 L sealed steel container is heated to 1,073 K at a rate of 150 K/min. The experiment ends when the can breaches due to the buildup of pressure. The temperature at key locations is monitored, as well as the internal pressure of the can. Both experimental uncertainty and computational uncertainty are examined and compared. The mean value (MV) method and the Latin hypercube sampling (LHS) approach are used to propagate the uncertainty through the model. The results of both the MV method and the LHS approach show that while the model can generally predict the temperature at given locations in the system, it is less successful at predicting the pressure response. The two approaches for propagating uncertainty also agree with each other. The importance of each input parameter on the simulation results is investigated as well, showing that for the temperature response the conductivity of the steel container and the effective conductivity of the foam are the most important parameters, while for the pressure response the activation energy, effective conductivity, and specific heat are most important. The comparison to experiments and the identification of the drivers of uncertainty allow for targeted development of the computational model and for definition of the experiments necessary to improve accuracy.

  11. Uncertainty and sensitivity analysis of the basic reproduction number of diphtheria: a case study of a Rohingya refugee camp in Bangladesh, November-December 2017.

    PubMed

    Matsuyama, Ryota; Akhmetzhanov, Andrei R; Endo, Akira; Lee, Hyojung; Yamaguchi, Takayuki; Tsuzuki, Shinya; Nishiura, Hiroshi

    2018-01-01

    A Rohingya refugee camp in Cox's Bazar, Bangladesh experienced a large-scale diphtheria epidemic in 2017. Background information on the previously immune fraction among refugees cannot be explicitly estimated, and thus we conducted an uncertainty analysis of the basic reproduction number, R0. A renewal process model was devised to estimate the R0 and the ascertainment rate of cases, and the loss of susceptible individuals was modeled as one minus the sum of the initially immune fraction and the fraction naturally infected during the epidemic. To account for the uncertainty in the initially immune fraction, we employed a Latin hypercube sampling (LHS) method. R0 ranged from 4.7 to 14.8, with a median estimate of 7.2. R0 was positively correlated with ascertainment rates. Sensitivity analysis indicated that R0 would become smaller with greater variance of the generation time. The estimated R0 was broadly consistent with a published estimate from endemic data, indicating that a vaccination coverage of 86% must be achieved to prevent the epidemic by means of mass vaccination. LHS was particularly useful in the setting of a refugee camp in which the background health status is poorly quantified.
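
    A minimal sketch of the LHS step the abstract describes, under the toy relation R0 = R_eff / (1 - p_immune) standing in for the paper's renewal-process model; the effective reproduction number and the immune-fraction range below are illustrative assumptions, not the study's values.

```python
# LHS over an uncertain initially immune fraction (toy numbers throughout).
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=1, seed=1)
p_immune = qmc.scale(sampler.random(1000), [0.2], [0.8]).ravel()

R_eff = 3.0                         # assumed effective reproduction number
R0 = R_eff / (1.0 - p_immune)       # toy stand-in for the renewal model

print("median R0 = %.1f, 90%% interval = (%.1f, %.1f)"
      % (np.median(R0), np.percentile(R0, 5), np.percentile(R0, 95)))
```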

  12. Statistical uncertainty analysis applied to the DRAGONv4 code lattice calculations and based on JENDL-4 covariance data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez-Solis, A.; Demaziere, C.; Ekberg, C.

    2012-07-01

    In this paper, multi-group microscopic cross-section uncertainty is propagated through the DRAGON (Version 4) lattice code, in order to perform uncertainty analysis on k∞ and 2-group homogenized macroscopic cross-section predictions. A statistical methodology is employed for such purposes, where cross-sections of certain isotopes of various elements belonging to the 172-group DRAGLIB library format are considered as normal random variables. This library is based on JENDL-4 data, because JENDL-4 contains the largest number of isotopic covariance matrices among the major nuclear data libraries. The aim is to propagate multi-group nuclide uncertainty by running the DRAGONv4 code 500 times, and to assess the output uncertainty of a test case corresponding to a 17 x 17 PWR fuel assembly segment without poison. The chosen sampling strategy for the current study is Latin Hypercube Sampling (LHS). The quasi-random LHS allows a much better coverage of the input uncertainties than simple random sampling (SRS) because it densely stratifies across the range of each input probability distribution. Output uncertainty assessment is based on the tolerance limits concept, where the sample formed by the code calculations is inferred to cover 95% of the output population with at least 95% confidence. This analysis is the first attempt to propagate parameter uncertainties of modern multi-group libraries, which are used to feed advanced lattice codes that perform state-of-the-art resonant self-shielding calculations such as DRAGONv4. (authors)
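
    The 95%/95% tolerance-limit construction mentioned above can be illustrated with a nonparametric order-statistic rule; the 500 "code outputs" below are synthetic normal draws, not DRAGONv4 results.

```python
# Nonparametric one-sided 95%/95% upper tolerance limit from n code runs.
import numpy as np
from scipy.stats import binom

n = 500
y = np.sort(np.random.default_rng(2).normal(1.30, 0.005, n))  # fake k_inf runs

# Smallest order statistic j (1-based) whose coverage of the 95th percentile
# holds with >= 95% confidence: P(Binom(n, 0.95) < j) >= 0.95.
j = int(np.searchsorted(binom.cdf(np.arange(n), n, 0.95), 0.95) + 1)
print(f"use order statistic {j} of {n}: upper 95/95 limit = {y[j - 1]:.5f}")
```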

  13. Stochastic Multi-Commodity Facility Location Based on a New Scenario Generation Technique

    NASA Astrophysics Data System (ADS)

    Mahootchi, M.; Fattahi, M.; Khakbazan, E.

    2011-11-01

    This paper extends two models for the stochastic multi-commodity facility location problem. The problem is formulated as two-stage stochastic programming. As the main contribution of this study, a new algorithm is applied to efficiently generate scenarios for uncertain, correlated customer demands. This algorithm uses Latin Hypercube Sampling (LHS) and a scenario reduction approach. The relation between customer satisfaction level and cost is considered in Model I, while a risk measure based on Conditional Value-at-Risk (CVaR) is embedded in optimization Model II. Here, the structure of the network contains three facility layers: plants, distribution centers, and retailers. The first-stage decisions are the number, locations, and capacities of the distribution centers. In the second stage, the decisions are the production amounts and the volumes of transportation between plants and customers.
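
    A hedged sketch of the scenario-generation idea: LHS draws of correlated demands followed by a simple greedy backward reduction. The demand statistics and the reduction rule below are simplified placeholders, not the paper's algorithm.

```python
# LHS scenario generation with correlation, then greedy backward reduction.
import numpy as np
from scipy.stats import qmc, norm

# 100 LHS scenarios of two correlated customer demands (toy numbers).
z = norm.ppf(qmc.LatinHypercube(d=2, seed=7).random(100))
L = np.linalg.cholesky(np.array([[1.0, 0.6], [0.6, 1.0]]))   # demand correlation
scen = np.array([100.0, 80.0]) + (z @ L.T) * np.array([15.0, 10.0])
prob = np.full(100, 1.0 / 100)

# Repeatedly delete the scenario that is cheapest to remove (probability times
# distance to nearest neighbour) and give its probability to that neighbour.
keep = list(range(100))
while len(keep) > 10:
    pts = scen[keep]
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    cost = prob[keep] * d.min(axis=1)
    i = int(np.argmin(cost))                   # cheapest scenario to delete
    j = int(np.argmin(d[i]))                   # its nearest kept neighbour
    prob[keep[j]] += prob[keep[i]]
    keep.pop(i)

print("reduced scenarios:\n", np.round(scen[keep], 1))
print("probability mass retained:", prob[keep].sum())
```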

  14. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables, and common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which results in one value of the response out of the many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response depends on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are two of the 13 stochastic methods contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer analyses are only a few of the possibilities with this software. The LHS method is the newest addition to the stochastic methods within NESSUS, and part of this work was to enhance NESSUS with it. The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases proposed by the Society of Automotive Engineers (SAE). The test cases compare different probabilistic methods within NESSUS because it is important that a user can have confidence that estimates of stochastic parameters of a response will be within an acceptable error limit. For each response, the mean, standard deviation, and 0.99 percentile are repeatedly estimated, which allows confidence statements to be made for each parameter estimated and for each method. Thus, the ability of several stochastic methods to efficiently and accurately estimate density parameters is compared using four valid test cases. While all of the reliability methods performed quite well, the new LHS module within NESSUS was found to have a lower estimation error than MC when both were used to estimate the mean, standard deviation, and 0.99 percentile of the four different stochastic responses. LHS also required fewer calculations than MC to obtain low-error answers with a high degree of confidence. It can therefore be stated that NESSUS is an important reliability tool that offers a variety of sound probabilistic methods, and the new LHS module is a valuable enhancement of the program.
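
    The MC-versus-LHS comparison the abstract reports can be mimicked on a toy response; the quadratic function below is illustrative, not an SAE test case, and only the repeated-trial error comparison is the point.

```python
# Compare MC and LHS estimates of a response mean over repeated trials.
import numpy as np
from scipy.stats import qmc, norm

def response(u):                      # u in [0,1)^2 -> two standard-normal inputs
    x = norm.ppf(u)
    return x[:, 0] ** 2 + 2.0 * x[:, 1]

n, trials, rng = 100, 200, np.random.default_rng(3)
err_mc, err_lhs = [], []
for _ in range(trials):
    u_mc = rng.random((n, 2))
    u_lhs = qmc.LatinHypercube(d=2, seed=rng).random(n)
    err_mc.append(response(u_mc).mean() - 1.0)   # true mean of x1^2 + 2*x2 is 1
    err_lhs.append(response(u_lhs).mean() - 1.0)

print(f"RMS error  MC: {np.sqrt(np.mean(np.square(err_mc))):.4f}")
print(f"RMS error LHS: {np.sqrt(np.mean(np.square(err_lhs))):.4f}")
```

    On smooth responses like this one, the stratification of LHS typically yields a visibly smaller RMS estimation error than plain MC at equal sample size, which is the behavior the abstract describes.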

  15. Uncertain dynamic analysis for rigid-flexible mechanisms with random geometry and material properties

    NASA Astrophysics Data System (ADS)

    Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing; Walker, Paul D.

    2017-02-01

    This paper proposes an uncertain modelling and computational method to analyze the dynamic responses of rigid-flexible multibody systems (or mechanisms) with random geometry and material properties. Firstly, the deterministic model for the rigid-flexible multibody system is built with the absolute nodal coordinate formulation (ANCF), in which the flexible parts are modeled using ANCF elements, while the rigid parts are described by ANCF reference nodes (ANCF-RNs). Secondly, uncertainty in the geometry of the rigid parts is expressed as uniform random variables, while the uncertainty in the material properties of the flexible parts is modeled as a continuous random field, which is further discretized to Gaussian random variables using a series expansion method. Finally, a non-intrusive numerical method is developed to solve the dynamic equations of systems involving both types of random variables, which systematically integrates the deterministic generalized-α solver with Latin hypercube sampling (LHS) and polynomial chaos (PC) expansion. The benchmark slider-crank mechanism is used as a numerical example to demonstrate the characteristics of the proposed method.

  16. A fractional factorial probabilistic collocation method for uncertainty propagation of hydrologic model parameters in a reduced dimensional space

    NASA Astrophysics Data System (ADS)

    Wang, S.; Huang, G. H.; Huang, W.; Fan, Y. R.; Li, Z.

    2015-10-01

    In this study, a fractional factorial probabilistic collocation method is proposed to reveal the statistical significance of hydrologic model parameters and their multi-level interactions affecting model outputs, facilitating uncertainty propagation in a reduced dimensional space. The proposed methodology is applied to the Xiangxi River watershed in China to demonstrate its validity and applicability, as well as its capability of revealing complex and dynamic parameter interactions. A set of reduced polynomial chaos expansions (PCEs) with only statistically significant terms can be obtained based on the results of factorial analysis of variance (ANOVA), achieving a reduction of uncertainty in hydrologic predictions. The predictive performance of the reduced PCEs is verified by comparison against standard PCEs and the Monte Carlo with Latin hypercube sampling (MC-LHS) method in terms of reliability, sharpness, and Nash-Sutcliffe efficiency (NSE). Results reveal that the reduced PCEs are able to capture the hydrologic behaviors of the Xiangxi River watershed, and they are efficient functional representations for propagating uncertainties in hydrologic predictions.

  17. Performance Model and Sensitivity Analysis for a Solar Thermoelectric Generator

    NASA Astrophysics Data System (ADS)

    Rehman, Naveed Ur; Siddiqui, Mubashir Ali

    2017-03-01

    In this paper, a regression model for evaluating the performance of solar concentrated thermoelectric generators (SCTEGs) is established and the significance of contributing parameters is discussed in detail. The model is based on several natural, design and operational parameters of the system, including the thermoelectric generator (TEG) module and its intrinsic material properties, the connected electrical load, concentrator attributes, heat transfer coefficients, solar flux, and ambient temperature. The model is developed by fitting a response curve, using the least-squares method, to the results. The sample points for the model were obtained by simulating a thermodynamic model, also developed in this paper, over a range of values of input variables. These samples were generated employing the Latin hypercube sampling (LHS) technique using a realistic distribution of parameters. The coefficient of determination was found to be 99.2%. The proposed model is validated by comparing the predicted results with those in the published literature. In addition, based on the elasticity for parameters in the model, sensitivity analysis was performed and the effects of parameters on the performance of SCTEGs are discussed in detail. This research will contribute to the design and performance evaluation of any SCTEG system for a variety of applications.
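
    The fit-a-response-surface-to-LHS-samples workflow can be sketched as follows; the "physics" model, parameter names, and ranges are invented placeholders rather than the paper's thermodynamic model of an SCTEG.

```python
# Least-squares regression surrogate built on LHS samples of a toy model.
import numpy as np
from scipy.stats import qmc

def physics_model(flux, t_amb, load):        # toy power output, illustrative
    return 0.05 * flux - 0.3 * (t_amb - 298.0) + 4.0 * load - 2.0 * load ** 2

x = qmc.scale(qmc.LatinHypercube(d=3, seed=4).random(300),
              [600.0, 288.0, 0.2], [1000.0, 318.0, 1.8])
y = physics_model(x[:, 0], x[:, 1], x[:, 2])

A = np.column_stack([np.ones(len(x)), x, x[:, 2] ** 2])   # linear + load^2 term
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
r2 = 1.0 - np.sum((A @ coef - y) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 = {r2:.4f}")   # ~1.0 here because the toy model is itself polynomial
```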

  18. Characterizing the optimal flux space of genome-scale metabolic reconstructions through modified latin-hypercube sampling.

    PubMed

    Chaudhary, Neha; Tøndel, Kristin; Bhatnagar, Rakesh; dos Santos, Vítor A P Martins; Puchałka, Jacek

    2016-03-01

    Genome-Scale Metabolic Reconstructions (GSMRs), along with optimization-based methods, predominantly Flux Balance Analysis (FBA) and its derivatives, are widely applied for assessing and predicting the behavior of metabolic networks upon perturbation, thereby enabling identification of potential novel drug targets and biotechnologically relevant pathways. The abundance of alternate flux profiles has led to the evolution of methods to explore the complete solution space, aiming to increase the accuracy of predictions. Herein we present a novel, generic algorithm to characterize the entire flux space of a GSMR upon application of FBA, leading to the optimal value of the objective (the optimal flux space). Our method employs Modified Latin-Hypercube Sampling (LHS) to effectively border the optimal space, followed by Principal Component Analysis (PCA) to identify and explain the major sources of variability within it. The approach was validated with the elementary mode analysis of a smaller network of Saccharomyces cerevisiae and applied to the GSMR of Pseudomonas aeruginosa PAO1 (iMO1086). It is shown to surpass the commonly used Monte Carlo Sampling (MCS) by providing a more uniform coverage of a much larger network with fewer samples. Results show that although many fluxes are identified as variable upon fixing the objective value, the majority of the variability can be reduced to several main patterns arising from a few alternative pathways. In iMO1086, the initial variability of 211 reactions could almost entirely be explained by 7 alternative pathway groups. These findings imply that the possibilities to reroute greater portions of flux may be limited within the metabolic networks of bacteria. Furthermore, the optimal flux space is subject to change with environmental conditions. Our method may be a useful device for validating the predictions made by FBA-based tools, by describing the optimal flux space associated with these predictions, and thus for improving them.
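
    The LHS-then-PCA pipeline can be illustrated on synthetic data; the random "flux" samples below encode three planted pathway patterns in place of a real GSMR's optimal flux space.

```python
# PCA over sampled flux vectors to expose dominant patterns of variability.
import numpy as np

rng = np.random.default_rng(10)
# 500 fake flux samples over 50 reactions, driven by 3 underlying patterns.
patterns = rng.normal(size=(3, 50))
fluxes = rng.normal(size=(500, 3)) @ patterns + 0.01 * rng.normal(size=(500, 50))

X = fluxes - fluxes.mean(axis=0)             # center before PCA
_, s, _ = np.linalg.svd(X, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
print("variance explained by first 5 PCs:", np.round(explained[:5], 3))
```

    As in the abstract, the apparent variability of many reactions collapses onto a handful of principal components, one per planted alternative pathway.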

  19. Sensitivity Analysis of the Sheet Metal Stamping Processes Based on Inverse Finite Element Modeling and Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Yu, Maolin; Du, R.

    2005-08-01

    Sheet metal stamping is one of the most commonly used manufacturing processes, and hence, much research has been carried out on it for economic gain. A search of the literature, however, reveals that many problems remain unsolved. For example, it is well known that for the same press, the same workpiece material, and the same set of dies, the product quality may vary owing to a number of factors, such as the inhomogeneity of the workpiece material, loading error, and lubrication. At present, few methods can predict the quality variation, let alone identify what contributes to it. As a result, trial-and-error is still needed on the shop floor, causing additional cost and time delay. This paper introduces a new approach to predict the product quality variation and identify the sensitive design/process parameters. The new approach is based on a combination of inverse Finite Element Modeling (FEM) and Monte Carlo simulation (more specifically, the Latin Hypercube Sampling (LHS) approach). With an acceptable accuracy, the inverse FEM (also called one-step FEM) requires much less computation than the usual incremental FEM and hence can be used to predict the quality variations under various conditions. LHS is a statistical method through which the sensitivity analysis can be carried out. The result of the sensitivity analysis has a clear physical meaning and can be used to optimize the die design and/or the process design. Two simulation examples are presented: drawing a rectangular box and drawing a two-step rectangular box.

  20. Parameter Uncertainty on AGCM-simulated Tropical Cyclones

    NASA Astrophysics Data System (ADS)

    He, F.

    2015-12-01

    This work studies the parameter uncertainty in tropical cyclone (TC) simulations in Atmospheric General Circulation Models (AGCMs) using the Reed-Jablonowski TC test case, implemented in the Community Atmosphere Model (CAM). It examines the impact of 24 parameters across the physical parameterization schemes that represent the convection, turbulence, precipitation, and cloud processes in AGCMs. The one-at-a-time (OAT) sensitivity analysis method first quantifies their relative importance for TC simulations and identifies the key parameters for six different TC characteristics: intensity, precipitation, longwave cloud radiative forcing (LWCF), shortwave cloud radiative forcing (SWCF), cloud liquid water path (LWP), and ice water path (IWP). Then, 8 physical parameters are chosen and perturbed using the Latin-Hypercube Sampling (LHS) method. The comparison between the OAT ensemble run and the LHS ensemble run shows that the simulated TC intensity is mainly affected by the parcel fractional mass entrainment rate in the Zhang-McFarlane (ZM) deep convection scheme. The nonlinear interactive effect among different physical parameters is negligible for simulated TC intensity. In contrast, this nonlinear interactive effect plays a significant role in the other simulated tropical cyclone characteristics (precipitation, LWCF, SWCF, LWP, and IWP) and greatly enlarges their simulated uncertainties. The statistical emulator Extended Multivariate Adaptive Regression Splines (EMARS) is applied to characterize the response functions for the nonlinear effect. Last, we find that the intensity uncertainty caused by physical parameters is comparable in magnitude to the uncertainty caused by model structure (e.g., grid) and initial conditions (e.g., sea surface temperature, atmospheric moisture). These findings suggest the importance of using the perturbed physics ensemble (PPE) method to revisit tropical cyclone prediction under climate change scenarios.
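
    A compact sketch of the two-step screening described above, with an invented intensity function and parameter table standing in for CAM physics; only the OAT-then-joint-LHS pattern is the point.

```python
# OAT screening followed by a joint LHS perturbation (toy setup throughout).
import numpy as np
from scipy.stats import qmc

params = {"entrainment": (0.5e-3, 2.0e-3),
          "autoconv":    (0.5e-3, 5.0e-3),
          "evap_eff":    (0.5,    2.0)}

def tc_intensity(entrainment, autoconv, evap_eff):
    # Invented response; a real study would run the AGCM here.
    return 60.0 / (entrainment / 1e-3) + 2.0 * evap_eff - 0.1 * autoconv / 1e-3

base = {k: 0.5 * (lo + hi) for k, (lo, hi) in params.items()}

# OAT: move one parameter to each bound while holding the rest at baseline.
for name, (lo, hi) in params.items():
    swing = [tc_intensity(**{**base, name: v}) for v in (lo, hi)]
    print(f"{name:12s} OAT range = {abs(swing[1] - swing[0]):6.1f}")

# LHS: perturb all parameters jointly to expose interaction effects.
bounds = np.array(list(params.values()))
x = qmc.scale(qmc.LatinHypercube(d=3, seed=5).random(256),
              bounds[:, 0], bounds[:, 1])
y = tc_intensity(x[:, 0], x[:, 1], x[:, 2])
print(f"LHS ensemble spread: min = {y.min():.1f}, max = {y.max():.1f}")
```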

  1. Non-intrusive reduced order modeling of nonlinear problems using neural networks

    NASA Astrophysics Data System (ADS)

    Hesthaven, J. S.; Ubbiali, S.

    2018-06-01

    We develop a non-intrusive reduced basis (RB) method for parametrized steady-state partial differential equations (PDEs). The method extracts a reduced basis from a collection of high-fidelity solutions via a proper orthogonal decomposition (POD) and employs artificial neural networks (ANNs), particularly multi-layer perceptrons (MLPs), to accurately approximate the coefficients of the reduced model. The search for the optimal number of neurons and the minimum amount of training samples to avoid overfitting is carried out in the offline phase through an automatic routine, relying upon a joint use of the Latin hypercube sampling (LHS) and the Levenberg-Marquardt (LM) training algorithm. This guarantees a complete offline-online decoupling, leading to an efficient RB method - referred to as POD-NN - suitable also for general nonlinear problems with a non-affine parametric dependence. Numerical studies are presented for the nonlinear Poisson equation and for driven cavity viscous flows, modeled through the steady incompressible Navier-Stokes equations. Both physical and geometrical parametrizations are considered. Several results confirm the accuracy of the POD-NN method and show the substantial speed-up enabled at the online stage as compared to a traditional RB strategy.
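
    The offline/online split of POD-NN can be sketched with numpy and scikit-learn; note this is a hedged toy: the MLP is trained with L-BFGS rather than the paper's Levenberg-Marquardt algorithm, and the "snapshots" come from a parametrized closed-form function, not a PDE solver.

```python
# POD basis from snapshots via SVD, then an MLP maps parameters to coefficients.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(14)
mu = rng.uniform(0.5, 2.0, size=(200, 1))             # parameter samples
x = np.linspace(0.0, 1.0, 101)
snaps = np.exp(-mu * x[None, :]) * np.sin(np.pi * x)  # toy high-fidelity solutions

# Offline: left singular vectors of the snapshot matrix give the reduced basis.
U, s, _ = np.linalg.svd(snaps.T, full_matrices=False)
r = 5
basis = U[:, :r]                                      # (101, r)
coeffs = snaps @ basis                                # reduced coordinates

net = MLPRegressor(hidden_layer_sizes=(20, 20), solver="lbfgs",
                   max_iter=5000, random_state=0).fit(mu, coeffs)

# Online: predict reduced coefficients for a new parameter, expand in the basis.
mu_new = np.array([[1.3]])
u_rb = basis @ net.predict(mu_new).ravel()
u_true = (np.exp(-mu_new * x) * np.sin(np.pi * x)).ravel()
print("relative error:", np.linalg.norm(u_rb - u_true) / np.linalg.norm(u_true))
```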

  2. Uncertainty and Sensitivity Analyses of a Pebble Bed HTGR Loss of Cooling Event

    DOE PAGES

    Strydom, Gerhard

    2013-01-01

    The Very High Temperature Reactor Methods Development group at the Idaho National Laboratory identified the need for a defensible and systematic uncertainty and sensitivity approach in 2009. This paper summarizes the results of an uncertainty and sensitivity quantification investigation performed with the SUSA code, utilizing the International Atomic Energy Agency CRP 5 Pebble Bed Modular Reactor benchmark and the INL code suite PEBBED-THERMIX. Eight model input parameters were selected for inclusion in this study, and after the input parameter variations and probability density functions were specified, a total of 800 steady state and depressurized loss of forced cooling (DLOFC) transient PEBBED-THERMIX calculations were performed. The six data sets were statistically analyzed to determine the 5% and 95% DLOFC peak fuel temperature tolerance intervals with 95% confidence levels. It was found that the uncertainties in the decay heat and graphite thermal conductivities were the most significant contributors to the propagated DLOFC peak fuel temperature uncertainty. No significant differences were observed between the results of Simple Random Sampling (SRS) or Latin Hypercube Sampling (LHS) data sets, and use of uniform or normal input parameter distributions also did not lead to any significant differences between these data sets.

  3. Efficient uncertainty quantification in fully-integrated surface and subsurface hydrologic simulations

    NASA Astrophysics Data System (ADS)

    Miller, K. L.; Berg, S. J.; Davison, J. H.; Sudicky, E. A.; Forsyth, P. A.

    2018-01-01

    Although high performance computers and advanced numerical methods have made the application of fully-integrated surface and subsurface flow and transport models such as HydroGeoSphere commonplace, run times for large complex basin models can still be on the order of days to weeks, thus limiting the usefulness of traditional workhorse algorithms for uncertainty quantification (UQ) such as Latin hypercube sampling (LHS) or Monte Carlo simulation (MCS), which generally require thousands of simulations to achieve an acceptable level of accuracy. In this paper we investigate non-intrusive polynomial chaos for uncertainty quantification, which in contrast to random sampling methods (e.g., LHS and MCS), represents a model response of interest as a weighted sum of polynomials over the random inputs. Once a chaos expansion has been constructed, approximating the mean, covariance, probability density function, cumulative distribution function, and other common statistics, as well as local and global sensitivity measures, is straightforward and computationally inexpensive, thus making polynomial chaos expansion (PCE) an attractive UQ method for hydrologic models with long run times. Our polynomial chaos implementation was validated through comparison with analytical solutions as well as solutions obtained via LHS for simple numerical problems. It was then used to quantify parametric uncertainty in a series of numerical problems with increasing complexity, including a two-dimensional fully-saturated, steady flow and transient transport problem with six uncertain parameters and one quantity of interest; a one-dimensional variably-saturated column test involving transient flow and transport, four uncertain parameters, and two quantities of interest at 101 spatial locations and five different times each (1010 total); and a three-dimensional fully-integrated surface and subsurface flow and transport problem for a small test catchment involving seven uncertain parameters and three quantities of interest at 241 different times each. Numerical experiments show that polynomial chaos is an effective and robust method for quantifying uncertainty in fully-integrated hydrologic simulations, which provides a rich set of features and is computationally efficient. Our approach has the potential for significant speedup over existing sampling-based methods when the number of uncertain model parameters is modest (≤20). To our knowledge, this is the first implementation of the algorithm in a comprehensive, fully-integrated, physically-based three-dimensional hydrosystem model.
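
    Why PCE post-processing is cheap can be seen in one dimension: once coefficients in an orthogonal basis (here probabilists' Hermite) are fit by least squares, the mean and variance are read straight off the coefficients. The scalar "model" below is a toy stand-in for a hydrologic simulator.

```python
# One-parameter non-intrusive PCE fit by least squares.
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermevander

def model(xi):                            # toy response of one N(0,1) input
    return np.exp(0.3 * xi) + 0.1 * xi ** 2

xi = np.random.default_rng(11).normal(size=200)       # training sample
V = hermevander(xi, 5)                                # probabilists' Hermite basis
coef, *_ = np.linalg.lstsq(V, model(xi), rcond=None)

# Moments drop out of the coefficients, since E[He_k^2] = k!.
mean = coef[0]
var = sum(coef[k] ** 2 * factorial(k) for k in range(1, 6))
print(f"PCE mean = {mean:.4f}, variance = {var:.4f}")

# Cross-check against brute-force sampling of the same toy model.
xs = np.random.default_rng(12).normal(size=200_000)
print(f"MC  mean = {model(xs).mean():.4f}, variance = {model(xs).var():.4f}")
```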

  4. Uncertainty Analysis of Decomposing Polyurethane Foam

    NASA Technical Reports Server (NTRS)

    Hobbs, Michael L.; Romero, Vicente J.

    2000-01-01

    Sensitivity/uncertainty analyses are necessary to determine where to allocate resources for improved predictions in support of our nation's nuclear safety mission. Yet, sensitivity/uncertainty analyses are not commonly performed on complex combustion models because the calculations are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, a variety of sensitivity/uncertainty analyses were used to determine the uncertainty associated with thermal decomposition of polyurethane foam exposed to high radiative flux boundary conditions. The polyurethane used in this study is a rigid closed-cell foam used as an encapsulant. Related polyurethane binders such as Estane are used in many energetic materials of interest to the JANNAF community. The complex, finite element foam decomposition model used in this study has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state decomposition front velocity calculated as the derivative of the decomposition front location versus time. An analytical mean value sensitivity/uncertainty (MV) analysis was used to determine the standard deviation by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation was essentially determined from a second derivative that was extremely sensitive to numerical noise. To minimize the numerical noise, 50-micrometer element dimensions and approximately 1-msec time steps were required to obtain stable uncertainty results. As an alternative method to determine the uncertainty and sensitivity in the decomposition front velocity, surrogate response surfaces were generated for use with a constrained Latin Hypercube Sampling (LHS) technique. Two surrogate response surfaces were investigated: 1) a linear surrogate response surface (LIN) and 2) a quadratic response surface (QUAD). The LHS techniques do not require derivatives of the response variable and are subsequently relatively insensitive to numerical noise. To compare the LIN and QUAD methods to the MV method, a direct LHS analysis (DLHS) was performed using the full grid and timestep resolved finite element model. The surrogate response models (LIN and QUAD) are shown to give acceptable values of the mean and standard deviation when compared to the fully converged DLHS model.
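
    The LIN/QUAD surrogate idea can be sketched as follows; the smooth "front velocity" function is illustrative, not the foam decomposition model, and both surrogates are fit to the same LHS sample.

```python
# Linear (LIN) and quadratic (QUAD) surrogate response surfaces on LHS samples.
import numpy as np
from scipy.stats import qmc

def front_velocity(x):                    # toy response with an interaction term
    return 1.0 + 0.8 * x[:, 0] - 0.5 * x[:, 1] + 0.6 * x[:, 0] * x[:, 1]

x = qmc.LatinHypercube(d=2, seed=6).random(64)
y = front_velocity(x)

A_lin = np.column_stack([np.ones(64), x])                       # 1, x1, x2
A_quad = np.column_stack([A_lin, x ** 2, x[:, 0] * x[:, 1]])    # + squares, x1*x2
for name, A in (("LIN", A_lin), ("QUAD", A_quad)):
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = A @ c
    print(f"{name:4s}: mean = {pred.mean():.3f}, std = {pred.std(ddof=1):.3f}")
```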

  5. Feasibility of Stochastic Voltage/VAr Optimization Considering Renewable Energy Resources for Smart Grid

    NASA Astrophysics Data System (ADS)

    Momoh, James A.; Salkuti, Surender Reddy

    2016-06-01

    This paper proposes a stochastic optimization technique for solving the Voltage/VAr control problem, including load demand and Renewable Energy Resources (RERs) variation. RERs introduce stochastic behavior into the system inputs. Voltage/VAr control, one of the key challenges, is a primary means of managing power system complexity and reliability, and hence a fundamental requirement for all utility companies. A robust and efficient Voltage/VAr optimization technique is needed to meet peak demand and reduce system losses. Voltages beyond their limits may damage costly substation devices as well as equipment at the consumer end. In particular, RERs introduce additional disturbances, and some RERs are not even capable of meeting the VAr demand. Therefore, there is a strong need for Voltage/VAr control in an RER environment. This paper aims at the development of an optimal scheme for Voltage/VAr control involving RERs. Here, the Latin Hypercube Sampling (LHS) method is used to cover the full range of variables while maximally satisfying the marginal distributions, and a backward scenario reduction technique is used to reduce the number of scenarios effectively while maximally retaining the fitting accuracy of the samples. The developed optimization scheme is tested on the IEEE 24-bus Reliability Test System (RTS) considering load demand and RERs variation.

  6. Assessment of the transportation route of oversize and excessive loads in relation to the load-bearing capacity of existing bridges

    NASA Astrophysics Data System (ADS)

    Doležel, Jiří; Novák, Drahomír; Petrů, Jan

    2017-09-01

    Transportation routes for oversize and excessive loads are currently planned to ensure the transit of a vehicle through critical points on the road. Critical points include level intersections of roads, bridges, etc. This article presents a comprehensive procedure to determine the reliability and load-bearing capacity levels of existing bridges on highways and roads, using advanced methods of reliability analysis based on Monte Carlo-type simulation techniques in combination with nonlinear finite element method analysis. The safety index is taken as the main criterion of the reliability level of existing structures; the index is described in current structural design standards, e.g., ISO and Eurocode. As an example, the load-bearing capacity of a 60-year-old single-span slab bridge made of precast prestressed concrete girders is determined for the ultimate limit state and the serviceability limit state. The structure's design load-bearing capacity was estimated by fully probabilistic nonlinear finite element analysis using the Latin Hypercube Sampling (LHS) simulation technique. Load-bearing capacity values based on the fully probabilistic analysis are compared with levels estimated by deterministic methods for the critical section of the most heavily loaded girders.
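
    A hedged sketch of estimating a safety (reliability) index from LHS samples of a limit-state function g = R - E (resistance minus load effect); the distributions below are invented numbers, not the bridge assessment's actual model.

```python
# Reliability index and failure probability from LHS samples of g = R - E.
import numpy as np
from scipy.stats import qmc, norm

u = qmc.LatinHypercube(d=2, seed=8).random(10000)
R = norm.ppf(u[:, 0], loc=5000.0, scale=400.0)   # kN, resistance (hypothetical)
E = norm.ppf(u[:, 1], loc=3200.0, scale=350.0)   # kN, load effect (hypothetical)
g = R - E

beta = g.mean() / g.std(ddof=1)                  # Cornell reliability index
pf = np.mean(g <= 0.0)                           # sampled failure probability
print(f"beta = {beta:.2f}, Pf ~ {pf:.2e} (Phi(-beta) = {norm.cdf(-beta):.2e})")
```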

  7. Using an ensemble smoother to evaluate parameter uncertainty of an integrated hydrological model of Yanqi basin

    NASA Astrophysics Data System (ADS)

    Li, Ning; McLaughlin, Dennis; Kinzelbach, Wolfgang; Li, WenPeng; Dong, XinGuang

    2015-10-01

    Model uncertainty needs to be quantified to provide objective assessments of the reliability of model predictions and of the risk associated with management decisions that rely on these predictions. This is particularly true in water resource studies that depend on model-based assessments of alternative management strategies. In recent decades, Bayesian data assimilation methods have been widely used in hydrology to assess uncertain model parameters and predictions. In this case study, a particular data assimilation algorithm, the Ensemble Smoother with Multiple Data Assimilation (ESMDA) (Emerick and Reynolds, 2012), is used to derive posterior samples of uncertain model parameters and forecasts for a distributed hydrological model of Yanqi basin, China. This model is constructed using MIKE SHE/MIKE 11 software, which provides for coupling between surface and subsurface processes (DHI, 2011a-d). The random samples in the posterior parameter ensemble are obtained by using measurements to update 50 prior parameter samples generated with a Latin Hypercube Sampling (LHS) procedure. The posterior forecast samples are obtained from model runs that use the corresponding posterior parameter samples. Two iterative sample update methods are considered: one based on a perturbed-observation Kalman filter update and one based on a square root Kalman filter update. These alternatives give nearly the same results and converge in only two iterations. The uncertain parameters considered include hydraulic conductivities, drainage and river leakage factors, van Genuchten soil property parameters, and dispersion coefficients. The results show that the uncertainty in many of the parameters is reduced during the smoother updating process, reflecting information obtained from the observations. Some of the parameters are insensitive and do not benefit from measurement information. The correlation coefficients among certain parameters increase in each iteration, although they generally stay below 0.50.
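
    One perturbed-observation ensemble-smoother analysis step, of the kind ESMDA iterates with inflated observation noise, can be written compactly; the linear "forward model" and priors below are toys standing in for MIKE SHE/MIKE 11 runs.

```python
# Single perturbed-observation ensemble smoother update (toy linear model).
import numpy as np

rng = np.random.default_rng(13)
n_ens, n_par, n_obs = 50, 4, 6

def forward(theta):                       # stand-in for the hydrological model
    G = np.arange(1.0, 1.0 + n_obs * n_par).reshape(n_obs, n_par) % 7
    return theta @ G.T

theta = rng.normal(1.0, 0.3, size=(n_ens, n_par))      # 50 prior samples
y = forward(theta)
sd_obs = 0.5
d_obs = forward(np.ones((1, n_par)))[0] + rng.normal(0.0, sd_obs, n_obs)

# Kalman gain from ensemble cross- and auto-covariances.
A = theta - theta.mean(axis=0)
Y = y - y.mean(axis=0)
C_xy = A.T @ Y / (n_ens - 1)
C_yy = Y.T @ Y / (n_ens - 1) + sd_obs ** 2 * np.eye(n_obs)
K = C_xy @ np.linalg.inv(C_yy)

# Perturbed-observation update of every ensemble member.
D = d_obs + rng.normal(0.0, sd_obs, size=(n_ens, n_obs))
theta_post = theta + (D - y) @ K.T
print("prior std:", theta.std(axis=0).round(3))
print("post  std:", theta_post.std(axis=0).round(3))   # uncertainty shrinks
```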

  8. Critical Concentration Ratio for Solar Thermoelectric Generators

    NASA Astrophysics Data System (ADS)

    ur Rehman, Naveed; Siddiqui, Mubashir Ali

    2016-10-01

    A correlation for determining the critical concentration ratio (CCR) of solar concentrated thermoelectric generators (SCTEGs) has been established, and the significance of the contributing parameters is discussed in detail. For any SCTEG, a higher concentration ratio leads to higher temperatures at the hot side of the modules. However, the maximum value of this temperature for safe operation is limited by the material properties of the modules and should be considered an important design constraint. Taking this limitation into account, the CCR can be defined as the maximum concentration ratio usable for a particular SCTEG. The established correlation is based on factors associated with the material and geometric properties of the modules, thermal characteristics of the receiver, installation site attributes, and thermal and electrical operating conditions. To reduce the number of terms in the correlation, these factors are combined into dimensionless groups by applying the Buckingham Pi theorem. A correlation model containing these groups is proposed and fit to a dataset obtained by simulating a thermodynamic (physical) model over sampled values acquired by applying the Latin hypercube sampling (LHS) technique over a realistic distribution of factors. The coefficient of determination and relative error are found to be 97% and ±20%, respectively. The correlation is validated by comparing the predicted results with literature values. In addition, the significance and effects of the Pi groups on the CCR are evaluated and thoroughly discussed. This study will lead to a wide range of opportunities regarding the design and optimization of SCTEGs.

  9. Knowing Our Neighbors: Two In and One Out

    NASA Astrophysics Data System (ADS)

    Bartlett, Jennifer L.; Lurie, John C.; Ianna, Philip A.; Riedel, Adric R.; Winters, Jennifer G.; Finch, Charlie T.; Jao, Wei-Chun; Subasavage, John P.; Henry, Todd J.

    2016-01-01

    Obtaining a well-understood, volume-limited (and ultimately volume-complete) sample of nearby stars is necessary for determining a host of interesting astrophysical quantities, including the stellar luminosity and mass functions, the distribution of stellar velocities, and the stellar multiplicity fraction. Furthermore, such a sample provides insight into the local star formation history. Towards that end, the Research Consortium on Nearby Stars (RECONS) team measures trigonometric parallaxes to establish which systems truly lie within the Solar Neighborhood, emphasizing those within 25 pc. Less accurate photometric and spectroscopic estimates previously suggested LP 991-84, LHS 6167AB, and LHS 2880 as possible members of the 10-pc sample, which made them desirable candidates. Recent measurements with the CTIO/SMARTS 0.9-m telescope place LP 991-84 and LHS 6167AB at 8.6 ± 0.1 and 9.68 ± 0.09 pc, respectively, which is consistent with earlier estimates of their distances. However, the final parallax for LHS 2880 places it beyond 25 pc, at a distance of 31 ± 1 pc, despite its previous identification as a member of the 10-pc sample. The proper motions of these systems are 0.2589 ± 0.0004, 0.4394 ± 0.0003, and 0.7113 ± 0.0009 arcsec yr⁻¹, respectively. To characterize these systems more fully, the RECONS team obtained VRI photometry for each; they range from 13.8 to 14.5 mag in V. LHS 6167AB also shows signs of long-term variability in V, including a possible flare. In addition, CTIO 1.5-m spectroscopy identifies LP 991-84 and LHS 6167AB as M4.5 V systems. Even as we aspire to identify all the stars within 25 pc, we find that the 10-pc volume still holds some surprises. NSF grants AST 05-07711 and AST 09-08402, NASA-SIM, Georgia State University, the University of Virginia, Hampden-Sydney College, and the Levinson Fund of the Peninsula Community Foundation supported this research. CTIOPI was an NOAO Survey Program and continues as part of the SMARTS Consortium. We thank the SMARTS Consortium and the CTIO staff, who enable the small telescope operations at CTIO.

  10. Screening for older emergency department inpatients at risk of prolonged hospital stay: the brief geriatric assessment tool.

    PubMed

    Launay, Cyrille P; de Decker, Laure; Kabeshova, Anastasiia; Annweiler, Cédric; Beauchet, Olivier

    2014-01-01

    The aims of this study were 1) to confirm that combinations of brief geriatric assessment (BGA) items were significant risk factors for prolonged LHS among geriatric patients hospitalized in acute care medical units after their admission to the emergency department (ED); and 2) to determine whether these combinations of BGA items could be used as a prognostic tool of prolonged LHS. Based on a prospective observational cohort design, 1254 inpatients (mean age ± standard deviation, 84.9±5.9 years; 59.3% female) recruited upon their admission to ED and discharged in acute care medical units of Angers University Hospital, France, were selected in this study. At baseline assessment, a BGA was performed and included the following 6 items: age ≥85 years, male gender, polypharmacy (i.e., ≥5 drugs per day), use of home-help services, history of falls in the previous 6 months, and temporal disorientation (i.e., inability to give the month and/or year). The LHS in acute care medical units was prospectively calculated in number of days using the hospital registry. Areas under receiver operating characteristic (ROC) curves of prolonged LHS for different combinations of BGA items ranged from 0.50 to 0.57. Cox regression models revealed that combinations defining a high risk of prolonged LHS, identified from ROC curves, were significant risk factors for prolonged LHS (hazard ratio >1.16 with P>0.010). Kaplan-Meier distributions of discharge showed that inpatients classified in the high-risk group of prolonged LHS were discharged later than those in the low-risk group (P<0.003). The prognostic value for prolonged LHS of all combinations was poor, with sensitivity under 77%, a high variation of specificity (from 26.6 to 97.4), and a low positive likelihood ratio, under 5.6. Combinations of the 6-item BGA tool were significant risk factors for prolonged LHS, but their prognostic value was poor in the studied sample of older inpatients.

  11. Interactions and Supramolecular Organization of Sulfonated Indigo and Thioindigo Dyes in Layered Hydroxide Hosts.

    PubMed

    Costa, Ana L; Gomes, Ana C; Pereira, Ricardo C; Pillinger, Martyn; Gonçalves, Isabel S; Pineiro, Marta; Seixas de Melo, J Sérgio

    2018-01-09

    Supramolecularly organized host-guest systems have been synthesized by intercalating water-soluble forms of indigo (indigo carmine, IC) and thioindigo (thioindigo-5,5'-disulfonate, TIS) in zinc-aluminum-layered double hydroxides (LDHs) and zinc-layered hydroxide salts (LHSs) by coprecipitation routes. The colors of the isolated powders were dark blue for hybrids containing only IC, purplish blue or dark lilac for cointercalated samples containing both dyes, and ruby/wine for hybrids containing only TIS. The as-synthesized and thermally treated materials were characterized by Fourier transform infrared, Fourier transform Raman, and nuclear magnetic resonance spectroscopies, powder X-ray diffraction, scanning electron microscopy, and elemental and thermogravimetric analyses. The basal spacings found for IC-LDH, TIS-LDH, IC-LHS, and TIS-LHS materials were 21.9, 21.05, 18.95, and 21.00 Å, respectively, with intermediate spacings being observed for the cointercalated samples that either decreased (LDHs) or increased (LHSs) with increasing TIS content. UV-visible and fluorescence spectroscopies (steady-state and time-resolved) were used to probe the molecular distribution of the immobilized dyes. The presence of aggregates together with the monomer units is suggested for IC-LDH, whereas for TIS-LDH, IC-LHS, and TIS-LHS, the dyes are closer to the isolated situation. Accordingly, while emission from the powder H₂TIS is strongly quenched, an increment in the emission of about 1 order of magnitude was observed for the TIS-LDH/LHS hybrids. Double-exponential fluorescence decays were obtained and associated with two monomer species interacting differently with cointercalated water molecules. The incorporation of both TIS and IC in the LDH and LHS hosts leads to an almost complete quenching of the fluorescence, pointing to a very efficient energy transfer process from (fluorescent) TIS to (nonfluorescent) IC.

  12. Machine learning for predicting soil classes in three semi-arid landscapes

    USGS Publications Warehouse

    Brungard, Colby W.; Boettinger, Janis L.; Duniway, Michael C.; Wills, Skye A.; Edwards, Thomas C.

    2015-01-01

    Mapping the spatial distribution of soil taxonomic classes is important for informing soil use and management decisions. Digital soil mapping (DSM) can quantitatively predict the spatial distribution of soil taxonomic classes. Key components of DSM are the method and the set of environmental covariates used to predict soil classes. Machine learning is a general term for a broad set of statistical modeling techniques. Many different machine learning models have been applied in the literature and there are different approaches for selecting covariates for DSM. However, there is little guidance as to which, if any, machine learning model and covariate set might be optimal for predicting soil classes across different landscapes. Our objective was to compare multiple machine learning models and covariate sets for predicting soil taxonomic classes at three geographically distinct areas in the semi-arid western United States of America (southern New Mexico, southwestern Utah, and northeastern Wyoming). All three areas were the focus of digital soil mapping studies. Sampling sites at each study area were selected using conditioned Latin hypercube sampling (cLHS). We compared models that had been used in other DSM studies, including clustering algorithms, discriminant analysis, multinomial logistic regression, neural networks, tree based methods, and support vector machine classifiers. Tested machine learning models were divided into three groups based on model complexity: simple, moderate, and complex. We also compared environmental covariates derived from digital elevation models and Landsat imagery that were divided into three different sets: 1) covariates selected a priori by soil scientists familiar with each area and used as input into cLHS, 2) the covariates in set 1 plus 113 additional covariates, and 3) covariates selected using recursive feature elimination. Overall, complex models were consistently more accurate than simple or moderately complex models. Random forests (RF) using covariates selected via recursive feature elimination was consistently the most accurate, or was among the most accurate, classifiers between study areas and between covariate sets within each study area. We recommend that for soil taxonomic class prediction, complex models and covariates selected by recursive feature elimination be used. Overall classification accuracy in each study area was largely dependent upon the number of soil taxonomic classes and the frequency distribution of pedon observations between taxonomic classes. Individual subgroup class accuracy was generally dependent upon the number of soil pedon observations in each taxonomic class. The number of soil classes is related to the inherent variability of a given area. The imbalance of soil pedon observations between classes is likely related to cLHS. Imbalanced frequency distributions of soil pedon observations between classes must be addressed to improve model accuracy. Solutions include increasing the number of soil pedon observations in classes with few observations or decreasing the number of classes. Spatial predictions using the most accurate models generally agree with expected soil–landscape relationships. Spatial prediction uncertainty was lowest in areas of relatively low relief for each study area.

  13. Development and Implementation of a Formal Framework for Bottom-up Uncertainty Analysis of Input Emissions: Case Study of Residential Wood Combustion

    NASA Astrophysics Data System (ADS)

    Zhao, S.; Mashayekhi, R.; Saeednooran, S.; Hakami, A.; Ménard, R.; Moran, M. D.; Zhang, J.

    2016-12-01

    We have developed a formal framework for documentation, quantification, and propagation of uncertainties in upstream emissions inventory data at various stages leading to the generation of model-ready gridded emissions through emissions processing software such as the EPA's SMOKE (Sparse Matrix Operator Kernel Emissions) system. To illustrate this framework we present a proof-of-concept case study of a bottom-up quantitative assessment of uncertainties in emissions from residential wood combustion (RWC) in the U.S. and Canada. Uncertainties associated with key inventory parameters are characterized based on existing information sources, including the American Housing Survey (AHS) from the U.S. Census Bureau, Timber Products Output (TPO) surveys from the U.S. Forest Service, TNS Canadian Facts surveys, and the AP-42 emission factor document from the U.S. EPA. The propagation of uncertainties is based on Monte Carlo simulation code external to SMOKE. Latin Hypercube Sampling (LHS) is implemented to generate a set of random realizations of each RWC inventory parameter, for which the uncertainties are assumed to be normally distributed. Random realizations are also obtained for each RWC temporal and chemical speciation profile and spatial surrogate field external to SMOKE using the LHS approach. SMOKE outputs for primary emissions (e.g., CO, VOC) using both RWC emission inventory realizations and perturbed temporal and chemical profiles and spatial surrogates show relative uncertainties of about 30-50% across the U.S. and about 70-100% across Canada. Positive skewness values (up to 2.7) and variable kurtosis values (up to 4.8) were also found. Spatial allocation contributes significantly to the overall uncertainty, particularly in Canada. By applying this framework we are able to produce random realizations of model-ready gridded emissions that along with available meteorological ensembles can be used to propagate uncertainties through chemical transport models. The approach described here provides an effective means for formal quantification of uncertainties in estimated emissions from various source sectors and for continuous documentation, assessment, and reduction of emission uncertainties.

  14. A Bayesian ensemble data assimilation to constrain model parameters and land-use carbon emissions

    NASA Astrophysics Data System (ADS)

    Lienert, Sebastian; Joos, Fortunat

    2018-05-01

    A dynamic global vegetation model (DGVM) is applied in a probabilistic framework and benchmarking system to constrain uncertain model parameters by observations and to quantify carbon emissions from land-use and land-cover change (LULCC). Processes featured in DGVMs include parameters which are prone to substantial uncertainty. To cope with these uncertainties Latin hypercube sampling (LHS) is used to create a 1000-member perturbed parameter ensemble, which is then evaluated with a diverse set of global and spatiotemporally resolved observational constraints. We discuss the performance of the constrained ensemble and use it to formulate a new best-guess version of the model (LPX-Bern v1.4). The observationally constrained ensemble is used to investigate historical emissions due to LULCC (ELUC) and their sensitivity to model parametrization. We find a global ELUC estimate of 158 (108, 211) PgC (median and 90 % confidence interval) between 1800 and 2016. We compare ELUC to other estimates both globally and regionally. Spatial patterns are investigated and estimates of ELUC of the 10 countries with the largest contribution to the flux over the historical period are reported. We consider model versions with and without additional land-use processes (shifting cultivation and wood harvest) and find that the difference in global ELUC is on the same order of magnitude as parameter-induced uncertainty and in some cases could potentially even be offset with appropriate parameter choice.

  15. From Environment to Mating Competition and Super-K in a Predominantly Urban Sample of Young Adults.

    PubMed

    Richardson, George B; Dariotis, Jacinda K; Lai, Mark H C

    2017-01-01

    Recent research suggests human life history strategy (LHS) may be subsumed by multiple dimensions, including mating competition and Super-K, rather than one. In this study, we test whether a two-dimensional structure best fits data from a predominantly urban sample of young adults ages 18-24. We also test whether latent life history dimensions are associated with environmental harshness and unpredictability, as predicted by life history theory. Results provide evidence that a two-dimensional model best fit the data. Furthermore, a moderate inverse residual correlation between mating competition and Super-K was found, consistent with a life history trade-off. Our findings suggest that parental socioeconomic status may enhance investment in mating competition, that harshness might persist into young adulthood as an important correlate of LHS, and that unpredictability may not have significant effects in young adulthood. These findings further support the contention that human LHS is multidimensional and that environmental effects on LHS are more complex than previously suggested. The model presented provides a parsimonious explanation of an array of human behaviors and traits and can be used to inform public health initiatives, particularly with respect to the potential impact of environmental interventions.

  16. Simulating maize yield and biomass with spatial variability of soil field capacity

    USGS Publications Warehouse

    Ma, Liwang; Ahuja, Lajpat; Trout, Thomas; Nolan, Bernard T.; Malone, Robert W.

    2015-01-01

    Spatial variability in field soil properties is a challenge for system modelers who use single representative values, such as means, for model inputs, rather than their distributions. In this study, the root zone water quality model (RZWQM2) was first calibrated for 4 yr of maize (Zea mays L.) data at six irrigation levels in northern Colorado and then used to study the effect of the spatial variability of soil field capacity (FC), estimated in 96 plots, on maize yield and biomass. The best results were obtained when the crop parameters were fitted along with FCs, with a root mean squared error (RMSE) of 354 kg ha⁻¹ for yield and 1202 kg ha⁻¹ for biomass. When running the model using each of the 96 sets of field-estimated FC values, instead of calibrating FCs, the average simulated yield and biomass from the 96 runs were close to measured values, with a RMSE of 376 kg ha⁻¹ for yield and 1504 kg ha⁻¹ for biomass. When an average of the 96 FC values for each soil layer was used, simulated yield and biomass were also acceptable, with a RMSE of 438 kg ha⁻¹ for yield and 1627 kg ha⁻¹ for biomass. Therefore, when there are large numbers of FC measurements, an average value might be sufficient for model inputs. However, when only the ranges of FC measurements are known for each soil layer, a distribution of FCs sampled with Latin hypercube sampling (LHS) might be used for model inputs.
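
    The closing suggestion can be sketched directly: draw a Latin hypercube sample of layer field capacities from their measured ranges. The layer ranges below are hypothetical placeholders, not the study's measurements.

```python
# LHS draws of per-layer field capacities from assumed measurement ranges.
import numpy as np
from scipy.stats import qmc

fc_ranges = {"0-30 cm":  (0.24, 0.32),     # m3/m3, hypothetical
             "30-60 cm": (0.22, 0.30),
             "60-90 cm": (0.20, 0.28)}

lo, hi = zip(*fc_ranges.values())
fc = qmc.scale(qmc.LatinHypercube(d=3, seed=9).random(96), lo, hi)
print("first 3 of 96 FC input sets:\n", np.round(fc[:3], 3))
```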

  17. Screening for Older Emergency Department Inpatients at Risk of Prolonged Hospital Stay: The Brief Geriatric Assessment Tool

    PubMed Central

    Launay, Cyrille P.; de Decker, Laure; Kabeshova, Anastasiia; Annweiler, Cédric; Beauchet, Olivier

    2014-01-01

    Background The aims of this study were 1) to confirm that combinations of brief geriatric assessment (BGA) items were significant risk factors for prolonged LHS among geriatric patients hospitalized in acute care medical units after their admission to the emergency department (ED); and 2) to determine whether these combinations of BGA items could be used as a prognostic tool of prolonged LHS. Methods Based on a prospective observational cohort design, 1254 inpatients (mean age ± standard deviation, 84.9±5.9 years; 59.3% female) recruited upon their admission to ED and discharged in acute care medical units of Angers University Hospital, France, were selected in this study. At baseline assessment, a BGA was performed and included the following 6 items: age ≥85 years, male gender, polypharmacy (i.e., ≥5 drugs per day), use of home-help services, history of falls in the previous 6 months, and temporal disorientation (i.e., inability to give the month and/or year). The LHS in acute care medical units was prospectively calculated in number of days using the hospital registry. Results Areas under receiver operating characteristic (ROC) curves of prolonged LHS for different combinations of BGA items ranged from 0.50 to 0.57. Cox regression models revealed that combinations defining a high risk of prolonged LHS, identified from ROC curves, were significant risk factors for prolonged LHS (hazard ratio >1.16 with P>0.010). Kaplan-Meier distributions of discharge showed that inpatients classified in the high-risk group of prolonged LHS were discharged later than those in the low-risk group (P<0.003). The prognostic value for prolonged LHS of all combinations was poor, with sensitivity under 77%, a high variation of specificity (from 26.6 to 97.4), and a low positive likelihood ratio, under 5.6. Conclusion Combinations of the 6-item BGA tool were significant risk factors for prolonged LHS, but their prognostic value was poor in the studied sample of older inpatients. PMID:25333271

  18. The London handicap scale: a re-evaluation of its validity using standard scoring and simple summation.

    PubMed

    Jenkinson, C; Mant, J; Carter, J; Wade, D; Winner, S

    2000-03-01

    To assess the validity of the London handicap scale (LHS) using a simple unweighted scoring system compared with traditional weighted scoring, 323 patients admitted to hospital with acute stroke were followed up by interview 6 months after their stroke as part of a trial looking at the impact of a family support organiser. Outcome measures included the six-item LHS, the Dartmouth COOP charts, the Frenchay activities index, the Barthel index, and the hospital anxiety and depression scale. Patients' handicap scores were calculated both using the standard (weighted) procedure for the LHS and using a simple summation procedure without weighting (U-LHS). The construct validity of both the LHS and U-LHS was assessed by testing their correlations with the other outcome measures. Cronbach's alpha for the LHS was 0.83. The U-LHS was highly correlated with the LHS (r=0.98). Correlation of the U-LHS with the other outcome measures gave very similar results to correlation of the LHS with these measures. Simple summation scoring of the LHS does not lead to any change in the measurement properties of the instrument compared with standard weighted scoring. Unweighted scores are easier to calculate and interpret, so it is recommended that these be used.

  19. Dermoscopic findings in Laugier-Hunziker syndrome.

    PubMed

    Gencoglan, Gulsum; Gerceker-Turk, Bengu; Kilinc-Karaarslan, Isil; Akalin, Taner; Ozdemir, Fezal

    2007-05-01

    Laugier-Hunziker syndrome (LHS) is a rare, acquired mucocutaneous hyperpigmentation often associated with longitudinal melanonychia. The clinical behavior of mucocutaneous pigmented lesions ranges from benign to highly malignant. Therefore, in most cases, the clinical diagnosis should be confirmed by further diagnostic methods. Dermoscopy is a noninvasive technique that has been used to make more accurate diagnoses of pigmented skin lesions. Nevertheless, to our knowledge, the dermoscopic features of the pigmented lesions in LHS have not been described previously. Herein, we report a case of LHS together with its dermoscopic features. The clinical examination revealed macular hyperpigmentation on the oral and genital mucosa, conjunctiva, and palmoplantar region together with longitudinal melanonychia. Dermoscopic examination of mucosal lesions on the patient's lips and vulva revealed a parallel pattern. Longitudinal homogeneous pigmentation was observed on the toenails. The pigmented macules on the palms and the sole showed a parallel furrow pattern. A skin biopsy sample taken from the labial lesion was compatible with a diagnosis of mucosal melanosis. By means of this case report, the dermoscopic features of the pigmented lesions in LHS are described for the first time, which facilitates diagnosis with a noninvasive technique. Future reports highlighting the dermoscopic features of this syndrome may simplify the diagnosis of LHS, which is thought to be underdiagnosed.

  20. Evidence for distinct roles of the SEPALLATA gene LEAFY HULL STERILE1 in Eleusine indica and Megathyrsus maximus (Poaceae).

    PubMed

    Reinheimer, Renata; Malcomber, Simon T; Kellogg, Elizabeth A

    2006-01-01

    LEAFY HULL STERILE1 (LHS1) is an MIKC-type MADS-box gene in the SEPALLATA class. Expression patterns of LHS1 homologs vary among species of grasses, and may be involved in determining palea and lemma morphology, specifying the terminal floret of the spikelet, and sex determination. Here we present LHS1 expression data from Eleusine indica (subfamily Chloridoideae) and Megathyrsus maximus (subfamily Panicoideae) to provide further insights into the hypothesized roles of the gene. E. indica has spikelets with three to eight florets that mature acropetally; E. indica LHS1 (EiLHS1) is expressed in the palea and lemma of all florets. In contrast, M. maximus has spikelets with two florets that mature basipetally; M. maximus LHS1 (MmLHS1) is expressed in the palea and lemma of the distal floret only. These data are consistent with the hypothesis that LHS1 plays a role in determining palea and lemma morphology and specifies the terminal floret of basipetally maturing grass spikelets. However, LHS1 expression does not correlate with floret sex expression; MmLHS1 is restricted to the bisexual distal floret, whereas EiLHS1 is expressed in both sterile and bisexual floret meristems. Phylogenetic analyses reconstruct a complex pattern of LHS1 expression evolution in grasses. LHS1 expression within the gynoecium has apparently been lost twice, once before diversification of a major clade within tribe Paniceae, and once in subfamily Chloridoideae. These data suggest that LHS1 has multiple roles during spikelet development and may have played a role in the diversification of spikelet morphology.

  1. The simulation of a two-dimensional (2D) transport problem in a rectangular region with Lattice Boltzmann method with two-relaxation-time

    NASA Astrophysics Data System (ADS)

    Sugiyanto, S.; Hardyanto, W.; Marwoto, P.

    2018-03-01

    Transport phenomena arise in many problems across engineering and industrial sectors. We analyzed a lattice Boltzmann method with two-relaxation-time (LTRT) collision operators for simulating a pollutant moving through a medium, posed as a two-dimensional (2D) transport problem in a rectangular region. The model consists of a 2D rectangular region of length 54 (x) and width 27 (y) with an isotropic, homogeneous medium. Initially, the concentration is zero throughout the region of interest. A concentration of 1 is maintained at 9 < y < 18, whereas a concentration of zero is maintained at 0 < y < 9 and 18 < y < 27. A specific discharge (Darcy velocity) of 1.006 is assumed. A diffusion coefficient of 0.8333 is distributed uniformly, with a uniform porosity of 0.35. A computer program written in MATLAB computes the concentration of pollutant at any specified place and time. The program shows that the LTRT solution with quadratic equilibrium distribution functions (EDFs) and relaxation time τa=1.0 is in good agreement with other numerical solution methods, such as 3DLEWASTE (Hybrid Three-dimensional Lagrangian-Eulerian Finite Element Model of Waste Transport Through Saturated-Unsaturated Media) obtained by Yeh and 3DFEMWATER-LHS (Three-dimensional Finite Element Model of Water Flow Through Saturated-Unsaturated Media with Latin Hypercube Sampling) obtained by Hardyanto.
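
    A minimal sketch of this kind of scheme, assuming a D2Q5 lattice, a first-order (linear) equilibrium rather than the paper's quadratic EDFs, an assumed lattice-scaled velocity, a hypothetical TRT "magic" parameter Λ = 1/4, and a crude equilibrium-based Dirichlet inlet. It illustrates a two-relaxation-time collision-and-streaming update in Python, not the authors' MATLAB program:

      import numpy as np

      # D2Q5 lattice for advection-diffusion; diffusion is set by the
      # antisymmetric relaxation time: D_lattice = cs2 * (tau_a - 0.5).
      e = np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1]])  # velocities
      opp = [0, 2, 1, 4, 3]                                     # opposite dirs
      w = np.array([1/3, 1/6, 1/6, 1/6, 1/6])                   # D2Q5 weights
      cs2 = 1/3                                                 # sound speed^2

      nx, ny = 54, 27                  # domain from the abstract
      u = np.array([0.01, 0.0])        # assumed lattice-units Darcy velocity
      tau_a = 1.0                      # antisymmetric relaxation time (abstract)
      lam = 1/4                        # assumed "magic" parameter Lambda
      tau_s = 0.5 + lam / (tau_a - 0.5)    # symmetric relaxation time

      def feq(C):
          # first-order equilibrium (the paper uses quadratic EDFs)
          return w[:, None, None] * C[None, :, :] * (1 + (e @ u)[:, None, None] / cs2)

      C = np.zeros((nx, ny))
      f = feq(C)
      inlet = (np.arange(ny) > 9) & (np.arange(ny) < 18)   # C = 1 on part of x = 0

      for step in range(2000):
          # TRT collision: relax symmetric/antisymmetric parts separately
          fe = feq(C)
          f = (f
               - (0.5 * (f + f[opp]) - 0.5 * (fe + fe[opp])) / tau_s
               - (0.5 * (f - f[opp]) - 0.5 * (fe - fe[opp])) / tau_a)
          # streaming: shift each population along its lattice velocity
          # (periodic wrap elsewhere, for brevity)
          for i in range(1, 5):
              f[i] = np.roll(f[i], tuple(e[i]), axis=(0, 1))
          C = f.sum(axis=0)
          # crude Dirichlet inlet: re-impose equilibrium on the x = 0 edge
          C[0, :] = np.where(inlet, 1.0, 0.0)
          f[:, 0, :] = feq(C)[:, 0, :]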

  2. iCFD: Interpreted Computational Fluid Dynamics - Degeneration of CFD to one-dimensional advection-dispersion models using statistical experimental design - The secondary clarifier.

    PubMed

    Guyonvarch, Estelle; Ramin, Elham; Kulahci, Murat; Plósz, Benedek Gy

    2015-10-15

    The present study aims at using statistically designed computational fluid dynamics (CFD) simulations as numerical experiments for the identification of one-dimensional (1-D) advection-dispersion models - computationally light tools, used e.g., as sub-models in systems analysis. The objective is to develop a new 1-D framework, referred to as interpreted CFD (iCFD) models, in which statistical meta-models are used to calculate the pseudo-dispersion coefficient (D) as a function of design and flow boundary conditions. The method - presented in a straightforward and transparent way - is illustrated using the example of a circular secondary settling tank (SST). First, the significant design and flow factors are screened out by applying the statistical method of two-level fractional factorial design of experiments. Second, based on the number of significant factors identified through the factor-screening study and on system understanding, 50 different sets of design and flow conditions are selected using Latin hypercube sampling (LHS). The boundary-condition sets are imposed on a 2-D axisymmetric CFD simulation model of the SST. In the framework, to degenerate the 2-D model structure, CFD model outputs are approximated by the 1-D model through the calibration of three different model structures for D. Correlation equations for the D parameter are then identified as a function of the selected design and flow boundary conditions (meta-models), and their accuracy is evaluated against D values estimated in each numerical experiment. The evaluation and validation of the iCFD model structure is carried out using scenario simulation results obtained with parameters sampled from the corners of the LHS experimental region. For the studied SST, additional iCFD model development was carried out in terms of (i) assessing different density-current sub-models; (ii) implementing a combined flocculation, hindered, transient and compression settling velocity function; and (iii) assessing how to model the onset of transient and compression settling. Furthermore, the optimal level of model discretization in both 2-D and 1-D was determined. Results suggest that the iCFD model developed for the SST through the proposed methodology can predict solids distribution with high accuracy - at reasonable computational effort - when compared to multi-dimensional numerical experiments, under a wide range of flow and design conditions. iCFD tools could play a crucial role in reliably predicting systems' performance under normal and shock events. Copyright © 2015 Elsevier Ltd. All rights reserved.
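
    The LHS step of such a workflow can be sketched as follows; the factor names and ranges are hypothetical placeholders, not the paper's actual screened factors, and scipy.stats.qmc is assumed for the sampling:

      import numpy as np
      from scipy.stats import qmc

      # Hypothetical sketch: draw 50 design/flow boundary-condition sets
      # with Latin hypercube sampling. Factor names and ranges are
      # illustrative assumptions, not the paper's values.
      factors = {
          "inlet_flow_m3h": (50.0, 500.0),
          "sludge_conc_kgm3": (2.0, 6.0),
          "tank_depth_m": (2.5, 5.0),
          "feed_radius_m": (1.0, 4.0),
      }
      lows, highs = map(np.array, zip(*factors.values()))

      sampler = qmc.LatinHypercube(d=len(factors), seed=42)
      unit = sampler.random(n=50)               # 50 points in [0, 1)^4
      designs = qmc.scale(unit, lows, highs)    # rescale to factor ranges

      for name, col in zip(factors, designs.T):
          print(f"{name}: min={col.min():.2f}, max={col.max():.2f}")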

  3. Life history strategy and young adult substance use.

    PubMed

    Richardson, George B; Chen, Ching-Chen; Dai, Chia-Liang; Swoboda, Christopher M

    2014-11-03

    This study tested whether life history strategy (LHS) and its intergenerational transmission could explain young adult use of common psychoactive substances. We tested a sequential structural equation model using data from the National Longitudinal Survey of Youth. During young adulthood, fast LHS explained 61% of the variance in overall liability for substance use. Faster parent LHS predicted poorer health and lesser alcohol use, greater neuroticism and cigarette smoking, but did not predict fast LHS or overall liability for substance use among young adults. Young adult neuroticism was independent of substance use controlling for fast LHS. The surprising finding of independence between parent and child LHS casts some uncertainty upon the identity of the parent and child LHS variables. Fast LHS may be the primary driver of young adult use of common psychoactive substances. However, it is possible that the young adult fast LHS variable is better defined as young adult mating competition. We discuss our findings in depth, chart out some intriguing new directions for life history research that may clarify the dimensionality of LHS and its mediation of the intergenerational transmission of substance use, and discuss implications for substance abuse prevention and treatment.

  4. Surrogate Model Application to the Identification of Optimal Groundwater Exploitation Scheme Based on Regression Kriging Method—A Case Study of Western Jilin Province

    PubMed Central

    An, Yongkai; Lu, Wenxi; Cheng, Weiguo

    2015-01-01

    This paper introduces a surrogate model to identify an optimal exploitation scheme, with western Jilin Province selected as the study area. A numerical simulation model of groundwater flow was established first, and four exploitation wells were set in Tongyu County and Qian Gorlos County, respectively, to supply water to Daan County. Second, the Latin hypercube sampling (LHS) method was used to collect data in the feasible region for the input variables. A surrogate of the numerical groundwater flow simulation model was developed using the regression kriging method. An optimization model was established to search for an optimal groundwater exploitation scheme, using the minimum average drawdown of the groundwater table and the minimum cost of groundwater exploitation as multi-objective functions. Finally, the surrogate model was invoked by the optimization model in the process of solving the optimization problem. Results show that the relative error and root mean square error of the groundwater table drawdown between the simulation model and the surrogate model for 10 validation samples are both lower than 5%, which is a high approximation accuracy. A comparison between the surrogate-based simulation-optimization model and the conventional simulation-optimization model for solving the same optimization problem shows that the former needs only 5.5 hours, whereas the latter needs 25 days. The above results indicate that the surrogate model developed in this study could not only considerably reduce the computational burden of the simulation-optimization process, but also maintain high computational accuracy. This can thus provide an effective method for identifying an optimal groundwater exploitation scheme quickly and accurately. PMID:26264008
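
    A rough sketch of the surrogate-building step, assuming scikit-learn's Gaussian process regressor as a stand-in for regression kriging (the two are closely related, though regression kriging fits an explicit regression trend) and a placeholder groundwater_model function in place of the numerical flow simulator:

      import numpy as np
      from scipy.stats import qmc
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      rng = np.random.default_rng(0)

      def groundwater_model(q):
          # purely hypothetical stand-in for the simulation model:
          # returns an average "drawdown" (m) for pumping rates q
          return 0.002 * q.sum() + 0.1 * np.sin(q[0] / 50.0)

      n_wells, n_train = 4, 40
      sampler = qmc.LatinHypercube(d=n_wells, seed=1)
      Q = qmc.scale(sampler.random(n_train), [0.0] * n_wells, [200.0] * n_wells)
      y = np.array([groundwater_model(q) for q in Q])

      gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=50.0),
                                    normalize_y=True).fit(Q, y)

      # The surrogate is now cheap enough to call thousands of times
      # inside a multi-objective optimizer.
      q_test = rng.uniform(0.0, 200.0, size=(5, n_wells))
      mean, std = gp.predict(q_test, return_std=True)
      print(mean, std)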

  5. Effect of day of the week of primary total hip arthroplasty on length of stay at a university-based teaching medical center.

    PubMed

    Rathi, Pranav; Coleman, Sheldon; Durbin-Johnson, Blythe; Giordani, Mauro; Pereira, Gavin; Di Cesare, Paul E

    2014-12-01

    Length of hospital stay (LHS) after primary total hip arthroplasty (THA) constitutes a critical outcome measure, as prolonged LHS implies increased resource expenditure. Investigations have highlighted factors that affect LHS after THA. These factors include advanced age, medical comorbidities, obesity, intraoperative time, anesthesia technique, surgical site infection, and incision length. We retrospectively analyzed the effect of day of the week of primary THA on LHS. We reviewed the surgery and patient factors of 273 consecutive patients who underwent THA at our institution, a tertiary-care teaching hospital. There was a 15% increase in LHS for patients who underwent THA on Thursday versus Monday when controlling for other covariates that can affect LHS. Other statistically significant variables associated with increased LHS included American Society of Anesthesiologists grade, transfusion requirements, and postoperative complications. The day of the week of THA may be an independent variable affecting LHS. Institutions with reduced weekend resources may want to perform THA earlier in the week to try to reduce LHS.

  6. Effect of Day of the Week of Primary Total Hip Arthroplasty on Length of Stay at a University-Based Teaching Medical Center

    PubMed Central

    Rathi, Pranav; Coleman, Sheldon; Durbin-Johnson, Blythe; Giordani, Mauro; Pereira, Gavin; Di Cesare, Paul E.

    2016-01-01

    Length of hospital stay (LHS) after primary total hip arthroplasty (THA) constitutes a critical outcome measure, as prolonged LHS implies increased resource expenditure. Investigations have highlighted factors that affect LHS after THA. These factors include advanced age, medical comorbidities, obesity, intraoperative time, anesthesia technique, surgical site infection, and incision length. We retrospectively analyzed the effect of day of the week of primary THA on LHS. We reviewed the surgery and patient factors of 273 consecutive patients who underwent THA at our institution, a tertiary-care teaching hospital. There was a 15% increase in LHS for patients who underwent THA on Thursday versus Monday when controlling for other covariates that can affect LHS. Other statistically significant variables associated with increased LHS included American Society of Anesthesiologists grade, transfusion requirements, and post-operative complications. The day of the week of THA may be an independent variable affecting LHS. Institutions with reduced weekend resources may want to perform THA earlier in the week to try to reduce LHS. PMID:25490016

  7. Prognostic significance of pulmonary lymphangioleiomyomatosis histologic score.

    PubMed

    Matsui, K; Beasley, M B; Nelson, W K; Barnes, P M; Bechtle, J; Falk, R; Ferrans, V J; Moss, J; Travis, W D

    2001-04-01

    Correlations were made between clinical and follow-up data and histopathologic findings in 105 women (mean age ± standard deviation, 38.3 ± 9.0 years) with pulmonary lymphangioleiomyomatosis (LAM). The actuarial survival (to pulmonary transplantation or death) of the patients from the time of lung biopsy was 85.1% and 71.0% after 5 and 10 years, respectively. The histologic severity of LAM, graded as a LAM histologic score (LHS), was determined on the basis of a semiquantitative estimation of the percentage of tissue involvement by the two major features of LAM, cystic lesions and infiltration by abnormal smooth muscle cells (LAM cells), in each case: LHS-1, <25%; LHS-2, 25% to 50%; and LHS-3, >50%. Analysis using the Kaplan-Meier method revealed significant differences in survival for patients with LHS-1, -2, and -3 (p = 0.01). The 5- and 10-year survivals were 100% and 100% for LHS-1, 81.2% and 74.4% for LHS-2, and 62.8% and 52.4% for LHS-3. Increased degrees of hemosiderin accumulation in macrophages were also associated with higher LHS scores (p = 0.029) and a worse prognosis (p = 0.0012). Thus, the current study suggests that the LHS may provide a basis for determining the prognosis of LAM.

  8. The impact of stress on the life history strategies of African American adolescents: cognitions, genetic moderation, and the role of discrimination.

    PubMed

    Gibbons, Frederick X; Roberts, Megan E; Gerrard, Meg; Li, Zhigang; Beach, Steven R H; Simons, Ronald L; Weng, Chih-Yuan; Philibert, Robert A

    2012-05-01

    The impact of 3 different sources of stress--environmental, familial (e.g., low parental investment), and interpersonal (i.e., racial discrimination)--on the life history strategies (LHS) and associated cognitions of African American adolescents was examined over an 11-year period (5 waves, from age 10.5 to 21.5). Analyses indicated that each of the sources of stress was associated with faster LHS cognitions (e.g., tolerance of deviance, willingness to engage in risky sex), which, in turn, predicted faster LHS behaviors (e.g., frequent sexual behavior). LHS, then, negatively predicted outcome (resilience) at age 21.5 (i.e., faster LHS → less resilience). In addition, the presence of the risk ("sensitivity") alleles of 2 monoamine-regulating genes, the serotonin transporter gene (5HTTLPR) and the dopamine D4 receptor gene (DRD4), moderated the impact of perceived racial discrimination on LHS cognitions: participants with more risk alleles (higher "sensitivity") reported faster LHS cognitions at age 18 and less resilience at age 21 if they had experienced higher amounts of discrimination, and slower LHS and more resilience if they had experienced smaller amounts of discrimination. Implications for LHS theories are discussed.

  9. Life history strategy and the HEXACO personality dimensions.

    PubMed

    Manson, Joseph H

    2015-01-16

    Although several studies have linked Life History Strategy (LHS) variation with variation in the Five Factor Model personality dimensions, no published research has explored the relationship of LHS to the HEXACO personality dimensions. The theoretically expected relationship of the HEXACO Emotionality factor to LHS is unclear. The results of two studies (N = 641) demonstrated that LHS indicators form part of a factor along with HEXACO Extraversion, Agreeableness, Conscientiousness, and (marginally) Honesty-Humility. People higher on these dimensions pursue a slower LHS. Neither Openness nor Emotionality was associated with this factor. Holding LHS constant, social involvement with kin was consistently predicted by higher Emotionality and was not consistently predicted by any other HEXACO factor. These results support a view of Emotionality as part of an LHS-independent personality dimension that influences the provision and receipt of kin altruism.

  10. Identification of stochastic interactions in nonlinear models of structural mechanics

    NASA Astrophysics Data System (ADS)

    Kala, Zdeněk

    2017-07-01

    This paper presents a polynomial approximation by which Sobol sensitivity analysis can be evaluated with all sensitivity indices. The nonlinear FEM model is approximated, and the input domain is mapped using simulation runs of the Latin hypercube sampling method. The domain of the approximation polynomial is chosen so that a large number of Latin hypercube simulation runs can be applied. The presented method also makes it possible to evaluate higher-order sensitivity indices, which could not otherwise be identified for the nonlinear FEM model.
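
    The idea can be illustrated with a toy stand-in for the FEM model: fit a quadratic polynomial surrogate on LHS runs, then estimate first-order Sobol indices cheaply on the surrogate with a standard pick-freeze Monte Carlo estimator. The model function, sample sizes, and polynomial degree below are illustrative assumptions:

      import numpy as np
      from scipy.stats import qmc

      def model(x):                      # hypothetical cheap response
          return x[:, 0] + 2 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

      d, n_fit = 3, 200
      X = qmc.LatinHypercube(d=d, seed=0).random(n_fit)   # inputs on [0, 1]^d
      y = model(X)

      # quadratic polynomial surrogate via least squares (all degree<=2 terms)
      def design(X):
          cols = [np.ones(len(X))]
          for i in range(d):
              cols += [X[:, i], X[:, i] ** 2]
          for i in range(d):
              for j in range(i + 1, d):
                  cols.append(X[:, i] * X[:, j])
          return np.column_stack(cols)

      beta, *_ = np.linalg.lstsq(design(X), y, rcond=None)
      surrogate = lambda X: design(X) @ beta

      # first-order Sobol indices via pick-freeze on the cheap surrogate
      rng = np.random.default_rng(1)
      A, B = rng.random((2, 100_000, d))
      yA, yB = surrogate(A), surrogate(B)
      var = np.concatenate([yA, yB]).var()
      for i in range(d):
          ABi = B.copy()
          ABi[:, i] = A[:, i]            # shares only column i with A
          S1 = np.mean(yA * (surrogate(ABi) - yB)) / var
          print(f"S{i + 1} ~= {S1:.3f}")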

  11. The potential association of later initiation of oral/enteral nutrition on euthyroid sick syndrome in burn patients.

    PubMed

    Pérez-Guisado, Joaquín; de Haro-Padilla, Jesús M; Rioja, Luis F; Derosier, Leo C; de la Torre, Jorge I

    2013-01-01

    Objective. The aim of this study was to determine whether early initiation of oral/enteral nutrition in burn patients minimizes the drop in fT3 levels, reduces the potential for euthyroid sick syndrome (ESS), and shortens the length of hospital stay (LHS). Subjects and Methods. We retrospectively evaluated the statistical association of serum fT3, fT4, and TSH at the first (2nd-5th day) and second sample collections (9th-12th day) after the burn injury in 152 burn patients. Three groups were established depending on the time of initiation of oral/enteral nutrition: <24 h after the injury (Group 1), 24-48 h after the injury (Group 2), and >48 h after the injury (Group 3). Results. Values are expressed as mean ± standard deviation. We found that the LHS and fT3 levels were statistically different among the 3 groups. The LHS (in days) was, respectively, 16.77 ± 4.56, 21.98 ± 4.86, and 26.06 ± 5.47. Despite the quantifiable drop in fT3, ESS was present only at the first sample collection (2.61 ± 0.92 days) in Group 3, and no group showed ESS at the second sample collection (9.89 ± 1.01 days). Our data suggest that early initiation of nutritional supplementation decreases the length of hospitalization and is associated with a smaller depression of serum fT3 concentration. Conclusion. Early initiation of oral/enteral nutrition counteracts ESS and shortens the LHS in burn patients.

  12. The Impact of Stress on the Life History Strategies of African American Adolescents: Cognitions, Genetic Moderation, and the Role of Discrimination

    PubMed Central

    Gibbons, Frederick X.; Roberts, Megan E.; Gerrard, Meg; Li, Zhigang; Beach, Steven R. H.; Simons, Ronald L.; Weng, Chih-Yuan; Philibert, Robert A.

    2013-01-01

    The impact of three different sources of stress—environmental, familial (e.g., low parental investment), and interpersonal (i.e., racial discrimination)—on the life history strategies (LHS) and associated cognitions of African American adolescents was examined over an 11-year period (five waves, from age 10.5 to 21.5). Analyses indicated that each of the sources of stress was associated with faster LHS cognitions (e.g., tolerance of deviance, willingness to engage in risky sex), which, in turn, predicted faster LHS behaviors (e.g., frequent sexual behavior). LHS then negatively predicted outcome (resilience) at age 21.5; i.e., faster LHS → less resilience. In addition, the presence of the risk (“sensitivity”) alleles of two monoamine-regulating genes, the serotonin transporter gene (5HTTLPR) and the dopamine D4 receptor gene (DRD4), moderated the impact of perceived racial discrimination on LHS cognitions: participants with more risk alleles (higher “sensitivity”) reported faster LHS cognitions at age 18 and less resilience at age 21 if they had experienced higher amounts of discrimination, and slower LHS and more resilience if they had experienced smaller amounts of discrimination. Implications for LHS theories are discussed. PMID:22251000

  13. Is a day hospital rehabilitation programme associated with reduction of handicap in stroke patients?

    PubMed

    Hershkovitz, Avital; Beloosesky, Yichayaou; Brill, Shai; Gottlieb, Daniel

    2004-05-01

    (1) To assess whether a rehabilitation day hospital programme is associated with a reduced handicap level of stroke patients. (2) To estimate the relationship between the London Handicap Scale (LHS) and other outcome measures. (3) To examine the effect of demographic parameters (age, gender, family status, education) on LHS scores. A prospective longitudinal survey. An urban geriatric rehabilitation day hospital. Two hundred and seven elderly stroke patients admitted between December 1999 and February 2001. London Handicap Scale (LHS), Functional Independent Measure (FIM), Nottingham Extended ADL Index, timed get up and go test. LHS scores at discharge changed significantly (p < 0.008) for mobility, physical independence and occupation. The overall change in LHS score was 2.3 points (20%); effect size 0.43. A significant relationship was found between discharge score of LHS and admission score of FIM, Nottingham Index, timed get up and go and age. Multiple linear regressions did not identify a good predictor for the discharge score of LHS. Higher education was associated with higher LHS scores on admission (p = 0.016) but with less success in correcting handicap (p = 0.046). A day hospital programme is associated with reduced level of handicap in stroke patients. The LHS is a useful and simple scale for measuring change in these patients. LHS in stroke patients correlates with other outcome measures, yet they cannot be used interchangeably. A significant relationship between education and level of handicap exists.

  14. In vitro anticancer activities of Leonurus heterophyllus sweet (Chinese motherwort herb).

    PubMed

    Chinwala, Maimoona G; Gao, Min; Dai, Jie; Shao, Jun

    2003-08-01

    To investigate the anticancer activities of Chinese motherwort herb (Leonurus heterophyllus Sweet; LHS), dried LHS was extracted and reconstituted in phosphate-buffered saline. The in vitro antiproliferation activities of the extract were tested against seven human cancer cell lines. The DNA ladder assay and cell morphologic studies were performed to verify the drug's apoptotic activities. The possible pathway by which LHS induced apoptosis was also explored by examining mitochondrial depolarization, cytochrome c release, and caspase-3 activation. The LHS extract was effective in inhibiting the growth of all seven cancer cell lines tested. The IC(50) values (50% inhibition concentrations, in milligrams of raw material per milliliter) were in the range of 8.0-40.0 when the drug exposure time was 48 hours. The inhibitory action of the herbal extract was time- and dose-dependent. A significant decrease in activity was seen when the drug exposure time was shortened. Microscopic examination of the LNCaP and other cancer cell lines after treatment with LHS revealed morphologic changes typical of cells undergoing apoptosis. DNA fragmentation was obvious in the DNA ladder assay, confirming the induction of apoptosis of the cancer cells by LHS. The mitochondria of the LHS-treated cells were found to undergo depolarization. Cytochrome c was released into the cytosol from the LHS-treated cells but not from the control cells. Cells treated with LHS showed cleavage of full-length poly(ADP-ribose) polymerase (PARP; 112 kd) to generate the 85-kd cleaved PARP fragment, indicating the activation of caspase-3. LHS was able to induce apoptosis of all the tumor cell lines tested. The antiproliferation effect was dose- and time-dependent. The mitochondrion was found to be involved in the apoptosis induced by the LHS extract.

  15. The UK Functional Assessment Measure (UK FIM+FAM): Psychometric Evaluation in Patients Undergoing Specialist Rehabilitation following a Stroke from the National UK Clinical Dataset.

    PubMed

    Nayar, Meenakshi; Vanderstay, Roxana; Siegert, Richard J; Turner-Stokes, Lynne

    2016-01-01

    The UK Functional Assessment Measure (UK FIM+FAM) is the principal outcome measure for the UK Rehabilitation Outcomes Collaborative (UKROC) national database for specialist rehabilitation. The measure was previously validated in a mixed neurorehabilitation cohort; this study is the first to explore its psychometric properties in a stroke population and to compare left and right hemispheric strokes (LHS vs RHS). We analysed in-patient episode data from 62 specialist rehabilitation units collated through the UKROC database 2010-2013. Complete data were analysed for 1,539 stroke patients (LHS: 588, RHS: 566 with clear localisation). For factor analysis, admission and discharge data were pooled and randomised into two equivalent samples; the first for exploratory factor analysis (EFA) using principal components analysis, and the second for confirmatory factor analysis (CFA). Responsiveness for each subject (change from admission to discharge) was examined using paired t-tests, and differences between LHS and RHS for the entire group were examined using unpaired t-tests. EFA showed a strong general factor accounting for >48% of the total variance. A three-factor solution comprising motor, communication and psychosocial subscales, accounting for >69% of the total variance, provided acceptable fit statistics on CFA (Root Mean Square Error of Approximation 0.08; Comparative Fit Index/Tucker-Lewis Index 0.922/0.907). All three subscales showed significant improvement between admission and discharge (p<0.001) with moderate effect sizes (>0.5). Total scores for LHS and RHS were not significantly different. However, LHS showed significantly higher motor scores (mean difference 5.7, 95% CI 2.7-8.6, p<0.001), while LHS had significantly lower cognitive scores, primarily in the communication domain (-6.8, 95% CI -7.7 to -5.8, p<0.001). To conclude, the UK FIM+FAM has a three-factor structure in stroke, similar to the general neurorehabilitation population. It is responsive to change during in-patient rehabilitation and distinguishes between LHS and RHS. This tool extends stroke outcome measurement beyond physical disability to include cognitive, communication and psychosocial function.

  16. A Novel Latin Hypercube Algorithm via Translational Propagation

    PubMed Central

    Pan, Guang; Ye, Pengcheng

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is directly related to the experimental designs used. Optimal Latin hypercube designs are frequently used and have been shown to have good space-filling and projective properties. However, the high cost of constructing them limits their use. In this paper, a methodology for creating novel Latin hypercube designs via a translational propagation and successive local enumeration (TPSLE) algorithm is developed without using formal optimization. The TPSLE algorithm is based on the idea that a near-optimal Latin hypercube design can be constructed from a simple initial block with a few points, generated by the successive local enumeration (SLE) algorithm, used as a building block. In effect, the TPSLE algorithm offers a balanced trade-off between efficiency and sampling performance. The proposed algorithm is compared with two existing algorithms and is found to be much more efficient in terms of computation time, while retaining acceptable space-filling and projective properties. PMID:25276844
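
    The sketch below is not the TPSLE algorithm itself; it merely illustrates the maximin inter-point-distance criterion that space-filling Latin hypercube design methods aim to improve, by comparing a single random Latin hypercube design against a naive best-of-100 search:

      import numpy as np
      from scipy.spatial.distance import pdist
      from scipy.stats import qmc

      def maximin(design):
          # smallest pairwise distance; larger means better space-filling
          return pdist(design).min()

      n, d = 30, 4
      plain = qmc.LatinHypercube(d=d, seed=0).random(n)
      print("single random LHD maximin distance:", round(maximin(plain), 4))

      # naive improvement: keep the best of 100 random LHDs under maximin
      best = max((qmc.LatinHypercube(d=d, seed=s).random(n) for s in range(100)),
                 key=maximin)
      print("best-of-100 LHD maximin distance:", round(maximin(best), 4))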

  17. Transitioning from learning healthcare systems to learning health care communities.

    PubMed

    Mullins, C Daniel; Wingate, La'Marcus T; Edwards, Hillary A; Tofade, Toyin; Wutoh, Anthony

    2018-02-26

    The learning healthcare system (LHS) model framework has three core, foundational components. These include an infrastructure for health-related data capture, care improvement targets and a supportive policy environment. Despite progress in advancing and implementing LHS approaches, low levels of participation from patients and the public have hampered the transformational potential of the LHS model. An enhanced vision of a community-engaged LHS redesign would focus on the provision of health care from the patient and community perspective to complement the healthcare system as the entity that provides the environment for care. Addressing the LHS framework implementation challenges and utilizing community levers are requisite components of a learning health care community model, version two of the LHS archetype.

  18. An Elitist Multiobjective Tabu Search for Optimal Design of Groundwater Remediation Systems.

    PubMed

    Yang, Yun; Wu, Jianfeng; Wang, Jinguo; Zhou, Zhifang

    2017-11-01

    This study presents a new multiobjective evolutionary algorithm (MOEA), the elitist multiobjective tabu search (EMOTS), and incorporates it with MODFLOW/MT3DMS to develop a groundwater simulation-optimization (SO) framework, based on modular design, for the optimal design of groundwater remediation systems using the pump-and-treat (PAT) technique. The most notable improvements of EMOTS over the original multiple objective tabu search (MOTS) lie in the elitist strategy, the selection strategy, and the neighborhood move rule. The elitist strategy maintains all nondominated solutions through the later search process for better convergence to the true Pareto front. The elitism-based selection operator is modified to choose the two most remote solutions from the current candidate list as seed solutions, increasing the diversity of the searched space. Moreover, neighborhood solutions are uniformly generated using Latin hypercube sampling (LHS) within the bounded neighborhood space around each seed solution. To demonstrate the performance of EMOTS, we consider a synthetic groundwater remediation example. The problem formulation consists of two objective functions with continuous decision variables (pumping rates), subject to water quality requirements. In particular, a sensitivity analysis is carried out on the synthetic case to determine the optimal combination of the heuristic parameters. Furthermore, EMOTS is successfully applied to evaluate remediation options at the field site of the Massachusetts Military Reservation (MMR) in Cape Cod, Massachusetts. For both the hypothetical and the large-scale field remediation sites, the EMOTS-based SO framework is demonstrated to outperform the original MOTS in the performance metrics of optimality and diversity of nondominated frontiers, with desirable stability and robustness. © 2017, National Ground Water Association.
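
    The LHS-based neighborhood move can be sketched as follows; the bounds, step size, and pumping-rate seed solution are hypothetical placeholders:

      import numpy as np
      from scipy.stats import qmc

      # Sketch of EMOTS-style neighborhood generation: candidate pumping-rate
      # vectors are drawn by LHS inside a bounded box centred on a seed
      # solution, clipped to the global variable bounds.
      def lhs_neighborhood(seed_solution, step, lo, hi, n_candidates, rng_seed=0):
          d = len(seed_solution)
          unit = qmc.LatinHypercube(d=d, seed=rng_seed).random(n_candidates)
          lower = np.maximum(lo, seed_solution - step)
          upper = np.minimum(hi, seed_solution + step)
          return qmc.scale(unit, lower, upper)

      seed = np.array([120.0, 80.0, 150.0, 60.0])    # pumping rates (m3/day)
      cands = lhs_neighborhood(seed, step=20.0, lo=0.0, hi=300.0, n_candidates=10)
      print(cands.round(1))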

  19. Analysis of the influence of handset phone position on RF exposure of brain tissue.

    PubMed

    Ghanmi, Amal; Varsier, Nadège; Hadjem, Abdelhamid; Conil, Emmanuelle; Picon, Odile; Wiart, Joe

    2014-12-01

    Exposure to mobile phone radio frequency (RF) electromagnetic fields depends on many different parameters. For epidemiological studies investigating the risk of brain cancer linked to RF exposure from mobile phones, it is of great interest to characterize brain tissue exposure and to know which parameters this exposure is sensitive to. One such parameter is the position of the phone during communication. In this article, we analyze the influence of the phone position on brain exposure by comparing the specific absorption rate (SAR) induced in the head by two different mobile phone models operating in Global System for Mobile Communications (GSM) frequency bands. To achieve this objective, 80 different phone positions were chosen using an experimental design based on Latin hypercube sampling (LHS) to select a representative set of positions. The SAR averaged over 10 g (SAR10g) in the head, the SAR averaged over 1 g (SAR1g) in the brain, and the averaged SAR in different anatomical brain structures were estimated at 900 and 1800 MHz for the 80 positions. The results illustrate that SAR distributions inside the brain area are sensitive to the position of the mobile phone relative to the head. The results also show that for 5-10% of the studied positions the SAR10g in the head and the SAR1g in the brain can be 20% higher than the SAR estimated for the standard cheek position, and that the Specific Anthropomorphic Mannequin (SAM) model is conservative for 95% of all the studied positions. © 2014 Wiley Periodicals, Inc.
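
    The position-sampling idea might look like the following sketch, in which the position parameters, their ranges, and the SAR response are entirely synthetic placeholders for the dosimetric simulations:

      import numpy as np
      from scipy.stats import qmc

      # 80 phone positions described by a few geometric parameters are drawn
      # with LHS; a synthetic SAR stub is evaluated for each, and exposure is
      # summarized relative to an assumed cheek reference position.
      params = {"tilt_deg": (-30.0, 30.0), "rotation_deg": (-20.0, 20.0),
                "ear_gap_mm": (0.0, 25.0)}
      lo, hi = map(np.array, zip(*params.values()))
      pos = qmc.scale(qmc.LatinHypercube(d=3, seed=7).random(80), lo, hi)

      def sar_stub(p):
          # placeholder response, not a dosimetric model
          tilt, rot, gap = p
          return np.exp(-gap / 30.0) * (1 + 0.003 * abs(tilt) + 0.002 * abs(rot))

      sar = np.apply_along_axis(sar_stub, 1, pos)
      cheek = sar_stub(np.zeros(3))                  # reference position
      print("positions exceeding cheek SAR by >20%:",
            np.mean(sar > 1.2 * cheek) * 100, "%")
      print("95th percentile / cheek ratio:", np.percentile(sar, 95) / cheek)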

  20. Significant uncertainty in global scale hydrological modeling from precipitation data errors

    NASA Astrophysics Data System (ADS)

    Sperna Weiland, Frederiek C.; Vrugt, Jasper A.; van Beek, Rens (L.) P. H.; Weerts, Albrecht H.; Bierkens, Marc F. P.

    2015-10-01

    In the past decades significant progress has been made in the fitting of hydrologic models to data. Most of this work has focused on simple, CPU-efficient, lumped hydrologic models using discharge, water table depth, soil moisture, or tracer data from relatively small river basins. In this paper, we focus on large-scale hydrologic modeling and analyze the effect of parameter and rainfall data uncertainty on simulated discharge dynamics with the global hydrologic model PCR-GLOBWB. We use three rainfall data products: the CFSR reanalysis, the ERA-Interim reanalysis, and a combined ERA-40 reanalysis and CRU dataset. Parameter uncertainty is derived from Latin hypercube sampling (LHS) using monthly discharge data from five of the largest river systems in the world. Our results demonstrate that the default parameterization of PCR-GLOBWB, derived from global datasets, can be improved by calibrating the model against monthly discharge observations. Yet, it is difficult to find a single parameterization of PCR-GLOBWB that works well for all five river basins considered herein and shows consistent performance during both the calibration and evaluation periods. Still, there may be possibilities for regionalization based on catchment similarities. Our simulations illustrate that parameter uncertainty constitutes only a minor part of predictive uncertainty. Thus, the apparent dichotomy between simulations of global-scale hydrologic behavior and actual data cannot be resolved by simply increasing the model complexity of PCR-GLOBWB and resolving sub-grid processes. Instead, it would be more productive to improve the characterization of global rainfall amounts at spatial resolutions of 0.5° and smaller.
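
    A toy illustration of the comparison reported above: discharge spread across the three rainfall forcings versus spread across LHS-sampled parameters. The water-balance function and all numbers below are synthetic stand-ins for PCR-GLOBWB, so the printed values only illustrate the bookkeeping, not the paper's result:

      import numpy as np
      from scipy.stats import qmc

      # synthetic mean rainfall (mm/day) per forcing product
      forcings = {"CFSR": 3.2, "ERA-Interim": 2.6, "ERA40+CRU": 2.9}

      # LHS-sampled parameters: runoff coefficient and baseflow (synthetic ranges)
      theta = qmc.scale(qmc.LatinHypercube(d=2, seed=3).random(50),
                        [0.45, 0.05], [0.75, 0.20])

      def toy_discharge(p_mm, runoff_coeff, baseflow_mm):
          # toy water balance standing in for the global hydrologic model
          return runoff_coeff * p_mm + baseflow_mm

      Q = np.array([[toy_discharge(p, *t) for t in theta]
                    for p in forcings.values()])   # (3 forcings, 50 samples)
      print("spread due to forcing choice:", np.ptp(Q.mean(axis=1)).round(2), "mm/day")
      print("spread due to parameters   :", np.ptp(Q.mean(axis=0)).round(2), "mm/day")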

  1. Automation of the anthrone assay for carbohydrate concentration determinations.

    PubMed

    Turula, Vincent E; Gore, Thomas; Singh, Suddham; Arumugham, Rasappa G

    2010-03-01

    Reported is the adaptation of a manual polysaccharide assay applicable for glycoconjugate vaccines such as Prevenar to an automated liquid handling system (LHS) for improved performance. The anthrone assay is used for carbohydrate concentration determinations and was scaled to the microtiter plate format with appropriate mixing, dispensing, and measuring operations. Adaptation and development of the LHS platform was performed with both dextran polysaccharides of various sizes and pneumococcal serotype 6A polysaccharide (PnPs 6A). A standard plate configuration was programmed such that the LHS diluted both calibration standards and a test sample multiple times with six replicate preparations per dilution. This extent of replication minimized the effect of any single deviation or delivery error that might have occurred. Analysis of the dextran polymers ranging in size from 214 kDa to 3.755 MDa showed that regardless of polymer chain length the hydrolysis was complete, as evident by uniform concentration measurements. No plate positional absorbance bias was observed; of 12 plates analyzed to examine positional bias the largest deviation observed was 0.02% percent relative standard deviation (%RSD). The high purity dextran also afforded the opportunity to assess LHS accuracy; nine replicate analyses of dextran yielded a mean accuracy of 101% recovery. As for precision, a total of 22 unique analyses were performed on a single lot of PnPs 6A, and the resulting variability was 2.5% RSD. This work demonstrated the capability of a LHS to perform the anthrone assay consistently and a reduced assay cycle time for greater laboratory capacity.

  2. Persistence of physical activity in middle age: a nonlinear dynamic panel approach.

    PubMed

    Kumagai, Narimasa; Ogura, Seiritsu

    2014-09-01

    No prior investigation has considered the effects of state dependence and unobserved heterogeneity on the relationship between regular physical activity (RPA) and latent health stock (LHS). Accounting for state dependence corrects the possible overestimation of the impact of socioeconomic factors. We estimated the degree of state dependence of RPA and LHS among middle-aged Japanese workers. The five years of longitudinal data used in this study were taken from the Longitudinal Survey of Middle and Elderly Persons. Individual heterogeneity was found for both RPA and LHS, and the dynamic random-effects probit model provided the best specification. A smoking habit, low educational attainment, longer work hours, and longer commuting time had negative effects on RPA participation. RPA had positive effects on LHS, taking into consideration the possibility of confounding with other lifestyle variables. The degree of state dependence of LHS was positive and significant. Increasing the intensity of RPA had positive effects on LHS and caused individuals with RPA to exhibit greater persistence of LHS compared with individuals without RPA. This result implies that policy interventions that promote RPA, such as smoking cessation, have lasting consequences. We concluded that smoking cessation is an important health policy for increasing both participation in RPA and LHS.

  3. Derivation and evaluation of a labeled hedonic scale.

    PubMed

    Lim, Juyun; Wood, Alison; Green, Barry G

    2009-11-01

    The objective of this study was to develop a semantically labeled hedonic scale (LHS) that would yield ratio-level data on the magnitude of liking/disliking of sensation equivalent to that produced by magnitude estimation (ME). The LHS was constructed by having 49 subjects who were trained in ME rate the semantic magnitudes of 10 common hedonic descriptors within a broad context of imagined hedonic experiences that included tastes and flavors. The resulting bipolar scale is statistically symmetrical around neutral and has a unique semantic structure. The LHS was evaluated quantitatively by comparing it with ME and the 9-point hedonic scale. The LHS yielded nearly identical ratings to those obtained using ME, which implies that its semantic labels are valid and that it produces ratio-level data equivalent to ME. Analyses of variance conducted on the hedonic ratings from the LHS and the 9-point scale gave similar results, but the LHS showed much greater resistance to ceiling effects and yielded normally distributed data, whereas the 9-point scale did not. These results indicate that the LHS has significant semantic, quantitative, and statistical advantages over the 9-point hedonic scale.

  4. Microfluidics on liquid handling stations (μF-on-LHS): an industry compatible chip interface between microfluidics and automated liquid handling stations.

    PubMed

    Waldbaur, Ansgar; Kittelmann, Jörg; Radtke, Carsten P; Hubbuch, Jürgen; Rapp, Bastian E

    2013-06-21

    We describe a generic microfluidic interface design that allows the connection of microfluidic chips to established industrial liquid handling stations (LHS). A molding tool has been designed that allows fabrication of low-cost disposable polydimethylsiloxane (PDMS) chips with interfaces that provide convenient and reversible connection of the microfluidic chip to industrial LHS. The concept allows complete freedom of design for the microfluidic chip itself. In this setup all peripheral fluidic components (such as valves and pumps) usually required for microfluidic experiments are provided by the LHS. Experiments (including readout) can be carried out fully automated using the hardware and software provided by LHS manufacturer. Our approach uses a chip interface that is compatible with widely used and industrially established LHS which is a significant advancement towards near-industrial experimental design in microfluidics and will greatly facilitate the acceptance and translation of microfluidics technology in industry.

  5. Outcomes of laryngohyoid suspension techniques in an ovine model of profound oropharyngeal dysphagia.

    PubMed

    Johnson, Christopher M; Venkatesan, Naren N; Siddiqui, M Tausif; Cates, Daniel J; Kuhn, Maggie A; Postma, Gregory M; Belafsky, Peter C

    2017-12-01

    To evaluate the efficacy of various techniques of laryngohyoid suspension in the elimination of aspiration utilizing a cadaveric ovine model of profound oropharyngeal dysphagia. Animal study. The head and neck of a Dorper cross ewe was placed in the lateral fluoroscopic view. Five conditions were tested: baseline, thyroid cartilage to hyoid approximation (THA), thyroid cartilage to hyoid to mandible (laryngohyoid) suspension (LHS), LHS with cricopharyngeus muscle myotomy (LHS-CPM), and cricopharyngeus muscle myotomy (CPM) alone. Five 20-mL trials of barium sulfate were delivered into the oropharynx under fluoroscopy for each condition. Outcome measures included the penetration aspiration scale (PAS) and the National Institutes of Health (NIH) Swallow Safety Scale (NIH-SSS). Median baseline PAS and NIH-SSS scores were 8 and 6, respectively, indicating severe impairment. THA scores were not improved from baseline. LHS alone reduced the PAS to 1 (P = .025) and NIH-SSS to 2 (P = .025) from baseline. LHS-CPM reduced the PAS to 1 (P = .025) and NIH-SSS to 0 (P = .025) from baseline. CPM alone did not improve scores. LHS-CPM displayed improved NIH-SSS over LHS alone (P = .003). This cadaveric model represents end-stage profound oropharyngeal dysphagia such as what could result from severe neurological insult. CPM alone failed to improve fluoroscopic outcomes in this model. Thyrohyoid approximation also failed to improve outcomes. LHS significantly improved both PAS and NIH-SSS. The addition of CPM to LHS resulted in improvement over suspension alone. NA. Laryngoscope, 127:E422-E427, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  6. Remote sensing data with the conditional latin hypercube sampling and geostatistical approach to delineate landscape changes induced by large chronological physical disturbances.

    PubMed

    Lin, Yu-Pin; Chu, Hone-Jay; Wang, Cheng-Long; Yu, Hsiao-Hsuan; Wang, Yung-Chieh

    2009-01-01

    This study applies variogram analyses of normalized difference vegetation index (NDVI) images derived from SPOT HRV images obtained before and after the Chi-Chi earthquake in the Chenyulan watershed, Taiwan, as well as images after four large typhoons, to delineate the spatial patterns, spatial structures and spatial variability of landscapes caused by these large disturbances. The conditional Latin hypercube sampling approach was applied to select samples from the multiple NDVI images. Kriging and sequential Gaussian simulation with sufficient samples were then used to generate maps of the NDVI images. The variography of the NDVI image results demonstrates that spatial patterns of disturbed landscapes were successfully delineated by variogram analysis in the study areas. The high-magnitude Chi-Chi earthquake created spatial landscape variations in the study area. After the earthquake, the cumulative impacts of typhoons on landscape patterns depended on the magnitudes and paths of the typhoons, but were not always evident in the spatiotemporal variability of landscapes in the study area. The statistics and spatial structures of the multiple NDVI images were captured by 3,000 samples from the 62,500 grid cells of the NDVI images. Kriging and sequential Gaussian simulation with the 3,000 samples effectively reproduced the spatial patterns of the NDVI images. Overall, the proposed approach, which integrates the conditional Latin hypercube sampling approach, variograms, kriging and sequential Gaussian simulation of remotely sensed images, efficiently monitors, samples and maps the effects of large chronological disturbances on the spatial characteristics of landscape changes, including spatial variability and heterogeneity.
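
    The variography step can be sketched with an isotropic empirical semivariogram, γ(h) = 0.5·E[(z_i − z_j)²] over point pairs binned by lag distance h, computed here on a synthetic NDVI-like field (locations, values, and lag bins are illustrative):

      import numpy as np
      from scipy.spatial.distance import pdist

      rng = np.random.default_rng(5)
      n = 400
      xy = rng.uniform(0, 100, size=(n, 2))                 # sample locations
      z = np.sin(xy[:, 0] / 15.0) + 0.3 * rng.standard_normal(n)  # "NDVI" values

      h = pdist(xy)                                         # pairwise lag distances
      g = 0.5 * pdist(z[:, None], metric="sqeuclidean")     # 0.5 * (z_i - z_j)^2

      bins = np.linspace(0, 50, 11)
      idx = np.digitize(h, bins)
      for k in range(1, len(bins)):
          mask = idx == k
          if mask.any():
              print(f"lag {bins[k-1]:4.0f}-{bins[k]:4.0f}: gamma = {g[mask].mean():.3f}")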

  7. [Effect of Leonurus Heterophyllus Sweet on tissue factor transcription and expression in human umbilical vein endothelial cells in vitro].

    PubMed

    Zheng, Lian; Fang, Chi-hua

    2007-06-01

    To investigate the effect of Leonurus heterophyllus Sweet (LHS) on tissue factor (TF) transcription and expression induced by thrombin in human umbilical vein endothelial cells (HUVECs), HUVECs were incubated with different concentrations of LHS, and TF mRNA expression was detected by reverse transcription-polymerase chain reaction (RT-PCR). LHS treatment of HUVECs at different concentrations and for different durations resulted in significant differences in TF expression (P<0.01). The transcription of TF in LHS-treated cells was significantly different from that of the blank control group (P<0.01). LHS can decrease the expression of TF and interfere with TF transcription in HUVECs in vitro.

  8. Predictors of attendance and dropout at the Lung Health Study 11-year follow-up.

    PubMed

    Snow, Wanda M; Connett, John E; Sharma, Shweta; Murray, Robert P

    2007-01-01

    Participant attrition and attendance at follow-up were examined in a multicenter, randomized, clinical trial. The Lung Health Study (LHS) enrolled a total of 5887 adults to examine the impact of smoking cessation coupled with the use of an inhaled bronchodilator on chronic obstructive pulmonary disease (COPD). Of the initial LHS 1 volunteers still living at the time of enrolment in LHS 3 (5332), 4457 (84%) attended the LHS 3 clinic visit, a follow-up session to determine current smoking status and lung function. The average period between the beginning of LHS 1 and baseline interview for LHS 3 was 11 years. In univariate analyses, attenders were older, more likely female, more likely to be married, smoked fewer cigarettes per day, and were more likely to have children who smoked at the start of LHS 1 than non-attenders. Attenders were also less likely to experience respiratory symptoms, such as cough, but had decreased baseline lung function compared with non-attenders. Volunteers recruited via mass mailing were more likely to attend the long-term follow-up visit. Those recruited by public site, worksite, or referral methods were less likely to attend. In multivariate models, age, gender, cigarettes smoked per day, married status, and whether participants' children smoked were identified as significant predictors of attendance versus non-attendance at LHS 3 using stepwise logistic regression. Treatment condition (smoking intervention or usual care) was not a significant predictor of attendance at LHS 3. Older females who smoked less heavily were most likely to participate. These findings may be applied to improve participant recruitment and retention in future clinical trials.

  9. Predictors of Attendance and Dropout at the Lung Health Study 11-Year Follow-Up

    PubMed Central

    Snow, Wanda M.; Connett, John E.; Sharma, Shweta; Murray, Robert P.

    2006-01-01

    Participant attrition and attendance at follow-up were examined in a multicenter, randomized, clinical trial. The Lung Health Study (LHS) enrolled a total of 5,887 adults to examine the impact of smoking cessation coupled with the use of an inhaled bronchodilator on chronic obstructive pulmonary disease (COPD). Of the initial LHS 1 volunteers still living at the time of enrolment in LHS 3 (5,332), 4,457 (84%) attended the LHS 3 clinic visit, a follow-up session to determine current smoking status and lung function. The average period between the beginning of LHS 1 and baseline interview for LHS 3 was 11 years. In univariate analyses, attenders were older, more likely female, more likely to be married, smoked fewer cigarettes per day, and were more likely to have children who smoked at the start of LHS 1 than non-attenders. Attenders were also less likely to experience respiratory symptoms, such as cough, but had decreased baseline lung function compared with non-attenders. Volunteers recruited via mass mailing were more likely to attend the long-term follow-up visit. Those recruited by public site, worksite, or referral methods were less likely to attend. In multivariate models, age, gender, cigarettes smoked per day, married status, and whether participants’ children smoked were identified as significant predictors of attendance versus non-attendance at LHS 3 using stepwise logistic regression. Treatment condition (smoking intervention or usual care) was not a significant predictor of attendance at LHS 3. Older females who smoked less heavily were most likely to participate. These findings may be applied to improve participant recruitment and retention in future clinical trials. PMID:17015043

  10. The Lead Crack Fatigue Lifting Framework

    DTIC Science & Technology

    2010-04-01

    [Figure residue from the original report: plots of crack depth (mm) versus F-WELD test hours for multiple specimen locations (IPP, FASS 226, SIH OBD splice, SIH FS238/239), and of log-average FLEI from virtual tests at several locations (CB1, CB8, CB9, CB12, FT55, ST16; LHS/RHS); no abstract text is recoverable.]

  11. Laugier-Hunziker syndrome: a case of asymptomatic mucosal and acral hyperpigmentation.

    PubMed

    Cusick, Elizabeth H; Marghoob, Ashfaq A; Braun, Ralph P

    2017-04-01

    Laugier-Hunziker syndrome (LHS) is a rare condition characterized by acquired hyperpigmentation involving the lips, oral mucosa, acral surfaces, nails and perineum. While patients with LHS may manifest pigmentation in all of the aforementioned areas, most present with pigmentation localized to only a few of these anatomical sites. We herein report a patient exhibiting the characteristic pigment distribution pattern associated with LHS. Since LHS is a diagnosis based on exclusion, we discuss the differential diagnosis of mucocutaneous hyperpigmentation. Due to the benign nature of the disease, it is critical to differentiate this disorder from conditions with similar mucocutaneous pigmentary changes with somatic abnormalities that require medical management. We also explore potential mechanisms that may explain the pathogenesis of LHS.

  12. Laugier-Hunziker Syndrome Presenting with Metachronous Melanoacanthomas.

    PubMed

    Zaki, Hattan; Sabharwal, Amarpreet; Kramer, Jill; Aguirre, Alfredo

    2018-02-15

    Laugier-Hunziker syndrome (LHS, also termed idiopathic lenticular mucocutaneous hyperpigmentation) is an unusual condition characterized by progressive pigmentation of the mucous membranes. LHS displays a benign course and is not associated with malignancy. Here we present a case of LHS with a 7-year follow-up. We document metachronous oral melanoacanthomas in this individual. In addition, we found that the oral melanotic macules in this patient waxed and waned in a cyclical manner. To our knowledge, this is the first report of these findings in the context of LHS. Finally, we provide an overview of other conditions that can present with mucosal hyperpigmentation. It is critical to distinguish LHS from other conditions characterized by mucosal pigmentation in order to facilitate optimal patient care.

  13. Nucleotide Binding by Lhs1p Is Essential for Its Nucleotide Exchange Activity and for Function in Vivo

    PubMed Central

    de Keyzer, Jeanine; Steel, Gregor J.; Hale, Sarah J.; Humphries, Daniel; Stirling, Colin J.

    2009-01-01

    Protein translocation and folding in the endoplasmic reticulum of Saccharomyces cerevisiae involves two distinct Hsp70 chaperones, Lhs1p and Kar2p. Both proteins have the characteristic domain structure of the Hsp70 family consisting of a conserved N-terminal nucleotide binding domain and a C-terminal substrate binding domain. Kar2p is a canonical Hsp70 whose substrate binding activity is regulated by cochaperones that promote either ATP hydrolysis or nucleotide exchange. Lhs1p is a member of the Grp170/Lhs1p subfamily of Hsp70s and was previously shown to function as a nucleotide exchange factor (NEF) for Kar2p. Here we show that in addition to this NEF activity, Lhs1p can function as a holdase that prevents protein aggregation in vitro. Analysis of the nucleotide requirement of these functions demonstrates that nucleotide binding to Lhs1p stimulates the interaction with Kar2p and is essential for NEF activity. In contrast, Lhs1p holdase activity is nucleotide-independent and unaffected by mutations that interfere with ATP binding and NEF activity. In vivo, these mutants show severe protein translocation defects and are unable to support growth despite the presence of a second Kar2p-specific NEF, Sil1p. Thus, Lhs1p-dependent nucleotide exchange activity is vital for ER protein biogenesis in vivo. PMID:19759005

  14. Effects of surface area and inflow on the performance of stormwater best management practices with uncertainty analysis.

    PubMed

    Park, Daeryong; Roesner, Larry A

    2013-09-01

    The performance of stormwater best management practices (BMPs) is affected by BMP geometric and hydrologic factors. The objective of this study was to investigate the effect of BMP surface area and inflow on BMP performance using the k-C* model with uncertainty analysis. Observed total suspended solids (TSS) data sets from detention basins and retention ponds in the International Stormwater BMP Database were used to build and evaluate the model. Detention basins are regarded as dry ponds because they do not always hold water, whereas retention ponds have a permanent pool and are considered wet ponds. In this study, Latin hypercube sampling (LHS) was applied to consider uncertainty in both the influent event mean concentration (EMC), C(in), and the areal removal constant, k. The latter was estimated from the hydraulic loading rate, q, through a power-function relationship. Results show that the effluent EMC, C(out), decreased as inflow decreased and as BMP surface area increased in both detention basins and retention ponds. However, the dependence of C(out) on inflow and BMP surface area for detention basins differed from that for retention ponds. Specifically, C(in) was more strongly associated with the performance of the k-C* model for detention basins than were BMP surface area and inflow. For retention ponds, however, results suggest that BMP surface area and inflow, as well as C(in), influenced changes in C(out). These results suggest that the sensitive factors in the performance of the k-C* model are limited to C(in) for detention basins, whereas BMP surface area, inflow, and C(in) are all important for retention ponds.
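
    One common form of the k-C* model is C_out = C* + (C_in − C*)·exp(−k/q), with q the hydraulic loading rate (inflow divided by surface area). The sketch below propagates LHS-sampled uncertainty in C_in and k through this equation; all numeric ranges are illustrative, not fitted values from the BMP database:

      import numpy as np
      from scipy.stats import qmc

      Cstar = 10.0          # background TSS concentration (mg/L), assumed
      q = 0.5               # hydraulic loading rate (m/d) = inflow / area

      # LHS over the influent EMC C_in (mg/L) and areal removal constant k (m/d)
      unit = qmc.LatinHypercube(d=2, seed=11).random(1000)
      Cin, k = qmc.scale(unit, [40.0, 0.1], [250.0, 1.5]).T

      Cout = Cstar + (Cin - Cstar) * np.exp(-k / q)
      print("median C_out: %.1f mg/L" % np.median(Cout))
      print("90%% interval: %.1f-%.1f mg/L" % tuple(np.percentile(Cout, [5, 95])))
      # A larger surface area lowers q, pushing C_out toward the background C*.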

  15. Gasdynamic Inlet Isolation in Rotating Detonation Engine

    DTIC Science & Technology

    2010-12-01

    [Briefing-slide residue from the original report, repeated verbatim in place of an abstract: solver settings listing a 2D total variation diminishing (TVD) limiter with a continuous Riemann solver, minimum dissipation on the LHS and RHS, and supersonic/normal pressure-switch activation; no abstract text is recoverable.]

  16. The Implementation Challenge and the Learning Health System for SCI Initiative.

    PubMed

    Stucki, Gerold; Bickenbach, Jerome

    2017-02-01

    The paper introduces the special issue by linking the International Spinal Cord Injury (InSCI) Community Survey study to the Learning Health System for SCI Initiative (LHS-SCI). The LHS-SCI was designed to respond to the implementation challenge of bringing about policy reform in light of the targeted policy recommendations of the World Health Organization's International Perspectives on SCI report as well as the call for action of WHO's Global Disability Action Plan. The paper reviews the components of the LHS-SCI relevant to internationally comparable information, a theory of change to guide action, and the tools for evidence-informed policy. The interplay between persons, their health needs, and the societal response to those needs provides the foundation for the organization of the LHS-SCI Initiative. Moreover, as the other articles in this special issue describe in detail, the rationale, conceptualization, and study design of the InSCI study are also informed by the rationale and mission of the LHS-SCI Initiative. The LHS-SCI, and the implementation challenge that motivates it, shaped the design of the InSCI study and the initiative's overall mission: to continuously improve the lived experience of people living with SCI around the world through an international evidence- and rights-informed research and policy reform effort.

  17. Effect of acute lateral hemisection of the spinal cord on spinal neurons of postural networks

    PubMed Central

    Zelenin, P. V.; Lyalka, V. F.; Orlovsky, G. N.; Deliagina, T. G.

    2016-01-01

    In quadrupeds, acute lateral hemisection of the spinal cord (LHS) severely impairs postural functions, which recover over time. Postural limb reflexes (PLRs) represent a substantial component of postural corrections in intact animals. The aim of the present study was to characterize the effects of acute LHS on two populations of spinal neurons (F and E) mediating PLRs. For this purpose, in decerebrate rabbits, responses of individual neurons from L5 to stimulation causing PLRs were recorded before and during reversible LHS (caused by temporary cold block of signal transmission in lateral spinal pathways at L1), as well as after acute surgical (Sur) LHS at L1. Results obtained after Sur-LHS were compared to control data obtained in our previous study. We found that acute LHS caused disappearance of PLRs on the affected side. It also changed the proportions of different types of neurons on that side: a significant decrease and increase in the proportion of F- and non-modulated neurons, respectively, was found. LHS caused a significant decrease in most parameters of activity in F-neurons located in the ventral horn on the lesioned side and in E-neurons of the dorsal horn on both sides. These changes were caused by a significant decrease in the efficacy of posture-related sensory input from the ipsilateral limb to F-neurons, and from the contralateral limb to both F- and E-neurons. These distortions in operation of postural networks underlie the impairment of postural control after acute LHS, and represent a starting point for the subsequent recovery of postural functions. PMID:27702647

  18. Local markets and systems: hospital consolidations in metropolitan areas.

    PubMed

    Luke, R D; Ozcan, Y A; Olden, P C

    1995-10-01

    This study examines the formation of local hospital systems (LHSs) in urban markets by the end of 1992. We argue that a primary reason why hospitals join LHSs is to achieve improved positions of market power relative to threatening rivals. The study draws from a unique database of LHSs located in and around metropolitan statistical areas (MSAs). Data were obtained from the 1991 AHA Annual Hospital Survey, updated to the year 1992 using information obtained from multiple sources (telephone contacts of systems, systems lists of hospitals, published changes in ownership, etc.). Other measures were obtained from a variety of sources, principally the 1989 Area Resources File. The study presents cross-sectional analyses of rival threats and other factors bearing on LHS formation. Three characteristics of LHS formation are examined: LHS penetration of urban areas, LHS size, and number of LHS members located just outside the urban boundaries. LHS penetration is analyzed across urban markets, and LHS size and rural partners are examined across the LHSs. Major hypothesized findings are: (1) with the exception of the number of rural partners, all dependent variables are positively associated with the number of hospitals in the markets; the rural partner measure is negatively associated with the number of hospitals; (2) the number of doctors per capita is positively associated with all but the rural penetration measure; and (3) the percentage of the population in HMOs is positively associated with local cluster penetration and negatively associated with rural system partners. Other findings: (1) average income in the markets is negatively associated with all but the rural penetration measure; (2) LHS size and rural partners are both positively associated with nonprofit system ownership; and (3) they are also both negatively associated with the degree to which their multihospital systems are geographically concentrated in a single state. The findings generally support the argument that LHS formation is the product of hospital providers attempting to improve positions of power in their local markets.

  19. Low humidity environmental challenge causes barrier disruption and cornification of the mouse corneal epithelium via a c-jun N-terminal kinase 2 (JNK2) pathway.

    PubMed

    Pelegrino, F S A; Pflugfelder, S C; De Paiva, C S

    2012-01-01

    Patients with tear dysfunction often experience increased irritation symptoms when subjected to drafty and/or low humidity environmental conditions. The purpose of this study was to investigate the effects of low humidity stress (LHS) on corneal barrier function and expression of cornified envelope (CE) precursor proteins in the epithelium of C57BL/6 and c-jun N-terminal kinase 2 (JNK2) knockout (KO) mice. LHS was induced in both strains by exposure to an air draft for 15 (LHS15D) or 30 days (LHS30D) at a relative humidity below 30%. Nonstressed (NS) mice were used as controls. Oregon Green dextran uptake was used to measure corneal barrier function. Levels of small proline-rich protein (SPRR)-2, involucrin, occludin, and MMP-9 were evaluated by immunofluorescent staining in cornea sections. Whole-mount corneas immunostained for occludin were used to measure mean apical cell area. Gelatinase activity was evaluated by in situ zymography. Expression of MMP, CE and inflammatory cytokine genes was evaluated by qPCR. C57BL/6 mice exposed to LHS15D showed corneal barrier dysfunction, decreased apical corneal epithelial cell area, higher MMP-9 expression and gelatinase activity, and increased involucrin and SPRR-2 immunoreactivity in the corneal epithelium compared to NS mice. JNK2KO mice were resistant to LHS-induced corneal barrier disruption. MMP-3,-9,-13, IL-1α, IL-1β, involucrin and SPRR-2a RNA transcripts were significantly increased in C57BL/6 mice at LHS15D, while no change was noted in JNK2KO mice. LHS is capable of altering corneal barrier function, promoting pathologic alteration of the tight junction (TJ) complex and stimulating production of CE proteins by the corneal epithelium. Activation of the JNK2 signaling pathway contributes to corneal epithelial barrier disruption in LHS. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Function and disability in late life: comparison of the Late-Life Function and Disability Instrument to the Short-Form-36 and the London Handicap Scale.

    PubMed

    Dubuc, Nicole; Haley, Stephen; Ni, Pengsheng; Kooyoomjian, Jill; Jette, Alan

    2004-03-18

    We evaluated the Late-Life Function and Disability Instrument's (LLFDI) concurrent validity, comprehensiveness and precision by comparing it with the Short-Form-36 physical functioning scale (PF-10) and the London Handicap Scale (LHS). We administered the LLFDI, PF-10 and LHS to 75 community-dwelling adults (> 60 years of age). We used Pearson correlation coefficients to examine concurrent validity and Rasch analysis to compare the item hierarchies, content ranges and precision of the PF-10 and LLFDI function domains, and the LHS and the LLFDI disability domains. LLFDI Function (lower extremity scales) and PF-10 scores were highly correlated (r = 0.74 - 0.86, p < 0.001); moderate correlations were found between the LHS and the LLFDI Disability limitation (r = 0.66, p < 0.0001) and Disability frequency (r = 0.47, p < 0.001) scores. The LLFDI had a wider range of content coverage, fewer ceiling effects and better relative precision across the spectrum of function and disability than the PF-10 and the LHS. The LHS had slightly more content range and precision at the lower end of the disability scale than the LLFDI. The LLFDI is a more comprehensive and precise instrument than the PF-10 and LHS for assessing function and disability in community-dwelling older adults.

  1. Laugier-Hunziker syndrome: A case report.

    PubMed

    Wei, Z; Li, G-Y; Ruan, H-H; Zhang, L; Wang, W-M; Wang, X

    2018-04-01

    Laugier-Hunziker syndrome (LHS) is a rare, benign, acquired pigmentary condition mainly affecting the lips, oral mucosa and acral areas, frequently associated with longitudinal melanonychia. Herein, we report the case of a 45-year-old woman with LHS. The clinical, dermoscopic, and histopathologic features of LHS are reviewed and the important differential diagnoses are discussed. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  2. Simulation of Fault Tolerance in a Hypercube Arrangement of Discrete Processors.

    DTIC Science & Technology

    1987-12-01

    Indexed table-of-contents fragments from the report: Geometric Properties; Binary Properties; Intel Hypercube Hardware Arrangement; Properties of the Cube-Connected Cycles (CCC); CCC Redundancy; Re-Configurable Cube-Connected Cycles; figures covering hypercubes of different dimensions and hypercube properties.

  3. Effect of acute lateral hemisection of the spinal cord on spinal neurons of postural networks.

    PubMed

    Zelenin, P V; Lyalka, V F; Orlovsky, G N; Deliagina, T G

    2016-12-17

    In quadrupeds, acute lateral hemisection of the spinal cord (LHS) severely impairs postural functions, which recover over time. Postural limb reflexes (PLRs) represent a substantial component of postural corrections in intact animals. The aim of the present study was to characterize the effects of acute LHS on two populations of spinal neurons (F and E) mediating PLRs. For this purpose, in decerebrate rabbits, responses of individual neurons from L5 to stimulation causing PLRs were recorded before and during reversible LHS (caused by temporary cold block of signal transmission in lateral spinal pathways at L1), as well as after acute surgical (Sur) LHS at L1. Results obtained after Sur-LHS were compared to control data obtained in our previous study. We found that acute LHS caused disappearance of PLRs on the affected side. It also changed the proportions of different types of neurons on that side: a significant decrease and increase in the proportion of F- and non-modulated neurons, respectively, was found. LHS caused a significant decrease in most parameters of activity in F-neurons located in the ventral horn on the lesioned side and in E-neurons of the dorsal horn on both sides. These changes were caused by a significant decrease in the efficacy of posture-related sensory input from the ipsilateral limb to F-neurons, and from the contralateral limb to both F- and E-neurons. These distortions in operation of postural networks underlie the impairment of postural control after acute LHS, and represent a starting point for the subsequent recovery of postural functions. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  4. New device to measure dynamic intrusion/extrusion cycles of lyophobic heterogeneous systems.

    PubMed

    Guillemot, Ludivine; Galarneau, Anne; Vigier, Gérard; Abensur, Thierry; Charlaix, Élisabeth

    2012-10-01

    Lyophobic heterogeneous systems (LHS) are made of mesoporous materials immersed in a non-wetting liquid. One application of LHS is the nonlinear damping of high frequency vibrations. The behaviour of LHS is characterized by P - ΔV cycles, where P is the pressure applied to the system, and ΔV its volume change due to the intrusion of the liquid into the pores of the material, or its extrusion out of the pores. Very few dynamic studies of LHS have been performed until now. We describe here a new apparatus that allows us to carry out dynamic intrusion/extrusion cycles with various liquid/porous material systems, controlling the temperature from ambient to 120 °C and the frequency from 0.01 to 20 Hz. We show that for two LHS: water/MTS and Galinstan/CPG, the energy dissipated during one cycle depends very weakly on the cycle frequency, in strong contrast to conventional dampers.

  5. Rasch Analysis of the Locus-of-Hope Scale. Brief Report

    ERIC Educational Resources Information Center

    Gadiana, Leny G.; David, Adonis P.

    2015-01-01

    The Locus-of-Hope Scale (LHS) was developed as a measure of the locus-of-hope dimensions (Bernardo, 2010). The present study adds to the emerging literature on locus-of-hope by assessing the psychometric properties of the LHS using Rasch analysis. The results from the Rasch analyses of the four subscales of LHS provided evidence on the…

  6. Laugier-Hunziker syndrome: A case report and review of the literature.

    PubMed

    Rangwala, Sophia; Doherty, Christy B; Katta, Rajani

    2010-12-15

    Laugier-Hunziker syndrome (LHS) is a rare acquired disorder characterized by diffuse macular hyperpigmentation of the oral mucosa and, at times, longitudinal melanonychia. Although LHS is considered a benign disease with no systemic manifestations or malignant potential, it is important to rule out other mucocutaneous pigmentary disorders that do require medical management. Prompt clinical recognition also averts the need for excessive and invasive procedures and treatments. To date, only four cases have been reported in the United States. We present a 77-year-old man who had clinical features typical of LHS and we then provide a review of the literature on LHS and its mimickers.

  7. Advanced Stochastic Collocation Methods for Polynomial Chaos in RAVEN

    NASA Astrophysics Data System (ADS)

    Talbot, Paul W.

    As experiment complexity in fields such as nuclear engineering continually increases, so does the demand for robust computational methods to simulate them. In many simulations, input design parameters and intrinsic experiment properties are sources of uncertainty. Often small perturbations in uncertain parameters have significant impact on the experiment outcome. For instance, in nuclear fuel performance, small changes in fuel thermal conductivity can greatly affect maximum stress on the surrounding cladding. The difficulty quantifying input uncertainty impact in such systems has grown with the complexity of numerical models. Traditionally, uncertainty quantification has been approached using random sampling methods like Monte Carlo. For some models, the input parametric space and corresponding response output space is sufficiently explored with few low-cost calculations. For other models, it is computationally costly to obtain good understanding of the output space. To combat the expense of random sampling, this research explores the possibilities of using advanced methods in Stochastic Collocation for generalized Polynomial Chaos (SCgPC) as an alternative to traditional uncertainty quantification techniques such as Monte Carlo (MC) and Latin Hypercube Sampling (LHS) methods for applications in nuclear engineering. We consider traditional SCgPC construction strategies as well as truncated polynomial spaces using Total Degree and Hyperbolic Cross constructions. We also consider applying anisotropy (unequal treatment of different dimensions) to the polynomial space, and offer methods whereby optimal levels of anisotropy can be approximated. We contribute development to existing adaptive polynomial construction strategies. Finally, we consider High-Dimensional Model Reduction (HDMR) expansions, using SCgPC representations for the subspace terms, and contribute new adaptive methods to construct them. We apply these methods on a series of models of increasing complexity. We use analytic models of various levels of complexity, then demonstrate performance on two engineering-scale problems: a single-physics nuclear reactor neutronics problem, and a multiphysics fuel cell problem coupling fuels performance and neutronics. Lastly, we demonstrate sensitivity analysis for a time-dependent fuels performance problem. We demonstrate the application of all the algorithms in RAVEN, a production-level uncertainty quantification framework.
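
    For context, the stratified Latin hypercube scheme that SCgPC is benchmarked against can be written in a few lines. This is a minimal sketch of the textbook algorithm (one equal-probability stratum per sample in each dimension, with strata paired by random permutation), not the sampler implemented in RAVEN:

        import numpy as np

        def latin_hypercube(n_samples: int, n_dims: int, seed=None) -> np.ndarray:
            """Textbook LHS on [0,1]^d: split each axis into n equal-probability
            strata, draw one point per stratum, then shuffle the strata pairing."""
            rng = np.random.default_rng(seed)
            grid = np.arange(n_samples)[:, None]              # stratum indices 0..n-1
            u = (grid + rng.random((n_samples, n_dims))) / n_samples
            for j in range(n_dims):                           # independent pairing per dim
                rng.shuffle(u[:, j])
            return u

        # Compare against plain Monte Carlo on E[sum(x^2)] over U[0,1]^5 (exact: 5/3)
        f = lambda x: (x ** 2).sum(axis=1)
        print(f(latin_hypercube(200, 5, seed=0)).mean())            # LHS estimate
        print(f(np.random.default_rng(0).random((200, 5))).mean())  # MC estimate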

  8. Scale Invariance in Lateral Head Scans During Spatial Exploration.

    PubMed

    Yadav, Chetan K; Doreswamy, Yoganarasimha

    2017-04-14

    Universality connects various natural phenomena through physical principles governing their dynamics, and has provided broadly accepted answers to many complex questions, including information processing in neuronal systems. However, its significance in behavioral systems is still elusive. Lateral head scanning (LHS) behavior in rodents might contribute to spatial navigation by actively managing (optimizing) the available sensory information. Our findings of scale invariant distributions in LHS lifetimes, interevent intervals and event magnitudes, provide evidence for the first time that the optimization takes place at a critical point in LHS dynamics. We propose that the LHS behavior is responsible for preprocessing of the spatial information content, critical for subsequent foolproof encoding by the respective downstream neural networks.

  9. Refractory Pigmentation Associated with Laugier-Hunziker Syndrome following Er:YAG Laser Treatment.

    PubMed

    Ergun, Sertan; Saruhanoğlu, Alp; Migliari, Dante-Antonio; Maden, Ilay; Tanyeri, Hakkı

    2013-01-01

    The present report describes a case of Laugier-Hunziker syndrome (LHS), a rare benign condition. A patient with LHS develops acquired melanotic pigmentation of the lips and buccal mucosa, often accompanied by pigmentation of the nails. No systemic symptoms are associated with this syndrome. Normally, no treatment is required for this condition except for aesthetic reasons, mainly due to pigmentation on the lip mucosa. We present the case of a 37-year-old woman with LHS whose pigmented lesions on the lip and in the oral cavity were treated with an Er:YAG laser. At the 12-month postoperative follow-up, the lesions had recurred. The effects of surgical attempts to treat pigmentation associated with LHS are discussed.

  10. Scale Invariance in Lateral Head Scans During Spatial Exploration

    NASA Astrophysics Data System (ADS)

    Yadav, Chetan K.; Doreswamy, Yoganarasimha

    2017-04-01

    Universality connects various natural phenomena through physical principles governing their dynamics, and has provided broadly accepted answers to many complex questions, including information processing in neuronal systems. However, its significance in behavioral systems is still elusive. Lateral head scanning (LHS) behavior in rodents might contribute to spatial navigation by actively managing (optimizing) the available sensory information. Our findings of scale invariant distributions in LHS lifetimes, interevent intervals and event magnitudes, provide evidence for the first time that the optimization takes place at a critical point in LHS dynamics. We propose that the LHS behavior is responsible for preprocessing of the spatial information content, critical for subsequent foolproof encoding by the respective downstream neural networks.

  11. Harsh Environments, Life History Strategies, and Adjustment: A Longitudinal Study of Oregon Youth

    PubMed Central

    Hampson, Sarah E.; Andrews, Judy A.; Barckley, Maureen; Gerrard, Meg; Gibbons, Frederick X.

    2015-01-01

    We modeled the effects of harsh environments in childhood on adjustment in early emerging adulthood, through parenting style and the development of fast Life History Strategies (LHS; risky beliefs and behaviors) in adolescence. Participants were from the Oregon Youth Substance Use Project (N = 988; 85.7% White). Five cohorts of children in Grades 1–5 at recruitment were assessed through one-year post high school. Greater environmental harshness (neighborhood quality and family poverty) in Grades 1–6 predicted less parental investment at Grade 8. This parenting style was related to the development of fast LHS (favorable beliefs about substance users and willingness to use substances at Grade 9, and engagement in substance use and risky sexual behavior assessed across Grades 10–12). The indirect path from harsh environment through parenting and LHS to (less) psychological adjustment (indicated by lower life satisfaction, self-rated health, trait sociability, and higher depression) was significant (indirect effect = −.024, p = .011, 95% CI [−.043, −.006]). This chain of development was comparable to that found by Gibbons et al. (2012) for an African-American sample that, unlike the present study, included perceived racial discrimination in the assessment of harsh environment. PMID:26451065

  12. Surgical volume and conversion rate in laparoscopic hysterectomy: does volume matter? A multicenter retrospective cohort study.

    PubMed

    Keurentjes, José H M; Briët, Justine M; de Bock, Geertruida H; Mourits, Marian J E

    2018-02-01

    A multicenter, retrospective, cohort study was conducted in the Netherlands. The aim was to evaluate whether surgical volume of laparoscopic hysterectomies (LHs) performed by proven skilled gynecologists had an impact on the conversion rate from laparoscopy to laparotomy. In 14 hospitals, all LHs performed by 19 proven skilled gynecologists between 2007 and 2010 were included in the analysis. Surgical volume, conversion rate and type of conversion (reactive or strategic) were retrospectively assessed. To estimate the impact of surgical volume on the conversion rate, logistic regressions were performed. These regressions were adjusted for patient's age, Body Mass Index (BMI), ASA classification, previous abdominal surgery and the indication (malignant versus benign) for the LH. During the study period, 19 proven skilled gynecologists performed a total of 1051 LHs. Forty percent of the gynecologists performed over 20 LHs per year (median 17.3, range 5.4-49.5). Conversion to laparotomy occurred in 5.0% of all LHs (53 of 1051); 38 (3.6%) were strategic and 15 (1.4%) were reactive conversions. Performing over 20 LHs per year was significantly associated with a lower overall conversion rate (OR adjusted 0.43, 95% CI 0.24-0.77), a lower strategic conversion rate (OR adjusted 0.32, 95% CI 0.16-0.65), but not with a lower reactive conversion rate (OR adjusted 0.96, 95% CI 0.33-2.79). A higher annual surgical volume of LHs by proven skilled gynecologists is inversely related to the conversion rate to laparotomy, and results in a lower strategic conversion rate.

  13. Concurrent Image Processing Executive (CIPE). Volume 2: Programmer's guide

    NASA Technical Reports Server (NTRS)

    Williams, Winifred I.

    1990-01-01

    This manual is intended as a guide for application programmers using the Concurrent Image Processing Executive (CIPE). CIPE is intended to become the support system software for a prototype high performance science analysis workstation. In its current configuration, CIPE utilizes a JPL/Caltech Mark 3fp Hypercube with a Sun-4 host. CIPE's design is capable of incorporating other concurrent architectures as well. CIPE provides a programming environment that shields application programmers from the various user interfaces, file transactions, and architectural complexities. A programmer may choose to write applications to use only the Sun-4 or to use the Sun-4 with the hypercube. A hypercube program will use the hypercube's data processors and optionally the Weitek floating point accelerators. The CIPE programming environment provides a simple set of subroutines to activate user interface functions, specify data distributions, activate hypercube resident applications, and communicate parameters to and from the hypercube.

  14. Simplified Predictive Models for CO2 Sequestration Performance Assessment

    NASA Astrophysics Data System (ADS)

    Mishra, Srikanta; RaviGanesh, Priya; Schuetter, Jared; Mooney, Douglas; He, Jincong; Durlofsky, Louis

    2014-05-01

    We present results from an ongoing research project that seeks to develop and validate a portfolio of simplified modeling approaches that will enable rapid feasibility and risk assessment for CO2 sequestration in deep saline formation. The overall research goal is to provide tools for predicting: (a) injection well and formation pressure buildup, and (b) lateral and vertical CO2 plume migration. Simplified modeling approaches that are being developed in this research fall under three categories: (1) Simplified physics-based modeling (SPM), where only the most relevant physical processes are modeled, (2) Statistical-learning based modeling (SLM), where the simulator is replaced with a "response surface", and (3) Reduced-order method based modeling (RMM), where mathematical approximations reduce the computational burden. The system of interest is a single vertical well injecting supercritical CO2 into a 2-D layered reservoir-caprock system with variable layer permeabilities. In the first category (SPM), we use a set of well-designed full-physics compositional simulations to understand key processes and parameters affecting pressure propagation and buoyant plume migration. Based on these simulations, we have developed correlations for dimensionless injectivity as a function of the slope of fractional-flow curve, variance of layer permeability values, and the nature of vertical permeability arrangement. The same variables, along with a modified gravity number, can be used to develop a correlation for the total storage efficiency within the CO2 plume footprint. In the second category (SLM), we develop statistical "proxy models" using the simulation domain described previously with two different approaches: (a) classical Box-Behnken experimental design with a quadratic response surface fit, and (b) maximin Latin Hypercube sampling (LHS) based design with a Kriging metamodel fit using a quadratic trend and Gaussian correlation structure. For roughly the same number of simulations, the LHS-based meta-model yields a more robust predictive model, as verified by a k-fold cross-validation approach. In the third category (RMM), we use a reduced-order modeling procedure that combines proper orthogonal decomposition (POD) for reducing problem dimensionality with trajectory-piecewise linearization (TPWL) for extrapolating system response at new control points from a limited number of trial runs ("snapshots"). We observe significant savings in computational time with very good accuracy from the POD-TPWL reduced order model - which could be important in the context of history matching, uncertainty quantification and optimization problems. The paper will present results from our ongoing investigations, and also discuss future research directions and likely outcomes. This work was supported by U.S. Department of Energy National Energy Technology Laboratory award DE-FE0009051 and Ohio Department of Development grant D-13-02.
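
    The LHS-plus-kriging proxy-model idea from the second category can be sketched with standard scientific-Python tools. The toy simulator, sample sizes, and kernel settings below are assumptions for illustration, and sklearn's regressor fits a constant trend rather than the quadratic trend described above:

        import numpy as np
        from scipy.spatial.distance import pdist
        from scipy.stats import qmc
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def maximin_lhs(n, d, n_candidates=200, seed=0):
            """Crude maximin LHS: draw many random LHS designs and keep the one
            with the largest minimum pairwise point distance."""
            rng = np.random.default_rng(seed)
            best, best_score = None, -np.inf
            for _ in range(n_candidates):
                x = qmc.LatinHypercube(d=d, seed=rng).random(n)
                score = pdist(x).min()
                if score > best_score:
                    best, best_score = x, score
            return best

        def simulator(x):
            # stand-in for the full-physics reservoir simulator (assumed toy response)
            return np.sin(3 * x[:, 0]) + x[:, 1] ** 2

        X = maximin_lhs(40, 2)
        y = simulator(X)

        # Kriging metamodel with Gaussian (RBF) correlation structure
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=[0.2, 0.2]), normalize_y=True)
        gp.fit(X, y)
        print(gp.predict(np.array([[0.5, 0.5]]), return_std=True))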

  15. Improvements in sub-grid, microphysics averages using quadrature based approaches

    NASA Astrophysics Data System (ADS)

    Chowdhary, K.; Debusschere, B.; Larson, V. E.

    2013-12-01

    Sub-grid variability in microphysical processes plays a critical role in atmospheric climate models. In order to account for this sub-grid variability, Larson and Schanen (2013) propose placing a probability density function on the sub-grid cloud microphysics quantities, e.g. autoconversion rate, essentially interpreting the cloud microphysics quantities as a random variable in each grid box. Random sampling techniques, e.g. Monte Carlo and Latin Hypercube, can be used to calculate statistics, e.g. averages, on the microphysics quantities, which then feed back into the model dynamics on the coarse scale. We propose an alternate approach using numerical quadrature methods based on deterministic sampling points to compute the statistical moments of microphysics quantities in each grid box. We have performed a preliminary test on the Kessler autoconversion formula, and, upon comparison with Latin Hypercube sampling, our approach shows an increased level of accuracy with a reduction in sample size by almost two orders of magnitude. Application to other microphysics processes is the subject of ongoing research.
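
    A rough sketch of the proposed contrast on a Kessler-type autoconversion rate: a deterministic Gauss-Hermite rule averages the rate over an assumed Gaussian sub-grid PDF with a handful of evaluations, versus thousands of Latin hypercube samples. All constants below are illustrative placeholders, not values from the study:

        import numpy as np
        from scipy.stats import qmc, norm

        # Kessler-style autoconversion: rate = k * max(q_c - q_crit, 0)
        k_auto, q_crit = 1e-3, 0.5e-3          # assumed constants (s^-1, kg/kg)
        f = lambda qc: k_auto * np.maximum(qc - q_crit, 0.0)

        mu, sigma = 0.6e-3, 0.3e-3             # assumed Gaussian sub-grid PDF of q_c

        # Gauss-Hermite quadrature (change of variables x = mu + sqrt(2)*sigma*t)
        t, w = np.polynomial.hermite.hermgauss(16)
        quad_avg = np.sum(w * f(mu + np.sqrt(2) * sigma * t)) / np.sqrt(np.pi)

        # Latin hypercube estimate with two orders of magnitude more samples
        u = qmc.LatinHypercube(d=1, seed=0).random(2000).ravel()
        lhs_avg = f(norm(mu, sigma).ppf(u)).mean()

        print(quad_avg, lhs_avg)   # 16 quadrature nodes vs 2000 LHS samples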

  16. Duplication and diversification of the LEAFY HULL STERILE1 and Oryza sativa MADS5 SEPALLATA lineages in graminoid Poales

    PubMed Central

    2012-01-01

    Background: Gene duplication and the subsequent divergence in function of the resulting paralogs via subfunctionalization and/or neofunctionalization is hypothesized to have played a major role in the evolution of plant form. The LEAFY HULL STERILE1 (LHS1) SEPALLATA (SEP) genes have been linked with the origin and diversification of the grass spikelet, but it is uncertain 1) when the duplication event that produced the LHS1 clade and its paralogous lineage Oryza sativa MADS5 (OSM5) occurred, and 2) how changes in gene structure and/or expression might have contributed to subfunctionalization and/or neofunctionalization in the two lineages. Methods: Phylogenetic relationships among 84 SEP genes were estimated using Bayesian methods. RNA expression patterns were inferred using in situ hybridization. The patterns of protein sequence and RNA expression evolution were reconstructed using maximum parsimony (MP) and maximum likelihood (ML) methods, respectively. Results: Phylogenetic analyses mapped the LHS1/OSM5 duplication event to the base of the grass family. MP character reconstructions estimated that a change from cytosine to thymine in the first codon position of the first amino acid after the Zea mays MADS3 (ZMM3) domain converted a glutamine to a stop codon in the OSM5 ancestor following the LHS1/OSM5 duplication event. RNA expression analyses of OSM5 co-orthologs in Avena sativa, Chasmanthium latifolium, Hordeum vulgare, Pennisetum glaucum, and Sorghum bicolor followed by ML reconstructions of these data and previously published analyses estimated a complex pattern of gain and loss of LHS1 and OSM5 expression in different floral organs and different flowers within the spikelet or inflorescence. Conclusions: Previous authors have reported that rice OSM5 and LHS1 proteins have different interaction partners, indicating that the truncation of OSM5 following the LHS1/OSM5 duplication event has resulted in both partitioned and potentially novel gene functions. The complex pattern of OSM5 and LHS1 expression evolution is not consistent with a simple subfunctionalization model following the gene duplication event, but there is evidence of recent partitioning of OSM5 and LHS1 expression within different floral organs of A. sativa, C. latifolium, P. glaucum and S. bicolor, and between the upper and lower florets of the two-flowered maize spikelet. PMID:22340849

  17. A discrete random walk on the hypercube

    NASA Astrophysics Data System (ADS)

    Zhang, Jingyuan; Xiang, Yonghong; Sun, Weigang

    2018-03-01

    In this paper, we study the scaling of the mean first-passage time (MFPT) of random walks on the hypercube and obtain a closed-form formula for the MFPT averaged over all node pairs. We also determine the exponent of scaling efficiency characterizing the random walks and compare it with those of existing networks. Finally, we study random walks on the hypercube with a trap located at a fixed node and provide a solution for the Kirchhoff index of the hypercube.
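
    Closed-form results of this kind are easy to check numerically. Below is a small sketch that computes mean first-passage times to a trap at node 0 by solving the absorbing-walk linear system; it uses exact enumeration, so it is feasible only for modest n:

        import numpy as np

        def hypercube_mfpt(n):
            """Mean first-passage times to a trap at node 0 for the unbiased random
            walk on the n-dimensional hypercube: solve (I - Q) m = 1, where Q is
            the transition matrix restricted to the non-trap nodes."""
            N = 2 ** n
            P = np.zeros((N, N))
            for i in range(N):
                for b in range(n):
                    P[i, i ^ (1 << b)] = 1.0 / n   # neighbors differ in exactly one bit
            A = np.eye(N - 1) - P[1:, 1:]
            return np.linalg.solve(A, np.ones(N - 1))

        # By vertex-transitivity, the MFPT averaged over all ordered node pairs
        # equals the average of m over the N - 1 non-trap starting nodes.
        m = hypercube_mfpt(4)
        print(m.mean())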

  18. Optimization of an Advanced Multi-Junction Solar-Cell Design for Space Environments (AM0) Using Nearly Orthogonal Latin Hypercubes

    DTIC Science & Technology

    2017-06-01

    Thesis by Silvio Pueschel, June 2017: Optimization of an Advanced Multi-Junction Solar-Cell Design for Space Environments (AM0) Using Nearly Orthogonal Latin Hypercubes. The recoverable index fragments indicate that the work models multi-junction solar cells with Silvaco Atlas simulation software and introduces the nearly orthogonal Latin hypercube (NOLH) design of experiments (DoE).

  19. Laugier–Hunziker syndrome: a report of three cases and literature review

    PubMed Central

    Wang, Wen-Mei; Wang, Xiang; Duan, Ning; Jiang, Hong-Liu; Huang, Xiao-Feng

    2012-01-01

    Laugier–Hunziker syndrome (LHS) is an acquired pigmentary condition affecting lips, oral mucosa and acral area, frequently associated with longitudinal melanonychia. There is neither malignant predisposition nor underlying systemic abnormality associated with LHS. Herein, we present three uncommon cases of LHS with possibly new feature of nail pigmentation, which were diagnosed during the past 2 years. We also review the clinical and histological findings, differential diagnosis, and treatment of the syndrome in published literature. PMID:23174847

  20. Severe neonatal hyperbilirubinaemia is frequently associated with long hospitalisation for emergency care in Nigeria.

    PubMed

    Olusanya, Bolajoko O; Mabogunje, Cecilia A; Imam, Zainab O; Emokpae, Abieyuwa A

    2017-12-01

    This study investigated the frequency and predictors of a long hospital stay (LHS) for severe neonatal hyperbilirubinaemia in Nigeria. Length of stay (LOS) for severe hyperbilirubinaemia was examined among neonates consecutively admitted to the emergency department of a children's hospital in Lagos from January 2013 to December 2014. The median LOS was used as the cut-off for LHS. Multivariate logistic regression determined the independent predictors of LHS based on demographic and clinical factors significantly associated with the log-transformed LOS in the bivariate analyses. We enrolled 622 hyperbilirubinaemic infants with a median age of four days (interquartile range 2-6 days), and 276 (44.4%) had LHS based on the median LOS of five days. Regardless of their birth place, infants were significantly more likely to have LHS if they were admitted in the first two days of life (p = 0.008) - especially with birth asphyxia - or had acute bilirubin encephalopathy (p = 0.001), or required one (p = 0.020) or repeat (p = 0.022) exchange transfusions. Infants who required repeat exchange transfusions had the highest odds for LHS (odds ratio 4.98, 95% confidence interval 1.26-19.76). Severe hyperbilirubinaemia was frequently associated with long hospitalisation in Nigeria, especially if neonates had birth asphyxia or required exchange transfusions. ©2017 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
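
    As an illustration of how adjusted odds ratios of this kind are computed, a minimal multivariate logistic regression on synthetic data follows; the variable names, effect sizes, and prevalences are invented stand-ins, not the study's records:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 622
        df = pd.DataFrame({
            "early_admission": rng.integers(0, 2, n),    # admitted in first two days (0/1)
            "encephalopathy": rng.integers(0, 2, n),     # acute bilirubin encephalopathy (0/1)
            "repeat_exchange": rng.binomial(1, 0.1, n),  # repeat exchange transfusion (0/1)
        })
        # assumed true effects used only to simulate the binary outcome
        logit = -0.6 + 0.5 * df.early_admission + 0.8 * df.encephalopathy + 1.6 * df.repeat_exchange
        df["lhs"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

        X = sm.add_constant(df[["early_admission", "encephalopathy", "repeat_exchange"]])
        fit = sm.Logit(df["lhs"], X).fit(disp=0)
        print(np.exp(fit.params))      # adjusted odds ratios
        print(np.exp(fit.conf_int()))  # 95% confidence intervals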

  1. The origin of light hydrocarbons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mango, F.D.

    2000-04-01

    The light hydrocarbons (LHs) are probably intermediates in the catalytic decomposition of oil to gas. Two lines of evidence support this possibility. First, the reaction was duplicated experimentally under moderate conditions. Second, natural LHs exhibit the characteristics of catalytic products, in particular a proportionality between isomers: (x·y_i)/(x_i·y) = α, where x and x_i are isomers, y and y_i are isomers that are structurally similar to x and x_i, and α is a constant. All oils exhibit this relationship, with coefficients of correlation reaching 0.99. Isomer ratios change systematically with concentrations, some approaching thermodynamic equilibrium, others not. The correlations reported are the strongest yet disclosed for the LHs. Isomers are related in triads (e.g., n-hexane ↔ 2-methylpentane ↔ 3-methylpentane), consistent with cyclopropane precursors. The LHs obtained experimentally are indistinguishable from natural LHs in (x·y_i)/(x_i·y). These relationships are not explained by physical fractionations, equilibrium control, or noncatalytic modes of origin. A catalytic origin, on the other hand, has precedence, economy and experimental support.

  2. Developing an inverted Barrovian sequence; insights from monazite petrochronology

    NASA Astrophysics Data System (ADS)

    Mottram, Catherine M.; Warren, Clare J.; Regis, Daniele; Roberts, Nick M. W.; Harris, Nigel B. W.; Argles, Tom W.; Parrish, Randall R.

    2014-10-01

    In the Himalayan region of Sikkim, the well-developed inverted metamorphic sequence of the Main Central Thrust (MCT) zone is folded, thus exposing several transects through the structure that reached similar metamorphic grades at different times. In-situ LA-ICP-MS U-Th-Pb monazite ages, linked to pressure-temperature conditions via trace-element reaction fingerprints, allow key aspects of the evolution of the thrust zone to be understood for the first time. The ages show that peak metamorphic conditions were reached earliest in the structurally highest part of the inverted metamorphic sequence, in the Greater Himalayan Sequence (GHS) in the hanging wall of the MCT. Monazite in this unit grew over a prolonged period between ∼37 and 16 Ma in the southerly leading-edge of the thrust zone and between ∼37 and 14.5 Ma in the northern rear-edge of the thrust zone, at peak metamorphic conditions of ∼790 °C and 10 kbar. Monazite ages in Lesser Himalayan Sequence (LHS) footwall rocks show that identical metamorphic conditions were reached ∼4-6 Ma apart at sample sites separated by ∼60 km along the MCT transport direction. Upper LHS footwall rocks reached peak metamorphic conditions of ∼655 °C and 9 kbar between ∼21 and 16 Ma in the more southerly-exposed transect and ∼14.5-12 Ma in the northern transect. Similarly, lower LHS footwall rocks reached peak metamorphic conditions of ∼580 °C and 8.5 kbar at ∼16 Ma in the south, and 9-10 Ma in the north. In the southern transect, the timing of partial melting in the GHS hanging wall (∼23-19.5 Ma) overlaps with the timing of prograde metamorphism (∼21 Ma) in the LHS footwall, confirming that the hanging wall may have provided the heat necessary for the metamorphism of the footwall. Overall, the data provide robust evidence for progressively downwards-penetrating deformation and accretion of original LHS footwall material to the GHS hanging wall over a period of ∼5 Ma. These processes appear to have occurred several times during the prolonged ductile evolution of the thrust. The preserved inverted metamorphic sequence therefore documents the formation of sequential 'paleo-thrusts' through time, cutting down from the original locus of MCT movement at the LHS-GHS protolith boundary and forming at successively lower pressure and temperature conditions. The petrochronologic methods applied here constrain a complex temporal and thermal deformation history, and demonstrate that inverted metamorphic sequences can preserve a rich record of the duration of progressive ductile thrusting.

  3. Influence of scanning parameters on the estimation accuracy of control points of B-spline surfaces

    NASA Astrophysics Data System (ADS)

    Aichinger, Julia; Schwieger, Volker

    2018-04-01

    This contribution deals with the influence of scanning parameters such as scanning distance, incidence angle, surface quality and sampling width on the average estimated standard deviations of the position of control points of B-spline surfaces, which are used to model surfaces from terrestrial laser scanning (TLS) data. The influence of the scanning parameters is analyzed by Monte Carlo based variance analysis. Samples were generated for uncorrelated and correlated data using Latin hypercube and replicated Latin hypercube sampling algorithms, respectively. Finally, the investigations show that the most influential scanning parameter is the distance from the laser scanner to the object. The angle of incidence shows a significant effect for distances of 50 m and longer, while the surface quality contributes only negligible effects. The sampling width has no influence. Optimal scanning parameters are found at the smallest possible object distance, at an angle of incidence close to 0°, and at the highest surface quality. The consideration of correlations improves the estimation accuracy and underlines the importance of complete stochastic models for TLS measurements.
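
    A sketch of the replicated LHS idea: several independent LHS designs of the same size give replicate means whose spread directly estimates the sampling uncertainty of the estimator. The standard-deviation model below is a purely illustrative stand-in for the TLS error behaviour, not the paper's model:

        import numpy as np
        from scipy.stats import qmc, uniform

        def replicated_lhs_mean(f, dists, n, r, seed=0):
            """r independent LHS replicates of size n; the replicate means give a
            direct standard-error estimate for the LHS estimator of E[f(X)]."""
            rng = np.random.default_rng(seed)
            means = []
            for _ in range(r):
                u = qmc.LatinHypercube(d=len(dists), seed=rng).random(n)
                x = np.column_stack([d.ppf(u[:, j]) for j, d in enumerate(dists)])
                means.append(f(x).mean())
            means = np.asarray(means)
            return means.mean(), means.std(ddof=1) / np.sqrt(r)

        # Illustrative stand-in: point-position std. dev. (m) grows with
        # object distance (x[:, 0], m) and incidence angle (x[:, 1], deg)
        f = lambda x: 0.1 + 2e-4 * x[:, 0] + 0.05 * np.tan(np.radians(x[:, 1]))
        dists = [uniform(5, 45), uniform(0, 60)]   # distance 5-50 m, incidence 0-60 deg
        print(replicated_lhs_mean(f, dists, n=100, r=20))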

  4. Ambulatory laparoscopic minor hepatic surgery: Retrospective observational study.

    PubMed

    Gaillard, M; Tranchart, H; Lainas, P; Tzanis, D; Franco, D; Dagher, I

    2015-11-01

    Over the last decade, laparoscopic hepatic surgery (LHS) has been increasingly performed throughout the world. Meanwhile, ambulatory surgery has been developed and implemented with the aims of improving patient satisfaction and reducing health care costs. The objective of this study was to report our preliminary experience with ambulatory minimally invasive LHS. Between 1999 and 2014, 172 patients underwent LHS at our institution, including 151 liver resections and 21 fenestrations of hepatic cysts. The consecutive series of highly selected patients who underwent ambulatory LHS were included in this study. Twenty patients underwent ambulatory LHS. Indications were liver cysts in 10 cases, liver angioma in 3 cases, focal nodular hyperplasia in 3 cases, and colorectal hepatic metastasis in 4 cases. The median operative time was 92 minutes (range: 50-240 minutes). The median blood loss was 35 mL (range: 20-150 mL). There were no postoperative complications or re-hospitalizations. All patients were hospitalized after surgery in our ambulatory surgery unit, and were discharged 5-7 hours after surgery. The median postoperative pain score at the time of discharge was 3 (visual analogue scale: 0-10; range: 0-4). The median quality-of-life score at the first postoperative visit was 8 (range: 6-10) and the median cosmetic satisfaction score was 8 (range: 7-10). This series shows that, in selected patients, ambulatory LHS is feasible and safe for minor hepatic procedures. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  5. Physically motivated correlation formalism in hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Roy, Ankita; Rafert, J. Bruce

    2004-05-01

    Most remote sensing data sets contain a limited number of independent spatial and spectral measurements, beyond which no effective increase in information is achieved. This paper presents a Physically Motivated Correlation Formalism (PMCF), which places both spatial and spectral data on an equivalent mathematical footing in the context of a specific kernel, such that optimal combinations of independent data can be selected from the entire hypercube via the method of "correlation moments". We present an experimental and computational analysis of hyperspectral data sets using the Michigan Tech VFTHSI [Visible Fourier Transform Hyperspectral Imager], which is based on a Sagnac interferometer and adjusted to obtain high SNR levels. The captured signal interferograms of different targets - aerial images of Houghton and lab-based data (white light, He-Ne laser, and discharge tube sources), acquired as customized scans of the targets at identical exposures - are processed using inverse imaging transformations and filtering techniques to obtain spectral profiles and to generate hypercubes from which spectral, spatial, and cross moments are computed. PMCF answers the question of how optimally the entire hypercube should be sampled and finds how many spatial-spectral pixels are required for a particular target recognition task.

  6. Parallel and fault-tolerant algorithms for hypercube multiprocessors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aykanat, C.

    1988-01-01

    Several techniques for increasing the performance of parallel algorithms on distributed-memory message-passing multiprocessor systems are investigated. These techniques are effectively implemented for the parallelization of the Scaled Conjugate Gradient (SCG) algorithm on a hypercube-connected message-passing multiprocessor. Significant performance improvement is achieved by using these techniques. The SCG algorithm is used for the solution phase of an FE modeling system. Almost linear speed-up is achieved, and it is shown that the hypercube topology is scalable for this class of FE problems. The SCG algorithm is also shown to be suitable for vectorization, and near-supercomputer performance is achieved on a vector hypercube multiprocessor by exploiting both parallelization and vectorization. Fault-tolerance issues for the parallel SCG algorithm and for the hypercube topology are also addressed.

  7. Toward a Learning Health-care System – Knowledge Delivery at the Point of Care Empowered by Big Data and NLP

    PubMed Central

    Kaggal, Vinod C.; Elayavilli, Ravikumar Komandur; Mehrabi, Saeed; Pankratz, Joshua J.; Sohn, Sunghwan; Wang, Yanshan; Li, Dingcheng; Rastegar, Majid Mojarad; Murphy, Sean P.; Ross, Jason L.; Chaudhry, Rajeev; Buntrock, James D.; Liu, Hongfang

    2016-01-01

    The concept of optimizing health care by understanding and generating knowledge from previous evidence, i.e., the Learning Health-care System (LHS), has gained momentum and now has national prominence. Meanwhile, the rapid adoption of electronic health records (EHRs) enables the data collection required to form the basis for facilitating LHS. A prerequisite for using EHR data within the LHS is an infrastructure that enables access to EHR data longitudinally for health-care analytics and in real time for knowledge delivery. Additionally, significant clinical information is embedded in the free text, making natural language processing (NLP) an essential component in implementing an LHS. Herein, we share our institutional implementation of a big data-empowered clinical NLP infrastructure, which not only enables health-care analytics but also has real-time NLP processing capability. The infrastructure has been utilized for multiple institutional projects including the MayoExpertAdvisor, an individualized care recommendation solution for clinical care. We compared the advantages of big data over two other environments. Big data infrastructure significantly outperformed other infrastructure in terms of computing speed, demonstrating its value in making the LHS a possibility in the near future. PMID:27385912

  8. Toward a Learning Health-care System - Knowledge Delivery at the Point of Care Empowered by Big Data and NLP.

    PubMed

    Kaggal, Vinod C; Elayavilli, Ravikumar Komandur; Mehrabi, Saeed; Pankratz, Joshua J; Sohn, Sunghwan; Wang, Yanshan; Li, Dingcheng; Rastegar, Majid Mojarad; Murphy, Sean P; Ross, Jason L; Chaudhry, Rajeev; Buntrock, James D; Liu, Hongfang

    2016-01-01

    The concept of optimizing health care by understanding and generating knowledge from previous evidence, i.e., the Learning Health-care System (LHS), has gained momentum and now has national prominence. Meanwhile, the rapid adoption of electronic health records (EHRs) enables the data collection required to form the basis for facilitating LHS. A prerequisite for using EHR data within the LHS is an infrastructure that enables access to EHR data longitudinally for health-care analytics and in real time for knowledge delivery. Additionally, significant clinical information is embedded in the free text, making natural language processing (NLP) an essential component in implementing an LHS. Herein, we share our institutional implementation of a big data-empowered clinical NLP infrastructure, which not only enables health-care analytics but also has real-time NLP processing capability. The infrastructure has been utilized for multiple institutional projects including the MayoExpertAdvisor, an individualized care recommendation solution for clinical care. We compared the advantages of big data over two other environments. Big data infrastructure significantly outperformed other infrastructure in terms of computing speed, demonstrating its value in making the LHS a possibility in the near future.

  9. Implementation of laparoscopic hysterectomy: maintenance of skills after a mentorship program.

    PubMed

    Twijnstra, A R H; Blikkendaal, M D; Kolkman, W; Smeets, M J G H; Rhemrev, J P T; Jansen, F W

    2010-01-01

    To evaluate the implementation and maintenance of advanced laparoscopic skills after a structured mentorship program in laparoscopic hysterectomy (LH). Retrospective cohort analysis of 104 consecutive LHs performed by two gynecologists during and after a mentorship program. LHs were compared for indication, patient characteristics and intraoperative characteristics. As a frame of reference, 94 LHs performed by the mentor were analyzed. With regard to indication, blood loss and adverse outcomes, both trainees performed LHs during their mentorship program comparably with the LHs performed by the mentor. The difference in mean operating time between trainees and mentor was not clinically significant. Both trainees progressed along a learning curve, while operating time remained statistically constant and comparable to that of the mentor. After completing the mentorship program, both gynecologists maintained their acquired skills, as blood loss, adverse outcome rates and operating time were comparable with the results during their traineeship. A mentorship program is an effective and durable tool for implementing a new surgical procedure in a teaching hospital with respect to patient safety aspects, as indications, operating time and adverse outcome rates are comparable to those of the mentor in his own hospital during and after completion of the mentorship program. Copyright © 2010 S. Karger AG, Basel.

  10. Fault tree models for fault tolerant hypercube multiprocessors

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Tuazon, Jezus O.

    1991-01-01

    Three candidate fault tolerant hypercube architectures are modeled, their reliability analyses are compared, and the resulting implications of these methods of incorporating fault tolerance into hypercube multiprocessors are discussed. In the course of performing the reliability analyses, the use of HARP and fault trees in modeling sequence dependent system behaviors is demonstrated.

  11. Influence of Population Variation of Physiological Parameters in Computational Models of Space Physiology

    NASA Technical Reports Server (NTRS)

    Myers, J. G.; Feola, A.; Werner, C.; Nelson, E. S.; Raykin, J.; Samuels, B.; Ethier, C. R.

    2016-01-01

    The earliest manifestations of Visual Impairment and Intracranial Pressure (VIIP) syndrome become evident after months of spaceflight and include a variety of ophthalmic changes, including posterior globe flattening and distension of the optic nerve sheath. Prevailing evidence links the occurrence of VIIP to the cephalic fluid shift induced by microgravity and the subsequent pressure changes around the optic nerve and eye. Deducing the etiology of VIIP is challenging due to the wide range of physiological parameters that may be influenced by spaceflight and are required to address a realistic spectrum of physiological responses. Here, we report on the application of an efficient approach to interrogating physiological parameter space through computational modeling. Specifically, we assess the influence of uncertainty in input parameters for two models of VIIP syndrome: a lumped-parameter model (LPM) of the cardiovascular and central nervous systems, and a finite-element model (FEM) of the posterior eye, optic nerve head (ONH) and optic nerve sheath. Methods: To investigate the parameter space in each model, we employed Latin hypercube sampling partial rank correlation coefficient (LHSPRCC) strategies. LHS techniques outperform Monte Carlo approaches by enforcing efficient sampling across the entire range of all parameters. The PRCC method estimates the sensitivity of model outputs to these parameters while adjusting for the linear effects of all other inputs. The LPM analysis addressed uncertainties in 42 physiological parameters, such as initial compartmental volume and nominal compartment percentage of total cardiac output in the supine state, while the FEM evaluated the effects on biomechanical strain from uncertainties in 23 material and pressure parameters for the ocular anatomy. Results and Conclusion: The LPM analysis identified several key factors including high sensitivity to the initial fluid distribution. The FEM study found that intraocular pressure and intracranial pressure had dominant impact on the peak strains in the ONH and retro-laminar optic nerve, respectively; optic nerve and lamina cribrosa stiffness were also important. This investigation illustrates the ability of LHSPRCC to identify the most influential physiological parameters, which must therefore be well-characterized to produce the most accurate numerical results.
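
    The LHSPRCC machinery itself is compact: sample by LHS, rank-transform the inputs and output, and for each parameter correlate the residuals left after regressing out the other ranked inputs. A sketch follows, with a toy response standing in for the lumped-parameter model:

        import numpy as np
        from scipy.stats import qmc, rankdata

        def prcc(X, y):
            """Partial rank correlation of each column of X with y: rank-transform,
            regress out the other ranked inputs, then correlate the residuals."""
            R = np.column_stack([rankdata(c) for c in X.T])
            ry = rankdata(y)
            coeffs = []
            for j in range(R.shape[1]):
                Z = np.column_stack([np.ones(len(ry)), np.delete(R, j, axis=1)])
                res_x = R[:, j] - Z @ np.linalg.lstsq(Z, R[:, j], rcond=None)[0]
                res_y = ry - Z @ np.linalg.lstsq(Z, ry, rcond=None)[0]
                coeffs.append(np.corrcoef(res_x, res_y)[0, 1])
            return np.array(coeffs)

        # Toy response standing in for the physiological model (assumed)
        u = qmc.LatinHypercube(d=3, seed=1).random(500)
        y = 3.0 * u[:, 0] - 1.5 * u[:, 1] ** 2 + 0.1 * np.random.default_rng(1).normal(size=500)
        print(prcc(u, y))   # inputs 1 and 2 dominate; input 3 is near zero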

  12. High-throughput Raman chemical imaging for evaluating food safety and quality

    NASA Astrophysics Data System (ADS)

    Qin, Jianwei; Chao, Kuanglin; Kim, Moon S.

    2014-05-01

    A line-scan hyperspectral system was developed to enable Raman chemical imaging for large sample areas. A custom-designed 785 nm line-laser based on a scanning mirror serves as an excitation source. A 45° dichroic beamsplitter reflects the laser light to form a 24 cm x 1 mm excitation line normally incident on the sample surface. Raman signals along the laser line are collected by a detection module consisting of a dispersive imaging spectrograph and a CCD camera. A hypercube is accumulated line by line as a motorized table moves the samples transversely through the laser line. The system covers a Raman shift range of -648.7-2889.0 cm-1 and a 23 cm wide area. An example application, for authenticating milk powder, was presented to demonstrate the system performance. In four minutes, the system acquired a 512x110x1024 hypercube (56,320 spectra) from four 47-mm-diameter Petri dishes containing four powder samples. Chemical images were created for detecting two adulterants (melamine and dicyandiamide) that had been mixed into the milk powder.

  13. Response Surface Modeling (RSM) Tool Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Andrew; Lawrence, Earl

    The Response Surface Modeling (RSM) Tool Suite is a collection of three codes used to generate an empirical interpolation function for a collection of drag coefficient calculations computed with Test Particle Monte Carlo (TPMC) simulations. The first code, "Automated RSM", automates the generation of a drag coefficient RSM for a particular object to a single command. "Automated RSM" first creates a Latin Hypercube Sample (LHS) of 1,000 ensemble members to explore the global parameter space. For each ensemble member, a TPMC simulation is performed and the object drag coefficient is computed. In the next step of the "Automated RSM" code, a Gaussian process is used to fit the TPMC simulations. In the final step, Markov Chain Monte Carlo (MCMC) is used to evaluate the non-analytic probability distribution function from the Gaussian process. The second code, "RSM Area", creates a look-up table for the projected area of the object based on input limits on the minimum and maximum allowed pitch and yaw angles and pitch and yaw angle intervals. The projected area from the look-up table is used to compute the ballistic coefficient of the object based on its pitch and yaw angle. An accurate ballistic coefficient is crucial in accurately computing the drag on an object. The third code, "RSM Cd", uses the RSM generated by the "Automated RSM" code and the projected area look-up table generated by the "RSM Area" code to accurately compute the drag coefficient and ballistic coefficient of the object. The user can modify the object velocity, object surface temperature, the translational temperature of the gas, the species concentrations of the gas, and the pitch and yaw angles of the object. Together, these codes allow for the accurate derivation of an object's drag coefficient and ballistic coefficient under any conditions with only knowledge of the object's geometry and mass.
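
    To illustrate the "RSM Area"/"RSM Cd" look-up step, here is a minimal sketch that interpolates a pitch/yaw projected-area table and forms the ballistic coefficient B = m/(Cd·A). The grid, area surface, mass, and drag coefficient are invented placeholders, not outputs of the tool suite:

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        # Assumed pitch/yaw grid (deg) and projected-area table (m^2) for some object
        pitch = np.linspace(-90, 90, 7)
        yaw = np.linspace(-180, 180, 13)
        P, Y = np.meshgrid(pitch, yaw, indexing="ij")
        area = 1.0 + 0.4 * np.abs(np.sin(np.radians(P))) + 0.2 * np.abs(np.sin(np.radians(Y)))

        area_lut = RegularGridInterpolator((pitch, yaw), area)

        def ballistic_coefficient(mass_kg, cd, pitch_deg, yaw_deg):
            """B = m / (Cd * A), with A interpolated from the pitch/yaw look-up table."""
            a = area_lut([[pitch_deg, yaw_deg]]).item()
            return mass_kg / (cd * a)

        print(ballistic_coefficient(mass_kg=250.0, cd=2.2, pitch_deg=15.0, yaw_deg=30.0))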

  14. Simplified predictive models for CO2 sequestration performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, Srikanta; Ganesh, Priya; Schuetter, Jared

    CO2 sequestration in deep saline formations is increasingly being considered as a viable strategy for the mitigation of greenhouse gas emissions from anthropogenic sources. In this context, detailed numerical simulation based models are routinely used to understand key processes and parameters affecting pressure propagation and buoyant plume migration following CO2 injection into the subsurface. As these models are data and computation intensive, the development of computationally efficient alternatives to conventional numerical simulators has become an active area of research. Such simplified models can be valuable assets during preliminary CO2 injection project screening, serve as a key element of probabilistic system assessment modeling tools, and assist regulators in quickly evaluating geological storage projects. We present three strategies for the development and validation of simplified modeling approaches for CO2 sequestration in deep saline formations: (1) simplified physics-based modeling, (2) statistical-learning based modeling, and (3) reduced-order method based modeling. In the first category, a set of full-physics compositional simulations is used to develop correlations for dimensionless injectivity as a function of the slope of the CO2 fractional-flow curve, the variance of layer permeability values, and the nature of the vertical permeability arrangement. The same variables, along with a modified gravity number, can be used to develop a correlation for the total storage efficiency within the CO2 plume footprint. Furthermore, the dimensionless average pressure buildup after the onset of boundary effects can be correlated to dimensionless time, CO2 plume footprint, and storativity contrast between the reservoir and caprock. In the second category, statistical “proxy models” are developed using the simulation domain described previously with two approaches: (a) a classical Box-Behnken experimental design with a quadratic response surface, and (b) a maximin Latin hypercube sampling (LHS) based design with a multidimensional kriging metamodel fit. For roughly the same number of simulations, the LHS-based metamodel yields a more robust predictive model, as verified by a k-fold cross-validation approach (with data split into training and test sets) as well as by validation with an independent dataset. In the third category, a reduced-order modeling procedure is utilized that combines proper orthogonal decomposition (POD) for reducing problem dimensionality with trajectory-piecewise linearization (TPWL) in order to represent the system response at new control settings from a limited number of training runs. Significant savings in computational time are observed with reasonable accuracy from the POD-TPWL reduced-order model for both vertical and horizontal well problems, which could be important in the context of history matching, uncertainty quantification and optimization problems. The simplified physics and statistical learning based models are also validated using an uncertainty analysis framework. Reference cumulative distribution functions of key model outcomes (i.e., plume radius and reservoir pressure buildup), generated using 97 full-physics simulation runs, are successfully validated against the CDFs from 10,000-sample probabilistic simulations using the simplified models. The main contribution of this research project is the development and validation of a portfolio of simplified modeling approaches that will enable rapid feasibility and risk assessment for CO2 sequestration in deep saline formations.
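
    A miniature version of strategy (2) is sketched below, assuming a toy simulator in place of the compositional runs; the candidate-screening loop is a crude stand-in for a true maximin design optimizer, and the kernel choice is ours rather than the study's.

        import numpy as np
        from scipy.stats import qmc
        from scipy.spatial.distance import pdist
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import Matern

        # Crude maximin LHS: draw many candidate designs and keep the one
        # whose minimum pairwise point distance is largest.
        best, best_sep = None, -1.0
        for seed in range(50):
            cand = qmc.LatinHypercube(d=2, seed=seed).random(30)
            sep = pdist(cand).min()
            if sep > best_sep:
                best, best_sep = cand, sep

        def simulator(X):
            # Placeholder for a full-physics CO2 injection response.
            return np.sin(3.0 * X[:, 0]) + X[:, 1] ** 2

        # Kriging metamodel (Gaussian process with a Matern covariance).
        krig = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                        normalize_y=True).fit(best, simulator(best))
        print(krig.predict([[0.5, 0.5]]))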

  15. [Lateral hostilities among emergency and critical care nurses. Survey in five hospitals of the Tuscany Region].

    PubMed

    Bambi, Stefano; Becattini, Giovanni; Pronti, Fabio; Lumini, Enrico; Rasero, Laura

    2013-01-01

    Lateral hostilities among emergency and critical care nurses. Survey in five hospitals of the Tuscany Region. Introduction. Lateral hostilities (LHs) are a kind of workplace violence, defined as varieties of cruel, rude, antagonistic interactions between people at the same hierarchical level. Reported rates of LHs among nurses range from 5.7% to 65%, leading to reduced work motivation, psycho-physical disorders, and in some cases, dropout from the nursing profession. Objective. To quantify the LHs among nurses in the emergency departments (ED) and intensive care units (ICU) of 5 hospitals of Tuscany (Italy), and to show the impact on the quality of their psycho-physical and professional lives. Method. Exploratory-descriptive study, through a closed-ended questionnaire. Results. 360/444 nurses responded (81%); 294 (81.6%) were victims of LHs during the past 12 months. Gossiping, complaints shared with others without discussing them with the person concerned, and sarcastic comments were the most reported LHs. LHs occurred more in EDs than ICUs (respectively 90% and 77%; p=0.0038). No statistically significant differences were observed for gender, age, or years of experience. 17.7% of nurses asked to be moved from the ward, and 6.9% left it; 6.9% of respondents had considered leaving the nursing profession; 235 (65.2%) experienced at least one LHs-related disorder during the last year. The most reported symptoms were low morale, anxiety, and sleep disturbances. Conclusions. The incidence of LHs and related disorders is high in EDs and ICUs, lowering professional and psycho-physical quality of life.

  16. Equine ulnar fracture repair with locking compression plates can be associated with inadvertent penetration of the lateral cortex of the radius.

    PubMed

    Kuemmerle, Jan M; Kühn, Karolin; Bryner, Marco; Fürst, Anton E

    2013-10-01

    To evaluate whether the use of locking head screws (LHS) in the distal holes of a locking compression plate (LCP) applied to the caudal aspect of the ulna to treat equine ulnar fractures is associated with a risk of injury to the lateral cortex of the radius. Controlled laboratory study. Cadaveric equine forelimbs (n = 8 pairs). After transverse ulnar osteotomy, osteosynthesis was performed with a narrow 10-13 hole 4.5/5.0 LCP applied to the caudal aspect of each ulna. The distal 3 holes were filled with 4.5 mm cortex screws (CS) in 1 limb (group 1) and with 5.0 mm LHS contralaterally (group 2). CS were inserted at an angle deemed appropriate by the surgeon and LHS were inserted perpendicular to the plate. Implant position and injury to the lateral cortex of the radius were assessed by radiography, CT, and limb dissection. In group 1, injury of the lateral radius cortex did not occur. In group 2, 4 limbs and 6/24 LHS were associated with injury of the lateral radius cortex by penetration of a LHS. This difference was statistically significant. CS were inserted at a mean angle of 17.6° from the sagittal plane in a caudolateral-craniomedial direction. Use of LHS in the distal part of a LCP applied to the caudal aspect of the ulna is associated with a risk of inadvertent injury to the lateral cortex of the radius. © Copyright 2013 by The American College of Veterinary Surgeons.

  17. On the Psychometric Study of Human Life History Strategies.

    PubMed

    Richardson, George B; Sanning, Blair K; Lai, Mark H C; Copping, Lee T; Hardesty, Patrick H; Kruger, Daniel J

    2017-01-01

    This article attends to recent discussions of validity in psychometric research on human life history strategy (LHS), provides a constructive critique of the extant literature, and describes strategies for improving construct validity. To place the psychometric study of human LHS on more solid ground, our review indicates that researchers should (a) use approaches to psychometric modeling that are consistent with their philosophies of measurement, (b) confirm the dimensionality of life history indicators, and (c) establish measurement invariance for at least a subset of indicators. Because we see confirming the dimensionality of life history indicators as the next step toward placing the psychometrics of human LHS on more solid ground, we use nationally representative data and structural equation modeling to test the structure of middle adult life history indicators. We found statistically independent mating competition and Super-K dimensions, and the effects of parental harshness and childhood unpredictability on Super-K were consistent with past research. However, childhood socioeconomic status had a moderate positive effect on mating competition and no effect on Super-K, while unpredictability did not predict mating competition. We conclude that human LHS is more complex than previously suggested: there does not seem to be a single dimension of human LHS among Western adults, and the effects of environmental components seem to vary between mating competition and Super-K.

  18. Learned Helplessness and Sexual Risk Taking in Adolescent and Young Adult African American Females.

    PubMed

    Pittiglio, Laura

    2017-08-01

    Research involving adolescent and young adult African American (AA) females has demonstrated that they face uncontrollable obstacles which can interfere with the negotiation of safer sexual behaviors. If these obstacles are perceived as uncontrollable, then these females may be at risk for the development of Learned Helplessness (LH). As the LH model predicts, if these obstacles are believed not to be in their control, this belief may lead to deficits in motivational or cognitive decision-making, deficits that could certainly influence their sexual risk-taking behaviors. Therefore, the primary objective of this pilot study was to trial the Learned Helplessness Scale (LHS) to examine the perceptions of LH in this population. A convenience sample of 50 adolescent and young adult AA females between the ages of 16 and 21 was recruited from two clinics in Southeast Michigan. Scores on the LHS ranged from 20 to 57, with a mean score of 39.1 (standard deviation = 10.49). The wide range of scores in the sample demonstrates a continuum of LH among the participants in the study.

  19. Duration of hospital stay following orthognathic surgery at the jordan university hospital.

    PubMed

    Jarab, Fadi; Omar, Esam; Bhayat, Ahmed; Mansuri, Samir; Ahmed, Sami

    2012-09-01

    Major oral and maxillofacial surgery procedures have routinely been performed on an inpatient basis in order to manage both the recovery from anesthesia and any unpredictable morbidity that may be associated with the surgery. The use of inpatient beds is extremely expensive, and if the surgical procedures could be done in an outpatient setting, this would reduce the costs and the need for inpatient care. The aim was to determine the length of hospital stay (LHS) and the factors which influence the LHS following orthognathic surgery at the Jordan University Hospital over 5 years (2005-2009). This was a retrospective record review of patients who underwent orthognathic surgery at Jordan University Hospital between 2005 and 2009. The variables were recorded on a data capture form which was adapted and developed from previous studies. Descriptive and analytical statistical methods were used to correlate these variables to the LHS. Ninety-two patients were included in the study and 74% of them were females. The mean age was 23.7 years and the mean LHS was 4 days. The complexity of the procedure, length of operation time, intensive care unit (ICU) stay and year of operation were significantly and positively correlated with the LHS (P < 0.05). Patients' hospital stay was directly related to the complexity of the orthognathic procedure, the operation time, the time spent in ICU and the year in which the operation was done. There was a significant reduction in the LHS over the progressing years, and this could be due to an increase in the experience and knowledge of the operators and an improvement in the hospital facilities.

  20. Benchmark Transiting Brown Dwarf LHS 6343 C: Spitzer Secondary Eclipse Observations Yield Brightness Temperature and Mid-T Spectral Class

    NASA Astrophysics Data System (ADS)

    Montet, Benjamin T.; Johnson, John Asher; Fortney, Jonathan J.; Desert, Jean-Michel

    2016-05-01

    There are no field brown dwarf analogs with measured masses, radii, and luminosities, precluding our ability to connect the population of transiting brown dwarfs with measurable masses and radii and field brown dwarfs with measurable luminosities and atmospheric properties. LHS 6343 C, a weakly irradiated brown dwarf transiting one member of an M+M binary in the Kepler field, provides the first opportunity to probe the atmosphere of a non-inflated brown dwarf with a measured mass and radius. Here, we analyze four Spitzer observations of secondary eclipses of LHS 6343 C behind LHS 6343 A. Jointly fitting the eclipses with a Gaussian process noise model of the instrumental systematics, we measure eclipse depths of 1.06 ± 0.21 ppt at 3.6 μm and 2.09 ± 0.08 ppt at 4.5 μm, corresponding to brightness temperatures of 1026 ± 57 K and 1249 ± 36 K, respectively. We then apply brown dwarf evolutionary models to infer a bolometric luminosity log(L/L⊙) = -5.16 ± 0.04. Given the known physical properties of the brown dwarf and the two M dwarfs in the LHS 6343 system, these depths are consistent with models of a 1100 K T dwarf at an age of 5 Gyr and empirical observations of field T5-6 dwarfs with temperatures of 1070 ± 130 K. We investigate the possibility that the orbit of LHS 6343 C has been altered by the Kozai-Lidov mechanism and propose additional astrometric or Rossiter-McLaughlin measurements to probe the dynamical history of the system.

  1. Interactions between Kar2p and Its Nucleotide Exchange Factors Sil1p and Lhs1p Are Mechanistically Distinct*

    PubMed Central

    Hale, Sarah J.; Lovell, Simon C.; de Keyzer, Jeanine; Stirling, Colin J.

    2010-01-01

    Kar2p, an essential Hsp70 chaperone in the endoplasmic reticulum of Saccharomyces cerevisiae, facilitates the transport and folding of nascent polypeptides within the endoplasmic reticulum lumen. The chaperone activity of Kar2p is regulated by its intrinsic ATPase activity, which can be stimulated by two different nucleotide exchange factors, namely Sil1p and Lhs1p. Here, we demonstrate that the binding requirements for Lhs1p are complex, requiring both the nucleotide binding domain and the linker domain of Kar2p. In contrast, the IIB domain of Kar2p is sufficient for binding of Sil1p, and point mutations within IIB specifically blocked Sil1p-dependent activation while remaining competent for activation by Lhs1p. Taken together, these results demonstrate that the interactions between Kar2p and its two nucleotide exchange factors can be functionally resolved and are thus mechanistically distinct. PMID:20430899

  2. Detection of tectonometamorphic discontinuities within the Himalayan orogen: Structural and petrological constraints from the Rasuwa district, central Nepal Himalaya

    NASA Astrophysics Data System (ADS)

    Rapa, Giulia; Mosca, Pietro; Groppo, Chiara; Rolfo, Franco

    2018-06-01

    A detailed structural, lithological and petrological study of different transects in the Rasuwa district of central Nepal Himalaya allows the characterization of the tectonostratigraphic architecture of the area. It also facilitates constraining the P-T evolution of the different units within the Lesser (LHS) and Greater (GHS) Himalayan Sequences. Peak P-T conditions obtained for the studied metapelite samples using the pseudosection approach and the Average PT method highlight the existence of four different T/P ratio populations in different tectonometamorphic units: 80 ± 11 °C/kbar (LHS), 66 ± 7 °C/kbar (RTS), 73 ± 1 °C/kbar (Lower-GHS) and 101 ± 12 °C/kbar (Upper-GHS). Integration of structural and petrological data emphasizes the existence of three tectonometamorphic discontinuities bounding these units, characterized by top-to-the-south sense of shear: the Ramgarh Thrust, which separates the LHS (peak metamorphism at ∼600 °C, 7.5 kbar) from the overlying RTS (peak metamorphism at ∼635 °C, 10 kbar); the Main Central Thrust, which separates the RTS from the Lower-GHS (peak at 700-740 °C, 9.5-10.5 kbar with a prograde increase in both P and T in the kyanite stability field), and the Langtang Thrust, which juxtaposes the Upper-GHS (peak at 780-800 °C, 7.5-8.0 kbar with a nearly isobaric heating in the sillimanite stability field) onto the Lower-GHS. An increase in the intensity of deformation, with development of pervasive mylonitic fabrics and/or shear zones, is generally observed approaching the discontinuities from either side. Overall, the data and results presented in this paper demonstrate that petrological and structural analyses, combined, are reliable methods for identifying tectonometamorphic discontinuities in both the LHS and GHS. Geochronological data from the literature allow the interpretation of these discontinuities as in-sequence thrusts.

  3. Hypercube matrix computation task

    NASA Technical Reports Server (NTRS)

    Calalo, Ruel H.; Imbriale, William A.; Jacobi, Nathan; Liewer, Paulett C.; Lockhart, Thomas G.; Lyzenga, Gregory A.; Lyons, James R.; Manshadi, Farzin; Patterson, Jean E.

    1988-01-01

    A major objective of the Hypercube Matrix Computation effort at the Jet Propulsion Laboratory (JPL) is to investigate the applicability of a parallel computing architecture to the solution of large-scale electromagnetic scattering problems. Three scattering analysis codes are being implemented and assessed on a JPL/California Institute of Technology (Caltech) Mark 3 Hypercube. The codes, which utilize different underlying algorithms, give a means of evaluating the general applicability of this parallel architecture. The three analysis codes being implemented are a frequency domain method of moments code, a time domain finite difference code, and a frequency domain finite elements code. These analysis capabilities are being integrated into an electromagnetics interactive analysis workstation which can serve as a design tool for the construction of antennas and other radiating or scattering structures. The first two years of work on the Hypercube Matrix Computation effort are summarized, including both new developments and results as well as work previously reported in the Hypercube Matrix Computation Task: Final Report for 1986 to 1987 (JPL Publication 87-18).

  4. Optimization of High-Dimensional Functions through Hypercube Evaluation

    PubMed Central

    Abiyev, Rahib H.; Tunay, Mustafa

    2015-01-01

    A novel learning algorithm for solving global numerical optimization problems is proposed. The proposed learning algorithm is an intense stochastic search method based on the evaluation and optimization of a hypercube, and is called the hypercube optimization (HO) algorithm. The HO algorithm comprises an initialization and evaluation process, a displacement-shrink process, and a searching space process. The initialization and evaluation process initializes an initial solution and evaluates the solutions in a given hypercube. The displacement-shrink process determines displacement and evaluates objective functions using new points, and the search area process determines the next hypercube using certain rules and evaluates the new solutions. The algorithms for these processes are designed and presented in the paper. The designed HO algorithm is tested on specific benchmark functions. Simulations of the HO algorithm have been performed for the optimization of functions of 1000, 5000, or even 10,000 dimensions. The comparative simulation results with other approaches demonstrate that the proposed algorithm is a potential candidate for the optimization of both low- and high-dimensional functions. PMID:26339237
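
    The displacement-shrink idea can be caricatured as random search inside a hypercube that recenters on the best point found and contracts at each iteration. The sketch below is a simplified reading of the algorithm, not the authors' implementation; the shrink factor, sample count, and test function are assumptions.

        import numpy as np

        def hypercube_optimize(f, center, radius, n_points=100, n_iters=50,
                               shrink=0.9, seed=0):
            """Simplified hypercube search: sample, recenter on the best, shrink."""
            rng = np.random.default_rng(seed)
            best_x = np.asarray(center, dtype=float)
            best_f = f(best_x)
            for _ in range(n_iters):
                pts = best_x + radius * rng.uniform(-1.0, 1.0,
                                                    size=(n_points, best_x.size))
                vals = np.array([f(p) for p in pts])
                i = vals.argmin()
                if vals[i] < best_f:               # displacement step
                    best_x, best_f = pts[i], vals[i]
                radius *= shrink                   # shrink step
            return best_x, best_f

        sphere = lambda x: float(np.sum(x ** 2))
        _, fbest = hypercube_optimize(sphere, center=2.0 * np.ones(1000), radius=3.0)
        print(fbest)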

  5. A parallel simulated annealing algorithm for standard cell placement on a hypercube computer

    NASA Technical Reports Server (NTRS)

    Jones, Mark Howard

    1987-01-01

    A parallel version of a simulated annealing algorithm is presented which is targeted to run on a hypercube computer. A strategy for mapping the cells in a two dimensional area of a chip onto processors in an n-dimensional hypercube is proposed such that both small and large distance moves can be applied. Two types of moves are allowed: cell exchanges and cell displacements. The computation of the cost function in parallel among all the processors in the hypercube is described along with a distributed data structure that needs to be stored in the hypercube to support parallel cost evaluation. A novel tree broadcasting strategy is used extensively in the algorithm for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than the uniprocessor simulated annealing algorithms. An improved uniprocessor algorithm is proposed which is based on the improved results obtained from parallelization of the simulated annealing algorithm.
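
    One conventional way to realize such a mapping, so that neighboring regions of the chip land on directly linked hypercube nodes, is a binary-reflected Gray-code embedding. The sketch below is our illustration of that standard construction, not necessarily the paper's exact strategy.

        def gray(i: int) -> int:
            """Binary-reflected Gray code: consecutive values differ in one bit."""
            return i ^ (i >> 1)

        def node_for_cell(row: int, col: int, row_bits: int, col_bits: int) -> int:
            """Map a grid cell to a node of a (row_bits + col_bits)-cube.
            Grid neighbors map to node addresses differing in exactly one bit,
            i.e., to directly connected hypercube processors."""
            return (gray(row) << col_bits) | gray(col)

        # A 4 x 8 grid of chip regions embedded in a 5-dimensional hypercube.
        for r in range(4):
            print([node_for_cell(r, c, 2, 3) for c in range(8)])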

  6. Benchmarking hypercube hardware and software

    NASA Technical Reports Server (NTRS)

    Grunwald, Dirk C.; Reed, Daniel A.

    1986-01-01

    It was long a truism in computer systems design that balanced systems achieve the best performance. Message passing parallel processors are no different. To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes. The benchmark suite includes tests of both processor speed in the absence of internode communication and message transmission speed as a function of communication patterns.

  7. Toward a science of learning systems: a research agenda for the high-functioning Learning Health System.

    PubMed

    Friedman, Charles; Rubin, Joshua; Brown, Jeffrey; Buntin, Melinda; Corn, Milton; Etheredge, Lynn; Gunter, Carl; Musen, Mark; Platt, Richard; Stead, William; Sullivan, Kevin; Van Houweling, Douglas

    2015-01-01

    The capability to share data, and harness its potential to generate knowledge rapidly and inform decisions, can have transformative effects that improve health. The infrastructure to achieve this goal at scale--marrying technology, process, and policy--is commonly referred to as the Learning Health System (LHS). Achieving an LHS raises numerous scientific challenges. The National Science Foundation convened an invitational workshop to identify the fundamental scientific and engineering research challenges to achieving a national-scale LHS. The workshop was planned by a 12-member committee and ultimately engaged 45 prominent researchers spanning multiple disciplines over 2 days in Washington, DC on 11-12 April 2013. The workshop participants collectively identified 106 research questions organized around four system-level requirements that a high-functioning LHS must satisfy. The workshop participants also identified a new cross-disciplinary integrative science of cyber-social ecosystems that will be required to address these challenges. The intellectual merit and potential broad impacts of the innovations that will be driven by investments in an LHS are of great potential significance. The specific research questions that emerged from the workshop, alongside the potential for diverse communities to assemble to address them through a 'new science of learning systems', create an important agenda for informatics and related disciplines. © The Author 2014. Published by Oxford University Press on behalf of the American Medical Informatics Association.

  8. Toward a science of learning systems: a research agenda for the high-functioning Learning Health System

    PubMed Central

    Friedman, Charles; Rubin, Joshua; Brown, Jeffrey; Buntin, Melinda; Corn, Milton; Etheredge, Lynn; Gunter, Carl; Musen, Mark; Platt, Richard; Stead, William; Sullivan, Kevin; Van Houweling, Douglas

    2015-01-01

    Objective The capability to share data, and harness its potential to generate knowledge rapidly and inform decisions, can have transformative effects that improve health. The infrastructure to achieve this goal at scale—marrying technology, process, and policy—is commonly referred to as the Learning Health System (LHS). Achieving an LHS raises numerous scientific challenges. Materials and methods The National Science Foundation convened an invitational workshop to identify the fundamental scientific and engineering research challenges to achieving a national-scale LHS. The workshop was planned by a 12-member committee and ultimately engaged 45 prominent researchers spanning multiple disciplines over 2 days in Washington, DC on 11–12 April 2013. Results The workshop participants collectively identified 106 research questions organized around four system-level requirements that a high-functioning LHS must satisfy. The workshop participants also identified a new cross-disciplinary integrative science of cyber-social ecosystems that will be required to address these challenges. Conclusions The intellectual merit and potential broad impacts of the innovations that will be driven by investments in an LHS are of great potential significance. The specific research questions that emerged from the workshop, alongside the potential for diverse communities to assemble to address them through a ‘new science of learning systems’, create an important agenda for informatics and related disciplines. PMID:25342177

  9. Analysis of hepatitis C viral dynamics using Latin hypercube sampling

    NASA Astrophysics Data System (ADS)

    Pachpute, Gaurav; Chakrabarty, Siddhartha P.

    2012-12-01

    We consider a mathematical model comprising four coupled ordinary differential equations (ODEs) to study hepatitis C viral dynamics. The model includes the efficacies of a combination therapy of interferon and ribavirin. There are two main objectives of this paper. The first is to approximate the percentage of cases in which there is viral clearance in the absence of treatment, as well as the percentage of response to treatment for various efficacy levels. The other is to better understand and identify the parameters that play a key role in the decline of viral load and can be estimated in a clinical setting. A condition for the stability of the uninfected and the infected steady states is presented. A large number of sample points for the model parameters (which are physiologically feasible) are generated using Latin hypercube sampling. An analysis of the simulated values identifies that approximately 29.85% of cases result in clearance of the virus during the early phase of the infection. Results from the χ2 and Spearman's tests on the samples indicate a distinctly different distribution for certain parameters in the cases exhibiting viral clearance under the combination therapy.
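
    The sampling-and-classification procedure reduces to: draw LHS samples of the ODE parameters, integrate the system, and tally which runs clear the virus. The sketch below uses a generic target-cell-limited model as a stand-in for the paper's four-equation system; all ranges and thresholds are illustrative.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.stats import qmc

        def viral_dynamics(t, y, beta, delta, p, c):
            # Generic target-cell-limited model (not the paper's 4-ODE system).
            T, I, V = y
            return [10.0 - 0.01 * T - beta * T * V,
                    beta * T * V - delta * I,
                    p * I - c * V]

        sampler = qmc.LatinHypercube(d=4, seed=1)
        params = qmc.scale(sampler.random(n=500),
                           l_bounds=[1e-8, 0.1, 1.0, 2.0],
                           u_bounds=[1e-6, 1.0, 50.0, 10.0])

        cleared = 0
        for beta, delta, p, c in params:
            sol = solve_ivp(viral_dynamics, (0.0, 200.0), [1e6, 0.0, 1e3],
                            args=(beta, delta, p, c), rtol=1e-6)
            cleared += sol.y[2, -1] < 1.0   # viral load below threshold at day 200
        print(f"clearance in {100.0 * cleared / len(params):.1f}% of samples")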

  10. Uncertainty quantification of resonant ultrasound spectroscopy for material property and single crystal orientation estimation on a complex part

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Mayes, Alexander; Jauriqui, Leanne; Biedermann, Eric; Heffernan, Julieanne; Livings, Richard; Goodlet, Brent; Mazdiyasni, Siamack

    2018-04-01

    A case study is presented evaluating uncertainty in Resonance Ultrasound Spectroscopy (RUS) inversion for single crystal (SX) Ni-based superalloy Mar-M247 cylindrical dog-bone specimens. A number of surrogate models were developed with FEM model solutions, using different sampling schemes (regular grid, Monte Carlo sampling, Latin hypercube sampling) and model approaches, N-dimensional cubic spline interpolation and Kriging. Repeated studies were used to quantify the well-posedness of the inversion problem, and the uncertainty was assessed in material property and crystallographic orientation estimates given typical geometric dimension variability in aerospace components. Surrogate model quality was found to be an important factor in inversion results when the model closely represents the test data. One important discovery was that when the model matches the test data well, a Kriging surrogate model using unsorted Latin hypercube sampled data performed as well as the best results from an N-dimensional interpolation model using sorted data. However, both surrogate model quality and mode sorting were found to be less critical when inverting properties from either experimental data or simulated test cases with uncontrolled geometric variation.
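
    The two surrogate families compared above can be reproduced in miniature: a gridded cubic interpolator versus a Kriging model trained on scattered LHS points. The forward function below is a hypothetical stand-in for an FEM resonance solver, and the stiffness ranges are invented for illustration.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator
        from scipy.stats import qmc
        from sklearn.gaussian_process import GaussianProcessRegressor

        def fem_frequency(c11, c44):
            # Hypothetical resonance frequency vs. two elastic constants (GPa).
            return np.sqrt(c11) + 0.3 * np.sqrt(c44)

        # Surrogate 1: cubic interpolation on a regular grid of FEM solutions.
        c11g = np.linspace(200.0, 260.0, 15)
        c44g = np.linspace(90.0, 130.0, 15)
        F = fem_frequency(*np.meshgrid(c11g, c44g, indexing="ij"))
        spline = RegularGridInterpolator((c11g, c44g), F, method="cubic")

        # Surrogate 2: Kriging on a Latin hypercube sample of the same space.
        X = qmc.scale(qmc.LatinHypercube(d=2, seed=3).random(n=225),
                      l_bounds=[200.0, 90.0], u_bounds=[260.0, 130.0])
        krig = GaussianProcessRegressor(normalize_y=True).fit(
            X, fem_frequency(X[:, 0], X[:, 1]))

        pt = np.array([[231.0, 104.0]])
        print(spline(pt), krig.predict(pt))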

  11. FORMAL UNCERTAINTY ANALYSIS OF A LAGRANGIAN PHOTOCHEMICAL AIR POLLUTION MODEL. (R824792)

    EPA Science Inventory

    This study applied Monte Carlo analysis with Latin hypercube sampling to evaluate the effects of uncertainty in air parcel trajectory paths, emissions, rate constants, deposition affinities, mixing heights, and atmospheric stability on predictions from a vertically...

  12. Macular pigmentation complicating irritant contact dermatitis and viral warts in Laugier-Hunziker syndrome.

    PubMed

    Bhoyrul, B; Paulus, J

    2016-04-01

    Laugier-Hunziker syndrome (LHS) is a rare acquired disorder characterized by macular pigmentation of the lips and oral mucosa, with frequent longitudinal melanonychia. Involvement of other areas, such as the genitalia and fingers, has rarely been described. LHS is a benign condition with no known systemic manifestations. We report the case of a woman who developed melanotic macules on her fingers and elbow 16 years after the onset of pigmentation of her lips. This unusual feature of LHS in our patient was associated with irritant contact dermatitis and viral warts. Only two cases of an association with an inflammatory dermatosis have been reported previously in the literature. © 2015 British Association of Dermatologists.

  13. The Mark III Hypercube-Ensemble Computers

    NASA Technical Reports Server (NTRS)

    Peterson, John C.; Tuazon, Jesus O.; Lieberman, Don; Pniel, Moshe

    1988-01-01

    Mark III Hypercube concept applied in development of series of increasingly powerful computers. Processor of each node of Mark III Hypercube ensemble is specialized computer containing three subprocessors and shared main memory. Solves problem quickly by simultaneously processing part of problem at each such node and passing combined results to host computer. Disciplines benefitting from speed and memory capacity include astrophysics, geophysics, chemistry, weather, high-energy physics, applied mechanics, image processing, oil exploration, aircraft design, and microcircuit design.

  14. A parallel row-based algorithm with error control for standard-cell placement on a hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Sargent, Jeff Scott

    1988-01-01

    A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to controlling error in parallel cell-placement algorithms: Heuristic Cell-Coloring and Adaptive (Parallel Move) Sequence Control. Heuristic Cell-Coloring identifies sets of noninteracting cells that can be moved repeatedly, and in parallel, with no buildup of error in the placement cost. Adaptive Sequence Control allows multiple parallel cell moves to take place between global cell-position updates. This feedback mechanism is based on an error bound derived analytically from the traditional annealing move-acceptance profile. Placement results are presented for real industry circuits, and the performance of an implementation on the Intel iPSC/2 Hypercube is summarized. The runtime of this algorithm is 5 to 16 times faster than a previous program developed for the Hypercube, while producing placements of equivalent quality. An integrated place-and-route program for the Intel iPSC/2 Hypercube is currently being developed.
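
    Heuristic Cell-Coloring amounts to partitioning the cell-interaction graph into independent sets so that each color class can move in parallel without corrupting the cost function. A generic greedy-coloring sketch follows; it is our illustration of the idea, not the paper's specific heuristic, and the toy interaction graph is invented.

        from collections import defaultdict

        def greedy_color(cells, interacts):
            """Give each cell the smallest color unused by its interacting neighbors.
            Cells sharing a color form an independent set: they can all be moved
            in one parallel step with no buildup of placement-cost error."""
            color = {}
            for cell in cells:
                used = {color[n] for n in interacts[cell] if n in color}
                c = 0
                while c in used:
                    c += 1
                color[cell] = c
            return color

        # Toy interaction graph: edges join cells that affect each other's cost.
        interacts = defaultdict(set)
        for a, b in [("c0", "c1"), ("c1", "c2"), ("c0", "c3")]:
            interacts[a].add(b)
            interacts[b].add(a)
        print(greedy_color(["c0", "c1", "c2", "c3"], interacts))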

  15. Regional process redesign of lung cancer care: a learning health system pilot project.

    PubMed

    Fung-Kee-Fung, M; Maziak, D E; Pantarotto, J R; Smylie, J; Taylor, L; Timlin, T; Cacciotti, T; Villeneuve, P J; Dennie, C; Bornais, C; Madore, S; Aquino, J; Wheatley-Price, P; Ozer, R S; Stewart, D J

    2018-02-01

    The Ottawa Hospital (toh) defined delay to timely lung cancer care as a system design problem. Recognizing the patient need for an integrated journey and the need for dynamic alignment of providers, toh used a learning health system (lhs) vision to redesign regional diagnostic processes. A lhs is driven by feedback utilizing operational and clinical information to drive system optimization and innovation. An essential component of a lhs is a collaborative platform that provides connectivity across silos, organizations, and professions. To operationalize a lhs, we developed the Ottawa Health Transformation Model (ohtm) as a consensus approach that addresses process barriers, resistance to change, and conflicting priorities. A regional Community of Practice (cop) was established to engage stakeholders, and a dedicated transformation team supported process improvements and implementation. The project operationalized the lung cancer diagnostic pathway and optimized patient flow from referral to initiation of treatment. Twelve major processes in referral, review, diagnostics, assessment, triage, and consult were redesigned. The Ottawa Hospital now provides a diagnosis to 80% of referrals within the provincial target of 28 days. The median patient journey from referral to initial treatment decreased by 48% from 92 to 47 days. The initiative optimized regional integration from referral to initial treatment. Use of a lhs lens enabled the creation of a system that is standardized to best practice and open to ongoing innovation. Continued transformation initiatives across the continuum of care are needed to incorporate best practice and optimize delivery systems for regional populations.

  16. Microfluidics on liquid handling stations (μF-on-LHS): a new industry-compatible microfluidic platform

    NASA Astrophysics Data System (ADS)

    Kittelmann, Jörg; Radtke, Carsten P.; Waldbaur, Ansgar; Neumann, Christiane; Hubbuch, Jürgen; Rapp, Bastian E.

    2014-03-01

    Since its early days, microfluidics as a scientific discipline has been an interdisciplinary research field with a wide scope of potential applications. Besides tailored assays for point-of-care (PoC) diagnostics, microfluidics has been an important tool for large-scale screening of reagents and building blocks in organic chemistry, pharmaceutics and medical engineering. Furthermore, numerous potential marketable products have been described over the years. However, especially in industrial applications, microfluidics is often considered only an alternative technology for fluid handling, a field which is industrially dominated by large-scale numerically controlled fluid and liquid handling stations. Numerous noteworthy products have dominated this field in the last decade and have inhibited the widespread application of microfluidics technology. However, automated liquid handling stations and microfluidics do not have to be considered mutually exclusive approaches. We have recently introduced a hybrid fluidic platform combining an industrially established liquid handling station and a generic microfluidic interfacing module that allows probing a microfluidic system (such as an assay or a synthesis array) using the instrumentation provided by the liquid handling station. We term this technology "Microfluidics on Liquid Handling Stations (μF-on-LHS)", a classical "best of both worlds" approach that combines the highly evolved, automated and industry-proven LHS systems with any type of microfluidic assay. In this paper we show, to the best of our knowledge, the first droplet microfluidics application on an industrial LHS using the μF-on-LHS concept.

  17. Comprehensive comparison of the chemical and structural characterization of landfill leachate and leonardite humic fractions.

    PubMed

    Tahiri, Abdelghani; Richel, Aurore; Destain, Jacqueline; Druart, Philippe; Thonart, Philippe; Ongena, Marc

    2016-03-01

    Humic substances (HS) are complex and heterogeneous mixtures of organic compounds that occur everywhere in the environment. They represent most of the dissolved organic matter in soils, sediments (fossil), water, and landfills. The exact structure of HS macromolecules has not yet been determined because of their complexity and heterogeneity. Various descriptions of HS are used depending on the specific environments of origin and research interests. In order to improve the understanding of the structure of HS extracted from landfill leachate (LHS) and commercial HS from leonardite (HHS), this study sought to compare the composition and structural characterization of LHS and HHS using elemental composition, chromatographic (high-performance liquid chromatography (HPLC)), and spectroscopic techniques (UV-vis, FTIR, NMR, and MALDI-TOF). The results showed that LHS molecules have a lower molecular weight and a less aromatic structure than HHS molecules. The characteristics of the functional groups of LHS and HHS, however, were basically similar, although there were some differences in absorbance intensity. There were also fewer aliphatic and acidic functional groups and more aromatic and polyphenolic compounds in the humic acid (HA) fraction than in the fulvic acid (FA) and other molecules (OM) fractions of both origins. The differences between LHS and HHS might be due to the time course of humification. Combining the results obtained from these analytical techniques could improve our understanding of the structure of HS of different origins and thus enhance their potential use.

  18. Socioeconomic status, cognition, and hippocampal sclerosis.

    PubMed

    Baxendale, Sallie; Heaney, Dominic

    2011-01-01

    Poorer surgical outcomes in patients with low socioeconomic status have previously been reported, but the mechanisms underlying this pattern are unknown. Lower socioeconomic status may be a proxy marker for the limited economic opportunities associated with compromised cognitive function. The aim of this study was to examine the preoperative neuropsychological characteristics of patients with unilateral hippocampal sclerosis (HS) and their relationship to socioeconomic status. Two hundred ninety-two patients with medically intractable temporal lobe epilepsy and unilateral HS completed tests of memory and intellectual function prior to surgery. One hundred thirty-one had right HS (RHS), and 161 had left HS (LHS). The socioeconomic status of each participant was determined via the Index of Multiple Deprivation (IMD) associated with their postcode. The IMD was not associated with age at the time of assessment, age at onset of epilepsy, or duration of active epilepsy. The RHS and LHS groups did not differ on the IMD. The IMD was negatively correlated with all neuropsychological test scores in the LHS group, but was not significantly correlated with any of the neuropsychological measures in the RHS group. Regression analyses suggested that the IMD score explained 3% of the variance in the measures of intellect, but 8% of the variance in verbal learning in the LHS group. The IMD explained 1% or less of the variance in neuropsychological scores in the RHS group. Controlling for overall level of intellectual function, the IMD score explained a small but significant proportion of the variance in verbal learning in the LHS group and visual learning in the RHS group. Our findings suggest that patients living in an area with a high IMD enter surgery with greater focal deficits associated with their epilepsy, and more widespread cognitive deficits if they have LHS. Further work is needed to establish the direction of the relationship between low socioeconomic status and the neurocognitive sequelae of epilepsy. Copyright © 2010 Elsevier Inc. All rights reserved.

  19. Hyperswitch Network For Hypercube Computer

    NASA Technical Reports Server (NTRS)

    Chow, Edward; Madan, Herbert; Peterson, John

    1989-01-01

    Data-driven dynamic switching enables high speed data transfer. Proposed hyperswitch network based on mixed static and dynamic topologies. Routing header modified in response to congestion or faults encountered as path established. Static topology meets requirement if nodes have switching elements that perform necessary routing header revisions dynamically. Hypercube topology now being implemented with switching element in each computer node aimed at designing very-richly-interconnected multicomputer system. Interconnection network connects great number of small computer nodes, using fixed hypercube topology, characterized by point-to-point links between nodes.

  20. Rural GRATEFUL MED outreach: project results, impact, and future needs.

    PubMed Central

    Dorsch, J L; Landwirth, T K

    1993-01-01

    The Library of the Health Sciences-Peoria (LHS-Peoria), located at a regional site of the University of Illinois College of Medicine, conducted an eighteen-month GRATEFUL MED outreach project funded by the National Library of Medicine. The project was designed to enhance information services for health professionals at eight underserved rural hospitals in west central Illinois. One hundred rural health professionals, mainly nonphysicians, received GRATEFUL MED training at these hospitals; LHS delivered more than 350 documents to the trainees. In this paper, investigators describe the project and its goals and discuss results and their evaluation, from both individual and institutional perspectives. Outcome is examined in the context of future outreach plans, both at LHS and elsewhere. PMID:8251973

  1. Photometric transit search for planets around cool stars from the western Italian Alps: a pilot study

    NASA Astrophysics Data System (ADS)

    Giacobbe, P.; Damasso, M.; Sozzetti, A.; Toso, G.; Perdoncin, M.; Calcidese, P.; Bernagozzi, A.; Bertolini, E.; Lattanzi, M. G.; Smart, R. L.

    2012-08-01

    We present the results of a year-long photometric monitoring campaign of a sample of 23 nearby (d < 60 pc), bright (J < 12) dM stars carried out at the Astronomical Observatory of the Autonomous Region of the Aosta Valley, in the western Italian Alps. This programme represents a 'pilot study' for a long-term photometric transit search for planets around a large sample of nearby M dwarfs, due to start with an array of identical 40-cm class telescopes by the Spring of 2012. In this study, we set out to (i) demonstrate the sensitivity to <4 R⊕ transiting planets with periods of a few days around our programme stars, through a two-fold approach that combines a characterization of the statistical noise properties of our photometry with the determination of transit detection probabilities via simulations; and (ii) where possible, improve our knowledge of some astrophysical properties (e.g. activity, rotation) of our targets by combining spectroscopic information and our differential photometric measurements. We achieve a typical nightly root mean square (RMS) photometric precision of ˜5 mmag, with little or no dependence on the instrumentation used or on the details of the adopted methods for differential photometry. The presence of correlated (red) noise in our data degrades the precision by a factor of ˜1.3 with respect to a pure white noise regime. Based on a detailed stellar variability analysis (i) we detected no transit-like events (an expected result, given the sample size); (ii) we determined photometric rotation periods of ˜0.47 and ˜0.22 d for LHS 3445 and GJ 1167A, respectively; (iii) these values agree with the large projected rotational velocities (˜25 and ˜33 km s-1, respectively) inferred for both stars based on the analysis of archival spectra; (iv) the estimated inclinations of the stellar rotation axes for LHS 3445 and GJ 1167A are consistent with those derived using a simple spot model; and (v) short-term, low-amplitude flaring events were recorded for LHS 3445 and LHS 2686. Finally, based on simulations of transit signals of given period and amplitude injected in the actual (nightly reduced) photometric data for our sample, we derive a relationship between transit detection probability and phase coverage. We find that, using the Box-fitting Least Squares search algorithm, even when the phase coverage approaches 100 per cent, there is a limit to the detection probability of ≈90 per cent. Around programme stars with phase coverage > 50 per cent, we would have had >80 per cent chances of detecting planets with P < 1 d inducing fractional transit depths > 0.5 per cent, corresponding to minimum detectable radii in the range ˜1.0-2.2 R⊕. These findings are illustrative of our high readiness level ahead of the main survey start.
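
    The Box-fitting Least Squares search named above is available off the shelf in astropy. A minimal sketch follows, assuming synthetic photometry with an injected transit; the cadence, noise level, and signal parameters are invented, chosen only to mimic the survey's mmag-level regime.

        import numpy as np
        from astropy.timeseries import BoxLeastSquares

        rng = np.random.default_rng(0)
        t = np.sort(rng.uniform(0.0, 30.0, 2000))       # days of (gappy) photometry
        flux = 1.0 + 0.005 * rng.normal(size=t.size)    # ~5 mmag white noise
        period, depth, dur = 0.8, 0.005, 0.05           # injected signal (d, frac, d)
        flux[((t % period) / period) < (dur / period)] -= depth

        bls = BoxLeastSquares(t, flux)
        result = bls.autopower(dur)                     # periodogram over trial periods
        best = int(np.argmax(result.power))
        print(f"recovered period: {result.period[best]:.3f} d (injected {period} d), "
              f"depth {result.depth[best]:.4f}")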

  2. Hypercube matrix computation task

    NASA Technical Reports Server (NTRS)

    Calalo, R.; Imbriale, W.; Liewer, P.; Lyons, J.; Manshadi, F.; Patterson, J.

    1987-01-01

    The Hypercube Matrix Computation (Year 1986-1987) task investigated the applicability of a parallel computing architecture to the solution of large scale electromagnetic scattering problems. Two existing electromagnetic scattering codes were selected for conversion to the Mark III Hypercube concurrent computing environment. They were selected so that the underlying numerical algorithms utilized would be different, thereby providing a more thorough evaluation of the appropriateness of the parallel environment for these types of problems. The first code was a frequency domain method of moments solution, NEC-2, developed at Lawrence Livermore National Laboratory. The second code was a time domain finite difference solution of Maxwell's equations to solve for the scattered fields. Once the codes were implemented on the hypercube and verified to obtain correct solutions by comparing the results with those from sequential runs, several measures were used to evaluate the performance of the two codes. First, a comparison was made of the problem size possible on a 32-node hypercube configuration with 128 megabytes of memory with that available in a typical sequential user environment of 4 to 8 megabytes. Then, the performance of the codes was analyzed for the computational speedup attained by the parallel architecture.

  3. Einstein-Podolsky-Rosen steering: Its geometric quantification and witness

    NASA Astrophysics Data System (ADS)

    Ku, Huan-Yu; Chen, Shin-Liang; Budroni, Costantino; Miranowicz, Adam; Chen, Yueh-Nan; Nori, Franco

    2018-02-01

    We propose a measure of quantum steerability, namely, a convex steering monotone, based on the trace distance between a given assemblage and its corresponding closest assemblage admitting a local-hidden-state (LHS) model. We provide methods to estimate such a quantity, via lower and upper bounds, based on semidefinite programming. One of these upper bounds has a clear geometrical interpretation as a linear function of rescaled Euclidean distances in the Bloch sphere between the normalized quantum states of (i) a given assemblage and (ii) an LHS assemblage. For a qubit-qubit quantum state, these ideas also allow us to visualize various steerability properties of the state in the Bloch sphere via the so-called LHS surface. In particular, some steerability properties can be obtained by comparing such an LHS surface with a corresponding quantum steering ellipsoid. Thus, we propose a witness of steerability corresponding to the difference of the volumes enclosed by these two surfaces. This witness (which reveals the steerability of a quantum state) enables one to find an optimal measurement basis, which can then be used to determine the proposed steering monotone (which describes the steerability of an assemblage) optimized over all mutually unbiased bases.

  4. Uncertainty and sensitivity assessments of an agricultural-hydrological model (RZWQM2) using the GLUE method

    NASA Astrophysics Data System (ADS)

    Sun, Mei; Zhang, Xiaolin; Huo, Zailin; Feng, Shaoyuan; Huang, Guanhua; Mao, Xiaomin

    2016-03-01

    Quantitatively ascertaining and analyzing the effects of model uncertainty on model reliability is a focal point for agricultural-hydrological models, which carry large uncertainties in their inputs and processes. In this study, the generalized likelihood uncertainty estimation (GLUE) method with Latin hypercube sampling (LHS) was used to evaluate the uncertainty of the RZWQM-DSSAT (RZWQM2) model output responses and the sensitivity of 25 parameters related to soil properties, nutrient transport and crop genetics. To avoid the one-sided risk of model prediction caused by using a single calibration criterion, a combined likelihood (CL) function integrating information on water, nitrogen, and crop production was introduced in the GLUE analysis for the predictions of the following four model output responses: the total amount of water content (T-SWC) and of nitrate nitrogen (T-NIT) within the 1-m soil profile, and the seed yields of waxy maize (Y-Maize) and winter wheat (Y-Wheat). In the process of evaluating RZWQM2, measurements and meteorological data were obtained from a field experiment involving a winter wheat and waxy maize crop rotation system conducted from 2003 to 2004 in southern Beijing. The calibration and validation results indicated that the RZWQM2 model can be used to simulate crop growth and water-nitrogen migration and transformation in a wheat-maize crop rotation planting system. The results of the uncertainty analysis using the GLUE method showed that T-NIT was sensitive to parameters related to the nitrification coefficient, maize growth characteristics in the seedling period, the wheat vernalization period, and the wheat photoperiod. Parameters for soil saturated hydraulic conductivity, nitrogen nitrification and denitrification, and urea hydrolysis played an important role in the crop yield components. The prediction errors for RZWQM2 outputs with the CL function were relatively lower and more uniform than with likelihood functions composed of an individual calibration criterion. This new and successful application of the GLUE method for determining the uncertainty and sensitivity of the RZWQM2 could provide a reference for the optimization of model parameters with different emphases according to research interests.
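
    Mechanically, GLUE reduces to weighting LHS parameter draws by a likelihood and keeping only the "behavioural" runs. The sketch below uses a toy exponential-decay model and an informal inverse-error-variance likelihood; nothing here reflects the actual RZWQM2 configuration or the study's CL function.

        import numpy as np
        from scipy.stats import qmc

        def toy_model(theta, t):
            # Stand-in for RZWQM2: two parameters, one output time series.
            return theta[0] * np.exp(-theta[1] * t)

        t_obs = np.linspace(0.0, 10.0, 20)
        obs = (toy_model([2.0, 0.3], t_obs)
               + np.random.default_rng(0).normal(0.0, 0.05, t_obs.size))

        theta = qmc.scale(qmc.LatinHypercube(d=2, seed=0).random(n=5000),
                          l_bounds=[0.5, 0.05], u_bounds=[5.0, 1.0])

        # Informal likelihood: inverse sum of squared errors; keep the top 5%
        # of runs as "behavioural" and weight them by normalized likelihood.
        sse = np.array([np.sum((toy_model(p, t_obs) - obs) ** 2) for p in theta])
        L = 1.0 / sse
        keep = L > np.quantile(L, 0.95)
        w = L[keep] / L[keep].sum()
        print("weighted posterior mean:", (theta[keep] * w[:, None]).sum(axis=0))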

  5. Hypnotizability, Hypnosis and Prepulse Inhibition of the Startle Reflex in Healthy Women: An ERP Analysis

    PubMed Central

    De Pascalis, Vilfredo; Russo, Emanuela

    2013-01-01

    A working model of the neurophysiology of hypnosis suggests that highly hypnotizable individuals (HHs) have more effective frontal attentional systems implementing control, monitoring performance, and inhibiting unwanted stimuli from conscious awareness, than low hypnotizable individuals (LHs). Recent studies, using prepulse inhibition (PPI) of the auditory startle reflex (ASR), suggest that HHs, in the waking condition, may show reduced sensory gating although they may selectively attend and disattend different stimuli. Using a within-subject design and a strict subject selection procedure, in waking and hypnosis conditions we tested whether HHs compared to LHs showed a significantly lower inhibition of the ASR and startle-related brain activity in both time and intracerebral source localization domains. HHs, as compared to LH participants, exhibited (a) longer latency of the eyeblink startle reflex, (b) reduced N100 responses to startle stimuli, and (c) higher PPI of eyeblink startle and of the P200 and P300 waves. Hypnosis yielded smaller N100 waves to startle stimuli and greater PPI of this component than in the waking condition. sLORETA analysis revealed that, for the N100 (107 msec) elicited during startle trials, HHs had a smaller activation in the left parietal lobe (BA2/40) than LHs. Auditory pulses of pulse-with-prepulse trials in HHs yielded less activity of the P300 (280 msec) wave than in LHs, in the cingulate and posterior cingulate gyrus (BA23/31). The present results, on the whole, are in the opposite direction to PPI findings on hypnotizability previously reported in the literature. These results provide support to the neuropsychophysiological model that HHs have more effective sensory integration and gating (or filtering) of irrelevant stimuli than LHs. PMID:24278150

  6. Hypnotizability, hypnosis and prepulse inhibition of the startle reflex in healthy women: an ERP analysis.

    PubMed

    De Pascalis, Vilfredo; Russo, Emanuela

    2013-01-01

    A working model of the neurophysiology of hypnosis suggests that highly hypnotizable individuals (HHs) have more effective frontal attentional systems implementing control, monitoring performance, and inhibiting unwanted stimuli from conscious awareness, than low hypnotizable individuals (LHs). Recent studies, using prepulse inhibition (PPI) of the auditory startle reflex (ASR), suggest that HHs, in the waking condition, may show reduced sensory gating although they may selectively attend and disattend different stimuli. Using a within-subject design and a strict subject selection procedure, in waking and hypnosis conditions we tested whether HHs compared to LHs showed a significantly lower inhibition of the ASR and startle-related brain activity in both time and intracerebral source localization domains. HHs, as compared to LH participants, exhibited (a) longer latency of the eyeblink startle reflex, (b) reduced N100 responses to startle stimuli, and (c) higher PPI of eyeblink startle and of the P200 and P300 waves. Hypnosis yielded smaller N100 waves to startle stimuli and greater PPI of this component than in the waking condition. sLORETA analysis revealed that, for the N100 (107 msec) elicited during startle trials, HHs had a smaller activation in the left parietal lobe (BA2/40) than LHs. Auditory pulses of pulse-with-prepulse trials in HHs yielded less activity of the P300 (280 msec) wave than in LHs, in the cingulate and posterior cingulate gyrus (BA23/31). The present results, on the whole, are in the opposite direction to PPI findings on hypnotizability previously reported in the literature. These results provide support to the neuropsychophysiological model that HHs have more effective sensory integration and gating (or filtering) of irrelevant stimuli than LHs.

  7. Treatment of Laugier-Hunziker syndrome with the Q-switched alexandrite laser in 22 Chinese patients.

    PubMed

    Zuo, Ya-Gang; Ma, Dong-Lai; Jin, Hong-Zhong; Liu, Yue-Hua; Wang, Hong-Wei; Sun, Qiu-Ning

    2010-03-01

    Laugier-Hunziker syndrome (LHS), a rare, acquired pigmentary disorder of the lips, oral mucosa, and fingers, is known to be an entirely benign disease with no systemic manifestations. In the past, the pigmentation has been treated efficiently in a few patients with the Q-switched neodymium:yttrium-aluminum-garnet (Nd:YAG) laser and the Q-switched alexandrite laser (QSAL). In order to evaluate the efficacy and safety of QSAL in Chinese patients with LHS, we treated 22 patients with QSAL over the past 5 years. Treatments were delivered on a bimonthly or trimonthly basis until the abnormal pigmentation totally disappeared. Patients were evaluated at each visit for evidence of dyspigmentation, scarring, or other untoward effects from the laser treatment. Our 22 subjects consisted of 18 females and 4 males with a mean age of 42.4 years. After only one session of laser treatment, the clearing on the lips was as follows: 18 (81.8%) excellent, 2 (9.1%) good, 1 (4.5%) fair and 1 (4.5%) poor. Eighteen patients (81.8%) with LHS, who had achieved excellent clearing after only one session of laser treatment, did not receive further treatment. Among the remaining four patients, three (13.6%) achieved complete results after three laser treatments. Only one patient required six sessions to achieve complete clearance. No scarring was noted after any of the treatments. The appearance of pigmentation on mucous membranes in a middle-aged patient without a significant family history of skin disorders should prompt consideration of the possible diagnosis of LHS. Our study has also demonstrated QSAL to be highly effective and safe in the treatment of LHS.

  8. NFE2L2 pathway polymorphisms and lung function decline in chronic obstructive pulmonary disease

    PubMed Central

    Malhotra, Deepti; Boezen, H. Marike; Siedlinski, Mateusz; Postma, Dirkje S.; Wong, Vivien; Akhabir, Loubna; He, Jian-Qing; Connett, John E.; Anthonisen, Nicholas R.; Paré, Peter D.; Biswal, Shyam

    2012-01-01

    An oxidant-antioxidant imbalance in the lung contributes to the development of chronic obstructive pulmonary disease (COPD), which is caused by a complex interaction of genetic and environmental risk factors. Nuclear erythroid 2-related factor 2 (NFE2L2 or NRF2) is a critical molecule in the lung's defense mechanism against oxidants. We investigated whether polymorphisms in the NFE2L2 pathway affected the rate of decline of lung function in smokers from the Lung Health Study (LHS) (n = 547) and in a replication set, the Vlagtwedde-Vlaardingen cohort (n = 533). We selected polymorphisms in NFE2L2, in genes that positively or negatively regulate NFE2L2 transcriptional activity, and in genes that are regulated by NFE2L2. Polymorphisms in 11 genes were significantly associated with the rate of lung function decline in the LHS. One of these polymorphisms, rs11085735 in the KEAP1 gene, was previously shown to be associated with the level of lung function in the Vlagtwedde-Vlaardingen cohort, but not with decline of lung function. Of the 23 associated polymorphisms in the LHS, only rs634534 in the FOSL1 gene showed a significant association with the rate of lung function decline in the Vlagtwedde-Vlaardingen cohort, but the direction of the association was not consistent with that in the LHS. In summary, despite finding several nominally significant polymorphisms in the LHS, none of these associations were replicated in the Vlagtwedde-Vlaardingen cohort, indicating a lack of effect of polymorphisms in the NFE2L2 pathway on the rate of decline of lung function. PMID:22693272

  9. The mechanisms underlying the muscle metaboreflex modulation of sweating and cutaneous blood flow in passively heated humans.

    PubMed

    Haqani, Baies; Fujii, Naoto; Kondo, Narihiko; Kenny, Glen P

    2017-02-01

    Metaboreceptors can modulate cutaneous blood flow and sweating during heat stress, but the mechanisms remain unknown. Fourteen participants (31 ± 13 years) performed a 1-min bout of isometric handgrip (IHG) exercise at 60% of their maximal voluntary contraction followed by a 3-min occlusion (OCC), each separated by 10 min, under low (LHS, to activate sweating without changes in core temperature) and high (HHS, whole-body heating to a core temperature increase of 1.0°C) heat stress conditions. Cutaneous vascular conductance (CVC) and sweat rate were measured continuously at four forearm skin sites perfused with 1) lactated Ringer's solution (Control), 2) 10 mmol L-NAME [inhibits nitric oxide synthase (NOS)], 3) 10 mmol Ketorolac [inhibits cyclooxygenase (COX)], or 4) 4 mmol theophylline (THEO; inhibits adenosine receptors). Relative to pre-IHG levels with Control, NOS inhibition attenuated the metaboreceptor-mediated increase in sweating under both LHS and HHS (P ≤ 0.05), albeit the attenuation was greater under LHS (P ≤ 0.05). In addition, a reduction from baseline was observed with THEO during OCC under LHS (P ≤ 0.05), but not HHS (both P > 0.05). In contrast, CVC was lower than Control with L-NAME during OCC in HHS (P ≤ 0.05), but not LHS (P > 0.05). We show that metaboreceptor activation modulates CVC via the stimulation of NOS and adenosine receptors, whereas NOS, but not COX or adenosine receptors, contributes to sweating at all levels of heating. © 2017 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of The Physiological Society and the American Physiological Society.

  10. Dynamic locking screw improves fixation strength in osteoporotic bone: an in vitro study on an artificial bone model.

    PubMed

    Pohlemann, Tim; Gueorguiev, Boyko; Agarwal, Yash; Wahl, Dieter; Sprecher, Christoph; Schwieger, Karsten; Lenz, Mark

    2015-04-01

    The novel dynamic locking screw (DLS) was developed to improve bone healing with locked-plate osteosynthesis by equalising construct stiffness at both cortices. Due to a theoretical damping effect, this modulated stiffness could be beneficial for fracture fixation in osteoporotic bone. Therefore, the mechanical behaviour of the DLS at the screw-bone interface was investigated in an artificial osteoporotic bone model and compared with that of conventional locking screws (LHS). Osteoporotic surrogate bones were plated with either a DLS or an LHS construct consisting of two screws and were cyclically axially loaded (8,500 cycles, amplitude 420 N, increase 2 mN/cycle). Construct stiffness, relative movement, axial screw migration, proximal (P) and distal (D) screw pullout force, and loosening at the bone interface were determined and statistically evaluated. DLS constructs exhibited a higher screw pullout force of P 85 N [standard deviation (SD) 21] and D 93 N (SD 12) compared with LHS (P 62 N, SD 28, p = 0.1; D 57 N, SD 25, p < 0.01), and a significantly lower axial migration over the loading cycles (p = 0.01). DLS constructs also showed significantly lower axial construct stiffness (403 N/mm, SD 21, p < 0.01) and significantly higher relative movement (1.1 mm, SD 0.05, p < 0.01) compared with LHS (529 N/mm, SD 27; 0.8 mm, SD 0.04). Based on the model data, the DLS principle might also improve in vivo plate fixation in osteoporotic bone, providing enhanced residual holding strength and reducing screw cutout. The influence of pin-sleeve abutment still needs to be investigated.

  11. The impact of high-heeled shoes on ankle complex during walking in young women-In vivo kinematic study based on 3D to 2D registration technique.

    PubMed

    Wang, Chen; Geng, Xiang; Wang, Shaobai; Ma, Xin; Wang, Xu; Huang, Jiazhang; Zhang, Chao; Chen, Li; Yang, Junsheng; Li, Jiabei; Wang, Kan

    2016-06-01

    To explore in vivo kinematic changes in the ankle complex when wearing low- and high-heeled shoes (LHS and HHS, respectively), twelve young women were tested unilaterally. Three-dimensional models of the tibia, talus, and calcaneus were first created from CT scans. The subjects walked at a self-controlled speed in barefoot, LHS (4 cm), and HHS (10 cm) conditions. A fluoroscopy system captured lateral fluoroscopic images of the ankle complex. Images at seven key positions in the stance phase were selected, and 3D to 2D bone model registrations were performed to determine the joint positions. The mean 6-degree-of-freedom (DOF) ranges of motion (ROM), joint positions, and angular displacements of the ankle complex during gait were then obtained. For the talocrural joint, the rotational ROMs in either the LHS or the HHS condition displayed no significant difference from those in the barefoot condition. For the subtalar joint, all rotational ROMs in the HHS condition, and internal/external rotation in the LHS condition, decreased significantly compared with the barefoot condition. The talocrural joint was positioned significantly more plantarflexed, inverted, internally rotated, and posteriorly seated in all seven poses in the HHS condition than in the barefoot condition. HHS mainly affected the rotational motion of the ankle complex during walking: the talocrural joint position was abnormal, and the subtalar joint ROM decreased during gait. Only a few kinematic changes occurred in the LHS condition relative to the barefoot condition. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Factors Associated with a Prolonged Length of Hospital Stay in Patients with Diabetic Foot: A Single-Center Retrospective Study.

    PubMed

    Choi, Sang Kyu; Kim, Cheol Keun; Jo, Dong In; Lee, Myung Chul; Kim, Jee Nam; Choi, Hyun Gon; Shin, Dong Hyeok; Kim, Soon Heum

    2017-11-01

    We conducted this study to identify factors that may prolong the length of the hospital stay (LHS) in patients with diabetic foot (DF) in a single-institution setting. In this single-center retrospective study, we evaluated a total of 164 patients with DF, and conducted an intergroup comparison of their baseline demographic and clinical characteristics, including sex, age, duration of diabetes, smoking status, body mass index, underlying comorbidities (e.g., hypertension or diabetic nephropathy), wound characteristics, type of surgery, the total medical cost, white blood cell (WBC) count, C-reactive protein (CRP) levels, erythrocyte sedimentation rate, and albumin, protein, glycated hemoglobin, and 7-day mean blood glucose (BG) levels. Pearson correlation analysis showed that an LHS of >5 weeks had a significant positive correlation with the severity of the wound (r=0.647), WBC count (r=0.571), CRP levels (r=0.390), DN (r=0.020), and 7-day mean BG levels (r=0.120) (P<0.05). In multiple regression analysis, an LHS of >5 weeks had a significant positive correlation with the severity of the wound (odds ratio [OR]=3.297; 95% confidence interval [CI], 1.324-10.483; P=0.020), WBC count (OR=1.423; 95% CI, 0.046-0.356; P=0.000), CRP levels (OR=1.079; 95% CI, 1.015-1.147; P=0.014), albumin levels (OR=0.263; 95% CI, 0.113-3.673; P=0.007), and 7-day mean BG levels (OR=1.018; 95% CI, 1.001-1.035; P=0.020). Surgeons should consider the factors associated with a prolonged LHS in the early management of patients with DF. Moreover, this should also be accompanied by a multidisciplinary approach to reducing the LHS.

  13. NFE2L2 pathway polymorphisms and lung function decline in chronic obstructive pulmonary disease.

    PubMed

    Sandford, Andrew J; Malhotra, Deepti; Boezen, H Marike; Siedlinski, Mateusz; Postma, Dirkje S; Wong, Vivien; Akhabir, Loubna; He, Jian-Qing; Connett, John E; Anthonisen, Nicholas R; Paré, Peter D; Biswal, Shyam

    2012-08-01

    An oxidant-antioxidant imbalance in the lung contributes to the development of chronic obstructive pulmonary disease (COPD), which is caused by a complex interaction of genetic and environmental risk factors. Nuclear erythroid 2-related factor 2 (NFE2L2 or NRF2) is a critical molecule in the lung's defense mechanism against oxidants. We investigated whether polymorphisms in the NFE2L2 pathway affected the rate of decline of lung function in smokers from the Lung Health Study (LHS) (n = 547) and in a replication set, the Vlagtwedde-Vlaardingen cohort (n = 533). We selected polymorphisms in NFE2L2, in genes that positively or negatively regulate NFE2L2 transcriptional activity, and in genes that are regulated by NFE2L2. Polymorphisms in 11 genes were significantly associated with the rate of lung function decline in the LHS. One of these polymorphisms, rs11085735 in the KEAP1 gene, was previously shown to be associated with the level of lung function in the Vlagtwedde-Vlaardingen cohort, but not with decline of lung function. Of the 23 associated polymorphisms in the LHS, only rs634534 in the FOSL1 gene showed a significant association with the rate of lung function decline in the Vlagtwedde-Vlaardingen cohort, but the direction of the association was not consistent with that in the LHS. In summary, despite finding several nominally significant polymorphisms in the LHS, none of these associations were replicated in the Vlagtwedde-Vlaardingen cohort, indicating a lack of effect of polymorphisms in the NFE2L2 pathway on the rate of decline of lung function.

  14. Optimal cube-connected cube multiprocessors

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He; Wu, Jie

    1993-01-01

    Many CFD (computational fluid dynamics) and other scientific applications can be partitioned into subproblems. In general, however, the partitioned subproblems are very large; they demand high-performance computing power themselves, and the solutions of the subproblems have to be combined at each time step. The cube-connected cube (CCCube) architecture is studied. The CCCube architecture is an extended hypercube structure with each node represented as a cube. It requires fewer physical links between nodes than the hypercube, yet provides the same communication support as the hypercube on many applications. The reduced physical links can be used to enhance the bandwidth of the remaining links and, therefore, the overall performance. The concept of optimal CCCubes, which are the CCCubes with a minimum number of links under a given total number of nodes, and the method to obtain them, are proposed. The superiority of optimal CCCubes over standard hypercubes is also shown in terms of link usage in the embedding of a binomial tree. A useful computation structure based on a semi-binomial tree for divide-and-conquer parallel algorithms is identified, and it is shown that this structure can be implemented in optimal CCCubes without performance degradation compared with regular hypercubes. The results presented should provide a useful approach to the design of scientific parallel computers.
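
    The abstract does not give the exact CCCube construction or its link count, so the sketch below only illustrates the flavor of the link-count argument. It assumes each position of an m-dimensional outer hypercube is replaced by a 3-dimensional cube module and that each outer edge becomes a single physical link between modules; the module size k and both function names are illustrative, not the paper's definitions.

```python
# Illustrative link-count comparison (assumed construction, not the paper's):
# a flat n-dimensional binary hypercube with 2**n nodes has n * 2**(n-1) links.

def hypercube_links(n: int) -> int:
    """Number of links in an n-dimensional binary hypercube (2**n nodes)."""
    return n * 2 ** (n - 1)

def cccube_links(m: int, k: int = 3) -> int:
    """Links in an assumed CCCube: 2**m modules, each a k-dim cube of 2**k nodes,
    with one physical link per edge of the outer m-dimensional hypercube."""
    intra = 2 ** m * hypercube_links(k)   # links inside every cube module
    inter = m * 2 ** (m - 1)              # one link per outer-hypercube edge
    return intra + inter

if __name__ == "__main__":
    m, k = 5, 3                        # 2**(m+k) = 256 nodes in total
    flat = hypercube_links(m + k)      # flat hypercube with the same node count
    print(f"flat hypercube: {flat} links, assumed CCCube: {cccube_links(m, k)} links")
```

    For these illustrative parameters the assumed CCCube uses 464 links against 1024 for a flat 256-node hypercube, consistent with the abstract's claim that fewer physical links are required.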

  15. Using Big Data Analytics to Advance Precision Radiation Oncology.

    PubMed

    McNutt, Todd R; Benedict, Stanley H; Low, Daniel A; Moore, Kevin; Shpitser, Ilya; Jiang, Wei; Lakshminarayanan, Pranav; Cheng, Zhi; Han, Peijin; Hui, Xuan; Nakatsugawa, Minoru; Lee, Junghoon; Moore, Joseph A; Robertson, Scott P; Shah, Veeraj; Taylor, Russ; Quon, Harry; Wong, John; DeWeese, Theodore

    2018-06-01

    The role of big clinical data analytics as a primary component of precision medicine is discussed, identifying where these emerging tools fit in the spectrum of genomics and radiomics research. A learning health system (LHS) is conceptualized that uses clinically acquired data with machine learning to advance the initiatives of precision medicine. The LHS is comprehensive and can be used for clinical decision support, discovery, and hypothesis derivation. These developing uses can positively impact the ultimate management and therapeutic course for patients. The conceptual model for each use of clinical data, however, is different, and an overview of the implications is discussed. With advancements in technologies and culture to improve the efficiency, accuracy, and breadth of measurements of the patient condition, the concept of an LHS may be realized in precision radiation therapy. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. Design lateral heterostructure of monolayer ZrS2 and HfS2 from first principles calculations

    NASA Astrophysics Data System (ADS)

    Yuan, Junhui; Yu, Niannian; Wang, Jiafu; Xue, Kan-Hao; Miao, Xiangshui

    2018-04-01

    The successful fabrication of two-dimensional lateral heterostructures (LHSs) has opened up unprecedented opportunities in materials science and device physics. It is therefore highly desirable to search for more suitable materials for such heterostructures in next-generation devices. Here, we investigate a novel lateral heterostructure composed of monolayer ZrS2 and HfS2 based on density functional theory. Phonon dispersion and ab initio molecular dynamics analyses indicate good kinetic and thermodynamic stability. Remarkably, we find that these lateral heterostructures exhibit an indirect-to-direct bandgap transition, in contrast to the intrinsic indirect bandgap of ZrS2 and HfS2. The type-II band alignment and the chemical bonding across the interline are also revealed, and tensile strain proves to be an efficient way to modulate the band structure. Finally, we discuss three other stable lateral heterostructures: the (ZrSe2)2(HfSe2)2, (ZrS2)2(ZrSe2)2 and (HfS2)2(HfSe2)2 LHSs. Overall, the lateral heterostructures of monolayer ZrS2 and HfS2 have excellent electrical properties and may find applications in future electronic devices.

  17. Characterizing the Cool KOIs. VII. Refined Physical Properties of the Transiting Brown Dwarf LHS 6343 C

    NASA Astrophysics Data System (ADS)

    Montet, Benjamin T.; Johnson, John Asher; Muirhead, Philip S.; Villar, Ashley; Vassallo, Corinne; Baranec, Christoph; Law, Nicholas M.; Riddle, Reed; Marcy, Geoffrey W.; Howard, Andrew W.; Isaacson, Howard

    2015-02-01

    We present an updated analysis of LHS 6343, a triple system in the Kepler field consisting of a brown dwarf transiting one member of a widely separated M+M binary. By analyzing the full Kepler data set and 34 Keck/High Resolution Echelle Spectrometer radial velocity observations, we measure both the observed transit depth and the Doppler semiamplitude to 0.5% precision. With Robo-AO and Palomar/PHARO adaptive optics imaging as well as TripleSpec spectroscopy, we measure a model-dependent mass for LHS 6343 C of 62.1 ± 1.2 M_Jup and a radius of 0.783 ± 0.011 R_Jup. We detect the secondary eclipse in the Kepler data at 3.5σ, measuring e cos ω = 0.0228 ± 0.0008. We also derive a method to measure the mass and radius of a star and its transiting companion directly, without any direct reliance on stellar models. The mass and radius of both objects depend only on the orbital period, stellar density, reduced semimajor axis, Doppler semiamplitude, eccentricity, and inclination, together with the knowledge that the primary star falls on the main sequence. With this method, we calculate the mass and radius of LHS 6343 C to a precision of 3% and 2%, respectively.
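
    Two textbook relations underlie this kind of direct measurement: the stellar density follows from Kepler's third law applied to the transit-derived scaled semimajor axis, and the companion mass follows from the Doppler semiamplitude. The sketch below uses illustrative values, not the paper's fitted parameters; the fixed-point loop accounts for the companion's contribution to the total mass.

```python
# A sketch of two standard relations behind a model-independent mass/radius
# measurement; all numerical values below are illustrative assumptions.
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def stellar_density(P: float, a_over_rstar: float) -> float:
    """rho_* = 3*pi/(G*P**2) * (a/R_*)**3, neglecting the companion's mass."""
    return 3.0 * math.pi / (G * P ** 2) * a_over_rstar ** 3

def companion_mass(K: float, P: float, m_star: float, e: float, inc: float) -> float:
    """Solve K = (2*pi*G/P)**(1/3) * m_c*sin(i) / (m_star+m_c)**(2/3) / sqrt(1-e^2)
    for m_c by fixed-point iteration (m_c appears in the total mass)."""
    m_c = 0.0
    for _ in range(50):
        m_c = (K * math.sqrt(1 - e ** 2) * (m_star + m_c) ** (2.0 / 3.0)
               / (math.sin(inc) * (2 * math.pi * G / P) ** (1.0 / 3.0)))
    return m_c

if __name__ == "__main__":
    P = 12.71 * 86400.0                          # illustrative period [s]
    print(stellar_density(P, 43.5), "kg/m^3")    # illustrative a/R_*
    m_c = companion_mass(K=9600.0, P=P, m_star=0.37 * 1.989e30,
                         e=0.03, inc=math.radians(90.0))
    print(m_c / 1.898e27, "M_Jup")
```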

  18. Pre-Himalayan tectono-magmatic imprints in the Darjeeling-Sikkim Himalaya (DSH) constrained by 40Ar/39Ar dating of muscovite

    NASA Astrophysics Data System (ADS)

    Acharyya, Subhrangsu K.; Ghosh, Subhajit; Mandal, Nibir; Bose, Santanu; Pande, Kanchan

    2017-09-01

    The Lower Lesser Himalayan Sequence (L-LHS) in the Darjeeling-Sikkim Himalaya (DSH) displays intensely deformed, low-grade meta-sedimentary rocks, frequently intervened by granite intrusives of varied scales. The principal motivation of the present study is to constrain the timing of this granitic event. Using 40Ar/39Ar geochronology, we dated muscovite from pegmatites emplaced along the earliest fabric in the low-grade Daling phyllite and obtained an ∼1850 Ma Ar-Ar muscovite cooling age, broadly coeval with the crystallization ages of the Lingtse granite protolith (e.g., 1800-1850 Ma U-Pb zircon ages) reported from the L-LHS. We present field observations showing the imprints (tectonic fabrics) of multiple ductile deformation episodes in the LHS terrain. The earliest penetrative fabric, axial planar to N-S trending reclined folds, suggests a regional tectonic event in the DSH prior to the active phase of the Indo-Asia collision. Based on the age of the granitic bodies and their structural correlation with the earliest fabric, we propose the L-LHS as a distinct convergent tectono-magmatic belt delineating the northern margin of the Indian craton in the framework of the ∼1850 Ma Columbia supercontinent assembly.

  19. High-speed mid-infrared hyperspectral imaging using quantum cascade lasers

    NASA Astrophysics Data System (ADS)

    Kelley, David B.; Goyal, Anish K.; Zhu, Ninghui; Wood, Derek A.; Myers, Travis R.; Kotidis, Petros; Murphy, Cara; Georgan, Chelsea; Raz, Gil; Maulini, Richard; Müller, Antoine

    2017-05-01

    We report on a standoff chemical detection system using widely tunable external-cavity quantum cascade lasers (ECQCLs) to illuminate target surfaces in the mid-infrared (λ = 7.4 - 10.5 μm). Hyperspectral images (hypercubes) are acquired by synchronously operating the ECQCLs with a LN2-cooled HgCdTe camera. The use of rapidly tunable lasers and a high-frame-rate camera enables the capture of hypercubes with 128 x 128 pixels and >100 wavelengths in <0.1 s. Furthermore, raster scanning of the laser illumination allows imaging of a 100-cm2 area at 5-m standoff. Raw hypercubes are post-processed to generate a hypercube that represents the surface reflectance relative to that of a diffuse reflectance standard. Results are shown for liquids (e.g., silicone oil) and solid particles (e.g., caffeine, acetaminophen) on a variety of surfaces (e.g., aluminum, plastic, glass). Signature spectra are obtained for particulate loadings of RDX on glass of <1 μg/cm2.

  20. Performance of a parallel code for the Euler equations on hypercube computers

    NASA Technical Reports Server (NTRS)

    Barszcz, Eric; Chan, Tony F.; Jesperson, Dennis C.; Tuminaro, Raymond S.

    1990-01-01

    The performance of hypercubes was evaluated on a computational fluid dynamics problem, along with the parallel-environment issues that must be addressed, such as algorithm changes, implementation choices, programming effort, and programming environment. The evaluation focuses on a widely used fluid dynamics code, FLO52, which solves the two-dimensional steady Euler equations describing flow around an airfoil. The code development experience is described, including interacting with the operating system, utilizing the message-passing communication system, and the code modifications necessary to increase parallel efficiency. Results from two hypercube parallel computers (a 16-node iPSC/2 and a 512-node NCUBE/ten) are discussed and compared. In addition, a mathematical model of the execution time was developed as a function of several machine and algorithm parameters. This model accurately predicts the actual run times obtained and is used to explore the performance of the code in interesting but physically realizable regions of the parameter space. Based on this model, predictions about future hypercubes are made.
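
    The abstract does not reproduce the execution-time model itself, so the following is only a generic compute-plus-communicate sketch of that kind of model; the functional form, parameter names, and constants are assumptions for illustration, not the model fitted to FLO52.

```python
# A minimal sketch of an execution-time model T(P) = compute + communicate;
# all parameters here are illustrative assumptions.
import math

def run_time(P: int, N: float, t_flop: float, t_start: float, t_word: float) -> float:
    """Predicted time on P nodes for a problem of N grid points.

    The compute term scales as N/P; the communication term assumes log2(P)
    exchanges of boundary data, each with a startup cost plus a per-word cost
    proportional to the subdomain boundary length sqrt(N/P).
    """
    compute = t_flop * N / P
    comm = math.log2(P) * (t_start + t_word * math.sqrt(N / P))
    return compute + comm

if __name__ == "__main__":
    N = 256 * 256  # grid points (illustrative)
    base = run_time(1, N, t_flop=1e-6, t_start=5e-4, t_word=1e-6)
    for P in (1, 16, 512):
        t = run_time(P, N, t_flop=1e-6, t_start=5e-4, t_word=1e-6)
        print(f"P={P:4d}  predicted time={t:.4f} s  speedup={base / t:.1f}")
```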

  1. Serum PARC/CCL-18 concentrations and health outcomes in chronic obstructive pulmonary disease.

    PubMed

    Sin, Don D; Miller, Bruce E; Duvoix, Annelyse; Man, S F Paul; Zhang, Xuekui; Silverman, Edwin K; Connett, John E; Anthonisen, Nicholas A; Wise, Robert A; Tashkin, Donald; Celli, Bartolome R; Edwards, Lisa D; Locantore, Nicholas; Macnee, William; Tal-Singer, Ruth; Lomas, David A

    2011-05-01

    There are no accepted blood-based biomarkers in chronic obstructive pulmonary disease (COPD). Pulmonary and activation-regulated chemokine (PARC/CCL-18) is a lung-predominant inflammatory protein found in serum. To determine whether PARC/CCL-18 levels are elevated and modifiable in COPD, and to determine their relationship to the clinical end points of hospitalization and mortality, PARC/CCL-18 was measured in serum samples from individuals who participated in the ECLIPSE (Evaluation of COPD Longitudinally to Identify Predictive Surrogate Endpoints) and LHS (Lung Health Study) studies and in a prednisolone intervention study. Serum PARC/CCL-18 levels were higher in subjects with COPD than in smokers or lifetime nonsmokers without COPD (105 vs. 81 vs. 80 ng/ml, respectively; P < 0.0001). Elevated PARC/CCL-18 levels were associated with increased risk of cardiovascular hospitalization or mortality in the LHS cohort and with total mortality in the ECLIPSE cohort. Serum PARC/CCL-18 levels are elevated in COPD and track clinical outcomes. PARC/CCL-18, a lung-predominant chemokine, could be a useful blood biomarker in COPD.

  2. Incorporating Language Structure in a Communicative Task: An Analysis of the Language Component of a Communicative Task in the LINC Home Study Program

    ERIC Educational Resources Information Center

    Lenchuk, Iryna

    2014-01-01

    The purpose of this article is to analyze a task included in the LINC Home Study (LHS) program. LHS is a federally funded distance education program offered to newcomers to Canada who are unable to attend regular LINC classes. A task, in which a language structure (a gerund) is chosen and analyzed, was selected from one instructional module of LHS…

  3. A magnetic model for low/hard state of black hole binaries

    NASA Astrophysics Data System (ADS)

    Ye, Yong-Chun; Wang, Ding-Xiong; Huang, Chang-Yin; Cao, Xiao-Feng

    2016-03-01

    A magnetic model for the low/hard state (LHS) of two black hole X-ray binaries (BHXBs), H1743-322 and GX 339-4, is proposed based on the transport of magnetic field from a companion into an accretion disk around a black hole (BH). This model consists of a truncated thin disk with an inner advection-dominated accretion flow (ADAF). The spectral profiles of the sources are fitted in agreement with the data observed at four different dates corresponding to the rising phase of the LHS. In addition, the association of the LHS with a quasi-steady jet is modeled based on the transport of magnetic field, where the Blandford-Znajek (BZ) and Blandford-Payne (BP) processes are invoked to drive the jets from the BH and the inner ADAF. It turns out that the steep radio/X-ray correlations observed in H1743-322 and GX 339-4 can be interpreted with our model.

  4. Experimental demonstration of the optical multi-mesh hypercube: scaleable interconnection network for multiprocessors and multicomputers.

    PubMed

    Louri, A; Furlonge, S; Neocleous, C

    1996-12-10

    A prototype of a novel topology for scaleable optical interconnection networks, called the optical multi-mesh hypercube (OMMH), is experimentally demonstrated at data rates as high as 150 Mbit/s (2^7 - 1 nonreturn-to-zero pseudo-random data pattern) at a bit error rate of 10^-13 per link, using commercially available devices. OMMH is a scaleable network architecture [Appl. Opt. 33, 7558 (1994); J. Lightwave Technol. 12, 704 (1994)] that combines the positive features of the hypercube (small diameter, connectivity, symmetry, simple routing, and fault tolerance) and the mesh (constant node degree and size scaleability). The optical implementation is divided into two levels: high-density local connections for the hypercube modules, and high-bit-rate, low-density, long connections for the mesh links connecting the hypercube modules. Free-space imaging systems utilizing vertical-cavity surface-emitting laser (VCSEL) arrays, lenslet arrays, space-invariant holographic techniques, and photodiode arrays are demonstrated for the local connections. Optobus fiber interconnects from Motorola are used for the long-distance connections. The OMMH was optimized to operate at the data rate of Motorola's Optobus (10-bit-wide, VCSEL-based bidirectional data interconnects at 150 Mbit/s). Difficulties encountered included the varying fan-out efficiencies of the different orders of the hologram, the misalignment sensitivity of the free-space links, the low power (1 mW) of the individual VCSELs, and noise.

  5. Associations of IL6 polymorphisms with lung function decline and COPD

    PubMed Central

    He, Jian-Qing; Foreman, Marilyn G.; Shumansky, Karey; Zhang, Xuekui; Akhabir, Loubna; Sin, Don D; Man, S F Paul; DeMeo, Dawn L.; Litonjua, Augusto A.; Silverman, Edwin K.; Connett, John E; Anthonisen, Nicholas R; Wise, Robert A; Paré, Peter D; Sandford, Andrew J

    2010-01-01

    Background Interleukin-6 (IL6) is a pleiotropic pro-inflammatory and immunomodulatory cytokine that likely plays an important role in the pathogenesis of COPD. There is a functional single nucleotide polymorphism (SNP), −174G/C, in the promoter region of IL6. We hypothesized that IL6 SNPs influence susceptibility to impaired lung function and COPD in smokers. Methods Seven and five SNPs in IL6 were genotyped in two nested case-control samples derived from the Lung Health Study (LHS), based on the phenotypes of rate of decline of forced expiratory volume in one second (FEV1) over 5 years and baseline FEV1 at the beginning of the LHS. Serum IL6 concentrations were measured for all subjects. A partially overlapping panel of 9 IL6 SNPs was genotyped in 389 COPD cases from the National Emphysema Treatment Trial (NETT) and 420 controls from the Normative Aging Study (NAS). Results In the LHS, three IL6 SNPs were associated with FEV1 decline (0.023 ≤ P ≤ 0.041 in additive models). Among them, the IL6_−174C allele was associated with rapid decline of lung function. The association was more significant in a genotype-based analysis (P = 0.006). In the NETT-NAS study, IL6_−174G/C and four other IL6 SNPs, all of which are in linkage disequilibrium with IL6_−174G/C, were associated with susceptibility to COPD (0.01 ≤ P ≤ 0.04 in additive genetic models). Conclusion Our results suggest that the IL6_−174G/C SNP is associated with rapid decline of FEV1 and susceptibility to COPD in smokers. PMID:19359268

  6. Oligo-Miocene Monazite Ages in the Lesser Himalaya Sequence, Arunachal Pradesh, India; Geological Content of Age Variations

    NASA Astrophysics Data System (ADS)

    Clarke, G. L.; Bhowmik, S. K.; Ireland, T. R.; Aitchison, J. C.; Chapman, S. L.; Kent, L.

    2016-12-01

    A telescoped and inverted greenschist- to upper-amphibolite-facies sequence in the Siyom Valley of eastern Arunachal Pradesh is tectonically overlain by an upright (grade decreasing upward) granulite- to lower-amphibolite-facies sequence. Such grade relationships would normally attribute the boundary to a Main Central Thrust (MCT) structure and predict a change from the underlying Lesser Himalaya Sequence (LHS) to Greater Himalaya Sequence rocks across the boundary. However, all pelitic and psammitic samples have similar detrital zircon age spectra, involving c. 2500, 1750-1500, 1200 and 1000 Ma Gondwanan populations correlated with the LHS. Isograds are broadly parallel to a penetrative NW-dipping S2 foliation developed contemporaneously with the inversion. Garnet growth in garnet-, staurolite- and kyanite-zone schists beneath the thrust commenced at P>8 kbar and T≈550°C, before syn- to post-S2 heating of staurolite- and kyanite-zone rocks to T≈640°C at P≈8.5 kbar, most probably at c. 18.5 Ma. Kyanite-rutile-garnet migmatite immediately above the thrust records peak conditions of P≈10 kbar and T≈750°C and c. 21.5 Ma monazite ages. Complexity in the c. 21-1000 Ma monazite ages in overlying amphibolite-facies schists reflects the patchy recrystallization of detrital grains, with intra-grain complexity depending on whole-rock composition, metamorphic grade and evolution. Slip on a SE-propagating thrust was likely contemporaneous with early Miocene metamorphism, based on the distribution of structures, metamorphic textures, and the overlap of age relationships. It is inferred to have initially controlled the uplift of granulite to mid-crustal levels between 22 and 19 Ma, with thermal relaxation within a disrupted LHS metamorphic profile inducing a post-S2 thermal peak in lower-grade footwall rocks.

  7. Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-04-01

    The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from the analysis include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
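
    For readers who want to reproduce this style of scatterplot screening, the sketch below runs the same sequence of increasingly complex tests on one sampled input and one output with SciPy. The synthetic data, the binning scheme, and the use of Levene's test as the variability check (the paper lists variances and interquartile ranges) are illustrative choices, not the authors' exact procedures.

```python
# A compact sketch of the five scatterplot tests applied to one sampled
# input x and one model output y; the data are synthetic and illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.uniform(size=200)                    # one Latin-hypercube-style input
y = 2.0 * x + 0.3 * rng.normal(size=200)     # model output with a linear trend

# (1) linear relationship: Pearson correlation
r, p_r = stats.pearsonr(x, y)
# (2) monotonic relationship: Spearman rank correlation
rho, p_rho = stats.spearmanr(x, y)
# (3) trend in central tendency: Kruskal-Wallis across bins of x
bins = np.array_split(np.argsort(x), 3)
h, p_h = stats.kruskal(*(y[b] for b in bins))
# (4) trend in variability: Levene test across the same bins (one common choice)
w, p_w = stats.levene(*(y[b] for b in bins))
# (5) deviation from randomness: chi-square on a 2-D grid of (x, y) counts
counts, _, _ = np.histogram2d(x, y, bins=3)
chi2, p_chi, dof, _ = stats.chi2_contingency(counts)

print(f"Pearson p={p_r:.3g}, Spearman p={p_rho:.3g}, KW p={p_h:.3g}, "
      f"Levene p={p_w:.3g}, chi2 p={p_chi:.3g}")
```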

  8. A Search for Exomoons and TTVs from LHS 1140b, a nearby super-Earth orbiting in the habitable-zone of an M dwarf

    NASA Astrophysics Data System (ADS)

    Dittmann, Jason; Charbonneau, David; Irwin, Jonathan; Agol, Eric; Kipping, David; Newton, Elisabeth; Berta-Thompson, Zachory; Haywood, Raphaelle; Winters, Jennifer; Ballard, Sarah

    2017-06-01

    Exoplanets that transit nearby small stars present the best opportunity for future atmospheric studies with the James Webb Space Telescope and the ground-based ELTs currently under construction. The MEarth Project has discovered a rocky planet with a period of 24.73 days residing in the habitable zone of the nearby inactive star LHS 1140. This planet will be the subject of GTO observations by JWST, and additional objects in the system would also be tantalizing targets for future study. Owing to its large planetary mass and orbital separation from its star, LHS 1140b is unique among the planets known to transit nearby M dwarfs in its capability to host a large moon. We propose to survey LHS 1140b for signs of exomoons and to search for transit timing variations that may indicate the presence of additional companions. The long orbital period of nearly 25 days, the 12-hour duration of the transit of the Hill sphere, and the small amplitude of the expected signal preclude pursuing this from the ground and make Spitzer uniquely capable of undertaking this study. If successful, we may discover additional planets via TTVs, for which we may conduct future searches for transits and atmospheric spectroscopy with JWST, and possibly provide the first evidence for exomoons outside the Solar System.

  9. Dermoscopic findings and histological correlation of the acral volar pigmented maculae in Laugier-Hunziker syndrome.

    PubMed

    Sendagorta, Elena; Feito, Marta; Ramírez, Paloma; Gonzalez-Beato, María; Saida, Toshiaki; Pizarro, Angel

    2010-11-01

    Laugier-Hunziker syndrome (LHS) is an acquired, benign, macular hyperpigmentation of the lips and oral mucosa, often associated with pigmentation of the nails. Volar acral maculae on the palms and fingertips of patients affected by LHS are a typical feature of this rare entity. Dermoscopic examination of these maculae has been described in a previous report, in which the authors found a parallel-furrow pattern. We describe two cases in which a parallel-ridge pattern (PRP) was found on dermoscopic examination of the pigmented acral lesions. Histological examination showed increased melanin in basal keratinocytes, most prominent in those located at the crista intermedia profunda, that is, in the epidermal rete ridges underlying the surface ridges. In our cases, the dermoscopic features of the pigmented maculae of LHS differed from those previously described. In addition, the histological features of these lesions are described for the first time, showing an excellent correlation with dermoscopy. The reported cases prove that although the PRP is very specific to melanoma, it is also possible to find it in benign lesions. Therefore, we must be familiar with the differential diagnosis of the PRP and take into consideration the clinical context in which we find it. Further studies are needed to increase our knowledge of the histological and dermoscopic features of acral pigmented maculae in LHS. © 2010 Japanese Dermatological Association.

  10. Requirements analysis for a hardware, discrete-event, simulation engine accelerator

    NASA Astrophysics Data System (ADS)

    Taylor, Paul J., Jr.

    1991-12-01

    An analysis of a general Discrete Event Simulation (DES), executing on the distributed architecture of an eight-node Intel iPSC/2 hypercube, was performed. The most time-consuming portions of the general DES algorithm were determined to be the functions associated with the message passing of required simulation data between processing nodes of the hypercube architecture. A behavioral description of a general DES hardware accelerator, using the IEEE standard VHSIC Hardware Description Language (VHDL), is presented. The behavioral description specifies the operational requirements for a DES coprocessor to augment the hypercube's execution of DES simulations. The DES coprocessor design implements the functions necessary to perform distributed discrete event simulations using a conservative time synchronization protocol.
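
    As a rough illustration of the conservative synchronization idea mentioned above (in the Chandy-Misra style, where a logical process may only execute events that are provably safe given lower bounds on future message timestamps), here is a toy sketch; the names and the single-machine framing are illustrative, not the thesis's VHDL design.

```python
# A toy sketch of conservative event processing: a logical process tracks the
# lowest timestamp its input channels could still deliver and executes local
# events only up to that bound plus lookahead. Illustrative, not the design.
import heapq

def conservative_step(pending, channel_clocks, lookahead):
    """Pop and return all local events that are provably safe to execute.

    pending        -- heap of (timestamp, event) local events
    channel_clocks -- last timestamp seen on each input channel
    lookahead      -- minimum delay any neighbour adds to a message
    """
    safe_until = min(channel_clocks) + lookahead
    executed = []
    while pending and pending[0][0] <= safe_until:
        executed.append(heapq.heappop(pending))
    return executed

if __name__ == "__main__":
    events = [(1.0, "arrive"), (2.5, "depart"), (7.0, "arrive")]
    heapq.heapify(events)
    done = conservative_step(events, channel_clocks=[3.0, 4.5], lookahead=1.0)
    print(done)   # events up to t = 3.0 + 1.0 are safe: 'arrive' and 'depart'
```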

  11. Concurrent Image Processing Executive (CIPE)

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.

    1988-01-01

    The design and implementation of a Concurrent Image Processing Executive (CIPE), intended to become the support system software for a prototype high-performance science analysis workstation, are discussed. The target machine for this software is a JPL/Caltech Mark IIIfp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3 or Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local-memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: (1) user interface, (2) host-resident executive, (3) hypercube-resident executive, and (4) application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method that distributes, redistributes, and tracks data set information was implemented.

  12. Multiphase complete exchange on Paragon, SP2 and CS-2

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.

    1995-01-01

    The overhead of interprocessor communication is a major factor limiting the performance of parallel computer systems. The complete exchange is the most severe communication pattern in that it requires each processor to send a distinct message to every other processor. This pattern is at the heart of many important parallel applications. On hypercubes, multiphase complete exchange has been developed and shown to provide optimal performance over varying message sizes. Most commercial multicomputer systems do not have a hypercube interconnect; however, they use special-purpose hardware and dedicated communication processors to achieve very high-performance communication, and they can be made to emulate the hypercube quite well. Multiphase complete exchange has been implemented on three contemporary parallel architectures: the Intel Paragon, IBM SP2 and Meiko CS-2. The essential features of these machines are described and their basic interprocessor communication overheads are discussed. The performance of multiphase complete exchange is evaluated on each machine. It is shown that the theoretical ideas developed for hypercubes are also applicable in practice to these machines and that multiphase complete exchange can lead to major savings in execution time over traditional solutions.
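
    The pairwise schedule underlying hypercube complete exchange can be made concrete in a few lines: in phase i, every node exchanges with the node whose address differs in bit i. The sketch below shows only this schedule, not the multiphase optimization itself, which groups phases to trade message startups against transmission volume; the function name is illustrative.

```python
# Enumerate the pairwise exchange schedule of a d-dimensional hypercube:
# in phase i each node pairs with the node whose address differs in bit i.

def exchange_schedule(dim: int):
    """Yield, for each phase, the list of (node, partner) pairs."""
    n = 2 ** dim
    for i in range(dim):
        # keep each pair once by listing only the lower-numbered endpoint first
        yield i, [(node, node ^ (1 << i)) for node in range(n)
                  if node < node ^ (1 << i)]

if __name__ == "__main__":
    for phase, pairs in exchange_schedule(3):   # an 8-node (3-D) hypercube
        print(f"phase {phase}: {pairs}")
```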

  13. Characteristics of patients in a ward of Academic Internal Medicine: implications for medical care, training programmes and research.

    PubMed

    Becchi, Maria Angela; Pescetelli, Michele; Caiti, Omar; Carulli, Nicola

    2010-06-01

    To describe the characteristics of "delayed discharge patients" and the factors associated with delayed discharges, we performed a 12-month observational study of patients classified as "delayed discharge patients" admitted to an Academic Internal Medicine ward. We assessed demographic variables; the number and severity of diseases using the Geriatric Index of Comorbidity (GIC); and the cognitive, affective and functional status using, respectively, the Mini Mental State Examination, the Geriatric Depression Scale and the Barthel Index. We assessed the total length of stay (T-LHS), the total inappropriate length of stay (T-ILHS), the median length of stay (M-LHS) and the median inappropriate length of stay (M-ILHS), and evaluated the factors associated with delayed discharge. "Delayed discharge patients" were 11.9% of all patients. Their mean age was 81.9 years, 74.0% were in the IV class of the GIC, and 33.5% were at the same time totally dependent and affected by severe or non-assessable cognitive impairment. The patients accumulated 2584 T-LHS days, of which 1058 (40.9%) were inappropriate (T-ILHS). Their M-LHS was 15 days, and their M-ILHS was 5 days. In general, the greater the LHS, the greater the ILHS (Spearman's rho = 0.68, P < 0.001). In a multivariate analysis, only the absence of formal aids before hospitalisation was independently associated with delayed discharge (F = 4.39, P = 0.038). The majority of the delays (69%) resulted from the difficulty of finding beds in long-term hospital wards, but the longest M-ILHS (9 days) was found in patients waiting for the Geriatric Evaluation Unit. The profile of the patients and the pattern of hospital utilisation suggest a need to reorient the health care system and to develop appropriate resources for the academic functions of education, research and patient care.

  14. Kinetic Study of the Aroxyl-Radical-Scavenging Activity of Five Fatty Acid Esters and Six Carotenoids in Toluene Solution: Structure-Activity Relationship for the Hydrogen Abstraction Reaction.

    PubMed

    Mukai, Kazuo; Yoshimoto, Maya; Ishikura, Masaharu; Nagaoka, Shin-Ichi

    2017-08-17

    A kinetic study of the reaction between an aroxyl radical (ArO•) and fatty acid esters (LHs 1-5: ethyl stearate 1, ethyl oleate 2, ethyl linoleate 3, ethyl linolenate 4, and ethyl arachidonate 5) has been undertaken. The second-order rate constants (k_s) for the reaction of ArO• with LHs 1-5 in toluene at 25.0 °C have been determined spectrophotometrically. The k_s values obtained increased in the order LH 1 < 2 < 3 < 4 < 5, that is, with increasing number of double bonds in LHs 1-5. The k_s value for LH 5 was 2.93 × 10^-3 M^-1 s^-1. From this result, it has been clarified that the reaction of ArO• with LHs 1-5 proceeds by an allylic hydrogen abstraction reaction. A similar kinetic study was performed for the reaction of ArO• with six carotenoids (Car-Hs 1-6: astaxanthin 1, β-carotene 2, lycopene 3, capsanthin 4, zeaxanthin 5, and lutein 6). The k_s values obtained increased in the order Car-H 1 < 2 < 3 < 4 < 5 < 6. The k_s value for Car-H 6 was 8.4 × 10^-4 M^-1 s^-1. The k_s values obtained for Car-Hs 1-6 are of the same order as the values for LHs 1-5. Detailed analyses of the k_s values indicated that this reaction is also explained by an allylic hydrogen abstraction reaction. Furthermore, the structure-activity relationship for the reaction is discussed, taking into account the density functional theory calculations reported by Martinez and Barbosa.

  15. Factors Associated with a Prolonged Length of Hospital Stay in Patients with Diabetic Foot: A Single-Center Retrospective Study

    PubMed Central

    Choi, Sang Kyu; Kim, Cheol Keun; Jo, Dong In; Lee, Myung Chul; Kim, Jee Nam; Choi, Hyun Gon; Shin, Dong Hyeok; Kim, Soon Heum

    2017-01-01

    Background We conducted this study to identify factors that may prolong the length of the hospital stay (LHS) in patients with diabetic foot (DF) in a single-institution setting. Methods In this single-center retrospective study, we evaluated a total of 164 patients with DF, and conducted an intergroup comparison of their baseline demographic and clinical characteristics, including sex, age, duration of diabetes, smoking status, body mass index, underlying comorbidities (e.g., hypertension or diabetic nephropathy), wound characteristics, type of surgery, the total medical cost, white blood cell (WBC) count, C-reactive protein (CRP) levels, erythrocyte sedimentation rate, and albumin, protein, glycated hemoglobin, and 7-day mean blood glucose (BG) levels. Results Pearson correlation analysis showed that an LHS of >5 weeks had a significant positive correlation with the severity of the wound (r=0.647), WBC count (r=0.571), CRP levels (r=0.390), DN (r=0.020), and 7-day mean BG levels (r=0.120) (P<0.05). In multiple regression analysis, an LHS of >5 weeks had a significant positive correlation with the severity of the wound (odds ratio [OR]=3.297; 95% confidence interval [CI], 1.324–10.483; P=0.020), WBC count (OR=1.423; 95% CI, 0.046–0.356; P=0.000), CRP levels (OR=1.079; 95% CI, 1.015–1.147; P=0.014), albumin levels (OR=0.263; 95% CI, 0.113–3.673; P=0.007), and 7-day mean BG levels (OR=1.018; 95% CI, 1.001–1.035; P=0.020). Conclusions Surgeons should consider the factors associated with a prolonged LHS in the early management of patients with DF. Moreover, this should also be accompanied by a multidisciplinary approach to reducing the LHS. PMID:29121708

  16. Parallel spatial direct numerical simulations on the Intel iPSC/860 hypercube

    NASA Technical Reports Server (NTRS)

    Joslin, Ronald D.; Zubair, Mohammad

    1993-01-01

    The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube are documented. The direct numerical simulation approach is used to compute spatially evolving disturbances associated with laminar-to-turbulent transition in boundary-layer flows. The feasibility of using the PSDNS on the hypercube to perform transition studies is examined. The results indicate that the direct numerical simulation approach can be parallelized effectively on a distributed-memory parallel machine. As the number of processors increases, nearly ideal linear speedups are achieved with nonoptimized routines, whereas slower-than-linear speedups are achieved with optimized (machine-dependent library) routines. The slower-than-linear speedup arises because the Fast Fourier Transform (FFT) routine dominates the computational cost and itself exhibits less-than-ideal speedup. With the machine-dependent routines, however, the total computational cost decreases by a factor of 4 to 5 compared with standard FORTRAN routines. The computational cost increases linearly with spanwise, wall-normal, and streamwise grid refinements. The hypercube with 32 processors was estimated to require approximately twice the amount of single-processor Cray supercomputer time to complete a comparable simulation; however, it is estimated that a subgrid-scale model, which reduces the required number of grid points and turns the computation into a large-eddy simulation (PSLES), would reduce the computational cost and memory requirements by a factor of 10 over the PSDNS. Such a PSLES implementation would enable transition simulations on the hypercube at a reasonable computational cost.

  17. Latin hypercube sampling and geostatistical modeling of spatial uncertainty in a spatially explicit forest landscape model simulation

    Treesearch

    Chonggang Xu; Hong S. He; Yuanman Hu; Yu Chang; Xiuzhen Li; Rencang Bu

    2005-01-01

    Geostatistical stochastic simulation is always combined with the Monte Carlo method to quantify the uncertainty in spatial model simulations. However, because of their complexity, spatially explicit forest models have relatively long running times, so it is usually infeasible to generate hundreds or thousands of Monte Carlo simulations. Thus, it is of great...
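
    For concreteness, a Latin hypercube sample of the kind that could drive a limited number of forest-model runs can be drawn with SciPy's qmc module (SciPy >= 1.7); the parameter names and ranges below are illustrative, not those of the study.

```python
# A minimal example of drawing a Latin hypercube sample with SciPy;
# the two "uncertain inputs" and their ranges are illustrative assumptions.
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=2, seed=42)   # two uncertain model inputs
unit_sample = sampler.random(n=10)           # 10 runs, stratified per input

# rescale from [0, 1)^2 to physical ranges, e.g. a probability and a rate
lower, upper = [0.0, 50.0], [0.1, 500.0]
sample = qmc.scale(unit_sample, lower, upper)
print(sample)
```

    Because each input's range is divided into n equal-probability strata with exactly one draw per stratum, even 10 runs cover the whole input space far more evenly than 10 purely random draws, which is what makes the method attractive when each model run is expensive.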

  18. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    PubMed

    Nadkarni, P M; Miller, P L

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations.

  19. The role of STK 11 gene testing in individuals with oral pigmentation.

    PubMed

    Duong, Bich-Thu; Winship, Ingrid

    2017-05-01

    Peutz-Jeghers syndrome (PJS) is a rare, autosomal dominant condition characterised by mucocutaneous pigmented lesions, gastrointestinal polyposis and a significant risk of cancer. Laugier-Hunziker syndrome (LHS) is a benign condition with similar dermatological features, but with no systemic complications. STK 11 gene testing allows clinicians to differentiate between these two disorders. This case report compares the dermatological similarities in four individuals with PJS or LHS and illustrates the potential benefit of genetic testing. There is > 90% likelihood of identifying a mutation in STK 11 if a patient fulfils the diagnostic criteria for PJS. Lifelong risk management is advised for these individuals with confirmed PJS. Diagnostic confirmation is important to provide rational management, in particular, endoscopic cancer surveillance, and psychological support. STK 11 testing can confirm those at risk of PJS, who require lifelong surveillance, and possibly release those with a simple dermatosis, such as LHS, from invasive and thus potentially harmful surveillance. © 2016 The Australasian College of Dermatologists.

  20. Syn-collisional felsic magmatism and continental crust growth: A case study from the North Qilian Orogenic Belt at the northern margin of the Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Chen, Shuo; Niu, Yaoling; Xue, Qiqi

    2018-05-01

    The abundant syn-collisional granitoids produced and preserved at the northern margin of the Tibetan Plateau provide a prime case for studying felsic magmatism and continental crust growth in response to continental collision. Here we present results from a systematic study of the syn-collisional granitoids and their mafic magmatic enclaves (MMEs) in the Laohushan (LHS) and Machangshan (MCS) plutons of the North Qilian Orogenic Belt (NQOB). The two types of MMEs from the LHS pluton exhibit crystallization ages (∼430 Ma) and bulk-rock isotopic compositions identical to those of their host granitoids, indicating a genetic link. Phase equilibrium constraints and pressure estimates for amphiboles from the LHS pluton, together with the whole-rock data, suggest that the two types of MMEs represent two evolution products of the same hydrous andesitic magmas. In combination with data on NQOB syn-collisional granitoids elsewhere, we suggest that the syn-collisional granitoids in the NQOB are material evidence of the melting of ocean crust and sediment. The remarkable compositional similarity between the LHS granitoids and the model bulk continental crust, in terms of major elements, trace elements, and some key element ratios, indicates that syn-collisional magmatism in the NQOB contributes to net continental crust growth, and that continental crust growth in the Phanerozoic through syn-collisional felsic magmatism (production and preservation) is a straightforward process that does not require petrologically and physically complex processes.

  1. Network Meta-Analysis Comparing the Efficacy of Therapeutic Treatments for Bronchiolitis in Children.

    PubMed

    Guo, Caili; Sun, Xiaomin; Wang, Xiaowen; Guo, Qing; Chen, Dan

    2018-01-01

    This study aims to compare placebo (PBO) and 7 therapeutic regimens, namely bronchodilator agents (BAs), hypertonic saline (HS), BA ± HS, corticosteroids (CS), epinephrine (EP), EP ± CS, and EP ± HS, to determine the optimal bronchiolitis treatment. We plotted networks using the curative outcomes of the included studies and specified the relations among the treatments using mean differences, standardized mean differences, and the corresponding 95% credible intervals. The surface under the cumulative ranking curve (SUCRA) was used to rank each therapy separately on clinical severity score (CSS) and length of hospital stay (LHS). This network meta-analysis included 40 articles, published from 1995 to 2016, concerning the treatment of bronchiolitis in children. None of the 7 therapeutic regimens differed significantly from PBO with regard to CSS, although among them BA performed better than CS. As for LHS, EP and EP ± HS had an advantage over PBO, and both were also more efficient than BA. The SUCRA results showed that EP ± CS was most effective and EP ± HS second most effective with regard to CSS; with regard to LHS, EP ± HS ranked first, EP ± CS second, and EP third. We recommend EP ± CS and EP ± HS as the first choice for bronchiolitis treatment in children because of their outstanding performance with regard to both CSS and LHS. © 2017 American Society for Parenteral and Enteral Nutrition.
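
    The SUCRA value used for these rankings has a simple closed form: it is the average, over k = 1 .. a-1, of the cumulative probability that a treatment ranks among the best k of a treatments. The sketch below computes it from a made-up rank-probability vector; the numbers are illustrative, not the study's estimates.

```python
# Compute a SUCRA value from a treatment's rank probabilities:
# SUCRA = sum of P(rank <= k) for k = 1 .. a-1, divided by (a - 1).
import numpy as np

def sucra(rank_probs: np.ndarray) -> float:
    """rank_probs[k] = probability the treatment is ranked (k+1)-th best."""
    a = len(rank_probs)
    cumulative = np.cumsum(rank_probs)[:-1]   # P(rank <= k), k = 1 .. a-1
    return float(cumulative.sum() / (a - 1))

if __name__ == "__main__":
    probs = np.array([0.55, 0.25, 0.15, 0.05])  # hypothetical 4-treatment network
    print(f"SUCRA = {sucra(probs):.3f}")        # close to 1 => likely the best
```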

  2. Low reversibility of intracellular cAMP accumulation in mouse Leydig tumor cells (MLTC-1) stimulated by human Luteinizing Hormone (hLH) and Chorionic Gonadotropin (hCG).

    PubMed

    Klett, Danièle; Meslin, Philippine; Relav, Lauriane; Nguyen, Thi Mong Diep; Mariot, Julie; Jégot, Gwenhaël; Cahoreau, Claire; Combarnous, Yves

    2016-10-15

    To study the intracellular cAMP response kinetics of Leydig cells to hormones with LH activity, we used MLTC-1 cells transiently expressing a chimeric cAMP-responsive luciferase, so that real-time variations in intracellular cAMP concentration could be followed through the oxiluciferin luminescence produced by catalyzed luciferin oxidation. The potencies of the different LHs and CGs were evaluated using the areas under the curves (AUC) of their kinetics over 60 min of stimulation. All mammalian LHs and CGs tested were found to stimulate cAMP accumulation in these cells. The reversibility of this stimulation was studied by removing the hormone from the culture medium after 10 min of incubation. The ratios of the kinetics AUC with and without hormone removal were used to evaluate the reversibility of stimulation by each hormone. Natural and recombinant hLHs and hCGs were found to exhibit slowly reversible activation compared with pituitary rat, ovine, porcine, camel and equine LHs, serum-derived eCG (PMSG) and recombinant eLH/CGs. Carbohydrate side chains are not involved in this phenomenon, since natural and recombinant homologous hormones exhibit the same reversibility rates. It is still unknown whether only one human subunit, α or β, is responsible for this behaviour, or whether it is due to a particular feature of the hLH and hCG quaternary structure. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  3. Essential roles of insulin, AMPK signaling and lysyl and prolyl hydroxylases in the biosynthesis and multimerization of adiponectin.

    PubMed

    Zhang, Lin; Li, Ming-Ming; Corcoran, Marie; Zhang, Shaoping; Cooper, Garth J S

    2015-01-05

    Post-translational modifications (PTMs) of the adiponectin molecule are essential for its full bioactivity, and defects in PTMs leading to its defective production and multimerization have been linked to the mechanisms of insulin resistance, obesity, and type-2 diabetes. Here we observed that, in differentiated 3T3-L1 adipocytes, decreased insulin signaling caused by blocking insulin receptors (InsR) with an anti-InsR blocking antibody increased rates of adiponectin secretion, whereas concomitant elevations in insulin levels counteracted this effect. Adenosine monophosphate-activated protein kinase (AMPK) signaling regulated adiponectin production by modulating the expression of adiponectin receptors, the secretion of adiponectin, and eventually the expression of adiponectin itself. We found that lysyl hydroxylases (LHs) and prolyl hydroxylases (PHs) were expressed in the white adipose tissue of ob/ob mice, wherein LH3 levels were increased compared with controls. In differentiated 3T3-L1 adipocytes, both non-specific inhibition of LHs and PHs by dipyridyl and specific inhibition of LHs by minoxidil and of P4H with ethyl-3,4-dihydroxybenzoate caused significant suppression of adiponectin production, particularly of the higher-order isoforms. Transient gene knock-down of LH3 (Plod3) caused a suppressive effect, especially on the high-molecular-weight (HMW) isoforms. These data indicate that PHs and LHs are both required for physiological adiponectin production and, in particular, are essential for the formation/secretion of the HMW isoforms. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  4. Burial of thermally perturbed Lesser Himalayan mid-crust: Evidence from petrochemistry and P-T estimation of the western Arunachal Himalaya, India

    NASA Astrophysics Data System (ADS)

    Goswami-Banerjee, Sriparna; Bhowmik, Santanu Kumar; Dasgupta, Somnath; Pant, Naresh Chandra

    2014-11-01

    In this work, we establish a dual prograde P-T path for the Lesser Himalayan Sequence (LHS) rocks of the western Arunachal Himalaya (WAH). The investigated metagranites and garnet- and kyanite-zone metapelites of the LHS are part of an inverted metamorphic sequence (IMS) that is exposed on the footwall side of the Main Central Thrust (MCT). Integrated petrographic, mineral chemistry, geothermobarometric (conventional and isopleth intersection methods) and P-T pseudosection modeling studies reveal a near isobaric (at P ~ 8-9 kbar) peak Barrovian metamorphism, with an increase in TMax from ~ 560 °C in the metagranite through ~ 590-600 °C in the lower and middle garnet zone to ~ 600-630 °C in the upper garnet- and kyanite-zone rocks. The metamorphic sequence of the LHS additionally records a pre-Barrovian near isobaric thermal gradient in the mid-crust (at ~ 6 kbar), from ~ 515 °C (in the middle garnet zone) to ~ 560-580 °C (in the upper garnet and kyanite zone, adjoining the Main Central Thrust). Further burial (along a steep dP/dT gradient) to a uniform depth corresponding to ~ 8-9 kbar and prograde heating of the differentially heated LHS rocks led to the formation of a near isobaric metamorphic field gradient in the Barrovian metamorphic zones of the WAH. A combined critical taper and channel flow model is presented to explain the inverted metamorphic zonation of the rocks of the WAH.

  5. Leadership Perspectives on Operationalizing the Learning Health Care System in an Integrated Delivery System.

    PubMed

    Psek, Wayne; Davis, F Daniel; Gerrity, Gloria; Stametz, Rebecca; Bailey-Davis, Lisa; Henninger, Debra; Sellers, Dorothy; Darer, Jonathan

    2016-01-01

    Healthcare leaders need operational strategies that support organizational learning for continued improvement and value generation. The learning health system (LHS) model may provide leaders with such strategies; however, little is known about leaders' perspectives on the value and application of system-wide operationalization of the LHS model. The objective of this project was to solicit and analyze senior health system leaders' perspectives on the LHS and learning activities in an integrated delivery system. A series of interviews were conducted with 41 system leaders from a broad range of clinical and administrative areas across an integrated delivery system. Leaders' responses were categorized into themes. Ten major themes emerged from our conversations with leaders. While leaders generally expressed support for the concept of the LHS and enhanced system-wide learning, their concerns and suggestions for operationalization were strongly aligned with their functional area and strategic goals. Our findings suggest that leaders tend to adopt a very pragmatic approach to learning. Leaders expressed a tension between the imperative to execute operational objectives efficiently and the need for rigorous evaluation. Alignment of learning activities with system-wide strategic and operational priorities is important to gain leadership support and resources. Practical approaches to addressing opportunities and challenges identified in the themes are discussed. Continuous learning is an ongoing, multi-disciplinary function of a health care delivery system. Findings from this and other research may be used to inform and prioritize system-wide learning objectives and strategies that support reliable, high-value care delivery.

  6. Genetic association between human chitinases and lung function in COPD.

    PubMed

    Aminuddin, F; Akhabir, L; Stefanowicz, D; Paré, P D; Connett, J E; Anthonisen, N R; Fahy, J V; Seibold, M A; Burchard, E G; Eng, C; Gulsvik, A; Bakke, P; Cho, M H; Litonjua, A; Lomas, D A; Anderson, W H; Beaty, T H; Crapo, J D; Silverman, E K; Sandford, A J

    2012-07-01

    Two primary chitinases have been identified in humans: acid mammalian chitinase (AMCase) and chitotriosidase (CHIT1). Mammalian chitinases have been observed to affect the host's immune response. The aim of this study was to test for association between genetic variation in the chitinases and phenotypes related to chronic obstructive pulmonary disease (COPD). Polymorphisms in the chitinase genes were selected based on previous associations with respiratory diseases. Polymorphisms that were associated with lung function level or rate of decline in the Lung Health Study (LHS) cohort were analyzed for association with COPD affection status in four other COPD case-control populations. Chitinase activity and protein levels were also related to genotypes. In the Caucasian LHS population, the baseline forced expiratory volume in one second (FEV1) was significantly different between the AA and GG genotypic groups of the AMCase rs3818822 polymorphism. Subjects with the GG genotype had higher AMCase protein and chitinase activity compared with AA homozygotes. For CHIT1 rs2494303, a significant association was observed between rate of decline in FEV1 and the different genotypes. In the African American LHS population, CHIT1 rs2494303 and AMCase G339T genotypes were associated with rate of decline in FEV1. Although a significant effect of chitinase gene alleles was found on lung function level and decline in the LHS, we were unable to replicate the associations with COPD affection status in the other COPD study groups.

  7. Serum PARC/CCL-18 Concentrations and Health Outcomes in Chronic Obstructive Pulmonary Disease

    PubMed Central

    Sin, Don D.; Miller, Bruce E.; Duvoix, Annelyse; Man, S. F. Paul; Zhang, Xuekui; Silverman, Edwin K.; Connett, John E.; Anthonisen, Nicholas A.; Wise, Robert A.; Tashkin, Donald; Celli, Bartolome R.; Edwards, Lisa D.; Locantore, Nicholas; MacNee, William; Tal-Singer, Ruth; Lomas, David A.

    2011-01-01

    Rationale: There are no accepted blood-based biomarkers in chronic obstructive pulmonary disease (COPD). Pulmonary and activation-regulated chemokine (PARC/CCL-18) is a lung-predominant inflammatory protein that is found in serum. Objectives: To determine whether PARC/CCL-18 levels are elevated and modifiable in COPD and to determine their relationship to clinical end points of hospitalization and mortality. Methods: PARC/CCL-18 was measured in serum samples from individuals who participated in the ECLIPSE (Evaluation of COPD Longitudinally to Identify Predictive Surrogate Endpoints) and LHS (Lung Health Study) studies and a prednisolone intervention study. Measurements and Main Results: Serum PARC/CCL-18 levels were higher in subjects with COPD than in smokers or lifetime nonsmokers without COPD (105 vs. 81 vs. 80 ng/ml, respectively; P < 0.0001). Elevated PARC/CCL-18 levels were associated with increased risk of cardiovascular hospitalization or mortality in the LHS cohort and with total mortality in the ECLIPSE cohort. Conclusions: Serum PARC/CCL-18 levels are elevated in COPD and track clinical outcomes. PARC/CCL-18, a lung-predominant chemokine, could be a useful blood biomarker in COPD. Clinical trial registered with www.clinicaltrials.gov (NCT 00292552). PMID:21216880

  8. Hypercube technology

    NASA Technical Reports Server (NTRS)

    Parker, Jay W.; Cwik, Tom; Ferraro, Robert D.; Liewer, Paulett C.; Patterson, Jean E.

    1991-01-01

    The JPL-designed MARKIII hypercube supercomputer has been in application service since June 1988 and has been applied successfully to a broad problem set, including electromagnetic scattering, discrete event simulation, plasma transport, matrix algorithms, neural network simulation, image processing, and graphics. Currently, problems that are not homogeneous are being attempted, and, through this involvement with real-world applications, the software is evolving to handle this heterogeneous class of problems efficiently.

  9. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    PubMed Central

    Nadkarni, P. M.; Miller, P. L.

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations. PMID:1807632

  10. SPOTting model parameters using a ready-made Python package

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Kraft, Philipp; Breuer, Lutz

    2015-04-01

    The selection and parameterization of reliable process descriptions in ecological modelling is driven by several uncertainties. The procedure is highly dependent on various criteria, such as the algorithm used, the likelihood function selected and the definition of the prior parameter distributions. A wide variety of tools have been developed in the past decades to optimize parameters. Some of the tools are closed source. Due to this, the choice of a specific parameter estimation method is sometimes driven more by its availability than by its performance. A toolbox with a large set of methods can support users in deciding on the most suitable method. Further, it enables them to test and compare different methods. We developed SPOT (Statistical Parameter Optimization Tool), an open-source Python package containing a comprehensive set of modules to analyze and optimize parameters of (environmental) models. SPOT comes with a selected set of algorithms for parameter optimization and uncertainty analyses (Monte Carlo, MC; Latin Hypercube Sampling, LHS; Maximum Likelihood Estimation, MLE; Markov Chain Monte Carlo, MCMC; Shuffled Complex Evolution, SCE-UA; Differential Evolution Markov Chain, DE-MCZ), together with several likelihood functions (Bias, (log-) Nash-Sutcliffe model efficiency, Correlation Coefficient, Coefficient of Determination, Covariance, (Decomposed-, Relative-, Root-) Mean Squared Error, Mean Absolute Error, Agreement Index) and prior distributions (Binomial, Chi-Square, Dirichlet, Exponential, Laplace, (log-, multivariate-) Normal, Pareto, Poisson, Cauchy, Uniform, Weibull) to sample from. The model-independent structure makes it suitable for analyzing a wide range of applications. We apply all algorithms of the SPOT package in three case studies. First, we investigate the response of the Rosenbrock function, where the MLE algorithm shows its strengths. Second, we study the Griewank function, which has a challenging response surface for optimization methods. Here we see simple algorithms like MCMC struggling to find the global optimum of the function, while algorithms like SCE-UA and DE-MCZ show their strengths. Third, we apply an uncertainty analysis to a one-dimensional, physically based hydrological model built with the Catchment Modelling Framework (CMF). The model is driven by meteorological and groundwater data from a Free Air Carbon Enrichment (FACE) experiment in Linden (Hesse, Germany). Simulation results are evaluated against measured soil moisture data. We search for optimal parameter sets of the van Genuchten-Mualem function and find different, equally optimal solutions with some of the algorithms. The case studies reveal that the implemented SPOT methods work sufficiently well. They further show the benefit of having at hand one tool that includes a number of parameter search methods, likelihood functions and a priori parameter distributions within one platform-independent package.
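
    As a concrete illustration of the stratified scheme behind the LHS option named above, the following is a minimal Latin hypercube sampler in plain NumPy. It is a sketch for orientation only and does not reproduce the SPOT package's actual API; the function name and arguments are invented for this example.

        import numpy as np

        def latin_hypercube(n_samples, n_dims, seed=None):
            """Draw an (n_samples x n_dims) Latin hypercube design on [0, 1)."""
            rng = np.random.default_rng(seed)
            # One uniform draw inside each of n_samples equal-probability strata.
            u = (np.arange(n_samples) + rng.random((n_dims, n_samples))) / n_samples
            # Shuffle the strata independently in every dimension so the
            # pairing of strata across dimensions is random.
            for row in u:
                rng.shuffle(row)
            return u.T

        x = latin_hypercube(10, 2, seed=42)  # each margin has one point per stratum

    Each one-dimensional margin is divided into n_samples equal-probability bins with exactly one sample per bin, which is what distinguishes LHS from plain Monte Carlo sampling.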

  11. Communication overhead on the Intel iPSC-860 hypercube

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.

    1990-01-01

    Experiments were conducted on the Intel iPSC-860 hypercube in order to evaluate the overhead of interprocessor communication. It is demonstrated that: (1) contrary to popular belief, the distance between two communicating processors has a significant impact on communication time, (2) edge contention can increase communication time by a factor of more than 7, and (3) node contention has no measurable impact.

  12. Vitamin D receptor variability and physical activity are jointly associated with low handgrip strength and osteoporosis in community-dwelling elderly people in Taiwan: the Taichung Community Health Study for Elders (TCHS-E).

    PubMed

    Wu, F-Y; Liu, C-S; Liao, L-N; Li, C-I; Lin, C-H; Yang, C-W; Meng, N-H; Lin, W-Y; Chang, C-K; Hsiao, J-H; Li, T-C; Lin, C-C

    2014-07-01

    We studied 472 elders to assess the joint association of vitamin D receptor (VDR) variability and physical activity with low handgrip strength (LHS) and osteoporosis (OST). Our findings showed that higher risks of OST were associated with physically inactive elders carrying specific VDR variations, highlighting the importance of programs promoting physical activity. The aim of this study was to determine the joint association between VDR variability and physical activity on LHS and OST in community-dwelling elders. Bone mineral density of the lumbar spine (LS), the femoral neck (FN), and the total hip was measured by dual-energy X-ray absorptiometry. Four single-nucleotide polymorphisms (SNPs) of the VDR gene (rs7975232, rs1544410, rs2239185, and rs3782905) were examined in 472 participants. Physical inactivity combined with each of the four SNPs was associated with a significantly greater risk of LHS than that associated with any of the VDR SNPs or low physical activity alone. Physically inactive men with the AG or AA genotype of rs2239185 had a significantly greater risk of overall, LS, and FN OST than physically active men with the GG genotype [odds ratio (OR) 3.57, 95 % confidence interval (CI) 1.10-11.65; OR 4.74, 95 % CI 1.43-15.70; and OR 5.06, 95 % CI 1.08-23.71, respectively]. Similarly, physically inactive women with the CG or CC genotype of rs3782905 and the AG or AA genotype of rs1544410 had a significantly greater risk of FN OST than physically active women with the GG genotype (OR 5.33, 95 % CI 1.23-23.06 and OR 5.36, 95 % CI 1.11-25.94, respectively). VDR polymorphisms and physical activity are jointly associated with LHS and OST in elders. Health care programs should promote physical activity among elders as a cost-effective way to prevent LHS and OST, especially in those who may be genetically predisposed.

  13. Architectural frameworks: defining the structures for implementing learning health systems.

    PubMed

    Lessard, Lysanne; Michalowski, Wojtek; Fung-Kee-Fung, Michael; Jones, Lori; Grudniewicz, Agnes

    2017-06-23

    The vision of transforming health systems into learning health systems (LHSs) that rapidly and continuously transform knowledge into improved health outcomes at lower cost is generating increased interest in government agencies, health organizations, and health research communities. While existing initiatives demonstrate that different approaches can succeed in making the LHS vision a reality, they are too varied in their goals, focus, and scale to be reproduced without undue effort. Indeed, the structures necessary to effectively design and implement LHSs on a larger scale are lacking. In this paper, we propose the use of architectural frameworks to develop LHSs that adhere to a recognized vision while being adapted to their specific organizational context. Architectural frameworks are high-level descriptions of an organization as a system; they capture the structure of its main components at varied levels, the interrelationships among these components, and the principles that guide their evolution. Because these frameworks support the analysis of LHSs and allow their outcomes to be simulated, they act as pre-implementation decision-support tools that identify potential barriers and enablers of system development. They thus increase the chances of successful LHS deployment. We present an architectural framework for LHSs that incorporates five dimensions (goals, scientific, social, technical, and ethical) commonly found in the LHS literature. The proposed architectural framework comprises six decision layers that model these dimensions. The performance layer models goals, the scientific layer models the scientific dimension, the organizational layer models the social dimension, the data layer and information technology layer model the technical dimension, and the ethics and security layer models the ethical dimension. We describe the types of decisions that must be made within each layer and identify methods to support decision-making. In this paper, we outline a high-level architectural framework grounded in conceptual and empirical LHS literature. Applying this architectural framework can guide the development and implementation of new LHSs and the evolution of existing ones, as it allows for a clear and critical understanding of the types of decisions that underlie LHS operations. Further research is required to assess and refine its generalizability and methods.

  14. Global sensitivity analysis in wind energy assessment

    NASA Astrophysics Data System (ADS)

    Tsvetkova, O.; Ouarda, T. B.

    2012-12-01

    Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, or output variable. It also provides ways to calculate explicit measures of the importance of input variables (first-order and total-effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above-mentioned indices were applied and compared: the brute-force method and the best-practice estimation procedure. In this study, a methodology for conducting global SA of wind energy assessment at the planning stage is proposed. Three sampling strategies, which are part of the SA procedure, were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify the application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study, the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified by ranking the total-effect sensitivity indices. The results of the present research show that the brute-force method is best for wind assessment purposes, and SBSS outperforms the other sampling strategies in the majority of cases. The results indicate that the Weibull scale parameter, turbine lifetime and Weibull shape parameter are the three most influential variables in the case study setting. The following conclusions can be drawn from these results: 1) SBSS should be recommended for use in Monte Carlo experiments, 2) the brute-force method should be recommended for conducting sensitivity analysis in wind resource assessment, and 3) little variation in the Weibull scale parameter causes significant variation in energy production. The presence of the two distribution parameters among the top three influential variables (the Weibull shape and scale) emphasizes the importance of accuracy in (a) choosing the distribution to model the wind regime at a site and (b) estimating the probability distribution parameters. This can be labeled as the most important conclusion of this research because it opens a field for further research, which the authors believe could change the wind energy field tremendously.
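
    To make the brute-force index estimation mentioned above concrete, the sketch below estimates a first-order sensitivity index, S_i = Var(E[Y|X_i]) / Var(Y), by double-loop Monte Carlo on the standard Ishigami test function. This is a generic illustration under assumed sample sizes, not the study's wind-energy model.

        import numpy as np

        def ishigami(x, a=7.0, b=0.1):
            """Common sensitivity-analysis test function on [-pi, pi]^3."""
            return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
                    + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

        rng = np.random.default_rng(0)
        lo, hi = -np.pi, np.pi
        n_outer, n_inner = 500, 500

        # Outer loop fixes x_0; inner loop averages the model over x_1 and x_2.
        cond_means = np.empty(n_outer)
        for k in range(n_outer):
            x = rng.uniform(lo, hi, size=(n_inner, 3))
            x[:, 0] = rng.uniform(lo, hi)  # one fixed value of x_0 per outer step
            cond_means[k] = ishigami(x).mean()

        y = ishigami(rng.uniform(lo, hi, size=(100_000, 3)))
        print(cond_means.var() / y.var())  # S_1, analytically about 0.31

    A total-effect index can be estimated the same way by conditioning on all inputs except X_i and subtracting the resulting ratio from one; replacing the pseudo-random draws with LHS or Sobol' points is exactly the sampling-strategy comparison the study performs.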

  15. Dementia and severity of parkinsonism determines the handicap of patients in late-stage Parkinson's disease: the Barcelona-Lisbon cohort.

    PubMed

    Coelho, M; Marti, M J; Sampaio, C; Ferreira, J J; Valldeoriola, F; Rosa, M M; Tolosa, E

    2015-02-01

    Handicap has not been explored as a patient-centred outcome measure in Parkinson's disease (PD). The clinical features and medication use in late stages of PD (LS-PD) were reported previously. Handicap, medical conditions, use of healthcare resources and the impact of LS-PD upon caregivers were characterized in a cross-sectional study of LS-PD stages 4 or 5 of Hoehn and Yahr (H&Y). Handicap was measured using the London Handicap Scale (LHS: 0, maximal handicap; 1, no handicap). The mean LHS score in 50 patients was 0.33 (SD ±0.15). The presence of dementia, the Unified Parkinson's Disease Rating Scale part I score and the H&Y stage in 'off' independently predicted the LHS score (adjusted R2 = 0.62; P = 0.000). Comorbidities and past medical conditions were frequent. Thirty-five patients lived in their own homes. Forty-five received unpaid care. Mean visits in the preceding 6 months were 2.2 (SD ±3.0) to the family doctor and 1.7 (SD ±1.0) to a neurologist. Use of other health resources was low. Unpaid caregivers spent much time with patients and reported a high burden. Handicap could be measured in LS-PD, and the LHS was easily completed by patients and caregivers. The high handicap in our cohort was mostly driven by the presence of dementia, behavioural complaints and the severity of non-dopaminergic motor features. Patients visited doctors infrequently and made low use of health resources, whilst unpaid caregivers reported a high burden. © 2014 EAN.

  16. Leadership Perspectives on Operationalizing the Learning Health Care System in an Integrated Delivery System

    PubMed Central

    Psek, Wayne; Davis, F. Daniel; Gerrity, Gloria; Stametz, Rebecca; Bailey-Davis, Lisa; Henninger, Debra; Sellers, Dorothy; Darer, Jonathan

    2016-01-01

    Introduction: Healthcare leaders need operational strategies that support organizational learning for continued improvement and value generation. The learning health system (LHS) model may provide leaders with such strategies; however, little is known about leaders’ perspectives on the value and application of system-wide operationalization of the LHS model. The objective of this project was to solicit and analyze senior health system leaders’ perspectives on the LHS and learning activities in an integrated delivery system. Methods: A series of interviews were conducted with 41 system leaders from a broad range of clinical and administrative areas across an integrated delivery system. Leaders’ responses were categorized into themes. Findings: Ten major themes emerged from our conversations with leaders. While leaders generally expressed support for the concept of the LHS and enhanced system-wide learning, their concerns and suggestions for operationalization were strongly aligned with their functional area and strategic goals. Discussion: Our findings suggest that leaders tend to adopt a very pragmatic approach to learning. Leaders expressed a tension between the imperative to execute operational objectives efficiently and the need for rigorous evaluation. Alignment of learning activities with system-wide strategic and operational priorities is important to gain leadership support and resources. Practical approaches to addressing opportunities and challenges identified in the themes are discussed. Conclusion: Continuous learning is an ongoing, multi-disciplinary function of a health care delivery system. Findings from this and other research may be used to inform and prioritize system-wide learning objectives and strategies that support reliable, high-value care delivery. PMID:27683668

  17. Variability of spirometry in chronic obstructive pulmonary disease: results from two clinical trials.

    PubMed

    Herpel, Laura B; Kanner, Richard E; Lee, Shing M; Fessler, Henry E; Sciurba, Frank C; Connett, John E; Wise, Robert A

    2006-05-15

    Our goal is to determine short-term intraindividual biologic and measurement variability in spirometry of patients with a wide range of stable chronic obstructive pulmonary disease severity, using datasets from the National Emphysema Treatment Trial (NETT) and the Lung Health Study (LHS). This may be applied to determine criteria that can be used to assess a clinically meaningful change in spirometry. A total of 5,886 participants from the LHS and 1,215 participants from the NETT performed prebronchodilator spirometry during two baseline sessions. We analyzed varying criteria for absolute and percent change of FEV1 and FVC to determine which criterion was met by 90% of the participants. The mean +/- SD FEV1 for the initial session was 2.64 +/- 0.60 L (75.1 +/- 8.8% predicted) for the LHS and 0.68 +/- 0.22 L (23.7 +/- 6.5% predicted) for the NETT. The mean +/- SD number of days between test sessions was 24.9 +/- 17.1 for the LHS and 85.7 +/- 21.7 for the NETT. As the degree of obstruction increased, the intersession percent difference of FEV1 increased. However, the absolute difference between tests remained relatively constant despite the severity of obstruction (0.106 +/- 0.10 L). Over 90% of participants had an intersession FEV1 difference of less than 225 ml irrespective of the severity of obstruction. Absolute changes in FEV1 rather than percent change should be used to determine whether patients with chronic obstructive pulmonary disease have improved or worsened between test sessions.

  18. Realistic Vascular Replicator for TAVR Procedures.

    PubMed

    Rotman, Oren M; Kovarovic, Brandon; Sadasivan, Chander; Gruberg, Luis; Lieber, Baruch B; Bluestein, Danny

    2018-04-13

    Transcatheter aortic valve replacement (TAVR) is an over-the-wire procedure for treatment of severe aortic stenosis (AS). TAVR valves are conventionally tested using simplified left heart simulators (LHS). While those provide baseline performance reliably, their aortic root geometries are far from the anatomical in situ configuration, often overestimating the valves' performance. We report on a novel benchtop patient-specific arterial replicator designed for testing TAVR and training interventional cardiologists in the procedure. The Replicator is an accurate model of the human upper-body vasculature for training physicians in percutaneous interventions. It comprises a fully automated Windkessel mechanism to recreate physiological flow conditions. Calcified aortic valve models were fabricated and incorporated into the Replicator, then tested in a TAVR procedure performed by an experienced cardiologist using the Inovare valve. EOA, pressures, and angiograms were monitored pre- and post-TAVR. A St. Jude mechanical valve was tested as a reference that is less affected by the AS anatomy. Results for both valves in the Replicator were compared to their performance in a commercial ISO-compliant LHS. The AS anatomy in the Replicator resulted in a significant decrease in the TAVR valve's performance relative to the simplified LHS, with EOA and transvalvular pressures comparable to clinical data. Minor change was seen in the mechanical valve's performance. The Replicator proved to be an effective platform for TAVR testing. Unlike an LHS with simplified geometric anatomy, it conservatively provides clinically relevant outcomes and complements it. The Replicator can be most valuable for testing new valves under challenging patient anatomies, for physician training, and for procedural planning.

  19. [Analysis of the pediatric trauma score in patients wounded with shrapnel; the effect of explosives with high kinetic energy: results of the first intervention center].

    PubMed

    Taş, Hüseyin; Mesci, Ayhan; Demirbağ, Suzi; Eryılmaz, Mehmet; Yiğit, Taner; Peker, Yusuf

    2013-03-01

    We aimed to assess pediatric trauma score analysis in pediatric trauma cases caused by the shrapnel effect of explosive materials with high kinetic energy. The data of 17 pediatric injuries between February 2002 and August 2005 were reviewed retrospectively. Information about age, gender, trauma-to-hospital interval, trauma mechanism, the injured organs, pediatric Glasgow coma score (PGCS), pediatric trauma score (PTS), hemodynamic parameters, blood transfusion, interventions and length of hospital stay (LHS) was investigated. While all patients suffered from trauma to the extremities, only four patients had traumatic lower-limb amputation. Transportation time was ≤1 hour in 35% of cases and >1 hour in 65% of cases. PTS was ≤8 in 35.3% of cases (n=6) and higher than 8 in 64.7% (n=11). The median heart rate in patients with PTS ≤8 was 94 beats/min, versus 70 beats/min in those with PTS >8 (p=0.007). Morbidity rates of PTS ≤8 and PTS >8 cases were 29.4% and 5.9%, respectively (p=0.026). While LHS was 22.8 days in PTS ≤8 cases, it was only 4 days in PTS >8 cases; this difference was statistically significant (p=0.001). PTS is a very efficient and time-saving procedure for assessing the severity of trauma caused by the shrapnel effect. The median heart rate, morbidity, and LHS increased significantly in patients with PTS ≤8.

  20. Associations between psychometrically assessed life history strategy and daily behavior: data from the Electronically Activated Recorder (EAR)

    PubMed Central

    2018-01-01

    Life history theory has generated cogent, well-supported hypotheses about individual differences in human biodemographic traits (e.g., age at sexual maturity) and psychometric traits (e.g., conscientiousness), but little is known about how variation in life history strategy (LHS) is manifest in quotidian human behavior. Here I test predicted associations between the self-report Arizona Life History Battery and frequencies of 12 behaviors observed over 72 h in 91 US college students using the Electronically Activated Recorder (EAR), a method of gathering periodic brief audio recordings as participants go about their daily lives. Bayesian multi-level aggregated binomial regression analysis found no strong associations between ALHB scores and behavior frequencies. One behavior, presence at amusement venues (bars, concerts, sports events) was weakly positively associated with ALHB-assessed slow LHS, contrary to prediction. These results may represent a challenge to the ALHB’s validity. However, it remains possible that situational influences on behavior, which were not measured in the present study, moderate the relationships between psychometrically-assessed LHS and quotidian behavior. PMID:29868275

  1. Associations between psychometrically assessed life history strategy and daily behavior: data from the Electronically Activated Recorder (EAR).

    PubMed

    Manson, Joseph H

    2018-01-01

    Life history theory has generated cogent, well-supported hypotheses about individual differences in human biodemographic traits (e.g., age at sexual maturity) and psychometric traits (e.g., conscientiousness), but little is known about how variation in life history strategy (LHS) is manifest in quotidian human behavior. Here I test predicted associations between the self-report Arizona Life History Battery and frequencies of 12 behaviors observed over 72 h in 91 US college students using the Electronically Activated Recorder (EAR), a method of gathering periodic brief audio recordings as participants go about their daily lives. Bayesian multi-level aggregated binomial regression analysis found no strong associations between ALHB scores and behavior frequencies. One behavior, presence at amusement venues (bars, concerts, sports events) was weakly positively associated with ALHB-assessed slow LHS, contrary to prediction. These results may represent a challenge to the ALHB's validity. However, it remains possible that situational influences on behavior, which were not measured in the present study, moderate the relationships between psychometrically-assessed LHS and quotidian behavior.

  2. A magnetic model for low/hard state of black hole binaries

    NASA Astrophysics Data System (ADS)

    Wang, Ding-Xiong

    2015-08-01

    A magnetic model for the low/hard state (LHS) of the black hole X-ray binaries (BHXBs) H1743-322 and GX 339-4 is proposed, based on the transport of magnetic field from a companion into an accretion disc around a black hole (BH). This model consists of a truncated thin disc with an inner advection-dominated accretion flow (ADAF). The spectral profiles of the sources are fitted in agreement with the data observed at four different dates corresponding to the rising stage of the LHS. In addition, the association of the LHS with a quasi-steady jet is modelled based on the transport of magnetic field, where the Blandford-Znajek (BZ) and Blandford-Payne (BP) processes are invoked to drive the jets from the BH and the inner ADAF. It turns out that the steep radio-X-ray correlations observed in H1743-322 and GX 339-4 can be interpreted with our model. It is suggested that large-scale magnetic field can be regarded as the second parameter governing the state transitions in some BHXBs.

  3. Microstructures and strain variation: Evidence of multiple splays in the North Almora Thrust Zone, Kumaun Lesser Himalaya, Uttarakhand, India

    NASA Astrophysics Data System (ADS)

    Joshi, Gaurav; Agarwal, Amar; Agarwal, K. K.; Srivastava, Samriddhi; Alva Valdivia, L. M.

    2017-01-01

    The North Almora Thrust zone (NATZ) marks the boundary of the Almora Crystalline Complex (ACC) against the Lesser Himalayan Sedimentary sequence (LHS) in the north. Its southern counterpart, the South Almora Thrust (SAT), is a sharply marked contact between the ACC and the LHS in the south. Published studies propose various contradictory emplacement modes for the North Almora Thrust, and recent studies have implied splays of smaller back thrusts in the NATZ. The present study investigates meso- and microstructures and strain distribution in the NATZ and compares the latter with the strain distribution across the SAT. In the NATZ, field evidence reveals a repeated sequence of 10-500 m thick slices of proto- to ultra-mylonite, thrust over the Lesser Himalayan Rautgara quartzite. In accordance with the field evidence, the strain analysis reveals the effects of splays of smaller thrusts in the NATZ. The study therefore argues that, contrary to popular nomenclature, the northern contact of the ACC with the LHS is not a single thrust plane but a thrust zone marked by numerous thrust splays.

  4. RISC-type microprocessors may revolutionize aerospace simulation

    NASA Astrophysics Data System (ADS)

    Jackson, Albert S.

    The author explores the application of RISC (reduced instruction set computer) processors in massively parallel computer (MPC) designs for aerospace simulation. The MPC approach is shown to be well adapted to the needs of aerospace simulation. It is shown that any of the three common types of interconnection schemes used with MPCs is effective for general-purpose simulation, although the bus- or switch-oriented machines are somewhat easier to use. For partial differential equation models, the hypercube approach at first glance appears more efficient because the nearest-neighbor connections required for three-dimensional models are hardwired in a hypercube machine. However, the data broadcast ability of a bus system, combined with the fact that data can be transmitted over a bus as soon as it has been updated, makes the bus approach very competitive with the hypercube approach even for these types of models.

  5. Generalized hypercube structures and hyperswitch communication network

    NASA Technical Reports Server (NTRS)

    Young, Steven D.

    1992-01-01

    This paper discusses an ongoing study that uses a recent development in communication control technology to implement hybrid hypercube structures. These architectures are similar to binary hypercubes, but they also provide added connectivity between the processors. This added connectivity increases communication reliability while decreasing the latency of interprocessor message passing. Because these factors directly determine the speed that can be obtained by multiprocessor systems, these architectures are attractive for applications such as remote exploration and experimentation, where high performance and ultrareliability are required. This paper describes and enumerates these architectures and discusses how they can be implemented with a modified version of the hyperswitch communication network (HCN). The HCN is analyzed because it has three attractive features that enable these architectures to be effective: speed, fault tolerance, and the ability to pass multiple messages simultaneously through the same hyperswitch controller.

  6. A parallel offline CFD and closed-form approximation strategy for computationally efficient analysis of complex fluid flows

    NASA Astrophysics Data System (ADS)

    Allphin, Devin

    Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by their underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying these issues inherent to CFD, one traditionally overlooked area of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning the proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing resources for other expenditures. To validate these inferences, a design optimization problem was presented, requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine. These forces were determined using parallel surrogate and exact approximation methods, thus evidencing the comparative benefits of this technique. For the offline approximation, Latin hypercube sampling (LHS) was used for design space filling across four (4) independent design variable degrees of freedom (DOF). Flow solutions at the mapped test sites were converged using STAR-CCM+, with aerodynamic forces from the CFD models then functionally approximated using Kriging interpolation. For the closed-form approximation, the problem was interpreted as an ideal 2-D converging-diverging (C-D) nozzle, where aerodynamic forces were directly mapped by application of the Euler equation solutions for isentropic compression/expansion. A cost-weighting procedure was finally established for creating model-selective discretionary logic, with a synthesized parallel simulation resource summary provided.
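
    The offline-approximation loop described above (an LHS design over the design variables, expensive solver runs at the design sites, then a cheap interpolant queried in their place) can be sketched as follows. SciPy's radial-basis-function interpolant stands in for the Kriging model, and the analytic "model" is a placeholder for a converged STAR-CCM+ run; the bounds and names are illustrative assumptions.

        import numpy as np
        from scipy.stats import qmc
        from scipy.interpolate import RBFInterpolator

        def expensive_model(x):
            """Placeholder for an expensive CFD force evaluation."""
            return np.sin(x[:, 0]) * x[:, 1] + x[:, 2] * np.exp(-x[:, 3])

        # Space-filling LHS design over 4 design-variable degrees of freedom.
        sampler = qmc.LatinHypercube(d=4, seed=0)
        lo, hi = np.zeros(4), 2.0 * np.ones(4)
        X = qmc.scale(sampler.random(n=40), lo, hi)
        y = expensive_model(X)  # the only "expensive" evaluations needed

        surrogate = RBFInterpolator(X, y)  # offline approximation of the forces
        x_query = np.array([[1.0, 0.5, 1.5, 0.2]])
        print(surrogate(x_query))  # cheap estimate at an untried design point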

  7. Concurrent Image Processing Executive (CIPE). Volume 1: Design overview

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.

    1990-01-01

    The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high-performance science analysis workstation, are described. The target machine for this software is a JPL/Caltech Mark IIIfp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3 or Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local-memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: user interface, host-resident executive, hypercube-resident executive, and application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented. The data management also allows data sharing among application programs. The CIPE software architecture provides a flexible environment for scientific analysis of complex remote sensing image data, such as planetary data and imaging spectrometry, utilizing state-of-the-art concurrent computation capabilities.

  8. Geometric steering criterion for two-qubit states

    NASA Astrophysics Data System (ADS)

    Yu, Bai-Chu; Jia, Zhih-Ahn; Wu, Yu-Chun; Guo, Guang-Can

    2018-01-01

    According to the geometric characterization of measurement assemblages and local hidden state (LHS) models, we propose a steering criterion which is both necessary and sufficient for two-qubit states under arbitrary measurement sets. A quantity is introduced to describe the local resources required to reconstruct a measurement assemblage for two-qubit states. We show that this quantity can be regarded as a quantification of steerability and can be used to find optimal LHS models. Finally, we propose a method to generate unsteerable states, and construct some two-qubit states which are entangled but unsteerable under all projective measurements.
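
    For orientation, the LHS models referred to above have a standard textbook definition, sketched here in the usual notation rather than the paper's specific formalism: a state ρ_AB is unsteerable from Alice to Bob when every assemblage she can prepare admits the decomposition

        \sigma_{a|x} \;=\; \operatorname{Tr}_A\!\bigl[(M_{a|x}\otimes\mathbb{I})\,\rho_{AB}\bigr]
                     \;=\; \sum_{\lambda} p(\lambda)\, p(a|x,\lambda)\, \rho_{\lambda},

    where M_{a|x} is Alice's measurement operator for outcome a of setting x and the ρ_λ are local hidden states held by Bob. The criterion proposed in the paper decides whether such a decomposition exists for two-qubit states under arbitrary measurement sets.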

  9. Development of Drug Delivery Systems Based on Layered Hydroxides for Nanomedicine

    PubMed Central

    Barahuie, Farahnaz; Hussein, Mohd Zobir; Fakurazi, Sharida; Zainal, Zulkarnain

    2014-01-01

    Layered hydroxides (LHs) have recently fascinated researchers due to their wide application in various fields. These inorganic nanoparticles, with excellent features as nanocarriers in drug delivery systems, have the potential to play an important role in healthcare. Owing to their outstanding ion-exchange capacity, many organic pharmaceutical drugs have been intercalated into the interlayer galleries of LHs; consequently, novel nanodrugs or smart drugs may revolutionize the treatment of diseases. Layered hydroxides, as green nanoreservoirs with sustained drug release and cell-targeting properties, hold great promise for improving health and prolonging life. PMID:24802876

  10. Matched-filter algorithm for subpixel spectral detection in hyperspectral image data

    NASA Astrophysics Data System (ADS)

    Borough, Howard C.

    1991-11-01

    Hyperspectral imagery, spatial imagery with associated wavelength data for every pixel, offers a significant potential for improved detection and identification of certain classes of targets. The ability to make spectral identifications of objects which only partially fill a single pixel (due to range or small size) is of considerable interest. Multiband imagery such as Landsat's 5- and 7-band imagery has demonstrated significant utility in the past. Hyperspectral imaging systems with hundreds of spectral bands offer improved performance. To explore the application of different subpixel spectral detection algorithms, a synthesized set of hyperspectral image data (hypercubes) was generated utilizing NASA earth resources and other spectral data. The data were modified using LOWTRAN 7 to model the illumination, atmospheric contributions, attenuations and viewing geometry to represent a nadir view from 10,000 ft altitude. The base hypercube (HC) represented 16 by 21 spatial pixels with 101 wavelength samples from 0.5 to 2.5 micrometers for each pixel. Insertions were made into the base data with random location, random pixel percentage, and random material. Fifteen different hypercubes were generated for blind testing of candidate algorithms. An algorithm utilizing a matched filter in the spectral dimension proved surprisingly good, yielding 100% detection for pixels filled more than 40% with a standard camouflage paint, and a 50% probability of detection for pixels filled 20% with the paint, with no false alarms. The false alarm rate as a function of the number of spectral bands in the range from 101 to 12 bands was measured and found to increase from zero to 50%, illustrating the value of a large number of spectral bands. This test was on imagery without system noise; the next step is to incorporate typical system noise sources.
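
    A minimal version of a spectral matched filter of the kind described above scores each pixel's spectrum against a known target signature after whitening by the scene covariance. The sketch below uses synthetic data; the signatures, fill fractions and sizes are illustrative assumptions, not the study's data.

        import numpy as np

        rng = np.random.default_rng(1)
        n_pix, n_bands = 16 * 21, 101  # spatial pixels and spectral bands
        scene = rng.normal(0.5, 0.05, size=(n_pix, n_bands))
        target = np.linspace(0.2, 0.9, n_bands)  # known target signature

        # Implant the target into one pixel at a 40% subpixel fill fraction.
        scene[100] = 0.6 * scene[100] + 0.4 * target

        mu = scene.mean(axis=0)
        cov = np.cov(scene, rowvar=False) + 1e-6 * np.eye(n_bands)  # regularized
        w = np.linalg.solve(cov, target - mu)  # whitened filter direction
        scores = (scene - mu) @ w / np.sqrt((target - mu) @ w)
        print(scores.argmax())  # the implanted pixel scores highest

    With this normalization the scores have unit variance over the background, so a detection threshold can be set directly in standard deviations.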

  11. Variability and periodicity of field M dwarfs revealed by multichannel monitoring

    NASA Astrophysics Data System (ADS)

    Rockenfeller, B.; Bailer-Jones, C. A. L.; Mundt, R.

    2006-03-01

    We present simultaneous, multiband photometric monitoring of 19 field dwarfs covering most of the M spectral sequence (M2-M9). Significant variability was found in seven objects in at least one out of the three channels I, R and G. Periodic variability was tested with a CLEAN power spectral analysis. Two objects, LHS370 (M5V) and 2M1707+64 (M9V), show periods of 5.9± 2.0 and 3.65± 0.1 h respectively. On account of the agreement with the typical values of v sin i published for M dwarfs (Mohanty & Basri 2003, ApJ, 583, 451), we claim these to be the objects' rotation periods. Three further objects show possible periods of a few hours. Comparing the variability amplitude in each channel with predictions based on the synthetic spectra of Allard et al. (2001, ApJ, 556, 357), we investigated the source of variability in LHS370 and 2M1707+64. For the latter, we find evidence for the presence of magnetically-induced cool spots at a temperature contrast of 4-8%, with a projected surface coverage factor of less than 0.075. Moreover, we can rule out dust clouds (as represented by the COND or DUSTY models) as the cause of the variability. No conclusion can be drawn in the case of LHS370. Comparing the frequency of occurrence of variability in this and various L dwarf samples published over the past few years, we find that variability is more common in field L dwarfs than in field M dwarfs (for amplitudes larger than 0.005 mag on timescales of 0.5 to 20 h). Using the homogeneous data sets of this work and Bailer-Jones & Mundt (2001, A&A, 367, 218), we find fractions of variable objects of 0.21± 0.11 among field M dwarfs and 0.70± 0.26 among field L dwarfs (and 0.29± 0.13, 0.48± 0.12 respectively if we take into account a larger yet more inhomogeneous sample). This is marginally significant (2σ deviation) and implies a change in the physical nature and/or extent of surface features when moving from M to L dwarfs.

  12. A new sampling scheme for developing metamodels with the zeros of Chebyshev polynomials

    NASA Astrophysics Data System (ADS)

    Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing

    2015-09-01

    The accuracy of metamodelling is determined by both the sampling and the approximation. This article proposes a new sampling method based on the zeros of Chebyshev polynomials to capture the sampling information effectively. First, the zeros of one-dimensional Chebyshev polynomials are applied to construct Chebyshev tensor product (CTP) sampling, and the CTP is then used to construct high-order multi-dimensional metamodels using the 'hypercube' polynomials. Second, the CTP sampling is further enhanced to develop Chebyshev collocation method (CCM) sampling, to construct the 'simplex' polynomials. The samples of CCM are randomly and directly chosen from the CTP samples. Two widely studied sampling methods, namely the Smolyak sparse grid and the Hammersley sequence, are used to demonstrate the effectiveness of the proposed sampling method. Several numerical examples are utilized to validate the approximation accuracy of the proposed metamodel under different dimensions.
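
    The building block of the CTP design is compact: the zeros of the degree-n Chebyshev polynomial of the first kind, x_k = cos((2k-1)π/(2n)), taken along each dimension and combined on a tensor grid. A brief sketch (function names are illustrative, not the authors' code):

        import numpy as np

        def chebyshev_zeros(n):
            """Zeros of the Chebyshev polynomial T_n on [-1, 1]."""
            k = np.arange(1, n + 1)
            return np.cos((2 * k - 1) * np.pi / (2 * n))

        def ctp_sites(n, dims):
            """Chebyshev tensor-product design: n**dims sample sites."""
            mesh = np.meshgrid(*([chebyshev_zeros(n)] * dims), indexing="ij")
            return np.stack([m.ravel() for m in mesh], axis=-1)

        print(ctp_sites(4, 2).shape)  # (16, 2) sites in [-1, 1]^2

    The n**dims growth of the tensor grid with dimension is what motivates the CCM variant, which draws its sites randomly from the CTP set.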

  13. Concurrent electromagnetic scattering analysis

    NASA Technical Reports Server (NTRS)

    Patterson, Jean E.; Cwik, Tom; Ferraro, Robert D.; Jacobi, Nathan; Liewer, Paulett C.; Lockhart, Thomas G.; Lyzenga, Gregory A.; Parker, Jay

    1989-01-01

    The computational power of the hypercube parallel computing architecture is applied to the solution of large-scale electromagnetic scattering and radiation problems. Three analysis codes have been implemented. A Hypercube Electromagnetic Interactive Analysis Workstation was developed to aid in the design and analysis of metallic structures such as antennas and to facilitate the use of these analysis codes. The workstation provides a general user environment for specification of the structure to be analyzed and graphical representations of the results.

  14. Asynchronous Communication Scheme For Hypercube Computer

    NASA Technical Reports Server (NTRS)

    Madan, Herb S.

    1988-01-01

    Scheme devised for asynchronous-message communication system for Mark III hypercube concurrent-processor network. Network consists of up to 1,024 processing elements connected electrically as though they were at corners of 10-dimensional cube. Each node contains two Motorola 68020 processors along with Motorola 68881 floating-point processor utilizing up to 4 megabytes of shared dynamic random-access memory. Scheme intended to support applications requiring passage of both polled or solicited and unsolicited messages.

  15. Image-Processing Software For A Hypercube Computer

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.

    1992-01-01

    Concurrent Image Processing Executive (CIPE) is software system intended to develop and use image-processing application programs on concurrent computing environment. Designed to shield programmer from complexities of concurrent-system architecture, it provides interactive image-processing environment for end user. CIPE utilizes architectural characteristics of particular concurrent system to maximize efficiency while preserving architectural independence from user and programmer. CIPE runs on Mark-IIIfp 8-node hypercube computer and associated SUN-4 host computer.

  16. Virtual time and time warp on the JPL hypercube. [operating system implementation for distributed simulation

    NASA Technical Reports Server (NTRS)

    Jefferson, David; Beckman, Brian

    1986-01-01

    This paper describes the concept of virtual time and its implementation in the Time Warp Operating System at the Jet Propulsion Laboratory. Virtual time is a distributed synchronization paradigm that is appropriate for distributed simulation, database concurrency control, real time systems, and coordination of replicated processes. The Time Warp Operating System is targeted toward the distributed simulation application and runs on a 32-node JPL Mark II Hypercube.

  17. Electrooptical adaptive switching network for the hypercube computer

    NASA Technical Reports Server (NTRS)

    Chow, E.; Peterson, J.

    1988-01-01

    An all-optical network design for the hyperswitch network using regular free-space interconnects between electronic processor nodes is presented. The adaptive routing model used is described, and an adaptive routing control example is presented. The design demonstrates that existing electrooptical techniques are sufficient for implementing efficient parallel architectures without the need for more complex means of implementing arbitrary interconnection schemes. The electrooptical hyperswitch network significantly improves the communication performance of the hypercube computer.

  18. Novel snapshot hyperspectral imager for fluorescence imaging

    NASA Astrophysics Data System (ADS)

    Chandler, Lynn; Chandler, Andrea; Periasamy, Ammasi

    2018-02-01

    Hyperspectral imaging has emerged as a new technique for the identification and classification of biological tissue. Benefiting from recent developments in sensor technology, the new class of hyperspectral imagers can capture entire hypercubes in single-shot operation, showing great potential for real-time imaging in the biomedical sciences. This paper explores the use of a SnapShot imager in fluorescence microscopy for the first time. Utilizing the latest imaging sensor, the SnapShot imager is both compact and attachable via C-mount to any commercially available light microscope. Using this setup, fluorescence hypercubes of several cells were generated, containing both spatial and spectral information. The fluorescence images were acquired in one-shot operation over the full emission range from the visible to the near infrared (VIS-IR). The paper presents hypercube images obtained from example tissues (475-630 nm). This study demonstrates the technique's potential for real-time monitoring in cell biology and biomedical applications.

  19. Inferring patterns in mitochondrial DNA sequences through hypercube independent spanning trees.

    PubMed

    Silva, Eduardo Sant Ana da; Pedrini, Helio

    2016-03-01

    Given a graph G, a set of spanning trees rooted at a vertex r of G is said to be vertex/edge independent if, for each vertex v of G, v≠r, the paths from r to v in any pair of trees are vertex/edge disjoint. Independent spanning trees (ISTs) provide a number of advantages in data broadcasting due to their fault-tolerant properties. For this reason, some studies have addressed the issue by providing mechanisms for constructing independent spanning trees efficiently. In this work, we investigate how to construct independent spanning trees on hypercubes, generated from spanning binomial trees, and how to use them to predict mitochondrial DNA sequence parts through paths on the hypercube. The prediction works both for inferring mitochondrial DNA sequences comprised of six bases and for inferring anomalies that probably should not belong to the mitochondrial DNA standard. Copyright © 2016 Elsevier Ltd. All rights reserved.
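
    The spanning binomial trees that such constructions start from have a compact description on the hypercube: labelling vertices by bit strings with the root at 0, each vertex's parent is obtained by clearing its lowest set bit. The sketch below shows only this building block, not the paper's full independent-spanning-tree construction, which varies the dimension ordering.

        def binomial_tree_parents(n):
            """Parent map of a spanning binomial tree of the hypercube Q_n,
            rooted at vertex 0: clear the lowest set bit of each label."""
            return {v: v & (v - 1) for v in range(1, 2 ** n)}

        parents = binomial_tree_parents(3)
        print(parents[5], parents[6])  # 5 (101) -> 4 (100), 6 (110) -> 4 (100)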

  20. A temperate rocky super-Earth transiting a nearby cool star

    NASA Astrophysics Data System (ADS)

    Dittmann, Jason A.; Irwin, Jonathan M.; Charbonneau, David; Bonfils, Xavier; Astudillo-Defru, Nicola; Haywood, Raphaëlle D.; Berta-Thompson, Zachory K.; Newton, Elisabeth R.; Rodriguez, Joseph E.; Winters, Jennifer G.; Tan, Thiam-Guan; Almenara, Jose-Manuel; Bouchy, François; Delfosse, Xavier; Forveille, Thierry; Lovis, Christophe; Murgas, Felipe; Pepe, Francesco; Santos, Nuno C.; Udry, Stephane; Wünsche, Anaël; Esquerdo, Gilbert A.; Latham, David W.; Dressing, Courtney D.

    2017-04-01

    M dwarf stars, which have masses less than 60 per cent that of the Sun, make up 75 per cent of the population of the stars in the Galaxy. The atmospheres of orbiting Earth-sized planets are observationally accessible via transmission spectroscopy when the planets pass in front of these stars. Statistical results suggest that the nearest transiting Earth-sized planet in the liquid-water, habitable zone of an M dwarf star is probably around 10.5 parsecs away. A temperate planet has been discovered orbiting Proxima Centauri, the closest M dwarf, but it probably does not transit and its true mass is unknown. Seven Earth-sized planets transit the very low-mass star TRAPPIST-1, which is 12 parsecs away, but their masses and, particularly, their densities are poorly constrained. Here we report observations of LHS 1140b, a planet with a radius of 1.4 Earth radii transiting a small, cool star (LHS 1140) 12 parsecs away. We measure the mass of the planet to be 6.6 times that of Earth, consistent with a rocky bulk composition. LHS 1140b receives an insolation of 0.46 times that of Earth, placing it within the liquid-water, habitable zone. With 90 per cent confidence, we place an upper limit on the orbital eccentricity of 0.29. The circular orbit is unlikely to be the result of tides and therefore was probably present at formation. Given its large surface gravity and cool insolation, the planet may have retained its atmosphere despite the greater luminosity (compared to the present-day) of its host star in its youth. Because LHS 1140 is nearby, telescopes currently under construction might be able to search for specific atmospheric gases in the future.

  1. A temperate rocky super-Earth transiting a nearby cool star.

    PubMed

    Dittmann, Jason A; Irwin, Jonathan M; Charbonneau, David; Bonfils, Xavier; Astudillo-Defru, Nicola; Haywood, Raphaëlle D; Berta-Thompson, Zachory K; Newton, Elisabeth R; Rodriguez, Joseph E; Winters, Jennifer G; Tan, Thiam-Guan; Almenara, Jose-Manuel; Bouchy, François; Delfosse, Xavier; Forveille, Thierry; Lovis, Christophe; Murgas, Felipe; Pepe, Francesco; Santos, Nuno C; Udry, Stephane; Wünsche, Anaël; Esquerdo, Gilbert A; Latham, David W; Dressing, Courtney D

    2017-04-19

    M dwarf stars, which have masses less than 60 per cent that of the Sun, make up 75 per cent of the population of the stars in the Galaxy. The atmospheres of orbiting Earth-sized planets are observationally accessible via transmission spectroscopy when the planets pass in front of these stars. Statistical results suggest that the nearest transiting Earth-sized planet in the liquid-water, habitable zone of an M dwarf star is probably around 10.5 parsecs away. A temperate planet has been discovered orbiting Proxima Centauri, the closest M dwarf, but it probably does not transit and its true mass is unknown. Seven Earth-sized planets transit the very low-mass star TRAPPIST-1, which is 12 parsecs away, but their masses and, particularly, their densities are poorly constrained. Here we report observations of LHS 1140b, a planet with a radius of 1.4 Earth radii transiting a small, cool star (LHS 1140) 12 parsecs away. We measure the mass of the planet to be 6.6 times that of Earth, consistent with a rocky bulk composition. LHS 1140b receives an insolation of 0.46 times that of Earth, placing it within the liquid-water, habitable zone. With 90 per cent confidence, we place an upper limit on the orbital eccentricity of 0.29. The circular orbit is unlikely to be the result of tides and therefore was probably present at formation. Given its large surface gravity and cool insolation, the planet may have retained its atmosphere despite the greater luminosity (compared to the present-day) of its host star in its youth. Because LHS 1140 is nearby, telescopes currently under construction might be able to search for specific atmospheric gases in the future.

  2. Donepezil is associated with decreased in-hospital mortality as a result of pneumonia among older patients with dementia: A retrospective cohort study.

    PubMed

    Abe, Yasuko; Shimokado, Kentaro; Fushimi, Kiyohide

    2018-02-01

    Pneumonia is one of the major causes of mortality in older adults. As the average lifespan has extended and new modalities to prevent or treat pneumonia have been developed, the factors that affect the length of hospital stay (LHS) and in-hospital mortality of older patients with pneumonia have changed. The objective of the present study was to determine the factors associated with LHS and mortality as a result of pneumonia among older patients with dementia. With a retrospective cohort study design, we used data derived from the Japanese Administrative Database and the diagnosis procedure combination/per diem payment system (DPC/PDPS) database. There were 39 336 admissions of older patients for pneumonia between August 2010 and March 2012. Patients with incomplete data were excluded, leaving 25 602 patients for analysis. Having dementia decreased mortality (OR 0.71, P < 0.001) and increased LHS. Multiple logistic regression analysis identified donepezil as an independent factor that decreased mortality in patients with dementia (OR 0.36, P < 0.001). Donepezil was prescribed for 28.7% of these patients, and their mortality rate was significantly lower than those of patients with dementia who were not treated with donepezil and of patients without dementia. The mortality rate was higher for patients with dementia who were not treated with donepezil than for patients who did not have dementia. All other factors that influenced LHS and mortality were similar to those reported by others. Donepezil seems to decrease in-hospital mortality as a result of pneumonia among older patients with dementia. Geriatr Gerontol Int 2018; 18: 269-275. © 2017 Japan Geriatrics Society.

  3. Changes in antioxidants are critical in determining cell responses to short- and long-term heat stress.

    PubMed

    Sgobba, Alessandra; Paradiso, Annalisa; Dipierro, Silvio; De Gara, Laura; de Pinto, Maria Concetta

    2015-01-01

    Heat stress can have deleterious effects on plant growth by impairing several physiological processes. Plants have several defense mechanisms that enable them to cope with high temperatures. The synthesis and accumulation of heat shock proteins (HSPs), as well as the maintenance of an opportune redox balance, play key roles in conferring thermotolerance to plants. In this study, changes in redox parameters, the activity and/or expression of reactive oxygen species (ROS) scavenging enzymes, and the expression of two HSPs were studied in tobacco Bright Yellow-2 (TBY-2) cells subjected to moderate short-term heat stress (SHS) and long-term heat stress (LHS). The results indicate that TBY-2 cells subjected to SHS suddenly and transiently enhance antioxidant systems, thus maintaining redox homeostasis and avoiding oxidative damage. The simultaneous increase in HSPs overcomes the SHS and maintains the metabolic functionality of cells. In contrast, the exposure of cells to LHS significantly reduces cell growth and increases cell death. In the first phase of LHS, cells enhance antioxidant systems to prevent the formation of an oxidizing environment. Under prolonged heat stress, the antioxidant systems, and particularly the enzymatic ones, are inactivated. As a consequence, an increase in H2O2, lipid peroxidation and protein oxidation occurs. This establishment of oxidative stress could be responsible for the increased cell death. The rescue of cell growth and cell viability, observed when TBY-2 cells were pretreated with galactono-γ-lactone, the last precursor of ascorbate, and glutathione before exposure to LHS, highlights the crucial role of antioxidants in the acquisition of basal thermotolerance. © 2014 Scandinavian Plant Physiology Society.

  4. Early Discharge in Low-Risk Patients Hospitalized for Acute Coronary Syndromes: Feasibility, Safety and Reasons for Prolonged Length of Stay.

    PubMed

    Laurencet, Marie-Eva; Girardin, François; Rigamonti, Fabio; Bevand, Anne; Meyer, Philippe; Carballo, David; Roffi, Marco; Noble, Stéphane; Mach, François; Gencer, Baris

    2016-01-01

    Length of hospital stay (LHS) is an indicator of clinical effectiveness. Early hospital discharge (≤72 hours) is recommended in patients with acute coronary syndromes (ACS) at low risk of complications, but reasons for prolonged LHS are poorly reported. We collected data on ACS patients hospitalized at the Geneva University Hospitals from 1st July 2013 to 30th June 2015 and used the Zwolle index score to identify patients at low risk (≤3 points). We assessed the proportion of eligible patients who were successfully discharged within 72 hours and the reasons for prolonged LHS. Outcomes were defined as adherence to recommended therapies, major adverse events at 30 days and patients' satisfaction using a Likert-scale patient-reported questionnaire. Among 370 patients with ACS, 255 (68.9%) were at low risk of complications, but only 128 (50.2%) were eligible for early discharge; the other 127 patients (49.8%) had clinical reasons for prolonged LHS (e.g. staged coronary revascularization, cardiac monitoring). Of the eligible patients, only 45 (35.2%) benefitted from an early discharge. Reasons for delay in discharge in the remaining 83 patients (51.2%) were mainly delays in additional investigations, titration of medical therapy, and admission or discharge during weekends. In the early discharge group, at 30 days, only one patient (2.2%) had an adverse event (minor bleeding), and 97% of patients were satisfied with the medical care. Early discharge was successfully achieved in one third of eligible ACS patients at low risk of complications and appeared sufficiently safe while being overall appreciated by the patients.

  5. Serum albumin levels in burn people are associated to the total body surface burned and the length of hospital stay but not to the initiation of the oral/enteral nutrition

    PubMed Central

    Pérez-Guisado, Joaquín; de Haro-Padilla, Jesús M; Rioja, Luis F; DeRosier, Leo C; de la Torre, Jorge I

    2013-01-01

    Objective: Serum albumin levels have been used to evaluate the severity of burns and the protein nutrition status in burn patients, specifically the response of the burn patient to nutrition, although it has not been proven that all of these associations are fully founded. The aim of this retrospective study was to determine the relationship of serum albumin levels at 3-7 days after the burn injury with the total body surface area burned (TBSA), the length of hospital stay (LHS) and the initiation of oral/enteral nutrition (IOEN). Subject and methods: The study was carried out with the health records of patients who met the inclusion criteria and were admitted to the burn units at the University Hospital of Reina Sofia (Córdoba, Spain) and UAB Hospital at Birmingham (Alabama, USA) over a 10-year period, between January 2000 and December 2009. We studied the statistical association of serum albumin levels with TBSA, LHS and IOEN by one-way ANOVA. The confidence interval chosen for statistical differences was 95%. Duncan's test was used to determine the number of statistically significantly different groups. Results: Data are expressed as mean±standard deviation. We found that serum albumin levels were associated with TBSA and LHS, with greater to lesser serum albumin levels associated with lesser to greater TBSA and LHS. We did not find a statistical association with IOEN. Conclusion: We conclude that serum albumin levels are not a nutritional marker in burn patients, although they could be used as a simple clinical tool to identify the severity of the burn wounds, represented by the total body surface area burned and the length of hospital stay. PMID:23875122

  6. No apparent association between bipolar disorder and cancer in a large epidemiological study of outpatients in a managed care population.

    PubMed

    Kahan, Natan R; Silverman, Barbara; Liphshitz, Irena; Waitman, Dan-Andrei; Ben-Zion, Itzhak; Ponizovsky, Alexander M; Weizman, Abraham; Grinshpoon, Alexander

    2018-03-01

    An association between bipolar disorder (BD) and cancer risk has been reported. The purpose of this study was to investigate this association through linkage analysis of a national HMO database and a national cancer registry. All members of the Leumit Health Services (LHS) HMO of Israel from 2000 to 2012 were included. Members with a recorded diagnosis of BD and a record of at least one written or dispensed prescription for pharmacotherapy for treatment of BD were classified as patients with BD. We linked the LHS population with the Israel National Cancer Registry database to capture all cases of cancer reported. Standardized incidence ratios (SIRs) for cancer in the BD population as compared with non-BD LHS members were calculated. A total of 870 323 LHS members were included in the analysis, 3304 of whom met the criteria for inclusion in the BD arm. We identified 24 515 and 110 cancer cases among members without BD and with BD, respectively. Persons with BD were no more likely than other HMO members to be diagnosed with cancer during the follow-up period [SIR, males=0.91, 95% confidence interval (CI): 0.66-1.22; SIR, females=1.15, 95% CI: 0.89-1.47]. Sensitivity analysis using different criteria for positive BD classification (lithium treatment alone or registered physician diagnosis) had no effect on the estimate of cancer risk. A statistically nonsignificant association between breast cancer and BD among women was observed (SIR=1.24, 95% CI: 0.79-1.86). These findings do not corroborate previously reported associations between BD and elevated cancer risk.

  7. Serum albumin levels in burn people are associated to the total body surface burned and the length of hospital stay but not to the initiation of the oral/enteral nutrition.

    PubMed

    Pérez-Guisado, Joaquín; de Haro-Padilla, Jesús M; Rioja, Luis F; Derosier, Leo C; de la Torre, Jorge I

    2013-01-01

    Serum albumin levels have been used to evaluate the severity of burns and the protein nutrition status in burn patients, specifically the response of the burn patient to nutrition, although it has not been proven that all of these associations are fully founded. The aim of this retrospective study was to determine the relationship of serum albumin levels at 3-7 days after the burn injury with the total body surface area burned (TBSA), the length of hospital stay (LHS) and the initiation of oral/enteral nutrition (IOEN). The study was carried out with the health records of patients who met the inclusion criteria and were admitted to the burn units at the University Hospital of Reina Sofia (Córdoba, Spain) and UAB Hospital at Birmingham (Alabama, USA) over a 10-year period, between January 2000 and December 2009. We studied the statistical association of serum albumin levels with TBSA, LHS and IOEN by one-way ANOVA. The confidence interval chosen for statistical differences was 95%. Duncan's test was used to determine the number of statistically significantly different groups. Results are expressed as mean±standard deviation. We found that serum albumin levels were associated with TBSA and LHS, with greater to lesser serum albumin levels associated with lesser to greater TBSA and LHS. We did not find a statistical association with IOEN. We conclude that serum albumin levels are not a nutritional marker in burn patients, although they could be used as a simple clinical tool to identify the severity of the burn wounds, represented by the total body surface area burned and the length of hospital stay.

  8. Predicting prolonged length of hospital stay in older emergency department users: use of a novel analysis method, the Artificial Neural Network.

    PubMed

    Launay, C P; Rivière, H; Kabeshova, A; Beauchet, O

    2015-09-01

    To examine the performance criteria (i.e., sensitivity, specificity, positive predictive value [PPV], negative predictive value [NPV], likelihood ratios [LR], area under the receiver operating characteristic curve [AUROC]) of a 10-item brief geriatric assessment (BGA) for the prediction of prolonged length of hospital stay (LHS) in older patients hospitalized in acute care wards after an emergency department (ED) visit, using artificial neural networks (ANNs); and to describe the contribution of each BGA item to the predictive accuracy using the AUROC value. A total of 993 geriatric ED users admitted to acute care wards were included in this prospective cohort study. Age >85 years, male gender, polypharmacy, non-use of formal and/or informal home-help services, history of falls, temporal disorientation, place of living, reason for and nature of ED admission, and use of psychoactive drugs composed the 10 items of the BGA and were recorded at ED admission. Prolonged LHS was defined as the top third of LHS. The ANNs were conducted using two feed-forward models (a multilayer perceptron [MLP] and a modified MLP). The best performance was reported with the modified MLP involving the 10 items (sensitivity=62.7%; specificity=96.6%; PPV=87.1; NPV=87.5; positive LR=18.2; AUC=90.5). In this model, the presence of chronic conditions had the highest contribution (51.3%) to the AUROC value. The 10-item BGA appears to accurately predict prolonged LHS, using the ANN MLP method, showing the best performance criteria reported to date. The presence of chronic conditions was the main contributor to the predictive accuracy. Copyright © 2015 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
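
    The abstract does not specify the "modified MLP" architecture; as a rough illustration of the general approach (not the study's exact model), the sketch below trains a small feed-forward classifier on ten binary BGA-style items. The feature values here are synthetic placeholders.

        # Sketch: flag prolonged LHS (top third) from 10 binary BGA items.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        X = rng.integers(0, 2, size=(993, 10)).astype(float)  # placeholder items
        y = rng.integers(0, 2, size=993)                       # 1 = prolonged LHS

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000,
                            random_state=0).fit(X_tr, y_tr)
        print("AUROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

    With real patient records in place of the random placeholders, the same pattern yields the sensitivity/specificity figures of the kind reported above.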

  9. Optimization Technology of the LHS-1 Strain for Degrading Gallnut Water Extract and Appraisal of Benzene Ring Derivatives from Fermented Gallnut Water Extract Pyrolysis by Py-GC/MS.

    PubMed

    Wang, Chengzhang; Li, Wenjun

    2017-12-20

    Gallnut water extract (GWE) is enriched to 80-90% in gallnut tannic acid (TA). In order to study the biodegradation of GWE into gallic acid (GA), the LHS-1 strain, a variant of Aspergillus niger, was chosen to determine the optimal degradation parameters for maximum production of GA by the response surface method. Pyrolysis-gas chromatography-mass spectrometry (Py-GC/MS) was first applied to appraise benzene ring derivatives of fermented GWE (FGWE) pyrolysis by comparison with the pyrolytic products of a tannic acid standard sample (TAS) and GWE. The results showed that the optimum conditions were 31 °C and pH 5, with a 50-h incubation period and 0.1 g·L⁻¹ of TA as substrate. The maximum yields of GA and tannase were 63-65 mg·mL⁻¹ and 1.17 U·mL⁻¹, respectively. Over 20 kinds of compounds were identified as linear hydrocarbons and benzene ring derivatives based on GA and glucose. The key benzene ring derivatives were 3,4,5-trimethoxybenzoic acid methyl ester, 3-methoxy-1,2-benzenediol, and 4-hydroxy-3,5-dimethoxy-benzoic acid hydrazide.

  10. Sensitivity analysis of static resistance of slender beam under bending

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valeš, Jan

    2016-06-08

    The paper deals with statistical and sensitivity analyses of the resistance of simply supported I-beams under bending. The resistance was solved by the geometrically nonlinear finite element method in the program ANSYS. The beams are modelled with initial geometrical imperfections following the first eigenmode of buckling. The imperfections, together with the geometrical characteristics of the cross section and the material characteristics of the steel, were considered as random quantities. The Latin Hypercube Sampling method was applied to perform the statistical and sensitivity analyses of the resistance.
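
    For concreteness, a minimal Latin hypercube draw of a few such random inputs might look like the sketch below (variable names and ranges are illustrative assumptions, not the paper's data; scipy's qmc module provides the sampler).

        # Sketch: LHS design for three uncertain beam inputs.
        from scipy.stats import qmc

        sampler = qmc.LatinHypercube(d=3, seed=0)
        u = sampler.random(n=100)  # 100 stratified points in [0, 1)^3
        # Assumed bounds: yield strength [MPa], flange width [mm],
        # initial imperfection amplitude [mm]
        lo, hi = [235.0, 80.0, 0.0], [355.0, 120.0, 10.0]
        samples = qmc.scale(u, lo, hi)
        print(samples[:3])

    Each sample row would then drive one nonlinear FE run, and the resulting resistances form the statistical population for the sensitivity analysis.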

  11. Multiphase complete exchange on a circuit switched hypercube

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.

    1991-01-01

    On a distributed memory parallel computer, the complete exchange (all-to-all personalized) communication pattern requires each of n processors to send a different block of data to each of the remaining n - 1 processors. This pattern is at the heart of many important algorithms, most notably the matrix transpose. For a circuit switched hypercube of dimension d (n = 2^d), two algorithms for achieving complete exchange are known. These are (1) the Standard Exchange approach, which employs d transmissions of 2^(d-1) blocks each and is useful for small block sizes, and (2) the Optimal Circuit Switched algorithm, which employs 2^d - 1 transmissions of 1 block each and is best for large block sizes. A unified multiphase algorithm is described that includes these two algorithms as special cases. The complete exchange on a hypercube of dimension d and block size m is achieved by carrying out k partial exchanges on subcubes of dimension d_i, where sum_{i=1}^{k} d_i = d, with effective block size m_i = m·2^(d-d_i). When k = d and all d_i = 1, this corresponds to algorithm (1) above. For the case of k = 1 and d_1 = d, this becomes the circuit switched algorithm (2). Varying the subcube dimensions d_i varies the effective block size and permits a compromise between the data permutation and block transmission overhead of (1) and the startup overhead of (2). For a hypercube of dimension d, the number of possible combinations of subcubes is p(d), the number of partitions of the integer d. This is an exponential but very slowly growing function, and it is feasible to search over these partitions for the best combination for a given message size. The approach was analyzed for, and implemented on, the Intel iPSC/860 circuit switched hypercube. Measurements show good agreement with predictions and demonstrate that the multiphase approach can substantially improve performance for block sizes in the 0 to 160 byte range. This range, which corresponds to 0 to 40 floating point numbers per processor, is commonly encountered in practical numeric applications. The multiphase technique is applicable to all circuit-switched hypercubes that use the common e-cube routing strategy.
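
    Since p(d) is small, the best schedule can be found by brute force. The sketch below enumerates the partitions of d and scores each under a simple assumed linear cost model (a startup time t_s plus a per-byte time t_b per transmission; the coefficients are illustrative, not measured iPSC/860 values).

        # Sketch: pick the cheapest multiphase complete-exchange schedule.
        def partitions(n, max_part=None):
            """Yield integer partitions of n as non-increasing tuples."""
            max_part = max_part or n
            if n == 0:
                yield ()
                return
            for k in range(min(n, max_part), 0, -1):
                for rest in partitions(n - k, k):
                    yield (k,) + rest

        def exchange_time(parts, d, m_bytes, t_s=75e-6, t_b=2.8e-9):
            # Phase i on a subcube of dimension d_i: (2^d_i - 1) transmissions,
            # each of effective block size m_i = m * 2^(d - d_i) bytes.
            return sum((2**di - 1) * (t_s + m_bytes * 2**(d - di) * t_b)
                       for di in parts)

        d, m = 6, 64  # 64-processor cube, 64-byte blocks (assumed)
        best = min(partitions(d), key=lambda p: exchange_time(p, d, m))
        print("best schedule:", best, "time:", exchange_time(best, d, m))

    The two known algorithms fall out as the extreme partitions (1, 1, ..., 1) and (d); intermediate partitions trade startup count against transmitted volume exactly as described above.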

  12. Revolution or evolution? An analysis of E-health innovation and impact using a hypercube model.

    PubMed

    Wu, Jen-Her; Huang, An-Sheng; Hisa, Tzyh-Lih; Tsai, Hsien-Tang

    2006-01-01

    This study utilises a hypercube innovation model to analyse the changes in both healthcare informatics and medical-related delivery models across the innovations from tele-healthcare and electronic healthcare (E-healthcare) to mobile healthcare (M-healthcare). Further, the critical impacts of these E-health innovations on the stakeholders (healthcare customers, hospitals, healthcare complementary providers and healthcare regulators) are identified. Thereafter, the critical capabilities for adopting each innovation are discussed.

  13. Guidance and control of MIR TDL radiation via flexible hollow metallic rectangular pipes and fibers for possible LHS and other optical system compaction and integration

    NASA Technical Reports Server (NTRS)

    Yu, C.

    1983-01-01

    Flexible hollow metallic rectangular pipes and infrared fibers are proposed as alternate media for collection, guidance and manipulation of mid-infrared tunable diode laser (TDL) radiation. Certain features of such media are found to be useful for control of TDL far field patterns, polarization and possibly intensity fluctuations. Such improvement in dimension compatibility may eventually lead to laser heterodyne spectroscopy (LHS) and optical communication system compaction and integration. Infrared optical fiber and the compound parabolic coupling of light into a hollow pipe waveguide are discussed as well as the design of the waveguide.

  14. Measuring Atmospheric Abundances and Rotation of a Brown Dwarf with a Measured Mass and Radius

    NASA Astrophysics Data System (ADS)

    Birkby, Jayne

    2015-08-01

    There are no cool brown dwarfs with both a well-characterized atmosphere and a measured mass and radius. LHS 6343, a brown dwarf transiting one member of an M+M binary in the Kepler field, provides the first opportunity to tie theoretical atmospheric models to the observed brown dwarf mass-radius diagram. We propose four half-nights of observations with NIRSPAO in 2015B to measure spectral features in LHS 6343 C by detecting the relative motions of absorption features during the system's orbit. In addition to abundances, we will directly measure the brown dwarf's projected rotational velocity and mass.

  15. Development of the Learning Health System Researcher Core Competencies.

    PubMed

    Forrest, Christopher B; Chesley, Francis D; Tregear, Michelle L; Mistry, Kamila B

    2017-08-04

    To develop core competencies for learning health system (LHS) researchers to guide the development of training programs. Data were obtained from literature review, expert interviews, a modified Delphi process, and consensus development meetings. The competencies were developed from August to December 2016 using qualitative methods. The literature review formed the basis for the initial draft of a competency domain framework. Key informant semi-structured interviews, a modified Delphi survey, and three expert panel (n = 19 members) consensus development meetings produced the final set of competencies. The iterative development process yielded seven competency domains: (1) systems science; (2) research questions and standards of scientific evidence; (3) research methods; (4) informatics; (5) ethics of research and implementation in health systems; (6) improvement and implementation science; and (7) engagement, leadership, and research management. A total of 33 core competencies were prioritized across these seven domains. The real-world milieu of LHS research, the embeddedness of the researcher within the health system, and engagement of stakeholders are distinguishing characteristics of this emerging field. The LHS researcher core competencies can be used to guide the development of learning objectives, evaluation methods, and curricula for training programs. © Health Research and Educational Trust.

  16. Carbon dot-Au(i)Ag(0) assembly for the construction of an artificial light harvesting system.

    PubMed

    Jana, Jayasmita; Aditya, Teresa; Pal, Tarasankar

    2018-03-06

    Artificial light harvesting systems (LHS) with inorganic counterparts are considered to be robust as well as mechanistically simple, where the system follows the donor-acceptor principle with an unchanged structural pattern. Plasmonic gold or silver nanoparticles are mostly chosen as inorganic counterparts to design artificial LHS. To capitalize on its electron accepting capability, Au(i) has been considered in this work for the synergistic stabilization of a system with intriguingly fluorescing silver(0) clusters produced in situ. Thus a stable fluorescent Au(i)Ag(0) assembly is generated with electron accepting capabilities. On the other hand, carbon dots have evolved as new fluorescent probes due to their unique physicochemical properties. Utilizing the simple electronic behavior of carbon dots, an electronic interaction between the fluorescent Au(i)Ag(0) and a carbon dot has been investigated for the construction of a new artificial light harvesting system. This coinage metal assembly allows surface energy transfer where it acts as an acceptor, while the carbon dot behaves as a good donor. The energy transfer efficiency has been calculated experimentally to be significant (81.3%) and the Au(i)Ag(0)-carbon dot assembly paves the way for efficient artificial LHS.

  17. Assessing performance and validating finite element simulations using probabilistic knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolin, Ronald M.; Rodriguez, E. A.

    Two probabilistic approaches for assessing performance are presented. The first approach assesses probability of failure by simultaneously modeling all likely events. The probability each event causes failure along with the event's likelihood of occurrence contribute to the overall probability of failure. The second assessment method is based on stochastic sampling using an influence diagram. Latin-hypercube sampling is used to stochastically assess events. The overall probability of failure is taken as the maximum probability of failure of all the events. The Likelihood of Occurrence simulation suggests failure does not occur while the Stochastic Sampling approach predicts failure. The Likelihood of Occurrence results are used to validate finite element predictions.
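
    A minimal sketch of the second approach (the limit states, distributions and sample counts below are invented placeholders): estimate each event's failure probability from a Latin hypercube sample of its uncertain inputs, then take the maximum across events.

        # Sketch: max-over-events failure probability via LHS.
        import numpy as np
        from scipy.stats import qmc, norm

        def failure_probability(limit_state, means, stds, n=10_000, seed=0):
            """P(limit_state(x) < 0) with independent normal inputs."""
            u = qmc.LatinHypercube(d=len(means), seed=seed).random(n)
            x = norm.ppf(u) * np.asarray(stds) + np.asarray(means)
            return float(np.mean(limit_state(x) < 0.0))

        # Two hypothetical events, each with g = capacity - demand:
        events = [
            (lambda x: x[:, 0] - x[:, 1],       [300.0, 200.0], [30.0, 40.0]),
            (lambda x: x[:, 0] - 1.5 * x[:, 1], [300.0, 200.0], [30.0, 40.0]),
        ]
        p_events = [failure_probability(g, m, s) for g, m, s in events]
        print("per-event:", p_events, "overall:", max(p_events))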

  18. Constraining Unsaturated Hydraulic Parameters Using the Latin Hypercube Sampling Method and Coupled Hydrogeophysical Approach

    NASA Astrophysics Data System (ADS)

    Farzamian, Mohammad; Monteiro Santos, Fernando A.; Khalil, Mohamed A.

    2017-12-01

    The coupled hydrogeophysical approach has proved to be a valuable tool for improving the use of geoelectrical data for hydrological model parameterization. In the coupled approach, hydrological parameters are directly inferred from geoelectrical measurements in a forward manner to eliminate the uncertainty connected to the independent inversion of electrical resistivity data. Several numerical studies have been conducted to demonstrate the advantages of a coupled approach; however, only a few attempts have been made to apply the coupled approach to actual field data. In this study, we developed a 1D coupled hydrogeophysical code to estimate the van Genuchten-Mualem model parameters, K_s, n, θ_r and α, from time-lapse vertical electrical sounding data collected during a constant inflow infiltration experiment. The van Genuchten-Mualem parameters were sampled using the Latin hypercube sampling method to provide full coverage of the range of each parameter from their distributions. By applying the coupled approach, vertical electrical sounding data were coupled to the hydrological models inferred from the van Genuchten-Mualem parameter samples to investigate the feasibility of constraining the hydrological model. The key approaches taken in the study are to (1) integrate electrical resistivity and hydrological data while avoiding data inversion, (2) estimate the total water mass recovery of the electrical resistivity data and consider it in the evaluation of the van Genuchten-Mualem parameters and (3) correct the influence of subsurface temperature fluctuations during the infiltration experiment on the electrical resistivity data. The results of the study revealed that the coupled hydrogeophysical approach can improve the value of geophysical measurements in hydrological model parameterization. However, the approach cannot overcome the technical limitations of the geoelectrical method associated with resolution and with water mass recovery.
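
    A sketch of such a parameter design (the ranges are illustrative assumptions; K_s and α are drawn log-uniformly, as is common for strictly positive hydraulic parameters):

        # Sketch: LHS draws of van Genuchten-Mualem parameters.
        import numpy as np
        from scipy.stats import qmc

        # Ks [m/s], n [-], theta_r [-], alpha [1/m] -- assumed ranges
        log_scale = np.array([True, False, False, True])
        lo = np.array([1e-7, 1.1, 0.01, 0.5])
        hi = np.array([1e-4, 3.0, 0.10, 10.0])

        u = qmc.LatinHypercube(d=4, seed=1).random(200)
        lo_t = np.where(log_scale, np.log10(lo), lo)
        hi_t = np.where(log_scale, np.log10(hi), hi)
        samples = lo_t + u * (hi_t - lo_t)
        samples[:, log_scale] = 10.0 ** samples[:, log_scale]
        print(samples[:2])

    Each parameter set defines one candidate hydrological model, whose simulated resistivity response is then compared against the time-lapse sounding data.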

  19. Implementation and Performance Analysis of Parallel Assignment Algorithms on a Hypercube Computer.

    DTIC Science & Technology

    1987-12-01

    coupled processors because of the degree of interaction between processors imposed by the global memory [HwB84]. Another sub-class of MIMD... interaction between the individual processors [MuA87]. Many of the commercial MIMD computers available today are loosely coupled [HwB84]. 2.1.3 The Hypercube... Alpha-beta is a method usually employed in the solution of two-person zero-sum games like chess and checkers [Qui87]. The basic approach of the alpha

  20. Stochastic Investigation of Natural Frequency for Functionally Graded Plates

    NASA Astrophysics Data System (ADS)

    Karsh, P. K.; Mukhopadhyay, T.; Dey, S.

    2018-03-01

    This paper presents a stochastic natural frequency analysis of functionally graded material (FGM) plates using an artificial neural network (ANN) approach. Latin hypercube sampling is utilised to train the ANN model. The proposed algorithm for stochastic natural frequency analysis of FGM plates is validated and verified against the original finite element method and Monte Carlo simulation (MCS). The combined stochastic variation of input parameters such as elastic modulus, shear modulus, Poisson's ratio, and mass density is considered. A power law is applied to distribute the material properties across the thickness. The present ANN model reduces the required sample size and is found to be computationally efficient compared with conventional Monte Carlo simulation.
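
    The workflow reads as: draw an LHS design, evaluate the expensive model at those points, fit the ANN, then run cheap Monte Carlo on the surrogate. A compact sketch follows (the closed-form "model" stands in for the FEM frequency solve; ranges and architecture are illustrative assumptions):

        # Sketch: LHS-trained ANN surrogate, then MCS on the surrogate.
        import numpy as np
        from scipy.stats import qmc
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        def model(x):  # placeholder for the finite element frequency solve
            E, G, nu, rho = x.T
            return np.sqrt(E / rho) * (1.0 + 0.1 * G / E - 0.2 * nu)

        lo = np.array([60e9, 25e9, 0.25, 2500.0])   # E, G, nu, rho (assumed)
        hi = np.array([220e9, 85e9, 0.35, 8000.0])

        X_train = qmc.scale(qmc.LatinHypercube(d=4, seed=0).random(256), lo, hi)
        y_train = model(X_train)
        ann = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(32, 32),
                                         max_iter=5000, random_state=0))
        ann.fit(X_train, y_train)

        X_mcs = np.random.default_rng(1).uniform(lo, hi, size=(100_000, 4))
        f = ann.predict(X_mcs)
        print("frequency mean:", f.mean(), "std:", f.std())

    The 256 surrogate training runs replace the hundred thousand direct FEM evaluations that plain MCS would need, which is the efficiency gain claimed above.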

  1. Tunable laser heterodyne spectrometer measurements of atmospheric species

    NASA Technical Reports Server (NTRS)

    Allario, F.; Katzberg, S. J.; Hoell, J. M.

    1983-01-01

    It is pointed out that spectroscopic measurements conducted with the aid of tunable laser heterodyne spectrometers in the 3-30 micron range of the electromagnetic spectrum have the potential to measure the vertical profiles of tenuous gas molecules in the atmosphere with ultra high spectral resolution and great sensitivity. Programs related to the realization of this potential have been conducted for some time, and a Laser Heterodyne Spectrometer (LHS) experiment was developed. The present investigation has the objective to provide an overview of the LHS concept for measuring the vertical profiles of tenuous gas molecules in the upper atmosphere from space and airborne platforms, and to discuss the sensitivity ranges for this technique.

  2. Advancing Continuous Predictive Analytics Monitoring: Moving from Implementation to Clinical Action in a Learning Health System.

    PubMed

    Keim-Malpass, Jessica; Kitzmiller, Rebecca R; Skeeles-Worley, Angela; Lindberg, Curt; Clark, Matthew T; Tai, Robert; Calland, James Forrest; Sullivan, Kevin; Randall Moorman, J; Anderson, Ruth A

    2018-06-01

    In the intensive care unit, clinicians monitor a diverse array of data inputs to detect early signs of impending clinical demise or improvement. Continuous predictive analytics monitoring synthesizes data from a variety of inputs into a risk estimate that clinicians can observe in a streaming environment. For this to be useful, clinicians must engage with the data in a way that makes sense for their clinical workflow in the context of a learning health system (LHS). This article describes the processes needed to evoke clinical action after initiation of continuous predictive analytics monitoring in an LHS. Copyright © 2018 Elsevier Inc. All rights reserved.

  3. An experiment concept to measure stratospheric trace constituents by laser heterodyne spectroscopy

    NASA Technical Reports Server (NTRS)

    Allario, F.; Hoell, J. M., Jr.; Katzberg, S. J.; Larsen, J. C.

    1980-01-01

    Laser heterodyne spectroscopy (LHS) techniques were used to measure radical gases from Spacelab. Major emphasis was placed on the measurement of ClO, ClONO2, HO2, H2O2, N2O5, and HOCl in solar occultation with vertical resolution less than or equal to 2 km and vertical range from 10 to 70 km. Sensitivity analyses were performed on ClO and O3 to determine design criteria for the LHS instrument. Results indicate that O3 and ClO vertical profiles can be measured with an accuracy greater than or equal to 95% and greater than or equal to 80%, respectively, over the total profile.

  4. Range-wide assessment of livestock grazing across the sagebrush biome

    USGS Publications Warehouse

    Veblen, Kari E.; Pyke, David A.; Jones, Christopher A.; Casazza, Michael L.; Assal, Timothy J.; Farinha, Melissa A.

    2011-01-01

    Domestic livestock grazing occurs in virtually all sagebrush habitats and is a prominent disturbance factor. By affecting habitat condition and trend, grazing influences the resources required by, and thus, the distribution and abundance of sagebrush-obligate wildlife species (for example, sage-grouse Centrocercus spp.). Yet, the risks that livestock grazing may pose to these species and their habitats are not always clear. Although livestock grazing intensity and associated habitat condition may be known in many places at the local level, we have not yet been able to answer questions about use, condition, and trend at the landscape scale or at the range-wide scale for wildlife species. A great deal of information about grazing use, management regimes, and ecological condition exists at the local level (for individual livestock management units) under the oversight of organizations such as the Bureau of Land Management (BLM). However, the extent, quality, and types of existing data are unknown, which hinders the compilation, mapping, or analysis of these data. Once compiled, these data may be helpful for drawing conclusions about rangeland status, and we may be able to identify relationships between those data and wildlife habitat at the landscape scale. The overall objective of our study was to perform a range-wide assessment of livestock grazing effects (and the relevant supporting data) in sagebrush ecosystems managed by the BLM. Our assessments and analyses focused primarily on local-level management and data collected at the scale of BLM grazing allotments (that is, individual livestock management units). Specific objectives included the following: 1. Identify and refine existing range-wide datasets to be used for analyses of livestock grazing effects on sagebrush ecosystems. 2. Assess the extent, quality, and types of livestock grazing-related natural resource data collected by BLM range-wide (i.e., across allotments, districts and regions). 3. Compile and synthesize recommendations from federal and university rangeland science experts about how BLM might prioritize collection of different types of livestock grazing-related natural resource data. 4. Investigate whether range-wide datasets (Objective 1) could be used in conjunction with remotely sensed imagery to identify across broad scales (a) allotments potentially not meeting BLM Land Health Standards (LHS) and (b) allotments in which unmet standards might be attributable to livestock grazing. Objective 1: We identified four datasets that potentially could be used for analyses of livestock grazing effects on sagebrush ecosystems. First, we obtained the most current spatial data (typically up to 2007, 2008, or 2009) for all BLM allotments and compiled data into a coarse, topologically enforced dataset that delineated grazing allotment boundaries. Second, we obtained LHS evaluation data (as of 2007) for all allotments across all districts and regions; these data included date of most recent evaluation, BLM determinations of whether region-specific standards were met, and whether BLM deemed livestock to have contributed to any unmet standards. Third, we examined grazing records of three types: Actual Use (permittee-reported), Billed Use (BLM-reported), and Permitted Use (legally authorized). Finally, we explored the possibility of using existing Natural Resources Conservation Service (NRCS) Ecological Site Description (ESD) data to make up-to-date estimates of production and forage availability on BLM allotments. 
Objective 2: We investigated the availability of BLM livestock grazing-related monitoring data and the status of LHS across 310 randomly selected allotments in 13 BLM field offices. We found that, relative to other data types, the most commonly available monitoring data were Actual Use numbers (permittee-reported livestock numbers and season-of-use), followed by Photo Point, forage Utilization, and finally, Vegetation Trend measurement data. Data availability and frequency of data collection varied across allotments and field offices. Analysis of the BLM's LHS data indicated 67 percent of allotments analyzed were meeting standards. For those not meeting standards, livestock were considered the causal factor in 45 percent of cases (about 15 percent of all allotments). Objective 3: We sought input from 42 university and federal rangeland science experts about how best to prioritize rangeland monitoring activities associated with ascertaining livestock impacts on vegetation resources. When we presented a hypothetical scenario to these scientists and asked them to prioritize monitoring activities, the most common response was to measure ground and vegetation cover, a variable that in many cases (10 of 13 field offices sampled) BLM had already identified as a monitoring priority. Experts identified several other traditional (for example, photo points) and emerging approaches (for example, high-resolution aerial photography) to monitoring. Objective 4: We used spatial allotment data (described in Objective 1) and remotely sensed vegetation data (sagebrush cover, herbaceous vegetation cover, litter and bare soil) to assess differences in allotment LHS status ("Not met" vs. "Met"; if "Not met" - livestock-caused vs. not). We then developed logistic regression models, using vegetation variables to predict LHS status of BLM allotments in sagebrush steppe habitats in Wyoming and portions of Montana and Colorado. In general, we found that more consistent data collection at the local level might improve suitability of data for broad-scale analyses of livestock impacts. As is, data collection methodologies varied across field offices and States, and we did not find any local-level monitoring data (Actual Use, Utilization, Vegetation Trend) that had been collected consistently enough over time or space for range-wide analyses. Moreover, continued and improved emphasis on monitoring also may aid local management decisions, particularly with respect to effects of livestock grazing. Rangeland science experts identified ground cover as a high monitoring priority for assessing range condition and emphasized the importance of tracking livestock numbers and grazing dates. Ultimately, the most effective monitoring program may entail both increased data collection effort and the integration of alternative monitoring approaches (for example, remote sensing or monitoring teams). In the course of our study, we identified three additional datasets that could potentially be used for range-wide analyses: spatial allotment boundary data for all BLM allotments range-wide, LHS evaluations of BLM allotments, and livestock use data (livestock numbers and grazing dates). It may be possible to use these spatial datasets to help prioritize monitoring activities over the extensive land areas managed by BLM. We present an example of how we used spatial allotment boundary data and LHS data to test whether remotely sensed vegetation characteristics could be used to predict which allotments met or did not meet LHS. 
This approach may be further improved by the results of current efforts by BLM to test whether more intensive (higher resolution) LHS assessments more accurately describe land health status. Standardized data collection in more ecologically meaningful land units may improve our ability to use local-level data for broad-scale analyses.
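
    As a schematic of the Objective 4 modeling step (column names, coefficients, and data below are invented placeholders, not BLM data), a logistic regression relating remotely sensed vegetation covariates to allotment LHS status might look like:

        # Sketch: predict allotment LHS status ("Met" = 1) from vegetation cover.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        # Columns: sagebrush, herbaceous, litter, bare soil (cover fractions)
        X = rng.uniform(0, 1, size=(500, 4))
        y = (X @ np.array([1.5, 1.0, 0.5, -2.0])
             + rng.normal(0, 0.5, 500) > 0.5).astype(int)

        fit = LogisticRegression().fit(X, y)
        print(dict(zip(["sagebrush", "herbaceous", "litter", "bare_soil"],
                       fit.coef_[0].round(2))))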

  5. Thermobaric structure of the Himalayan Metamorphic Belt in Kaghan Valley, Pakistan

    NASA Astrophysics Data System (ADS)

    Rehman, Hafiz Ur; Yamamoto, Hiroshi; Kaneko, Yoshiyuki; Kausar, Allah Bakhsh; Murata, Mamoru; Ozawa, Hiroaki

    2007-02-01

    The thermobaric structure of the Himalayan Metamorphic Belt (HMB) has been constructed along the Kaghan Valley transect, Pakistan. The HMB in this valley represents mainly the Lesser Himalayan Sequence (LHS) and the Higher Himalayan Crystallines (HHC). Mineral parageneses of 474 samples, from an approximately 80-km traverse from southwest to northeast, were examined. Microprobe analyses were carried out to quantify the mineral compositions. To determine the pressure-temperature (P-T) conditions, 65 thin sections (7 pelites from the LHS and 25 pelites, 9 mafic rocks/amphibolites and 19 eclogites from the HHC) were selected. Based on field observations and mineral parageneses, the low-grade to high-grade metapelites show a Barrovian-type progressive metamorphic sequence, with chlorite, biotite, garnet and staurolite zones in the LHS and staurolite, kyanite and sillimanite zones in the HHC. Using well-calibrated geothermobarometers, P-T conditions for the pelitic and mafic rocks were estimated. P-T estimates for pelitic rocks from the garnet zone indicate a condition of 534 ± 17 °C at 7.6 ± 1.2 kbar. P-T estimates for rocks from the staurolite and kyanite zones indicate average conditions of 526 ± 17 °C at 9.4 ± 1.2 kbar and 657 ± 54 °C at 10 ± 1.6 kbar, respectively. P-T conditions for mafic rocks (amphibolites) and eclogites from the HHC are estimated as 645 ± 54 °C at 10.3 ± 2 kbar and 746 ± 59 °C at 15.5 ± 2.1 kbar, respectively. The coesite-bearing ultrahigh-pressure (UHP) eclogites record a peak P-T condition of 757-786 °C at 28.6 ± 0.4 kbar and retrograde P-T conditions of 825 ± 59 °C at 18.1 ± 1.7 kbar. These results suggest that the HMB shows a gradual increase in metamorphic grade from southwest to northeast. Pelitic and adjacent mafic rocks in the same metamorphic zone record identical peak P-T conditions, while the structural middle of the HHC reached the highest P-T conditions, up to UHP grade.

  6. Multiple microbial cell-free extracts improve the microbiological, biochemical and sensory features of ewes' milk cheese.

    PubMed

    Calasso, Maria; Mancini, Leonardo; De Angelis, Maria; Conte, Amalia; Costa, Cristina; Del Nobile, Matteo Alessandro; Gobbetti, Marco

    2017-09-01

    This study used cell-free enzyme (CFE) extracts from Lactobacillus casei, Hafnia alvei, Debaryomyces hansenii and Saccharomyces cerevisiae to condition or accelerate Pecorino-type cheese ripening. Compositional, microbiological, and biochemical analyses were performed, and volatile and sensory profiles were obtained. Lactobacilli and cocci increased during ripening, especially in cheeses containing CFE from L. casei, H. alvei and D. hansenii (LHD-C) and L. casei, H. alvei and S. cerevisiae (LHS-C). Compared to control cheese (CC), several enzymatic activities were higher (P < 0.05) in CFE-supplemented cheeses. Compared to the CC (1907 mg kg⁻¹ of cheese), the free amino acid level increased (P < 0.05) in CFE-supplemented cheeses, ranging from approximately 2575 (LHS-C) to 5720 (LHD-C) mg kg⁻¹ of cheese after 60 days of CFE-supplemented ripening. As shown by GC/MS analysis, the levels of several volatile organic compounds were significantly (P < 0.05) lower in CC than in CFE-supplemented cheeses. All cheeses manufactured by adding multiple CFEs exhibited higher scores (P < 0.05) for internal structure, acid taste and juiciness than CC samples. This study shows the possibility of producing ewes' milk cheese with standardized characteristics and improved flavor intensity in a relatively short time. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Determinants of brain metabolism changes in mesial temporal lobe epilepsy.

    PubMed

    Chassoux, Francine; Artiges, Eric; Semah, Franck; Desarnaud, Serge; Laurent, Agathe; Landre, Elisabeth; Gervais, Philippe; Devaux, Bertrand; Helal, Ourkia Badia

    2016-06-01

    To determine the main factors influencing metabolic changes in mesial temporal lobe epilepsy (MTLE) due to hippocampal sclerosis (HS). We prospectively studied 114 patients with MTLE (62 female; 60 left HS; 15- to 56-year-olds) with 18F-fluorodeoxyglucose positron emission tomography and correlated the results with the side of HS, structural atrophy, electroclinical features, gender, age at onset, epilepsy duration, and seizure frequency. Imaging processing was performed using statistical parametric mapping. Ipsilateral hypometabolism involved temporal (mesial structures, pole, and lateral cortex) and extratemporal areas including the insula, frontal lobe, perisylvian regions, and thalamus, more extensively in right HS (RHS). A relative increase of metabolism (hypermetabolism) was found in the nonepileptic temporal lobe and in posterior areas bilaterally. Voxel-based morphometry detected unilateral hippocampus atrophy and gray matter concentration decrease in both frontal lobes, more extensively in left HS (LHS). Regardless of the structural alterations, the topography of hypometabolism correlated strongly with the extent of epileptic networks (mesial, anterior-mesiolateral, widespread mesiolateral, and bitemporal according to the ictal spread), which were larger in RHS. Notably, widespread perisylvian and bitemporal hypometabolism was found only in RHS. Mirror hypermetabolism was grossly proportional to the hypometabolic areas, coinciding partly with the default mode network. Gender-related effect was significant mainly in the contralateral frontal lobe, in which metabolism was higher in female patients. Epilepsy duration correlated with the contralateral temporal metabolism, positively in LHS and negatively in RHS. Opposite results were found with age at onset. High seizure frequency correlated negatively with the contralateral metabolism in LHS. Epileptic networks, as assessed by electroclinical correlations, appear to be the main determinant of hypometabolism in MTLE. Compensatory mechanisms reflected by a relative hypermetabolism in the nonepileptic temporal lobe and in extratemporal areas seem more efficient in LHS and in female patients, whereas long duration, late onset of epilepsy, and high seizure frequency may reduce these adaptive changes. Wiley Periodicals, Inc. © 2016 International League Against Epilepsy.

  8. Airway hyperresponsiveness in chronic obstructive pulmonary disease: A marker of asthma-chronic obstructive pulmonary disease overlap syndrome?

    PubMed

    Tkacova, Ruzena; Dai, Darlene L Y; Vonk, Judith M; Leung, Janice M; Hiemstra, Pieter S; van den Berge, Maarten; Kunz, Lisette; Hollander, Zsuzsanna; Tashkin, Donald; Wise, Robert; Connett, John; Ng, Raymond; McManus, Bruce; Paul Man, S F; Postma, Dirkje S; Sin, Don D

    2016-12-01

    The impact of airway hyperreactivity (AHR) on respiratory mortality and systemic inflammation among patients with chronic obstructive pulmonary disease (COPD) is largely unknown. We used data from 2 large studies to determine the relationship between AHR and FEV1 decline, respiratory mortality, and systemic inflammation. We sought to determine the relationship of AHR with FEV1 decline, respiratory mortality, and systemic inflammatory burden in patients with COPD in the Lung Health Study (LHS) and the Groningen Leiden Universities Corticosteroids in Obstructive Lung Disease (GLUCOLD) study. The LHS enrolled current smokers with mild-to-moderate COPD (n = 5887), and the GLUCOLD study enrolled former and current smokers with moderate-to-severe COPD (n = 51). For the primary analysis, we defined AHR by a methacholine provocation concentration of 4 mg/mL or less leading to a 20% reduction in FEV1 (PC20). The primary outcomes were FEV1 decline, respiratory mortality, and biomarkers of systemic inflammation. Approximately 24% of LHS participants had AHR. Compared with patients without AHR, patients with AHR had a 2-fold increased risk of respiratory mortality (hazard ratio, 2.38; 95% CI, 1.38-4.11; P = .002) and experienced an accelerated FEV1 decline by 13.2 mL/y in the LHS (P = .007) and by 12.4 mL/y in the much smaller GLUCOLD study (P = .079). Patients with AHR had a generally reduced burden of systemic inflammatory biomarkers compared with those without AHR. AHR is common in patients with mild-to-moderate COPD, affecting 1 in 4 patients, and identifies a distinct subset of patients who have increased risk of disease progression and mortality. AHR may represent a spectrum of the asthma-COPD overlap phenotype that urgently requires disease modification. Copyright © 2016 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.

  9. Antibiotic prophylaxis for hysterectomy, a prospective cohort study: cefuroxime, metronidazole, or both?

    PubMed

    Brummer, T H I; Heikkinen, A-M; Jalkanen, J; Fraser, J; Mäkinen, J; Tomás, E; Seppälä, T; Sjöberg, J; Härkki, P

    2013-09-01

    To evaluate cefuroxime and metronidazole antibiotic prophylaxis. Observational nonrandomised 1-year prospective cohort study. Fifty-three hospitals in Finland. A total of 5279 women undergoing hysterectomy for benign indications, with cefuroxime given to 4301 and metronidazole given to 2855. Excluding other antibiotics, cefuroxime alone was given to 2019, metronidazole alone was given to 518, and they were administered in combination to 2252 women. Data on 1115 abdominal hysterectomies (AHs), 1541 laparoscopic hysterectomies (LHs), and 2133 vaginal hysterectomies (VHs) were analysed using logistic regression adjusted for confounding factors. Postoperative infections. Cefuroxime had a risk-reductive effect for total infections (adjusted odds ratio, OR, 0.29; 95% confidence interval, 95% CI, 0.22-0.39), but the independent effect of metronidazole and the interaction effect of cefuroxime and metronidazole were nonsignificant. In subgroup analyses of AHs, LHs, and VHs involving those receiving the two main antibiotics only, the effect of cefuroxime alone nonsignificantly differed from that of cefuroxime and metronidazole in combination for all types of infection. The absence of cefuroxime, assessed by comparing metronidazole alone with cefuroxime and metronidazole in combination, led to an increased risk for total infections in AHs (adjusted OR 3.63; 95% CI 1.99-6.65), in LHs (OR 3.53; 95% CI 1.74-7.18), and in VHs (OR 4.05; 95% CI 2.30-7.13), and also increased risks for febrile events in all categories (AHs, OR 2.86; 95% CI 1.09-7.46; LHs, OR 13.19; 95% CI 3.66-47.49; VHs, OR 12.74; 95% CI 3.01-53.95), wound infections in AHs (OR 6.88; 95% CI 1.09-7.49), and pelvic infections in VHs (OR 4.26; 95% CI 1.76-10.31). In this study, cefuroxime appeared to be effective in prophylaxis against infections. Metronidazole appeared to be ineffective, with no additional risk-reductive effect when combined with cefuroxime. © 2013 RCOG.

  10. Very-short-term perioperative intravenous iron administration and postoperative outcome in major orthopedic surgery: a pooled analysis of observational data from 2547 patients.

    PubMed

    Muñoz, Manuel; Gómez-Ramírez, Susana; Cuenca, Jorge; García-Erce, José Antonio; Iglesias-Aparicio, Daniel; Haman-Alcober, Sami; Ariza, Daniel; Naveira, Enrique

    2014-02-01

    Postoperative nosocomial infection (PNI) is a severe complication in surgical patients. Known risk factors of PNI such as allogeneic blood transfusions (ABTs), anemia, and iron deficiency are manageable with perioperative intravenous (IV) iron therapy. To address potential concerns about IV iron and the risk of PNI, we studied a large series of orthopedic surgical patients for possible relations between IV iron, ABT, and PNI. Pooled data on ABT, PNI, 30-day mortality, and length of hospital stay (LHS) from 2547 patients undergoing elective lower-limb arthroplasty (n = 1186) or hip fracture repair (n = 1361) were compared between patients who received either very-short-term perioperative IV iron (200-600 mg; n = 1538), with or without recombinant human erythropoietin (rHuEPO; 40,000 IU), or standard treatment (n = 1009). Compared to standard therapy, perioperative IV iron reduced rates of ABT (32.4% vs. 48.8%; p = 0.001), PNI (10.7% vs. 26.9%; p = 0.001), and 30-day mortality (4.8% vs. 9.4%; p = 0.003) and the LHS (11.9 days vs. 13.4 days; p = 0.001) in hip fracture patients. These benefits were observed in both transfused and nontransfused patients. Also in elective arthroplasty, IV iron reduced ABT rates (8.9% vs. 30.1%; p = 0.001) and LHS (8.4 days vs. 10.7 days; p = 0.001), without differences in PNI rates (2.8% vs. 3.7%; p = 0.417), and there was no 30-day mortality. Despite known limitations of pooled observational analyses, these results suggest that very-short-term perioperative administration of IV iron, with or without rHuEPO, in major lower limb orthopedic procedures is associated with reduced ABT rates and LHS, without increasing postoperative morbidity or mortality. © 2013 American Association of Blood Banks.

  11. Thrusting and back-thrusting as post-emplacement kinematics of the Almora klippe: Insights from Low-temperature thermochronology

    NASA Astrophysics Data System (ADS)

    Patel, R. C.; Singh, Paramjeet; Lal, Nand

    2015-06-01

    Crystalline klippen over the Lesser Himalayan Sequence (LHS) in the Kumaon and Garhwal regions of the NW Himalaya are representative of the southern portion of the Main Central Thrust (MCT) hanging wall. These were tectonically transported over the juxtaposed thrust sheets (Berinag, Tons and Ramgarh) of the LHS zone along the MCT. These klippen comprise NW-SE-trending synformal folded thrust sheets bounded by thrusts in the south and north. In the present study, the exhumation histories of two well-known klippen, namely Almora and Baijnath, and of the Ramgarh thrust sheet in the Kumaon and Garhwal regions vis-à-vis the Himalayan orogeny have been investigated using apatite fission track (AFT) ages. Along a ~60-km-long orogen-perpendicular transect across the Almora klippe and the Ramgarh thrust sheet, 16 AFT cooling ages from the Almora klippe and 2 from the Ramgarh thrust sheet were found to range from 3.7 ± 0.8 to 13.2 ± 2.7 Ma and from 6.3 ± 0.8 to 7.2 ± 1.0 Ma, respectively. From the LHS meta-sedimentary rocks, only a single AFT age of 3.6 ± 0.8 Ma could be obtained. Three AFT ages from the Baijnath klippe range between 4.7 ± 0.5 and 6.6 ± 0.8 Ma. AFT ages and exhumation rates of the different klippen show a dynamic coupling between tectonic and erosional processes in the Kumaon and Garhwal regions of the NW Himalaya; however, the tectonic processes play the dominant role in controlling the exhumation. Thrusting and back-thrusting within the Almora klippe and the Ramgarh thrust sheet are the post-emplacement kinematics that controlled the exhumation of the Almora klippe. Combining these results with already published AFT ages from the crystalline klippen and the Higher Himalayan Crystalline (HHC), the kinematics of emplacement of the klippen over the LHS and the exhumation pattern across the MCT in the Kumaon and Garhwal regions of the NW Himalaya have been investigated.

  12. Gingival leiomyomatous hamartoma of the maxilla: a rare entity

    PubMed Central

    Raghunath, Vandana; Manjunatha, Bhari Sharanesha; Al-Thobaiti, Yasser

    2016-01-01

    Hamartoma is a tumour-like malformation appearing as a focal overgrowth of normal cells. Leiomyomatous hamartomas (LHs) are rare in the oral cavity; they are most commonly reported in Japanese patients, and fewer than 40 cases have been described in the Japanese and English literature. The clinical differential diagnoses are irritational (traumatic) fibroma and congenital epulis. LH has to be differentiated histopathologically from its neoplastic counterparts and from mesenchymomas. Here, we report a case of LH that presented as a sessile gingival growth occurring in the midline in a 15-year-old girl. The final diagnosis was based on the histopathological appearance, which was confirmed by immunohistochemical staining with various markers. A review of the literature on previous cases was also carried out. PMID:27161203

  13. Gingival leiomyomatous hamartoma of the maxilla: a rare entity.

    PubMed

    Raghunath, Vandana; Manjunatha, Bhari Sharanesha; Al-Thobaiti, Yasser

    2016-05-09

    Hamartoma is a tumour-like malformation appearing as a focal overgrowth of normal cells. Leiomyomatous hamartomas (LHs) are rare in the oral cavity and are most commonly seen in Japanese patients; fewer than 40 cases have been reported in the Japanese and English literature. The clinical differential diagnoses are irritational (traumatic) fibroma and congenital epulis. It has to be differentiated histopathologically from its neoplastic counterparts and mesenchymomas. Hence, we report a case of LH, which presented as a sessile gingival growth occurring in the midline in a 15-year-old girl. The final diagnosis was based on the histopathological appearance, which was confirmed by immunohistochemical staining of various markers. A review of the literature of previous cases was also carried out. 2016 BMJ Publishing Group Ltd.

  14. Energies and radial distributions of Bs mesons - the effect of hypercubic blocking

    NASA Astrophysics Data System (ADS)

    Koponen, Jonna

    2006-12-01

    This is a follow-up to our earlier work on the energies and the charge (vector) and matter (scalar) distributions for S-wave states in a heavy-light meson, where the heavy quark is static and the light quark has a mass about that of the strange quark. We study the radial distributions of higher angular momentum states, namely P- and D-wave states, using a "fuzzy" static quark. A new improvement is the use of hypercubic blocking in the time direction, which effectively constrains the heavy quark to move within a 2a hypercube (a is the lattice spacing). The calculation is carried out with dynamical fermions on a 16^3 × 32 lattice with a ≈ 0.10 fm, generated using the non-perturbatively improved clover action. The configurations were generated by the UKQCD Collaboration using lattice action parameters β = 5.2, c_SW = 2.0171 and κ = 0.1350. In nature the closest equivalent of this heavy-light system is the Bs meson. Attempts are now being made to understand these results in terms of the Dirac equation.

  15. Influenza A virus in swine breeding herds: Combination of vaccination and biosecurity practices can reduce likelihood of endemic piglet reservoir.

    PubMed

    White, L A; Torremorell, M; Craft, M E

    2017-03-01

    Recent modelling and empirical work on influenza A virus (IAV) suggests that piglets play an important role as an endemic reservoir. The objective of this study is to test intervention strategies aimed at reducing the incidence of IAV in piglets and, ideally, preventing piglets from becoming exposed in the first place. These interventions include biosecurity measures, vaccination, and management options that swine producers may employ individually or jointly to control IAV in their herds. We have developed a stochastic Susceptible-Exposed-Infectious-Recovered-Vaccinated (SEIRV) model that reflects the spatial organization of a standard breeding herd and accounts for the different production classes of pigs therein. Notably, this model allows for loss of immunity for vaccinated and recovered animals, and for vaccinated animals to have different latency and infectious periods from unvaccinated animals, as suggested by the literature. The interventions tested include: (1) varied timing of gilt introductions to the breeding herd, (2) gilt separation (no indirect transmission to or from the gilt development unit), (3) gilt vaccination upon arrival to the farm, (4) early weaning, and (5) vaccination strategies of sows with different timing (mass and pre-farrow) and efficacy (homologous vs. heterologous). We conducted a Latin Hypercube Sampling and Partial Rank Correlation Coefficient (LHS-PRCC) analysis combined with a random forest analysis to assess the relative importance of each epidemiological parameter in determining epidemic outcomes. In concert, mass vaccination, early weaning of piglets (removal 0-7 days after birth), gilt separation, gilt vaccination, and longer periods between introductions of gilts (6 months) were the most effective at reducing prevalence. Endemic prevalence overall was reduced by 51% relative to the null case; endemic prevalence in piglets was reduced by 74%; and IAV was eliminated completely from the herd in 23% of all simulations. Importantly, elimination of IAV was most likely to occur within the first few days of an epidemic. The latency period, infectious period, duration of immunity, and transmission rate for piglets with maternal immunity had the highest correlation with three separate measures of IAV prevalence; therefore, these are parameters that warrant increased attention for obtaining empirical estimates. Our findings support other studies suggesting that piglets play a key role in maintaining IAV in breeding herds. We recommend biosecurity measures in combination with targeted homologous vaccination or vaccines that provide wider cross-protective immunity to prevent incursions of virus to the farm and subsequent establishment of an infected piglet reservoir. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
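
    A minimal sketch of the LHS-PRCC step described above, in Python. The SEIRV simulator is replaced here by a hypothetical toy model, and the parameter names, ranges, and sample size are illustrative assumptions, not values from the study:

      # LHS-PRCC sketch: stratified sampling of a parameter hypercube, then
      # partial rank correlation of each parameter with the model output.
      import numpy as np
      from scipy.stats import qmc, rankdata

      def toy_model(beta, gamma, sigma):
          # Hypothetical stand-in for the endemic prevalence of an SEIRV run.
          return beta / (beta + gamma) * np.exp(-sigma)

      sampler = qmc.LatinHypercube(d=3, seed=0)
      unit = sampler.random(n=1000)                    # stratified unit-cube sample
      lo, hi = np.array([0.1, 0.05, 0.1]), np.array([2.0, 0.5, 1.0])
      X = qmc.scale(unit, lo, hi)                      # map to parameter ranges
      y = toy_model(*X.T)

      def prcc(X, y, j):
          """Partial rank correlation of parameter j with y, controlling for the rest."""
          R = np.apply_along_axis(rankdata, 0, X)
          ry = rankdata(y)
          A = np.column_stack([np.ones(len(ry)), np.delete(R, j, axis=1)])
          res_x = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
          res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
          return np.corrcoef(res_x, res_y)[0, 1]

      print([round(prcc(X, y, j), 2) for j in range(3)])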

  16. Optimization of the computational load of a hypercube supercomputer onboard a mobile robot.

    PubMed

    Barhen, J; Toomarian, N; Protopopescu, V

    1987-12-01

    A combinatorial optimization methodology is developed, which enables the efficient use of hypercube multiprocessors onboard mobile intelligent robots dedicated to time-critical missions. The methodology is implemented in terms of large-scale concurrent algorithms based either on fast simulated annealing, or on nonlinear asynchronous neural networks. In particular, analytic expressions are given for the effect of single-neuron perturbations on the systems' configuration energy. Compact neuromorphic data structures are used to model effects such as precedence constraints, processor idling times, and task-schedule overlaps. Results for a typical robot-dynamics benchmark are presented.

  17. Optimization of the computational load of a hypercube supercomputer onboard a mobile robot

    NASA Technical Reports Server (NTRS)

    Barhen, Jacob; Toomarian, N.; Protopopescu, V.

    1987-01-01

    A combinatorial optimization methodology is developed, which enables the efficient use of hypercube multiprocessors onboard mobile intelligent robots dedicated to time-critical missions. The methodology is implemented in terms of large-scale concurrent algorithms based either on fast simulated annealing, or on nonlinear asynchronous neural networks. In particular, analytic expressions are given for the effect of single-neuron perturbations on the systems' configuration energy. Compact neuromorphic data structures are used to model effects such as precedence constraints, processor idling times, and task-schedule overlaps. Results for a typical robot-dynamics benchmark are presented.

  18. Maximum Torque and Momentum Envelopes for Reaction Wheel Arrays

    NASA Technical Reports Server (NTRS)

    Reynolds, R. G.; Markley, F. Landis

    2001-01-01

    Spacecraft reaction wheel maneuvers are limited by the maximum torque and/or angular momentum that the wheels can provide. For an n-wheel configuration, the torque or momentum envelope can be obtained by projecting the n-dimensional hypercube, representing the domain boundary of individual wheel torques or momenta, into three-dimensional space via the 3xn matrix of wheel axes. In this paper, the properties of the projected hypercube are discussed, and algorithms are proposed for determining this maximal torque or momentum envelope for general wheel configurations. Practical implementation strategies for specific wheel configurations are also considered.
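
    A minimal sketch of the projection construction, in Python. The four-wheel pyramid configuration and unit wheel momenta are assumptions for illustration, and the convex hull call stands in for the envelope algorithms the paper proposes:

      # The momentum envelope of an n-wheel array is the image of the
      # n-dimensional hypercube [-1, 1]^n under the 3xn wheel-axis matrix W
      # (a zonotope); enumerate the 2^n vertices and take their convex hull.
      import itertools
      import numpy as np
      from scipy.spatial import ConvexHull

      # Hypothetical 4-wheel pyramid: columns are unit wheel spin axes.
      W = np.array([[1.0, -1.0, 0.0,  0.0],
                    [0.0,  0.0, 1.0, -1.0],
                    [1.0,  1.0, 1.0,  1.0]]) / np.sqrt(2.0)

      vertices = np.array(list(itertools.product([-1.0, 1.0], repeat=W.shape[1])))
      projected = vertices @ W.T        # each hypercube vertex mapped to 3-space
      hull = ConvexHull(projected)      # boundary of the maximal momentum envelope
      print(len(hull.vertices), "envelope vertices,", len(hull.simplices), "facets")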

  19. The Discovery Outburst of the X-Ray Transient IGR J17497-2821 Observed with RXTE and ATCA

    NASA Technical Reports Server (NTRS)

    Rodriquez, Jerome; Bel, Marion Cadolle; Tomsick, John A.; Corbel, Stephane; Brocksopp, Catherine; Paizis, Ada; Shaw, Simon E.; Bodaghee, Arash

    2007-01-01

    We report the results of a series of RXTE and ATCA observations of the recently discovered X-ray transient IGR J17497-2821. Our 3-200 keV PCA+HEXTE spectral analysis shows very little variation over a period of approx. 10 days around the maximum of the outburst. IGR J17497-2821 is found in a typical low-hard state (LHS) of X-ray binaries (XRBs), well represented by an absorbed Comptonized spectrum with an iron edge at about 7 keV. The high value of the absorption (approx. 4 × 10^22 per sq cm) suggests that the source is located at a large distance, either close to the Galactic center or beyond. The timing analysis shows no particular features, while the shape of the power density spectra is also typical of the LHS of XRBs, with approx. 36% rms variability. No radio counterpart is found down to a limit of 0.21 mJy at 4.80 and 8.64 GHz. Although the position of IGR J17497-2821 in the radio to X-ray flux diagram is well below the correlation usually observed in the LHS of black holes, the comparison of its X-ray properties with those of other sources leads us to suggest that it is a black hole candidate.

  20. Properties of Blazar Jets Defined by an Economy of Power

    NASA Astrophysics Data System (ADS)

    Petropoulou, Maria; Dermer, Charles D.

    2016-07-01

    The absolute power of a relativistic black hole jet includes the power in the magnetic field, the leptons, the hadrons, and the radiated photons. A power analysis of a relativistic radio/γ-ray blazar jet leads to bifurcated leptonic synchrotron-Compton (LSC) and leptohadronic synchrotron (LHS) solutions that minimize the total jet power. Higher Doppler factors with increasing peak synchrotron frequency are implied in the LSC model. Strong magnetic fields B′ ≳ 100 G are found for the LHS model with variability times ≲ 10^3 s, in accord with highly magnetized, reconnection-driven jet models. Proton synchrotron models of ≳ 100 GeV blazar radiation can have sub-Eddington absolute jet powers, but models of dominant GeV radiation in flat spectrum radio quasars require excessive power.

  1. Least squares polynomial chaos expansion: A review of sampling strategies

    NASA Astrophysics Data System (ADS)

    Hadigol, Mohammad; Doostan, Alireza

    2018-04-01

    As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for the least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison between the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for a problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms other sampling methods, especially when high-order PCEs are employed and/or the oversampling ratio is low.
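
    A minimal sketch of the least squares PCE construction on a Latin hypercube sample, in Python, assuming a single uniform input on [-1, 1] with a Legendre basis; the coherence-optimal and alphabetic-optimal designs reviewed in the paper are not reproduced here:

      # Fit a degree-p Legendre PCE by least squares on n LHS points (n > p+1).
      import numpy as np
      from numpy.polynomial.legendre import legvander
      from scipy.stats import qmc

      f = lambda x: np.exp(x) * np.sin(3 * x)   # model to be surrogated

      p, n = 6, 30
      x = qmc.scale(qmc.LatinHypercube(d=1, seed=1).random(n), -1.0, 1.0).ravel()
      Psi = legvander(x, p)                     # n x (p+1) measurement matrix
      coef, *_ = np.linalg.lstsq(Psi, f(x), rcond=None)

      xt = np.linspace(-1.0, 1.0, 201)
      print("max surrogate error:", np.abs(legvander(xt, p) @ coef - f(xt)).max())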

  2. A comparison of accuracy validation methods for genomic and pedigree-based predictions of swine litter size traits using Large White and simulated data.

    PubMed

    Putz, A M; Tiezzi, F; Maltecca, C; Gray, K A; Knauer, M T

    2018-02-01

    The objective of this study was to compare and determine the optimal validation method when comparing accuracy from single-step GBLUP (ssGBLUP) to traditional pedigree-based BLUP. Field data included six litter size traits. Simulated data included ten replicates designed to mimic the field data in order to determine the method that was closest to the true accuracy. Data were split into training and validation sets. The methods used were as follows: (i) theoretical accuracy derived from the prediction error variance (PEV) of the direct inverse (iLHS), (ii) approximated accuracies from the accf90(GS) program in the BLUPF90 family of programs (Approx), (iii) correlation between predictions and the single-step GEBVs from the full data set (GEBV_Full), (iv) correlation between predictions and the corrected phenotypes of females from the full data set (Y_c), (v) correlation from method iv divided by the square root of the heritability (Y_ch) and (vi) correlation between sire predictions and the average of their daughters' corrected phenotypes (Y_cs). Accuracies from iLHS increased from 0.27 to 0.37 (37%) in the Large White. Approximation accuracies were very consistent and close in absolute value (0.41 to 0.43). Both iLHS and Approx were much less variable than the corrected phenotype methods (ranging from 0.04 to 0.27). On average, simulated data showed an increase in accuracy from 0.34 to 0.44 (29%) using ssGBLUP. Both iLHS and Y_ch approximated the increase well, 0.30 to 0.46 and 0.36 to 0.45, respectively. GEBV_Full performed poorly in both data sets and is not recommended. Results suggest that for within-breed selection, theoretical accuracy using PEV was consistent and accurate. When direct inversion is infeasible to get the PEV, correlating predictions to the corrected phenotypes divided by the square root of heritability is adequate given a large enough validation data set. © 2017 Blackwell Verlag GmbH.
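
    In LaTeX, the corrected-phenotype approximation (method v above) can be sketched as follows; the point of the h-scaling is that with y_c = g + e the attainable correlation is bounded by the square root of heritability:

      % accuracy approximated from a validation set of corrected phenotypes
      \[
        \widehat{\mathrm{acc}} \;=\; \frac{r\left(\hat{g},\, y_c\right)}{\sqrt{h^2}}
      \]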

  3. A Local Approximation of Fundamental Measure Theory Incorporated into Three Dimensional Poisson-Nernst-Planck Equations to Account for Hard Sphere Repulsion Among Ions

    NASA Astrophysics Data System (ADS)

    Qiao, Yu; Liu, Xuejiao; Chen, Minxin; Lu, Benzhuo

    2016-04-01

    The hard sphere repulsion among ions can be considered in the Poisson-Nernst-Planck (PNP) equations by combining the fundamental measure theory (FMT). To reduce the nonlocal computational complexity in 3D simulation of biological systems, a local approximation of FMT is derived, which forms a local hard sphere PNP (LHSPNP) model. In the derivation, the excess chemical potential from hard sphere repulsion is obtained with the FMT and has six integration components. For the integrands and weighted densities in each component, Taylor expansions are performed and the lowest order approximations are taken, which result in the final local hard sphere (LHS) excess chemical potential with four components. By plugging the LHS excess chemical potential into the ionic flux expression in the Nernst-Planck equation, the three-dimensional LHSPNP is obtained. Interestingly, the essential part of the free energy term of the previous size-modified model (Borukhov et al. in Phys Rev Lett 79:435-438, 1997; Kilic et al. in Phys Rev E 75:021502, 2007; Lu and Zhou in Biophys J 100:2475-2485, 2011; Liu and Eisenberg in J Chem Phys 141:22D532, 2014) is found to have a very similar form to one term of the LHS model, but the LHSPNP has additional terms accounting for size effects. The equation of state for a one-component homogeneous fluid is studied for the local hard sphere approximation of FMT and is proved to be exact for the first two virial coefficients, while the previous size-modified model only presents the first virial coefficient accurately. To investigate the effects of the LHS model and the competition among different counterion species, numerical experiments are performed for the traditional PNP model, the LHSPNP model, the previous size-modified PNP (SMPNP) model and Monte Carlo simulation. It is observed that in the steady state the LHSPNP results are quite different from the PNP results, but are close to the SMPNP results under a wide range of boundary conditions. Besides, in both the LHSPNP and SMPNP models the stratification of one counterion species can be observed under certain bulk concentrations.
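
    For orientation, a standard form of the modified Nernst-Planck flux coupled to the Poisson equation (a sketch in the usual notation, not the paper's exact equations); the LHS approximation enters only through the local excess chemical potential term:

      % Nernst-Planck flux with an excess chemical potential, coupled to Poisson
      \[
        \mathbf{J}_i = -D_i\left[\nabla c_i
            + \frac{c_i}{k_B T}\left(q_i \nabla\phi + \nabla\mu_i^{\mathrm{ex}}\right)\right],
        \qquad
        -\nabla\cdot(\epsilon\nabla\phi) = \sum_i q_i c_i + \rho_f
      \]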

  4. Patient satisfaction with health-care professionals and structure is not affected by longer hospital stay and complications after lung resection: a case-matched analysis.

    PubMed

    Pompili, Cecilia; Tiberi, Michela; Salati, Michele; Refai, Majed; Xiumé, Francesco; Brunelli, Alessandro

    2015-02-01

    The objective of this investigation was to assess satisfaction with care of patients with a long hospital stay (LHS) or complications after pulmonary resection in comparison with case-matched counterparts with a regular postoperative course. This is a prospective observational analysis of 171 consecutive patients submitted to pulmonary resection (78 wedges, 8 segmentectomies, 83 lobectomies, 3 pneumonectomies) for benign (35), primary (93) or secondary malignant (43) diseases. A hospital stay >7 days was defined as long (LHS). Major cardiopulmonary complications were defined according to the ESTS database. Patient satisfaction was assessed by administration of the EORTC IN-PATSAT32 module at discharge. The questionnaire is a 32-item self-administered survey including different scales reflecting the perceived level of satisfaction with the care provided by doctors, nurses and other personnel. To minimize selection bias, a propensity score case-matching technique was applied to generate two sets of matched patients: patients with LHS with counterparts without it, and patients with complications with counterparts without it. The median length of postoperative stay was 4 days (range 2-43). Forty-one patients (24%) had a hospital stay >7 days and 21 (12%) developed cardiopulmonary complications. Propensity score matching yielded two well-matched groups of 41 patients with and without LHS. There were no significant differences in any patient satisfaction scale between the two groups. The comparison of the results of the patient satisfaction questionnaire between the two matched groups of 21 patients with and without complications also showed no significant differences in any scale. Patients experiencing poor outcomes such as a long hospital stay or complications have a similar perception of quality of care to those with regular outcomes. Patient-reported outcome measures are becoming increasingly important in the evaluation of the quality of care and may complement more traditional objective indicators such as morbidity or length of stay. © The Author 2014. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  5. Performance of a plasma fluid code on the Intel parallel computers

    NASA Technical Reports Server (NTRS)

    Lynch, V. E.; Carreras, B. A.; Drake, J. B.; Leboeuf, J. N.; Liewer, P.

    1992-01-01

    One approach to improving the real-time efficiency of plasma turbulence calculations is to use a parallel algorithm. A parallel algorithm for plasma turbulence calculations was tested on the Intel iPSC/860 hypercube and the Touchstone Delta machine. Using the 128 processors of the Intel iPSC/860 hypercube, a factor of 5 improvement over a single-processor CRAY-2 is obtained. For the Touchstone Delta machine, the corresponding improvement factor is 16. For plasma edge turbulence calculations, an extrapolation of the present results to the Intel Sigma machine gives an improvement factor close to 64 over the single-processor CRAY-2.

  6. A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Rao, Hariprasad Nannapaneni

    1989-01-01

    The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors, in parallel for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.

  7. Model Based Predictive Control of Multivariable Hammerstein Processes with Fuzzy Logic Hypercube Interpolated Models

    PubMed Central

    Coelho, Antonio Augusto Rodrigues

    2016-01-01

    This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system where membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unitary hypercube space enables multivariable interpolation in N dimensions. Membership functions act as interpolation kernels, such that the choice of membership functions determines the interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the modeling problem of static nonlinearities since it is capable of modeling both a function and its inverse function. Three study cases from the literature are presented: a single-input single-output (SISO) system, a MISO system and a MIMO system. Good results are obtained regarding performance metrics such as set-point tracking, control variation and robustness. Results demonstrate the applicability of the proposed method in modeling Hammerstein nonlinearities and their inverse functions for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723
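
    A minimal sketch of kernel-based interpolation on the unit hypercube in Python, assuming triangular membership functions (which reduce FLHI-style interpolation to the multilinear case); the corner values play the role of the Takagi-Sugeno consequents:

      import itertools
      import numpy as np

      def hypercube_interp(u, corner_values):
          """Interpolate at u in [0,1]^N from values on the 2^N hypercube corners."""
          total = 0.0
          for corner in itertools.product((0, 1), repeat=len(u)):
              # Triangular membership of u_k toward corner bit c_k.
              weight = np.prod([uk if ck else 1.0 - uk for uk, ck in zip(u, corner)])
              total += weight * corner_values[corner]
          return total

      # Corner data from f(x, y) = x + 2y; bilinear interpolation recovers it.
      corners = {c: c[0] + 2 * c[1] for c in itertools.product((0, 1), repeat=2)}
      print(hypercube_interp((0.25, 0.5), corners))   # 0.25 + 2*0.5 = 1.25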

  8. A new numerical approach for uniquely solvable exterior Riemann-Hilbert problem on region with corners

    NASA Astrophysics Data System (ADS)

    Zamzamir, Zamzana; Murid, Ali H. M.; Ismail, Munira

    2014-06-01

    A numerical solution for the uniquely solvable exterior Riemann-Hilbert problem on a region with corners has previously been explored by discretizing the related integral equation at off-corner points using the Picard iteration method, without any modifications to the left-hand side (LHS) and right-hand side (RHS) of the integral equation. The numerical errors for all iterations converge to the required solution; however, for certain problems this approach gives lower accuracy. Hence, this paper presents a new numerical approach for the problem by treating the generalized Neumann kernel on the LHS and the function on the RHS of the integral equation. Due to the existence of the corner points, Gaussian quadrature is employed, which avoids the corner points during numerical integration. A numerical example on a test region is presented to demonstrate the effectiveness of this formulation.
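
    A minimal sketch of Picard iteration for a second-kind integral equation in Python, with a generic smooth kernel standing in for the generalized Neumann kernel and a trapezoid-rule discretization (both assumptions for illustration):

      # Solve x(t) = y(t) + lam * integral K(t,s) x(s) ds on [0,1] by fixed point.
      import numpy as np

      n = 200
      t = np.linspace(0.0, 1.0, n)
      w = np.full(n, 1.0 / (n - 1)); w[[0, -1]] *= 0.5   # trapezoid weights
      K = np.exp(-np.abs(t[:, None] - t[None, :]))        # assumed kernel
      lam, y = 0.3, np.sin(np.pi * t)

      x = y.copy()
      for k in range(100):                                # Picard updates
          x_new = y + lam * (K * w) @ x
          if np.max(np.abs(x_new - x)) < 1e-12:
              break
          x = x_new
      print("converged after", k, "iterations")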

  9. Various contract settings and their impact on the cost of medical services.

    PubMed

    Magnezi, R; Dankner, R; Kedem, R; Reuveni, H

    2007-03-01

    This study analyzes the effect of outsourcing healthcare for career soldiers in the Israel Defense Forces (IDF) in different settings, so as to develop a model for predicting per capita medical costs. Demographic information and data on healthcare utilization and costs were gathered from three computerized billing database systems: the IDF Medical Corps, a civilian hospital, and a healthcare fund, providing services to 3,746; 3,971; and 6,400 career soldiers, respectively. Visits to primary care physicians and specialists, laboratory and imaging exams, number of sick-leave days, and hospitalization days were totaled for men and women separately for each type of clinic. A uniform cost was assigned to each type of treatment to create an average annual per capita cost for medical services of career soldiers. Significantly more visits to primary care physicians and specialists, as well as imaging examinations, were recorded by Leumit Healthcare Services (LHS) than in hospitals or military clinics (p < 0.001). The numbers of referrals to emergency rooms and sick-leave days were lowest in the LHS as compared to the hospital and military clinics (p < 0.001). The medical cost per capita/year was lowest in LHS as well. Outsourcing primary care for career soldiers to a civilian healthcare fund represents a major cost-effective change, with the lowest consumption and cost of medical care. Co-payment should be integrated into every agreement with the medical corps.

  10. Diagnostic value of JC/BK virus antibody immunohistochemistry staining in urine samples from posttransplant immunosuppressed patients in relation to polyomavirus reactivation.

    PubMed

    Yuste, Rosario Sanchez; Frías, Carolina; López, Ana; Vallejo, Carlos; Martín, Paloma; Bellas, Carmen

    2008-01-01

    To compare the diagnostic value of cytology and immunohistochemistry staining (IHS) of urine samples for polyomavirus reactivation diagnosis. Sixty-eight urine samples collected from 18 immunosuppressed patients were analyzed by Papanicolaou staining and IHS with a JC/BK virus-specific monoclonal antibody. Overall, polyomavirus BK (BKV) was positive in 11 of 18 patients (61.1%) (3 of whom developed hemorrhagic cystitis) and in 23 of 68 urine samples (28%). Of the 23 samples, 4 (17%) were positive by only 1 of the 2 techniques, and 19 (83%) were positive by both methods. In matching urine samples from the same patient, the number of BKV-infected positive cells detected by IHS in urine slides was higher than that detected by Papanicolaou staining (71.3%). The main advantage of IHS is that it allowed confirmation of the BKV infection diagnosis in urine samples. IHS detected more BKV-infected cells in samples with few positive urothelial cells, which would have gone undetected if only Papanicolaou staining had been used as the BKV screening method. Testing urine samples for BKV by both techniques will improve diagnosis in asymptomatic patients, allowing early therapeutic intervention and a better clinical outcome.

  11. A hypercube compact neural network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rostykus, P.L.; Somani, A.K.

    1988-09-01

    A major problem facing implementation of neural networks is the connection problem. One popular tradeoff is to remove connections, but random disconnection severely degrades the capabilities. The hypercube-based Compact Neural Network (CNN) has a structured architecture that, combined with a rearrangement of the memory vectors, gives a larger input space and better degradation than a cost-equivalent network with more connections. The CNNs are based on a Hopfield network. The changes from the Hopfield net include states of -1 and +1; when a node evaluates to 0, it is not biased either positive or negative, but instead resumes its previous state. Here L is the number of PEs, N the number of memories, and t_ij the weight between nodes i and j.
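
    A minimal sketch of the described update rule in Python: a Hopfield net with Hebbian weights t_ij, states of -1 and +1, and a node holding its previous state when its activation evaluates to 0. The hypercube wiring and the CNN's memory-vector rearrangement are not modeled here:

      import numpy as np

      rng = np.random.default_rng(1)
      memories = rng.choice([-1, 1], size=(3, 16))   # N = 3 memories, L = 16 PEs
      T = memories.T @ memories                      # Hebbian weights t_ij
      np.fill_diagonal(T, 0)

      state = memories[0].copy()
      state[:4] *= -1                                # corrupt a stored pattern
      for _ in range(10):                            # asynchronous update sweeps
          for i in rng.permutation(len(state)):
              h = T[i] @ state
              if h != 0:                             # on 0, keep the previous state
                  state[i] = 1 if h > 0 else -1
      print("recovered:", bool(np.array_equal(state, memories[0])))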

  12. Mapping implicit spectral methods to distributed memory architectures

    NASA Technical Reports Server (NTRS)

    Overman, Andrea L.; Vanrosendale, John

    1991-01-01

    Spectral methods have proven invaluable in the numerical simulation of PDEs (partial differential equations), but the frequent global communication they require raises a fundamental barrier to their use on highly parallel architectures. To explore this issue, a 3-D implicit spectral method was implemented on an Intel hypercube. Utilization of about 50 percent was achieved on a 32-node iPSC/860 hypercube for a 64 x 64 x 64 Fourier-spectral grid; finer grids yield higher utilizations. Chebyshev-spectral grids are more problematic, since plane-relaxation based multigrid is required. However, by using a semicoarsening multigrid algorithm, and by relaxing all multigrid levels concurrently, relatively high utilizations were also achieved in this harder case.

  13. Maximum Torque and Momentum Envelopes for Reaction Wheel Arrays

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Reynolds, Reid G.; Liu, Frank X.; Lebsock, Kenneth L.

    2009-01-01

    Spacecraft reaction wheel maneuvers are limited by the maximum torque and/or angular momentum that the wheels can provide. For an n-wheel configuration, the torque or momentum envelope can be obtained by projecting the n-dimensional hypercube, representing the domain boundary of individual wheel torques or momenta, into three-dimensional space via the 3xn matrix of wheel axes. In this paper, the properties of the projected hypercube are discussed, and algorithms are proposed for determining this maximal torque or momentum envelope for general wheel configurations. Practical strategies for distributing a prescribed torque or momentum among the n wheels are presented, with special emphasis on configurations of four, five, and six wheels.

  14. Ordered fast Fourier transforms on a massively parallel hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Tong, Charles; Swarztrauber, Paul N.

    1991-01-01

    The present evaluation of alternative designs for ordered radix-2 decimation-in-frequency FFT algorithms on massively parallel hypercube processors gives attention to the reduction of communication, which dominates the computation time. A combination of the order and computational phases of the FFT is accordingly employed, in conjunction with sequence-to-processor maps which reduce communication. Two orderings, 'standard' and 'cyclic', in which the order of the transform is the same as that of the input sequence, can be implemented with ease on the Connection Machine (where orderings are determined by geometries and priorities). A parallel method for trigonometric coefficient computation is presented which does not employ trigonometric functions or interprocessor communication.

  15. Parallel machine architecture for production rule systems

    DOEpatents

    Allen, Jr., John D.; Butler, Philip L.

    1989-01-01

    A parallel processing system for production rule programs utilizes a host processor for storing production rule right-hand sides (RHSs) and a plurality of rule processors for storing left-hand sides (LHSs). The rule processors operate in parallel in the recognize phase of the system's Recognize-Act cycle to match their respective LHSs against a stored list of working memory elements (WMEs) in order to find a self-consistent set of WMEs. The list of WMEs is dynamically varied during the Act phase of the system, in which the host executes or fires rule RHSs for those rules for which a self-consistent set has been found by the rule processors. The host transmits instructions for creating or deleting working memory elements as dictated by the rule firings until the rule processors are unable to find any further self-consistent working memory element sets, at which time the production rule system is halted.

  16. Mock Juror Perceptions of Rape Victims: Impact of Case Characteristics and Individual Differences.

    PubMed

    Sommer, Shannon; Reynolds, Joshua J; Kehn, Andre

    2016-10-01

    The purpose of the present study was to examine mock juror perceptions of rape victims based on the sex of the offender and victim (male offender/female victim vs. female offender/male victim), relationship to the offender (stranger vs. acquaintance vs. intimate partner), revictimization (no revictimization vs. revictimization), and individual differences in rape myth acceptance (RMA) and life history strategy (LHS). Participants (N = 332) read a vignette describing a forcible rape scenario and completed victim and perpetrator blame scales, the Mini-K, and a gender-neutral Rape Myth Acceptance Scale. Results indicated increased victim blame in revictimization conditions, as well as in female offender/male victim conditions. A significant mediation effect of LHS on victim blame through the indirect effect of RMA was found, as predicted by life history theory. Implications of these findings are discussed. © The Author(s) 2015.

  17. PC-CUBE: A Personal Computer Based Hypercube

    NASA Technical Reports Server (NTRS)

    Ho, Alex; Fox, Geoffrey; Walker, David; Snyder, Scott; Chang, Douglas; Chen, Stanley; Breaden, Matt; Cole, Terry

    1988-01-01

    PC-CUBE is an ensemble of IBM PCs or close compatibles connected in the hypercube topology with ordinary computer cables. Communication occurs at the rate of 115.2 kbaud via the RS-232 serial links. Available for PC-CUBE are the Crystalline Operating System III (CrOS III), the Mercury Operating System, and CUBIX and PLOTIX, which are parallel I/O and graphics libraries. A CrOS performance monitor was developed to facilitate the measurement of the communication and computation time of a program and their effects on performance. Also available are CXLISP, a parallel version of the XLISP interpreter; GRAFIX, some graphics routines for the EGA and CGA; and a general execution profiler for determining the execution time spent by program subroutines. PC-CUBE provides a programming environment similar to all hypercube systems running CrOS III, Mercury and CUBIX. In addition, every node (personal computer) has its own graphics display monitor and storage devices. These allow data to be displayed or stored at every processor, which has much instructional value and enables easier debugging of applications. Some application programs, taken from the book Solving Problems on Concurrent Processors (Fox 88), were implemented with graphics enhancement on PC-CUBE. The applications range from the Mandelbrot set, the Laplace equation, the wave equation, and long-range force interactions to WaTor, an ecological simulation.

  18. Ordered fast fourier transforms on a massively parallel hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Tong, Charles; Swarztrauber, Paul N.

    1989-01-01

    Design alternatives for ordered Fast Fourier Transform (FFT) algorithms were examined on massively parallel hypercube multiprocessors such as the Connection Machine. Particular emphasis is placed on reducing communication, which is known to dominate the overall computing time. To this end, the order and computational phases of the FFT were combined, and sequence-to-processor maps that reduce communication were used. The class of ordered transforms is expanded to include any FFT in which the order of the transform is the same as that of the input sequence. Two such orderings are examined, namely standard-order and A-order, which can be implemented with equal ease on the Connection Machine, where orderings are determined by geometries and priorities. If the sequence has N = 2^r elements and the hypercube has P = 2^d processors, then a standard-order FFT can be implemented with d + r/2 + 1 parallel transmissions. An A-order sequence can be transformed with 2d - r/2 parallel transmissions, which is r - d + 1 fewer than the standard order. A parallel method for computing the trigonometric coefficients is presented that does not use trigonometric functions or interprocessor communication. A performance of 0.9 GFLOPS was obtained for an A-order transform on the Connection Machine.
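
    As a quick check of the quoted counts, a worked example in LaTeX with r = 20 and d = 10 (N = 2^20 points on P = 2^10 processors):

      \[
        \underbrace{d + \tfrac{r}{2} + 1}_{\text{standard-order}} = 21,
        \qquad
        \underbrace{2d - \tfrac{r}{2}}_{\text{A-order}} = 10,
        \qquad
        21 - 10 = r - d + 1 = 11 .
      \]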

  19. Transcriptomic study to understand thermal adaptation in a high temperature-tolerant strain of Pyropia haitanensis

    PubMed Central

    Wang, Wenlei; Teng, Fei; Lin, Yinghui; Ji, Dehua; Xu, Yan; Chen, Changsheng

    2018-01-01

    Pyropia haitanensis, a high-yield commercial seaweed in China, is currently undergoing increasing levels of high-temperature stress due to gradual global warming. The mechanisms of plant responses to high temperature stress vary not only with plant type but also with the degree and duration of the high temperature. To understand the mechanism underlying thermal tolerance in P. haitanensis, gene expression and regulation in response to short- and long-term high-temperature stress (SHS and LHS) were investigated by performing genome-wide high-throughput transcriptomic sequencing for a high temperature-tolerant strain (HTT). A total of 14,164 differentially expressed genes were identified as high temperature-responsive at one or more time points of the high-temperature treatment, representing 41.10% of the total number of unigenes. The present data indicated a decrease in the photosynthetic and energy metabolic rates in HTT to reduce unnecessary energy consumption, which in turn facilitated the rapid establishment of acclimatory homeostasis in its transcriptome during SHS. On the other hand, an increase in energy consumption and antioxidant substance activity was observed with LHS, which apparently facilitates the development of resistance to severe oxidative stress. Meanwhile, ubiquitin-mediated proteolysis, brassinosteroids, and heat shock proteins also play a vital role in HTT. The mechanisms by which HTT resists SHS and LHS were thus clearly different. The findings may facilitate further studies on gene discovery and the molecular mechanisms underlying high-temperature tolerance in P. haitanensis, as well as allow improvement of breeding schemes for high temperature-tolerant macroalgae that can resist global warming. PMID:29694388

  20. Transcriptomic study to understand thermal adaptation in a high temperature-tolerant strain of Pyropia haitanensis.

    PubMed

    Wang, Wenlei; Teng, Fei; Lin, Yinghui; Ji, Dehua; Xu, Yan; Chen, Changsheng; Xie, Chaotian

    2018-01-01

    Pyropia haitanensis, a high-yield commercial seaweed in China, is currently undergoing increasing levels of high-temperature stress due to gradual global warming. The mechanisms of plant responses to high temperature stress vary not only with plant type but also with the degree and duration of the high temperature. To understand the mechanism underlying thermal tolerance in P. haitanensis, gene expression and regulation in response to short- and long-term high-temperature stress (SHS and LHS) were investigated by performing genome-wide high-throughput transcriptomic sequencing for a high temperature-tolerant strain (HTT). A total of 14,164 differentially expressed genes were identified as high temperature-responsive at one or more time points of the high-temperature treatment, representing 41.10% of the total number of unigenes. The present data indicated a decrease in the photosynthetic and energy metabolic rates in HTT to reduce unnecessary energy consumption, which in turn facilitated the rapid establishment of acclimatory homeostasis in its transcriptome during SHS. On the other hand, an increase in energy consumption and antioxidant substance activity was observed with LHS, which apparently facilitates the development of resistance to severe oxidative stress. Meanwhile, ubiquitin-mediated proteolysis, brassinosteroids, and heat shock proteins also play a vital role in HTT. The mechanisms by which HTT resists SHS and LHS were thus clearly different. The findings may facilitate further studies on gene discovery and the molecular mechanisms underlying high-temperature tolerance in P. haitanensis, as well as allow improvement of breeding schemes for high temperature-tolerant macroalgae that can resist global warming.

  1. Monitoring of livestock grazing effects on Bureau of Land Management land

    USGS Publications Warehouse

    Veblen, Kari E.; Pyke, David A.; Aldridge, Cameron L.; Casazza, Michael L.; Assal, Timothy J.; Farinha, Melissa A.

    2013-01-01

    Public land management agencies, such as the Bureau of Land Management (BLM), are charged with managing rangelands throughout the western United States for multiple uses, such as livestock grazing and conservation of sensitive species and their habitats. Monitoring of the condition and trends of these rangelands, particularly with respect to the effects of livestock grazing, provides critical information for effective management of these multiuse landscapes. We therefore investigated the availability of livestock grazing-related quantitative monitoring data and qualitative region-specific Land Health Standards (LHS) data across BLM grazing allotments in the western United States. We then queried university and federal rangeland science experts about how best to prioritize rangeland monitoring activities. We found that the most commonly available monitoring data were permittee-reported livestock numbers and season-of-use data (71% of allotments), followed by repeat photo points (58%), estimates of forage utilization (52%), and, finally, quantitative vegetation measurements (37%). Of the 57% of allotments in which LHS had been evaluated as of 2007, the BLM indicated 15% had failed to meet LHS due to livestock grazing. A full complement of all types of monitoring data, however, existed for only 27% of those 15%. Our data inspections, as well as conversations with rangeland experts, indicated a need for greater emphasis on collection of grazing-related monitoring data, particularly ground cover. Prioritization of where monitoring activities should be focused, along with creation of regional monitoring teams, may help improve monitoring. Overall, increased emphasis on monitoring of BLM rangelands will require commitment at multiple institutional levels.

  2. Living high training low induces physiological cardiac hypertrophy accompanied by down-regulation and redistribution of the renin-angiotensin system

    PubMed Central

    Shi, Wei; Meszaros, J Gary; Zeng, Shao-ju; Sun, Ying-yu; Zuo, Ming-xue

    2013-01-01

    Aim: "Living high, training low" (LHTL) is an exercise-training protocol that refers to living under hypoxic stress while training at a normal O2 level. In this study, we investigated whether LHTL caused physiological heart hypertrophy accompanied by changes in biomarkers of the renin-angiotensin system in rats. Methods: Adult male SD rats were randomly assigned to 4 groups and trained on living low-sedentary (LLS, control), living low-training low (LLTL), living high-sedentary (LHS) and living high-training low (LHTL) protocols, respectively, for 4 weeks. Hematological parameters, hemodynamic measurements, heart hypertrophy and the plasma angiotensin II (Ang II) level of the rats were measured. The gene and protein expression of angiotensin-converting enzyme (ACE), angiotensinogen (AGT) and angiotensin II receptor I (AT1) in heart tissue was assessed using RT-PCR and immunohistochemistry, respectively. Results: LLTL, LHS and LHTL significantly improved cardiac function and increased hemoglobin concentration and RBC count. At the molecular level, LLTL, LHS and LHTL significantly decreased the expression of the ACE, AGT and AT1 genes, but increased the expression of the ACE and AT1 proteins in heart tissue. Moreover, ACE and AT1 protein expression was significantly increased in the endocardium, but unchanged in the epicardium. Conclusion: The LHTL training protocol suppresses ACE, AGT and AT1 gene expression in heart tissue, but increases ACE and AT1 protein expression specifically in the endocardium, suggesting that the physiological heart hypertrophy induced by LHTL is regulated by region-specific expression of renin-angiotensin system components. PMID:23377552

  3. Stochastic species abundance models involving special copulas

    NASA Astrophysics Data System (ADS)

    Huillet, Thierry E.

    2018-01-01

    Copulas offer a very general tool to describe the dependence structure of random variables supported by the hypercube. Inspired by problems of species abundances in biology, we study three distinct toy models where copulas play a key role. In the first, a Marshall-Olkin copula arises in a species extinction model with catastrophe. In the second, a quasi-copula problem arises in a flagged species abundance model. In the third, we study completely random species abundance models in the hypercube, such as those that are not of product type, with uniform margins and singular. These can be understood from a singular copula supported by an inflated simplex. An exchangeable singular Dirichlet copula is also introduced, together with its induced completely random species abundance vector.
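
    For reference, the bivariate Marshall-Olkin copula in its standard form, written in LaTeX (the parameters alpha, beta in [0,1] come from the shock rates of the underlying exponential model, and the singular component sits on the curve u^alpha = v^beta):

      \[
        C_{\alpha,\beta}(u,v) \;=\; \min\!\left(u^{1-\alpha}\, v,\; u\, v^{1-\beta}\right),
        \qquad u, v \in [0,1]
      \]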

  4. Performance analysis of parallel branch and bound search with the hypercube architecture

    NASA Technical Reports Server (NTRS)

    Mraz, Richard T.

    1987-01-01

    With the availability of commercial parallel computers, researchers are examining new classes of problems which might benefit from parallel computing. This paper presents results of an investigation of the class of search-intensive problems. The specific problem discussed is the Least-Cost Branch and Bound search method for deadline job scheduling. The object-oriented design methodology was used to map the problem into a parallel solution. While the initial design was good for a prototype, the best performance resulted from fine-tuning the algorithm for a specific computer. The experiments analyze the computation time, the speedup over a VAX 11/785, and the load balance of the problem when using a loosely coupled multiprocessor system based on the hypercube architecture.

  5. A site-oriented supercomputer for theoretical physics: The Fermilab Advanced Computer Program Multi Array Processor System (ACPMAPS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nash, T.; Atac, R.; Cook, A.

    1989-03-06

    The ACPMAPS multiprocessor is a highly cost-effective, local-memory parallel computer with a hypercube or compound hypercube architecture. Communication requires the attention of only the two communicating nodes. The design is aimed at floating-point-intensive, grid-like problems, particularly those with extreme computing requirements. The processing nodes of the system are single-board array processors, each with a peak power of 20 Mflops, supported by 8 Mbytes of data and 2 Mbytes of instruction memory. The system currently being assembled has a peak power of 5 Gflops. The nodes are based on the Weitek XL chip set. The system delivers performance at approximately $300/Mflop. 8 refs., 4 figs.

  6. Microgravity-Driven Optic Nerve/Sheath Biomechanics Simulations

    NASA Technical Reports Server (NTRS)

    Ethier, C. R.; Feola, A.; Myers, J. G.; Nelson, E.; Raykin, J.; Samuels, B.

    2016-01-01

    Visual Impairment and Intracranial Pressure (VIIP) syndrome is a concern for long-duration space flight. Current thinking suggests that the ocular changes observed in VIIP syndrome are related to cephalad fluid shifts resulting in altered fluid pressures [1]. In particular, we hypothesize that increased intracranial pressure (ICP) drives connective tissue remodeling of the posterior eye and optic nerve sheath (ONS). We describe here finite element (FE) modeling designed to understand how altered pressures, particularly altered ICP, affect the tissues of the posterior eye and ONS in VIIP. METHODS: Additional description of the modeling methodology is provided in the companion IWS abstract by Feola et al. In brief, a geometric model of the posterior eye and optic nerve, including the ONS, was created, and the effects of fluid pressures on tissue deformations were simulated. We considered three ICP scenarios: an elevated ICP assumed to occur in chronic microgravity, and ICP in the upright and supine positions on earth. Within each scenario we used Latin hypercube sampling (LHS) to consider a range of ICPs, ONH tissue mechanical properties, intraocular pressures (IOPs) and mean arterial pressures (MAPs). The outcome measures were biomechanical strains in the lamina cribrosa, optic nerve and retina; here we focus on peak values of these strains, since elevated strain alters cell phenotype and induces tissue remodeling. In 3D, the strain field can be decomposed into three orthogonal components, denoted as first, second and third principal strains. RESULTS AND CONCLUSIONS: For baseline material properties, increasing ICP from 0 to 20 mmHg significantly changed strains within the posterior eye and ONS (Fig. 1), indicating that elevated ICP affects ocular tissue biomechanics. Notably, strains in the lamina cribrosa and retina became less extreme as ICP increased; however, within the optic nerve, the occurrence of such extreme strains greatly increased as ICP was elevated (Fig. 2). In particular, c. 48% of simulations in the elevated ICP condition showed peak strains in the optic nerve that exceeded the strains expected on earth. Such extreme strains are likely important, since they represent a larger signal for mechano-responsive resident cells [2]. The models predicted little to no anterior motion of the prelaminar neural tissue (optic nerve swelling, or papilledema, secondary to axoplasmic stasis), typically seen with elevated ICP. Specialized FE models to capture axoplasmic stasis would be required to study papilledema. These results suggest that the most notable effect of elevated ICP may occur via direct optic nerve loading, rather than through connective tissue deformation. These FE models can inform the design of future studies designed to bridge the gap between biomechanics and pathophysiological function in VIIP.
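
    A minimal sketch of the sampling step in Python; the pressure ranges are assumed placeholders for one scenario, and the finite-element solve is represented by a stub:

      import numpy as np
      from scipy.stats import qmc

      sampler = qmc.LatinHypercube(d=3, seed=42)     # ICP, IOP, MAP
      unit = sampler.random(n=500)
      # Hypothetical elevated-ICP scenario bounds, in mmHg.
      lo, hi = np.array([10.0, 10.0, 70.0]), np.array([20.0, 21.0, 110.0])
      samples = qmc.scale(unit, lo, hi)

      for icp, iop, map_ in samples[:3]:
          # run_fe_model(icp, iop, map_) would return peak principal strains here.
          print(f"ICP={icp:5.1f}  IOP={iop:5.1f}  MAP={map_:6.1f}")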

  7. Polyhedra and Higher Dimensions.

    ERIC Educational Resources Information Center

    Scott, Paul

    1988-01-01

    Describes the definition and characteristics of a regular polyhedron, tessellation, and pseudopolyhedra with diagrams. Discusses the nature of simplex, hypercube, and cross-polytope in the fourth dimension and beyond. (YP)

  8. Estimation of the discharges of the multiple water level stations by multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kazuhiro; Miyamoto, Mamoru; Yamakage, Yuzuru; Tsuda, Morimasa; Yanami, Hitoshi; Anai, Hirokazu; Iwami, Yoichi

    2016-04-01

    This presentation shows two aspects of parameter identification for estimating the discharges of multiple water level stations by multi-objective optimization. One is how to adjust the parameters to estimate the discharges accurately; the other is which optimization algorithms are suitable for the parameter identification. Regarding previous studies, there is a study that minimizes the weighted error of the discharges of multiple water level stations by single-objective optimization, and there are some studies that minimize multiple error assessment functions of the discharge of a single water level station by multi-objective optimization. This presentation features the simultaneous minimization of the errors of the discharges of multiple water level stations by multi-objective optimization. The Abe River basin in Japan is targeted. The basin area is 567.0 km2, and there are thirteen rainfall stations and three water level stations. Nine flood events are investigated; they occurred from 2005 to 2012, and their maximum discharges exceed 1,000 m3/s. The discharges are calculated with the PWRI distributed hydrological model. The basin is partitioned into meshes of 500 m x 500 m, and two-layer tanks are placed on each mesh. Fourteen parameters are adjusted to estimate the discharges accurately: twelve of them are hydrological parameters and two are parameters of the initial water levels of the tanks. The three objective functions are the mean squared errors between the observed and calculated discharges at the water level stations. Latin hypercube sampling is one of the uniform sampling algorithms. The discharges are calculated with respect to the parameter values sampled by a simplified version of Latin hypercube sampling, and the observed discharge is surrounded by the calculated discharges; this suggests that it might be possible to estimate the discharge accurately by adjusting the parameters. It is true that the discharge of a water level station can be accurately estimated by setting the parameter values optimized for that station. However, there are cases in which the discharge calculated with the parameter values optimized for one water level station does not match the observed discharge at another station, and it is important to estimate the discharges of all the water level stations with some degree of accuracy. It turns out to be possible to select parameter values from the Pareto-optimal solutions by the condition that all the errors, normalized by the minimum error of the corresponding water level station, are under 3. The optimization performance of five implementations of the algorithms and a simplified version of Latin hypercube sampling are compared. The five implementations are NSGA2 and PAES of the optimization software inspyred, and MCO_NSGA2R, MOPSOCD and NSGA2R_NSGA2R of the statistical software R. NSGA2, PAES and MOPSOCD are a genetic algorithm, an evolution strategy and a particle swarm optimization, respectively. The number of evaluations of the objective functions is 10,000. The two implementations of NSGA2 in R outperform the others; they are promising candidates for the parameter identification of the PWRI distributed hydrological model.
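
    A minimal sketch of the stated selection rule in Python, with placeholder error values: keep the Pareto solutions whose mean squared error at every station is within 3x the best error attained for that station:

      import numpy as np

      rng = np.random.default_rng(7)
      E = rng.lognormal(sigma=0.5, size=(50, 3))   # MSE of 50 Pareto solutions at 3 stations
      ratio = E / E.min(axis=0)                    # normalize by each station's best error
      keep = (ratio < 3.0).all(axis=1)
      print(int(keep.sum()), "of", len(E), "Pareto solutions satisfy the criterion")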

  9. Data resources for range-wide assessment of livestock grazing across the sagebrush biome

    USGS Publications Warehouse

    Assal, T.J.; Veblen, K.E.; Farinha, M.A.; Aldridge, Cameron L.; Casazza, Michael L.; Pyke, D.A.

    2012-01-01

    The data contained in this series were compiled, modified, and analyzed for the U.S. Geological Survey (USGS) report "Range-Wide Assessment of Livestock Grazing Across the Sagebrush Biome." This report can be accessed through the USGS Publications Warehouse (online linkage: http://pubs.usgs.gov/of/2011/1263/). The dataset contains spatial and tabular data related to Bureau of Land Management (BLM) grazing allotments. We reviewed the BLM national grazing allotment spatial dataset available from the GeoCommunicator National Integrated Land System (NILS) website in 2007 (http://www.geocommunicator.gov). We identified several limitations in those data and learned that some BLM State and/or field offices had updated their spatial data to rectify these limitations but maintained the data outside of NILS. We contacted the appropriate BLM offices (State or field, 25 in all) to obtain the most recent data, assessed the data, established a data development protocol, and compiled the data into a topologically enforced dataset throughout the area of interest for this project (that is, the pre-settlement distribution of Greater Sage-Grouse in the Western United States). The final database includes three spatial datasets: Allotments (BLM Grazing Allotments), OUT_Polygons (nonallotment polygons used to ensure topology), and Duplicate_Polygon_Allotments. See Appendix 1 of the aforementioned report for complete methods. The tabular data presented here consist of information synthesized by the Land Health Standard (LHS) analysis (Appendix 2) and data obtained from the BLM Rangeland Administration System (http://www.blm.gov/ras/). In 2008, available LHS data for all allotments in all regions were compiled by the BLM in response to a Freedom of Information Act (FOIA) request made by a private organization. The BLM provided us with a copy of these data. These data provided three major types of information that were of interest: (1) the date(s) (if any) of the most recent LHS evaluation for each allotment; (2) whether, if evaluated, each region-specific standard (3–8 LHS depending on region) had been met on a given allotment; and (3) whether livestock contributed to any of these standards not being met. A description of how we processed the original data to prepare for analysis is given in Appendix 2, and the synthesized dataset can be found in the table "lhs_x_walk." Permitted use dates, livestock type (horse, sheep or cattle), number of livestock, and Animal Unit Months [the number of animal units (1,000-pound animal equivalents) that can be grazed for 31 days with the available forage in a sustainable manner] are the legal maximum grazing amounts for a given allotment, and legal adjustments to these numbers occur infrequently. We summarized permitted use by BLM allotment in the table "Permitted_Use." Billed use records are used for calculations of permittees' annual grazing bills. We summarized billed use by allotment and BLM grazing year in the table "Billed_Use." All three tables can be joined with the allotment spatial data in a geographic information system (GIS) environment, using the IDENT attribute as the primary key.

  10. Seasonal neuronal plasticity in song-control and auditory forebrain areas in subtropical nonmigratory and palearctic-indian migratory male songbirds.

    PubMed

    Surbhi; Rastogi, Ashutosh; Malik, Shalie; Rani, Sangeeta; Kumar, Vinod

    2016-10-01

    This study examines whether differences in annual life-history states (LHSs) among the inhabitants of two latitudes would have an impact on the neuronal plasticity of the song-control system in songbirds. At the times of equinoxes and solstices during the year (n = 4 per year), corresponding to different LHSs, we measured the volumetric changes and expression of doublecortin (DCX; an endogenous marker of neuronal recruitment) in the song-control nuclei and higher-order auditory forebrain regions of the subtropical resident Indian weaverbirds (Ploceus philippinus) and Palearctic-Indian migratory redheaded buntings (Emberiza bruniceps). Area X in the basal ganglia, the lateral magnocellular nucleus of the anterior nidopallium (LMAN), HVC (proper name), and the robust nucleus of the arcopallium (RA) were enlarged during the breeding LHS. Both round and fusiform DCX-immunoreactive (DCX-ir) cells were found in area X and HVC but not in LMAN or RA, with a significant seasonal difference. Also, as shown by an increase in volume and by dense, round DCX-ir cells, neuronal incorporation was increased in HVC alone during the breeding LHS. This suggests differences in the response of song-control nuclei to photoperiod-induced changes in LHSs. Furthermore, DCX immunoreactivity indicated participation of the cortical caudomedial nidopallium and caudomedial mesopallium in the song-control system, albeit with differences between the weaverbirds and the buntings. Overall, these results show seasonal neuronal plasticity in the song-control system closely associated with the annual reproductive LHS in both songbirds. Differences in the photoperiod-response system, between the relatively refractory weaverbirds and the absolutely refractory redheaded buntings, probably account for the differences between the species. J. Comp. Neurol. 524:2914-2929, 2016. © 2016 Wiley Periodicals, Inc.

  11. How can the curation of hands-on STEM activities power successful mobile apps and websites?

    NASA Astrophysics Data System (ADS)

    Porcello, D.; Peticolas, L. M.; Schwerin, T. G.

    2015-12-01

    The Lawrence Hall of Science (LHS) is University of California, Berkeley's public science center. Over the last decade, the Center for Technology Innovation at LHS has partnered with many institutions to establish a strong track record of developing successful technology solutions to support STEM teaching and learning within informal environments. Curation by subject-matter experts has been at the heart of many educational technology products from LHS and its partners that are directed at educators and families. This work includes (1) popular digital libraries for inquiry-based activities at Howtosmile.org (NSF DRL #0735007) and NASA Earth and Space science education resources at NASAwavelength.org, and (2) novel mobile apps like DIY Sun Science (NASA NNX10AE05G) and DIY Human Body (NIH 5R25OD010543) designed to scaffold exploration of STEM phenomena at home. Both NASA Wavelength and DIY Sun Science arose out of long-term collaborations with the Space Sciences Laboratory at UC Berkeley, the Institute for Global Environmental Strategies (IGES), and other NASA-funded organizations, in partnership with NASA through cooperative agreements. This session will review the development, formative evaluation, and usage metrics for these two Earth and Space science-themed educational technology products directly relevant to the AGU community. Questions reviewed by presenters will include: What makes a good hands-on activity, and what essential information do educators depend on when searching for programming additions? What content and connections do families need to explore hands-on activities? How can technology help incorporate educational standards into the discovery process for learning experiences online? How do all these components drive the design and user experience of websites and apps that showcase STEM content?

  12. Comparative effects of sub-stimulating concentrations of non-human versus human Luteinizing Hormones (LH) or chorionic gonadotropins (CG) on adenylate cyclase activation by forskolin in MLTC cells.

    PubMed

    Nguyen, Thi-Mong Diep; Filliatreau, Laura; Klett, Danièle; Combarnous, Yves

    2018-05-15

    We have compared various Luteinizing Hormone (LH) and Chorionic Gonadotropin (CG) preparations from non-human and human species in their ability to synergize with 10 µM forskolin (FSK) for cyclic AMP intracellular accumulation in MLTC cells. LH from rat pituitary, various isoforms of pituitary ovine, bovine, porcine, equine, and human LHs, and equine and human CG were studied. In addition, recombinant human LH and CG were compared with the natural human and non-human hormones. Sub-stimulating concentrations of all LHs and CGs (2-100 pM) were found to stimulate cyclic AMP accumulation in MLTC cells in the presence of an FSK concentration (10 µM) that is itself non-stimulating. Like rat LH, the most homologous available hormone for mouse MLTC cells, all non-human LHs and CG exhibit a strong potentiating effect on the FSK response. The natural and recombinant human hLH and hCG also do so, but in addition they were found to elicit a permissive effect on FSK stimulation. Indeed, when incubated alone with MLTC cells at non-stimulating concentrations (2-70 pM), hLH and hCG permit, after being removed, a dose-dependent cyclic AMP accumulation with 10 µM FSK. Our data show a clear-cut difference between human LH and CG and their non-human counterparts in the control of adenylate cyclase activity in MLTC cells. This points out the risk of using hCG as a reference ligand for LHR in studies using non-human cells. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Surface Plasmon Polariton-Assisted Long-Range Exciton Transport in Monolayer Semiconductor Lateral Heterostructure

    NASA Astrophysics Data System (ADS)

    Shi, Jinwei; Lin, Meng-Hsien; Chen, Yi-Tong; Estakhri, Nasim Mohammadi; Tseng, Guo-Wei; Wang, Yanrong; Chen, Hung-Ying; Chen, Chun-An; Shih, Chih-Kang; Alù, Andrea; Li, Xiaoqin; Lee, Yi-Hsien; Gwo, Shangjr

    Recently, two-dimensional (2D) semiconductor heterostructures, i.e., atomically thin lateral heterostructures (LHSs) based on transition metal dichalcogenides (TMDs), have been demonstrated. In an optically excited LHS, exciton transport is typically limited to a rather short spatial range (~1 micron). Furthermore, additional losses may occur at the lateral interfacial regions. Here, to overcome these challenges, we experimentally implement a planar metal-oxide-semiconductor (MOS) structure by placing a monolayer WS2/MoS2 LHS on top of an Al2O3-capped Ag single-crystalline plate. We found that the exciton transport range can be extended to tens of microns. The process of long-range exciton transport in the MOS structure is confirmed to be mediated by an exciton-surface plasmon polariton-exciton conversion mechanism, which allows a cascaded energy transfer process. Thus, the planar MOS structure provides a platform seamlessly combining 2D light-emitting materials with plasmonic planar waveguides, offering great potential for developing integrated photonic/plasmonic functionalities.

  14. Spectroscopic observations of cool degenerate star candidates

    NASA Technical Reports Server (NTRS)

    Hintzen, P.

    1986-01-01

    Spectroscopic observations are reported for 23 Luyten Half-Second degenerate star candidates and for 13 Luyten-Palomar common proper-motion pairs containing possible degenerate star components. Twenty-five degenerate stars are identified, 20 of which lack previous spectroscopy. Most of these stars are cool - Luyten color class g or later. One star, LP 77-57, shows broad continuum depressions similar to those in LHS 1126, which Liebert and Dahn attributed to pressure-shifted C2. A second degenerate star, LHS 290, exhibits apparent strong Swan bands which are blueshifted by about 75 Å. Further observations, including polarimetry and photometry, are required to appraise the spectroscopic peculiarities of these stars. Finally, five cool, sharp-lined DA white dwarfs have been observed to detect lines of metals and to determine line strengths. None of these DAs show signs of Mg b or the G band, and four show no evidence of Ca II K. The attempt to detect Ca II in the fifth star, G199-71, was inconclusive.

  15. Back-thrusting in Lesser Himalaya: Evidences from magnetic fabric studies in parts of Almora crystalline zone, Kumaun Lesser Himalaya

    NASA Astrophysics Data System (ADS)

    Agarwal, Amar; Agarwal, K. K.; Bali, R.; Prakash, Chandra; Joshi, Gaurav

    2016-06-01

    The present study aims to understand the evolution of the Lesser Himalaya, which consists of (meta)sedimentary and crystalline rocks. Field studies and microscopic and rock magnetic investigations have been carried out on the rocks near the South Almora Thrust (SAT) and the North Almora Thrust (NAT), which separate the Almora Crystalline Zone (ACZ) from the Lesser Himalayan sequences (LHS). The results show that along the South Almora Thrust the deformation is persistent; however, near the NAT the deformation pattern is complex and implies overprinting of the original shear sense by a younger deformational event. We attribute this overprinting to late-stage back-thrusting along the NAT, active after the emplacement of the ACZ. During this late-stage back-thrusting, rocks of the ACZ and LHS were coupled. Back-thrusts originated below the Lesser Himalayan rocks, probably from the Main Boundary Thrust, and propagated across the sedimentary and crystalline rocks. This study provides new results from multiple investigations and enhances our understanding of the evolution of the ACZ.

  16. Monte Carlo Study of Four-Dimensional Self-avoiding Walks of up to One Billion Steps

    NASA Astrophysics Data System (ADS)

    Clisby, Nathan

    2018-04-01

    We study self-avoiding walks on the four-dimensional hypercubic lattice via Monte Carlo simulations of walks with up to one billion steps. We study the expected logarithmic corrections to scaling, and find convincing evidence in support of the scaling form predicted by the renormalization group, with an estimate for the power of the logarithmic factor of 0.2516(14), which is consistent with the predicted value of 1/4. We also characterize the behaviour of the pivot algorithm for sampling four-dimensional self-avoiding walks, and conjecture that the probability of a pivot move being successful for an N-step walk is O([log N]^{-1/4}).
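
    As a rough illustration of the pivot algorithm discussed above, the following minimal Python sketch proposes lattice-symmetry pivots on a 4D walk and tracks the acceptance fraction. It uses a naive O(N) self-avoidance check per move; the paper's billion-step simulations rely on far more efficient data structures:

      import numpy as np

      rng = np.random.default_rng(0)

      def random_signed_permutation(d=4):
          # A random element of the hyperoctahedral group (a lattice symmetry).
          P = np.zeros((d, d), dtype=int)
          P[np.arange(d), rng.permutation(d)] = rng.choice([-1, 1], size=d)
          return P

      def pivot_once(walk):
          # Propose one pivot move; return the walk and whether it was accepted.
          n = len(walk)
          k = rng.integers(1, n - 1)          # pivot site
          P = random_signed_permutation()
          pivoted = walk[k] + (walk[k:] - walk[k]) @ P.T
          new = np.vstack([walk[:k], pivoted])
          # Self-avoidance check: all n lattice sites must be distinct.
          if len({tuple(p) for p in new}) == n:
              return new, True
          return walk, False

      # Start from a straight rod of N steps in 4D.
      N = 1000
      walk = np.zeros((N + 1, 4), dtype=int)
      walk[:, 0] = np.arange(N + 1)
      accepted = sum(pivot_once(walk)[1] for _ in range(200))
      print("acceptance fraction:", accepted / 200)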

  17. Domain decomposition methods in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Gropp, William D.; Keyes, David E.

    1991-01-01

    The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.
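
    A toy example of the divide-and-conquer idea (not the authors' backstep solver) is the classical alternating Schwarz iteration for a 1D Poisson problem on two overlapping subdomains, sketched below with numpy; each local solve is a small Dirichlet problem using boundary values from the other subdomain:

      import numpy as np

      # Solve -u'' = 1 on (0,1), u(0) = u(1) = 0, by alternating Schwarz.
      n = 101
      x = np.linspace(0.0, 1.0, n)
      h = x[1] - x[0]
      f = np.ones(n)
      u = np.zeros(n)

      def local_solve(u, lo, hi):
          # Dirichlet solve on interior points lo+1..hi-1, boundaries u[lo], u[hi].
          m = hi - lo - 1
          A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
               - np.diag(np.ones(m - 1), -1)) / h**2
          b = f[lo + 1:hi].copy()
          b[0] += u[lo] / h**2
          b[-1] += u[hi] / h**2
          u[lo + 1:hi] = np.linalg.solve(A, b)

      for _ in range(20):                 # Schwarz sweeps
          local_solve(u, 0, 60)           # subdomain 1: x[0..60]
          local_solve(u, 40, n - 1)       # subdomain 2: x[40..100], overlap 40..60
      exact = 0.5 * x * (1.0 - x)
      print("max error:", np.abs(u - exact).max())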

  18. A flexible importance sampling method for integrating subgrid processes

    DOE PAGES

    Raut, E. K.; Larson, V. E.

    2016-01-29

    Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is spatial integration over subgrid scales. The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories. The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). Here, the resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.
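
    A stripped-down Python illustration of the category-based importance sampling idea (with made-up numbers, not the SILHS implementation) is shown below: the domain is split into categories with known probabilities p, the modeler prescribes the sampling fraction q per category, and the weight p/q corrects the average:

      import numpy as np

      rng = np.random.default_rng(1)

      p = np.array([0.70, 0.25, 0.05])   # true category probabilities
      q = np.array([0.20, 0.30, 0.50])   # prescribed sampling fractions
      means = np.array([0.1, 1.0, 50.0]) # process-rate scale per category

      def process_rate(c, size):
          # Stand-in for an expensive microphysics evaluation.
          return rng.exponential(means[c], size)

      n = 4000
      counts = rng.multinomial(n, q)     # points drawn from each category
      estimate = 0.0
      for c, m in enumerate(counts):
          g = process_rate(c, m)
          # Importance weight p_c / q_c corrects for the prescribed densities.
          estimate += (p[c] / q[c]) * g.sum() / n
      print(estimate, "vs true grid-box average", (p * means).sum())

    Oversampling the rare but high-rate category (the analogue of the rain-evaporation region) reduces the variance of the estimate without biasing it.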

  19. Approximation of reliability of direct genomic breeding values

    USDA-ARS?s Scientific Manuscript database

    Two methods to efficiently approximate theoretical genomic reliabilities are presented. The first method is based on the direct inverse of the left hand side (LHS) of mixed model equations. It uses the genomic relationship matrix for a small subset of individuals with the highest genomic relationshi...

  20. Eggs Eggs Everywhere. Teacher's Guide. Preschool-1. LHS GEMS.

    ERIC Educational Resources Information Center

    Echols, Jean C.; Hosoume, Kimi; Kopp, Jaine

    This book supports the National Science Education Standards by giving children an understanding of the characteristics of organisms, outlining the life cycles of organisms, and showing how organisms relate to their environments. Interweaving life science with literature, mathematics, and physical sciences, the unit begins with children…

  1. Genetic code, hamming distance and stochastic matrices.

    PubMed

    He, Matthew X; Petoukhov, Sergei V; Ricci, Paolo E

    2004-09-01

    In this paper we use the Gray code representation of the genetic code C=00, U=10, G=11 and A=01 (C pairs with G, A pairs with U) to generate a sequence of genetic code-based matrices. In connection with these code-based matrices, we use the Hamming distance to generate a sequence of numerical matrices. We then further investigate the properties of the numerical matrices and show that they are doubly stochastic and symmetric. We determine the frequency distributions of the Hamming distances, building blocks of the matrices, decomposition and iterations of matrices. We present an explicit decomposition formula for the genetic code-based matrix in terms of permutation matrices, which provides a hypercube representation of the genetic code. It is also observed that there is a Hamiltonian cycle in a genetic code-based hypercube.
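
    The construction can be illustrated with a short Python sketch that builds the Hamming-distance matrix for all n-mers under the stated Gray code; the ordering of the n-mers below (lexicographic in C, U, G, A) is an assumption for illustration:

      import itertools
      import numpy as np

      # Gray-code representation of the bases from the paper.
      code = {"C": "00", "U": "10", "G": "11", "A": "01"}

      def hamming(a, b):
          return sum(x != y for x, y in zip(a, b))

      n = 2                                    # length of the oligonucleotides
      mers = ["".join(m) for m in itertools.product("CUGA", repeat=n)]
      bits = ["".join(code[b] for b in m) for m in mers]
      H = np.array([[hamming(a, b) for b in bits] for a in bits])

      print(np.array_equal(H, H.T))            # symmetric
      print(set(H.sum(axis=0)))                # constant row/column sums
      # Dividing by the constant row sum gives a doubly stochastic matrix.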

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, J.; Derrida, B.

    The problem of directed polymers on disordered hierarchical and hypercubic lattices is considered. For the hierarchical lattices the problem can be reduced to the study of the stable laws for combining random variables in a nonlinear way. The authors present the results of numerical simulations of two hierarchical lattices, finding evidence of a phase transition in one case. For a limiting case they extend the perturbation theory developed by Derrida and Griffiths to nonzero temperature and to higher order, and use this approach to calculate thermal and geometrical properties (overlaps) of the model. In this limit they obtain an interpolation formula, allowing one to obtain the noninteger moments of the partition function from the integer moments. They obtain bounds for the transition temperature for hierarchical and hypercubic lattices, and some similarities between the problem on the two different types of lattice are discussed.

  3. Construction of nested maximin designs based on successive local enumeration and modified novel global harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Yi, Jin; Li, Xinyu; Xiao, Mi; Xu, Junnan; Zhang, Lin

    2017-01-01

    Engineering design often involves different types of simulation, which result in expensive computational costs. Variable-fidelity approximation-based design optimization approaches can efficiently explore and optimize the design space using approximation models with different levels of fidelity, and have been widely used in different fields. The selection of sample points for variable-fidelity approximation models, called nested designs, is essential, as these points form the foundation of the approximation. In this article a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for the low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for the high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested design approach.
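
    For orientation, the sketch below generates a maximin Latin hypercube design by simple best-of-many random search; it is only a crude stand-in for the successive local enumeration and harmony-search methods the article proposes:

      import numpy as np
      from scipy.spatial.distance import pdist

      rng = np.random.default_rng(2)

      def latin_hypercube(n, d):
          # One point per equal-probability stratum in every coordinate.
          return np.column_stack([(rng.permutation(n) + rng.random(n)) / n
                                  for _ in range(d)])

      def maximin_lhs(n, d, tries=200):
          # Keep the random LHS with the largest minimum pairwise distance.
          best, best_dist = None, -1.0
          for _ in range(tries):
              X = latin_hypercube(n, d)
              dmin = pdist(X).min()
              if dmin > best_dist:
                  best, best_dist = X, dmin
          return best

      X_low = maximin_lhs(20, 2)   # e.g. a low-fidelity design of 20 points in 2D
      print(X_low.shape)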

  4. Pu239 Cross-Section Variations Based on Experimental Uncertainties and Covariances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sigeti, David Edward; Williams, Brian J.; Parsons, D. Kent

    2016-10-18

    Algorithms and software have been developed for producing variations in plutonium-239 neutron cross sections based on experimental uncertainties and covariances. The varied cross-section sets may be produced as random samples from the multi-variate normal distribution defined by an experimental mean vector and covariance matrix, or they may be produced as Latin-Hypercube/Orthogonal-Array samples (based on the same means and covariances) for use in parametrized studies. The variations obey two classes of constraints that are obligatory for cross-section sets and which put related constraints on the mean vector and covariance matrix that determine the sampling. Because the experimental means and covariances do not obey some of these constraints to sufficient precision, imposing the constraints requires modifying the experimental mean vector and covariance matrix. Modification is done with an algorithm based on linear algebra that minimizes changes to the means and covariances while ensuring that the operations that impose the different constraints do not conflict with each other.
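
    One common construction for such samples (a sketch under simplifying assumptions, not the authors' constrained algorithm) stratifies each standard-normal margin Latin-hypercube style and then imposes the covariance with a Cholesky factor:

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(3)

      def lhs_multivariate_normal(mean, cov, n):
          # LHS of a multivariate normal: stratify each margin into n
          # equal-probability bins, map through the normal inverse CDF,
          # then correlate the columns with a Cholesky factor of cov.
          d = len(mean)
          u = np.empty((n, d))
          for j in range(d):
              strata = (rng.permutation(n) + rng.random(n)) / n
              u[:, j] = norm.ppf(strata)
          L = np.linalg.cholesky(cov)
          return mean + u @ L.T

      # Toy 3-group "cross-section" mean vector and covariance matrix.
      mean = np.array([1.0, 2.0, 4.0])
      cov = np.array([[0.04, 0.01, 0.00],
                      [0.01, 0.09, 0.02],
                      [0.00, 0.02, 0.16]])
      samples = lhs_multivariate_normal(mean, cov, 1000)
      print(np.cov(samples.T))   # should approximate cov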

  5. Statistics of lattice animals

    NASA Astrophysics Data System (ADS)

    Hsu, Hsiao-Ping; Nadler, Walter; Grassberger, Peter

    2005-07-01

    The scaling behavior of randomly branched polymers in a good solvent is studied in two to nine dimensions, modeled by lattice animals on simple hypercubic lattices. For the simulations, we use a biased sequential sampling algorithm with re-sampling, similar to the pruned-enriched Rosenbluth method (PERM) used extensively for linear polymers. We obtain high statistics of animals with up to several thousand sites in all dimensions 2⩽d⩽9. The partition sum (number of different animals) and gyration radii are estimated. In all dimensions we verify the Parisi-Sourlas prediction, and we verify all exactly known critical exponents in dimensions 2, 3, 4, and ⩾8. In addition, we present the hitherto most precise estimates for growth constants in d⩾3. For clusters with one site attached to an attractive surface, we verify the superuniversality of the cross-over exponent at the adsorption transition predicted by Janssen and Lyssy.

  6. In Silico Oncology: Quantification of the In Vivo Antitumor Efficacy of Cisplatin-Based Doublet Therapy in Non-Small Cell Lung Cancer (NSCLC) through a Multiscale Mechanistic Model

    PubMed Central

    Kolokotroni, Eleni; Dionysiou, Dimitra; Veith, Christian; Kim, Yoo-Jin; Franz, Astrid; Grgic, Aleksandar; Bohle, Rainer M.; Stamatakos, Georgios

    2016-01-01

    The 5-year survival of non-small cell lung cancer patients can be as low as 1% in advanced stages. For patients with resectable disease, the successful choice of preoperative chemotherapy is critical to eliminate micrometastasis and improve operability. In silico experimentation can suggest the optimal treatment protocol for each patient based on their own multiscale data. A determinant for reliable predictions is the a priori estimation of the drugs' cytotoxic efficacy on cancer cells for a given treatment. In the present work a mechanistic model of cancer response to treatment is applied for the estimation of a plausible value range of the cell-killing efficacy of various cisplatin-based doublet regimens. Among others, the model incorporates the cancer-related mechanisms of uncontrolled proliferation, population heterogeneity, hypoxia, and treatment resistance. The methodology is based on the provision of tumor volumetric data at two time points, before and after or during treatment. It takes into account the effect of tumor microenvironment and cell repopulation on treatment outcome. A thorough sensitivity analysis based on one-factor-at-a-time and Latin hypercube sampling/partial rank correlation coefficient approaches has established the volume growth rate and the growth fraction at diagnosis as key features for more accurate estimates. The methodology is applied on the retrospective data of thirteen patients with non-small cell lung cancer who received cisplatin in combination with gemcitabine, vinorelbine or docetaxel in the neoadjuvant context. The selection of model input values has been guided by a comprehensive literature survey on cancer-specific proliferation kinetics. Latin hypercube sampling was used to compensate for patient-specific uncertainties. In conclusion, the present work provides a quantitative framework for the estimation of the in vivo cell-killing ability of various chemotherapies. Correlation studies of such estimates with the molecular profile of patients could serve as a basis for reliable personalized predictions. PMID:27657742
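
    To make the LHS/PRCC step concrete, the following Python sketch computes partial rank correlation coefficients for a toy two-parameter model; the parameter names and model are invented for illustration and are not the paper's:

      import numpy as np
      from scipy.stats import rankdata

      rng = np.random.default_rng(4)

      def prcc(X, y):
          # Rank-transform, then correlate the residuals of each parameter
          # and the output after regressing out the remaining parameters.
          n, k = X.shape
          R = np.column_stack([rankdata(X[:, j]) for j in range(k)])
          ry = rankdata(y)
          out = np.empty(k)
          for j in range(k):
              others = np.column_stack([np.ones(n), np.delete(R, j, axis=1)])
              rx = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
              rr = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
              out[j] = np.corrcoef(rx, rr)[0, 1]
          return out

      # LHS over two hypothetical parameters (growth rate, growth fraction).
      n = 200
      X = np.column_stack([(rng.permutation(n) + rng.random(n)) / n
                           for _ in range(2)])
      y = 3.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.standard_normal(n)
      print(prcc(X, y))   # strongly positive, then negative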

  7. Communicating Ocean Sciences to Informal Audiences (COSIA): Interim Evaluation Report

    ERIC Educational Resources Information Center

    St. John, Mark; Phillips, Michelle; Smith, Anita; Castori, Pam

    2009-01-01

    Communicating Ocean Sciences to Informal Audiences (COSIA) is a National Science Foundation (NSF)-funded project consisting of seven long-term three-way partnerships between the Lawrence Hall of Science (LHS) and an informal science education institution (ISEI) partnered with an institution of higher education (IHE). Together, educators from the…

  8. Communicating Ocean Sciences to Informal Audiences (COSIA): Final Evaluation Report

    ERIC Educational Resources Information Center

    Phillips, Michelle; St. John, Mark

    2010-01-01

    Communicating Ocean Sciences to Informal Audiences (COSIA) is a National Science Foundation (NSF)-funded project consisting of six three-way partnerships between the Lawrence Hall of Science (LHS) and an informal science education institution (ISEI) partnered with an institution of higher education (IHE). Together, educators from the ISEI (often…

  9. Acid Rain. Teacher's Guide. LHS GEMS.

    ERIC Educational Resources Information Center

    Hocking, Colin; Barber, Jacqueline; Coonrod, Jan

    This teacher's guide presents a unit on acid rain and introduces hands-on activities for sixth through eighth grade students. In each unit, students act as real scientists and gather evidence by using science process skills such as observing, measuring and recording data, classifying, role playing, problem solving, critical thinking, synthesizing…

  10. Experiments with Internal Television (ITV-LHS).

    ERIC Educational Resources Information Center

    Naeslund, Jon

    Experiments with internal television (ITV) at the Stockholm School of Education are reviewed in this newsletter. The purposes of the experiments have been (1) to study the effects of ITV illustrations of theoretical instruction in education and methodology, (2) to investigate the possibilities of conveying information in teacher training via ITV,…

  11. Knowing our neighbors: Fundamental properties of nearby stars

    NASA Astrophysics Data System (ADS)

    Bartlett, Jennifer Lynn

    The stars within 25 parsecs (pc) of our Sun constitute the one stellar sample that we aspire to know thoroughly, but we still have not even identified all of the stars within 10 pc. We have still less knowledge of the nearby substellar population, especially the planets. The four studies described herein expand our knowledge of the solar neighborhood. First, a re-analysis of the Leander McCormick Observatory photographic plates of Barnard's Star failed to detect any planets orbiting it, and this study would have detected planets with 2.2 Jupiter masses or greater. In addition, its parallax, proper motion, and secular acceleration were measured with results comparable with those from more modern equipment. Second, increased information about nearby planets was sought through time series analyses of astrometric residuals to stars observed by the University of Virginia Southern Parallax Program. Of these, LHS 288 displays an intriguing signal, which might be caused by a very low mass companion. Twelve other stars demonstrate no astrometric perturbations. While astrometry could reveal the presence of unseen companions, distances from trigonometric parallaxes define the solar neighborhood and identify its inhabitants. Preliminary parallaxes for 43 potential nearby stars being observed by the Cerro Tololo Inter-American Observatory Parallax Investigation (CTIOPI) confirmed 28 stars as being within 25 pc, including three stars---LP 991-84, LHS 6167, and LP 876-10---that probably lie within 10 pc. Three more stars lie near the 25-pc boundary and their final parallaxes may qualify them as nearby. One recently established neighbor, LP 869-26, is a potential binary. For many stars in this third sample, preliminary photometry ( V, R, and I bands), spectroscopy, and proper motions are also available. Despite the continuing importance of ground-based parallax measurements, few active programs remain. The final project tested the recently installed infrared camera on the 31-inch (0.8-meter) telescope at Fan Mountain Observatory for astrometric stability. A parallax program would be feasible there and could provide much needed distances for brown dwarfs and very low mass stars. Through this and similar efforts, we are establishing the foundations for understanding our Milky Way Galaxy, including its component stars and populations.

  12. High-Accuracy Finite Element Method: Benchmark Calculations

    NASA Astrophysics Data System (ADS)

    Gusev, Alexander; Vinitsky, Sergue; Chuluunbaatar, Ochbadrakh; Chuluunbaatar, Galmandakh; Gerdt, Vladimir; Derbov, Vladimir; Góźdź, Andrzej; Krassovitskiy, Pavel

    2018-02-01

    We describe a new high-accuracy finite element scheme with simplex elements for solving elliptic boundary-value problems, and show its efficiency on benchmark solutions of the Helmholtz equation for the triangular membrane and the hypercube.

  13. Performance Evaluation of Parallel Branch and Bound Search with the Intel iPSC (Intel Personal SuperComputer) Hypercube Computer.

    DTIC Science & Technology

    1986-12-01

    Table of contents excerpt: III. Analysis of Parallel Design — Parallel Abstract Data Types; Abstract Data Type; Parallel ADT; Data-Structure Design; Object-Oriented Design.

  14. A 2D electrostatic PIC code for the Mark III Hypercube

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferraro, R.D.; Liewer, P.C.; Decyk, V.K.

    We have implemented a 2D electrostatic plasma particle-in-cell (PIC) simulation code on the Caltech/JPL Mark IIIfp Hypercube. The code simulates plasma effects by evolving in time the trajectories of thousands to millions of charged particles subject to their self-consistent fields. Each particle's position and velocity is advanced in time using a leap frog method for integrating Newton's equations of motion in electric and magnetic fields. The electric field due to these moving charged particles is calculated on a spatial grid at each time by solving Poisson's equation in Fourier space. These two tasks represent the largest part of the computation. To obtain efficient operation on a distributed memory parallel computer, we are using the General Concurrent PIC (GCPIC) algorithm previously developed for a 1D parallel PIC code.
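
    The cycle described above can be caricatured in a one-dimensional, dimensionless Python sketch (this is not the GCPIC decomposition itself, just the serial deposit/field-solve/push loop with made-up units):

      import numpy as np

      rng = np.random.default_rng(5)

      ng, L, n_part, dt = 64, 2 * np.pi, 10000, 0.1
      dx = L / ng
      x = rng.uniform(0, L, n_part)
      v = 0.1 * rng.standard_normal(n_part)

      for step in range(100):
          # Nearest-grid-point charge deposition against a neutral background.
          cells = (x / dx).astype(int) % ng
          rho = np.bincount(cells, minlength=ng) / (n_part / ng) - 1.0
          # Spectral Poisson solve: -phi'' = rho  ->  phi_k = rho_k / k^2.
          k = np.fft.fftfreq(ng, d=dx) * 2 * np.pi
          rho_k = np.fft.fft(rho)
          phi_k = np.zeros(ng, dtype=complex)
          phi_k[1:] = rho_k[1:] / k[1:] ** 2
          E = np.real(np.fft.ifft(-1j * k * phi_k))    # E = -dphi/dx
          # Leap-frog-style push for unit-mass, charge -1 particles
          # (half-step initialization omitted for brevity).
          v -= E[cells] * dt
          x = (x + v * dt) % L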

  15. Discrete sensitivity derivatives of the Navier-Stokes equations with a parallel Krylov solver

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Taylor, Arthur C., III

    1994-01-01

    This paper solves an 'incremental' form of the sensitivity equations derived by differentiating the discretized thin-layer Navier Stokes equations with respect to certain design variables of interest. The equations are solved with a parallel, preconditioned Generalized Minimal RESidual (GMRES) solver on a distributed-memory architecture. The 'serial' sensitivity analysis code is parallelized by using the Single Program Multiple Data (SPMD) programming model, domain decomposition techniques, and message-passing tools. Sensitivity derivatives are computed for low and high Reynolds number flows over a NACA 1406 airfoil on a 32-processor Intel Hypercube, and found to be identical to those computed on a single-processor Cray Y-MP. It is estimated that the parallel sensitivity analysis code has to be run on 40-50 processors of the Intel Hypercube in order to match the single-processor processing time of a Cray Y-MP.

  16. SIAM Conference on Parallel Processing for Scientific Computing, 4th, Chicago, IL, Dec. 11-13, 1989, Proceedings

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack (Editor); Messina, Paul (Editor); Sorensen, Danny C. (Editor); Voigt, Robert G. (Editor)

    1990-01-01

    Attention is given to such topics as an evaluation of block algorithm variants in LAPACK, a large-grain parallel sparse system solver, a multiprocessor method for the solution of the generalized eigenvalue problem on an interval, and a parallel QR algorithm for iterative subspace methods on the CM2. A discussion of numerical methods includes the topics of asynchronous numerical solutions of PDEs on parallel computers, parallel homotopy curve tracking on a hypercube, and solving Navier-Stokes equations on the Cedar Multi-Cluster system. A section on differential equations includes a discussion of a six-color procedure for the parallel solution of elliptic systems using the finite quadtree structure, data parallel algorithms for the finite element method, and domain decomposition methods in aerodynamics. Topics dealing with massively parallel computing include hypercubes vs. 2-dimensional meshes and massively parallel computation of conservation laws. Performance and tools are also discussed.

  17. Plate Tectonics: The Way the Earth Works. Teacher's Guide. LHS GEMS.

    ERIC Educational Resources Information Center

    Cuff, Kevin

    This teacher guide presents a unit on plate tectonics and introduces hands-on activities for students in grades 6-8. In each unit, students act as real scientists and gather evidence by using science process skills such as observing, graphing, analyzing data, designing and making models, visualizing, communicating, theorizing, and drawing…

  18. Stories In Stone: Teacher's Guide. Grades 4-9. LHS GEMS.

    ERIC Educational Resources Information Center

    Cuff, Kevin; And Others

    While rocks and minerals are often regarded as among the most static and solid of objects, the body of knowledge of which they are part is always changing. This teacher's guide contains activities and experiments designed to enhance students' understanding of geology and petrology. By examining actual specimens of the Earth's crust, students learn…

  19. Do arbuscular mycorrhizal fungi with contrasting life-history strategies differ in their responses to repeated defoliation?

    PubMed

    Ijdo, Marleen; Schtickzelle, Nicolas; Cranenbrouck, Sylvie; Declerck, Stéphane

    2010-04-01

    Arbuscular mycorrhizal (AM) fungi obligatorily depend on carbon (C) resources provided via the plant and therefore fluctuations in C availability may strongly and differently affect AM fungi with different life-history strategies (LHS). In the present study, we examined the effect of repeated defoliation of in vitro grown barrel medic (Medicago truncatula) on the spore and auxiliary cell (AC) production dynamics of a presumed r-strategist (Glomus intraradices) and a presumed K-strategist (Dentiscutata reticulata). Glomus intraradices modulated the production of spores directly to C availability, showing direct investment in reproduction as expected for r-strategists. In contrast, AC production of D. reticulata was not affected after a single defoliation and thus showed higher resistance to fluctuating C levels, as expected for K-strategists. Our results demonstrate that plant defoliation affects the production of extraradical C storage structures of G. intraradices and D. reticulata differently. Our results contribute towards revealing differences in LHS among AM fungal species, a step further towards understanding their community dynamics in natural ecosystems and agroenvironments.

  20. Effect of familial sinistrality on planum temporale surface and brain tissue asymmetries.

    PubMed

    Tzourio-Mazoyer, Nathalie; Simon, Gregory; Crivello, Fabrice; Jobard, Gael; Zago, Laure; Perchey, Guy; Hervé, Pierre-Yves; Joliot, Marc; Petit, Laurent; Mellet, Emmanuel; Mazoyer, Bernard

    2010-06-01

    The impact of having left-handers (LHs) among one's close relatives, called familial sinistrality (FS), on neuroanatomical markers of left-hemisphere language specialization was studied in 274 normal adults, including 199 men and 75 women, among whom 77 men and 27 women were positive for FS. Measurements of the surface of a phonological cortical area, the "planum temporale" (PT), and gray and white matter hemispheric volumes and asymmetries were made using brain magnetic resonance images. The size of the left PT of subjects with left-handed close relatives (FS+) was reduced by 10%, decreasing with the number of left-handed relatives, and lowest when the subject's mother was left-handed. Such findings had no counterparts in the right hemisphere, and the subject's handedness and sex were found to have no significant effect or interaction with FS on the left PT size. The FS+ subjects also exhibited increased gray matter volume, reduced hemispheric gray matter leftward asymmetry, and, in LHs, reduced strength of hand preference. These results add to the increasing body of evidence suggesting multiple and somewhat independent mechanisms for the inheritance of hand and language lateralization.

  1. Chandra Observations of Magnetic White Dwarfs and their Theoretical Implications

    NASA Technical Reports Server (NTRS)

    Musielak, Z. E.; Noble, M.; Porter, J. G.; Winget, D. E.

    2003-01-01

    Observations of cool DA and DB white dwarfs have not yet been successful in detecting coronal X-ray emission, but observations of late-type dwarfs and giants show that coronae are common for these stars. To produce coronal X-rays, a star must have dynamo-generated surface magnetic fields and a well-developed convection zone. There is some observational evidence that the DA star LHS 1038 and the DB star GD 358 have weak and variable surface magnetic fields. It has been suggested that such fields can be generated by dynamo action, and since both stars have well-developed convection zones, theory predicts detectable levels of coronal X-rays from these white dwarfs. However, we present analysis of Chandra observations of both stars showing no detectable X-ray emission. The derived upper limits for the X-ray fluxes provide strong constraints on theories of formation of coronae around magnetic white dwarfs. Another important implication of our negative Chandra observations is the possibility that the magnetic fields of LHS 1038 and GD 358 are fossil fields.

  2. A new nitrilase-producing strain named Rhodobacter sphaeroides LHS-305: biocatalytic characterization and substrate specificity.

    PubMed

    Yang, Chunsheng; Wang, Xuedong; Wei, Dongzhi

    2011-12-01

    The characteristics of the new nitrilase-producing strain Rhodobacter sphaeroides LHS-305 were investigated. By investigating several parameters influencing nitrilase production, the specific cell activity was ultimately increased from 24.5 to 75.0 μmol g⁻¹ min⁻¹; among these parameters, the choice of inducer proved the most important. By analyzing the substrate spectrum, the aromatic nitriles (such as 3-cyanopyridine and benzonitrile) were found to be the most favorable substrates of the nitrilase. It was speculated that the unsaturated carbon atom attached to the cyano group is crucial for this type of nitrilase. The apparent Km, substrate inhibition constant, and product inhibition constant of the nitrilase for 3-cyanopyridine were 4.5 × 10⁻², 29.2, and 8.6 × 10⁻³ mol L⁻¹, respectively. When applied to nicotinic acid preparation, the nitrilase was able to hydrolyze 200 mmol L⁻¹ 3-cyanopyridine with a 93% conversion rate in 13 h using 6.1 g L⁻¹ cells (dry cell weight).

  3. A framework for building hypercubes using MapReduce

    NASA Astrophysics Data System (ADS)

    Tapiador, D.; O'Mullane, W.; Brown, A. G. A.; Luri, X.; Huedo, E.; Osuna, P.

    2014-05-01

    The European Space Agency's Gaia mission will create the largest and most precise three-dimensional chart of our galaxy (the Milky Way), by providing unprecedented position, parallax, proper motion, and radial velocity measurements for about one billion stars. The resulting catalog will be made available to the scientific community and will be analyzed in many different ways, including the production of a variety of statistics. The latter will often entail the generation of multidimensional histograms and hypercubes as part of the precomputed statistics for each data release, or for scientific analysis involving either the final data products or the raw data coming from the satellite instruments. In this paper we present and analyze a generic framework that allows the hypercube generation to be easily done within a MapReduce infrastructure, providing all the advantages of the new Big Data analysis paradigm but without dealing with any specific interface to the lower level distributed system implementation (Hadoop). Furthermore, we show how executing the framework for different data storage model configurations (i.e., row- or column-oriented) and compression techniques can considerably improve the response time of this type of workload for the currently available simulated data of the mission. In addition, we put forward the advantages and shortcomings of the deployment of the framework on a public cloud provider, benchmark against other popular solutions available (that are not always the best for such ad hoc applications), and describe some user experiences with the framework, which was employed for a number of dedicated astronomical data analysis techniques workshops.
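
    The core map/reduce pattern for building such hypercubes can be sketched in a few lines of Python (a toy two-axis histogram with invented record fields, not the paper's Hadoop framework):

      from collections import Counter
      from functools import reduce

      # Records are hypothetical (magnitude, color) pairs; the map step bins
      # each partition's records, the reduce step merges partial counts.
      def map_partition(records, bin_size=1.0):
          c = Counter()
          for mag, color in records:
              c[(int(mag / bin_size), int(color / bin_size))] += 1
          return c

      def reduce_counts(a, b):
          a.update(b)   # Counter.update adds counts
          return a

      partitions = [
          [(12.3, 0.4), (12.7, 0.5)],
          [(13.1, 0.4), (12.4, 1.2)],
      ]
      hypercube = reduce(reduce_counts, (map_partition(p) for p in partitions))
      print(dict(hypercube))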

  4. Disability in Activities of Daily Living and Severity of Dyskinesias Determine the Handicap of Parkinson's Disease Patients in Advanced Stage Selected to DBS.

    PubMed

    Coelho, Miguel; Abreu, Daisy; Correia-Guedes, Leonor; Lobo, Patricia Pita; Fabbri, Margherita; Godinho, Catarina; Domingos, Josefa; Albuquerque, Luisa; Freitas, Vanda; Pereira, João Miguel; Cattoni, Begona; Carvalho, Herculano; Reimão, Sofia; Rosa, Mário M; Ferreira, António Gonçalves; Ferreira, Joaquim J

    2017-01-01

    There are scarce data on the level of handicap in Parkinson's disease (PD) and none in advanced stage PD. To assess the handicap in advanced stage PD patients with disabling levodopa-induced motor complications selected for deep brain stimulation (DBS). Data were prospectively recorded during routine evaluation for DBS. Handicap was measured using the London Handicap Scale (LHS) (0 = maximal handicap; 1 = no handicap). Disease severity was evaluated using the Hoehn & Yahr scale and the UPDRS/MDS-UPDRS, during off and on after a supra-maximal dose of levodopa. The Schwab and England Scale (S&E) was scored in off and on. Dyskinesias were scored using the modified Abnormal Involuntary Movement Scale (mAIMS). Results concern the cross-sectional assessment before DBS. 100 PD patients (mean age 61 (±7.6) years; mean disease duration 12.20 (±4.6) years) were included. The median score of the motor MDS-UPDRS was 54 in off and 25 in on. The mean total LHS score was 0.56 (±0.14). Patients were handicapped in several domains with a wide range of severity. Physical Independence and Social Integration were the most affected domains. Determinants of the total LHS score were the MDS-UPDRS part II off (β = -0.271; p = 0.020), S&E on (β = 0.264; p = 0.005) and off (β = 0.226; p = 0.020), and mAIMS on (β = -0.183; p = 0.042) scores (R² = 29.6%). We were able to use handicap to measure overall health condition in advanced stage PD. Patients were moderately to highly handicapped, and this was strongly determined by disability in ADL and dyskinesias. Change in handicap may be a good patient-centred outcome to assess the efficiency of DBS.

  5. Inverted temperature sequences: role of deformation partitioning

    NASA Astrophysics Data System (ADS)

    Grujic, D.; Ashley, K. T.; Coble, M. A.; Coutand, I.; Kellett, D.; Whynot, N.

    2015-12-01

    The inverted metamorphism associated with the Main Central thrust zone in the Himalaya has been historically attributed to a number of tectonic processes. Here we show that there is actually a composite peak and deformation temperature sequence that formed in succession via different tectonic processes. Deformation partitioning seems to have played a key role, and the magnitude of each process has varied along strike of the orogen. To explain the formation of the inverted metamorphic sequence across the Lesser Himalayan Sequence (LHS) in eastern Bhutan, we used Raman spectroscopy of carbonaceous material (RSCM) to determine the peak metamorphic temperatures and Ti-in-quartz thermobarometry to determine the deformation temperatures, combined with thermochronology, including published apatite and zircon U-Th/He and fission-track data and new 40Ar/39Ar dating of muscovite. The dataset was inverted using 3D thermal-kinematic modeling to constrain the ranges of geological parameters such as fault geometry and slip rates, location and rates of localized basal accretion, and thermal properties of the crust. RSCM results indicate that there are two peak temperature sequences separated by a major thrust within the LHS. The internal temperature sequence shows an inverted peak temperature gradient of 12 °C/km; in the external (southern) sequence, the peak temperatures are constant across the structural sequence. Thermo-kinematic modeling suggests that the thermochronologic and thermobarometric data are compatible with a two-stage scenario: an Early-Middle Miocene phase of fast overthrusting of a hot hanging wall over a downgoing footwall and inversion of the synkinematic isotherms, followed by the formation of the external duplex developed by dominant underthrusting and basal accretion. To reconcile our observations with the experimental data, we suggest that pervasive ductile deformation within the upper LHS and along the Main Central thrust zone at its top stopped at ~11 Ma, at which time the deformation shifted and focused within the external duplex and the Main Boundary Thrust.

  6. Clonal growth and plant species abundance

    PubMed Central

    Herben, Tomáš; Nováková, Zuzana; Klimešová, Jitka

    2014-01-01

    Background and Aims Both regional and local plant abundances are driven by species' dispersal capacities and their abilities to exploit new habitats and persist there. These processes are affected by clonal growth, which is difficult to evaluate and compare across large numbers of species. This study assessed the influence of clonal reproduction on local and regional abundances of a large set of species and compared the predictive power of morphologically defined traits of clonal growth with data on actual clonal growth from a botanical garden. The role of clonal growth was compared with the effects of seed reproduction, habitat requirements and growth, proxied both by LHS (leaf–height–seed) traits and by actual performance in the botanical garden. Methods Morphological parameters of clonal growth, actual clonal reproduction in the garden and LHS traits (specific leaf area – height – seed mass) were used as predictors of species abundance, both regional (number of species records in the Czech Republic) and local (mean species cover in vegetation records), for 836 perennial herbaceous species. Species differences in habitat requirements were accounted for by classifying the dataset by habitat type and also by using Ellenberg indicator values as covariates. Key Results After habitat differences were accounted for, clonal growth parameters explained an important part of the variation in species abundance, both at regional and at local levels. At both levels, both greater vegetative growth in cultivation and greater lateral expansion trait values were correlated with higher abundance. Seed reproduction had weaker effects, being positive at the regional level and negative at the local level. Conclusions Morphologically defined traits are predictive of species abundance, and it is concluded that simultaneous investigation of several such traits can help develop hypotheses on specific processes (e.g. avoidance of self-competition, support of offspring) potentially underlying clonal growth effects on abundance. Garden performance parameters provide a practical approach to assessing the roles of clonal growth morphological traits (and LHS traits) for large sets of species. PMID:24482153

  7. RTNN: The New Parallel Machine in Zaragoza

    NASA Astrophysics Data System (ADS)

    Sijs, A. J. V. D.

    I report on the development of RTNN, a parallel computer designed as a 4^4 hypercube of 256 T9000 transputer nodes, each with 8 MB memory. The peak performance of the machine is expected to be 2.5 Gflops.

  8. Fault tolerant hypercube computer system architecture

    NASA Technical Reports Server (NTRS)

    Madan, Herb S. (Inventor); Chow, Edward (Inventor)

    1989-01-01

    A fault-tolerant multiprocessor computer system of the hypercube type comprising a hierarchy of computers of like kind which can be functionally substituted for one another as necessary is disclosed. Communication between the working nodes is via one communications network, while communications between the working nodes and the watch dog nodes and load balancing nodes higher in the structure is via another communications network separate from the first. A typical branch of the hierarchy reporting to a master node or host computer comprises: a plurality of first computing nodes; a first network of message conducting paths for interconnecting the first computing nodes as a hypercube, providing a path for message transfer between the first computing nodes; a first watch dog node; and a second network of message conducting paths for connecting the first computing nodes to the first watch dog node independent from the first network, providing an independent path for test message and reconfiguration-affecting transfers between the first computing nodes and the first watch dog node. There is additionally a plurality of second computing nodes; a third network of message conducting paths for interconnecting the second computing nodes as a hypercube, providing a path for message transfer between the second computing nodes; and a fourth network of message conducting paths for connecting the second computing nodes to the first watch dog node independent from the third network, providing an independent path for test message and reconfiguration-affecting transfers between the second computing nodes and the first watch dog node. A first multiplexer is disposed between the first watch dog node and the second and fourth networks, allowing the first watch dog node to selectively communicate with individual ones of the computing nodes through the second and fourth networks, and a second watch dog node is operably connected to the first multiplexer whereby the second watch dog node can selectively communicate with individual ones of the computing nodes through the second and fourth networks. The branch is completed by a first load balancing node and a second multiplexer connected between the first load balancing node and the first and second watch dog nodes, allowing the first load balancing node to selectively communicate with the first and second watch dog nodes.
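
    The hypercube interconnection underlying this architecture is easy to state in code: node labels are d-bit integers and links join labels that differ in exactly one bit, so a node's neighbors follow by flipping each bit in turn (a generic property of hypercube networks, not specific to the patent):

      # Neighbors of a node in a d-dimensional hypercube network.
      def neighbors(node: int, d: int) -> list[int]:
          return [node ^ (1 << i) for i in range(d)]

      print(neighbors(0b0101, 4))   # [4, 7, 1, 13]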

  9. Convoy Protection under Multi-Threat Scenario

    DTIC Science & Technology

    2017-06-01

    Subject terms: antisubmarine warfare, convoy protection, screening, design of experiments, agent-based simulation. Table of contents excerpt: 5. Scenarios 33–36 (Red Submarine Tactic-2); IV. Design of Experiment; C. Nearly Orthogonal Latin Hypercube Design; V. Data Analysis.

  10. Designing a Course in Statistics for a Learning Health Systems Training Program

    ERIC Educational Resources Information Center

    Samsa, Gregory P.; LeBlanc, Thomas W.; Zaas, Aimee; Howie, Lynn; Abernethy, Amy P.

    2014-01-01

    The core pedagogic problem considered here is how to effectively teach statistics to physicians who are engaged in a "learning health system" (LHS). This is a special case of a broader issue--namely, how to effectively teach statistics to academic physicians for whom research--and thus statistics--is a requirement for professional…

  11. U.S. EPA, Pesticide Product Label, VOLCK SUPREME OIL SPRAY, 02/07/1983

    EPA Pesticide Factsheets

    2011-04-13

  12. Knowing Our Neighbors: Fundamental Properties of Nearby Stars

    NASA Astrophysics Data System (ADS)

    Bartlett, Jennifer L.; Ianna, P. A.; Henry, T. J.; Begam, M. C.; Jao, W.; Subasavage, J. P., Jr.; Research Consortium on Nearby Stars

    2007-12-01

    Although the stars within 25 pc of the Sun constitute the one stellar sample that we can aspire to know thoroughly, we continue identifying objects closer than 10 pc. We know even less about local substellar populations, especially planets. The Cerro Tololo Inter-American Observatory Parallax Investigation (CTIOPI) is observing 31 late-type, red dwarfs selected for my thesis as part of a larger effort to complete the nearby star census. Preliminary parallaxes substantiate distances less than 25 pc for at least 28 stars. Of these, LP 991-84, LHS 6167, and LP 876-10 may lie within 10 pc. Preliminary proper motions for all but three stars exceed 0.2″ yr⁻¹. One recently established neighbor, LP 869-26, also appears to be a new binary. Associated VRI photometry and spectroscopy are in progress as well. Many of these stars are potential targets for astrometric planet searches, such as the Space Interferometry Mission (SIM). In addition to confirming solar neighborhood membership, astrometry can discover brown dwarfs and planets. Time-series analyses of residuals to the UVa Southern Parallax Program (SPP) observations are contributing to frequency and distribution data for nearby substellar objects. In particular, LHS 288 displays an intriguing signal, which might be caused by a very low-mass companion. Twelve other SPP stars demonstrate no significant perturbations. Finally, re-analyzing the Leander McCormick Observatory photographic plates of Barnard's Star failed to detect any planets orbiting it. This study of more than 900 exposures was sensitive to bodies of 2.2 Jupiter masses or more. NSF grants AST 98-20711 and 05-07711, GSU, NASA-SIM, Litton Marine Systems, UVa, Hampden-Sydney College, US Naval Observatory, and the Levinson Fund of the Peninsula Community Foundation supported this research. The ANU Research School of Astronomy & Astrophysics allocated observing time generously. CTIOPI was an NOAO Survey Program and continues as part of the SMARTS Consortium.

  13. SOD2 gene polymorphism and response of oxidative stress parameters in young wrestlers to a three-month training.

    PubMed

    Jówko, Ewa; Gierczuk, Dariusz; Cieśliński, Igor; Kotowska, Jadwiga

    2017-05-01

    The aim of the study was to analyse the effect of the Val16Ala polymorphism in the SOD2 gene on oxidative stress parameters and the blood lipid profile during three months of wrestling training. The study included 53 Polish young wrestlers. Blood samples were collected at the beginning of the programme and following three months of the training. The analysed parameters included erythrocyte and serum activities of superoxide dismutase (SOD), whole blood glutathione peroxidase (GPx) activity, total glutathione (tGSH) level, concentration of lipid hydroperoxides (LHs), total antioxidant capacity (TAC) and creatine kinase (CK) activity in the serum, as well as lipid profile parameters: triglycerides (TG), total cholesterol (TC), high-density (HDL-C), and low-density lipoprotein cholesterol (LDL-C). Three-month training resulted in a decrease in CK activity, an increase in serum SOD activity, as well as unfavourable changes in the serum lipid profile: an increase in TC, LDL-C, and TG, and a decrease in HDL-C. Aside from CK activity, all these changes seemed to be associated with the presence of the Val allele. Prior to the training programme, subjects with the Ala/Ala genotype presented with lower levels of LHs, lower whole blood GPx activity, and lower serum concentrations of TC than the individuals with the Ala/Val genotype. Both prior to and after the three-month training, higher levels of tGSH were observed in Val/Val genotype carriers as compared to Ala/Val genotype carriers. Moreover, multiple regression analysis demonstrated that SOD2 genotype was a significant predictor of pre-training whole blood GPx activity and erythrocyte SOD activity (Val/Val > Ala/Val > Ala/Ala). Altogether, these findings suggest that the Val16Ala polymorphism in the SOD2 gene contributes to individual variability in oxidative stress status and the blood lipid profile in young wrestlers, and may modulate the biochemical response to training.

  14. Hypercube Expert System Shell - Applying Production Parallelism.

    DTIC Science & Technology

    1989-12-01

    ... possible processor organizations, or interconnection methods, for parallel architectures. The following are examples of commonly used interconnection methods. ... This timing analysis, because match speed-up available from production parallelism is proportional to the average number of affected productions.

  15. Design Optimization of a Centrifugal Fan with Splitter Blades

    NASA Astrophysics Data System (ADS)

    Heo, Man-Woong; Kim, Jin-Hyuk; Kim, Kwang-Yong

    2015-05-01

    Multi-objective optimization of a centrifugal fan with additionally installed splitter blades was performed to simultaneously maximize the efficiency and pressure rise, using three-dimensional Reynolds-averaged Navier-Stokes equations and a hybrid multi-objective evolutionary algorithm. Two design variables, defining the location of the splitter and the height ratio between the inlet and outlet of the impeller, were selected for the optimization. In addition, the aerodynamic characteristics of the centrifugal fan were investigated with the variation of the design variables in the design space. Latin hypercube sampling was used to select the training points, and response surface approximation models were constructed as surrogate models of the objective functions. With the optimization, both the efficiency and pressure rise of the centrifugal fan with splitter blades were improved considerably compared to the reference model.
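
    The surrogate-modeling step can be sketched as follows: an LHS design over the two (normalized) design variables, plus a quadratic response surface fitted by least squares; the objective function below is a made-up stand-in for the CFD-computed efficiency:

      import numpy as np

      rng = np.random.default_rng(6)

      def lhs(n, d):
          # Latin hypercube sample in [0,1]^d.
          return np.column_stack([(rng.permutation(n) + rng.random(n)) / n
                                  for _ in range(d)])

      # Hypothetical stand-in for a CFD-computed efficiency over the two
      # normalized design variables (splitter location, height ratio).
      def efficiency(x):
          return 0.8 - (x[:, 0] - 0.6) ** 2 - 0.5 * (x[:, 1] - 0.4) ** 2

      X = lhs(30, 2)                       # training points
      y = efficiency(X)

      # Quadratic response surface:
      # y ~ b0 + b1 x1 + b2 x2 + b3 x1^2 + b4 x2^2 + b5 x1 x2
      A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                           X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)
      print(coef)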

  16. Experiences and results multitasking a hydrodynamics code on global and local memory machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandell, D.

    1987-01-01

    A one-dimensional, time-dependent Lagrangian hydrodynamics code using a Godunov solution method has been multitasked for the Cray X-MP/48, the Intel iPSC hypercube, the Alliant FX series, and the IBM RP3 computers. Actual multitasking results have been obtained for the Cray, Intel, and Alliant computers, and simulated results were obtained for the Cray and RP3 machines. The differences in the methods required to multitask on each of the machines are discussed. Results are presented for a sample problem involving a shock wave moving down a channel. Comparisons are made between the theoretical speedups predicted by Amdahl's law and the actual speedups obtained. The problems of debugging on the different machines are also described.
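
    To make the Amdahl's-law comparison concrete, the short sketch below computes the theoretical speedup for a code with a given serial fraction on various processor counts. The 5% serial fraction is an assumed value for illustration, not a figure from the report.

```python
# Worked example of Amdahl's law: theoretical speedup for a code with a
# fixed serial fraction s on p processors, S(p) = 1 / (s + (1 - s) / p).
def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# Assumed 5% serial fraction; the multitasked codes above will differ.
for p in (1, 2, 4, 8, 48):
    print(f"p={p:2d}  speedup={amdahl_speedup(0.05, p):5.2f}")
# The speedup saturates near 1/0.05 = 20 no matter how many processors
# are added, which is why measured speedups fall short of linear.
```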

  17. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

    1994-01-01

    A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS, with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to and from the working memory of remote nodes. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C(sup 3)I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and the implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases, such as one to set up a cafeteria. Results from running Parallel CLIPS with parallel knowledge base partitions indicate that significant speed increases, including superlinear ones in some cases, are possible.

  18. Optimisation and evaluation of hyperspectral imaging system using machine learning algorithm

    NASA Astrophysics Data System (ADS)

    Suthar, Gajendra; Huang, Jung Y.; Chidangil, Santhosh

    2017-10-01

    Hyperspectral imaging (HSI), also called imaging spectrometry, originated in remote sensing. Hyperspectral imaging is an emerging imaging modality for medical applications, especially in disease diagnosis and image-guided surgery. HSI acquires a three-dimensional dataset called a hypercube, with two spatial dimensions and one spectral dimension. The spatially resolved spectral information obtained by HSI provides diagnostic information about an object's physiology, morphology, and composition. The present work involves testing and evaluating the performance of a hyperspectral imaging system. The methodology involved manually acquiring reflectance images or scans of the objects, which were cabbage and tomato. The data were then converted to the required format and analysed using a machine learning algorithm. The machine learning algorithms applied were able to distinguish between the objects present in the hypercube obtained by the scan. It was concluded from the results that the system was working as expected, as evidenced by the distinct spectra recovered by the machine-learning algorithm.
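
    The abstract does not name the classifier used, so the sketch below is only a plausible reconstruction of such a pixel-level workflow: flatten a (rows, cols, bands) hypercube into per-pixel spectra and train a standard classifier to separate the two objects. The synthetic spectra and the choice of an RBF-kernel SVM are assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
bands = 50
# Synthetic per-pixel spectra standing in for the two objects; a real
# hypercube of shape (rows, cols, bands) would be reshaped to (pixels, bands).
cabbage = rng.normal(0.4, 0.05, size=(200, bands))   # class 0
tomato = rng.normal(0.6, 0.05, size=(200, bands))    # class 1
X = np.vstack([cabbage, tomato])
y = np.repeat([0, 1], 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```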

  19. Between Hosts and Guests: Conditional Hospitality and Citizenship in an American Suburban School

    ERIC Educational Resources Information Center

    Shirazi, Roozbeh

    2018-01-01

    This article utilizes the idea of hospitality to explore how educative practices contribute to the making of citizens at Light Falls High School (LHS), a suburban American secondary school that professes a strong commitment to racial equity and global awareness. The data are derived from an ethnographic case study which took place in 2013-2014. I…

  1. Bidirectional impact of atrazine-induced elevations in progesterone (P4) on the LH surge in the ovariectomized, estradiol (E2)-primed rat

    EPA Science Inventory

    Multiple daily exposures to the herbicide atrazine (ATZ) have been reported to suppress the luteinizing hormone surge (LHS) in female rats. Exposure has also been found to elevate P4 concentrations, and an increase in P4 is known to have a different directional effect on LH depe...

  2. Group Solutions, Too! More Cooperative Logic Activities for Grades K-4. Teacher's Guide. LHS GEMS.

    ERIC Educational Resources Information Center

    Goodman, Jan M.; Kopp, Jaine

    There is evidence that structured cooperative logic is an effective way to introduce or reinforce mathematics concepts, explore thinking processes basic to both math and science, and develop the important social skills of cooperative problem-solving. This book contains a number of cooperative logic activities for grades K-4 in order to improve…

  3. Left ventricular filling under elevated left atrial pressure

    NASA Astrophysics Data System (ADS)

    Gaddam, Manikantam; Samaee, Milad; Santhanakrishnan, Arvind

    2017-11-01

    Left atrial pressure (LAP) is elevated in diastolic dysfunction, where left ventricular (LV) filling is impaired due to an increase in ventricular stiffness. The impact of increasing LAP and LV stiffness on intraventricular filling hemodynamics remains unclear. We conducted particle image velocimetry and hemodynamic measurements in a left heart simulator (LHS) under increasing LAP and LV stiffness at a heart rate of 70 bpm. The LHS consisted of a flexible-walled LV physical model fitted within a fluid-filled chamber. LV wall motion was generated by a piston pump that imparted pressure fluctuations in the chamber. Resistance and compliance elements in the flow loop were adjusted to obtain bulk physiological hemodynamics in the least stiff LV model. Two LV models of increasing stiffness were subsequently tested under unchanged loop settings. LAP was varied from 5 to 20 mm Hg for each LV model by adjusting the fluid level in a reservoir upstream of the LV. For constant LV stiffness, increasing LAP lowered cardiac output (CO), while ejection fraction (EF) and E/A ratio were increased. For constant LAP, increasing LV stiffness lowered CO and EF, and increased E/A ratio. The implications of these altered hemodynamics on intraventricular filling vortex characteristics will be presented.

  4. On the nature of spectral features in peculiar DQ white dwarfs

    NASA Technical Reports Server (NTRS)

    Schmidt, Gary D.; Bergeron, P.; Fegley, Bruce

    1995-01-01

    Spectropolarimetric measurements are presented for the DQ white dwarfs ESO 439-162, LHS 1126, and G225-68, whose spectroscopic features in the optical have been interpreted in the past as pressure-shifted or magnetically shifted C2 Swan bands. The results convincingly demonstrate that none of these objects is strongly magnetic, with upper limits of 30, 3, and 2 MG respectively. Since Bergeron et al. (B 94) have recently ruled out the pressure-shift interpretation for LHS 1126 as well, we discuss alternative physical mechanisms for displacing the Swan bands. Although possibilities for explaining the observed shifts may exist, a comparison of the optical spectra (and that of a similar DQ star, LP 77-57) indicates that the locations and shapes of the profiles in all four objects are virtually identical. This last result suggests instead that a different molecular species could be responsible. A detailed chemical equilibrium analysis of H/He/C mixtures under the physical conditions encountered in the atmospheres of these peculiar objects reveals that C2H is a molecule preferentially formed in the photospheric regions. The nature and evolution of these objects are discussed.

  5. Advanced development of Pb-salt semiconductor lasers for the 8.0 to 15.0 micrometer spectral region

    NASA Technical Reports Server (NTRS)

    Linden, K. J.; Butler, J. F.; Nill, K. W.

    1977-01-01

    The technology was studied for producing Pb-salt diode lasers for the 8-15 micron spectral region suitable for use as local oscillators in a passive Laser Heterodyne Spectrometer (LHS). Consideration was given to long range NASA plans for the utilization of the passive LHS in a space shuttle environment. The general approach was to further develop the method of compositional interdiffusion (CID), recently reported and used successfully at shorter wavelengths. This technology was shown to provide an effective and reproducible method of producing a single-heterostructure (SH) diode of either the heterojunction or single-sided configuration. Performance specifications were exceeded in several devices, with single-ended CW power outputs as high as 0.88 milliwatts in a mode being achieved. The majority of the CID lasers fabricated had CW operating temperatures of over 60 K; 30% of them operated CW above the boiling temperature of liquid nitrogen. CW operation above liquid nitrogen temperature was possible for wavelengths as long as 10.3 microns. Operation at 77 K is significant with respect to space shuttle operations since it allows considerable simplification of the cooling method.

  6. LHS 1610A: A Nearby Mid-M Dwarf with a Companion That Is Likely a Brown Dwarf

    NASA Astrophysics Data System (ADS)

    Winters, Jennifer G.; Irwin, Jonathan; Newton, Elisabeth R.; Charbonneau, David; Latham, David W.; Han, Eunkyu; Muirhead, Philip S.; Berlind, Perry; Calkins, Michael L.; Esquerdo, Gil

    2018-03-01

    We present the spectroscopic orbit of LHS 1610A, a newly discovered single-lined spectroscopic binary with a trigonometric distance placing it at 9.9 ± 0.2 pc. We obtained spectra with the TRES instrument on the 1.5 m Tillinghast Reflector at the Fred Lawrence Whipple Observatory on Mt. Hopkins in AZ. We demonstrate the use of the TiO molecular bands at 7065–7165 Å to measure radial velocities and achieve an average estimated velocity uncertainty of 28 m s‑1. We measure the orbital period to be 10.6 days and calculate a minimum mass of 44.8 ± 3.2 M Jup for the secondary, indicating that it is likely a brown dwarf. We place a 3σ upper limit of 2500 K on the effective temperature of the companion from infrared spectroscopic observations using IGRINS on the 4.3 m Discovery Channel Telescope. In addition, we present a new photometric rotation period of 84.3 days for the primary star using data from the MEarth-South Observatory, with which we show that the system does not eclipse.

  7. [Analgesic effect of intra-articular fentanyl in knee arthroscopy to treat patellofemoral lateral hyperpressure syndrome].

    PubMed

    Gutiérrez-Mendoza, Israel; Pérez-Correa, José J; Serna-Vela, Francisco; Góngora-Ortega, Javier; Vilchis-Huerta, Ventura; Pérez-Guzmán, Carlos; Hernández-Garduño, Eduardo; Vidal-Rodríguez, Francisco Alberto

    2009-01-01

    During arthroscopy for the treatment of patellofemoral lateral hyper-pressure syndrome (LHS), intra-articular morphine or its derivatives (fentanyl) may reduce postoperative pain when combined with anesthetics. We therefore decided to determine whether adding fentanyl to epinephrine and bupivacaine produced increased analgesia. We randomly distributed 40 patients into two groups. The experimental group (n=20) was given 0.5% bupivacaine (2 mg/kg), epinephrine (100 microg) and fentanyl (2.5 microg/kg). The control group (n=20) received 0.5% bupivacaine (2 mg/kg) and epinephrine (100 microg). Patients underwent chondroplasty and retinacular release, and we assessed pain, duration of analgesia and postoperative range of motion at postoperative hours 6 and 24. The age and the grade of patellofemoral chondromalacia (PFC) were similar in both groups (p > 0.05). No differences were found between the groups in pain or range of motion intraoperatively or at postoperative hours 6 and 24 (p > 0.05). The postoperative analgesia time was similar (p > 0.05). Adding intra-articular fentanyl to the combination of epinephrine plus bupivacaine did not decrease pain and did not increase either the analgesia time or the range of motion in patients with LHS undergoing knee arthroscopy.

  8. VizieR Online Data Catalog: MIR-selected quasar parameters (Dai+, 2014)

    NASA Astrophysics Data System (ADS)

    Dai, Y. S.; Elvis, M.; Bergeron, J.; Fazio, G. G.; Huang, J.-S.; Wilkes, B. J.; Willmer, C. N. A.; Omont, A.; Papovich, C.

    2017-03-01

    The combined MIR 24 um and optical selection for this survey was designed to detect objects with a luminous torus/nucleus without bias against dusty hosts. The MIR selection allows for the detection of hot dust (a few hundred Kelvin) at redshifts z ~ 1.5, while optical follow-up spectroscopically identified the BEL objects, confirming their unobscured (type 1) quasar nature. This MIR selection also allows for a far-infrared (FIR) cross-match to look for cool dust for SMBH-host studies, as demonstrated in Dai et al. (2012ApJ...753...33D). We select Spitzer MIPS (Rieke et al. 2004ApJS..154...25R) 24 um sources from the SWIRE survey in the ~22 deg2 LHS field centered at RA=10:46:48, DE=57:54:00 (Lonsdale et al. 2003PASP..115..897L). The SDSS imaging also covers the LHS region to r = 22.2 at 95% detection repeatability, but can go as deep as r = 23. All magnitudes are taken from the SDSS photoObj catalog in DR7 and are already corrected for Galactic extinction according to Schlegel et al. (1998ApJ...500..525S). (6 data files).

  9. Characterization of aqueous two phase systems by combining lab-on-a-chip technology with robotic liquid handling stations.

    PubMed

    Amrhein, Sven; Schwab, Marie-Luise; Hoffmann, Marc; Hubbuch, Jürgen

    2014-11-07

    Over the last decade, the use of design of experiment approaches in combination with fully automated high throughput (HTP) compatible screenings supported by robotic liquid handling stations (LHS), adequate fast analytics and data processing has developed in the biopharmaceutical industry into a strategy of high throughput process development (HTPD), resulting in lower experimental effort, sample reduction and an overall higher degree of process optimization. Apart from HTP technologies, lab-on-a-chip technology has experienced enormous growth in recent years and allows further reduction of sample consumption. A combination of LHS and lab-on-a-chip technology is highly desirable and is realized in the present work to characterize aqueous two phase systems with respect to tie lines. In particular, a new high throughput compatible approach for the characterization of aqueous two phase systems regarding tie lines by exploiting differences in phase densities is presented. Densities were measured by a standalone microfluidic liquid density sensor, which was integrated into a liquid handling station by means of a generic Tip2World interface developed for this purpose. This combination of liquid handling stations and lab-on-a-chip technology enables fast, fully automated, and highly accurate density measurements. The presented approach was used to determine the phase diagram of ATPSs composed of potassium phosphate (pH 7) and polyethylene glycol (PEG) with molecular weights of 300, 400, 600 and 1000 Da, in the presence and absence of 3% (w/w) sodium chloride. Considering the whole ATPS characterization process, two complete ATPSs could be characterized within 24 h, including four runs per ATPS for binodal curve determination (less than 45 min/run) and tie line determination (less than 45 min/run for ATPS preparation and 8 h for density determination), which can be performed fully automated overnight without requiring manpower. The presented methodology provides a cost, time and material effective approach for characterization of ATPS phase diagrams based on highly accurate and comprehensive data. The derived data thus open the door for a more detailed description of ATPS towards generating mechanistic models, since molecular approaches such as MD simulations or molecular descriptions along the lines of QSAR heavily rely on accurate and comprehensive data. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Knowing Our Neighbors: Six Young, Nearby Systems

    NASA Astrophysics Data System (ADS)

    Bartlett, Jennifer L.; Lurie, John C.; Ianna, Philip A.; Riedel, Adric R.; Winters, Jennifer G.; Finch, Charlie T.; Jao, Wei-Chun; Subasavage, John P.; Henry, Todd J.

    2016-06-01

    Obtaining a well-understood, volume-limited (and ultimately volume-complete) sample of nearby stars is necessary for determining a host of astrophysical quantities, including the stellar luminosity and mass functions, the stellar velocity distribution, and the stellar multiplicity fraction. Furthermore, such a sample provides insight into the local star formation history. Towards that end, the Research Consortium on Nearby Stars (RECONS) measures trigonometric parallaxes to establish which systems truly lie within the 25-pc radius of the Solar Neighborhood. Recent measurements with the CTIO/SMARTS 0.9-m telescope establish six new systems as members of the Solar Neighborhood and also potential members of young moving groups based on (a) CTIOPI astrometry and (b) radial velocities from the literature, where available:
    ● G 75-35 at a distance of 11.8±0.2 pc and G 161-71 at 13.5±0.3 pc are possible members of the Argus Association
    ● LP 888-18 at a distance of 12.5±0.2 pc is a member of the AB Doradus Moving Group, while LP 834-32 at 17.3±0.6 pc and LP 870-65 at 18.2±0.5 pc are possible group members
    ● LHS 6167AB at a distance of 9.68±0.09 pc is a possible member of the Hercules-Lyra Moving Group.
    To characterize these systems further, RECONS obtained VRI photometry for each, ranging from 12.44-18.81 mag in V. LP 834-32, LHS 6167AB, and G 161-71 demonstrated significant long-term variability in the V-band; the first two appear to have flared in 2011 and 2012, respectively. Furthermore, CTIOPI 1.5-m spectroscopy identifies these systems as M3.5-M8.0 dwarfs. G 161-71 displayed strong Hα emission but weak sodium and potassium features. The Solar Neighborhood contains both young and old stars that can be observed more easily than their more distant counterparts, which allows their characteristics to be studied in greater detail. NSF grants AST 05-07711 and AST 09-08402, NASA-SIM, Georgia State University, the University of Virginia, Hampden-Sydney College, and the Levinson Fund of the Peninsula Community Foundation supported this research. CTIOPI was an NOAO Survey Program and continues as part of the SMARTS Consortium. We thank the SMARTS Consortium and the CTIO staff, who enable the small telescope operations at CTIO.

  11. Risk factors affecting injury severity determined by the MAIS score.

    PubMed

    Ferreira, Sara; Amorim, Marco; Couto, Antonio

    2017-07-04

    Traffic crashes result in a loss of life but also impact the quality of life and productivity of crash survivors. Given the importance of traffic crash outcomes, the issue has received attention from researchers and practitioners as well as government institutions, such as the European Commission (EC). Thus, to obtain detailed information on the injury type and severity of crash victims, hospital data have been proposed for use alongside police crash records. A new injury severity classification based on hospital data, called the maximum abbreviated injury scale (MAIS), was developed and recently adopted by the EC. This study provides an in-depth analysis of the factors that affect injury severity as classified by the MAIS score. In this study, the MAIS score was derived from the International Classification of Diseases. The European Union adopted an MAIS score equal to or greater than 3 as the definition for a serious traffic crash injury. Gains are expected from using both police and hospital data because the injury severities of the victims are detailed by medical staff and the characteristics of the crash and the site of its occurrence are also provided. The data were obtained by linking police and hospital data sets from the Porto metropolitan area of Portugal over a 6-year period (2006-2011). A mixed logit model was used to understand the factors that contribute to the injury severity of traffic victims and to explore the impact of these factors on injury severity. A random parameter approach offers methodological flexibility to capture individual-specific heterogeneity. Additionally, to understand the importance of using a reliable injury severity scale, we compared MAIS with length of hospital stay (LHS), a classification used by several countries, including Portugal, to officially report injury severity. To do so, the same statistical technique was applied using the same variables to analyze their impact on the injury severity classified according to LHS. This study showed the impact of variables, such as the presence of blood alcohol, the use of protection devices, the type of crash, and the site characteristics, on the injury severity classified according to the MAIS score. Additionally, the sex and age of the victims were analyzed as risk factors, showing that elderly and male road users are highly associated with MAIS 3+ injuries. The comparison between the marginal effects of the variables estimated by the MAIS and LHS models showed significant differences. In addition to the differences in the magnitude of impact of each variable, we found that the impact of the road environment variable was dependent on the injury severity classification. The differences in the effects of risk factors between the classifications highlight the importance of using a reliable classification of injury severity. Additionally, the relationship between LHS and MAIS levels is quite different among countries, supporting the previous conclusion that bias is expected in the assessment of risk factors if an injury severity classification other than MAIS is used.

  12. Geostatistical Sampling Methods for Efficient Uncertainty Analysis in Flow and Transport Problems

    NASA Astrophysics Data System (ADS)

    Liodakis, Stylianos; Kyriakidis, Phaedon; Gaganis, Petros

    2015-04-01

    In hydrogeological applications involving flow and transport in heterogeneous porous media, the spatial distribution of hydraulic conductivity is often parameterized in terms of a lognormal random field based on a histogram and variogram model inferred from data and/or synthesized from relevant knowledge. Realizations of simulated conductivity fields are then generated using geostatistical simulation involving simple random (SR) sampling and are subsequently used as inputs to physically-based simulators of flow and transport in a Monte Carlo framework for evaluating the uncertainty in the spatial distribution of solute concentration due to the uncertainty in the spatial distribution of hydraulic conductivity [1]. Realistic uncertainty analysis, however, calls for a large number of simulated concentration fields and hence can become expensive in terms of both time and computer resources. A more efficient alternative to SR sampling is Latin hypercube (LH) sampling, a special case of stratified random sampling, which yields a more representative distribution of simulated attribute values with fewer realizations [2]. Here, the term representative implies realizations spanning efficiently the range of possible conductivity values corresponding to the lognormal random field. In this work we investigate the efficiency of alternative methods to classical LH sampling within the context of simulation of flow and transport in a heterogeneous porous medium. More precisely, we consider the stratified likelihood (SL) sampling method of [3], in which attribute realizations are generated using the polar simulation method by exploring the geometrical properties of the multivariate Gaussian distribution function. In addition, we propose a more efficient version of the above method, here termed minimum energy (ME) sampling, whereby a set of N representative conductivity realizations at M locations is constructed by: (i) generating a representative set of N points distributed on the surface of an M-dimensional, unit radius hyper-sphere, (ii) relocating the N points on a representative set of N hyper-spheres of different radii, and (iii) transforming the coordinates of those points to lie on N different hyper-ellipsoids spanning the multivariate Gaussian distribution. The above method is applied in a dimensionality reduction context by defining flow-controlling points over which representative sampling of hydraulic conductivity is performed, thus also accounting for the sensitivity of the flow and transport model to the input hydraulic conductivity field. The performance of the various stratified sampling methods, LH, SL, and ME, is compared to that of SR sampling in terms of reproduction of ensemble statistics of hydraulic conductivity and solute concentration for different sample sizes N (numbers of realizations). The results indicate that ME sampling constitutes an equally if not more efficient simulation method than LH and SL sampling, as it can reproduce to a similar extent statistics of the conductivity and concentration fields, yet with smaller sampling variability than SR sampling. References [1] Gutjahr A.L. and Bras R.L. Spatial variability in subsurface flow and transport: A review. Reliability Engineering & System Safety, 42, 293-316, (1993). [2] Helton J.C. and Davis F.J. Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems. Reliability Engineering & System Safety, 81, 23-69, (2003). [3] Switzer P. Multiple simulation of spatial fields. In: Heuvelink G, Lemmens M (eds) Proceedings of the 4th International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences, Coronet Books Inc., pp 629-635 (2000).
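
    A minimal sketch of the SR-versus-LH comparison discussed above: both methods draw N realizations of a lognormal hydraulic conductivity, but LH sampling places exactly one draw in each of N equal-probability strata, which stabilizes the reproduced ensemble statistics. The lognormal parameters and sample size are assumed for illustration.

```python
import numpy as np
from scipy.stats import lognorm

N = 50
sigma = 1.0                          # std of ln K, assumed
dist = lognorm(s=sigma, scale=1.0)   # lognormal K with mean of ln K = 0
rng = np.random.default_rng(42)

# Simple random (SR) sampling: N independent draws.
k_sr = dist.rvs(size=N, random_state=rng)

# Latin hypercube (LH) sampling in 1-D: one uniform draw in each of N
# equal-probability strata, mapped through the inverse CDF.
u = (np.arange(N) + rng.random(N)) / N
k_lh = dist.ppf(rng.permutation(u))

print("target mean:", np.exp(sigma ** 2 / 2))
print("SR mean:", k_sr.mean().round(3), " LH mean:", k_lh.mean().round(3))
```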

  13. Application of positive-real functions in hyperstable discrete model-reference adaptive system design.

    NASA Technical Reports Server (NTRS)

    Karmarkar, J. S.

    1972-01-01

    Proposal of an algorithmic procedure, based on mathematical programming methods, to design compensators for hyperstable discrete model-reference adaptive systems (MRAS). The objective of the compensator is to render the MRAS insensitive to initial parameter estimates within a maximized hypercube in the model parameter space.

  14. Parallel Algorithm Solves Coupled Differential Equations

    NASA Technical Reports Server (NTRS)

    Hayashi, A.

    1987-01-01

    Numerical methods adapted to concurrent processing. Algorithm solves set of coupled partial differential equations by numerical integration. Adapted to run on hypercube computer, algorithm separates problem into smaller problems solved concurrently. Increase in computing speed with concurrent processing over that achievable with conventional sequential processing appreciable, especially for large problems.

  15. NASA Tech Briefs, November/December 1986, Special Edition

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Topics: Computing: The View from NASA Headquarters; Earth Resources Laboratory Applications Software: Versatile Tool for Data Analysis; The Hypercube: Cost-Effective Supercomputing; Artificial Intelligence: Rendezvous with NASA; NASA's Ada Connection; COSMIC: NASA's Software Treasurehouse; Golden Oldies: Tried and True NASA Software; Computer Technical Briefs; NASA TU Services; Digital Fly-by-Wire.

  16. The Real Reasons for Seasons--Sun-Earth Connections: Unraveling Misconceptions about the Earth and Sun. Grades 6-8. Teacher's Guide. LHS GEMS.

    ERIC Educational Resources Information Center

    Gould, Alan; Willard, Carolyn; Pompea, Stephen

    This guide is aimed at helping students arrive at a clear understanding of seasons as they investigate the connections between the sun and the earth. Activities include: (1) "Name the Season"; (2) "Sun-Earth Survey"; (3) "Trip to the Sun"; (4) "What Shape is Earth's Orbit?"; (5) "Temperatures around the…

  17. Evaluation of multimodal segmentation based on 3D T1-, T2- and FLAIR-weighted images - the difficulty of choosing.

    PubMed

    Lindig, Tobias; Kotikalapudi, Raviteja; Schweikardt, Daniel; Martin, Pascal; Bender, Friedemann; Klose, Uwe; Ernemann, Ulrike; Focke, Niels K; Bender, Benjamin

    2018-04-15

    Voxel-based morphometry is still mainly based on T1-weighted MRI scans. Misclassification of vessels and dura mater as gray matter has been previously reported. The goal of the present work was to evaluate the effect of multimodal segmentation methods available in SPM12, and their influence on identification of age related atrophy and lesion detection in epilepsy patients. 3D T1-, T2- and FLAIR-weighted images of 77 healthy adults (mean age 35.8 years, 19-66 years, 45 females), 7 patients with malformation of cortical development (MCD) (mean age 28.1 years, 19-40 years, 3 females), and 5 patients with left hippocampal sclerosis (LHS) (mean age 49.0 years, 25-67 years, 3 females) from a 3T scanner were evaluated. Segmentations based on T1-only, T1+T2, T1+FLAIR, T2+FLAIR, and T1+T2+FLAIR were compared in the healthy subjects. Clinical VBM results based on the different segmentation approaches for MCD and for LHS were compared. T1-only segmentation overestimated total intracranial volume by about 80 ml compared to the other segmentation methods. This was due to misclassification of dura mater and vessels as GM and CSF. Significant differences were found for several anatomical regions: the occipital lobe, the basal ganglia/thalamus, the pre- and postcentral gyrus, the cerebellum, and the brainstem. None of the segmentation methods yielded completely satisfying results for the basal ganglia/thalamus and the brainstem. The best correlation with age was found for the multimodal T1+T2+FLAIR segmentation. The highest T-scores for identification of LHS were found for T1+T2 segmentation, while the highest T-scores for MCD were dependent on lesion and anatomical location. Multimodal segmentation is superior to T1-only segmentation and reduces the misclassification of dura mater and vessels as GM and CSF. Depending on the anatomical region and the pathology of interest (atrophy, lesion detection, etc.), different combinations of T1, T2 and FLAIR yield optimal results. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Contrasting seasonal and aseasonal environments across stages of the annual cycle in the Rufous-collared Sparrow, Zonotrichia capensis: differences in endocrine function, proteome, and body condition.

    PubMed

    González-Gómez, Paulina L; Echeverria, Valentina; Estades, Cristian F; Perez, Jonathan H; Krause, Jesse S; Sabat, Pablo; Li, Jonathon; Kültz, Dietmar; Wingfield, John C

    2018-05-09

    1. The timing and duration of life history stages (LHS) within the annual cycle can be affected by local environmental cues, which are integrated through endocrine signaling mechanisms and changes in protein function. Most animals express a single LHS within a given period of the year because synchronous expression of LHSs is thought to be too costly energetically. However, in very rare and extremely stable conditions, breeding and molt have been observed to overlap extensively in Rufous-collared sparrows (Zonotrichia capensis) living in valleys of the Atacama Desert - one of the most stable and aseasonal environments on Earth. 2. To examine how LHS traits at different levels of organization are affected by environmental variability, we compared the temporal organization and duration of LHSs in populations in the Atacama Desert with those in the semiarid Fray Jorge National Park in the north of Chile - an extremely seasonal climate but with unpredictable droughts and heavy rainy seasons. 3. We studied the effects of environmental variability on morphological variables related to body condition, endocrine traits, and the proteome. Birds living in the seasonal environment had a strict temporal division of LHSs, while birds living in the aseasonal environment failed to maintain a temporal division of LHSs, resulting in direct overlap of breeding and molt. Further, higher circulating glucocorticoid and androgen concentrations were found in birds from seasonal compared to aseasonal populations. Despite these differences, body condition variables and protein expression were not related to the degree of seasonality but rather showed a strong relationship with hormone levels. 4. These results suggest that animals adjust to their environment through changes in behavioral and endocrine traits and may be limited by less labile traits such as morphological variables or expression of specific proteins under certain circumstances. These data on free-living birds shed light on how different levels of life history organization within an individual are linked to increasing environmental heterogeneity. This article is protected by copyright. All rights reserved.

  19. Low protein and high-energy diet: a possible natural cause of fatty liver hemorrhagic syndrome in caged White Leghorn laying hens.

    PubMed

    Rozenboim, I; Mahato, J; Cohen, N A; Tirosh, O

    2016-03-01

    Fatty liver hemorrhagic syndrome (FLHS) is a metabolic condition of chickens and other birds caused by diverse nutritional, hormonal, environmental, and metabolic factors. Here we studied the effect of different diet compositions on the induction of FLHS in single comb White Leghorn (WL) Hy-line laying hens. Seventy six (76) young WL (26 wks old) laying hens and 69 old hens (84 wks old) of the same breed were each divided into 4 treatment groups and provided 4 different diet treatments. The diet treatments included: control (C), 17.5% CP, 3.5% fat (F); normal protein, high fat (HF), 17.5% CP, 7% F; low protein, normal fat (LP), 13% CP, 3.5% F; and low protein, high fat (LPHF), 13% CP, 6.5% F. The diets containing high fat also had a higher ME of 3,000 kcal/kg of feed, while the other 2 diets with normal fat had a regular lower amount of ME (2,750 kcal/kg). Hen-day egg production (HDEP), ADFI, BW, egg weight, plasma enzymes indicating liver damage (alkaline phosphatase [ALP], aspartate aminotransferase [AST], gamma-glutamyl transferase [GGT]), liver and abdominal fat weight, liver color score (LCS), liver hemorrhagic score (LHS), liver fat content (LFC), liver histological examination, lipid peroxidation product in the liver, and genes indicating liver inflammation were evaluated. HDEP, ADFI, BW, and egg weight were significantly decreased in the LPHF diet group, while egg weight was also decreased in the LP diet group. In the young hens (LPHF group), ALP was found significantly higher at 30 d of diet treatment and was numerically higher throughout the experiment, while AST was significantly higher at 105 d of treatment. LCS, LHS, and LFC were significantly higher in young hens on the LPHF diet treatment. Liver histological examination showed more lipid vacuolization in the LPHF diet treatment. HF or LP alone had no significant effect on LFC, LHS, or LCS. We suggest that LP in the diet with higher ME from fat can be a possible natural cause predisposing laying hens to FLHS. © 2016 Poultry Science Association Inc.

  20. Clonal growth and plant species abundance.

    PubMed

    Herben, Tomáš; Nováková, Zuzana; Klimešová, Jitka

    2014-08-01

    Both regional and local plant abundances are driven by species' dispersal capacities and their abilities to exploit new habitats and persist there. These processes are affected by clonal growth, which is difficult to evaluate and compare across large numbers of species. This study assessed the influence of clonal reproduction on local and regional abundances of a large set of species and compared the predictive power of morphologically defined traits of clonal growth with data on actual clonal growth from a botanical garden. The role of clonal growth was compared with the effects of seed reproduction, habitat requirements and growth, proxied both by LHS (leaf-height-seed) traits and by actual performance in the botanical garden. Morphological parameters of clonal growth, actual clonal reproduction in the garden and LHS traits (leaf-specific area - height - seed mass) were used as predictors of species abundance, both regional (number of species records in the Czech Republic) and local (mean species cover in vegetation records) for 836 perennial herbaceous species. Species differences in habitat requirements were accounted for by classifying the dataset by habitat type and also by using Ellenberg indicator values as covariates. After habitat differences were accounted for, clonal growth parameters explained an important part of variation in species abundance, both at regional and at local levels. At both levels, both greater vegetative growth in cultivation and greater lateral expansion trait values were correlated with higher abundance. Seed reproduction had weaker effects, being positive at the regional level and negative at the local level. Morphologically defined traits are predictive of species abundance, and it is concluded that simultaneous investigation of several such traits can help develop hypotheses on specific processes (e.g. avoidance of self-competition, support of offspring) potentially underlying clonal growth effects on abundance. Garden performance parameters provide a practical approach to assessing the roles of clonal growth morphological traits (and LHS traits) for large sets of species. © The Author 2014. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. Validation of Heat Transfer Thermal Decomposition and Container Pressurization of Polyurethane Foam.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, Sarah Nicole; Dodd, Amanda B.; Larsen, Marvin E.

    Polymer foam encapsulants provide mechanical, electrical, and thermal isolation in engineered systems. In fire environments, gas pressure from thermal decomposition of polymers can cause mechanical failure of sealed systems. In this work, a detailed uncertainty quantification study of PMDI-based polyurethane foam is presented to assess the validity of the computational model. Both experimental measurement uncertainty and model prediction uncertainty are examined and compared. Both the mean value method and the Latin hypercube sampling approach are used to propagate the uncertainty through the model. In addition to comparing computational and experimental results, the influence of each input parameter on the simulation result is also investigated. These results show that further development of the physics model of the foam and appropriate associated material testing are necessary to improve model accuracy.
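
    A hedged sketch of the two propagation approaches named in the abstract, applied to a toy response in place of the foam decomposition model: the mean value method propagates input standard deviations through a first-order Taylor expansion at the mean, while Latin hypercube sampling pushes stratified samples through the full model. The response function and input distributions are stand-ins, not the foam physics.

```python
import numpy as np
from scipy.stats import norm, qmc

def pressure(k, q):
    # Toy stand-in for the container-pressurization response.
    return q * np.exp(0.5 * k)

mean = np.array([1.0, 2.0])   # assumed input means
std = np.array([0.1, 0.3])    # assumed input standard deviations

# Mean value method: first-order Taylor propagation at the mean.
eps = 1e-6
grad = np.array([
    (pressure(mean[0] + eps, mean[1]) - pressure(mean[0], mean[1])) / eps,
    (pressure(mean[0], mean[1] + eps) - pressure(mean[0], mean[1])) / eps,
])
mv_std = float(np.sqrt(np.sum((grad * std) ** 2)))

# Latin hypercube propagation through the full (toy) model.
u = qmc.LatinHypercube(d=2, seed=3).random(n=1000)
samples = norm(loc=mean, scale=std).ppf(u)
out = pressure(samples[:, 0], samples[:, 1])

print(f"mean-value std: {mv_std:.3f}   LHS std: {out.std():.3f}")
```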

  2. Dynamics of hepatitis C under optimal therapy and sampling based analysis

    NASA Astrophysics Data System (ADS)

    Pachpute, Gaurav; Chakrabarty, Siddhartha P.

    2013-08-01

    We examine two models for hepatitis C viral (HCV) dynamics, one for monotherapy with interferon (IFN) and the other for combination therapy with IFN and ribavirin. Optimal therapy for both the models is determined using the steepest gradient method, by defining an objective functional which minimizes infected hepatocyte levels, virion population and side-effects of the drug(s). The optimal therapies for both the models show an initial period of high efficacy, followed by a gradual decline. The period of high efficacy coincides with a significant decrease in the viral load, whereas the efficacy drops after hepatocyte levels are restored. We use the Latin hypercube sampling technique to randomly generate a large number of patient scenarios and study the dynamics of each set under the optimal therapy already determined. Results show an increase in the percentage of responders (indicated by drop in viral load below detection levels) in case of combination therapy (72%) as compared to monotherapy (57%). Statistical tests performed to study correlations between sample parameters and time required for the viral load to fall below detection level, show a strong monotonic correlation with the death rate of infected hepatocytes, identifying it to be an important factor in deciding individual drug regimens.
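
    The sampling-based analysis described above can be sketched as follows: draw patient parameter sets by Latin hypercube sampling and test each parameter's monotonic (Spearman) correlation with an outcome. The parameter ranges and the toy outcome standing in for time-to-undetectable viral load are assumptions, not the paper's model.

```python
import numpy as np
from scipy.stats import qmc, spearmanr

u = qmc.LatinHypercube(d=2, seed=7).random(n=500)
# Assumed ranges for two model parameters: death rate of infected
# hepatocytes (delta) and infection rate (beta).
delta = 0.05 + u[:, 0] * (0.50 - 0.05)
beta = 1e-8 + u[:, 1] * (1e-6 - 1e-8)

# Toy outcome standing in for time until viral load falls below detection;
# in the study this would come from simulating each patient scenario.
rng = np.random.default_rng(7)
time_to_clear = 100.0 / delta + rng.normal(0.0, 20.0, size=delta.size)

rho_d, p_d = spearmanr(delta, time_to_clear)
rho_b, p_b = spearmanr(beta, time_to_clear)
print(f"delta vs clearance time: rho = {rho_d:.2f}, p = {p_d:.1e}")
print(f"beta  vs clearance time: rho = {rho_b:.2f}, p = {p_b:.1e}")
```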

  3. The Impact of Stress on the Life History Strategies of African American Adolescents: Cognitions, Genetic Moderation, and the Role of Discrimination

    ERIC Educational Resources Information Center

    Gibbons, Frederick X.; Roberts, Megan E.; Gerrard, Meg; Li, Zhigang; Beach, Steven R. H.; Simons, Ronald L.; Weng, Chih-Yuan; Philibert, Robert A.

    2012-01-01

    The impact of 3 different sources of stress--environmental, familial (e.g., low parental investment), and interpersonal (i.e., racial discrimination)--on the life history strategies (LHS) and associated cognitions of African American adolescents was examined over an 11-year period (5 waves, from age 10.5 to 21.5). Analyses indicated that each one…

  4. A Formal Model of Ambiguity and its Applications in Machine Translation

    DTIC Science & Technology

    2010-01-01

    structure indicates linguistically implausible segmentation that might be generated using dictionary-driven approaches...derivation. As was done in the monolingual case, the functions LHS, RHSi, RHSo and υ can be extended to a derivation δ. D(q) where q ∈ V denotes the... monolingual parses. My algorithm runs more efficiently than O(n^6) with many grammars (including those that required using heuristic search with other parsers

  5. The Cross-Cultural Societal Response to SCI: Health and Related Systems.

    PubMed

    Pacheco, Diana; Gross-Hemmi, Mirja H

    2017-02-01

    The Learning Health System for Spinal Cord Injury (LHS-SCI) is an initiative aligned with the World Health Organization's (WHO) Global Disability Action Plan. Based on the outcomes of this initiative, countries will be able to shape their health systems to better respond to the needs of persons with SCI. This paper describes and compares the macroeconomic situation and societal response to SCI across 27 countries from all 6 WHO regions that will participate in the LHS-SCI initiative. A concurrent mixed-methods study was conducted to identify key indicators that describe the situation of persons with SCI, the general societal response, the health and rehabilitation system, and the experience of a person with SCI after discharge from inpatient rehabilitation. A strong correlation was found between the efficiency of a healthcare system and the amount a country invests in health. Higher availability of resources does not necessarily imply that unrestricted access to the healthcare system is warranted. Variations were found across various domains of the health and rehabilitation systems. The evaluation and comparative analysis of the societal response to SCI raise awareness of the need for more standardized data to identify current needs and gaps in the quality of and access to SCI-specific health systems.

  6. Lateral topological crystalline insulator heterostructure

    NASA Astrophysics Data System (ADS)

    Sun, Qilong; Dai, Ying; Niu, Chengwang; Ma, Yandong; Wei, Wei; Yu, Lin; Huang, Baibiao

    2017-06-01

    The emergence of lateral heterostructures fabricated from two-dimensional building blocks brings many exciting realms in material science and device physics. Enriching the nanomaterials available for creating such heterostructures and enabling the underlying new physics is highly coveted for the integration of next-generation devices. Here, we report a breakthrough in lateral heterostructures based on monolayer square transition-metal dichalcogenide MX2 (M = W, X = S/Se) modules. Our results reveal that the MX2 lateral heterostructure (1S-MX2 LHS) can possess excellent thermal and dynamical stability. Remarkably, the highly desired two-dimensional topological crystalline insulator phase is confirmed by the calculated mirror Chern number n_M = -1. A nontrivial band gap of 65 meV is obtained with SOC, indicating the potential for room-temperature observation and applications. The topologically protected edge states emerge at the edges of two different nanoribbons within the bulk band gap, which is consistent with the mirror Chern number. In addition, a strain-induced topological phase transition in the 1S-MX2 LHS is also revealed, endowing potential utility in electronics and spintronics. Our predictions not only introduce a new member and vitality into the studies of lateral heterostructures, but also highlight the promise of lateral heterostructures as appealing topological crystalline insulator platforms with excellent stability for future devices.

  7. Degree versus direction: a comparison of four handedness classification schemes through the investigation of lateralised semantic priming.

    PubMed

    Kaploun, Kristen A; Abeare, Christopher A

    2010-09-01

    Four classification systems were examined using lateralised semantic priming in order to investigate whether degree or direction of handedness better captures the pattern of lateralised semantic priming. A total of 85 participants completed a lateralised semantic priming task and three handedness questionnaires. The classification systems tested were: (1) the traditional right- vs left-handed (RHs vs LHs); (2) a four-factor model of strong and weak right- and left-handers (SRHs, WRHs, SLHs, WLHs); (3) strong- vs mixed-handed (SHs vs MHs); and (4) a three-factor model of consistent left- (CLHs), inconsistent left- (ILHs), and consistent right-handers (CRHs). Mixed-factorial ANOVAs demonstrated significant visual field (VF) by handedness interactions for all but the third model. Results show that LHs, SLHs, CLHs, and ILHs responded faster to LVF targets, whereas RHs, SRHs, and CRHs responded faster to RVF targets; no significant VF by handedness interaction was found between SHs and MHs. The three-factor model better captures handedness group divergence on lateralised semantic priming by incorporating the direction of handedness as well as the degree. These findings help explain some of the variance in language lateralisation, demonstrating that direction of handedness is as important as degree. The need for greater consideration of handedness subgroups in laterality research is highlighted.

  8. Einstein-Podolsky-Rosen correlations and Bell correlations in the simplest scenario

    NASA Astrophysics Data System (ADS)

    Quan, Quan; Zhu, Huangjun; Fan, Heng; Yang, Wen-Li

    2017-06-01

    Einstein-Podolsky-Rosen (EPR) steering is an intermediate type of quantum nonlocality which sits between entanglement and Bell nonlocality. A set of correlations is Bell nonlocal if it does not admit a local hidden variable (LHV) model, while it is EPR nonlocal if it does not admit a local hidden variable-local hidden state (LHV-LHS) model. It is interesting to know what states can generate EPR-nonlocal correlations in the simplest nontrivial scenario, that is, two projective measurements for each party sharing a two-qubit state. Here we show that a two-qubit state can generate EPR-nonlocal full correlations (excluding marginal statistics) in this scenario if and only if it can generate Bell-nonlocal correlations. If full statistics (including marginal statistics) is taken into account, surprisingly, the same scenario can manifest the simplest one-way steering and the strongest hierarchy between steering and Bell nonlocality. To illustrate these intriguing phenomena in simple setups, several concrete examples are discussed in detail, which facilitates experimental demonstration. In the course of study, we introduce the concept of restricted LHS models and thereby derive a necessary and sufficient semidefinite-programming criterion to determine the steerability of any bipartite state under given measurements. Analytical criteria are further derived in several scenarios of strong theoretical and experimental interest.

  9. Release behavior and toxicity profiles towards A549 cell lines of ciprofloxacin from its layered zinc hydroxide intercalation compound.

    PubMed

    Abdul Latip, Ahmad Faiz; Hussein, Mohd Zobir; Stanslas, Johnson; Wong, Charng Choon; Adnan, Rohana

    2013-01-01

    Layered hydroxide salts (LHS), a class of layered inorganic compounds, are gaining attention in a wide range of applications, particularly due to their unique anion exchange properties. In this work, layered zinc hydroxide nitrate (LZH), a member of the LHS family, was intercalated with anionic ciprofloxacin (CFX), a broad spectrum antibiotic, via ion exchange in a water:ethanol mixture. Powder x-ray diffraction (XRD), Fourier transform infrared (FTIR) spectroscopy and thermogravimetric analysis (TGA) confirmed that the drug anions were successfully intercalated in the interlayer space of LZH. The specific surface area of the obtained compound was increased compared to that of the host due to the different pore textures of the two materials. CFX anions were slowly released over 80 hours in phosphate-buffered saline (PBS) solution due to strong interactions between the intercalated anions and the host lattices. The intercalation compound demonstrated enhanced antiproliferative effects towards A549 cancer cells compared to the toxicity of CFX alone. Strong host-guest interactions between the LZH lattice and the CFX anion give rise to a new intercalation compound that demonstrates a sustained release mode and enhanced toxicity effects towards A549 cell lines. These findings should serve as a foundation for further development of the brucite-like host material in drug delivery systems.

  10. Super and parallel computers and their impact on civil engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamat, M.P.

    1986-01-01

    This book presents the papers given at a conference on the use of supercomputers in civil engineering. Topics considered at the conference included solving nonlinear equations on a hypercube, a custom architectured parallel processing system, distributed data processing, algorithms, computer architecture, parallel processing, vector processing, computerized simulation, and cost benefit analysis.

  11. Assignment Of Finite Elements To Parallel Processors

    NASA Technical Reports Server (NTRS)

    Salama, Moktar A.; Flower, Jon W.; Otto, Steve W.

    1990-01-01

    Elements assigned approximately optimally to subdomains. Mapping algorithm based on simulated-annealing concept used to minimize approximate time required to perform finite-element computation on hypercube computer or other network of parallel data processors. Mapping algorithm needed when shape of domain complicated or otherwise not obvious what allocation of elements to subdomains minimizes cost of computation.
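
    A minimal sketch of the simulated-annealing mapping idea: elements are reassigned among processors while an estimated cost is annealed downward. The cost model (a toy combination of load imbalance and cut edges between neighboring elements on different processors), the 1-D element chain, and the cooling schedule are assumptions; the original algorithm's cost function is not reproduced here.

```python
import math
import random

random.seed(0)
n_elems, n_procs = 40, 4
neighbors = [(i, i + 1) for i in range(n_elems - 1)]   # 1-D chain of elements
assign = [random.randrange(n_procs) for _ in range(n_elems)]

def cost(a):
    # Toy cost: load imbalance plus communication across processor cuts.
    loads = [a.count(p) for p in range(n_procs)]
    cut = sum(1 for i, j in neighbors if a[i] != a[j])
    return (max(loads) - min(loads)) + 0.5 * cut

T = 5.0
while T > 0.01:
    i = random.randrange(n_elems)
    old = assign[i]
    before = cost(assign)
    assign[i] = random.randrange(n_procs)
    delta = cost(assign) - before
    # Accept improving moves always; accept uphill moves with Boltzmann
    # probability exp(-delta / T), which shrinks as T cools.
    if delta > 0 and random.random() > math.exp(-delta / T):
        assign[i] = old
    T *= 0.99

print("final cost:", cost(assign))
```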

  12. 3D Navier-Stokes Flow Analysis for a Large-Array Multiprocessor

    DTIC Science & Technology

    1989-04-17

    computer, Alliant’s FX /8, Intel’s Hypercube, and Encore’s Multimax. Unfortunately, the current algorithms have been developed pri- marily for SISD machines...Reversing and Thrust-Vectoring Nozzle Flows," Ph.D. Dissertation in the Dept. of Aero. and Astro ., Univ. of Wash., Washington, 1986. [11] Anderson

  13. Extending Orthogonal and Nearly Orthogonal Latin Hypercube Designs for Computer Simulation and Experimentation

    DTIC Science & Technology

    2006-12-01

    The code was initially developed to be run within the netBeans IDE 5.04 running J2SE 5.0. During the course of the development, Eclipse SDK 3.2...covers the results from the research. Chapter V concludes and recommends future research.

  14. Distributed Memory Compiler Methods for Irregular Problems - Data Copy Reuse and Runtime Partitioning

    DTIC Science & Technology

    1991-09-01

    In addition, support for Saltz was provided by NSF under NSF Grant ASC-8819374. Over the past few years, we have developed methods needed to... network. In Third Conf. on Hypercube Concurrent Computers and Applications, pages 241-278, 1988. [17] G. Fox, S. Hiranandani, K. Kennedy, C. Koelbel

  15. Taste symmetry breaking with hypercubic-smeared staggered fermions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bae, Taegil; Adams, David H.; Kim, Hyung-Jin

    2008-05-01

    We study the impact of hypercubic (HYP) smearing on the size of taste-breaking for staggered fermions, comparing to unimproved and to asqtad-improved staggered fermions. As in previous studies, we find a substantial reduction in taste-breaking compared to unimproved staggered fermions (by a factor of 4-7 on lattices with spacing a ≈ 0.1 fm). In addition, we observe that discretization effects of next-to-leading order in the chiral expansion (O(a^2 p^2)) are markedly reduced by HYP smearing. Compared to asqtad valence fermions, we find that taste-breaking in the pion spectrum is reduced by a factor of 2.5-3, down to a level comparable to the expected size of generic O(a^2) effects. Our results suggest that, once one reaches a lattice spacing of a ≈ 0.09 fm, taste-breaking will be small enough after HYP smearing that one can use a modified power counting in which O(a^2)…

  16. A parallel expert system for the control of a robotic air vehicle

    NASA Technical Reports Server (NTRS)

    Shakley, Donald; Lamont, Gary B.

    1988-01-01

    Expert systems can be used to govern the intelligent control of vehicles, for example the Robotic Air Vehicle (RAV). Due to the nature of the RAV system, the associated expert system needs to perform in a demanding real-time environment. The use of a parallel processing capability to support the associated expert system's computational requirements is critical in this application. Thus, algorithms for parallel real-time expert systems must be designed, analyzed, and synthesized. The design process incorporates a consideration of the rule-set/fact-set size along with representation issues. These issues are looked at in reference to information movement and various inference mechanisms. Also examined is the process involved with transporting the RAV expert system functions from the TI Explorer, where they are implemented in the Automated Reasoning Tool (ART), to the iPSC Hypercube, where the system is synthesized using Concurrent Common LISP (CCLISP). The transformation process for the ART to CCLISP conversion is described. The performance characteristics of the parallel implementation of these expert systems on the iPSC Hypercube are compared to the TI Explorer implementation.

  17. Comparison of empirical strategies to maximize GENEHUNTER lod scores.

    PubMed

    Chen, C H; Finch, S J; Mendell, N R; Gordon, D

    1999-01-01

    We compare four strategies for finding the settings of genetic parameters that maximize the lod scores reported in GENEHUNTER 1.2. The four strategies are iterated complete factorial designs, iterated orthogonal Latin hypercubes, evolutionary operation, and numerical optimization. The genetic parameters that are set are the phenocopy rate, penetrance, and disease allele frequency; both recessive and dominant models are considered. We selected the optimization of a recessive model on the Collaborative Study on the Genetics of Alcoholism (COGA) data of chromosome 1 for complete analysis. Convergence to a setting producing a local maximum required the evaluation of over 100 settings (for a time budget of 800 minutes on a Pentium II 300 MHz PC). Two notable local maxima were detected, suggesting the need for a more extensive search before claiming that a global maximum had been found. The orthogonal Latin hypercube design was the best strategy for finding areas that produced high lod scores with small numbers of evaluations. Numerical optimization starting from a region producing high lod scores was the strategy that found the highest maximum observed.
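
    An illustrative sketch of the Latin-hypercube search strategy compared above: generate stratified settings of the three genetic parameters and keep the best-scoring one. The parameter ranges are assumptions, and the smooth surrogate stands in for an actual GENEHUNTER lod-score evaluation.

```python
import numpy as np
from scipy.stats import qmc

u = qmc.LatinHypercube(d=3, seed=11).random(n=30)
# Assumed ranges: phenocopy rate, penetrance, disease allele frequency.
lower, upper = [0.0, 0.5, 0.001], [0.1, 1.0, 0.1]
settings = qmc.scale(u, lower, upper)

def lod_score(s):
    # Hypothetical smooth surrogate with a single optimum; a real search
    # would call GENEHUNTER with each parameter setting instead.
    return -((s[0] - 0.02) ** 2 + (s[1] - 0.9) ** 2 + (s[2] - 0.01) ** 2)

best = max(settings, key=lod_score)
print("best setting found:", np.round(best, 4))
```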

  18. Hypercat - Hypercube of AGN tori

    NASA Astrophysics Data System (ADS)

    Nikutta, Robert; Lopez-Rodriguez, Enrique; Ichikawa, Kohei; Levenson, Nancy A.; Packham, Christopher C.

    2018-06-01

    AGN unification and observations hold that a dusty torus obscures the central accretion engine along some lines of sight. SEDs of dust tori have been modeled for a long time, but resolved emission morphologies have not been studied in much detail, because resolved observations have only recently become possible (VLTI, ALMA) or lie in the near future (TMT, ELT, GMT). Some observations challenge a simple torus model, because in several objects most of the MIR emission appears to emanate from polar regions high above the equatorial plane, i.e. not where the dust supposedly resides. We introduce our software framework and hypercube of AGN tori (Hypercat) made with CLUMPY (www.clumpy.org), a large set of images (6 model parameters + wavelength) to facilitate studies of emission and dust morphologies. We make use of Hypercat to study the morphological properties of the emission and dust distributions as a function of model parameters. We find that a simple clumpy torus can indeed produce 10-micron emission patterns extended in polar directions, with extension ratios compatible with those found in observations. We are able to constrain the range of parameters that produce such morphologies.

  19. Machine learning from computer simulations with applications in rail vehicle dynamics

    NASA Astrophysics Data System (ADS)

    Taheri, Mehdi; Ahmadian, Mehdi

    2016-05-01

    The application of stochastic modelling for learning the behaviour of multibody dynamics (MBD) models is investigated. Post-processing data from a simulation run are used to train the stochastic model that estimates the relationship between model inputs (suspension relative displacement and velocity) and the output (sum of suspension forces). The stochastic model can be used to reduce the computational burden of the MBD model by replacing a computationally expensive subsystem in the model (the suspension subsystem). With minor changes, the stochastic modelling technique is able to learn the behaviour of a physical system and integrate that behaviour within MBD models. The technique is highly advantageous for MBD models where real-time simulations are necessary, or for models that have a large number of repeated substructures, e.g. modelling a train with a large number of railcars. The fact that the training data are acquired prior to the development of the stochastic model precludes conventional sampling plan strategies such as Latin hypercube sampling, where simulations are performed using the inputs dictated by the sampling plan. Since the sampling plan greatly influences the overall accuracy and efficiency of the stochastic predictions, a sampling plan suitable for the process is developed in which the most space-filling subset of the acquired data, with a specified number of sample points, that best describes the dynamic behaviour of the system under study is selected as the training data.
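
    The subset-selection step described above can be approximated with a greedy maximin rule: repeatedly take the acquired data point farthest from everything already chosen. A minimal sketch under assumed 2-D input data; the paper's actual selection criterion may differ.

        import numpy as np

        def maximin_subset(points, k, rng=None):
            """Greedily pick k mutually distant rows as a space-filling
            training subset for a surrogate model."""
            rng = np.random.default_rng(rng)
            chosen = [int(rng.integers(len(points)))]      # arbitrary seed point
            dist = np.linalg.norm(points - points[chosen[0]], axis=1)
            for _ in range(k - 1):
                nxt = int(np.argmax(dist))                 # farthest from subset
                chosen.append(nxt)
                dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
            return points[chosen]

        # e.g. logged (relative displacement, velocity) pairs from one run
        data = np.random.default_rng(0).normal(size=(5000, 2))
        train = maximin_subset(data, 100)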

  20. Scalability of Parallel Spatial Direct Numerical Simulations on Intel Hypercube and IBM SP1 and SP2

    NASA Technical Reports Server (NTRS)

    Joslin, Ronald D.; Hanebutte, Ulf R.; Zubair, Mohammad

    1995-01-01

    The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube and IBM SP1 and SP2 parallel computers is documented. Spatially evolving disturbances associated with the laminar-to-turbulent transition in boundary-layer flows are computed with the PSDNS code. The feasibility of using the PSDNS to perform transition studies on these computers is examined. The results indicate that the PSDNS approach can effectively be parallelized on a distributed-memory parallel machine by remapping the distributed data structure during the course of the calculation. Scalability information is provided to estimate computational costs and to match the actual costs relative to changes in the number of grid points. By increasing the number of processors, slower than linear speedups are achieved with optimized (machine-dependent library) routines. This slower than linear speedup results because the computational cost is dominated by the FFT routine, which yields less than ideal speedups. By using appropriate compile options and optimized library routines on the SP1, the serial code achieves 52-56 MFLOPS on a single node of the SP1 (45 percent of theoretical peak performance). The actual performance of the PSDNS code on the SP1 is evaluated with a "real world" simulation that consists of 1.7 million grid points. One time step of this simulation is calculated on eight nodes of the SP1 in the same time as required by a Cray Y/MP supercomputer. For the same simulation, 32 nodes of the SP1 and SP2 are required to reach the performance of a Cray C-90. A 32-node SP1 (SP2) configuration is 2.9 (4.6) times faster than a Cray Y/MP for this simulation, while the hypercube is roughly 2 times slower than the Y/MP for this application. KEY WORDS: Spatial direct numerical simulations; incompressible viscous flows; spectral methods; finite differences; parallel computing.

  1. Dynamics of upper mantle rocks decompression melting above hot spots under continental plates

    NASA Astrophysics Data System (ADS)

    Perepechko, Yury; Sorokin, Konstantin; Sharapov, Victor

    2014-05-01

    Numerical 2D simulation of decompression melting above hot spots (HS) was carried out under the following conditions: the initial temperature within the crust-mantle section was postulated; the thickness of the metasomatized lithospheric mantle is determined by the mantle rheology and the position of the upper asthenosphere boundary; the upper and lower boundaries were taken to be impermeable, with a no-slip condition and a prescribed temperature distribution (1400-2050°C); the lateral boundaries imitated an infinite layer. The sizes and distribution of the lateral points, their symmetry, and the maximum temperature were varied between the thermodynamic condition for the existence of the perovskite-majorite transition and its excess above the transition temperature. The problem was solved numerically by a cell-vertex finite volume method for thermo-hydrodynamic problems. To improve the convergence of the iterative process, the method of lower relaxation with a different relaxation parameter for each equation was used. A through-calculation method was used to increase the computing rate for the two-layered upper mantle - lithosphere system. The calculated region was 700 x (2100-4900) km. The time step for the study of the asthenosphere dynamics was 0.15-0.65 Ma. The following factors controlling the size and melting degree of the convective upper mantle are shown: a) the initial temperature distribution along the section of the upper mantle; b) the size and symmetry of the HS; c) the temperature excess within the HS above the temperature at the upper-lower mantle border, TB = 1500-2000°C, with 5-15% deviation but not exceeding 2350°C. It is found that decompression melting in the presence of an HS initiates primitive mantle melting at TB > 1600°C. The influence of initial upper mantle heating on asthenolens dimensions at constant HS size is controlled mainly by the decompression melting degree. Thus, for a lateral HS size of 400 km, decompression melting appears at TB > 1600°C and HS temperature (THS) > 1900°C, with an asthenolens size of ~700 km. When THS = 2000°C, the maximum melting degree of the primitive mantle is near 40%. With an increase to TB > 1900°C, the maximum degree of melting can reach 100% with the same size of the decompression melting zone (700 km). We examined decompression melting above HS having LHS = 100-780 km at TB = 1850-2100°C with a lithosphere thickness of 100 km. It is shown that the asthenolens size (Lln) does not change substantially: Lln = 700 km at LHS = 100 km; Lln = 800 km at LHS = 780 km. In the presence of asymmetry of a large HS, a region of advection develops above the HS maximum, with the formation of an asymmetrical cell. The influence of lithospheric plate thickness on the appearance and evolution of an asthenolens above the HS was investigated for a model stepped profile at TB ≤ 1750°C with LHS = 100 km and maximum THS = 2350°C. With an increase of TB, the Lln difference beneath lithospheric steps is leveled, with retention of a certain difference in melting degrees and in the time of melting appearance atop the HS. RFBR grant 12-05-00625.

  2. Agent-Based Modeling of China's Rural-Urban Migration and Social Network Structure.

    PubMed

    Fu, Zhaohao; Hao, Lingxin

    2018-01-15

    We analyze China's rural-urban migration and endogenous social network structures using agent-based modeling. The agents from census micro data are located in their rural origin with an empirically estimated prior propensity to move. The population-scale social network is a hybrid one, combining observed family ties and locations of the origin with a parameter space calibrated from census, survey and aggregate data and sampled using a stepwise Latin Hypercube Sampling method. At monthly intervals, some agents migrate and these migratory acts change the social network by turning within-nonmigrant connections to between-migrant-nonmigrant connections, turning local connections to nonlocal connections, and adding among-migrant connections. In turn, the changing social network structure updates the migratory propensities of those well-connected nonmigrants who become more likely to move. These two processes iterate over time. Using a core-periphery method developed from the k-core decomposition method, we identify and quantify the network structural changes and map these changes with the migration acceleration patterns. We conclude that network structural changes are essential for explaining the migration acceleration observed in China during the 1995-2000 period.

  3. Agent-based modeling of China's rural-urban migration and social network structure

    NASA Astrophysics Data System (ADS)

    Fu, Zhaohao; Hao, Lingxin

    2018-01-01

    We analyze China's rural-urban migration and endogenous social network structures using agent-based modeling. The agents from census micro data are located in their rural origin with an empirically estimated prior propensity to move. The population-scale social network is a hybrid one, combining observed family ties and locations of the origin with a parameter space calibrated from census, survey and aggregate data and sampled using a stepwise Latin Hypercube Sampling method. At monthly intervals, some agents migrate and these migratory acts change the social network by turning within-nonmigrant connections to between-migrant-nonmigrant connections, turning local connections to nonlocal connections, and adding among-migrant connections. In turn, the changing social network structure updates the migratory propensities of those well-connected nonmigrants who become more likely to move. These two processes iterate over time. Using a core-periphery method developed from the k-core decomposition method, we identify and quantify the network structural changes and map these changes with the migration acceleration patterns. We conclude that network structural changes are essential for explaining the migration acceleration observed in China during the 1995-2000 period.

  4. Determining the nuclear data uncertainty on MONK10 and WIMS10 criticality calculations

    NASA Astrophysics Data System (ADS)

    Ware, Tim; Dobson, Geoff; Hanlon, David; Hiles, Richard; Mason, Robert; Perry, Ray

    2017-09-01

    The ANSWERS Software Service is developing a number of techniques to better understand and quantify uncertainty on calculations of the neutron multiplication factor, k-effective, in nuclear fuel and other systems containing fissile material. The uncertainty on the calculated k-effective arises from a number of sources, including nuclear data uncertainties, manufacturing tolerances, modelling approximations and, for Monte Carlo simulation, stochastic uncertainty. For determining the uncertainties due to nuclear data, a set of application libraries have been generated for use with the MONK10 Monte Carlo and the WIMS10 deterministic criticality and reactor physics codes. This paper overviews the generation of these nuclear data libraries by Latin hypercube sampling of JEFF-3.1.2 evaluated data based upon a library of covariance data taken from JEFF, ENDF/B, JENDL and TENDL evaluations. Criticality calculations have been performed with MONK10 and WIMS10 using these sampled libraries for a number of benchmark models of fissile systems. Results are presented which show the uncertainty on k-effective for these systems arising from the uncertainty on the input nuclear data.
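
    The sampling step can be pictured as a Latin hypercube draw in standard-normal space, with an evaluated covariance imposed through a Cholesky factor. The numbers below are placeholders, not JEFF-3.1.2 values; scipy's qmc module is assumed to be available.

        import numpy as np
        from scipy.stats import norm, qmc

        mean = np.array([1.20, 0.85, 2.10])       # placeholder nuclear data
        cov = np.array([[0.010, 0.004, 0.000],    # placeholder covariances
                        [0.004, 0.020, 0.002],
                        [0.000, 0.002, 0.005]])

        z = norm.ppf(qmc.LatinHypercube(d=3, seed=42).random(n=100))
        L = np.linalg.cholesky(cov)               # impose the covariance
        samples = mean + z @ L.T                  # one perturbed library per row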

  5. The Influence of PV Module Materials and Design on Solder Joint Thermal Fatigue Durability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosco, Nick; Silverman, Timothy J.; Kurtz, Sarah

    Finite element model (FEM) simulations have been performed to elucidate the effect of flat plate photovoltaic (PV) module materials and design on PbSn eutectic solder joint thermal fatigue durability. The statistical method of Latin Hypercube sampling was employed to investigate the sensitivity of simulated damage to each input variable. Variables of laminate material properties and their thicknesses were investigated. Using analysis of variance, we determined that the rate of solder fatigue was most sensitive to solder layer thickness, with copper ribbon and silicon thickness being the next two most sensitive variables. By simulating both accelerated thermal cycles (ATCs) and PV cell temperature histories through two characteristic days of service, we determined that the acceleration factor between the ATC and outdoor service was independent of the variables sampled in this study. This result implies that an ATC test will represent a similar time of outdoor exposure for a wide range of module designs. This is an encouraging result for the standard ATC that must be universally applied across all modules.

  6. Global Sensitivity of Simulated Water Balance Indicators Under Future Climate Change in the Colorado Basin

    DOE PAGES

    Bennett, Katrina Eleanor; Urrego Blanco, Jorge Rolando; Jonko, Alexandra; ...

    2017-11-20

    The Colorado River basin is a fundamentally important river for society, ecology and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. Here, we combine global sensitivity analysis with a space-filling Latin Hypercube sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach.

  7. Development of building energy asset rating using stock modelling in the USA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Na; Goel, Supriya; Makhmalbaf, Atefe

    2016-01-29

    The US Building Energy Asset Score helps building stakeholders quickly gain insight into the efficiency of building systems (envelope, electrical and mechanical systems). A robust, easy-to-understand 10-point scoring system was developed to facilitate an unbiased comparison of similar building types across the country. The Asset Score does not rely on a database or specific building baselines to establish a rating. Rather, distributions of energy use intensity (EUI) for various building use types were constructed using Latin hypercube sampling and converted to a series of stepped linear scales to score buildings. A score is calculated based on the modelled source EUI after adjusting for climate. A web-based scoring tool, which incorporates an analytical engine and a simulation engine, was developed to standardize energy modelling and reduce implementation cost. This paper discusses the methodology used to perform several hundred thousand building simulation runs and develop the scoring scales.
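
    The idea of turning a simulated EUI distribution into a stepped score can be sketched with simple decile edges (lower EUI earns a higher score). The distribution below is synthetic, and the real Asset Score uses stepped linear scales, so treat this only as an illustration.

        import numpy as np

        rng = np.random.default_rng(7)
        eui = rng.lognormal(mean=4.5, sigma=0.3, size=100_000)  # synthetic kBtu/ft2

        edges = np.percentile(eui, np.linspace(0, 100, 11))     # decile edges

        def asset_score(building_eui):
            bin_idx = np.searchsorted(edges[1:-1], building_eui, side="right")
            return 10 - bin_idx          # lowest-EUI decile -> score 10

        print(asset_score(np.median(eui)))   # a mid-distribution building scores 5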

  8. Uncertainty and sensitivity analysis for two-phase flow in the vicinity of the repository in the 1996 performance assessment for the Waste Isolation Pilot Plant: Undisturbed conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HELTON,JON CRAIG; BEAN,J.E.; ECONOMY,K.

    2000-05-19

    Uncertainty and sensitivity analysis results obtained in the 1996 performance assessment for the Waste Isolation Pilot Plant are presented for two-phase flow in the vicinity of the repository under undisturbed conditions. Techniques based on Latin hypercube sampling, examination of scatterplots, stepwise regression analysis, partial correlation analysis and rank transformation are used to investigate brine inflow, gas generation, repository pressure, brine saturation, and brine and gas outflow. Of the variables under study, repository pressure is potentially the most important due to its influence on spallings and direct brine releases, with the uncertainty in its value being dominated by the extent to which the microbial degradation of cellulose takes place, the rate at which the corrosion of steel takes place, and the amount of brine that drains from the surrounding disturbed rock zone into the repository.
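
    The rank-based sensitivity measures named above reduce, in the simplest case, to Spearman correlations between each sampled input and the output. A toy sketch with made-up variable names standing in for the WIPP inputs:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        X = rng.random((300, 4))                  # LHS stand-in: 4 sampled inputs
        y = 3.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=300)

        names = ["microbial_degradation", "steel_corrosion",
                 "drz_brine_drainage", "gas_solubility"]
        for j, name in enumerate(names):          # rank transform via spearmanr
            rho, p = stats.spearmanr(X[:, j], y)
            print(f"{name:22s} rank correlation {rho:+.2f} (p={p:.1e})")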

  9. Generalized Likelihood Uncertainty Estimation (GLUE) methodology for optimization of extraction in natural products.

    PubMed

    Maulidiani; Rudiyanto; Abas, Faridah; Ismail, Intan Safinar; Lajis, Nordin H

    2018-06-01

    Optimization is an important aspect of natural product extraction. Herein, an alternative approach is proposed for optimization in extraction, namely the Generalized Likelihood Uncertainty Estimation (GLUE). The approach combines Latin hypercube sampling, the feasible ranges of the independent variables, Monte Carlo simulation, and threshold criteria for the response variables. The GLUE method is tested on three different techniques, including ultrasound-, microwave-, and supercritical-CO2-assisted extractions, utilizing data from previously published reports. The study found that this method can: provide more information on the combined effects of the independent variables on the response variables in the dotty plots; deal with an unlimited number of independent and response variables; consider combined multiple threshold criteria, which are subjective depending on the target of the investigation for the response variables; and provide a range of values with their distribution for the optimization.
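
    The GLUE recipe as described — Latin hypercube draws over feasible ranges, model runs, then retention of parameter sets meeting every threshold — can be sketched as follows. The extraction model and thresholds are invented for illustration.

        import numpy as np

        def glue_screen(model, bounds, thresholds, n=10_000, rng=None):
            """LHS over feasible ranges; keep sets meeting all thresholds."""
            rng = np.random.default_rng(rng)
            lo, hi = np.array(bounds).T
            d = len(lo)
            strata = rng.permuted(np.tile(np.arange(n), (d, 1)).T, axis=0)
            theta = lo + (rng.random((n, d)) + strata) / n * (hi - lo)
            resp = np.array([model(t) for t in theta])
            ok = np.all(resp >= thresholds, axis=1)
            return theta[ok], resp[ok]

        def extraction(t):                        # toy yield/purity model
            time, temp = t
            return np.array([time * np.exp(-((temp - 60) / 25) ** 2),
                             1.0 / (1.0 + 0.02 * time)])

        kept, _ = glue_screen(extraction, [(5, 60), (30, 90)], [20.0, 0.6], rng=9)
        print(len(kept), "behavioural parameter sets retained")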

  10. Optimisation of lateral car dynamics taking into account parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Busch, Jochen; Bestle, Dieter

    2014-02-01

    Simulation studies on an active all-wheel-steering car show that disturbances of vehicle parameters have a strong influence on lateral car dynamics. This motivates the need for robust design against such parameter uncertainties. A specific parametrisation is established combining deterministic, velocity-dependent steering control parameters with partly uncertain, velocity-independent vehicle parameters for simultaneous use in a numerical optimisation process. Model-based objectives are formulated and summarised in a multi-objective optimisation problem where especially the lateral steady-state behaviour is improved by an adaptation strategy based on measurable uncertainties. The normally distributed uncertainties are generated by optimal Latin hypercube sampling, and a response-surface-based strategy helps to cut down time-consuming model evaluations, which offers the possibility to use a genetic optimisation algorithm. Optimisation results are discussed in different criterion spaces and the achieved improvements confirm the validity of the proposed procedure.

  11. Efficient hybrid evolutionary algorithm for optimization of a strip coiling process

    NASA Astrophysics Data System (ADS)

    Pholdee, Nantiwat; Park, Won-Woong; Kim, Dong-Kyu; Im, Yong-Taek; Bureerat, Sujin; Kwon, Hyuck-Cheol; Chun, Myung-Sik

    2015-04-01

    This article proposes an efficient metaheuristic, based on hybridization of teaching-learning-based optimization and differential evolution, to improve the flatness of a strip during a strip coiling process. Differential evolution operators were integrated into the teaching-learning-based optimization, with a Latin hypercube sampling technique used to generate the initial population. The objective function was introduced to reduce axial inhomogeneity of the stress distribution and the maximum compressive stress, calculated by Love's elastic solution within the thin strip, which may cause an irregular surface profile of the strip during the strip coiling process. The hybrid optimizer and several well-established evolutionary algorithms (EAs) were used to solve the optimization problem. The comparative studies show that the proposed hybrid algorithm outperformed other EAs in terms of convergence rate and consistency. It was found that the proposed hybrid approach was powerful for process optimization, especially for a large-scale design problem.
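
    A common way to realize the "LHS initial population" ingredient is shown below, together with one DE/rand/1/bin trial-vector step of the kind such hybrids embed. The population size and the F and CR values are arbitrary here, not the paper's settings.

        import numpy as np

        rng = np.random.default_rng(5)
        n_pop, d = 20, 6
        lo, hi = np.zeros(d), np.ones(d)

        # Latin hypercube initial population: stratified in every dimension
        strata = rng.permuted(np.tile(np.arange(n_pop), (d, 1)).T, axis=0)
        pop = lo + (rng.random((n_pop, d)) + strata) / n_pop * (hi - lo)

        def de_trial(pop, i, F=0.7, CR=0.9):
            """One DE/rand/1/bin trial vector for individual i."""
            a, b, c = rng.choice([j for j in range(len(pop)) if j != i],
                                 size=3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True          # at least one mutated gene
            return np.where(cross, mutant, pop[i])

        trial = de_trial(pop, 0)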

  12. Accurate evaluation of sensitivity for calibration between a LiDAR and a panoramic camera used for remote sensing

    NASA Astrophysics Data System (ADS)

    García-Moreno, Angel-Iván; González-Barbosa, José-Joel; Ramírez-Pedraza, Alfonso; Hurtado-Ramos, Juan B.; Ornelas-Rodriguez, Francisco-Javier

    2016-04-01

    Computer-based reconstruction models can be used to approximate urban environments. These models are usually based on several mathematical approximations and the use of different sensors, which implies dependency on many variables. The sensitivity analysis presented in this paper is used to weigh the relative importance of each uncertainty contributor to the calibration of a panoramic camera-LiDAR system. Both sensors are used for three-dimensional urban reconstruction. Simulated and experimental tests were conducted. For the simulated tests we analyze and compare the calibration parameters using the Monte Carlo and Latin hypercube sampling techniques. Sensitivity analysis for each variable involved in the calibration was computed by the Sobol method, which is based on analysis of the variance breakdown, and the Fourier amplitude sensitivity test method, which is based on Fourier analysis. Sensitivity analysis is an essential tool in simulation modeling and for performing error propagation assessments.
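
    A variance-based (Sobol) first-order index, one of the two measures compared above, can be estimated with a Saltelli-style pick-freeze scheme. The toy model below merely stands in for the camera-LiDAR calibration chain.

        import numpy as np

        def sobol_first_order(model, d, n=50_000, rng=None):
            """Pick-freeze estimate of first-order Sobol indices on [0,1]^d."""
            rng = np.random.default_rng(rng)
            A, B = rng.random((n, d)), rng.random((n, d))
            fA, fB = model(A), model(B)
            var = np.concatenate([fA, fB]).var()
            S = np.empty(d)
            for i in range(d):
                ABi = A.copy()
                ABi[:, i] = B[:, i]       # ABi matches A except column i
                S[i] = np.mean(fB * (model(ABi) - fA)) / var
            return S

        f = lambda X: np.sin(3 * X[:, 0]) + 0.3 * X[:, 1] + 0.05 * X[:, 2]
        print(sobol_first_order(f, 3, rng=0))   # x0 dominates, x2 negligible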

  13. Global Sensitivity of Simulated Water Balance Indicators Under Future Climate Change in the Colorado Basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, Katrina Eleanor; Urrego Blanco, Jorge Rolando; Jonko, Alexandra

    The Colorado River basin is a fundamentally important river for society, ecology and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. Here, we combine global sensitivity analysis with a space-filling Latin Hypercube sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach.

  14. Hybrid Airships in Joint Logistics Over the Shore (JLOTS)

    DTIC Science & Technology

    2013-06-13

    LHS Load Handling System; LMSR Large, Medium Speed Roll-on/Roll-Off Ship; LOC Lines of Communication; LSV Logistics Support Vessel; MEB Marine... that connected operations to a source of supply became known as Lines-of-Communication (LOC). Increasing the length of a LOC makes an army more

  15. Research into the Architecture of CAD Based Robot Vision Systems

    DTIC Science & Technology

    1988-02-09

    Vision … and "Automatic Generation of Recognition Features for Computer Vision," Mudge, Turney and Volz, published in Robotica (1987). All of the... "Occluded Parts," (T.N. Mudge, J.L. Turney, and R.A. Volz), Robotica, vol. 5, 1987, pp. 117-127. 5. "Vision Algorithms for Hypercube Machines," (T.N. Mudge

  16. What Friends Are For: Collaborative Intelligence Analysis and Search

    DTIC Science & Technology

    2014-06-01

    14. SUBJECT TERMS: Intelligence Community, information retrieval, recommender systems, search engines, social networks, user profiling, Lucene... improvements over existing search systems. The improvements are shown to be robust to high levels of human error and low similarity between users... precision; NOLH nearly orthogonal Latin hypercubes; P@ precision at documents; RS recommender systems; TREC Text REtrieval Conference; USM user

  17. A graph with fractional revival

    NASA Astrophysics Data System (ADS)

    Bernard, Pierre-Antoine; Chan, Ada; Loranger, Érika; Tamon, Christino; Vinet, Luc

    2018-02-01

    An example of a graph that admits balanced fractional revival between antipodes is presented. It is obtained by establishing a correspondence between the quantum walk on a hypercube in which the opposite vertices across the diagonals of each face are connected, and the coherent transport of single excitations in an extension of the Krawtchouk spin chain with next-to-nearest-neighbour interactions.

  18. Pain Modulation in Waking and Hypnosis in Women: Event-Related Potentials and Sources of Cortical Activity

    PubMed Central

    De Pascalis, Vilfredo; Varriale, Vincenzo; Cacace, Immacolata

    2015-01-01

    Using a strict subject selection procedure, we tested in High and Low Hypnotizable subjects (HHs and LHs) whether treatments of hypoalgesia and hyperalgesia, as compared to a relaxation-control, differentially affected subjective pain ratings and somatosensory event-related potentials (SERPs) during painful electric stimulation. Treatments were administered in waking and hypnosis conditions. LHs showed little differentiation in pain and distress ratings between hypoalgesia and hyperalgesia treatments, whereas HHs showed a greater spread in the instructed direction. HHs had larger prefrontal N140 and P200 waves of the SERPs during hypnotic hyperalgesia as compared to relaxation-control treatment. Importantly, HHs showed significantly smaller frontocentral N140 and frontotemporal P200 waves during hypnotic hypoalgesia. LHs did not show significant differences for these SERP waves among treatments in either waking or hypnosis conditions. The source localization (sLORETA) method revealed significant activations of the bilateral primary somatosensory (BA3), middle frontal gyrus (BA6) and anterior cingulate cortices (BA24). Activity of these contralateral regions significantly correlated with subjective numerical pain scores for control treatment in the waking condition. Moreover, multivariate regression analyses distinguished the contralateral BA3 as the only region reflecting a stable pattern of pain coding changes across all treatments in waking and hypnosis conditions. More direct testing showed that hypnosis reduced the strength of the association of pain modulation and brain activity changes at BA3. sLORETA in HHs revealed, for the N140 wave, that during hypnotic hyperalgesia there was increased activity within the medial, supramarginal and superior frontal gyri, and cingulate gyrus (BA32), while for the P200 wave, activity was increased in the superior (BA22), middle (BA37) and inferior temporal (BA19) gyri and superior parietal lobule (BA7). Hypnotic hypoalgesia in HHs, for the N140 wave, showed reduced activity within the medial and superior frontal gyri (BA9, BA8), parahippocampal gyrus (BA34), and postcentral gyrus (BA1), while for the P200, activity was reduced within the middle and superior frontal gyri (BA9 and BA10), anterior cingulate (BA33), cuneus (BA19) and sub-lobar insula (BA13). These findings demonstrate that hypnotic suggestions can exert a top-down modulatory effect on attention/preconscious brain processes involved in pain perception. PMID:26030417

  19. Chemical evolution of Himalayan leucogranites based on an O, U-Pb and Hf study of zircon

    NASA Astrophysics Data System (ADS)

    Hopkinson, Thomas N.; Warren, Clare J.; Harris, Nigel B. W.; Hammond, Samantha J.; Parrish, Randall R.

    2015-04-01

    Crustal melting is a characteristic process at convergent plate margins, where crustal rocks are heated and deformed. Miocene leucogranite sheets and plutons are found intruded into the high-grade metasedimentary core (the Greater Himalayan Sequence, GHS) across the Himalayan orogen. Previously published Himalayan whole-rock data suggest that these leucogranites formed from a purely metasedimentary source, isotopically similar to the rocks into which they now intrude. Bulk rock analyses carry inherent uncertainties, however: they may hide contributions from different sources, and post-crystallization processes such as fluid interaction may significantly alter the original chemistry. In contrast, zircon retains more precise information on the sources contributing to the melt from which it crystallises, while its resistant nature renders it impervious to post-magmatic processes. This multi-isotope study of Oligocene-Miocene leucogranite zircons from the Bhutan Himalaya seeks to differentiate between the various geochemical processes that contribute to granite formation. Hf and O isotopes are used to detect discrete changes in melt source, while U-Pb isotopes provide the timing of zircon crystallisation. Our data show that zircon rims of Himalayan age yield Hf-O signatures that lie within the previously reported whole-rock GHS field, confirming the absence of a discernible mantle contribution to the leucogranite source. Importantly, we document a decrease in the minimum ɛHf values during Himalayan orogenesis through time, corresponding to a change in Hf model age from 1.4 Ga to 2.4 Ga. Nd model ages for the older Lesser Himalayan metasediments (LHS) that underthrust the GHS are significantly older than those for the GHS (2.4-2.9 Ga compared with 1.4-2.2 Ga), and as such even minor contributions of LHS material incorporated into a melt would significantly increase the resulting Hf model age. Hence our leucogranite data suggest either a change of source within the GHS over time, or an increasing contribution from older Lesser Himalayan (LHS) material in the melt. This is the first time that an evolutionary trend in the chemistry of Himalayan crustal melts has been recognized. These new data show that, at least in the Himalaya, accessory phase geochemistry can provide more detailed insight into tectonic processes than bulk rock geochemistry.

  20. The binary white dwarf LHS 3236

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, Hugh C.; Dahn, Conard C.; Canzian, Blaise

    2013-12-10

    The white dwarf LHS 3236 (WD1639+153) is shown to be a double-degenerate binary, with each component having a high mass. Astrometry at the U.S. Naval Observatory gives a parallax and distance of 30.86 ± 0.25 pc and a tangential velocity of 98 km s⁻¹, and reveals binary orbital motion. The orbital parameters are determined from astrometry of the photocenter over more than three orbits of the 4.0 yr period. High-resolution imaging at the Keck Observatory resolves the pair with a separation of 31 and 124 mas at two epochs. Optical and near-IR photometry give a set of possible binary components. Consistency of all data indicates that the binary is a pair of DA stars with temperatures near 8000 and 7400 K and with masses of 0.93 and 0.91 M☉; also possible is a DA primary and a helium DC secondary with temperatures near 8800 and 6000 K and with masses of 0.98 and 0.69 M☉. In either case, the cooling ages of the stars are ∼3 Gyr and the total ages are <4 Gyr. The combined mass of the binary (1.66-1.84 M☉) is well above the Chandrasekhar limit; however, the timescale for coalescence is long.

  1. Release behavior and toxicity profiles towards A549 cell lines of ciprofloxacin from its layered zinc hydroxide intercalation compound

    PubMed Central

    2013-01-01

    Background: Layered hydroxide salts (LHS), a class of layered inorganic compounds, are gaining attention in a wide range of applications, particularly due to their unique anion exchange properties. In this work, layered zinc hydroxide nitrate (LZH), a family member of LHS, was intercalated with anionic ciprofloxacin (CFX), a broad-spectrum antibiotic, via ion exchange in a water:ethanol mixture. Results: Powder X-ray diffraction (XRD), Fourier transform infrared (FTIR) spectroscopy and thermogravimetric analysis (TGA) confirmed that the drug anions were successfully intercalated in the interlayer space of LZH. The specific surface area of the obtained compound was increased compared to that of the host, owing to the different pore textures of the two materials. CFX anions were slowly released over 80 hours in phosphate-buffered saline (PBS) solution due to strong interactions between the intercalated anions and the host lattices. The intercalation compound demonstrated enhanced antiproliferative effects towards A549 cancer cells compared to the toxicity of CFX alone. Conclusions: Strong host-guest interactions between the LZH lattice and the CFX anion give rise to a new intercalation compound that demonstrates a sustained release mode and enhanced toxicity effects towards A549 cell lines. These findings should serve as a foundation for further development of the brucite-like host material in drug delivery systems. PMID:23849189

  2. Comparative Analysis of Reconstructed Image Quality in a Simulated Chromotomographic Imager

    DTIC Science & Technology

    2014-03-01

    This example uses five basic images: a backlit bar chart with random intensity, 100 nm separation. Reconstructed image quality is highly dependent on the initial target hypercube, so a total of 54 initial target... compared for a variety of scenes.

  3. Flexible Space-Filling Designs for Complex System Simulations

    DTIC Science & Technology

    2013-06-01

    interior of the experimental region and cannot fit higher-order models. We present a genetic algorithm that constructs space-filling designs with minimal correlations. Subject terms: Computer Experiments, Design of Experiments, Genetic Algorithm, Latin Hypercube, Response Surface Methodology, Nearly Orthogonal.

  4. Different Formulations of the Orthogonal Array Problem and Their Symmetries

    DTIC Science & Technology

    2014-06-19

    t). Note that OAP(k, s, t, λ) is a polytope, not an unbounded polyhedron, because it can be embedded inside the hypercube [0, λ]^m. Theorem 5. The OAP(k... 3.1) is a facet, since otherwise OAP(k, s, t, λ) would be an unbounded polyhedron. Then there exists Ny ∈ R^m satisfying all but the 28 facet-defining

  5. Hypercube Solutions for Conjugate Directions

    DTIC Science & Technology

    1991-12-01

    1800s, Charles Babbage had designed his Difference Engine and proceeded to the more advanced Analytical Engine. These machines were never completed... Consider his motivation. The following example was frequently cited by Charles Babbage (1792-1871) to justify the construction of his first computing... LIST OF REFERENCES: [1] P. Morrison and E. Morrison, editors. Charles Babbage and His Calculating Engines. Dover Publications, Inc., New

  6. HERMIES-I: a mobile robot for navigation and manipulation experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weisbin, C.R.; Barhen, J.; de Saussure, G.

    1985-01-01

    The purpose of this paper is to report the current status of investigations ongoing at the Center for Engineering Systems Advanced Research (CESAR) in the areas of navigation and manipulation in unstructured environments. The HERMIES-I mobile robot, a prototype of a series which contains many of the major features needed for remote work in hazardous environments, is discussed. Initial experimental work at CESAR has begun in the area of navigation. The paper briefly reviews some of the ongoing research in autonomous navigation and describes initial research with HERMIES-I and associated graphic simulation. Since the HERMIES robots will generally be composed of a variety of asynchronously controlled hardware components (such as manipulator arms, digital image sensors, sonars, etc.), it seems appropriate to consider future development of the HERMIES brain as a hypercube ensemble machine with concurrent computation and associated message passing. The basic properties of such a hypercube architecture are presented. Decision-making under uncertainty eventually permeates all of our work. Following a survey of existing analytical approaches, it was decided that a stronger theoretical basis is required. As such, this paper presents the framework for a recently developed hybrid uncertainty theory. 21 refs., 2 figs.

  7. Critical and Griffiths-McCoy singularities in quantum Ising spin glasses on d-dimensional hypercubic lattices: A series expansion study.

    PubMed

    Singh, R R P; Young, A P

    2017-08-01

    We study the ±J transverse-field Ising spin-glass model at zero temperature on d-dimensional hypercubic lattices and in the Sherrington-Kirkpatrick (SK) model, by series expansions around the strong-field limit. In the SK model and in high dimensions our calculated critical properties are in excellent agreement with the exact mean-field results, surprisingly even down to dimension d=6, which is below the upper critical dimension of d=8. In contrast, at lower dimensions we find a rich singular behavior consisting of critical and Griffiths-McCoy singularities. The divergence of the equal-time structure factor allows us to locate the critical coupling where the correlation length diverges, implying the onset of a thermodynamic phase transition. We find that the spin-glass susceptibility as well as various power moments of the local susceptibility become singular in the paramagnetic phase before the critical point. Griffiths-McCoy singularities are very strong in two dimensions but decrease rapidly as the dimension increases. We present evidence that high enough powers of the local susceptibility may become singular at the pure-system critical point.

  8. A parallel strategy for implementing real-time expert systems using CLIPS

    NASA Technical Reports Server (NTRS)

    Ilyes, Laszlo A.; Villaseca, F. Eugenio; Delaat, John

    1994-01-01

    As evidenced by current literature, there appears to be a continued interest in the study of real-time expert systems. It is generally recognized that speed of execution is only one consideration when designing an effective real-time expert system. Some other features one must consider are the expert system's ability to perform temporal reasoning, handle interrupts, prioritize data, contend with data uncertainty, and perform context focusing as dictated by the incoming data to the expert system. This paper presents a strategy for implementing a real-time expert system on the iPSC/860 hypercube parallel computer using CLIPS. The strategy takes into consideration not only the execution time of the software, but also those features which define a true real-time expert system. The methodology is then demonstrated using a practical implementation of an expert system which performs diagnostics on the Space Shuttle Main Engine (SSME). This particular implementation uses an eight-node hypercube to process ten sensor measurements in order to simultaneously diagnose five different failure modes within the SSME. The main program is written in ANSI C and embeds CLIPS to better facilitate and debug the rule-based expert system.

  9. Parallel algorithms for quantum chemistry. I. Integral transformations on a hypercube multiprocessor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whiteside, R.A.; Binkley, J.S.; Colvin, M.E.

    1987-02-15

    For many years it has been recognized that fundamental physical constraints such as the speed of light will limit the ultimate speed of single processor computers to less than about three billion floating point operations per second (3 GFLOPS). This limitation is becoming increasingly restrictive as commercially available machines are now within an order of magnitude of this asymptotic limit. A natural way to avoid this limit is to harness together many processors to work on a single computational problem. In principle, these parallel processing computers have speeds limited only by the number of processors one chooses to acquire. The usefulness of potentially unlimited processing speed to a computationally intensive field such as quantum chemistry is obvious. If these methods are to be applied to significantly larger chemical systems, parallel schemes will have to be employed. For this reason we have developed distributed-memory algorithms for a number of standard quantum chemical methods. We are currently implementing these on a 32 processor Intel hypercube. In this paper we present our algorithm and benchmark results for one of the bottleneck steps in quantum chemical calculations: the four index integral transformation.

  10. Critical and Griffiths-McCoy singularities in quantum Ising spin glasses on d-dimensional hypercubic lattices: A series expansion study

    NASA Astrophysics Data System (ADS)

    Singh, R. R. P.; Young, A. P.

    2017-08-01

    We study the ±J transverse-field Ising spin-glass model at zero temperature on d-dimensional hypercubic lattices and in the Sherrington-Kirkpatrick (SK) model, by series expansions around the strong-field limit. In the SK model and in high dimensions our calculated critical properties are in excellent agreement with the exact mean-field results, surprisingly even down to dimension d = 6, which is below the upper critical dimension of d = 8. In contrast, at lower dimensions we find a rich singular behavior consisting of critical and Griffiths-McCoy singularities. The divergence of the equal-time structure factor allows us to locate the critical coupling where the correlation length diverges, implying the onset of a thermodynamic phase transition. We find that the spin-glass susceptibility as well as various power moments of the local susceptibility become singular in the paramagnetic phase before the critical point. Griffiths-McCoy singularities are very strong in two dimensions but decrease rapidly as the dimension increases. We present evidence that high enough powers of the local susceptibility may become singular at the pure-system critical point.

  11. Spacecraft On-Board Information Extraction Computer (SOBIEC)

    NASA Technical Reports Server (NTRS)

    Eisenman, David; Decaro, Robert E.; Jurasek, David W.

    1994-01-01

    The Jet Propulsion Laboratory is the Technical Monitor on an SBIR Program issued for Irvine Sensors Corporation to develop a highly compact, dual-use massively parallel processing node known as SOBIEC. SOBIEC couples 3D memory stacking technology with processing technology provided by nCUBE. The node contains sufficient network Input/Output to implement up to an order-13 binary hypercube. The benefit of this network is that it scales linearly as more processors are added, and it is a superset of other commonly used interconnect topologies such as meshes, rings, toroids, and trees. In this manner, a distributed processing network can be easily devised and supported. The SOBIEC node has sufficient memory for most multi-computer applications, and also supports external memory expansion and DMA interfaces. The SOBIEC node is supported by a mature set of software development tools from nCUBE. The nCUBE operating system (OS) provides configuration and operational support for up to 8000 SOBIEC processors in an order-13 binary hypercube or any subset or partition(s) thereof. The OS is UNIX (USL SVR4) compatible, with C, C++, and FORTRAN compilers readily available. A stand-alone development system is also available to support SOBIEC test and integration.

  12. Hydrothermal Crystal Growth of Lithium Tetraborate and Lithium Gamma-Metaborate

    DTIC Science & Technology

    2014-03-27

    could be atomic nuclei, the center of mass of some complex—those details are immaterial. Both the rectangle and lozenge form potential cross-sections... [Figure 10 caption: the red lines are the various contours of the solution of Eq. (9), using a = 10 Bohr radii and the mass of coefficients on the LHS forced to...] absorptivity is significantly below that of the transition metals or the actinides [9]. The lithium borate crystals are therefore a strong candidate for

  13. Deriving Einstein-Podolsky-Rosen steering inequalities from the few-body Abner Shimony inequalities

    NASA Astrophysics Data System (ADS)

    Zhou, Jie; Meng, Hui-Xian; Jiang, Shu-Han; Xu, Zhen-Peng; Ren, Changliang; Su, Hong-Yi; Chen, Jing-Ling

    2018-04-01

    For the Abner Shimony (AS) inequalities, the simplest unified forms of directions attaining the maximum quantum violation are investigated. Based on these directions, a family of Einstein-Podolsky-Rosen (EPR) steering inequalities is derived from the AS inequalities in a systematic manner. For these inequalities, the local hidden state (LHS) bounds are strictly less than the local hidden variable (LHV) bounds. This means that the EPR steering is a form of quantum nonlocality strictly weaker than Bell nonlocality.

  14. Large Eddy Simulation of Bubbly Ship Wakes

    DTIC Science & Technology

    2005-08-01

    [Equations (2.28) and (2.29), garbled in extraction] where Ci and Ei represent the convective terms, Bi is the discrete operator for the pressure gradient term, DE and DI (FE and FI) are discrete operators for the explicitly treated off-diagonal terms and the... Adams-Bashforth scheme is employed for all the other terms. The off-diagonal viscous terms (DE) are treated explicitly in order to simplify the LHS matrix of the

  15. Vision-Based Navigation and Parallel Computing

    DTIC Science & Technology

    1990-08-01

    5.8. Behzad Kamgar-Parsi and Behrooz Kamgar-Parsi, "On Problem Solving with Hopfield Neural Networks", CAR-TR-462, CS-TR... Second, the hypercube connections support logarithmic implementations of fundamental parallel algorithms, such as grid permutations and scan... the pose space. It also uses a set of virtual processors to represent an orthogonal projection grid, and projections of the six-dimensional pose space

  16. Hypercat - Hypercube of Clumpy AGN Tori

    NASA Astrophysics Data System (ADS)

    Nikutta, Robert; Lopez-Rodriguez, Enrique; Ichikawa, Kohei; Levenson, Nancy; Packham, Christopher C.

    2017-06-01

    Dusty tori surrounding the central engines of Active Galactic Nuclei (AGN) are required by the Unification Paradigm, and are supported by many observations, e.g. a variable nuclear absorber (sometimes Compton-thick) in X-rays, reverberation mapping in optical/UV, hot dust emission and SED shapes in NIR/MIR, and molecular and cool-dust tori observed with ALMA in the sub-mm. While models of AGN torus SEDs have been developed and utilized for a long time, the study of the resolved emission morphology (brightness maps) has so far been under-appreciated, presumably because resolved observations of the central parsec in AGN have only recently become possible. Currently, only NIR+MIR interferometry is capable of resolving the nuclear dust emission (but not of producing images, until MATISSE comes online). Furthermore, MIR interferometry has delivered puzzling results, e.g. that in some resolved sources the light emanates preferentially from polar directions above the "torus" system, and not from the equatorial plane, where most of the dust is located. We are preparing the release of a panchromatic, fully interpolable hypercube of brightness maps and projected dust images for a large number of CLUMPY torus models (Nenkova+2008), that will help facilitate studies of resolved AGN emission and dust morphologies. Together with the cube we will release a comprehensive set of open-source tools (Python) that will enable researchers to work efficiently with this large hypercube:
    * easy sub-cube selection + memory-mapping (mitigating the too-big-for-RAM problem)
    * multi-dim image interpolation (get an image at any wavelength & model parameter combination)
    * simulation of observations with telescopes (compute/provide + apply a PSF) and interferometers (get visibilities)
    * analysis of images with respect to the power contained at all scales and orientations (via 2D steerable wavelets), addressing the seemingly puzzling results mentioned above
    A series of papers is in preparation, aiming at solving the puzzles, and at making predictions about the resolvability of all nearby AGN tori with any combination of current and future instruments (e.g. VLTI+MATISSE, TMT+MICHI, GMT, ELT, JWST, ALMA).
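
    The "memory-mapping plus interpolation" workflow the tool list describes can be sketched with numpy and scipy; the file name, axes and shapes below are invented stand-ins, not the actual Hypercat data layout or API.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        # Hypothetical 3-axis slice of an image hypercube:
        # (inclination [deg], wavelength [micron], x [projected offset])
        inc = np.linspace(0, 90, 10)
        wav = np.linspace(2.2, 18.5, 25)
        x = np.linspace(-1, 1, 101)
        cube = np.memmap("hypercat_slice.dat", dtype=np.float32, mode="w+",
                         shape=(10, 25, 101))       # stand-in for the real cube
        cube[:] = np.random.default_rng(0).random(cube.shape)

        # Off-grid cut at inclination 37 deg, wavelength 10.3 micron
        interp = RegularGridInterpolator((inc, wav, x), cube)
        pts = np.column_stack([np.full_like(x, 37.0), np.full_like(x, 10.3), x])
        profile = interp(pts)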

  17. Co-Designing a Collaborative Chronic Care Network (C3N) for Inflammatory Bowel Disease: Development of Methods

    PubMed Central

    Dellal, George; Peterson, Laura E; Provost, Lloyd; Gloor, Peter A; Fore, David Livingstone; Margolis, Peter A

    2018-01-01

    Background Our health care system fails to deliver necessary results, and incremental system improvements will not deliver needed change. Learning health systems (LHSs) are seen as a means to accelerate outcomes, improve care delivery, and further clinical research; yet, few such systems exist. We describe the process of codesigning, with all relevant stakeholders, an approach for creating a collaborative chronic care network (C3N), a peer-produced networked LHS. Objective The objective of this study was to report the methods used, with a diverse group of stakeholders, to translate the idea of a C3N to a set of actionable next steps. Methods The setting was ImproveCareNow, an improvement network for pediatric inflammatory bowel disease. In collaboration with patients and families, clinicians, researchers, social scientists, technologists, and designers, C3N leaders used a modified idealized design process to develop a design for a C3N. Results Over 100 people participated in the design process that resulted in (1) an overall concept design for the ImproveCareNow C3N, (2) a logic model for bringing about this system, and (3) 13 potential innovations likely to increase awareness and agency, make it easier to collect and share information, and to enhance collaboration that could be tested collectively to bring about the C3N. Conclusions We demonstrate methods that resulted in a design that has the potential to transform the chronic care system into an LHS. PMID:29472173

  18. Changes in quality of life into adulthood after very preterm birth and/or very low birth weight in the Netherlands

    PubMed Central

    2013-01-01

    Background: It is important to know the impact of Very Preterm (VP) birth or Very Low Birth Weight (VLBW). The purpose of this study is to evaluate changes in Health-Related Quality of Life (HRQoL) of adults born VP or with a VLBW between age 19 and age 28. Methods: The 1983 nationwide Dutch Project On Preterm and Small for gestational age infants (POPS) cohort of 1338 VP (gestational age <32 weeks) or VLBW (<1500 g) infants was contacted to complete online questionnaires at age 28. In total, 33.8% of eligible participants completed the Health Utilities Index (HUI3), the London Handicap Scale (LHS) and the WHOQoL-BREF. Multiple imputation was applied to correct for missing data and non-response. Results: The mean HUI3 and LHS scores did not change significantly from age 19 to age 28. However, after multiple imputation, a significant, though not clinically relevant, increase of 0.02 in the overall HUI3 score was found. The mean HRQoL score measured with the HUI3 increased from 0.83 at age 19 to 0.85 at age 28. The lowest score on the WHOQoL was in the psychological domain (74.4). Conclusions: Overall, no important changes in HRQoL between age 19 and age 28 were found in the POPS cohort. Psychological and emotional problems stand out, from which recommendations for interventions could be derived. PMID:23531081

  19. Validating Smoking Data From the Veteran’s Affairs Health Factors Dataset, an Electronic Data Source

    PubMed Central

    Brandt, Cynthia A.; Skanderson, Melissa; Justice, Amy C.; Shahrir, Shahida; Butt, Adeel A.; Brown, Sheldon T.; Freiberg, Matthew S.; Gibert, Cynthia L.; Goetz, Matthew Bidwell; Kim, Joon Woo; Pisani, Margaret A.; Rimland, David; Rodriguez-Barradas, Maria C.; Sico, Jason J.; Tindle, Hilary A.; Crothers, Kristina

    2011-01-01

    Introduction: We assessed smoking data from the Veterans Health Administration (VHA) electronic medical record (EMR) Health Factors dataset. Methods: To assess the validity of the EMR Health Factors smoking data, we first created an algorithm to convert text entries into a 3-category smoking variable (never, former, and current). We compared this EMR smoking variable to 2 different sources of patient self-reported smoking survey data: (a) 6,816 HIV-infected and -uninfected participants in the 8-site Veterans Aging Cohort Study (VACS-8) and (b) a subset of 13,689 participants from the national VACS Virtual Cohort (VACS-VC), who also completed the 1999 Large Health Study (LHS) survey. Sensitivity, specificity, and kappa statistics were used to evaluate agreement of EMR Health Factors smoking data with self-report smoking data. Results: For the EMR Health Factors and VACS-8 comparison of current, former, and never smoking categories, the kappa statistic was .66. For EMR Health Factors and VACS-VC/LHS comparison of smoking, the kappa statistic was .61. Conclusions: Based on kappa statistics, agreement between the EMR Health Factors and survey sources is substantial. Identification of current smokers nationally within the VHA can be used in future studies to track smoking status over time, to evaluate smoking interventions, and to adjust for smoking status in research. Our methodology may provide insights for other organizations seeking to use EMR data for accurate determination of smoking status. PMID:21911825

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, S. M.; Kim, K. Y.

    Printed circuit heat exchanger (PCHE) is recently considered as a recuperator for the high temperature gas cooled reactor. In this work, the zigzag channels of a PCHE have been optimized by using three-dimensional Reynolds-Averaged Navier-Stokes (RANS) analysis and response surface approximation (RSA) modeling to enhance thermal-hydraulic performance. The shear stress transport turbulence model is used as a turbulence closure. The objective function is defined as a linear combination of functions related to the heat transfer and the friction loss of the PCHE, respectively. Three geometric design variables, viz., the ratio of the fillet radius to the hydraulic diameter of the channels, the ratio of wavelength to hydraulic diameter, and the ratio of wave height to hydraulic diameter, are used for the optimization. Design points are selected through Latin hypercube sampling. The optimal design is determined through the RSA model, which uses RANS calculations at the design points. The results show that the optimum shape considerably enhances thermal-hydraulic performance over a reference shape. (authors)
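
    The RSA step — fit a quadratic surface to results at LHS design points, then optimize on the surrogate — can be sketched as below. The objective and design sites are synthetic; the real study evaluates RANS CFD at each point.

        import numpy as np
        from itertools import combinations_with_replacement

        rng = np.random.default_rng(11)
        X = rng.random((40, 3))                      # stand-in LHS design sites
        y = -(X[:, 0] - 0.6)**2 - 2*(X[:, 1] - 0.3)**2 - (X[:, 2] - 0.5)**2

        def quad_features(X):
            """Full quadratic basis: 1, x_i, x_i*x_j."""
            cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
            cols += [X[:, i] * X[:, j] for i, j in
                     combinations_with_replacement(range(X.shape[1]), 2)]
            return np.column_stack(cols)

        beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

        grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 21)]*3), -1).reshape(-1, 3)
        best = grid[np.argmax(quad_features(grid) @ beta)]
        print("surrogate optimum near", best)        # expect ~(0.6, 0.3, 0.5)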

  1. Design optimization of hydraulic turbine draft tube based on CFD and DOE method

    NASA Astrophysics Data System (ADS)

    Nam, Mun chol; Dechun, Ba; Xiangji, Yue; Mingri, Jin

    2018-03-01

    In order to improve the performance of the hydraulic turbine draft tube during its design process, this paper optimizes the draft tube on a multi-disciplinary collaborative design optimization platform combining computational fluid dynamics (CFD) and design of experiments (DOE). The geometric design variables are taken from the median section of the draft tube and the cross section of its exit diffuser, and the objective is to maximize the pressure recovery factor (Cp). Sample matrices required for the shape optimization of the draft tube are generated by the optimal Latin hypercube (OLH) method of the DOE technique, and their performance is evaluated through CFD simulation. Subsequently, the main effect analysis and the sensitivity analysis of the geometric parameters of the draft tube are carried out. Then, the optimal values of the geometric design variables are determined using the response surface method. The optimized draft tube shows a marked performance improvement over the original.

  2. Developing an in silico model of the modulation of base excision repair using methoxyamine for more targeted cancer therapeutics.

    PubMed

    Gurkan-Cavusoglu, Evren; Avadhani, Sriya; Liu, Lili; Kinsella, Timothy J; Loparo, Kenneth A

    2013-04-01

    Base excision repair (BER) is a major DNA repair pathway involved in the processing of exogenous non-bulky base damage from certain classes of cancer chemotherapy drugs as well as ionising radiation (IR). Methoxyamine (MX) is a small-molecule chemical inhibitor of BER that has been shown to enhance chemotherapy and/or IR cytotoxicity in human cancers. In this study, the authors have analysed the inhibitory effect of MX on BER pathway kinetics using a computational model of the repair pathway. The inhibitory effect of MX depends on BER efficiency. The authors generated variable-efficiency groups using different sets of protein concentrations obtained by Latin hypercube sampling, and clustered the simulation results into high-, medium- and low-efficiency repair groups. Analysis of the inhibitory effect of MX on each of the three groups shows that the inhibition is most effective for high-efficiency BER and least effective for low-efficiency repair.
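
    A hypothetical sketch of the sampling-and-clustering step described above (the BER pathway model itself is not reproduced): protein concentrations drawn by Latin hypercube sampling, a toy repair-efficiency function, and k-means clustering into three efficiency groups:

        import numpy as np
        from scipy.stats import qmc
        from sklearn.cluster import KMeans

        # LHS over 4 hypothetical BER protein concentrations, scaled to plausible ranges
        sampler = qmc.LatinHypercube(d=4, seed=0)
        conc = qmc.scale(sampler.random(n=300), l_bounds=[0.1]*4, u_bounds=[10.0]*4)

        # Toy stand-in for simulated repair efficiency (saturating in each protein)
        eff = np.prod(conc / (1.0 + conc), axis=1)

        # Cluster simulations into low / medium / high efficiency groups
        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(eff.reshape(-1, 1))
        for k in range(3):
            print(f"group {k}: n={np.sum(labels == k)}, mean efficiency={eff[labels == k].mean():.3f}")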

  3. Boundary conditions, dimensionality, topology and size dependence of the superconducting transition temperature

    NASA Astrophysics Data System (ADS)

    Fink, Herman J.; Haley, Stephen B.; Giuraniuc, Claudiu V.; Kozhevnikov, Vladimir F.; Indekeu, Joseph O.

    2005-11-01

    For various sample geometries (slabs, cylinders, spheres, hypercubes), de Gennes' boundary condition parameter b is used to study its effect upon the transition temperature Tc of a superconductor. For b > 0 the order parameter at the surface is decreased, and as a consequence Tc is reduced, while for b < 0 the order parameter at the surface is increased, thereby enhancing Tc of a specimen in zero magnetic field. Exact solutions, derived by Fink and Haley (Int. J. mod. Phys. B, 17, 2171 (2003)), of the order parameter of a slab of finite thickness as a function of temperature are presented, both for reduced and enhanced transition (nucleation) temperatures. At the nucleation temperature the order parameter approaches zero. This concise review closes with a link established between de Gennes' microscopic boundary condition and the Ginzburg-Landau phenomenological approach, and a discussion of some relevant experiments. For example, applying the boundary condition with b < 0 to tin whiskers elucidates the increase of Tc with strain.

  4. Computer program to minimize prediction error in models from experiments with 16 hypercube points and 0 to 6 center points

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1982-01-01

    A previous report described a backward-deletion procedure of model selection that was optimized for minimum prediction error and that used a multiparameter combination of the F-distribution and an order-statistics distribution due to Cochran. A computer program is described that applies the previously optimized procedure to real data. The use of the program is illustrated by examples.

  5. Software Techniques for Non-Von Neumann Architectures

    DTIC Science & Technology

    1990-01-01

    Machine summary recovered from a specification table: programmable Benes interconnection network; hypercubic lattice for QCD; centralized control; static assignment; shared memory; universal synchronization; up to 566 processor boards (each comprising 4 floating-point units and 2 multipliers); 32-bit floating-point CPU chips; 11.4 Gflops peak performance; market: quantum chromodynamics (QCD). A further fragment notes that there should exist a capability to define hierarchies and lattices of complex objects, where a complex object can be made up of a set of simple objects.

  6. On the Local Equivalence Between the Canonical and the Microcanonical Ensembles for Quantum Spin Systems

    NASA Astrophysics Data System (ADS)

    Tasaki, Hal

    2018-06-01

    We study a quantum spin system on the d-dimensional hypercubic lattice Λ with N = L^d sites with periodic boundary conditions. We take an arbitrary translation-invariant short-ranged Hamiltonian. For this system, we consider both the canonical ensemble with inverse temperature β_0 and the microcanonical ensemble with the corresponding energy U_N(β_0). For an arbitrary self-adjoint operator Â whose support is contained in a hypercubic block B inside Λ, we prove that the expectation values of Â with respect to these two ensembles are close to each other for large N, provided that β_0 is sufficiently small and the number of sites in B is o(N^{1/2}). This establishes the equivalence of ensembles on the level of local states in a large but finite system. The result is essentially that of Brandão and Cramer (here restricted to the case of the canonical and the microcanonical ensembles), but we prove improved estimates in an elementary manner. We also review and prove standard results on the thermodynamic limits of thermodynamic functions and the equivalence of ensembles in terms of thermodynamic functions. The present paper assumes only elementary knowledge of quantum statistical mechanics and quantum spin systems.

  7. Growing a hypercubical output space in a self-organizing feature map.

    PubMed

    Bauer, H U; Villmann, T

    1997-01-01

    Neural maps project data from an input space onto a neuron position in an (often lower-dimensional) output space grid in a neighborhood-preserving way, with neighboring neurons in the output space responding to neighboring data points in the input space. A map-learning algorithm can achieve optimal neighborhood preservation only if the output space topology roughly matches the effective structure of the data in the input space. We here present a growth algorithm, called the GSOM or growing self-organizing map, which enhances a widespread map self-organization process, Kohonen's self-organizing feature map (SOFM), by adapting the output space grid during learning. The GSOM restricts the output space to a general hypercubical shape, with the overall dimensionality of the grid and its extent along the different directions being subject to adaptation. This constraint meets the demands of many larger information processing systems, of which the neural map can be a part. We apply our GSOM algorithm to three examples, two of which involve real-world data. Using recently developed methods for measuring the degree of neighborhood preservation in neural maps, we find the GSOM algorithm to produce maps which preserve neighborhoods in a nearly optimal fashion.

  8. Thermal metamorphism of the Arunachal Himalaya, India: Raman thermometry and thermochronological constraints on the tectono-thermal evolution

    NASA Astrophysics Data System (ADS)

    Mathew, George; De Sarkar, Sharmistha; Pande, Kanchan; Dutta, Suryendu; Ali, Shakir; Rai, Apritam; Netrawali, Shilpa

    2013-09-01

    Determination of the peak thermal condition is vital in order to understand the tectono-thermal evolution of the Himalayan belt. The Lesser Himalayan Sequence (LHS) in western Arunachal Pradesh, being rich in carbonaceous material (CM), facilitates the determination of peak metamorphic temperature based on Raman spectroscopy of carbonaceous material (RSCM). In this study, we have used the RSCM methods of Beyssac et al. (J Metamorph Geol 20:859-871, 2002a) and Rahl et al. (Earth Planet Sci Lett 240:339-354, 2005) to estimate the thermal history of the LHS and the Siwalik foreland of western Arunachal Pradesh. The study indicates that the temperature of 700-800 °C in the Greater Himalayan Sequence (GHS) decreases to 650-700 °C in the main central thrust zone (MCTZ) and decreases further to <200 °C in the Mio-Pliocene sequence of the Siwaliks. The work demonstrates the greater reliability of Rahl et al.'s RSCM method for temperatures >600 °C and <340 °C. We show that the higher and lower zones of the Bomdila Gneiss (BG) experienced temperatures of ~600 °C and were exhumed at different stages along the Bomdila Thrust (BT) and the Upper Main Boundary Thrust (U.MBT). Pyrolysis analysis of the CM, together with fission track ages from the upper Siwaliks, corroborates the RSCM thermometry estimate of ~240 °C. The results indicate that the Permian sequence north of the Lower MBT was deposited at greater depths (>12 km) than the upper Siwalik sediments to its south, which lay at depths <8 km before they were exhumed. The 40Ar/39Ar ages suggest that the upper zones of Se La evolved at ~13-15 Ma. The middle zone was exhumed at ~11 Ma and the lower zone close to ~8 Ma, indicating erosional unroofing of the MCT sheet. The footwall of the MCTZ cooled between 6 and 8 Ma. Analyses of the P-T path imply that the LHS between the MCT and U.MBT zones falls within the kyanite stability field under near-isobaric conditions. At higher structural levels, temperatures increase gradually, with P-T conditions in the sillimanite stability field. The near-isothermal (700-800 °C) condition in the GHS and the isobaric condition in the MCTZ, together with T-t path evidence that the GHS experienced a relatively long duration of near-peak temperatures and rapid cooling towards the MCTZ, closely match the evolution of the GHS and its inverted metamorphic gradient to channel-flow predictions.

  9. Application of Deep Learning Architectures for Accurate and Rapid Detection of Internal Mechanical Damage of Blueberry Using Hyperspectral Transmittance Data.

    PubMed

    Wang, Zhaodi; Hu, Menghan; Zhai, Guangtao

    2018-04-07

    Deep learning has become a widely used and powerful tool in many research fields, although not yet as much in agricultural technologies. In this work, two deep convolutional neural networks (CNN), viz. Residual Network (ResNet) and its improved version named ResNeXt, are used to detect internal mechanical damage of blueberries using hyperspectral transmittance data. The original structure and size of the hypercubes are adapted for deep CNN training. To ensure that the models are applicable to hypercubes, we adjust the number of filters in the convolutional layers. Moreover, a total of 5 traditional machine learning algorithms, viz. Sequential Minimal Optimization (SMO), Linear Regression (LR), Random Forest (RF), Bagging and Multilayer Perceptron (MLP), are run as comparison experiments. For model assessment, k-fold cross validation is used to show that model performance does not vary across different partitions of the dataset. In real-world application, selling damaged berries leads to a greater loss than discarding sound ones. Thus, precision, recall, and F1-score are used as evaluation indicators alongside accuracy to quantify the false positive rate. The first three indicators are seldom used by investigators in the agricultural engineering domain. Furthermore, ROC curves and precision-recall curves are plotted to visualize the performance of the classifiers. The fine-tuned ResNet/ResNeXt achieve average accuracy and F1-score of 0.8844/0.8784 and 0.8952/0.8905, respectively. The classifiers SMO/LR/RF/Bagging/MLP obtain average accuracy and F1-score of 0.8082/0.7606/0.7314/0.7113/0.7827 and 0.8268/0.7796/0.7529/0.7339/0.7971, respectively. The two deep learning models achieve better classification performance than the traditional machine learning methods. Classification of each testing sample takes only 5.2 ms and 6.5 ms for ResNet and ResNeXt, respectively, indicating that the deep learning framework has great potential for online fruit sorting. The results of this study demonstrate the potential of deep CNN application for analyzing internal mechanical damage of fruit.
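
    The evaluation indicators named above are standard and easy to reproduce; a minimal, hypothetical sketch (unrelated to the study's data) follows:

        from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

        # Hypothetical binary labels: 1 = damaged berry, 0 = sound berry
        y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
        y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]

        print("accuracy :", accuracy_score(y_true, y_pred))
        print("precision:", precision_score(y_true, y_pred))   # penalizes false positives
        print("recall   :", recall_score(y_true, y_pred))      # penalizes false negatives
        print("F1-score :", f1_score(y_true, y_pred))          # harmonic mean of the two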

  10. Application of Deep Learning Architectures for Accurate and Rapid Detection of Internal Mechanical Damage of Blueberry Using Hyperspectral Transmittance Data

    PubMed Central

    Hu, Menghan; Zhai, Guangtao

    2018-01-01

    Deep learning has become a widely used and powerful tool in many research fields, although not yet as much in agricultural technologies. In this work, two deep convolutional neural networks (CNN), viz. Residual Network (ResNet) and its improved version named ResNeXt, are used to detect internal mechanical damage of blueberries using hyperspectral transmittance data. The original structure and size of the hypercubes are adapted for deep CNN training. To ensure that the models are applicable to hypercubes, we adjust the number of filters in the convolutional layers. Moreover, a total of 5 traditional machine learning algorithms, viz. Sequential Minimal Optimization (SMO), Linear Regression (LR), Random Forest (RF), Bagging and Multilayer Perceptron (MLP), are run as comparison experiments. For model assessment, k-fold cross validation is used to show that model performance does not vary across different partitions of the dataset. In real-world application, selling damaged berries leads to a greater loss than discarding sound ones. Thus, precision, recall, and F1-score are used as evaluation indicators alongside accuracy to quantify the false positive rate. The first three indicators are seldom used by investigators in the agricultural engineering domain. Furthermore, ROC curves and precision-recall curves are plotted to visualize the performance of the classifiers. The fine-tuned ResNet/ResNeXt achieve average accuracy and F1-score of 0.8844/0.8784 and 0.8952/0.8905, respectively. The classifiers SMO/LR/RF/Bagging/MLP obtain average accuracy and F1-score of 0.8082/0.7606/0.7314/0.7113/0.7827 and 0.8268/0.7796/0.7529/0.7339/0.7971, respectively. The two deep learning models achieve better classification performance than the traditional machine learning methods. Classification of each testing sample takes only 5.2 ms and 6.5 ms for ResNet and ResNeXt, respectively, indicating that the deep learning framework has great potential for online fruit sorting. The results of this study demonstrate the potential of deep CNN application for analyzing internal mechanical damage of fruit. PMID:29642454

  11. A methodology for system-of-systems design in support of the engineering team

    NASA Astrophysics Data System (ADS)

    Ridolfi, G.; Mooij, E.; Cardile, D.; Corpino, S.; Ferrari, G.

    2012-04-01

    Space missions have experienced a trend of increasing complexity in the last decades, resulting in the design of very complex systems formed by many elements and sub-elements working together to meet the requirements. In a classical approach, especially in a company environment, the two steps of design-space exploration and optimization are usually performed by experts reasoning about the major phenomena, making assumptions and doing some trial-and-error runs on the available mathematical models. This is done especially in the very early design phases, where most of the costs are locked in. With the objective of supporting the engineering team and the decision-makers during the design of complex systems, the authors developed a modelling framework for a particular category of complex, coupled space systems called System-of-Systems. Once modelled, the System-of-Systems is solved using a computationally cheap parametric methodology, named the mixed-hypercube approach, based on the use of a particular type of fractional factorial design of experiments and analysis of the results via global sensitivity analysis and response surfaces. As an applicative example, a system-of-systems of a hypothetical human space exploration scenario for the support of a manned lunar base is presented. The results demonstrate that, using the mixed-hypercube to sample the design space, an optimal solution is reached with a limited computational effort, providing support to the engineering team and decision-makers thanks to sensitivity and robustness information. The analysis of the system-of-systems model that was implemented shows that the logistic support of a human outpost on the Moon for 15 years is still feasible with currently available launcher classes. The results presented in this paper have been obtained in cooperation with Thales Alenia Space—Italy, in the framework of a regional programme called STEPS. STEPS—Sistemi e Tecnologie per l'EsPlorazione Spaziale is a research project co-financed by Piedmont Region and firms and universities of the Piedmont Aerospace District in the ambit of the P.O.R-F.E.S.R. 2007-2013 program.

  12. When to Make Mountains out of Molehills: The Pros and Cons of Simple and Complex Model Calibration Procedures

    NASA Astrophysics Data System (ADS)

    Smith, K. A.; Barker, L. J.; Harrigan, S.; Prudhomme, C.; Hannaford, J.; Tanguy, M.; Parry, S.

    2017-12-01

    Earth and environmental models are relied upon to investigate system responses that cannot otherwise be examined. In simulating physical processes, models have adjustable parameters which may, or may not, have a physical meaning. Determining the values to assign to these model parameters is an enduring challenge for earth and environmental modellers. Selecting different error metrics by which the model's results are compared to observations will lead to different sets of calibrated model parameters, and thus different model results. Furthermore, models may exhibit `equifinal' behaviour, where multiple combinations of model parameters lead to equally acceptable model performance against observations. These decisions in model calibration introduce uncertainty that must be considered when model results are used to inform environmental decision-making. This presentation focusses on the uncertainties that derive from the calibration of a four-parameter lumped catchment hydrological model (GR4J). The GR models contain an inbuilt automatic calibration algorithm that can satisfactorily calibrate against four error metrics in only a few seconds. However, a single, deterministic model result does not provide information on parameter uncertainty. Furthermore, a modeller interested in extreme events, such as droughts, may wish to calibrate against low-flow-specific error metrics. In a comprehensive assessment, the GR4J model has been run with 500,000 Latin-hypercube-sampled parameter sets across 303 catchments in the United Kingdom. These parameter sets have been assessed against six error metrics, including two drought-specific metrics. This presentation compares the two approaches, and demonstrates that the inbuilt automatic calibration can outperform the Latin hypercube experiment in single-metric assessed performance. However, it is also shown that the more comprehensive assessment has many merits, allowing for probabilistic model results, multi-objective optimisation, and better tailoring of the calibration to specific applications such as drought event characterisation. Modellers and decision-makers may be constrained in their choice of calibration method, so it is important that they recognise the strengths and limitations of their chosen approach.
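
    To make the metric-evaluation step concrete, here is a small hypothetical sketch (GR4J itself is not implemented; a two-parameter toy model stands in): parameter sets drawn by Latin hypercube sampling are scored against observations with the Nash-Sutcliffe efficiency, one common error metric in such studies:

        import numpy as np
        from scipy.stats import qmc

        def nse(sim, obs):
            """Nash-Sutcliffe efficiency: 1 is perfect, < 0 is worse than the mean."""
            return 1.0 - np.sum((sim - obs)**2) / np.sum((obs - obs.mean())**2)

        # Toy 'model': two parameters scaling and offsetting a rainfall signal
        rng = np.random.default_rng(0)
        rain = rng.gamma(2.0, 2.0, size=365)
        obs = 0.7 * rain + 3.0 + rng.normal(0, 0.5, size=365)   # synthetic observations

        # LHS over the 2-parameter space
        params = qmc.scale(qmc.LatinHypercube(d=2, seed=1).random(1000),
                           l_bounds=[0.0, 0.0], u_bounds=[2.0, 10.0])

        scores = np.array([nse(a * rain + b, obs) for a, b in params])
        behavioural = params[scores > 0.8]           # retain 'acceptable' sets
        print(f"{len(behavioural)} of {len(params)} parameter sets exceed NSE 0.8")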

  13. Advanced laser stratospheric monitoring systems analyses

    NASA Technical Reports Server (NTRS)

    Larsen, J. C.

    1984-01-01

    This report describes the software support supplied by Systems and Applied Sciences Corporation for the study of Advanced Laser Stratospheric Monitoring Systems Analyses under contract No. NAS1-15806. It discusses improvements to the Langley spectroscopic data base, development of LHS instrument control software, and data analysis and validation software. The effect of diurnal variations on the retrieved concentrations of NO, NO2 and ClO from a space- and balloon-borne measurement platform is discussed, along with the selection of optimum IF channels for sensing stratospheric species from space.

  14. Horizontal and Vertical Structure of Velocity, Potential Vorticity and Energy in the Gulf Stream.

    DTIC Science & Technology

    1985-02-01

    ... before. Finally, the equation for heat conservation, using standard notation, is ∂T/∂t + u ∂T/∂x + v ∂T/∂y + w ∂T/∂z = RHS (2-15), where the RHS may include source and sink terms ... Under an assumption of negligible mixing (i.e., the RHS is small), vertical ... Notation: P - available potential energy; EKE - eddy kinetic energy; MKE - mean kinetic energy; RHS - right-hand side; LHS - left-hand side.

  15. Parallel Processing and Scientific Applications

    DTIC Science & Technology

    1992-11-30

    Fragments of references recovered from the report: Lattice QCD Calculations on the Connection Machine, SIAM News 24, 1 (May 1991); C. F. Baillie and D. A. Johnston, Crumpling Dynamically Triangulated ... In the first case the surface is a hypercubic lattice; in the second, the surface is randomly triangulated once at the beginning of the simulation; and in the third the random ... Sharpe, QCD with Dynamical Wilson Fermions II, Phys. Rev. D44, 3272 (1991); R. Gupta and C. F. Baillie, Critical Behavior of the 2D XY Model, Phys ...

  16. Mapping Flows onto Networks to Optimize Organizational Processes

    DTIC Science & Technology

    2005-01-01

    ... and G. Porter, "Assessments of Simulated Performance of Alternative Architectures for Command and Control: The Role of Coordination", Proceedings of the 1999 Command & Control Research & Technology Symposium, NWC, Newport, RI, June 1999, pp. 123-143. [Iverson95] M. Iverson, F. Ozguner, G. Follen ... Technology Symposium, NPS, Monterrey, CA, June 2002. [Wu88] Min-You Wu, D. Gajski, "A Programming Aid for Hypercube Architectures," The Journal of Supercomputing, 2 (1988), pp. 349-372.

  17. Computer-Access-Code Matrices

    NASA Technical Reports Server (NTRS)

    Collins, Earl R., Jr.

    1990-01-01

    Authorized users respond to changing challenges with changing passwords. Scheme for controlling access to computers defeats eavesdroppers and "hackers". Based on password system of challenge and password or sign, challenge, and countersign correlated with random alphanumeric codes in matrices of two or more dimensions. Codes stored on floppy disk or plug-in card and changed frequently. For even higher security, matrices of four or more dimensions used, just as cubes compounded into hypercubes in concurrent processing.
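
    A toy sketch of the matrix-lookup idea (dimensions, code length and alphabet are entirely hypothetical, not the scheme's actual parameters):

        import secrets
        import string

        # Toy 2-dimensional challenge/countersign matrix of random alphanumeric codes
        ALPHABET = string.ascii_uppercase + string.digits
        SIZE = 8
        matrix = [[''.join(secrets.choice(ALPHABET) for _ in range(4))
                   for _ in range(SIZE)] for _ in range(SIZE)]

        def challenge():
            """Server picks a random cell; the user must answer with its contents."""
            row, col = secrets.randbelow(SIZE), secrets.randbelow(SIZE)
            return row, col

        row, col = challenge()
        print(f"challenge: ({row}, {col}) -> expected countersign: {matrix[row][col]}")
        # Matrices would be stored on removable media and changed frequently; higher
        # security extends the lookup to 4-or-more-dimensional matrices (hypercubes).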

  18. VARS-TOOL: A Comprehensive, Efficient, and Robust Sensitivity Analysis Toolbox

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Sheikholeslami, R.; Haghnegahdar, A.; Esfahbod, B.

    2016-12-01

    VARS-TOOL is an advanced sensitivity and uncertainty analysis toolbox, applicable to the full range of computer simulation models, including Earth and Environmental Systems Models (EESMs). The toolbox was developed originally around VARS (Variogram Analysis of Response Surfaces), a general framework for Global Sensitivity Analysis (GSA) that utilizes the variogram/covariogram concept to characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. VARS-TOOL is unique in that, with a single sample set (set of simulation model runs), it simultaneously generates three philosophically different families of global sensitivity metrics: (1) variogram-based metrics called IVARS (Integrated Variogram Across a Range of Scales, the VARS approach), (2) variance-based total-order effects (the Sobol approach), and (3) derivative-based elementary effects (the Morris approach). VARS-TOOL also includes two novel features: the first is a sequential sampling algorithm, called Progressive Latin Hypercube Sampling (PLHS), which allows the sample size for GSA to be increased progressively while maintaining the required sample distributional properties; the second is a "grouping strategy" that adaptively groups the model parameters based on their sensitivity or functioning to maximize the reliability of GSA results. These features, in conjunction with bootstrapping, enable the user to monitor the stability, robustness, and convergence of GSA as the sample size increases for any given case study. VARS-TOOL has been shown to achieve robust and stable results with sample sizes (numbers of model runs) 1-2 orders of magnitude smaller than those of alternative tools. VARS-TOOL, available in MATLAB and Python, is under continuous development, and new capabilities and features are forthcoming.

  19. ITGB5 and AGFG1 variants are associated with severity of airway responsiveness.

    PubMed

    Himes, Blanca E; Qiu, Weiliang; Klanderman, Barbara; Ziniti, John; Senter-Sylvia, Jody; Szefler, Stanley J; Lemanske, Robert F; Zeiger, Robert S; Strunk, Robert C; Martinez, Fernando D; Boushey, Homer; Chinchilli, Vernon M; Israel, Elliot; Mauger, David; Koppelman, Gerard H; Nieuwenhuis, Maartje A E; Postma, Dirkje S; Vonk, Judith M; Rafaels, Nicholas; Hansel, Nadia N; Barnes, Kathleen; Raby, Benjamin; Tantisira, Kelan G; Weiss, Scott T

    2013-08-28

    Airway hyperresponsiveness (AHR), a primary characteristic of asthma, involves increased airway smooth muscle contractility in response to certain exposures. We sought to determine whether common genetic variants were associated with AHR severity. A genome-wide association study (GWAS) of AHR, quantified as the natural log of the dosage of methacholine causing a 20% drop in FEV1, was performed with 994 non-Hispanic white asthmatic subjects from three drug clinical trials: CAMP, CARE, and ACRN. Genotyping was performed on Affymetrix 6.0 arrays, and imputed data based on HapMap Phase 2 were used to measure the association of SNPs with AHR using a linear regression model. Replication of primary findings was attempted in 650 white subjects from DAG and 3,354 white subjects from LHS. Evidence that the top SNPs were eQTLs of their respective genes was sought using expression data available for 419 white CAMP subjects. The top primary GWAS associations were rs848788 (P-value 7.2E-07) and rs6731443 (P-value 2.5E-06), located within the ITGB5 and AGFG1 genes, respectively. The AGFG1 result replicated at a nominally significant level in one independent population (LHS P-value 0.012), and the SNP had a nominally significant unadjusted P-value (0.0067) for being an eQTL of AGFG1. Based on current knowledge of ITGB5 and AGFG1, our results suggest that variants within these genes may be involved in modulating AHR. Future functional studies are required to confirm that our associations represent true biologically significant findings.

  20. Cohesive and coherent connected speech deficits in mild stroke.

    PubMed

    Barker, Megan S; Young, Breanne; Robinson, Gail A

    2017-05-01

    Spoken language production theories and lesion studies highlight several important prelinguistic conceptual preparation processes involved in the production of cohesive and coherent connected speech. Cohesion and coherence broadly connect sentences with preceding ideas and the overall topic. Broader cognitive mechanisms may mediate these processes. This study aims to investigate (1) whether stroke patients without aphasia exhibit impairments in cohesion and coherence in connected speech, and (2) the role of attention and executive functions in the production of connected speech. Eighteen stroke patients (8 right hemisphere stroke [RHS]; 6 left [LHS]) and 21 healthy controls completed two self-generated narrative tasks to elicit connected speech. A multi-level analysis of within- and between-sentence processing ability was conducted. Cohesion and coherence impairments were found in the stroke group, particularly RHS patients, relative to controls. In the whole stroke group, better performance on the Hayling Test of executive function, which taps verbal initiation/suppression, was related to fewer propositional repetitions and global coherence errors. Better performance on attention tasks was related to fewer propositional repetitions and fewer global coherence errors. In the RHS group, aspects of cohesive and coherent speech were associated with better performance on attention tasks. Better Hayling Test scores were related to more cohesive and coherent speech in RHS patients, and more coherent speech in LHS patients. Thus, we documented connected speech deficits in a heterogeneous stroke group without prominent aphasia. Our results suggest that broader cognitive processes may play a role in producing connected speech at the early conceptual preparation stage.

  1. Human Microbiome and Learning Healthcare Systems: Integrating Research and Precision Medicine for Inflammatory Bowel Disease

    PubMed Central

    Chuong, Kim H.; Mack, David R.; Stintzi, Alain

    2018-01-01

    Healthcare institutions face widespread challenges of delivering high-quality and cost-effective care, while keeping up with rapid advances in biomedical knowledge and technologies. Moreover, there is increased emphasis on developing personalized or precision medicine targeted to individuals or groups of patients who share a certain biomarker signature. Learning healthcare systems (LHS) have been proposed for integration of research and clinical practice to fill major knowledge gaps, improve care, reduce healthcare costs, and provide precision care. To date, much discussion in this context has focused on the potential of human genomic data, and not yet on human microbiome data. Rapid advances in human microbiome research suggest that profiling of, and interventions on, the human microbiome can provide substantial opportunity for improved diagnosis, therapeutics, risk management, and risk stratification. In this study, we discuss a potential role for microbiome science in LHSs. We first review the key elements of LHSs, and discuss possibilities of Big Data and patient engagement. We then consider potentials and challenges of integrating human microbiome research into clinical practice as part of an LHS. With rapid growth in human microbiome research, patient-specific microbial data will begin to contribute in important ways to precision medicine. Hence, we discuss how patient-specific microbial data can help guide therapeutic decisions and identify novel effective approaches for precision care of inflammatory bowel disease. To the best of our knowledge, this expert analysis makes an original contribution with new insights poised at the emerging intersection of LHSs, microbiome science, and postgenomics medicine. PMID:28282257

  2. Observing the Spectra of MEarth and TRAPPIST Planets with JWST

    NASA Astrophysics Data System (ADS)

    Morley, Caroline; Kreidberg, Laura; Rustamkulov, Zafar; Robinson, Tyler D.; Fortney, Jonathan J.

    2017-10-01

    During the past two years, nine planets close to Earth in radius have been discovered around nearby M dwarfs cooler than 3300 K. These planets include the 7 planets in the TRAPPIST-1 system and two planets discovered by the MEarth survey, GJ 1132b and LHS 1140b (Dittmann et al. 2017; Berta-Thompson et al. 2015; Gillon et al. 2017). These planets are the smallest discovered to date that will be amenable to atmospheric characterization with JWST. They span equilibrium temperatures from ~130 K to >500 K, and radii from 0.7 to 1.43 Earth radii. Some of these planets orbit at distances potentially amenable to surface liquid water, though the actual surface temperatures will depend strongly on the albedo of the planet and the thickness and composition of its atmosphere. The stars they orbit also vary in activity levels, from the quiet LHS 1140b host star to the more active TRAPPIST-1 host star. This set of planets will form the testbed for our first chance to study the diversity of atmospheres around Earth-sized planets. Here, we will present model spectra of these 9 planets, varying the composition and the surface pressure of the atmosphere. We base our elemental compositions on three outcomes of planetary atmosphere evolution in our own solar system: Earth, Titan, and Venus. We calculate the molecular compositions in chemical equilibrium. We present both thermal emission spectra and transmission spectra for each of these objects, and make predictions for the observability of these spectra with different instrument modes of JWST.

  3. A smart Monte Carlo procedure for production costing and uncertainty analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, C.; Stremel, J.

    1996-11-01

    Electric utilities using chronological production costing models to decide whether to buy or sell power over the next week or next few weeks need to determine potential profits or losses under a number of uncertainties. A large amount of money can be at stake--often $100,000 a day or more--and one party to the sale must always take on the risk. In the case of fixed-price ($/MWh) contracts, the seller accepts the risk. In the case of cost-plus contracts, the buyer must accept the risk. So, modeling uncertainty and understanding the risk accurately can improve the competitive edge of the user. This paper investigates an efficient procedure for representing risks and costs from capacity outages. Typically, production costing models use an algorithm based on some form of random number generator to select resources as available or on outage. These algorithms allow experiments to be repeated and gains and losses to be observed in a short time. The authors perform several experiments to examine the capability of three unit-outage selection methods and measure their results. Specifically, a brute-force Monte Carlo procedure, a Monte Carlo procedure with Latin hypercube sampling, and a smart Monte Carlo procedure with cost stratification and directed sampling are examined.

  4. Estimated Under-Five Deaths Associated with Poor-Quality Antimalarials in Sub-Saharan Africa

    PubMed Central

    Renschler, John P.; Walters, Kelsey M.; Newton, Paul N.; Laxminarayan, Ramanan

    2015-01-01

    Many antimalarials sold in sub-Saharan Africa are of poor quality (falsified, substandard, or degraded), and the burden of disease caused by this problem is inadequately quantified. In this article, we estimate the number of under-five deaths caused by ineffective treatment of malaria associated with consumption of poor-quality antimalarials in 39 sub-Saharan countries. Using Latin hypercube sampling, our estimates were calculated as the product of the number of private sector antimalarials consumed by malaria-positive children in 2013; the proportion of private sector antimalarials consumed that were of poor quality; and the case fatality rate (CFR) of under-five malaria-positive children who did not receive appropriate treatment. An estimated 122,350 (interquartile range [IQR]: 91,577-154,736) under-five malaria deaths were associated with consumption of poor-quality antimalarials, representing 3.75% (IQR: 2.81-4.75%) of all under-five deaths in our sample of 39 countries. There is considerable uncertainty surrounding our results because of gaps in data on case fatality rates and the prevalence of poor-quality antimalarials. Our analysis highlights the need for further investigation into the distribution of poor-quality antimalarials and the need for stronger surveillance and regulatory efforts to prevent their sale. PMID:25897068
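
    The estimate is a product of three uncertain quantities; here is a hypothetical sketch of how Latin hypercube sampling propagates such uncertainty (the distributions and magnitudes below are illustrative, not those used in the paper):

        import numpy as np
        from scipy.stats import qmc, norm, beta

        # LHS in the unit cube, mapped through inverse CDFs of three illustrative inputs
        u = qmc.LatinHypercube(d=3, seed=0).random(n=10_000)
        n_treated = norm.ppf(u[:, 0], loc=50e6, scale=5e6)      # antimalarials consumed
        p_poor    = beta.ppf(u[:, 1], a=2, b=8)                 # share of poor quality
        cfr       = beta.ppf(u[:, 2], a=2, b=300)               # case fatality rate

        deaths = n_treated * p_poor * cfr
        q25, q50, q75 = np.percentile(deaths, [25, 50, 75])
        print(f"median {q50:,.0f} deaths (IQR {q25:,.0f}-{q75:,.0f})")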

  5. Uncertainty quantification analysis of the dynamics of an electrostatically actuated microelectromechanical switch model

    NASA Astrophysics Data System (ADS)

    Snow, Michael G.; Bajaj, Anil K.

    2015-08-01

    This work presents an uncertainty quantification (UQ) analysis of a comprehensive model for an electrostatically actuated microelectromechanical system (MEMS) switch. The goal is to elucidate the effects of parameter variations on certain key performance characteristics of the switch. A sufficiently detailed model of the electrostatically actuated switch in the basic configuration of a clamped-clamped beam is developed. This multi-physics model accounts for various physical effects, including the electrostatic fringing field, finite length of electrodes, squeeze film damping, and contact between the beam and the dielectric layer. The performance characteristics of immediate interest are the static and dynamic pull-in voltages for the switch. Numerical approaches for evaluating these characteristics are developed and described. Using Latin Hypercube Sampling and other sampling methods, the model is evaluated to find these performance characteristics when variability in the model's geometric and physical parameters is specified. Response surfaces of these results are constructed via a Multivariate Adaptive Regression Splines (MARS) technique. Using a Direct Simulation Monte Carlo (DSMC) technique on these response surfaces gives smooth probability density functions (PDFs) of the outputs characteristics when input probability characteristics are specified. The relative variation in the two pull-in voltages due to each of the input parameters is used to determine the critical parameters.
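
    A schematic of the surrogate-plus-Monte-Carlo pipeline described above, with a random-forest regressor standing in for the MARS response surface and a toy pull-in-voltage function standing in for the device model (all names, formulas and distributions are hypothetical):

        import numpy as np
        from scipy.stats import qmc
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)

        # Toy stand-in for pull-in voltage as a function of gap and beam thickness
        def pull_in_voltage(gap_um, thick_um):
            return 12.0 * np.sqrt(gap_um**3) * thick_um**0.75

        # Step 1: LHS design over the two geometric parameters
        X = qmc.scale(qmc.LatinHypercube(d=2, seed=0).random(200),
                      l_bounds=[1.0, 1.0], u_bounds=[3.0, 4.0])
        y = pull_in_voltage(X[:, 0], X[:, 1])

        # Step 2: response-surface surrogate fitted to the LHS evaluations
        surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

        # Step 3: direct Monte Carlo on the cheap surrogate -> output distribution
        samples = np.column_stack([rng.normal(2.0, 0.15, 100_000),
                                   rng.normal(2.5, 0.20, 100_000)])
        v = surrogate.predict(samples)
        print(f"pull-in voltage: mean {v.mean():.2f} V, std {v.std():.2f} V")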

  6. A unified model of the standard genetic code.

    PubMed

    José, Marco V; Zamudio, Gabriel S; Morgado, Eberto R

    2017-03-01

    The Rodin-Ohno (RO) and the Delarue models divide the table of the genetic code into two classes of aminoacyl-tRNA synthetases (aaRSs I and II) with recognition from the minor or major groove sides of the tRNA acceptor stem, respectively. These models are asymmetric but they are biologically meaningful. On the other hand, the standard genetic code (SGC) can be derived from the primeval RNY code (R stands for purines, Y for pyrimidines and N any of them). In this work, the RO-model is derived by means of group actions, namely, symmetries represented by automorphisms, assuming that the SGC originated from a primeval RNY code. It turns out that the RO-model is symmetric in a six-dimensional (6D) hypercube. Conversely, using the same automorphisms, we show that the RO-model can lead to the SGC. In addition, the asymmetric Delarue model becomes symmetric by means of quotient group operations. We formulate isometric functions that convert the class aaRS I into the class aaRS II and vice versa. We show that the four polar requirement categories display a symmetrical arrangement in our 6D hypercube. Altogether these results cannot be attained, neither in two nor in three dimensions. We discuss the present unified 6D algebraic model, which is compatible with both the SGC (based upon the primeval RNY code) and the RO-model.
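
    To illustrate the 6D-hypercube picture (with a base-to-bits encoding chosen here purely for illustration; the paper's group-theoretic assignments are not reproduced), each codon can be mapped to a vertex of GF(2)^6, where single-bit flips connect neighboring vertices:

        from itertools import product

        # Illustrative 2-bit encoding of the four RNA bases (one of several choices)
        BITS = {"C": (0, 0), "U": (0, 1), "G": (1, 0), "A": (1, 1)}

        def codon_to_vertex(codon):
            """Map a 3-letter codon to a vertex of the 6-dimensional hypercube."""
            return tuple(b for base in codon for b in BITS[base])

        def hamming(u, v):
            return sum(x != y for x, y in zip(u, v))

        codons = ["".join(c) for c in product("CUGA", repeat=3)]
        assert len(codons) == 64 == 2**6      # 64 codons = 64 hypercube vertices

        # Neighbors in the hypercube differ in exactly one bit
        v = codon_to_vertex("AUG")
        neighbors = [c for c in codons if hamming(codon_to_vertex(c), v) == 1]
        print("AUG has", len(neighbors), "hypercube neighbors:", neighbors)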

  7. Optimizing Radiometric Processing and Feature Extraction of Drone Based Hyperspectral Frame Format Imagery for Estimation of Yield Quantity and Quality of a Grass Sward

    NASA Astrophysics Data System (ADS)

    Näsi, R.; Viljanen, N.; Oliveira, R.; Kaivosoja, J.; Niemeläinen, O.; Hakala, T.; Markelin, L.; Nezami, S.; Suomalainen, J.; Honkavaara, E.

    2018-04-01

    Light-weight 2D-format hyperspectral imagers operable from unmanned aerial vehicles (UAV) have become common in various remote sensing tasks in recent years. Using these technologies, the area of interest is covered by multiple overlapping hypercubes, in other words multiview hyperspectral photogrammetric imagery, and each object point appears in many, even tens of, individual hypercubes. The common practice is to calculate hyperspectral orthomosaics utilizing only the most nadir areas of the images. However, the redundancy of the data offers potential for much more versatile and thorough feature extraction. We investigated various options for extracting spectral features in a grass sward quantity evaluation task. In addition to the various sets of spectral features, we used photogrammetry-based ultra-high-density point clouds to extract features describing the canopy 3D structure. A machine learning technique based on the Random Forest algorithm was used to estimate the fresh biomass. Results showed high accuracies for all investigated feature sets. Estimates using multiview data were approximately 10 % better than those based on the most nadir orthophotos. Utilizing the photogrammetric 3D features improved estimation accuracy by approximately 40 % compared to approaches where only spectral features were applied. The best estimation RMSE of 239 kg/ha (6.0 %) was obtained with the multiview anisotropy-corrected data set and the 3D features.

  8. A multi-satellite orbit determination problem in a parallel processing environment

    NASA Technical Reports Server (NTRS)

    Deakyne, M. S.; Anderle, R. J.

    1988-01-01

    The Engineering Orbit Analysis Unit at GE Valley Forge used an Intel hypercube parallel processor to investigate performance and gain experience with parallel processors on a multi-satellite orbit determination problem. A general approach was selected in which major blocks of computation for the multi-satellite orbit computations were used as units to be assigned to the various processors on the hypercube. Problems encountered or successes achieved in addressing the orbit determination problem would then be more likely to transfer to other parallel processors. The prime objective was to study the algorithm to allow processing of observations later in time than those employed in the state update. Expertise in ephemeris determination was exploited in addressing these problems, and the facility was used to bring a realism to the study that would highlight problems that might not otherwise be anticipated. Secondary objectives were to gain experience with a non-trivial problem in a parallel processor environment; to explore the necessary interplay of serial and parallel sections of the algorithm through timing studies; and to explore granularity (coarse versus fine grain): above some limit there is a risk of starvation, with the majority of nodes idle, while below it the overhead associated with splitting the problem may require more work and communication time than is useful.

  9. Probabilistic safety assessment of the design of a tall buildings under the extreme load

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Králik, Juraj, E-mail: juraj.kralik@stuba.sk

    2016-06-08

    The paper describes some experiences from the deterministic and probabilistic analysis of the safety of a tall building structure. The methods and requirements of Eurocode EN 1990, standard ISO 2394, and the JCSS are presented. The uncertainties of the model and of the resistance of the structures are considered using simulation methods. The Monte Carlo, LHS, and RSM probabilistic methods are compared with the deterministic results. The effectiveness of probability-based design of structures using finite element methods is demonstrated on an example: the probabilistic analysis of the safety of tall buildings.

  10. Excess noise in Pb(1-x)Sn(x)Se semiconductor lasers

    NASA Technical Reports Server (NTRS)

    Harward, C. N.; Sidney, B. D.

    1980-01-01

    The noise characteristics of the tunable diode laser (TDL) were studied for frequencies less than 20 kHz. For heterodyne applications, the high-frequency (~1 MHz) characteristics are also important. Therefore, the high-frequency noise characteristics of the TDL were studied as part of a full TDL characterization program implemented to improve the TDL as a local oscillator in the LHS system. It was observed that all the devices showed similar high-frequency noise characteristics even though they were all constructed using different techniques. These common high-frequency noise characteristics are reported.

  11. Probabilistic safety assessment of the design of a tall buildings under the extreme load

    NASA Astrophysics Data System (ADS)

    Králik, Juraj

    2016-06-01

    The paper describes some experiences from the deterministic and probabilistic analysis of the safety of a tall building structure. The methods and requirements of Eurocode EN 1990, standard ISO 2394, and the JCSS are presented. The uncertainties of the model and of the resistance of the structures are considered using simulation methods. The Monte Carlo, LHS, and RSM probabilistic methods are compared with the deterministic results. The effectiveness of probability-based design of structures using finite element methods is demonstrated on an example: the probabilistic analysis of the safety of tall buildings.

  12. Dr Michaels® product family (also branded as Soratinex®) versus Methylprednisolone aceponate - a comparative study of the effectiveness for the treatment of plaque psoriasis.

    PubMed

    Hercogová, J; Fioranelli, M; Gianfaldoni, S; Chokoeva, A A; Tchernev, G; Wollina, U; Tirant, M; Novotny, F; Roccia, M G; Maximov, G K; França, K; Lotti, T

    2016-01-01

    Psoriasis is one of the most common chronic, recurrent dermatologic diseases, and a variety of therapeutic options are available today for its management. Although topical high-potency corticosteroids, alone or in combination with salicylic acid or vitamin D analogues, are still considered the best treatment, they do not seem capable of long-term control of the disease or of preventing recurrences, as their side effects are major contraindications to continuous use. The aim of this study was to investigate whether the Dr. Michaels® product family is comparable to methylprednisolone aceponate (MPA) as a viable alternative treatment option for the treatment and management of stable chronic plaque psoriasis. Thirty adults (13 male, 17 female, mean age 40 years) with mild to severe stable chronic plaque psoriasis were included in the study. Patients were advised to treat the lesions on the two sides of their body (left and right) with two different blinded modalities for 8 weeks: the pack of Dr. Michaels® products on the left side (consisting of a cleansing gel, an ointment and a skin conditioner), and a comparator pack on the right side, consisting of a cleansing gel, methylprednisolone ointment and a placebo conditioner. Assessment was done using Psoriasis Area and Severity Index (PASI) scores before treatment and after 2, 4, 6 and 8 weeks. The results achieved with the Dr. Michaels® (Soratinex®) product family for the treatment of chronic plaque psoriasis were better than those achieved with MPA, even though quicker resolution was achieved with the steroid: 45% of patients achieved resolution within 8-10 days, in comparison to 5-6 weeks in the Dr. Michaels® (Soratinex®) group. Before therapy, the mean PASI score of the LHS in the Dr. Michaels® (Soratinex®) group was 13.8±4.1 SD, and 14.2±4.2 SD in the RHS MPA group. After 8 weeks of treatment, 62% of the Dr. Michaels® (Soratinex®) group had achieved resolution, whilst in the MPA group the figure remained at 45%. The mean PASI score after 8 weeks of treatment was 2.8±1.6 SD in the LHS Dr. Michaels® (Soratinex®) group and 6.8±2.4 SD in the RHS MPA group. In the RHS MPA group, 22% of patients failed to respond to the treatment, in comparison to 6% in the LHS Dr. Michaels® (Soratinex®) group. Based on the results of this study, Dr. Michaels® products are a more effective treatment option, with insignificant side effects, compared to local treatment with MPA.

  13. Implementation of an Object-Oriented Flight Simulator D.C. Electrical System on a Hypercube Architecture

    DTIC Science & Technology

    1991-12-01

    Recoverable fragments of the thesis text: ... what an abstract data type is, what an object-oriented design is, and how to apply "software engineering" principles to the design of both of them ... Program (ASVP), a research and development effort by two aerospace contractors to redesign and implement subsets of two existing flight simulators in ... This effort addresses how to implement a simulator designed using the SEI OOD Paradigm on a distributed, parallel, multiple instruction, multiple data (MIMD) architecture.

  14. A Portable Parallel Implementation of the U.S. Navy Layered Ocean Model

    DTIC Science & Technology

    1995-01-01

    Recoverable fragments: A. J. Wallcraft (Planning Systems Inc.) and P. R. Moore (IC Dept. of Mathematics), presented at the 1° Encontro de Métodos Numéricos para Equações de Derivadas Parciais. The fragments list platforms and chips, including Kendall Square and hypercube machines, and DEC Alpha, Cray T3D/E, SUN Sparc, Fujitsu AP1000, and Intel i860 (Paragon) chips.

  15. On k-ary n-cubes: Theory and applications

    NASA Technical Reports Server (NTRS)

    Mao, Weizhen; Nicol, David M.

    1994-01-01

    Many parallel processing networks can be viewed as graphs called k-ary n-cubes, whose special cases include rings, hypercubes and toruses. In this paper, combinatorial properties of k-ary n-cubes are explored. In particular, the problem of characterizing the subgraph of a given number of nodes with the maximum edge count is studied. These theoretical results are then used to compute a lower bounding function in branch-and-bound partitioning algorithms and to establish the optimality of some irregular partitions.
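
    As a toy illustration of the maximum-edge-count question above (a brute-force check, not the paper's combinatorial characterization), one can enumerate small subsets of hypercube vertices and count the induced edges:

        from itertools import combinations

        def max_edges_in_subgraph(n, k):
            """Brute-force: max number of induced edges over all k-node subsets of
            the n-dimensional hypercube (feasible only for small n)."""
            nodes = range(2**n)
            def is_edge(u, v):
                return bin(u ^ v).count("1") == 1   # adjacent iff they differ in one bit
            best = 0
            for subset in combinations(nodes, k):
                edges = sum(is_edge(u, v) for u, v in combinations(subset, 2))
                best = max(best, edges)
            return best

        # For the 3-cube, the densest 4-node subgraph is a 2-dimensional face (a 4-cycle)
        print(max_edges_in_subgraph(3, 4))   # -> 4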

  16. Synchronizability of random rectangular graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estrada, Ernesto, E-mail: ernesto.estrada@strath.ac.uk; Chen, Guanrong

    2015-08-15

    Random rectangular graphs (RRGs) represent a generalization of random geometric graphs in which the nodes are embedded into hyperrectangles instead of hypercubes. The synchronizability of the RRG model is studied. Both upper and lower bounds on the eigenratio of the network Laplacian matrix are determined analytically. It is proven that as the rectangular network becomes more elongated, the network becomes harder to synchronize. The synchronization behavior of an RRG network of chaotic Lorenz system nodes is numerically investigated, showing complete consistency with the theoretical results.
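
    A minimal sketch of the eigenratio computation used as the synchronizability measure (shown here for a random geometric graph on the unit square via networkx; the rectangular generalization is a straightforward modification of the embedding region):

        import networkx as nx
        import numpy as np

        # Random geometric graph: 200 nodes in the unit square, connected if closer than r
        G = nx.random_geometric_graph(200, radius=0.15, seed=1)

        # Eigenratio lambda_N / lambda_2 of the Laplacian; smaller = easier to synchronize
        lam = np.sort(np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray().astype(float)))
        if nx.is_connected(G):
            print(f"eigenratio lambda_N/lambda_2 = {lam[-1] / lam[1]:.2f}")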

  17. Proceedings of the Image Understanding Workshop (18th) Held in Cambridge, Massachusetts on 6-8 April 1988. Volume 1

    DTIC Science & Technology

    1988-04-01

    Recoverable fragments: ... hypercube methods, which work best in structured scenes, and parent/child operations run in a smaller fixed time independent ... spatiotemporal energy ... rectangle defines a new coordinate system for it ... image is constructed under orthographic projection. The child links ... that is relative to its own ... that the child rectangle can shift in the X-Y plane ... rectangle in the image. At first, a noiseless image is created relative to the nominal ...

  18. Overcoming sampling depth variations in the analysis of broadband hyperspectral images of breast tissue (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kho, Esther; de Boer, Lisanne L.; Van de Vijver, Koen K.; Sterenborg, Henricus J. C. M.; Ruers, Theo J. M.

    2017-02-01

    Worldwide, up to 40% of breast conserving surgeries require additional operations due to positive resection margins. We propose to reduce this percentage by using hyperspectral imaging for resection margin assessment during surgery. Spectral hypercubes were collected from 26 freshly excised breast specimens with a pushbroom camera (900-1700 nm). Computer simulations of the penetration depth in breast tissue suggest a strong variation in sampling depth (~0.5-10 mm) over this wavelength range. This was confirmed with a breast-tissue-mimicking phantom study. Smaller penetration depths are observed in wavelength regions with high water and/or fat absorption. Consequently, tissue classification based on spectral analysis over the whole wavelength range becomes complicated. This is especially a problem in highly inhomogeneous human tissue. We developed a method, called derivative imaging, which allows accurate tissue analysis without the impediment of dissimilar sampling volumes. A few assumptions were made based on previous research. First, the spectra acquired with our camera from breast tissue are mainly shaped by fat and water absorption. Second, tumor tissue contains less fat and more water than healthy tissue. Third, the scattering slopes of different tissue types are assumed to be alike. In derivative imaging, derivatives are calculated between wavelengths a few nanometers apart, ensuring similar penetration depths. The wavelength choice determines the accuracy of the method and the resolution. Preliminary results on 3 breast specimens indicate a classification accuracy of 93% when using wavelength regions characterized by water and fat absorption. The sampling depths at these regions are 1 mm and 5 mm.
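
    A hypothetical sketch of the derivative-imaging idea (synthetic hypercube; the wavelength choices are illustrative): finite differences between nearby bands keep the compared measurements at similar penetration depths:

        import numpy as np

        # Synthetic hypercube: 64 x 64 pixels, 100 bands from 900 to 1700 nm
        rng = np.random.default_rng(0)
        wavelengths = np.linspace(900, 1700, 100)
        cube = rng.random((64, 64, 100)).cumsum(axis=2)       # smooth-ish toy spectra

        def derivative_image(cube, wavelengths, wl, delta=8.0):
            """First derivative of the spectrum around wl, from two bands ~delta nm apart."""
            i = np.argmin(np.abs(wavelengths - wl))
            j = np.argmin(np.abs(wavelengths - (wl + delta)))
            return (cube[:, :, j] - cube[:, :, i]) / (wavelengths[j] - wavelengths[i])

        # Derivative image near a water-absorption region (illustrative choice)
        d_img = derivative_image(cube, wavelengths, wl=1440.0)
        print(d_img.shape, d_img.mean())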

  19. Hyperspectral imaging coupled with chemometric analysis for non-invasive differentiation of black pens

    NASA Astrophysics Data System (ADS)

    Chlebda, Damian K.; Majda, Alicja; Łojewski, Tomasz; Łojewska, Joanna

    2016-11-01

    Differentiation of written text can be performed with a non-invasive, non-contact tool that combines conventional imaging methods with spectroscopy. Hyperspectral imaging (HSI) is a relatively new and rapid analytical technique that can be applied in the forensic science disciplines. It allows an image of the sample to be acquired with full spectral information within every pixel. For this paper, HSI and three statistical methods (hierarchical cluster analysis, principal component analysis, and spectral angle mapper) were used to distinguish between traces of modern black gel pen inks. Non-invasiveness and high efficiency are among the unquestionable advantages of ink differentiation using HSI. It is also less time-consuming than traditional methods such as chromatography. In this study, a set of 45 modern gel pen ink marks deposited on a paper sheet was registered. The spectral characteristics embodied in every pixel were extracted from an image and analysed using statistical methods, both externally and directly on the hypercube. As a result, different black gel inks deposited on paper can be distinguished and classified into several groups in a non-invasive manner.
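
    A minimal sketch of the spectral angle mapper (SAM) named above, applied to hypothetical pixel and reference spectra:

        import numpy as np

        def spectral_angle(pixel, reference):
            """Angle (radians) between a pixel spectrum and a reference spectrum;
            smaller angles mean more similar inks."""
            cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
            return np.arccos(np.clip(cos, -1.0, 1.0))

        rng = np.random.default_rng(0)
        reference = rng.random(200)                 # hypothetical reference ink spectrum
        pixel_same = reference * 1.8                # same shape, different brightness
        pixel_diff = rng.random(200)                # unrelated ink

        print(spectral_angle(pixel_same, reference))   # ~0: SAM is insensitive to scaling
        print(spectral_angle(pixel_diff, reference))   # noticeably larger angle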

  20. Three-particle annihilation in a 2D heterostructure revealed through data-hypercubic photoresponse microscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Gabor, Nathaniel M.

    2017-05-01

    Van de Waals (vdW) heterostructures - which consist of precisely assembled atomically thin electronic materials - exhibit unusual quantum behavior. These quantum materials-by-design are of fundamental interest in basic scientific research and hold tremendous potential in advanced technological applications. Problematically, the fundamental optoelectronic response in these heterostructures is difficult to access using the standard techniques within the traditions of materials science and condensed matter physics. In the standard approach, characterization is based on the measurement of a small amount of one-dimensional data, which is used to gain a precise picture of the material properties of the sample. However, these techniques are fundamentally lacking in describing the complex interdependency of experimental degrees of freedom in vdW heterostructures. In this talk, I will present our recent experiments that utilize a highly data-intensive approach to gain deep understanding of the infrared photoresponse in vdW heterostructure photodetectors. These measurements, which combine state-of-the-art data analytics and measurement design with fundamentally new device structures and experimental parameters, give a clear picture of electron-hole pair interactions at ultrafast time scales.

  1. Parameter identification and optimization of slide guide joint of CNC machine tools

    NASA Astrophysics Data System (ADS)

    Zhou, S.; Sun, B. B.

    2017-11-01

    The joint surface has an important influence on the performance of CNC machine tools. In order to identify the dynamic parameters of the slide guide joint, a parametric finite element model of the joint is established, and an optimum design method is applied based on finite element simulation and modal testing. The mode that most influences the dynamics of the slide joint is then found through harmonic response analysis. Taking the frequency of this mode as the objective, a sensitivity analysis of the stiffness of each joint surface is carried out using Latin hypercube sampling and Monte Carlo simulation. The results show that the vertical stiffness of the slide joint surface formed by the bed and the slide plate has the most pronounced influence on the structure. This stiffness is therefore taken as the optimization variable, and its optimal value is obtained by studying the relationship between structural dynamic performance and stiffness. Substituting the stiffness values from before and after optimization into the finite element model of the machine tool shows that the optimization improves the machine tool's dynamic performance.
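
    A hedged sketch of the sampling step described above: Latin hypercube samples over four joint-surface stiffnesses are pushed through a toy frequency model standing in for the FEM harmonic response, and influence is ranked by correlation. The stiffness ranges, the model, and the sample size are illustrative assumptions.

```python
import numpy as np
from scipy.stats import qmc

# LHS over four joint-surface stiffnesses (N/m, assumed ranges), followed by
# a Monte Carlo sensitivity ranking against a toy "first-mode frequency".
sampler = qmc.LatinHypercube(d=4, seed=42)
X = qmc.scale(sampler.random(n=500), [1e8] * 4, [1e9] * 4)

def first_mode_freq(k):
    # Toy surrogate for the FEM: the first stiffness dominates by construction
    return (120.0 + 40.0 * np.log10(k[:, 0]) + 4.0 * np.log10(k[:, 1])
            + 2.0 * np.log10(k[:, 2]) + 1.0 * np.log10(k[:, 3]))

f = first_mode_freq(X)
sens = [abs(np.corrcoef(X[:, i], f)[0, 1]) for i in range(4)]
print(np.argsort(sens)[::-1])   # stiffness indices ranked by influence
```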

  2. Optimized Reduction of Unsteady Radial Forces in a Singlechannel Pump for Wastewater Treatment

    NASA Astrophysics Data System (ADS)

    Kim, Jin-Hyuk; Cho, Bo-Min; Choi, Young-Seok; Lee, Kyoung-Yong; Peck, Jong-Hyeon; Kim, Seon-Chang

    2016-11-01

    A single-channel pump for wastewater treatment was optimized to reduce unsteady radial force sources caused by impeller-volute interactions. The steady and unsteady Reynolds-averaged Navier-Stokes equations, using the shear-stress transport turbulence model, were discretized by finite-volume approximations and solved on tetrahedral grids to analyze the flow in the single-channel pump. The sweep area of the radial force during one revolution and the distance of the sweep-area center of mass from the origin were selected as the objective functions; the two design variables were related to the internal flow cross-sectional area of the volute. These objective functions were combined into a single objective function by applying a weighting factor for optimization. Latin hypercube sampling was employed to generate twelve design points within the design space. A response-surface approximation model was constructed as a surrogate model for the objectives, based on the objective-function values at the generated design points. The optimized results showed a considerable reduction in the unsteady radial force sources in the optimum design relative to those of the reference design.
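
    The surrogate workflow above can be sketched as follows, with synthetic objective values standing in for the CFD results; the normalized design variables and the quadratic basis are assumptions for illustration.

```python
import numpy as np
from scipy.stats import qmc

# Twelve LHS design points in two (normalized) volute cross-sectional-area
# variables, then a quadratic response surface fitted to a weighted objective.
sampler = qmc.LatinHypercube(d=2, seed=7)
X = sampler.random(n=12)
F = 1.0 + 0.6 * X[:, 0]**2 + 0.3 * X[:, 1] - 0.4 * X[:, 0] * X[:, 1]  # toy CFD

# Quadratic RSA basis: [1, x1, x2, x1^2, x2^2, x1*x2]
A = np.column_stack([np.ones(12), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 1]**2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, F, rcond=None)

# Evaluate the surrogate on a grid and pick the minimizer as the optimum design
g = np.linspace(0.0, 1.0, 101)
x1, x2 = np.meshgrid(g, g)
surf = (coef[0] + coef[1] * x1 + coef[2] * x2
        + coef[3] * x1**2 + coef[4] * x2**2 + coef[5] * x1 * x2)
i = np.unravel_index(surf.argmin(), surf.shape)
print(x1[i], x2[i])
```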

  3. Rapid prediction of single green coffee bean moisture and lipid content by hyperspectral imaging.

    PubMed

    Caporaso, Nicola; Whitworth, Martin B; Grebby, Stephen; Fisk, Ian D

    2018-06-01

    Hyperspectral imaging (1000-2500 nm) was used for rapid prediction of moisture and total lipid content in intact green coffee beans on a single-bean basis. Arabica and Robusta samples from several growing locations were scanned using a "push-broom" system. Hypercubes were segmented to select single beans, and average spectra were measured for each bean. Partial Least Squares regression was used to build quantitative prediction models on single beans (n = 320-350). The models exhibited good performance and acceptable prediction errors of ∼0.28% for moisture and ∼0.89% for lipids. This study represents the first time that HSI-based quantitative prediction models have been developed for coffee, and specifically green coffee beans; it is also the first attempt to build such models using single intact coffee beans. The compositional variability between beans was studied, and fat and moisture distributions were visualized within individual coffee beans. This rapid, non-destructive approach could have important applications for research laboratories, breeding programmes, and rapid screening in industry.
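
    A minimal sketch of the modelling step under stated assumptions: synthetic per-bean mean spectra and moisture values replace the measured data, and a PLS model is fitted and scored much as described above.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: one mean NIR spectrum per bean and a toy moisture truth
rng = np.random.default_rng(3)
spectra = rng.random((330, 200))
moisture = 8.0 + 2.0 * spectra[:, 50] + 0.1 * rng.standard_normal(330)

X_tr, X_te, y_tr, y_te = train_test_split(spectra, moisture, random_state=0)
pls = PLSRegression(n_components=8).fit(X_tr, y_tr)   # component count assumed
rmse = np.sqrt(np.mean((pls.predict(X_te).ravel() - y_te) ** 2))
print(f"RMSEP ~ {rmse:.2f} % moisture")
```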

  4. Uncertainty and sensitivity analysis for two-phase flow in the vicinity of the repository in the 1996 performance assessment for the Waste Isolation Pilot Plant: Disturbed conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HELTON,JON CRAIG; BEAN,J.E.; ECONOMY,K.

    2000-05-22

    Uncertainty and sensitivity analysis results obtained in the 1996 performance assessment (PA) for the Waste Isolation Pilot Plant (WIPP) are presented for two-phase flow in the vicinity of the repository under disturbed conditions resulting from drilling intrusions. Techniques based on Latin hypercube sampling, examination of scatterplots, stepwise regression analysis, partial correlation analysis and rank transformations are used to investigate brine inflow, gas generation, repository pressure, brine saturation, and brine and gas outflow. Of the variables under study, repository pressure and brine flow from the repository to the Culebra Dolomite are potentially the most important in PA for the WIPP. Subsequent to a drilling intrusion, repository pressure was dominated by borehole permeability and was generally below the level (i.e., 8 MPa) that could potentially produce spallings and direct brine releases. Brine flow from the repository to the Culebra Dolomite tended to be small or nonexistent, with its occurrence and size also dominated by borehole permeability.
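
    The rank-based regression machinery mentioned above can be sketched on a toy response (not the WIPP models): LHS-sampled inputs and the output are rank-transformed, and standardized rank regression coefficients indicate variable importance. The input names, ranges, and response are assumptions.

```python
import numpy as np
from scipy.stats import rankdata, qmc

# LHS over three uncertain inputs; columns stand for log10 borehole
# permeability, porosity, and a nuisance parameter (all ranges assumed).
sampler = qmc.LatinHypercube(d=3, seed=1)
X = qmc.scale(sampler.random(300), [-14.0, 0.1, 0.0], [-11.0, 0.4, 1.0])
y = (10 ** X[:, 0]) * 1e13 + 0.5 * X[:, 1] \
    + 0.01 * np.random.default_rng(2).random(300)   # toy monotone response

# Rank-transform inputs and output, then fit standardized rank regression
R = np.column_stack([rankdata(X[:, i]) for i in range(3)])
ry = rankdata(y)
Z = (R - R.mean(axis=0)) / R.std(axis=0)
zy = (ry - ry.mean()) / ry.std()
src, *_ = np.linalg.lstsq(np.column_stack([np.ones(300), Z]), zy, rcond=None)
print(src[1:])   # standardized rank regression coefficients per input
```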

  5. A Novel Approach for Determining Source–Receptor Relationships in Model Simulations: A Case Study of Black Carbon Transport in Northern Hemisphere Winter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Po-Lun; Gattiker, J. R.; Liu, Xiaohong

    2013-06-27

    A Gaussian process (GP) emulator is applied to quantify the contribution of local and remote emissions of black carbon (BC) to the BC concentrations in different regions, using a Latin hypercube sampling strategy for emission perturbations in the offline version of the Community Atmosphere Model Version 5.1 (CAM5). The source-receptor relationships are computed from simulations constrained by a standard free-running CAM5 simulation and by the ERA-Interim reanalysis product. The analysis demonstrates that the emulator is capable of retrieving the source-receptor relationships from a small number of CAM5 simulations. Most regions are found to be susceptible to their local emissions. The emulator also finds that the source-receptor relationships retrieved from the model-driven and the reanalysis-driven simulations are very similar, suggesting that the simulated circulation in CAM5 resembles the assimilated meteorology in ERA-Interim. The robustness of the results provides confidence for applying the emulator to detect dose-response signals in the climate system.
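
    A hedged sketch of the emulator idea, with a toy linear response standing in for CAM5: LHS perturbations of regional emission scaling factors train a GP, whose local sensitivities play the role of source-receptor relationships. The region count, ranges, and response are assumptions.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# LHS perturbations of four regional emission scaling factors (assumed range)
sampler = qmc.LatinHypercube(d=4, seed=5)
E = qmc.scale(sampler.random(40), [0.5] * 4, [1.5] * 4)
# Toy receptor-region BC concentration: region 0 dominates by construction
receptor_bc = 2.0 * E[:, 0] + 0.3 * E[:, 2] \
    + 0.05 * np.random.default_rng(5).random(40)

gp = GaussianProcessRegressor(ConstantKernel() * RBF([0.5] * 4),
                              normalize_y=True).fit(E, receptor_bc)

# Finite-difference sensitivity of the emulator at the nominal emissions
x0, eps = np.ones((1, 4)), 1e-3
sens = [(gp.predict(x0 + eps * np.eye(4)[i]) - gp.predict(x0))[0] / eps
        for i in range(4)]
print(sens)   # largest entry -> dominant source region for this receptor
```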

  6. Monte Carlo methods for multidimensional integration for European option pricing

    NASA Astrophysics Data System (ADS)

    Todorov, V.; Dimov, I. T.

    2016-10-01

    In this paper, we illustrate examples of highly accurate Monte Carlo and quasi-Monte Carlo methods for multiple integrals related to the evaluation of European-style options. The idea is that the value of the option is formulated in terms of the expectation of some random variable; the average of independent samples of this random variable is then used to estimate the value of the option. First we obtain an integral representation for the value of the option using the risk-neutral valuation formula. Then, with an appropriate change of variables, we obtain a multidimensional integral over the unit hypercube of the corresponding dimensionality. We then compare a specific type of lattice rule with one of the best low-discrepancy sequences, that of Sobol, for numerical integration. Quasi-Monte Carlo methods are compared with adaptive and crude Monte Carlo techniques for solving the problem. The four approaches are completely different, so it is a question of interest which of them outperforms the others for evaluating multidimensional integrals in finance. Some of the advantages and disadvantages of the developed algorithms are discussed.
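
    A minimal sketch of the crude Monte Carlo versus quasi-Monte Carlo comparison for a one-dimensional example, a European call under Black-Scholes dynamics; the parameters are illustrative, and the lattice-rule and adaptive variants discussed in the paper are omitted.

```python
import numpy as np
from scipy.stats import norm, qmc

# Price a European call as an expectation over the unit interval: uniforms are
# mapped to the risk-neutral GBM terminal price via the inverse normal CDF.
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0   # illustrative parameters

def payoff(u):
    z = norm.ppf(u)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0)

crude = payoff(np.random.default_rng(0).random(2**14)).mean()
sobol = payoff(qmc.Sobol(d=1, scramble=True, seed=0)
               .random_base2(m=14).ravel()).mean()
print(crude, sobol)   # both near the Black-Scholes value, ~10.45
```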

  7. A comprehensive evaluation of various sensitivity analysis methods: A case study with a hydrological model

    DOE PAGES

    Gan, Yanjun; Duan, Qingyun; Gong, Wei; ...

    2014-01-01

    Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, a minimum of 1050 samples is needed to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient, but less accurate and robust, than quantitative ones.
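
    As an illustration of the cheapest screening method named above, here is a hand-rolled Morris One-At-a-Time sketch on a toy five-parameter function standing in for SAC-SMA; the trajectory count, step size, and model are assumptions.

```python
import numpy as np

# Morris OAT screening: random trajectories perturb one factor at a time, and
# the mean absolute elementary effect (mu*) ranks parameter importance.
rng = np.random.default_rng(8)

def model(x):                       # toy response in place of SAC-SMA; x in [0,1]^5
    return 4 * x[0] + 2 * x[1]**2 + x[2] * x[3] + 0.1 * x[4]

k, r, delta = 5, 20, 0.25           # parameters, trajectories, step size
effects = np.zeros((r, k))
for t in range(r):
    x = rng.random(k) * (1 - delta)          # keep x + delta inside [0,1]
    base = model(x)
    for i in rng.permutation(k):             # perturb factors in random order
        x2 = x.copy()
        x2[i] += delta
        effects[t, i] = (model(x2) - base) / delta
        x, base = x2, model(x2)              # continue the trajectory

mu_star = np.abs(effects).mean(axis=0)
print(np.argsort(mu_star)[::-1])    # importance ranking of the 5 parameters
```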

  8. High throughput determination of cleaning solutions to prevent the fouling of an anion exchange resin.

    PubMed

    Elich, Thomas; Iskra, Timothy; Daniels, William; Morrison, Christopher J

    2016-06-01

    Effective cleaning of chromatography resin is required to prevent fouling and maximize the number of processing cycles which can be achieved. Optimization of resin cleaning procedures, however, can lead to prohibitive material, labor, and time requirements, even when using milliliter scale chromatography columns. In this work, high throughput (HT) techniques were used to evaluate cleaning agents for a monoclonal antibody (mAb) polishing step utilizing Fractogel® EMD TMAE HiCap (M) anion exchange (AEX) resin. For this particular mAb feed stream, the AEX resin could not be fully restored with traditional NaCl and NaOH cleaning solutions, resulting in a loss of impurity capacity with resin cycling. Miniaturized microliter scale chromatography columns and an automated liquid handling system (LHS) were employed to evaluate various experimental cleaning conditions. Cleaning agents were monitored for their ability to maintain resin impurity capacity over multiple processing cycles by analyzing the flowthrough material for turbidity and high molecular weight (HMW) content. HT experiments indicated that a 167 mM acetic acid strip solution followed by a 0.5 M NaOH, 2 M NaCl sanitization provided approximately 90% cleaning improvement over solutions containing solely NaCl and/or NaOH. Results from the microliter scale HT experiments were confirmed in subsequent evaluations at the milliliter scale. These results identify cleaning agents which may restore resin performance for applications involving fouling species in ion exchange systems. In addition, this work demonstrates the use of miniaturized columns operated with an automated LHS for HT evaluation of chromatographic cleaning procedures, effectively decreasing material requirements while simultaneously increasing throughput. Biotechnol. Bioeng. 2016;113: 1251-1259. © 2015 Wiley Periodicals, Inc.

  9. Neonatal treatment philosophy in Dutch and German NICUs: health-related quality of life in adulthood of VP/VLBW infants.

    PubMed

    Breeman, Linda D; van der Pal, Sylvia; Verrips, Gijsbert H W; Baumann, Nicole; Bartmann, Peter; Wolke, Dieter

    2017-04-01

    Although survival after very preterm birth (VP)/very low birth weight (VLBW) has improved, a significant number of VP/VLBW individuals develop physical and cognitive problems during their life course that may affect their health-related quality of life (HRQoL). We compared HRQoL in VP/VLBW cohorts from two countries, the Netherlands (n = 314) versus Germany (n = 260), and examined whether different neonatal treatment and rates of disability affect HRQoL in adulthood. To analyse whether the cohorts differed in adult HRQoL, linear regression analyses were performed for three HRQoL outcomes assessed with the Health Utilities Index 3 (HUI3), the London Handicap Scale (LHS), and the WHO Quality of Life instrument (WHOQOL-BREF). Stepwise hierarchical linear regression was used to test whether neonatal physical health and treatment, social environment, and intelligence (IQ) were related to VP/VLBW adults' HRQoL and to the cohort differences. Dutch VP/VLBW adults reported a significantly higher HRQoL on all three general HRQoL measures than German VP/VLBW adults (HUI3: .86 vs .83, p = .036; LHS: .93 vs .90, p = .018; WHOQOL-BREF: 82.8 vs 78.3, p < .001). The main predictor of the cohort differences in all three HRQoL measures was adult IQ (p < .001): the lower HRQoL of German versus Dutch adults was related to greater cognitive impairment in the German adults. Due to different policies, German VP/VLBW infants received more intensive treatment, which may have affected their cognitive development. Our findings stress the importance of examining the effects of different neonatal treatment policies on VP/VLBW adults' lives.

  10. Reducing premature KCC2 expression rescues seizure susceptibility and spine morphology in atypical febrile seizures.

    PubMed

    Awad, Patricia N; Sanon, Nathalie T; Chattopadhyaya, Bidisha; Carriço, Josianne Nunes; Ouardouz, Mohamed; Gagné, Jonathan; Duss, Sandra; Wolf, Daniele; Desgent, Sébastien; Cancedda, Laura; Carmant, Lionel; Di Cristo, Graziella

    2016-07-01

    Atypical febrile seizures are considered a risk factor for epilepsy onset and cognitive impairments later in life. Patients with temporal lobe epilepsy and a history of atypical febrile seizures often carry a cortical malformation. This association has led to the hypothesis that the presence of a cortical dysplasia exacerbates febrile seizures in infancy, in turn increasing the risk for neurological sequelae. The mechanisms linking these events are currently poorly understood. The potassium-chloride cotransporter KCC2 affects several aspects of neuronal circuit development and function by modulating GABAergic transmission and excitatory synapse formation. Recent data suggest that KCC2 downregulation contributes to seizure generation in the epileptic adult brain, but its role in the developing brain is still controversial. In a rodent model of atypical febrile seizures combining a cortical dysplasia and hyperthermia-induced seizures (LHS rats), we found a premature and sustained increase in KCC2 protein levels, accompanied by a negative shift of the reversal potential of GABA. In parallel, we observed a significant reduction in dendritic spine size and mEPSC amplitude in CA1 pyramidal neurons, accompanied by spatial memory deficits. To investigate whether premature KCC2 overexpression plays a role in seizure susceptibility and synaptic alterations, we reduced KCC2 expression selectively in hippocampal pyramidal neurons by in utero electroporation of shRNA. Remarkably, KCC2 shRNA-electroporated LHS rats show reduced hyperthermia-induced seizure susceptibility, while dendritic spine size deficits were rescued. Our findings demonstrate that KCC2 overexpression in a compromised developing brain increases febrile seizure susceptibility and contributes to dendritic spine alterations. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Pediatric-Collaborative Health Outcomes Information Registry (Peds-CHOIR): A Learning Health System to Guide Pediatric Pain Research and Treatment

    PubMed Central

    Bhandari, Rashmi P.; Feinstein, Amanda B.; Huestis, Samantha E.; Krane, Elliot J.; Dunn, Ashley L.; Cohen, Lindsey L.; Kao, Ming C.; Darnall, Beth D.; Mackey, Sean C.

    2016-01-01

    The pediatric adaptation of the Collaborative Health Outcomes Information Registry (Peds-CHOIR) is a free, open source, flexible learning health care system (LHS) that meets the call by the Institute of Medicine (IOM) for the development of national registries to guide research and precision pain medicine. This report is a technical account of the first application of Peds-CHOIR with three aims: to 1) describe the design and implementation process of the LHS; 2) highlight how the clinical system concurrently cultivates a research platform rich in breadth (e.g., clinic characteristics) and depth (e.g., unique patient and caregiver reporting patterns); and 3) demonstrate the utility of capturing patient-caregiver dyad data in real time, with dynamic outcomes tracking that informs clinical decisions and delivery of treatments. Technical, financial, and systems-based considerations of Peds-CHOIR are discussed. Cross-sectional, retrospective data from patients with chronic pain (N = 352; 8 – 17 years; M = 13.9 years) and their caregivers are reported, including NIH Patient Reported Outcomes Measurement Information System (PROMIS) domains (mobility, pain interference, fatigue, peer relations, anxiety and depression) and the Pain Catastrophizing Scale. Consistent with the literature, analyses of initial visits revealed impairments across physical, psychological and social domains. Patients and caregivers evidenced agreement in observable variables (mobility); however, caregivers consistently endorsed greater impairment regarding internal experiences (pain interference, fatigue, peer relations, anxiety, depression) than patients’ self-report. A platform like Peds-CHOIR highlights predictors of chronic pain outcomes on a group level and facilitates individually tailored treatment(s). Challenges of implementation and future directions are discussed. PMID:27280328

  12. Understanding of impurity poloidal distribution in the edge pedestal by modelling

    NASA Astrophysics Data System (ADS)

    Rozhansky, V.; Kaveeva, E.; Molchanov, P.; Veselova, I.; Voskoboynikov, S.; Coster, D.; Fable, E.; Puetterich, T.; Viezzer, E.; Kukushkin, A. S.; Kirk, A.; the ASDEX Upgrade Team

    2015-07-01

    Simulation of an H-mode ASDEX Upgrade shot with boron impurity was performed with the B2SOLPS5.2 transport code. Simulation results were compared with the unique experimental data available for the chosen shot: radial density, electron and ion temperature profiles at the equatorial midplanes, the radial electric field profile, radial profiles of the parallel velocity of impurities at the low-field side (LFS) and high-field side (HFS), and radial density profiles of impurity ions at the LFS and HFS. The simulation reproduces all available experimental data simultaneously. In particular, a strong poloidal HFS-LFS asymmetry of B5+ ions was predicted, in accordance with the experiment. The simulated HFS B5+ density inside the edge transport barrier is twice as large as that at the LFS. This is consistent with the experimental observations, where an even larger impurity density asymmetry was observed. A similar effect was predicted in a simulation done for the MAST H-mode, where the HFS density of He2+ is predicted to be 4 times larger than that at the LFS. Such a large predicted asymmetry is connected with the larger ratio of HFS to LFS magnetic fields that is typical of spherical tokamaks. The HFS/LFS asymmetry was not measured in that experiment; however, the modelling qualitatively reproduces the observed change of sign of the He+ parallel velocity to the counter-current direction at the LFS. The understanding of the asymmetry is based on neoclassical effects in plasma with strong gradients. It is demonstrated that simulation results obtained accounting for ionization sources, realistic geometry and turbulent transport are consistent with the simplified analytical approach. Differences from standard neoclassical theory are emphasized.

  13. Observing the Atmospheres of Known Temperate Earth-sized Planets with JWST

    NASA Astrophysics Data System (ADS)

    Morley, Caroline V.; Kreidberg, Laura; Rustamkulov, Zafar; Robinson, Tyler; Fortney, Jonathan J.

    2017-12-01

    Nine transiting Earth-sized planets have recently been discovered around nearby late-M dwarfs, including the TRAPPIST-1 planets and two planets discovered by the MEarth survey, GJ 1132b and LHS 1140b. These planets are the smallest known planets that may have atmospheres amenable to detection with the James Webb Space Telescope (JWST). We present model thermal emission and transmission spectra for each planet, varying composition and surface pressure of the atmosphere. We base elemental compositions on those of Earth, Titan, and Venus and calculate the molecular compositions assuming chemical equilibrium, which can strongly depend on temperature. Both thermal emission and transmission spectra are sensitive to the atmospheric composition; thermal emission spectra are sensitive to surface pressure and temperature. We predict the observability of each planet’s atmosphere with JWST. GJ 1132b and TRAPPIST-1b are excellent targets for emission spectroscopy with JWST/MIRI, requiring fewer than 10 eclipse observations. Emission photometry for TRAPPIST-1c requires 5-15 eclipses; LHS 1140b and TRAPPIST-1d, TRAPPIST-1e, and TRAPPIST-1f, which could possibly have surface liquid water, may be accessible with photometry. Seven of the nine planets are strong candidates for transmission spectroscopy measurements with JWST, although the number of transits required depends strongly on the planets’ actual masses. Using the measured masses, fewer than 20 transits are required for a 5σ detection of spectral features for GJ 1132b and six of the TRAPPIST-1 planets. Dedicated campaigns to measure the atmospheres of these nine planets will allow us, for the first time, to probe formation and evolution processes of terrestrial planetary atmospheres beyond our solar system.

  14. Direct Test of the Brown Dwarf Evolutionary Models Through Secondary Eclipse Spectroscopy of LHS 6343

    NASA Astrophysics Data System (ADS)

    Albert, Loic

    2015-10-01

    With field brown dwarfs now numbering in the thousands, interpreting their physical parameters (mass, temperature, radius, luminosity, age, metallicity) relies as heavily as ever on atmosphere and evolutionary models. Fortunately, models are largely successful in explaining observations (colors, spectral types, luminosity), so they appear well calibrated in a relative sense. However, an absolute, model-independent calibration is still lacking. Eclipsing BD systems are a unique laboratory in this respect, but until recently only one such system was known: 2M0535-05, a very young (<3 Myr) binary brown dwarf showing a peculiar temperature reversal (Stassun et al. 2006). Due to its young age, 2M0535-05 is an ill-suited test for the Gyr-old field brown dwarfs that are by far the most common population in the solar neighborhood. Recently, a second system - an evolved BD (>1 Gyr; 62.1+/-1.2 MJup, 0.783+/-0.011 RJup) transiting LHS 6343 with a 12.7-day period - was identified. We propose to use WFC3 in drift-scan mode and 5 HST orbits to determine the spectral type (a proxy for temperature) as well as the near-infrared luminosity of this brown dwarf. Our simulations predict a signal-to-noise ratio between 10 and 30 per resolution element in the peaks of the spectrum. These measurements, coupled with existing Spitzer luminosity measurements at 3.6 and 4.5 microns, will allow us to trace the spectral energy distribution of the brown dwarf and directly calculate its blackbody temperature. It will be the first field brown dwarf with simultaneous measurements of radius, mass, luminosity and temperature, all measured independently of models.

  15. Tensor form factor for the D → π(K) transitions with Twisted Mass fermions.

    NASA Astrophysics Data System (ADS)

    Lubicz, Vittorio; Riggio, Lorenzo; Salerno, Giorgio; Simula, Silvano; Tarantino, Cecilia

    2018-03-01

    We present a preliminary lattice calculation of the D → π and D → K tensor form factors fT(q2) as a function of the squared 4-momentum transfer q2. ETMC recently computed the vector and scalar form factors f+(q2) and f0(q2) describing D → π(K)ℓν semileptonic decays by analyzing the vector current and the scalar density. The study of the weak tensor current, which is directly related to the tensor form factor, completes the set of hadronic matrix elements governing the transition between these two pseudoscalar mesons within and beyond the Standard Model, where a non-zero tensor coupling is possible. Our analysis is based on the gauge configurations produced by the European Twisted Mass Collaboration with Nf = 2 + 1 + 1 flavors of dynamical quarks. We simulated at three different values of the lattice spacing, with pion masses as small as 210 MeV and with the valence heavy quark in the mass range from ≃0.7 mc to ≃1.2 mc. The matrix elements of the tensor current are determined for a plethora of kinematical conditions in which parent and child mesons are either moving or at rest. As for the vector and scalar form factors, Lorentz symmetry breaking due to hypercubic effects is clearly observed in the data. We present preliminary results on the removal of such hypercubic lattice effects.

  16. ProperCAD: A portable object-oriented parallel environment for VLSI CAD

    NASA Technical Reports Server (NTRS)

    Ramkumar, Balkrishna; Banerjee, Prithviraj

    1993-01-01

    Most parallel algorithms for VLSI CAD proposed to date have one important drawback: they work efficiently only on the machines they were designed for. As a result, existing algorithms are dependent on the architecture for which they were developed and do not port easily to other parallel architectures. A new project under way to address this problem is described: a Portable object-oriented parallel environment for CAD algorithms (ProperCAD). The objectives of this research are (1) to develop new parallel algorithms that run in a portable object-oriented environment (the CAD algorithms build on a general-purpose platform for portable parallel programming called CARM, together with a truly object-oriented C++ environment specialized for CAD applications); and (2) to design the parallel algorithms around a good sequential algorithm with a well-defined parallel-sequential interface, permitting the parallel algorithm to benefit from future developments in sequential algorithms. One CAD application implemented as part of the ProperCAD project, flat VLSI circuit extraction, is described; the algorithm, its implementation, and its performance on a range of parallel machines are discussed in detail. It currently runs on an Encore Multimax, a Sequent Symmetry, Intel iPSC/2 and i860 hypercubes, an NCUBE 2 hypercube, and a network of Sun Sparc workstations. Performance data are also provided for other applications that were developed: test pattern generation for sequential circuits, parallel logic synthesis, and standard cell placement.

  17. Genome-scale fluxes predicted under the guidance of enzyme abundance using a novel hyper-cube shrink algorithm.

    PubMed

    Xie, Zhengwei; Zhang, Tianyu; Ouyang, Qi

    2018-02-01

    One of the long-expected goals of genome-scale metabolic modelling is to evaluate the influence of perturbed enzymes on flux distribution. Both ordinary differential equation (ODE) models and constraint-based models, such as flux balance analysis (FBA), lack the capacity to perform metabolic control analysis (MCA) for large-scale networks. In this study, we developed a hyper-cube shrink algorithm (HCSA) to incorporate enzymatic properties into the FBA model by introducing a pseudo-reaction V constrained by enzymatic parameters. Our algorithm uses the enzymatic information quantitatively rather than qualitatively. We first demonstrate the concept by applying HCSA to a simple three-node network, whereby we obtain a good correlation between flux and enzyme abundance. We then validate its predictions by comparison with an ODE model and with a synthetic network producing violacein and its analogues in Saccharomyces cerevisiae, showing that HCSA can mimic the steady-state results of the ODE model. Finally, we show its capability of predicting flux distributions in genome-scale networks by applying it to sporulation in yeast, demonstrating the ability of HCSA to operate without a biomass flux and to perform MCA to determine rate-limiting reactions. The algorithm was implemented in Matlab and C++. The code is available at https://github.com/kekegg/HCSA. Contact: xiezhengwei@hsc.pku.edu.cn or qi@pku.edu.cn. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved.
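
    The flux-balance backbone that HCSA builds on can be sketched as a linear programme; the toy three-reaction network and the enzyme-proportional flux cap below are illustrative assumptions, not the published HCSA implementation.

```python
import numpy as np
from scipy.optimize import linprog

# Toy FBA: maximize output flux v3 subject to steady state (S v = 0) and flux
# bounds, with v2 capped in proportion to a measured enzyme abundance.
S = np.array([[1, -1,  0],        # metabolite A: produced by v1, consumed by v2
              [0,  1, -1]])       # metabolite B: produced by v2, consumed by v3
enzyme_abundance = 0.6            # illustrative value scaling the v2 capacity
bounds = [(0, 10), (0, 10 * enzyme_abundance), (0, None)]

res = linprog(c=[0, 0, -1],       # maximize v3 (linprog minimizes)
              A_eq=S, b_eq=[0, 0], bounds=bounds, method="highs")
print(res.x)                      # optimal fluxes; v3 is held to 6 by the cap
```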

  18. Modeling Semantic Emotion Space Using a 3D Hypercube-Projection: An Innovative Analytical Approach for the Psychology of Emotions

    PubMed Central

    Trnka, Radek; Lačev, Alek; Balcar, Karel; Kuška, Martin; Tavel, Peter

    2016-01-01

    The widely accepted two-dimensional circumplex model of emotions posits that most instances of human emotional experience can be understood within the two general dimensions of valence and activation. Currently, this model is facing some criticism, because complex emotions in particular are hard to define within these two general dimensions alone. The present theory-driven study introduces an innovative analytical approach that operates outside the conventional two-dimensional paradigm. The main goal was to map and project semantic emotion space in terms of the mutual positions of various emotion prototypical categories. Participants (N = 187; 54.5% females) judged 16 discrete emotions in terms of valence, intensity, controllability and utility. The results revealed that these four dimensional input measures were uncorrelated, implying that valence, intensity, controllability and utility represented clearly different qualities of discrete emotions in the judgments of the participants. Based on these data, we constructed a 3D hypercube-projection and compared it with various two-dimensional projections. This contrast enabled us to detect several sources of bias when working with the traditional two-dimensional analytical approach: the 2D models provided biased insights about how emotions are conceptually related to one another along multiple dimensions. The results of the present study point out the reductionist nature of the two-dimensional paradigm in the psychological theory of emotions and challenge the widely accepted circumplex model. PMID:27148130

  19. Finite elements and the method of conjugate gradients on a concurrent processor

    NASA Technical Reports Server (NTRS)

    Lyzenga, G. A.; Raefsky, A.; Hager, G. H.

    1985-01-01

    An algorithm for the iterative solution of finite element problems on a concurrent processor is presented. The method of conjugate gradients is used to solve the system of matrix equations, which is distributed among the processors of a MIMD computer according to an element-based spatial decomposition. This algorithm is implemented in a two-dimensional elastostatics program on the Caltech Hypercube concurrent processor. The results of tests on up to 32 processors show nearly linear concurrent speedup, with efficiencies over 90 percent for sufficiently large problems.
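
    A serial sketch of the solver at the heart of the paper; the comments mark the two kinds of operations that require inter-processor communication in the element-based decomposition, everything else being local to a processor.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A by conjugate gradients."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p                    # distributed matvec in the parallel code
        alpha = rs / (p @ Ap)         # dot product -> global reduction
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r                # dot product -> global reduction
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # small SPD test matrix
print(conjugate_gradient(A, np.array([1.0, 2.0])))
```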

  20. Speeding up parallel processing

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1988-01-01

    In 1967 Amdahl expressed doubts about the ultimate utility of multiprocessors. The formulation, now called Amdahl's law, became part of the computing folklore and has inspired much skepticism about the ability of the current generation of massively parallel processors to efficiently deliver all their computing power to programs. The widely publicized recent results of a group at Sandia National Laboratory, which showed speedup on a 1024 node hypercube of over 500 for three fixed size problems and over 1000 for three scalable problems, have convincingly challenged this bit of folklore and have given new impetus to parallel scientific computing.
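
    The contrast can be made concrete with the two standard speedup formulas, fixed-size (Amdahl) versus scaled-problem (Gustafson); the serial fractions below are chosen only to echo the magnitude of the 1024-node results.

```python
# Fixed-size versus scaled speedup for serial fraction s on p processors.
def amdahl(s, p):
    return 1.0 / (s + (1.0 - s) / p)

def gustafson(s, p):
    return p - s * (p - 1)

for s in (0.01, 0.001):
    print(s, round(amdahl(s, 1024), 1), round(gustafson(s, 1024), 1))
# At s = 0.001, Amdahl caps the fixed-size speedup near 506, while the
# scaled-problem view allows ~1023 -- the regime the Sandia results showed.
```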
