Sample records for obtaining realistic estimates

  1. Spline Laplacian estimate of EEG potentials over a realistic magnetic resonance-constructed scalp surface model.

    PubMed

    Babiloni, F; Babiloni, C; Carducci, F; Fattorini, L; Onorati, P; Urbano, A

    1996-04-01

    This paper presents a realistic Laplacian (RL) estimator based on a tensorial formulation of the surface Laplacian (SL) that uses the 2-D thin plate spline function to obtain a mathematical description of a realistic scalp surface. Because of this tensorial formulation, the RL does not need an orthogonal reference frame placed on the realistic scalp surface. In simulation experiments the RL was estimated with an increasing number of "electrodes" (up to 256) on a mathematical scalp model, the analytic Laplacian being used as a reference. Second and third order spherical spline Laplacian estimates were examined for comparison. Noise of increasing magnitude and spatial frequency was added to the simulated potential distributions. Movement-related potentials and somatosensory evoked potentials sampled with 128 electrodes were used to estimate the RL on a realistically shaped, MR-constructed model of the subject's scalp surface. The RL was also estimated on a mathematical spherical scalp model computed from the real scalp surface. Simulation experiments showed that the performances of the RL estimator were similar to those of the second and third order spherical spline Laplacians. Furthermore, the information content of scalp-recorded potentials was clearly better when the RL estimator computed the SL of the potential on an MR-constructed scalp surface model.
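    As a rough illustration of the spline idea behind the estimator, the sketch below fits a 2-D thin plate spline to potentials at electrode positions and takes a discrete Laplacian on a flat grid. This is a simplified planar analogue, not the paper's tensorial formulation on a realistic MR-constructed surface; the electrode coordinates and potentials are synthetic placeholders.

    ```python
    # Simplified planar analogue of a spline-Laplacian estimate: fit a 2-D thin
    # plate spline to potentials sampled at electrode positions, then take a
    # discrete Laplacian on a regular grid (synthetic data, illustration only).
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)
    electrodes = rng.uniform(-0.1, 0.1, size=(128, 2))     # synthetic x, y positions (m)
    potentials = np.sin(20 * electrodes[:, 0]) * 1e-6      # synthetic EEG potentials (V)

    spline = RBFInterpolator(electrodes, potentials, kernel='thin_plate_spline')

    x = np.linspace(-0.1, 0.1, 101)
    X, Y = np.meshgrid(x, x)
    V = spline(np.column_stack([X.ravel(), Y.ravel()])).reshape(X.shape)

    h = x[1] - x[0]
    d2x = np.gradient(np.gradient(V, h, axis=1), h, axis=1)
    d2y = np.gradient(np.gradient(V, h, axis=0), h, axis=0)
    laplacian = d2x + d2y                                   # surface Laplacian estimate (V/m^2)
    print(laplacian.shape)
    ```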

  2. Accurately Decoding Visual Information from fMRI Data Obtained in a Realistic Virtual Environment

    DTIC Science & Technology

    2015-06-09

    Floren, A.; Naylor, B.; Miikkulainen, R.; Ress, D. Accurately decoding visual information from fMRI data obtained in a realistic virtual environment. Front. Hum. Neurosci. 9:327. doi: 10.3389/fnhum.2015.00327. Center for Learning and Memory, The University of Texas at Austin, Austin, TX, USA.

  3. Information content of slug tests for estimating hydraulic properties in realistic, high-conductivity aquifer scenarios

    NASA Astrophysics Data System (ADS)

    Cardiff, Michael; Barrash, Warren; Thoma, Michael; Malama, Bwalya

    2011-06-01

    A recently developed unified model for partially-penetrating slug tests in unconfined aquifers (Malama et al., in press) provides a semi-analytical solution for aquifer response at the wellbore in the presence of inertial effects and wellbore skin, and is able to model the full range of responses from overdamped/monotonic to underdamped/oscillatory. While the model provides a unifying framework for realistically analyzing slug tests in aquifers (with the ultimate goal of determining aquifer properties such as hydraulic conductivity K and specific storage Ss), it is currently unclear whether parameters of this model can be well-identified without significant prior information and, thus, what degree of information content can be expected from such slug tests. In this paper, we examine the information content of slug tests in realistic field scenarios with respect to estimating aquifer properties, through analysis of both numerical experiments and field datasets. First, through numerical experiments using Markov Chain Monte Carlo methods for gauging parameter uncertainty and identifiability, we find that: (1) as noted by previous researchers, estimation of aquifer storage parameters using slug test data is highly unreliable and subject to significant uncertainty; (2) joint estimation of aquifer and skin parameters contributes to significant uncertainty in both unless prior knowledge is available; and (3) similarly, without prior information joint estimation of both aquifer radial and vertical conductivity may be unreliable. These results have significant implications for the types of information that must be collected prior to slug test analysis in order to obtain reliable aquifer parameter estimates. For example, plausible estimates of aquifer anisotropy ratios and bounds on wellbore skin K should be obtained, if possible, a priori. Second, through analysis of field data - consisting of over 2500 records from partially-penetrating slug tests in a
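    To illustrate the kind of MCMC-based identifiability analysis described above, the sketch below runs a plain Metropolis sampler on a toy two-parameter forward model and reports the posterior spread of each parameter. The forward model, priors, and noise level are illustrative stand-ins, not the Malama et al. slug-test solution.

    ```python
    # Generic Metropolis sampler illustrating how MCMC gauges parameter
    # uncertainty/identifiability; the forward model is a toy damped oscillation,
    # not the slug-test solution analyzed in the paper.
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.linspace(0, 10, 200)

    def forward(params, t):
        k, damping = params                      # illustrative parameters
        return np.exp(-damping * t) * np.cos(k * t)

    data = forward([2.0, 0.3], t) + 0.05 * rng.standard_normal(t.size)

    def log_posterior(params):
        if np.any(np.asarray(params) <= 0):
            return -np.inf                       # flat positivity prior
        resid = data - forward(params, t)
        return -0.5 * np.sum((resid / 0.05) ** 2)

    samples = np.empty((20000, 2))
    current = np.array([1.0, 1.0])
    lp = log_posterior(current)
    for i in range(samples.shape[0]):
        proposal = current + 0.02 * rng.standard_normal(2)
        lp_new = log_posterior(proposal)
        if np.log(rng.uniform()) < lp_new - lp:  # Metropolis accept/reject
            current, lp = proposal, lp_new
        samples[i] = current

    burned = samples[5000:]
    print("posterior means:", burned.mean(axis=0))
    print("posterior std devs:", burned.std(axis=0))  # wide spreads flag poorly identified parameters
    ```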

  4. Biochemical transport modeling, estimation, and detection in realistic environments

    NASA Astrophysics Data System (ADS)

    Ortner, Mathias; Nehorai, Arye

    2006-05-01

    Early detection and estimation of the spread of a biochemical contaminant are major issues for homeland security applications. We present an integrated approach combining the measurements given by an array of biochemical sensors with a physical model of the dispersion and statistical analysis to solve these problems and provide system performance measures. We approximate the dispersion model of the contaminant in a realistic environment through numerical simulations of reflected stochastic diffusions describing the microscopic transport phenomena due to wind and chemical diffusion using the Feynman-Kac formula. We consider arbitrary complex geometries and account for wind turbulence. Localizing the dispersive sources is useful for decontamination purposes and estimation of the cloud evolution. To solve the associated inverse problem, we propose a Bayesian framework based on a random field that is particularly powerful for localizing multiple sources with small amounts of measurements. We also develop a sequential detector using the numerical transport model we propose. Sequential detection allows on-line analysis and detecting whether a change has occurred. We first focus on the formulation of a suitable sequential detector that overcomes the presence of unknown parameters (e.g. release time, intensity and location). We compute a bound on the expected delay before false detection in order to decide the threshold of the test. For a fixed false-alarm rate, we obtain the detection probability of a substance release as a function of its location and initial concentration. Numerical examples are presented for two real-world scenarios: an urban area and an indoor ventilation duct.
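    For readers unfamiliar with sequential detection, the sketch below shows a minimal one-sided CUSUM detector on a simulated sensor stream. It assumes a known pre-change mean and a chosen reference drift and threshold, which is simpler than the paper's detector for unknown release time, intensity, and location.

    ```python
    # Minimal one-sided CUSUM sequential change detector on a simulated sensor
    # reading; pre-change mean, drift and threshold are illustrative choices.
    import numpy as np

    rng = np.random.default_rng(2)
    background = rng.normal(0.0, 1.0, 300)        # concentration before release
    release = rng.normal(1.5, 1.0, 200)           # concentration after release
    signal = np.concatenate([background, release])

    mu0, k, h = 0.0, 0.5, 8.0                     # pre-change mean, reference drift, threshold
    s, alarm_at = 0.0, None
    for n, x in enumerate(signal):
        s = max(0.0, s + (x - mu0 - k))           # cumulative-sum statistic
        if s > h:
            alarm_at = n
            break

    print("change introduced at sample 300, alarm raised at sample", alarm_at)
    ```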

  5. Adaptive bearing estimation and tracking of multiple targets in a realistic passive sonar scenario

    NASA Astrophysics Data System (ADS)

    Rajagopal, R.; Challa, Subhash; Faruqi, Farhan A.; Rao, P. R.

    1997-06-01

    In a realistic passive sonar environment, the received signal consists of multipath arrivals from closely separated moving targets. The signals are contaminated by spatially correlated noise. The differential MUSIC method has been proposed to estimate the DOAs in such a scenario. This method estimates the 'noise subspace' in order to estimate the DOAs. However, the 'noise subspace' estimate has to be updated as and when new data become available. In order to save the computational costs, a new adaptive noise subspace estimation algorithm is proposed in this paper. The salient features of the proposed algorithm are: (1) Noise subspace estimation is done by QR decomposition of the difference matrix which is formed from the data covariance matrix. Thus, as compared to standard eigen-decomposition based methods which require O(N³) computations, the proposed method requires only O(N²) computations. (2) Noise subspace is updated by updating the QR decomposition. (3) The proposed algorithm works in a realistic sonar environment. In the second part of the paper, the estimated bearing values are used to track multiple targets. In order to achieve this, the proposed nonlinear-system/linear-measurement extended Kalman filter is applied. Computer simulation results are also presented to support the theory.
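    For context, the sketch below shows the standard eigendecomposition-based noise-subspace (MUSIC-style) bearing estimate, i.e. the O(N³) baseline that the paper's QR-based update is meant to avoid. The uniform linear array geometry, source angles, SNR, and the assumption of two known sources are all illustrative.

    ```python
    # Eigendecomposition-based noise-subspace (MUSIC-style) DOA estimation on a
    # simulated uniform linear array; this is the O(N^3) baseline, not the
    # paper's O(N^2) QR-update algorithm.
    import numpy as np
    from scipy.signal import find_peaks

    rng = np.random.default_rng(3)
    n_sensors, n_snapshots, wavelength, spacing = 10, 500, 1.0, 0.5
    angles_true = np.deg2rad([-20.0, 15.0])

    def steering(theta):
        k = 2 * np.pi / wavelength
        return np.exp(1j * k * spacing * np.arange(n_sensors)[:, None] * np.sin(theta))

    A = steering(angles_true)
    S = rng.standard_normal((2, n_snapshots)) + 1j * rng.standard_normal((2, n_snapshots))
    noise = 0.3 * (rng.standard_normal((n_sensors, n_snapshots))
                   + 1j * rng.standard_normal((n_sensors, n_snapshots)))
    X = A @ S + noise
    R = X @ X.conj().T / n_snapshots                  # sample data covariance matrix

    eigvals, eigvecs = np.linalg.eigh(R)
    En = eigvecs[:, :n_sensors - 2]                   # noise subspace (2 sources assumed known)

    scan = np.deg2rad(np.linspace(-90, 90, 721))
    pseudo = 1.0 / np.linalg.norm(En.conj().T @ steering(scan), axis=0) ** 2
    peaks, _ = find_peaks(pseudo)
    top = peaks[np.argsort(pseudo[peaks])[-2:]]
    print("estimated DOAs (deg):", np.sort(np.rad2deg(scan[top])))
    ```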

  6. Using coronal seismology to estimate the magnetic field strength in a realistic coronal model

    NASA Astrophysics Data System (ADS)

    Chen, F.; Peter, H.

    2015-09-01

    Aims: Coronal seismology is used extensively to estimate properties of the corona, e.g. the coronal magnetic field strength is derived from oscillations observed in coronal loops. We present a three-dimensional coronal simulation, including a realistic energy balance in which we observe oscillations of a loop in synthesised coronal emission. We use these results to test the inversions based on coronal seismology. Methods: From the simulation of the corona above an active region, we synthesise extreme ultraviolet emission from the model corona. From this, we derive maps of line intensity and Doppler shift providing synthetic data in the same format as obtained from observations. We fit the (Doppler) oscillation of the loop in the same fashion as done for observations to derive the oscillation period and damping time. Results: The loop oscillation seen in our model is similar to imaging and spectroscopic observations of the Sun. The velocity disturbance of the kink oscillation shows an oscillation period of 52.5 s and a damping time of 125 s, which are both consistent with the ranges of periods and damping times found in observations. Using standard coronal seismology techniques, we find an average magnetic field strength of Bkink = 79 G for our loop in the simulation, while in the loop the field strength drops from roughly 300 G at the coronal base to 50 G at the apex. Using the data from our simulation, we can infer what the average magnetic field derived from coronal seismology actually means. It is close to the magnetic field strength in a constant cross-section flux tube, which would give the same wave travel time through the loop. Conclusions: Our model produced a realistic looking loop-dominated corona, and provides realistic information on the oscillation properties that can be used to calibrate and better understand the result from coronal seismology. A movie associated with Fig. 1 is available in electronic form at http://www.aanda.org
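    For reference, the "standard coronal seismology techniques" mentioned above are usually based on the kink-mode inversion of Nakariakov & Ofman (2001); the relation below is that textbook form, with loop length L, oscillation period P, and internal/external densities as the assumed inputs, not values taken from this simulation.

    ```latex
    % Standard kink-mode inversion: fundamental kink speed from loop length and
    % period, and field strength from the internal/external densities.
    \[
      c_k = \frac{2L}{P}, \qquad
      B_{\mathrm{kink}} = c_k \sqrt{\frac{\mu_0\,(\rho_i + \rho_e)}{2}}
                        = \frac{2L}{P}\sqrt{\frac{\mu_0\,\rho_i\,\bigl(1 + \rho_e/\rho_i\bigr)}{2}} .
    \]
    ```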

  7. Comparison of temporal realistic telecommunication base station exposure with worst-case estimation in two countries.

    PubMed

    Mahfouz, Zaher; Verloock, Leen; Joseph, Wout; Tanghe, Emmeric; Gati, Azeddine; Wiart, Joe; Lautru, David; Hanna, Victor Fouad; Martens, Luc

    2013-12-01

    The influence of temporal daily exposure to global system for mobile communications (GSM) and universal mobile telecommunications systems and high speed downlink packet access (UMTS-HSDPA) is investigated using spectrum analyser measurements in two countries, France and Belgium. Temporal variations and traffic distributions are investigated. Three different methods to estimate maximal electric-field exposure are compared. The maximal realistic (99 %) and the maximal theoretical extrapolation factor used to extrapolate the measured broadcast control channel (BCCH) for GSM and the common pilot channel (CPICH) for UMTS are presented and compared for the first time in the two countries. Similar conclusions are found in the two countries for both urban and rural areas: worst-case exposure assessment overestimates realistic maximal exposure up to 5.7 dB for the considered example. In France, the values are the highest, because of the higher population density. The results for the maximal realistic extrapolation factor at the weekdays are similar to those from weekend days.
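    As a quick check of what the quoted 5.7 dB figure means as a linear ratio (standard decibel conversions, nothing beyond the paper's 5.7 dB number):

    ```latex
    % A worst-case overestimation of 5.7 dB corresponds to the linear ratios
    \[
      \frac{E_{\mathrm{worst}}}{E_{\mathrm{realistic}}} = 10^{5.7/20} \approx 1.9
      \;\;\text{(field strength)}, \qquad
      10^{5.7/10} \approx 3.7 \;\;\text{(power density)}.
    \]
    ```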

  8. Bayesian Modal Estimation of the Four-Parameter Item Response Model in Real, Realistic, and Idealized Data Sets.

    PubMed

    Waller, Niels G; Feuerstahler, Leah

    2017-01-01

    In this study, we explored item and person parameter recovery of the four-parameter model (4PM) in over 24,000 real, realistic, and idealized data sets. In the first analyses, we fit the 4PM and three alternative models to data from three Minnesota Multiphasic Personality Inventory-Adolescent form factor scales using Bayesian modal estimation (BME). Our results indicated that the 4PM fits these scales better than simpler Item Response Theory (IRT) models. Next, using the parameter estimates from these real data analyses, we estimated 4PM item parameters in 6,000 realistic data sets to establish minimum sample size requirements for accurate item and person parameter recovery. Using a factorial design that crossed discrete levels of item parameters, sample size, and test length, we also fit the 4PM to an additional 18,000 idealized data sets to extend our parameter recovery findings. Our combined results demonstrated that 4PM item parameters and parameter functions (e.g., item response functions) can be accurately estimated using BME in moderate to large samples (N ⩾ 5,000) and person parameters can be accurately estimated in smaller samples (N ⩾ 1,000). In the supplemental files, we report annotated R code that shows how to estimate 4PM item and person parameters in the mirt package (Chalmers, 2012).
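    For readers unfamiliar with the 4PM, the snippet below evaluates the four-parameter logistic item response function (discrimination a, difficulty b, lower asymptote c, upper asymptote d). The parameter values are placeholders, and this is not the authors' BME estimation code.

    ```python
    # Four-parameter logistic (4PM) item response function; parameter values are
    # placeholders, not estimates from the study.
    import numpy as np

    def irf_4pl(theta, a, b, c, d):
        # c = lower asymptote ("guessing"), d = upper asymptote ("slipping")
        return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

    theta = np.linspace(-4, 4, 9)
    print(irf_4pl(theta, a=1.2, b=0.5, c=0.15, d=0.95))
    ```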

  9. Batch Effect Confounding Leads to Strong Bias in Performance Estimates Obtained by Cross-Validation

    PubMed Central

    Delorenzi, Mauro

    2014-01-01

    Background: With the large amount of biological data that is currently publicly available, many investigators combine multiple data sets to increase the sample size and potentially also the power of their analyses. However, technical differences (“batch effects”) as well as differences in sample composition between the data sets may significantly affect the ability to draw generalizable conclusions from such studies. Focus: The current study focuses on the construction of classifiers, and the use of cross-validation to estimate their performance. In particular, we investigate the impact of batch effects and differences in sample composition between batches on the accuracy of the classification performance estimate obtained via cross-validation. The focus on estimation bias is a main difference compared to previous studies, which have mostly focused on the predictive performance and how it relates to the presence of batch effects. Data: We work on simulated data sets. To have realistic intensity distributions, we use real gene expression data as the basis for our simulation. Random samples from this expression matrix are selected and assigned to group 1 (e.g., ‘control’) or group 2 (e.g., ‘treated’). We introduce batch effects and select some features to be differentially expressed between the two groups. We consider several scenarios for our study, most importantly different levels of confounding between groups and batch effects. Methods: We focus on well-known classifiers: logistic regression, Support Vector Machines (SVM), k-nearest neighbors (kNN) and Random Forests (RF). Feature selection is performed with the Wilcoxon test or the lasso. Parameter tuning and feature selection, as well as the estimation of the prediction performance of each classifier, is performed within a nested cross-validation scheme. The estimated classification performance is then compared to what is obtained when applying the classifier to independent data. PMID:24967636
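    The nested cross-validation scheme mentioned above can be illustrated with scikit-learn: tuning happens in an inner loop, and performance is estimated in an outer loop so the tuning never sees the outer test folds. The data, classifier, and parameter grid below are placeholders; the batch-effect simulation itself is not reproduced.

    ```python
    # Minimal nested cross-validation: hyper-parameter tuning (inner loop) stays
    # inside each outer training fold, so the outer score is an unbiased
    # performance estimate. Data and grid are placeholders.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, n_features=50, n_informative=5, random_state=0)

    inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
    outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=2)

    tuned_svm = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}, cv=inner)
    scores = cross_val_score(tuned_svm, X, y, cv=outer)
    print("nested-CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
    ```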

  10. Analysis of short pulse laser altimetry data obtained over horizontal path

    NASA Technical Reports Server (NTRS)

    Im, K. E.; Tsai, B. M.; Gardner, C. S.

    1983-01-01

    Recent pulsed measurements of atmospheric delay obtained by ranging to the more realistic targets including a simulated ocean target and an extended plate target are discussed. These measurements are used to estimate the expected timing accuracy of a correlation receiver system. The experimental work was conducted using a pulsed two color laser altimeter.

  11. Is realistic neuronal modeling realistic?

    PubMed Central

    Almog, Mara

    2016-01-01

    Scientific models are abstractions that aim to explain natural phenomena. A successful model shows how a complex phenomenon arises from relatively simple principles while preserving major physical or biological rules and predicting novel experiments. A model should not be a facsimile of reality; it is an aid for understanding it. Contrary to this basic premise, with the 21st century has come a surge in computational efforts to model biological processes in great detail. Here we discuss the oxymoronic, realistic modeling of single neurons. This rapidly advancing field is driven by the discovery that some neurons don't merely sum their inputs and fire if the sum exceeds some threshold. Thus researchers have asked what the computational abilities of single neurons are and attempted to give answers using realistic models. We briefly review the state of the art of compartmental modeling, highlighting recent progress and intrinsic flaws. We then attempt to address two fundamental questions. Practically, can we realistically model single neurons? Philosophically, should we realistically model single neurons? We use layer 5 neocortical pyramidal neurons as a test case to examine these issues. We subject three publicly available models of layer 5 pyramidal neurons to three simple computational challenges. Based on their performance and a partial survey of published models, we conclude that current compartmental models are ad hoc, unrealistic models functioning poorly once they are stretched beyond the specific problems for which they were designed. We then attempt to plot possible paths for generating realistic single neuron models. PMID:27535372

  12. Mathematical modeling in realistic mathematics education

    NASA Astrophysics Data System (ADS)

    Riyanto, B.; Zulkardi; Putri, R. I. I.; Darmawijoyo

    2017-12-01

    The purpose of this paper is to produce mathematical modelling tasks in Realistic Mathematics Education for junior high school. This study used development research consisting of 3 stages, namely analysis, design and evaluation. The success criteria of this study were obtained in the form of a local instruction theory for school mathematical modelling learning which was valid and practical for students. The data were analyzed using a descriptive analysis method as follows: (1) walk-through analysis based on the expert comments in the expert review to obtain a Hypothetical Learning Trajectory for valid mathematical modelling learning; (2) analysis of the results of the review in one-to-one and small-group sessions to gauge practicality. Based on the expert validation and the students' opinions and answers, the obtained mathematical modelling problem in Realistic Mathematics Education was valid and practical.

  13. Generation of realistic scene using illuminant estimation and mixed chromatic adaptation

    NASA Astrophysics Data System (ADS)

    Kim, Jae-Chul; Hong, Sang-Gi; Kim, Dong-Ho; Park, Jong-Hyun

    2003-12-01

    An algorithm for combining a real image with a virtual model is proposed to increase the realism of synthesized images. Current approaches to synthesizing a real image with a virtual model rely on the surface reflection model and various geometric techniques. In these methods, the characteristics of the various illuminants in the real image are not sufficiently considered. In addition, although chromatic adaptation plays a vital role in accommodating different illuminants across the two media viewing conditions, it is not taken into account in the existing methods. Thus, it is hard to obtain high-quality synthesized images. In this paper, we propose a two-phase image synthesis algorithm. First, the surface reflectance of the maximum high-light region (MHR) is estimated using the three eigenvectors obtained from principal component analysis (PCA) applied to the surface reflectances of 1269 Munsell samples. The combined spectral value of MHR, i.e., the product of surface reflectance and the spectral power distribution (SPD) of an illuminant, is then estimated using the three eigenvectors obtained from PCA applied to the products of the surface reflectances of the 1269 Munsell samples and the SPDs of four CIE Standard Illuminants (A, C, D50, D65). By dividing the average combined spectral values of MHR by the average surface reflectances of MHR, we can estimate the illuminant of a real image. Second, mixed chromatic adaptation (S-LMS) using the estimated and an external illuminant is applied to the virtual-model image. For evaluating the proposed algorithm, experiments with synthetic and real scenes were performed. It was shown that the proposed method was effective in synthesizing the real and the virtual scenes under various illuminants.

  14. Improving Forecasts Through Realistic Uncertainty Estimates: A Novel Data Driven Method for Model Uncertainty Quantification in Data Assimilation

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; Moradkhani, H.; Marshall, L. A.; Sharma, A.; Geenens, G.

    2016-12-01

    Effective combination of model simulations and observations through Data Assimilation (DA) depends heavily on uncertainty characterisation. Many traditional methods for quantifying model uncertainty in DA require some level of subjectivity (by way of tuning parameters or by assuming Gaussian statistics). Furthermore, the focus is typically on only estimating the first and second moments. We propose a data-driven methodology to estimate the full distributional form of model uncertainty, i.e. the transition density p(xt|xt-1). All sources of uncertainty associated with the model simulations are considered collectively, without needing to devise stochastic perturbations for individual components (such as model input, parameter and structural uncertainty). A training period is used to derive the distribution of errors in observed variables conditioned on hidden states. Errors in hidden states are estimated from the conditional distribution of observed variables using non-linear optimization. The theory behind the framework and case study applications are discussed in detail. Results demonstrate improved predictions and more realistic uncertainty bounds compared to a standard perturbation approach.

  15. Self-adaptive signals separation for non-contact heart rate estimation from facial video in realistic environments.

    PubMed

    Liu, Xuenan; Yang, Xuezhi; Jin, Jing; Li, Jiangshan

    2018-06-05

    Recent research indicates that facial epidermis color varies with the rhythm of heart beats. This variation can be captured by consumer-level cameras and, astonishingly, used to estimate heart rate (HR). Although numerous methods have been proposed in the last few years, the estimated HR is still not as precise as required in practical environments where illumination interference, facial expressions, or motion artifacts are involved. A novel algorithm is proposed to make non-contact HR estimation more robust. First, the face of the subject is detected and tracked to follow the head movement. The facial region is then divided into several blocks, and the chrominance feature of each block is extracted to establish a raw HR sub-signal. Self-adaptive signals separation (SASS) is performed to separate the noiseless HR sub-signals from the raw sub-signals. On that basis, the noiseless sub-signals rich in HR information are selected using a weight-based scheme to establish the holistic HR signal, from which the average HR is computed using a wavelet transform and data filtering. Forty subjects took part in our experiments; their facial videos were recorded by a normal webcam at a frame rate of 30 fps under ambient lighting conditions. The average HR estimated by our method correlates strongly with ground-truth measurements, with a Pearson's correlation of r=0.980 in the static scenario and r=0.897 in the dynamic scenario. Compared to the newest method, our method decreases the error rate by 38.63% and increases the Pearson's correlation by 15.59%, indicating that it evidently outperforms state-of-the-art non-contact HR estimation methods in realistic environments.
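    The final stage of such pipelines, reading the heart rate off the spectral peak of a band-passed pulse signal, can be sketched as below. The SASS separation and block-weighting steps of the paper are not reproduced; the input trace here is synthetic.

    ```python
    # Last stage of non-contact HR estimation: band-pass the pulse signal to the
    # plausible HR band (0.7-4 Hz) and read the rate off the spectral peak.
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 30.0                                        # webcam frame rate (fps)
    t = np.arange(0, 30, 1 / fs)
    trace = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.random.default_rng(4).standard_normal(t.size)

    b, a = butter(3, [0.7 / (fs / 2), 4.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trace)

    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(filtered.size, 1 / fs)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    hr_bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
    print("estimated HR: %.1f bpm" % hr_bpm)         # ~72 bpm for the 1.2 Hz synthetic pulse
    ```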

  16. Survey of Approaches to Generate Realistic Synthetic Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seung-Hwan; Lee, Sangkeun; Powers, Sarah S

    A graph is a flexible data structure that can represent relationships between entities. As with other data analysis tasks, the use of realistic graphs is critical to obtaining valid research results. Unfortunately, using the actual ("real-world") graphs for research and new algorithm development is difficult due to the presence of sensitive information in the data or due to the scale of data. This results in practitioners developing algorithms and systems that employ synthetic graphs instead of real-world graphs. Generating realistic synthetic graphs that provide reliable statistical confidence to algorithmic analysis and system evaluation involves addressing technical hurdles in a broad set of areas. This report surveys the state of the art in approaches to generate realistic graphs that are derived from fitted graph models on real-world graphs.
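    One simple instance of the idea surveyed here is to fit a model to a real graph and sample a synthetic surrogate from it; the sketch below matches the original degree sequence with a configuration model in networkx. The "real" graph is a placeholder, and this is only one of the many generator families such a survey covers.

    ```python
    # Generate a synthetic surrogate graph that matches the degree sequence of a
    # "real" graph via a configuration model (the real graph here is a stand-in).
    import networkx as nx

    real_graph = nx.barabasi_albert_graph(1000, 3, seed=0)   # stand-in for a sensitive real graph
    degree_sequence = [d for _, d in real_graph.degree()]

    synthetic = nx.configuration_model(degree_sequence, seed=1)
    synthetic = nx.Graph(synthetic)                           # collapse parallel edges
    synthetic.remove_edges_from(list(nx.selfloop_edges(synthetic)))

    print(real_graph.number_of_edges(), synthetic.number_of_edges())
    ```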

  17. Improving the quality of parameter estimates obtained from slug tests

    USGS Publications Warehouse

    Butler, J.J.; McElwee, C.D.; Liu, W.

    1996-01-01

    The slug test is one of the most commonly used field methods for obtaining in situ estimates of hydraulic conductivity. Despite its prevalence, this method has received criticism from many quarters in the ground-water community. This criticism emphasizes the poor quality of the estimated parameters, a condition that is primarily a product of the somewhat casual approach that is often employed in slug tests. Recently, the Kansas Geological Survey (KGS) has pursued research directed at improving methods for the performance and analysis of slug tests. Based on extensive theoretical and field research, a series of guidelines have been proposed that should enable the quality of parameter estimates to be improved. The most significant of these guidelines are: (1) three or more slug tests should be performed at each well during a given test period; (2) two or more different initial displacements (Ho) should be used at each well during a test period; (3) the method used to initiate a test should enable the slug to be introduced in a near-instantaneous manner and should allow a good estimate of Ho to be obtained; (4) data-acquisition equipment that enables a large quantity of high quality data to be collected should be employed; (5) if an estimate of the storage parameter is needed, an observation well other than the test well should be employed; (6) the method chosen for analysis of the slug-test data should be appropriate for site conditions; (7) use of pre- and post-analysis plots should be an integral component of the analysis procedure; and (8) appropriate well construction parameters should be employed. Data from slug tests performed at a number of KGS field sites demonstrate the importance of these guidelines.
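    For a sense of how a conductivity estimate follows from slug-test data, the snippet below applies the textbook Hvorslev relation, which is one common analysis choice rather than the specific KGS methods discussed above. The well geometry and the basic time lag are illustrative numbers.

    ```python
    # Textbook Hvorslev slug-test estimate: hydraulic conductivity from casing
    # radius r_c, screen radius R, screen length L_e, and the basic time lag t_37
    # at which the normalized head has fallen to 37% of the initial displacement.
    # All numbers are illustrative, not field data.
    import math

    r_c = 0.05     # casing radius (m)
    R = 0.05       # well screen radius (m)
    L_e = 2.0      # screen length (m)
    t_37 = 30.0    # basic time lag (s), read off a log(H/H0) vs. time plot

    K = (r_c ** 2) * math.log(L_e / R) / (2.0 * L_e * t_37)
    print("K = %.2e m/s" % K)
    ```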

  18. Simulating realistic predator signatures in quantitative fatty acid signature analysis

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.

    2015-01-01

    Diet estimation is an important field within quantitative ecology, providing critical insights into many aspects of ecology and community dynamics. Quantitative fatty acid signature analysis (QFASA) is a prominent method of diet estimation, particularly for marine mammal and bird species. Investigators using QFASA commonly use computer simulation to evaluate statistical characteristics of diet estimators for the populations they study. Similar computer simulations have been used to explore and compare the performance of different variations of the original QFASA diet estimator. In both cases, computer simulations involve bootstrap sampling prey signature data to construct pseudo-predator signatures with known properties. However, bootstrap sample sizes have been selected arbitrarily and pseudo-predator signatures therefore may not have realistic properties. I develop an algorithm to objectively establish bootstrap sample sizes that generates pseudo-predator signatures with realistic properties, thereby enhancing the utility of computer simulation for assessing QFASA estimator performance. The algorithm also appears to be computationally efficient, resulting in bootstrap sample sizes that are smaller than those commonly used. I illustrate the algorithm with an example using data from Chukchi Sea polar bears (Ursus maritimus) and their marine mammal prey. The concepts underlying the approach may have value in other areas of quantitative ecology in which bootstrap samples are post-processed prior to their use.
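    The pseudo-predator construction examined in the paper can be sketched as below: bootstrap prey fatty-acid signatures within each prey type, average them, and mix by known diet proportions. The prey library and proportions are synthetic placeholders, calibration coefficients are omitted, and the bootstrap sample size shown is exactly the quantity the paper's algorithm selects objectively.

    ```python
    # Minimal pseudo-predator signature construction for a QFASA-style simulation:
    # bootstrap prey signatures within each prey type, average, and mix with a
    # known diet. Synthetic data; calibration coefficients omitted.
    import numpy as np

    rng = np.random.default_rng(5)
    n_fatty_acids = 20
    prey_library = {                                   # each signature sums to 1
        "seal": rng.dirichlet(np.ones(n_fatty_acids), size=60),
        "walrus": rng.dirichlet(np.ones(n_fatty_acids), size=40),
    }
    diet = {"seal": 0.7, "walrus": 0.3}                # "known" diet of the pseudo-predator
    bootstrap_size = 15                                # the sample size the algorithm would tune

    pseudo_predator = np.zeros(n_fatty_acids)
    for prey, proportion in diet.items():
        idx = rng.integers(0, len(prey_library[prey]), bootstrap_size)
        pseudo_predator += proportion * prey_library[prey][idx].mean(axis=0)

    print(pseudo_predator.round(3), pseudo_predator.sum())   # still sums to ~1
    ```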

  19. Obtaining Reliable Estimates of Ambulatory Physical Activity in People with Parkinson's Disease.

    PubMed

    Paul, Serene S; Ellis, Terry D; Dibble, Leland E; Earhart, Gammon M; Ford, Matthew P; Foreman, K Bo; Cavanaugh, James T

    2016-05-05

    We determined the number of days required, and whether to include weekdays and/or weekends, to obtain reliable measures of ambulatory physical activity in people with Parkinson's disease (PD). Ninety-two persons with PD wore a step activity monitor for seven days. The number of days required to obtain a reliable estimate of daily activity was determined from the mean intraclass correlation (ICC(2,1)) for all possible combinations of 1-6 consecutive days of monitoring. Two days of monitoring were sufficient to obtain reliable daily activity estimates (ICC(2,1) > 0.9). Amount (p = 0.03) but not intensity (p = 0.13) of ambulatory activity was greater on weekdays than weekends. Activity prescription based on amount rather than intensity may be more appropriate for people with PD.
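    For reference, ICC(2,1) is the two-way random-effects, single-measure intraclass correlation of Shrout and Fleiss; the sketch below computes it from a subjects-by-days matrix of step counts. The data here are random placeholders, not the study's monitoring data.

    ```python
    # Two-way random-effects, single-measure ICC(2,1) from a subjects-by-days
    # matrix of daily step counts (placeholder data).
    import numpy as np

    rng = np.random.default_rng(6)
    n_subjects, n_days = 92, 2
    steps = rng.normal(5000, 2000, size=(n_subjects, 1)) + rng.normal(0, 500, size=(n_subjects, n_days))

    grand = steps.mean()
    ms_rows = n_days * np.sum((steps.mean(axis=1) - grand) ** 2) / (n_subjects - 1)   # between subjects
    ms_cols = n_subjects * np.sum((steps.mean(axis=0) - grand) ** 2) / (n_days - 1)   # between days
    resid = steps - steps.mean(axis=1, keepdims=True) - steps.mean(axis=0, keepdims=True) + grand
    ms_err = np.sum(resid ** 2) / ((n_subjects - 1) * (n_days - 1))

    icc_2_1 = (ms_rows - ms_err) / (
        ms_rows + (n_days - 1) * ms_err + n_days * (ms_cols - ms_err) / n_subjects
    )
    print("ICC(2,1) = %.3f" % icc_2_1)
    ```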

  20. Targeted estimation of nuisance parameters to obtain valid statistical inference.

    PubMed

    van der Laan, Mark J

    2014-01-01

    In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. As a particular special
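    A bare-bones TMLE for the treatment-specific mean E[Y(1)] with a binary outcome is sketched below: fit an initial outcome regression and a propensity score, then apply the one-dimensional logistic fluctuation ("targeting") step. It uses plain logistic regressions on simulated data rather than the super-learning and collaborative targeting the paper develops, so it is a toy illustration of the targeting idea, not the authors' C-TMLE.

    ```python
    # Toy TMLE for E[Y(1)]: initial outcome regression Q, propensity score g,
    # then a one-dimensional logistic fluctuation with clever covariate A/g.
    import numpy as np
    import statsmodels.api as sm
    from scipy.special import expit, logit
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)
    n = 2000
    W = rng.normal(size=(n, 2))                                  # baseline covariates
    A = rng.binomial(1, expit(0.5 * W[:, 0]))                    # treatment depends on W
    Y = rng.binomial(1, expit(0.3 + 0.8 * A + 0.4 * W[:, 1]))    # outcome depends on A and W

    XA = np.column_stack([A, W])
    Q_fit = LogisticRegression().fit(XA, Y)                      # initial outcome regression
    g_fit = LogisticRegression().fit(W, A)                       # propensity score

    Q_obs = Q_fit.predict_proba(XA)[:, 1]
    Q1 = Q_fit.predict_proba(np.column_stack([np.ones(n), W]))[:, 1]
    g1 = np.clip(g_fit.predict_proba(W)[:, 1], 0.01, 0.99)

    H = A / g1                                                    # clever covariate for E[Y(1)]
    eps = sm.GLM(Y, H[:, None], family=sm.families.Binomial(),
                 offset=logit(Q_obs)).fit().params[0]             # fluctuation (targeting) parameter

    Q1_star = expit(logit(Q1) + eps / g1)                         # targeted prediction at A = 1
    print("TMLE estimate of E[Y(1)]: %.3f" % Q1_star.mean())
    ```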

  1. Radiation Damage to Nervous System: Designing Optimal Models for Realistic Neuron Morphology in Hippocampus

    NASA Astrophysics Data System (ADS)

    Batmunkh, Munkhbaatar; Bugay, Alexander; Bayarchimeg, Lkhagvaa; Lkhagva, Oidov

    2018-02-01

    The present study is focused on the development of optimal models of neuron morphology for Monte Carlo microdosimetry simulations of initial radiation-induced events of heavy charged particles in the specific types of cells of the hippocampus, which is the most radiation-sensitive structure of the central nervous system. The neuron geometry and particle track structures were simulated with the Geant4/Geant4-DNA Monte Carlo toolkits. The calculations were made for beams of protons and heavy ions with different energies and doses corresponding to real fluxes of galactic cosmic rays. A simple compartmental model and a complex model with realistic morphology extracted from experimental data were constructed and compared. We estimated the distribution of the energy deposition events and the production of reactive chemical species within the developed models of CA3/CA1 pyramidal neurons and DG granule cells of the rat hippocampus under exposure to different particles with the same dose. Similar distributions of the energy deposition events and concentration of some oxidative radical species were obtained in both the simplified and realistic neuron models.

  2. Low cost digester monitoring under realistic conditions: Rural use of biogas and digestate quality.

    PubMed

    Castro, L; Escalante, H; Jaimes-Estévez, J; Díaz, L J; Vecino, K; Rojas, G; Mantilla, L

    2017-09-01

    The purpose of this work was to assess the behaviour of anaerobic digestion of cattle manure in a rural digester under realistic conditions, and estimate the quality and properties of the digestate. The data obtained during monitoring indicated that the digester operation was stable without risk of inhibition. It produced an average of 0.85 Nm³ of biogas/d at 65.6% methane, providing an energy savings of 76%. In addition, the digestate contained high nutrient concentrations, which is an important feature of fertilizers. However, this method requires post-treatment due to the presence of pathogens.

  3. Using a Realist Research Methodology in Policy Analysis

    ERIC Educational Resources Information Center

    Lourie, Megan; Rata, Elizabeth

    2017-01-01

    The article describes the usefulness of a realist methodology in linking sociological theory to empirically obtained data through the development of a methodological device. Three layers of analysis were integrated: 1. the findings from a case study about Maori language education in New Zealand; 2. the identification and analysis of contradictions…

  4. CORKSCREW 2013 CORK study of children's realistic estimation of weight.

    PubMed

    Skrobo, Darko; Kelleher, Gemma

    2015-01-01

    In a resuscitation situation involving a child (age 1-15 years) it is crucial to obtain a weight, as most interventions and management depend on it. The APLS formula, '2×(age+4)', is taught via the APLS course and is widely used in Irish hospitals. As the prevalence of obesity is increasing, the accuracy of the formula has been questioned and a newer formula has been suggested, the Luscombe and Owens (LO) formula, '(3×age)+7'. The aim was to gather data on the weights and ages of the Cork paediatric population (ages 1-15 years) attending services at the Cork University Hospital (CUH), and to identify which of the two age-based weight estimation formulae has the best diagnostic accuracy. The setting was CUH, Ireland's only level one trauma centre. Data were collected retrospectively from charts in the Emergency Department, Paediatric Assessment Unit and the Paediatric wards of CUH. 3155 children aged 1-15 years were included in the study. There were 1344 girls and 1811 boys. The formula weight='2×(age+4)' underestimated children's weights by a mean of 20.3% (95% CI 19.7% to 20.9%) for the ages of 1-15 years. The LO formula weight='(3×age)+7' showed a mean underestimation of 4.0% (95% CI 3.3% to 4.6%) for the same age range. The LO formula has been validated in several studies and proven to be a superior age-based weight estimation formula in many western emergency departments. This study shows that the LO formula leads to less underestimation of weights in Irish children than the APLS formula. It is a simple, safe and more accurate age-based estimation formula that can be used over a large age range (1-15 years).
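    As a purely arithmetic illustration of the two formulae compared above (no study data involved), the snippet below evaluates both for a few example ages.

    ```python
    # The two age-based weight-estimation formulae compared in the study.
    def apls_weight(age_years):
        return 2 * (age_years + 4)          # APLS: 2 x (age + 4)

    def luscombe_owens_weight(age_years):
        return 3 * age_years + 7            # LO: (3 x age) + 7

    for age in (1, 5, 10, 15):
        print(age, apls_weight(age), luscombe_owens_weight(age))   # e.g. age 10: 28 kg vs 37 kg
    ```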

  5. NMR permeability estimators in 'chalk' carbonate rocks obtained under different relaxation times and MICP size scalings

    NASA Astrophysics Data System (ADS)

    Rios, Edmilson Helton; Figueiredo, Irineu; Moss, Adam Keith; Pritchard, Timothy Neil; Glassborow, Brent Anthony; Guedes Domingues, Ana Beatriz; Bagueira de Vasconcellos Azeredo, Rodrigo

    2016-07-01

    The effect of the selection of different nuclear magnetic resonance (NMR) relaxation times for permeability estimation is investigated for a set of fully brine-saturated rocks acquired from Cretaceous carbonate reservoirs in the North Sea and Middle East. Estimators that are obtained from the relaxation times based on the Pythagorean means are compared with estimators that are obtained from the relaxation times based on the concept of a cumulative saturation cut-off. Select portions of the longitudinal (T1) and transverse (T2) relaxation-time distributions are systematically evaluated by applying various cut-offs, analogous to the Winland-Pittman approach for mercury injection capillary pressure (MICP) curves. Finally, different approaches to matching the NMR and MICP distributions using different mean-based scaling factors are validated based on the performance of the related size-scaled estimators. The good results that were obtained demonstrate possible alternatives to the commonly adopted logarithmic mean estimator and reinforce the importance of NMR-MICP integration to improving carbonate permeability estimates.

  6. Developing a realistic-prototyping road user cost evaluation tool for FDOT.

    DOT National Transportation Integrated Search

    2008-12-31

    The objective of this project is to develop a realistic-prototyping RUC (Road User Cost) calculation tool that is user-friendly and utilizes a limited number of data inputs that are easy to use. The tool can help engineers to estimate RUC on specif...

  7. I-Love relations for incompressible stars and realistic stars

    NASA Astrophysics Data System (ADS)

    Chan, T. K.; Chan, AtMa P. O.; Leung, P. T.

    2015-02-01

    In spite of the diversity in the equations of state of nuclear matter, the recently discovered I-Love-Q relations [Yagi and Yunes, Science 341, 365 (2013), 10.1126/science.1236462], which relate the moment of inertia, tidal Love number (deformability), and the spin-induced quadrupole moment of compact stars, hold for various kinds of realistic neutron stars and quark stars. While the physical origin of such universality is still a current issue, the observation that the I-Love-Q relations of incompressible stars can well approximate those of realistic compact stars hints at a new direction to approach the problem. In this paper, by establishing recursive post-Minkowskian expansion for the moment of inertia and the tidal deformability of incompressible stars, we analytically derive the I-Love relation for incompressible stars and show that the so-obtained formula can be used to accurately predict the behavior of realistic compact stars from the Newtonian limit to the maximum mass limit.

  8. Finding Intrinsic and Extrinsic Viewing Parameters from a Single Realist Painting

    NASA Astrophysics Data System (ADS)

    Jordan, Tadeusz; Stork, David G.; Khoo, Wai L.; Zhu, Zhigang

    In this paper we studied the geometry of a three-dimensional tableau from a single realist painting - Scott Fraser’s Three way vanitas (2006). The tableau contains a carefully chosen complex arrangement of objects including a moth, egg, cup, and strand of string, glass of water, bone, and hand mirror. Each of the three plane mirrors presents a different view of the tableau from a virtual camera behind each mirror and symmetric to the artist’s viewing point. Our new contribution was to incorporate single-view geometric information extracted from the direct image of the wooden mirror frames in order to obtain the camera models of both the real camera and the three virtual cameras. Both the intrinsic and extrinsic parameters are estimated for the direct image and the images in three plane mirrors depicted within the painting.

  9. Children's Responses to Contrasting 'Realistic' Mathematics Problems: Just How Realistic Are Children Ready To Be?

    ERIC Educational Resources Information Center

    Cooper, Barry; Harries, Tony

    2002-01-01

    Analyzes 11-12-year-old English children's responses to two 'realistic' problems. Through a comparison of responses to two items, suggests that, given suitable 'realistic' problems, many children may be more willing and able to introduce realistic responses in a testing context than earlier research might lead one to expect. (Author/MM)

  10. Stability of individual loudness functions obtained by magnitude estimation and production

    NASA Technical Reports Server (NTRS)

    Hellman, R. P.

    1981-01-01

    A correlational analysis of individual magnitude estimation and production exponents at the same frequency is performed, as is an analysis of individual exponents produced in different sessions by the same procedure across frequency (250, 1000, and 3000 Hz). Taken as a whole, the results show that individual exponent differences do not decrease by counterbalancing magnitude estimation with magnitude production and that individual exponent differences remain stable over time despite changes in stimulus frequency. Further results show that although individual magnitude estimation and production exponents do not necessarily obey the .6 power law, it is possible to predict the slope of an equal-sensation function averaged for a group of listeners from individual magnitude estimation and production data. On the assumption that individual listeners with sensorineural hearing also produce stable and reliable magnitude functions, it is also shown that the slope of the loudness-recruitment function measured by magnitude estimation and production can be predicted for individuals with bilateral losses of long duration. Results obtained in normal and pathological ears thus suggest that individual listeners can produce loudness judgements that reveal, although indirectly, the input-output characteristic of the auditory system.
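    The ".6 power law" referred to above is, to our understanding, Stevens' power law for loudness with the exponent expressed against sound pressure; the relation below is that standard form, not a result from this paper.

    ```latex
    % Stevens' power law for loudness: exponent ~0.6 against sound pressure P,
    % equivalently ~0.3 against intensity I (since I is proportional to P^2).
    \[
      L = k\,P^{0.6} \;\propto\; I^{0.3}.
    \]
    ```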

  11. Probability or Reasoning: Current Thinking and Realistic Strategies for Improved Medical Decisions

    PubMed Central

    2017-01-01

    A prescriptive model approach in decision making could help achieve better diagnostic accuracy in clinical practice through methods that are less reliant on probabilistic assessments. Various prescriptive measures aimed at regulating factors that influence heuristics and clinical reasoning could support clinical decision-making process. Clinicians could avoid time-consuming decision-making methods that require probabilistic calculations. Intuitively, they could rely on heuristics to obtain an accurate diagnosis in a given clinical setting. An extensive literature review of cognitive psychology and medical decision-making theory was performed to illustrate how heuristics could be effectively utilized in daily practice. Since physicians often rely on heuristics in realistic situations, probabilistic estimation might not be a useful tool in everyday clinical practice. Improvements in the descriptive model of decision making (heuristics) may allow for greater diagnostic accuracy. PMID:29209469

  12. Probability or Reasoning: Current Thinking and Realistic Strategies for Improved Medical Decisions.

    PubMed

    Nantha, Yogarabindranath Swarna

    2017-11-01

    A prescriptive model approach in decision making could help achieve better diagnostic accuracy in clinical practice through methods that are less reliant on probabilistic assessments. Various prescriptive measures aimed at regulating factors that influence heuristics and clinical reasoning could support clinical decision-making process. Clinicians could avoid time-consuming decision-making methods that require probabilistic calculations. Intuitively, they could rely on heuristics to obtain an accurate diagnosis in a given clinical setting. An extensive literature review of cognitive psychology and medical decision-making theory was performed to illustrate how heuristics could be effectively utilized in daily practice. Since physicians often rely on heuristics in realistic situations, probabilistic estimation might not be a useful tool in everyday clinical practice. Improvements in the descriptive model of decision making (heuristics) may allow for greater diagnostic accuracy.

  13. Optical Sensing of the Fatigue Damage State of CFRP under Realistic Aeronautical Load Sequences

    PubMed Central

    Zuluaga-Ramírez, Pablo; Arconada, Álvaro; Frövel, Malte; Belenguer, Tomás; Salazar, Félix

    2015-01-01

    We present an optical sensing methodology to estimate the fatigue damage state of structures made of carbon fiber reinforced polymer (CFRP), by measuring variations in surface roughness. Variable amplitude loads (VAL), which represent realistic loads during aeronautical missions of fighter aircraft (FALSTAFF), have been applied to coupons until failure. Stiffness degradation and surface roughness variations have been measured during the life of the coupons, obtaining a Pearson correlation of 0.75 between both variables. The data were compared with a previous study for Constant Amplitude Load (CAL), obtaining similar results. Conclusions suggest that the surface roughness measured in strategic zones is a useful technique for structural health monitoring of CFRP structures, and that it is independent of the type of load applied. Surface roughness can be measured in the field by optical techniques such as speckle, confocal profilometers and interferometry, among others. PMID:25760056

  14. Realistic simplified gaugino-higgsino models in the MSSM

    NASA Astrophysics Data System (ADS)

    Fuks, Benjamin; Klasen, Michael; Schmiemann, Saskia; Sunder, Marthijn

    2018-03-01

    We present simplified MSSM models for light neutralinos and charginos with realistic mass spectra and realistic gaugino-higgsino mixing, that can be used in experimental searches at the LHC. The formerly used naive approach of defining mass spectra and mixing matrix elements manually and independently of each other does not yield genuine MSSM benchmarks. We suggest the use of less simplified, but realistic MSSM models, whose mass spectra and mixing matrix elements are the result of a proper matrix diagonalisation. We propose a novel strategy targeting the design of such benchmark scenarios, accounting for user-defined constraints in terms of masses and particle mixing. We apply it to the higgsino case and implement a scan in the four relevant underlying parameters {μ , tan β , M1, M2} for a given set of light neutralino and chargino masses. We define a measure for the quality of the obtained benchmarks, that also includes criteria to assess the higgsino content of the resulting charginos and neutralinos. We finally discuss the distribution of the resulting models in the MSSM parameter space as well as their implications for supersymmetric dark matter phenomenology.

  15. AGN neutrino flux estimates for a realistic hybrid model

    NASA Astrophysics Data System (ADS)

    Richter, S.; Spanier, F.

    2018-07-01

    Recent reports of possible correlations between high energy neutrinos observed by IceCube and Active Galactic Nuclei (AGN) activity sparked a burst of publications that attempt to predict the neutrino flux of these sources. However, often rather crude estimates are used to derive the neutrino rate from the observed photon spectra. In this work neutrino fluxes were computed in a wide parameter space. The starting point of the model was a representation of the full spectral energy distribution (SED) of 3C 279. The time-dependent hybrid model that was used for this study takes into account the full pγ reaction chain as well as proton synchrotron, electron-positron-pair cascades and the full SSC scheme. We compare our results to estimates frequently used in the literature. This makes it possible to identify regions in the parameter space for which such estimates are still valid and those in which they can produce significant errors. Furthermore, if estimates for the Doppler factor, magnetic field, proton and electron densities of a source exist, the expected IceCube detection rate is readily available.

  16. Deriving realistic source boundary conditions for a CFD simulation of concentrations in workroom air.

    PubMed

    Feigley, Charles E; Do, Thanh H; Khan, Jamil; Lee, Emily; Schnaufer, Nicholas D; Salzberg, Deborah C

    2011-05-01

    Computational fluid dynamics (CFD) is used increasingly to simulate the distribution of airborne contaminants in enclosed spaces for exposure assessment and control, but the importance of realistic boundary conditions is often not fully appreciated. In a workroom for manufacturing capacitors, full-shift samples for isoamyl acetate (IAA) were collected for 3 days at 16 locations, and velocities were measured at supply grilles and at various points near the source. Then, velocity and concentration fields were simulated by 3-dimensional steady-state CFD using 295K tetrahedral cells, the k-ε turbulence model, standard wall function, and convergence criteria of 10⁻⁶ for all scalars. Here, we demonstrate the need to represent boundary conditions accurately, especially emission characteristics at the contaminant source, and to obtain good agreement between observations and CFD results. Emission rates for each day were determined from six concentrations measured in the near field and one upwind using an IAA mass balance. The emission was initially represented as undiluted IAA vapor, but the concentrations estimated using CFD differed greatly from the measured concentrations. A second set of simulations was performed using the same IAA emission rates but a more realistic representation of the source. This yielded good agreement with measured values. Paying particular attention to the region with the highest worker exposure potential, within 1.3 m of the source center, the air speed and IAA concentrations estimated by CFD were not significantly different from the measured values (P = 0.92 and P = 0.67, respectively). Thus, careful consideration of source boundary conditions greatly improved agreement with the measured values.

  17. Maximum likelihood estimation for predicting the probability of obtaining variable shortleaf pine regeneration densities

    Treesearch

    Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin

    2003-01-01

    A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
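    The generic form of the logistic model described above, with the specified regeneration density entering through the covariates, is shown below; the symbols are generic, not the authors' fitted coefficients.

    ```latex
    % Generic logistic model: probability of obtaining regeneration at a specified
    % density d given covariates x; the dependent variable is 1 when the observed
    % density meets or exceeds d.
    \[
      p(d, \mathbf{x}) =
        \frac{1}{1 + \exp\!\bigl[-(\beta_0 + \beta_1 d + \boldsymbol{\beta}^{\top}\mathbf{x})\bigr]} .
    \]
    ```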

  18. Keeping It Real: How Realistic Does Realistic Fiction for Children Need to Be?

    ERIC Educational Resources Information Center

    O'Connor, Barbara

    2010-01-01

    O'Connor, an author of realistic fiction for children, shares her attempts to strike a balance between carefree, uncensored, authentic, realistic writing and age-appropriate writing. Of course, complicating that balancing act is the fact that what seems age-appropriate to her might not seem so to everyone. O'Connor suggests that while it may be…

  19. Method for obtaining structure and interactions from oriented lipid bilayers

    PubMed Central

    Lyatskaya, Yulia; Liu, Yufeng; Tristram-Nagle, Stephanie; Katsaras, John; Nagle, John F.

    2009-01-01

    Precise calculations are made of the scattering intensity I(q) from an oriented stack of lipid bilayers using a realistic model of fluctuations. The quantities of interest include the bilayer bending modulus Kc , the interbilayer interaction modulus B, and bilayer structure through the form factor F(qz). It is shown how Kc and B may be obtained from data at large qz where fluctuations dominate. Good estimates of F(qz) can be made over wide ranges of qz by using I(q) in q regions away from the peaks and for qr≠0 where details of the scattering domains play little role. Rough estimates of domain sizes can also be made from smaller qz data. Results are presented for data taken on fully hydrated, oriented DOPC bilayers in the Lα phase. These results illustrate the advantages of oriented samples compared to powder samples. PMID:11304287

  20. Simplification of an MCNP model designed for dose rate estimation

    NASA Astrophysics Data System (ADS)

    Laptev, Alexander; Perry, Robert

    2017-09-01

    A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.

  1. Obtaining Cue Rate Estimates for Some Mysticete Species using Existing Data

    DTIC Science & Technology

    2014-09-30

    primary focus is to obtain cue rates for humpback whales (Megaptera novaeangliae) off the California coast and on the PMRF range. To our knowledge, no... humpback whale cue rates have been calculated for these populations. Once a cue rate is estimated for the populations of humpback whales off the... rates for humpback whales on breeding grounds, in addition to average cue rates for other species of mysticete whales. Cue rates of several other

  2. Reliability of fish size estimates obtained from multibeam imaging sonar

    USGS Publications Warehouse

    Hightower, Joseph E.; Magowan, Kevin J.; Brown, Lori M.; Fox, Dewayne A.

    2013-01-01

    Multibeam imaging sonars have considerable potential for use in fisheries surveys because the video-like images are easy to interpret, and they contain information about fish size, shape, and swimming behavior, as well as characteristics of occupied habitats. We examined images obtained using a dual-frequency identification sonar (DIDSON) multibeam sonar for Atlantic sturgeon Acipenser oxyrinchus oxyrinchus, striped bass Morone saxatilis, white perch M. americana, and channel catfish Ictalurus punctatus of known size (20–141 cm) to determine the reliability of length estimates. For ranges up to 11 m, percent measurement error (sonar estimate – total length)/total length × 100 varied by species but was not related to the fish's range or aspect angle (orientation relative to the sonar beam). Least-square mean percent error was significantly different from 0.0 for Atlantic sturgeon (x̄  =  −8.34, SE  =  2.39) and white perch (x̄  = 14.48, SE  =  3.99) but not striped bass (x̄  =  3.71, SE  =  2.58) or channel catfish (x̄  = 3.97, SE  =  5.16). Underestimating lengths of Atlantic sturgeon may be due to difficulty in detecting the snout or the longer dorsal lobe of the heterocercal tail. White perch was the smallest species tested, and it had the largest percent measurement errors (both positive and negative) and the lowest percentage of images classified as good or acceptable. Automated length estimates for the four species using Echoview software varied with position in the view-field. Estimates tended to be low at more extreme azimuthal angles (fish's angle off-axis within the view-field), but mean and maximum estimates were highly correlated with total length. Software estimates also were biased by fish images partially outside the view-field and when acoustic crosstalk occurred (when a fish perpendicular to the sonar and at relatively close range is detected in the side lobes of adjacent beams). These sources of

  3. A fully redundant double difference algorithm for obtaining minimum variance estimates from GPS observations

    NASA Technical Reports Server (NTRS)

    Melbourne, William G.

    1986-01-01

    In double differencing a regression system obtained from concurrent Global Positioning System (GPS) observation sequences, one either undersamples the system to avoid introducing colored measurement statistics, or one fully samples the system incurring the resulting non-diagonal covariance matrix for the differenced measurement errors. A suboptimal estimation result will be obtained in the undersampling case and will also be obtained in the fully sampled case unless the color noise statistics are taken into account. The latter approach requires a least squares weighting matrix derived from inversion of a non-diagonal covariance matrix for the differenced measurement errors instead of inversion of the customary diagonal one associated with white noise processes. Presented is the so-called fully redundant double differencing algorithm for generating a weighted double differenced regression system that yields equivalent estimation results, but features for certain cases a diagonal weighting matrix even though the differenced measurement error statistics are highly colored.
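
    The weighting-matrix idea described above can be sketched with a generic generalized-least-squares estimator, in which the non-diagonal covariance of the differenced measurement errors supplies the weights. This is an illustrative sketch, not the fully redundant double differencing algorithm itself; the design matrix, observations, and covariance below are made up.

    ```python
    import numpy as np

    # Generalized least squares with a non-diagonal measurement covariance, as
    # required when fully sampled double differences have colored (correlated)
    # errors. All matrices below are illustrative only.
    def gls_estimate(H, y, R):
        """Minimum-variance estimate: x = (H^T R^-1 H)^-1 H^T R^-1 y."""
        W = np.linalg.inv(R)              # weighting matrix from the error covariance
        return np.linalg.solve(H.T @ W @ H, H.T @ W @ y)

    H = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
    y = np.array([1.1, 2.0, 0.9])
    R = np.array([[2.0, 1.0, 0.0],        # off-diagonal terms arise from
                  [1.0, 2.0, 1.0],        # differencing against a common
                  [0.0, 1.0, 2.0]])       # reference measurement
    print(gls_estimate(H, y, R))
    ```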

  4. Realistic Covariance Prediction for the Earth Science Constellation

    NASA Technical Reports Server (NTRS)

    Duncan, Matthew; Long, Anne

    2006-01-01

    Routine satellite operations for the Earth Science Constellation (ESC) include collision risk assessment between members of the constellation and other orbiting space objects. One component of the risk assessment process is computing the collision probability between two space objects. The collision probability is computed using Monte Carlo techniques as well as by numerically integrating relative state probability density functions. Each algorithm takes as inputs state vector and state vector uncertainty information for both objects. The state vector uncertainty information is expressed in terms of a covariance matrix. The collision probability computation is only as good as the inputs. Therefore, to obtain a collision calculation that is a useful decision-making metric, realistic covariance matrices must be used as inputs to the calculation. This paper describes the process used by the NASA/Goddard Space Flight Center's Earth Science Mission Operations Project to generate realistic covariance predictions for three of the Earth Science Constellation satellites: Aqua, Aura and Terra.
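
    A minimal Monte Carlo sketch of the collision-probability computation described above is given below; the state vectors, covariances, and hard-body radius are invented for illustration and are not ESC operational values.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def collision_probability(mu1, P1, mu2, P2, hard_body_radius_km, n=200_000):
        """Fraction of sampled relative positions within the combined hard-body radius."""
        x1 = rng.multivariate_normal(mu1, P1, size=n)   # position samples, object 1
        x2 = rng.multivariate_normal(mu2, P2, size=n)   # position samples, object 2
        miss = np.linalg.norm(x1 - x2, axis=1)          # sampled miss distances
        return np.mean(miss < hard_body_radius_km)

    mu1 = np.array([0.0, 0.0, 0.0])                     # km, mean position at closest approach
    mu2 = np.array([0.2, 0.1, 0.0])
    P1 = np.diag([0.05, 0.05, 0.02]) ** 2               # km^2, position covariance
    P2 = np.diag([0.08, 0.04, 0.03]) ** 2
    print(collision_probability(mu1, P1, mu2, P2, hard_body_radius_km=0.02))
    ```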

  5. Realistic Solar Surface Convection Simulations

    NASA Technical Reports Server (NTRS)

    Stein, Robert F.; Nordlund, Ake

    2000-01-01

    We perform essentially parameter free simulations with realistic physics of convection near the solar surface. We summarize the physics that is included and compare the simulation results with observations. Excellent agreement is obtained for the depth of the convection zone, the p-mode frequencies, the p-mode excitation rate, the distribution of the emergent continuum intensity, and the profiles of weak photospheric lines. We describe how solar convection is nonlocal. It is driven from a thin surface thermal boundary layer where radiative cooling produces low entropy gas which forms the cores of the downdrafts in which most of the buoyancy work occurs. We show that turbulence and vorticity are mostly confined to the intergranular lanes and underlying downdrafts. Finally, we illustrate our current work on magneto-convection.

  6. Realistic page-turning of electronic books

    NASA Astrophysics Data System (ADS)

    Fan, Chaoran; Li, Haisheng; Bai, Yannan

    2014-01-01

    Booming electronic books (e-books), as an extension of the paper book, are popular with readers. Recently, much effort has been put into realistic page-turning simulation for e-books to improve the reading experience. This paper presents a new 3D page-turning simulation approach, which employs piecewise time-dependent cylindrical surfaces to describe the turning page and constructs a smooth transition method between the time-dependent cylinders. The page-turning animation is produced by sequentially mapping the turning page onto cylinders with different radii and positions. Compared to previous approaches, our method imitates various effects efficiently and obtains a more natural animation of the turning page.

  7. Realistic clocks, universal decoherence, and the black hole information paradox.

    PubMed

    Gambini, Rodolfo; Porto, Rafael A; Pullin, Jorge

    2004-12-10

    Ordinary quantum mechanics is formulated on the basis of the existence of an ideal classical clock external to the system under study. This is clearly an idealization. As emphasized originally by Salecker and Wigner and more recently by others, there exist limits in nature to how "classical" even the best possible clock can be. With realistic clocks, quantum mechanics ceases to be unitary and a fundamental mechanism of decoherence of quantum states arises. We estimate the rate of the universal loss of unitarity using optimal realistic clocks. In particular, we observe that the rate is rapid enough to eliminate the black hole information puzzle: all information is lost through the fundamental decoherence before the black hole can evaporate. This improves on a previous calculation we presented with a suboptimal clock in which only part of the information was lost by the time of evaporation.

  8. Use of NMR logging to obtain estimates of hydraulic conductivity in the High Plains aquifer, Nebraska, USA

    USGS Publications Warehouse

    Dlubac, Katherine; Knight, Rosemary; Song, Yi-Qiao; Bachman, Nate; Grau, Ben; Cannia, Jim; Williams, John

    2013-01-01

    Hydraulic conductivity (K) is one of the most important parameters of interest in groundwater applications because it quantifies the ease with which water can flow through an aquifer material. Hydraulic conductivity is typically measured by conducting aquifer tests or wellbore flow (WBF) logging. Of interest in our research is the use of proton nuclear magnetic resonance (NMR) logging to obtain information about water-filled porosity and pore space geometry, the combination of which can be used to estimate K. In this study, we acquired a suite of advanced geophysical logs, aquifer tests, WBF logs, and sidewall cores at the field site in Lexington, Nebraska, which is underlain by the High Plains aquifer. We first used two empirical equations developed for petroleum applications to predict K from NMR logging data: the Schlumberger Doll Research equation (KSDR) and the Timur-Coates equation (KT-C), with the standard empirical constants determined for consolidated materials. We upscaled our NMR-derived K estimates to the scale of the WBF-logging K(KWBF-logging) estimates for comparison. All the upscaled KT-C estimates were within an order of magnitude of KWBF-logging and all of the upscaled KSDR estimates were within 2 orders of magnitude of KWBF-logging. We optimized the fit between the upscaled NMR-derived K and KWBF-logging estimates to determine a set of site-specific empirical constants for the unconsolidated materials at our field site. We conclude that reliable estimates of K can be obtained from NMR logging data, thus providing an alternate method for obtaining estimates of K at high levels of vertical resolution.
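
    For orientation, the two empirical relations named above can be sketched in their commonly cited forms. The constants and unit conventions vary between sources and were re-fitted for the unconsolidated materials in this study, so the defaults below are placeholders, not the site-specific constants.

    ```python
    # Commonly cited forms of the SDR and Timur-Coates relations (constants
    # left as parameters; unit conventions differ between sources, and the
    # study fits site-specific values, so these defaults are placeholders).
    def k_sdr(phi, t2_ml, c_sdr=4.0):
        """SDR form: k proportional to phi^4 * T2ML^2."""
        return c_sdr * phi**4 * t2_ml**2

    def k_timur_coates(phi_percent, ffi, bvi, c_tc=10.0):
        """Timur-Coates form: k proportional to ((phi/C)^2 * FFI/BVI)^2."""
        return ((phi_percent / c_tc) ** 2 * (ffi / bvi)) ** 2

    # Illustrative inputs (porosity as a fraction for SDR, percent for Timur-Coates):
    print(k_sdr(phi=0.30, t2_ml=250.0))
    print(k_timur_coates(phi_percent=30.0, ffi=0.25, bvi=0.05))
    ```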

  9. More realistic power estimation for new user, active comparator studies: an empirical example.

    PubMed

    Gokhale, Mugdha; Buse, John B; Pate, Virginia; Marquis, M Alison; Stürmer, Til

    2016-04-01

    Pharmacoepidemiologic studies are often expected to be sufficiently powered to study rare outcomes, but there is sequential loss of power with implementation of study design options minimizing bias. We illustrate this using a study comparing pancreatic cancer incidence after initiating dipeptidyl-peptidase-4 inhibitors (DPP-4i) versus thiazolidinediones or sulfonylureas. We identified Medicare beneficiaries with at least one claim of DPP-4i or comparators during 2007-2009 and then applied the following steps: (i) exclude prevalent users, (ii) require a second prescription of the same drug, (iii) exclude prevalent cancers, (iv) exclude patients age <66 years, and (v) censor for treatment changes during follow-up. Power to detect hazard ratios ≥ 2.0 (an effect measure strongly driven by the number of events) estimated after step 5 was compared with the naive power estimated prior to step 1. There were 19,388 and 28,846 DPP-4i and thiazolidinedione initiators during 2007-2009. The number of drug initiators dropped most after requiring a second prescription, outcomes dropped most after excluding patients with prevalent cancer, and person-time dropped most after requiring a second prescription and as-treated censoring. The naive power (>99%) was considerably higher than the power obtained after the final step (~75%). In designing new-user active-comparator studies, one should be mindful of how steps minimizing bias affect sample size, number of outcomes and person-time. While actual numbers will depend on specific settings, application of generic losses in percentages will improve estimates of power compared with the naive approach that mostly ignores steps taken to increase validity. Copyright © 2015 John Wiley & Sons, Ltd.
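
    The dependence of power on the number of outcome events, which drives the sequential loss described above, can be illustrated with a standard approximation (Schoenfeld's formula) for a two-group hazard-ratio comparison; the event counts below are hypothetical and not taken from the study.

    ```python
    from math import log, sqrt
    from scipy.stats import norm

    def power_log_rank(n_events, hr, alloc=0.5, alpha=0.05):
        """Approximate power to detect hazard ratio `hr` given `n_events` outcome events."""
        z_alpha = norm.ppf(1 - alpha / 2)
        return norm.cdf(sqrt(n_events * alloc * (1 - alloc)) * abs(log(hr)) - z_alpha)

    print(power_log_rank(n_events=300, hr=2.0))  # ample events: power near 1
    print(power_log_rank(n_events=60, hr=2.0))   # after exclusions/censoring: power drops
    ```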

  10. Measurable realistic image-based 3D mapping

    NASA Astrophysics Data System (ADS)

    Liu, W.; Wang, J.; Wang, J. J.; Ding, W.; Almagbile, A.

    2011-12-01

    Maps with 3D visual models are becoming a remarkable feature of 3D map services. High-resolution image data are obtained for the construction of 3D visualized models. A 3D map not only provides the capabilities of 3D measurement and knowledge mining, but also provides a virtual experience of places of interest, as demonstrated in Google Earth. Applications of 3D maps are expanding into the areas of architecture, property management, and urban environment monitoring. However, the reconstruction of high-quality 3D models is time consuming and requires robust hardware and powerful software to handle the enormous amount of data. This is especially true for the automatic construction of 3D models and the representation of complicated surfaces, which still need improvements in visualisation techniques. The shortcoming of 3D model-based maps is the limited detailed coverage, since a user can only view and measure objects that are already modelled in the virtual environment. This paper proposes and demonstrates a 3D map concept that is realistic and image-based, and that enables geometric measurements and geo-location services. Additionally, image-based 3D maps provide more detailed information about the real world than 3D model-based maps. The image-based 3D maps use geo-referenced stereo images or panoramic images. The geometric relationships between objects in the images can be resolved from the geometric model of the stereo images. The panoramic function makes 3D maps more interactive with users and also creates an interesting immersive experience. Unmeasurable image-based 3D maps already exist, such as Google Street View, but they only provide virtual experiences in the form of photos; topographic and terrain attributes, such as shapes and heights, are omitted. This paper also discusses the potential for using a low-cost land Mobile Mapping System (MMS) to implement realistic image 3D mapping, and evaluates the positioning accuracy that a measureable

  11. Realistic vs sudden turn-on of natural incoherent light: Coherences and dynamics in molecular excitation and internal conversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grinev, Timur; Brumer, Paul

    2015-12-28

    Molecular excitation with incoherent light is examined using realistic turn-on time scales, and results are compared to those obtained via commonly used sudden turn-on, or pulses. Two significant results are obtained. First, in contrast to prior studies involving sudden turn-on, realistic turn-on is shown to lead to stationary coherences for natural turn-on time scales. Second, the time to reach the final stationary mixed state, known to result from incoherent excitation, is shown to depend directly on the inverse of the molecular energy level spacings, in both sudden and realistic turn-on cases. The S0 → S2/S1 internal conversion process in pyrazine is used as an example throughout. Implications for studies of natural light harvesting systems are noted.

  12. Probabilities and statistics for backscatter estimates obtained by a scatterometer with applications to new scatterometer design data

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    The values of the Normalized Radar Backscattering Cross Section (NRCS), sigma (o), obtained by a scatterometer are random variables whose variance is a known function of the expected value. The probability density function can be obtained from the normal distribution. Models for the expected value obtain it as a function of the properties of the waves on the ocean and the winds that generated the waves. Point estimates of the expected value were found from various statistics given the parameters that define the probability density function for each value. Random intervals were derived with a preassigned probability of containing that value. A statistical test to determine whether or not successive values of sigma (o) are truly independent was derived. The maximum likelihood estimates for wind speed and direction were found, given a model for backscatter as a function of the properties of the waves on the ocean. These estimates are biased as a result of the terms in the equation that involve natural logarithms, and calculations of the point estimates of the maximum likelihood values are used to show that the contributions of the logarithmic terms are negligible and that the terms can be omitted.

  13. Simplified realistic human head model for simulating Tumor Treating Fields (TTFields).

    PubMed

    Wenger, Cornelia; Bomzon, Ze'ev; Salvador, Ricardo; Basser, Peter J; Miranda, Pedro C

    2016-08-01

    Tumor Treating Fields (TTFields) are alternating electric fields in the intermediate frequency range (100-300 kHz) of low intensity (1-3 V/cm). TTFields are an anti-mitotic treatment against solid tumors that is approved for Glioblastoma Multiforme (GBM) patients. These electric fields are induced non-invasively by transducer arrays placed directly on the patient's scalp. Cell culture experiments showed that treatment efficacy is dependent on the induced field intensity. In clinical practice, software called NovoTal™ uses head measurements to estimate the optimal array placement to maximize the electric field delivery to the tumor. Computational studies predict an increase in the tumor's electric field strength when adapting transducer arrays to its location. Ideally, a personalized head model could be created for each patient to calculate the electric field distribution for the specific situation. Thus, the optimal transducer layout could be inferred from field calculation rather than from distance measurements. Nonetheless, creating realistic head models of patients is time-consuming and often needs user interaction, because automated image segmentation is prone to failure. This study presents a first approach to creating simplified head models consisting of convex hulls of the tissue layers. The model is able to account for anisotropic conductivity in the cortical tissues by using a tensor representation estimated from Diffusion Tensor Imaging. The induced electric field distribution is compared in the simplified and realistic head models. The average field intensities in the brain and tumor are generally slightly higher in the realistic head model, with a maximal ratio of 114% for a simplified model with reasonable layer thicknesses. Thus, the present pipeline is a fast and efficient means towards personalized head models with less complexity involved in characterizing tissue interfaces, while enabling accurate predictions of electric field distribution.

  14. IMM estimator with out-of-sequence measurements

    NASA Astrophysics Data System (ADS)

    Bar-Shalom, Yaakov; Chen, Huimin

    2004-08-01

    In multisensor tracking systems that operate in a centralized information processing architecture, measurements from the same target obtained by different sensors can arrive at the processing center out of sequence. In order to avoid either a delay in the output or the need for reordering and reprocessing an entire sequence of measurements, such measurements have to be processed as out-of-sequence measurements (OOSM). Recent work developed procedures for incorporating OOSMs into a Kalman filter (KF). Since the state of the art tracker for real (maneuvering) targets is the Interacting Multiple Model (IMM) estimator, this paper presents the algorithm for incorporating OOSMs into an IMM estimator. Both data association and estimation are considered. Simulation results are presented for two realistic problems using measurements from two airborne GMTI sensors. It is shown that the proposed algorithm for incorporating OOSMs into an IMM estimator yields practically the same performance as the reordering and in-sequence reprocessing of the measurements.

  15. Preliminary estimation of the realistic optimum temperature for vegetation growth in China.

    PubMed

    Cui, Yaoping

    2013-07-01

    The estimation of optimum temperature of vegetation growth is very useful for a wide range of applications such as agriculture and climate change studies. Thermal conditions substantially affect vegetation growth. In this study, the normalized difference vegetation index (NDVI) and daily temperature data set from 1982 to 2006 for China were used to examine optimum temperature of vegetation growth. Based on a simple analysis of ecological amplitude and Shelford's law of tolerance, a scientific framework for calculating the optimum temperature was constructed. The optimum temperature range and referenced optimum temperature (ROT) of terrestrial vegetation were obtained and explored over different eco-geographical regions of China. The results showed that the relationship between NDVI and air temperature was significant over almost all of China, indicating that terrestrial vegetation growth was closely related to thermal conditions. ROTs were different in various regions. The lowest ROT, about 7.0 °C, occurred in the Qinghai-Tibet Plateau, while the highest ROT, more than 22.0 °C, occurred in the middle and lower reaches of the Yangtze River and the Southern China region.
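
    A generic way to locate an optimum temperature range from paired NDVI and temperature samples, in the spirit of the tolerance-based framework described above (though not the paper's exact procedure), is to bin by temperature and keep the bins whose mean NDVI is close to the maximum; the data below are synthetic.

    ```python
    # Generic sketch (not the paper's exact framework): bin samples by
    # temperature, average NDVI per bin, and keep bins near the NDVI maximum.
    import numpy as np

    def optimum_temperature_range(temp_c, ndvi, bin_width=1.0, tolerance=0.95):
        bins = np.arange(temp_c.min(), temp_c.max() + bin_width, bin_width)
        idx = np.digitize(temp_c, bins)
        centers, means = [], []
        for i in np.unique(idx):
            centers.append(bins[i - 1] + bin_width / 2)
            means.append(ndvi[idx == i].mean())
        centers, means = np.array(centers), np.array(means)
        near_max = means >= tolerance * means.max()
        return centers[near_max].min(), centers[near_max].max()

    # Synthetic example with an NDVI response peaking near 22 degrees C:
    rng = np.random.default_rng(1)
    t = rng.uniform(0, 35, 5000)
    v = np.exp(-((t - 22.0) / 8.0) ** 2) + rng.normal(0, 0.05, t.size)
    print(optimum_temperature_range(t, v))
    ```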

  16. Preliminary Estimation of the Realistic Optimum Temperature for Vegetation Growth in China

    NASA Astrophysics Data System (ADS)

    Cui, Yaoping

    2013-07-01

    The estimation of optimum temperature of vegetation growth is very useful for a wide range of applications such as agriculture and climate change studies. Thermal conditions substantially affect vegetation growth. In this study, the normalized difference vegetation index (NDVI) and daily temperature data set from 1982 to 2006 for China were used to examine optimum temperature of vegetation growth. Based on a simple analysis of ecological amplitude and Shelford's law of tolerance, a scientific framework for calculating the optimum temperature was constructed. The optimum temperature range and referenced optimum temperature (ROT) of terrestrial vegetation were obtained and explored over different eco-geographical regions of China. The results showed that the relationship between NDVI and air temperature was significant over almost all of China, indicating that terrestrial vegetation growth was closely related to thermal conditions. ROTs were different in various regions. The lowest ROT, about 7.0 °C, occurred in the Qinghai-Tibet Plateau, while the highest ROT, more than 22.0 °C, occurred in the middle and lower reaches of the Yangtze River and the Southern China region.

  17. Toward developing more realistic groundwater models using big data

    NASA Astrophysics Data System (ADS)

    Vahdat Aboueshagh, H.; Tsai, F. T. C.; Bhatta, D.; Paudel, K.

    2017-12-01

    Rich geological data is the backbone of developing realistic groundwater models for groundwater resources management. However, constructing realistic groundwater models can be challenging due to inconsistency between different sources of geological, hydrogeological and geophysical data and difficulty in processing big data to characterize the subsurface environment. This study develops a framework to utilize a big geological dataset to create a groundwater model for the Chicot Aquifer in southwestern Louisiana, which borders the Gulf of Mexico to the south. The Chicot Aquifer is the principal source of fresh water in southwest Louisiana, underlying an area of about 9,000 square miles. Agriculture is the largest groundwater consumer in this region, and overpumping has caused significant groundwater head decline and saltwater intrusion from the Gulf and deep formations. A hydrostratigraphy model was constructed using around 29,000 electrical logs and drillers' logs as well as screen lengths of pumping wells, through a natural neighbor interpolation method. These sources of information have different weights in terms of accuracy and trustworthiness. A data prioritization procedure was developed to filter untrustworthy log information, eliminate redundant data, and establish consensus among the various lithological information. The constructed hydrostratigraphy model shows 40% sand facies, which is consistent with the well log data. The hydrostratigraphy model confirms outcrop areas of the Chicot Aquifer in the north of the study region. The aquifer sand formation thins eastward to merge into the Atchafalaya River alluvial aquifer and coalesces with the underlying Evangeline aquifer. A grid generator was used to convert the hydrostratigraphy model into a MODFLOW grid with 57 layers. A Chicot groundwater model was constructed using the available hydrologic and hydrogeological data for 2004-2015. Pumping rates for irrigation wells were estimated using the crop type and acreage

  18. Improving the accuracy of S02 column densities and emission rates obtained from upward-looking UV-spectroscopic measurements of volcanic plumes by taking realistic radiative transfer into account

    USGS Publications Warehouse

    Kern, Christoph; Deutschmann, Tim; Werner, Cynthia; Sutton, A. Jeff; Elias, Tamar; Kelly, Peter J.

    2012-01-01

    Sulfur dioxide (SO2) is monitored using ultraviolet (UV) absorption spectroscopy at numerous volcanoes around the world due to its importance as a measure of volcanic activity and a tracer for other gaseous species. Recent studies have shown that failure to take realistic radiative transfer into account during the spectral retrieval of the collected data often leads to large errors in the calculated emission rates. Here, the framework for a new evaluation method which couples a radiative transfer model to the spectral retrieval is described. In it, absorption spectra are simulated, and atmospheric parameters are iteratively updated in the model until a best match to the measurement data is achieved. The evaluation algorithm is applied to two example Differential Optical Absorption Spectroscopy (DOAS) measurements conducted at Kilauea volcano (Hawaii). The resulting emission rates were 20 and 90% higher than those obtained with a conventional DOAS retrieval performed between 305 and 315 nm, respectively, depending on the different SO2 and aerosol loads present in the volcanic plume. The internal consistency of the method was validated by measuring and modeling SO2 absorption features in a separate wavelength region around 375 nm and comparing the results. Although additional information about the measurement geometry and atmospheric conditions is needed in addition to the acquired spectral data, this method for the first time provides a means of taking realistic three-dimensional radiative transfer into account when analyzing UV-spectral absorption measurements of volcanic SO2 plumes.

  19. The Electrostatic Instability for Realistic Pair Distributions in Blazar/EBL Cascades

    NASA Astrophysics Data System (ADS)

    Vafin, S.; Rafighi, I.; Pohl, M.; Niemiec, J.

    2018-04-01

    This work revisits the electrostatic instability for blazar-induced pair beams propagating through the intergalactic medium (IGM) using linear analysis and PIC simulations. We study the impact of the realistic distribution function of pairs resulting from the interaction of high-energy gamma-rays with the extragalactic background light. We present analytical and numerical calculations of the linear growth rate of the instability for the arbitrary orientation of wave vectors. Our results explicitly demonstrate that the finite angular spread of the beam dramatically affects the growth rate of the waves, leading to the fastest growth for wave vectors quasi-parallel to the beam direction and a growth rate at oblique directions that is only a factor of 2–4 smaller compared to the maximum. To study the nonlinear beam relaxation, we performed PIC simulations that take into account a realistic wide-energy distribution of beam particles. The parameters of the simulated beam-plasma system provide an adequate physical picture that can be extrapolated to realistic blazar-induced pairs. In our simulations, the beam loses only 1% of its energy, and we analytically estimate that the beam would lose its total energy over about 100 simulation times. An analytical scaling is then used to extrapolate the parameters of realistic blazar-induced pair beams. We find that they can dissipate their energy slightly faster by the electrostatic instability than through inverse-Compton scattering. The uncertainties arising from, e.g., details of the primary gamma-ray spectrum are too large to make firm statements for individual blazars, and an analysis based on their specific properties is required.

  20. Accuracy of patient-specific organ dose estimates obtained using an automated image segmentation algorithm.

    PubMed

    Schmidt, Taly Gilat; Wang, Adam S; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-10-01

    The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors.
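
    The dose comparison described above reduces, per organ, to averaging a Monte Carlo dose map over a segmentation mask; a minimal sketch with synthetic arrays (hypothetical shapes and values, not the study's data) follows.

    ```python
    # Mean organ dose as the average of a dose map over a segmentation mask,
    # compared between an "expert" and an "automated" mask. Arrays are synthetic.
    import numpy as np

    def mean_organ_dose(dose_map, organ_mask):
        """Mean dose over the voxels labeled as the organ."""
        return dose_map[organ_mask].mean()

    rng = np.random.default_rng(0)
    dose_map = rng.gamma(shape=2.0, scale=5.0, size=(64, 64, 32))    # synthetic dose map
    expert = np.zeros(dose_map.shape, dtype=bool)
    expert[20:40, 20:40, 10:20] = True
    auto = np.zeros_like(expert)
    auto[21:41, 19:39, 10:20] = True                                 # slightly shifted boundary

    d_expert = mean_organ_dose(dose_map, expert)
    d_auto = mean_organ_dose(dose_map, auto)
    print(100 * (d_auto - d_expert) / d_expert)                      # percent error in mean dose
    ```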

  1. Accuracy of patient-specific organ dose estimates obtained using an automated image segmentation algorithm

    PubMed Central

    Schmidt, Taly Gilat; Wang, Adam S.; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-01-01

    Abstract. The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors. PMID:27921070

  2. ON THE MAGNETIC FIELD OF PULSARS WITH REALISTIC NEUTRON STAR CONFIGURATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belvedere, R.; Rueda, Jorge A.; Ruffini, R., E-mail: riccardo.belvedere@icra.it, E-mail: jorge.rueda@icra.it, E-mail: ruffini@icra.it

    2015-01-20

    We have recently developed a neutron star model fulfilling global and not local charge neutrality, both in the static and in the uniformly rotating cases. The model is described by the coupled Einstein-Maxwell-Thomas-Fermi equations, in which all fundamental interactions are accounted for in the framework of general relativity and relativistic mean field theory. Uniform rotation is introduced following Hartle's formalism. We show that the use of realistic parameters of rotating neutron stars, obtained from numerical integration of the self-consistent axisymmetric general relativistic equations of equilibrium, leads to values of the magnetic field and radiation efficiency of pulsars that are very different from estimates based on fiducial parameters that assume a neutron star mass M = 1.4 M☉, radius R = 10 km, and moment of inertia I = 10^45 g cm^2. In addition, we compare and contrast the magnetic field inferred from the traditional Newtonian rotating magnetic dipole model with respect to the one obtained from its general relativistic analog, which takes into account the effect of the finite size of the source. We apply these considerations to the specific high-magnetic field pulsar class and show that, indeed, all of these sources can be described as canonical pulsars driven by the rotational energy of the neutron star, and have magnetic fields lower than the quantum critical field for any value of the neutron star mass.

  3. Single point estimation of phenytoin dosing: a reappraisal.

    PubMed

    Koup, J R; Gibaldi, M; Godolphin, W

    1981-11-01

    A previously proposed method for estimation of phenytoin dosing requirement using a single serum sample obtained 24 hours after intravenous loading dose (18 mg/Kg) has been re-evaluated. Using more realistic values for the volume of distribution of phenytoin (0.4 to 1.2 L/Kg), simulations indicate that the proposed method will fail to consistently predict dosage requirements. Additional simulations indicate that two samples obtained during the 24 hour interval following the iv loading dose could be used to more reliably predict phenytoin dose requirement. Because of the nonlinear relationship which exists between phenytoin dose administration rate (RO) and the mean steady state serum concentration (CSS), small errors in prediction of the required RO result in much larger errors in CSS.
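
    The nonlinearity referred to above is usually represented by the Michaelis-Menten relationship commonly applied to phenytoin, RO = Vmax·CSS/(Km + CSS); the sketch below uses illustrative parameter values (not the paper's) to show how a small error in the predicted RO propagates into a much larger error in CSS.

    ```python
    # Michaelis-Menten form commonly applied to phenytoin:
    # RO = Vmax*Css/(Km + Css), hence Css = Km*RO/(Vmax - RO).
    # Parameter values are illustrative only, not from the paper.
    def css_from_dose_rate(ro_mg_per_day, vmax_mg_per_day=500.0, km_mg_per_l=4.0):
        return km_mg_per_l * ro_mg_per_day / (vmax_mg_per_day - ro_mg_per_day)

    target_ro = 400.0                            # mg/day chosen to hit a target Css
    print(css_from_dose_rate(target_ro))         # ~16 mg/L
    print(css_from_dose_rate(target_ro * 1.05))  # a 5% higher RO gives ~21 mg/L,
    # i.e. a roughly 30% increase in the predicted steady-state concentration.
    ```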

  4. Active and realistic passive marijuana exposure tested by three immunoassays and GC/MS in urine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mule, S.J.; Lomax, P.; Gross, S.J.

    Human urine samples obtained before and after active and passive exposure to marijuana were analyzed by immune kits (Roche, Amersham, and Syva) and gas chromatography/mass spectrometry (GC/MS). Seven of eight subjects were positive for the entire five-day test period with one immune kit. The latter correlated with GC/MS in 98% of the samples. Passive inhalation experiments under conditions likely to reflect realistic exposure resulted consistently in less than 10 ng/mL of cannabinoids. The 10-100-ng/mL cannabinoid concentration range essential for detection of occasional and moderate marijuana users is thus unaffected by realistic passive inhalation.

  5. Milky Way mass and potential recovery using tidal streams in a realistic halo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonaca, Ana; Geha, Marla; Küpper, Andreas H. W.

    2014-11-01

    We present a new method for determining the Galactic gravitational potential based on forward modeling of tidal stellar streams. We use this method to test the performance of smooth and static analytic potentials in representing realistic dark matter halos, which have substructure and are continually evolving by accretion. Our FAST-FORWARD method uses a Markov Chain Monte Carlo algorithm to compare, in six-dimensional phase space, an 'observed' stream to models created in trial analytic potentials. We analyze a large sample of streams that evolved in the Via Lactea II (VL2) simulation, which represents a realistic Galactic halo potential. The recovered potential parameters are in agreement with the best fit to the global, present-day VL2 potential. However, merely assuming an analytic potential limits the dark matter halo mass measurement to an accuracy of 5%-20%, depending on the choice of analytic parameterization. Collectively, the mass estimates using streams from our sample reach this fundamental limit, but individually they can be highly biased. Individual streams can both under- and overestimate the mass, and the bias is progressively worse for those with smaller perigalacticons, motivating the search for tidal streams at galactocentric distances larger than 70 kpc. We estimate that the assumption of a static and smooth dark matter potential in modeling of the GD-1- and Pal5-like streams introduces an error of up to 50% in the Milky Way mass estimates.

  6. Precise attitude rate estimation using star images obtained by mission telescope for satellite missions

    NASA Astrophysics Data System (ADS)

    Inamori, Takaya; Hosonuma, Takayuki; Ikari, Satoshi; Saisutjarit, Phongsatorn; Sako, Nobutada; Nakasuka, Shinichi

    2015-02-01

    Recently, small satellites have been employed in various satellite missions such as astronomical observation and remote sensing. During these missions, the attitudes of small satellites should be stabilized to a higher accuracy to obtain accurate science data and images. To achieve precise attitude stabilization, these small satellites should estimate their attitude rate under the strict constraints of mass, space, and cost. This research presents a new method for small satellites to precisely estimate angular rate using star blurred images by employing a mission telescope to achieve precise attitude stabilization. In this method, the angular velocity is estimated by assessing the quality of a star image, based on how blurred it appears to be. Because the proposed method utilizes existing mission devices, a satellite does not require additional precise rate sensors, which makes it easier to achieve precise stabilization given the strict constraints possessed by small satellites. The research studied the relationship between estimation accuracy and the parameters used to achieve an attitude rate estimation with a precision better than 1 × 10^-6 rad/s. The method can be applied to all attitude sensors, which use optics systems such as sun sensors and star trackers (STTs). Finally, the method is applied to the nano astrometry satellite Nano-JASMINE, and we investigate the problems that are expected to arise with real small satellites by performing numerical simulations.
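
    The core relation behind rate-from-blur estimation can be sketched very simply: a star streak of a given length, at a known plate scale, over a known exposure time implies an angular rate. This is only the basic geometric relation, not the paper's full estimator, and the numbers below are illustrative.

    ```python
    # Simplified rate-from-blur relation (not the paper's full estimator):
    # a streak of L pixels over t seconds at s rad/pixel implies about L*s/t.
    def angular_rate_rad_per_s(streak_length_px, plate_scale_rad_per_px, exposure_s):
        return streak_length_px * plate_scale_rad_per_px / exposure_s

    # Illustrative numbers: a 0.5-pixel blur at 5 microrad/pixel over a 10 s exposure.
    print(angular_rate_rad_per_s(0.5, 5e-6, 10.0))   # -> 2.5e-7 rad/s
    ```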

  7. Postpositivist Realist Theory: Identity and Representation Revisited

    ERIC Educational Resources Information Center

    Gilpin, Lorraine S.

    2006-01-01

    In postpositivist realist theory, people like Paula Moya (2000) and Satya Mohanty (2000) make a space that at once reflects and informs my location as a Third-World woman of color and a Black-immigrant educator in the United States. In postpositivist realist theory, understanding emerges from one's past and present experiences and interactions as…

  8. Effect of windowing on lithosphere elastic thickness estimates obtained via the coherence method: Results from northern South America

    NASA Astrophysics Data System (ADS)

    Ojeda, GermáN. Y.; Whitman, Dean

    2002-11-01

    The effective elastic thickness (Te) of the lithosphere is a parameter that describes the flexural strength of a plate. A method routinely used to quantify this parameter is to calculate the coherence between the two-dimensional gravity and topography spectra. Prior to spectra calculation, data grids must be "windowed" in order to avoid edge effects. We investigated the sensitivity of Te estimates obtained via the coherence method to mirroring, Hanning and multitaper windowing techniques on synthetic data as well as on data from northern South America. These analyses suggest that the choice of windowing technique plays an important role in Te estimates and may result in discrepancies of several kilometers depending on the selected windowing method. Te results from mirrored grids tend to be greater than those from Hanning smoothed or multitapered grids. Results obtained from mirrored grids are likely to be over-estimates. This effect may be due to artificial long wavelengths introduced into the data at the time of mirroring. Coherence estimates obtained from three subareas in northern South America indicate that the average effective elastic thickness is in the range of 29-30 km, according to Hanning and multitaper windowed data. Lateral variations across the study area could not be unequivocally determined from this study. We suggest that the resolution of the coherence method does not permit evaluation of small (i.e., ˜5 km), local Te variations. However, the efficiency and robustness of the coherence method in rendering continent-scale estimates of elastic thickness has been confirmed.
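
    One of the windowing options compared above, Hanning tapering, can be sketched as an outer product of 1D Hanning windows applied to a grid before the FFT used in the coherence calculation; the grid below is synthetic.

    ```python
    # 2D Hanning taper applied to a grid before computing its spectrum,
    # as one of the windowing options compared in the study. Grid is synthetic.
    import numpy as np

    def hanning_2d(grid):
        """Taper a 2D grid with an outer product of 1D Hanning windows."""
        wy = np.hanning(grid.shape[0])
        wx = np.hanning(grid.shape[1])
        return grid * np.outer(wy, wx)

    rng = np.random.default_rng(0)
    topo = rng.normal(size=(256, 256))          # stand-in for a topography grid
    spectrum = np.fft.fft2(hanning_2d(topo))    # windowed spectrum for coherence analysis
    ```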

  9. Realist complex intervention science: Applying realist principles across all phases of the Medical Research Council framework for developing and evaluating complex interventions

    PubMed Central

    Fletcher, Adam; Jamal, Farah; Moore, Graham; Evans, Rhiannon E.; Murphy, Simon; Bonell, Chris

    2016-01-01

    The integration of realist evaluation principles within randomised controlled trials (‘realist RCTs’) enables evaluations of complex interventions to answer questions about what works, for whom and under what circumstances. This allows evaluators to better develop and refine mid-level programme theories. However, this is only one phase in the process of developing and evaluating complex interventions. We describe and exemplify how social scientists can integrate realist principles across all phases of the Medical Research Council framework. Intervention development, modelling, and feasibility and pilot studies need to theorise the contextual conditions necessary for intervention mechanisms to be activated. Where interventions are scaled up and translated into routine practice, realist principles also have much to offer in facilitating knowledge about longer-term sustainability, benefits and harms. Integrating a realist approach across all phases of complex intervention science is vital for considering the feasibility and likely effects of interventions for different localities and population subgroups. PMID:27478401

  10. Developing a framework for a novel multi-disciplinary, multi-agency intervention(s), to improve medication management in community-dwelling older people on complex medication regimens (MEMORABLE)--a realist synthesis.

    PubMed

    Maidment, Ian; Booth, Andrew; Mullan, Judy; McKeown, Jane; Bailey, Sylvia; Wong, Geoffrey

    2017-07-03

    Medication-related adverse events have been estimated to be responsible for 5700 deaths and cost the UK £750 million annually. This burden falls disproportionately on older people. Outcomes from interventions to optimise medication management are caused by multiple context-sensitive mechanisms. The MEdication Management in Older people: REalist Approaches BAsed on Literature and Evaluation (MEMORABLE) project uses realist synthesis to understand how, why, for whom and in what context interventions, to improve medication management in older people on complex medication regimes residing in the community, work. This realist synthesis uses secondary data and primary data from interviews to develop the programme theory. A realist logic of analysis will synthesise data both within and across the two data sources to inform the design of a complex intervention(s) to help improve medication management in older people. 1. Literature review The review (using realist synthesis) contains five stages to develop an initial programme theory to understand why processes are more or less successful and under which situations: focussing of the research question; developing the initial programme theory; developing the search strategy; selection and appraisal based on relevance and rigour; and data analysis/synthesis to develop and refine the programme theory and context, intervention and mechanism configurations. 2. Realist interviews Realist interviews will explore and refine our understanding of the programme theory developed from the realist synthesis. Up to 30 older people and their informal carers (15 older people with multi-morbidity, 10 informal carers and 5 older people with dementia), and 20 care staff will be interviewed. 3. Developing framework for the intervention(s) Data from the realist synthesis and interviews will be used to refine the programme theory for the intervention(s) to identify: the mechanisms that need to be 'triggered', and the contexts related to these

  11. Performance of Airborne Precision Spacing Under Realistic Wind Conditions

    NASA Technical Reports Server (NTRS)

    Wieland, Frederick; Santos, Michel; Krueger, William; Houston, Vincent E.

    2011-01-01

    With the expected worldwide increase of air traffic during the coming decade, both the Federal Aviation Administration's (FAA's) Next Generation Air Transportation System (NextGen), as well as Eurocontrol's Single European Sky ATM Research (SESAR) program have, as part of their plans, air traffic management solutions that can increase performance without requiring time-consuming and expensive infrastructure changes. One such solution involves the ability of both controllers and flight crews to deliver aircraft to the runway with greater accuracy than is possible today. Previous research has shown that time-based spacing techniques, wherein the controller assigns a time spacing to each pair of arriving aircraft, are one way to achieve this goal by providing greater runway delivery accuracy that produces a concomitant increase in system-wide performance. The research described herein focuses on a specific application of time-based spacing, called Airborne Precision Spacing (APS), which has evolved over the past ten years. This research furthers APS understanding by studying its performance with realistic wind conditions obtained from atmospheric sounding data and with realistic wind forecasts obtained from the Rapid Update Cycle (RUC) short-range weather forecast. In addition, this study investigates APS performance with limited surveillance range, as provided by the Automatic Dependent Surveillance-Broadcast (ADS-B) system, and with an algorithm designed to improve APS performance when an ADS-B signal is unavailable. The results presented herein quantify the runway threshold delivery accuracy of APS under these conditions, and also quantify resulting workload metrics such as the number of speed changes required to maintain spacing.

  12. Accuracy of patient specific organ-dose estimates obtained using an automated image segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Gilat-Schmidt, Taly; Wang, Adam; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-03-01

    The overall goal of this work is to develop a rapid, accurate and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann Transport Equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. The investigated algorithm uses a combination of feature-based and atlas-based methods. A multiatlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate to provide organ dose estimates since random errors at the organ boundaries will average out when computing the total organ dose. To test this hypothesis, twenty head-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, where every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all data sets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.

  13. Adapting realist synthesis methodology: The case of workplace harassment interventions.

    PubMed

    Carr, Tracey; Quinlan, Elizabeth; Robertson, Susan; Gerrard, Angie

    2017-12-01

    Realist synthesis techniques can be used to assess complex interventions by extracting and synthesizing configurations of contexts, mechanisms, and outcomes found in the literature. Our novel and multi-pronged approach to the realist synthesis of workplace harassment interventions describes our pursuit of theory to link macro and program level theories. After discovering the limitations of a dogmatic approach to realist synthesis, we adapted our search strategy and focused our analysis on a subset of data. We tailored our realist synthesis to understand how, why, and under what circumstances workplace harassment interventions are effective. The result was a conceptual framework to test our theory-based interventions and provide the basis for subsequent realist evaluation. Our experience documented in this article contributes to an understanding of how, under what circumstances, and with what consequences realist synthesis principles can be customized. Copyright © 2017 John Wiley & Sons, Ltd.

  14. Use of inequality constrained least squares estimation in small area estimation

    NASA Astrophysics Data System (ADS)

    Abeygunawardana, R. A. B.; Wickremasinghe, W. N.

    2017-05-01

    Traditional surveys provide estimates that are based only on the sample observations collected for the population characteristic of interest. However, these estimates may have unacceptably large variance for certain domains. Small Area Estimation (SAE) deals with determining precise and accurate estimates for population characteristics of interest for such domains. SAE usually uses least squares or maximum likelihood procedures incorporating prior information and current survey data. Many available methods in SAE use constraints in equality form. However, there are practical situations where certain inequality restrictions on model parameters are more realistic. When the estimation method is least squares, such restrictions lead to Inequality Constrained Least Squares (ICLS) estimates. In this study, the ICLS estimation procedure is applied to many proposed small area estimates.
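
    As a sketch of least squares under inequality restrictions, the simplest case, bound constraints on the coefficients, can be solved with SciPy's lsq_linear; more general linear inequality constraints would require a quadratic-programming solver. The data here are synthetic and not related to the study.

    ```python
    # Least squares with simple inequality (bound) restrictions on the
    # coefficients, the simplest form of an ICLS problem. Data are synthetic.
    import numpy as np
    from scipy.optimize import lsq_linear

    rng = np.random.default_rng(0)
    A = rng.normal(size=(50, 3))
    x_true = np.array([0.4, 0.0, 1.2])
    b = A @ x_true + rng.normal(scale=0.1, size=50)

    # Restrict every coefficient to lie between 0.0 and 1.0:
    res = lsq_linear(A, b, bounds=(0.0, 1.0))
    print(res.x)   # constrained estimates (third coefficient pinned at the upper bound)
    ```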

  15. Realistic tissue visualization using photoacoustic image

    NASA Astrophysics Data System (ADS)

    Cho, Seonghee; Managuli, Ravi; Jeon, Seungwan; Kim, Jeesu; Kim, Chulhong

    2018-02-01

    Visualization methods are very important in biomedical imaging. As a technology for understanding life, biomedical imaging has the unique advantage of providing the most intuitive information in the image. This advantage can be greatly improved by choosing a suitable visualization method. This is more complicated for volumetric data. Volume data have the advantage of containing 3D spatial information. Unfortunately, the data themselves cannot directly represent their potential value. Because images are always displayed in 2D space, visualization is the key that creates the real value of volume data. However, image processing of 3D data requires complicated algorithms for visualization and carries a high computational burden. Therefore, specialized algorithms and computing optimization are important issues for volume data. Photoacoustic imaging is a unique imaging modality that can visualize the optical properties of deep tissue. Because the color of an organism is mainly determined by its light-absorbing components, photoacoustic data can provide color information of tissue that is closer to the real tissue color. In this research, we developed realistic tissue visualization using acoustic-resolution photoacoustic volume data. To achieve realistic visualization, we designed a specialized color transfer function that depends on the depth of the tissue from the skin. We used a direct ray casting method and processed color while computing the shader parameters. In the rendering results, we succeeded in obtaining realistic texture results from the photoacoustic data. The surface-reflected rays were visualized in white, and the color reflected from deep tissue was visualized in red, like skin tissue. We also implemented the CUDA algorithm in an OpenGL environment for real-time interactive imaging.

  16. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
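
    The iterative procedure referred to above is of the expectation-maximization type; a compact sketch for a two-component univariate normal mixture (with synthetic data and arbitrary starting values) is shown below.

    ```python
    # Compact EM-type iteration for a two-component 1D normal mixture:
    # alternate responsibilities (E-step) and parameter updates (M-step).
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1.5, 700)])

    w, mu, sigma = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
    for _ in range(100):
        # E-step: responsibility of each component for each observation
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and standard deviations
        nk = r.sum(axis=0)
        w, mu = nk / x.size, (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

    print(w, mu, sigma)   # should converge near the generating parameters
    ```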

  17. Challenges in Obtaining Estimates of the Risk of Tuberculosis Infection During Overseas Deployment.

    PubMed

    Mancuso, James D; Geurts, Mia

    2015-12-01

    Estimates of the risk of tuberculosis (TB) infection resulting from overseas deployment among U.S. military service members have varied widely, and have been plagued by methodological problems. The purpose of this study was to estimate the incidence of TB infection in the U.S. military resulting from deployment. Three populations were examined: 1) a unit of 2,228 soldiers redeploying from Iraq in 2008, 2) a cohort of 1,978 soldiers followed up over 5 years after basic training at Fort Jackson in 2009, and 3) 6,062 participants in the 2011-2012 National Health and Nutrition Examination Survey (NHANES). The risk of TB infection in the deployed population was low, at 0.6% (95% confidence interval [CI]: 0.1-2.3%), and was similar to that in the non-deployed population. The prevalence of latent TB infection (LTBI) in the U.S. population was not significantly different among deployed and non-deployed veterans and those with no military service. The limitations of these retrospective studies highlight the challenge in obtaining valid estimates of risk using retrospective data and the need for a more definitive study. Similar to civilian long-term travelers, risks for TB infection during deployment are focal in nature, and testing should be targeted to only those at increased risk. © The American Society of Tropical Medicine and Hygiene.

  18. Realistic Real-Time Outdoor Rendering in Augmented Reality

    PubMed Central

    Kolivand, Hoshang; Sunar, Mohd Shahrizal

    2014-01-01

    Realistic rendering techniques for outdoor Augmented Reality (AR) have been an attractive topic over the last two decades, considering the sizeable number of publications in computer graphics. Realistic virtual objects in outdoor rendering AR systems require sophisticated effects such as shadows, daylight, and interactions between sky colours and virtual as well as real objects. A few realistic rendering techniques have been designed to overcome this obstacle, most of which are related to non-real-time rendering. However, the problem still remains, especially in outdoor rendering. This paper proposes a new, unique technique to achieve realistic real-time outdoor rendering, while taking into account the interaction between sky colours and objects in AR systems with respect to shadows in any specific location, date and time. This approach involves three main phases, which cover different outdoor AR rendering requirements. Firstly, sky colour was generated with respect to the position of the sun. The second step involves the shadow generation algorithm, Z-Partitioning: Gaussian and Fog Shadow Maps (Z-GaF Shadow Maps). Lastly, a technique to integrate sky colours and shadows through their effects on virtual objects in the AR system is introduced. The experimental results reveal that the proposed technique has significantly improved the realism of real-time outdoor AR rendering, thus solving the problem of realistic AR systems. PMID:25268480

  19. Realistic real-time outdoor rendering in augmented reality.

    PubMed

    Kolivand, Hoshang; Sunar, Mohd Shahrizal

    2014-01-01

    Realistic rendering techniques for outdoor Augmented Reality (AR) have been an attractive topic over the last two decades, considering the sizeable number of publications in computer graphics. Realistic virtual objects in outdoor rendering AR systems require sophisticated effects such as shadows, daylight, and interactions between sky colours and virtual as well as real objects. A few realistic rendering techniques have been designed to overcome this obstacle, most of which are related to non-real-time rendering. However, the problem still remains, especially in outdoor rendering. This paper proposes a new, unique technique to achieve realistic real-time outdoor rendering, while taking into account the interaction between sky colours and objects in AR systems with respect to shadows in any specific location, date and time. This approach involves three main phases, which cover different outdoor AR rendering requirements. Firstly, sky colour was generated with respect to the position of the sun. The second step involves the shadow generation algorithm, Z-Partitioning: Gaussian and Fog Shadow Maps (Z-GaF Shadow Maps). Lastly, a technique to integrate sky colours and shadows through their effects on virtual objects in the AR system is introduced. The experimental results reveal that the proposed technique has significantly improved the realism of real-time outdoor AR rendering, thus solving the problem of realistic AR systems.

  20. Generating realistic images using Kray

    NASA Astrophysics Data System (ADS)

    Tanski, Grzegorz

    2004-07-01

    Kray is an application for creating realistic images. It is written in the C++ programming language, has a text-based interface, and solves the global illumination problem using techniques such as radiosity, path tracing and photon mapping.

  1. Challenges and solutions for realistic room simulation

    NASA Astrophysics Data System (ADS)

    Begault, Durand R.

    2002-05-01

    Virtual room acoustic simulation (auralization) techniques have traditionally focused on answering questions related to speech intelligibility or musical quality, typically in large volumetric spaces. More recently, auralization techniques have been found to be important for the externalization of headphone-reproduced virtual acoustic images. Although externalization can be accomplished using a minimal simulation, data indicate that realistic auralizations need to be responsive to head motion cues for accurate localization. Computational demands increase when providing for the simulation of coupled spaces, small rooms lacking meaningful reverberant decays, or reflective surfaces in outdoor environments. Auditory threshold data for both early reflections and late reverberant energy levels indicate that much of the information captured in acoustical measurements is inaudible, minimizing the intensive computational requirements of real-time auralization systems. Results are presented for early reflection thresholds as a function of azimuth angle, arrival time, and sound-source type, and reverberation thresholds as a function of reverberation time and level within 250-Hz-2-kHz octave bands. Good agreement is found between data obtained in virtual room simulations and those obtained in real rooms, allowing a strategy for minimizing computational requirements of real-time auralization systems.

  2. Interprofessional education in a student-led emergency department: A realist evaluation.

    PubMed

    Ericson, Anne; Löfgren, Susanne; Bolinder, Gunilla; Reeves, Scott; Kitto, Simon; Masiello, Italo

    2017-03-01

    This article reports a realist evaluation undertaken to identify factors that facilitated or hindered the successful implementation of interprofessional clinical training for undergraduate students in an emergency department. A realist evaluation provides a framework for understanding how the context and underlying mechanisms affect the outcome patterns of an intervention. The researchers gathered both qualitative and quantitative data from internal documents, semi-structured interviews, observations, and questionnaires to study what worked, for whom, and under what circumstances in this specific interprofessional setting. The study participants were medical, nursing, and physiotherapy students, their supervisors, and two members of the emergency department's management staff. The data analysis indicated that the emergency ward provided an excellent environment for interprofessional education (IPE), as attested by the students, supervisors, and the clinical managers. An essential prerequisite is that the students have obtained adequate skills to work independently. Exemplary conditions for IPE to work well in an emergency department demand the continuity of effective and encouraging supervision throughout the training period and supervisors who are knowledgeable about developing a team.

  3. A Local Realistic Reconciliation of the EPR Paradox

    NASA Astrophysics Data System (ADS)

    Sanctuary, Bryan

    2014-03-01

    The exact violation of Bell's Inequalities is obtained with a local realistic model for spin. The model treats one particle that comprises a quantum ensemble and simulates the EPR data one coincidence at a time as a product state. Such a spin is represented by operators σx, iσy, σz in its body frame rather than the usual set of σX, σY, σZ in the laboratory frame. This model, assumed valid in the absence of a measuring probe, contains both quantum polarizations and coherences. Each carries half the EPR correlation, but only half can be measured using coincidence techniques. The model further predicts the filter angles that maximize the spin correlation in EPR experiments.

  4. On the applicability of the Natori formula to realistic multi-layer quantum well III-V FETs

    NASA Astrophysics Data System (ADS)

    Gili, A.; Xanthakis, J. P.

    2017-10-01

    We investigated the validity of the Natori formalism for realistic multi-layer quantum well FETs. We show that the assumption of a single layer (the channel) carrying all of the current density is far from reality in the sub-threshold region, where in fact most of the current density resides below the channel. Our analysis is based on comparing the results of Natori calculations with experimental ones and with other first-principles calculations. If the Natori calculations are employed in the subthreshold region, a misleadingly small subthreshold slope is obtained. We propose a way to remedy this deficiency of the formulation so that it becomes applicable to realistic many-layer devices. In particular, we show that if the 1-dimensional quantum well of the Natori method enclosing the electron gas is expanded to include the supply layer (usually below the channel) and a proper ab initio potential is used to obtain its eigenvalues, then the Natori formula regains its validity.

  5. Realistic modeling of neurons and networks: towards brain simulation.

    PubMed

    D'Angelo, Egidio; Solinas, Sergio; Garrido, Jesus; Casellato, Claudia; Pedrocchi, Alessandra; Mapelli, Jonathan; Gandolfi, Daniela; Prestori, Francesca

    2013-01-01

    Realistic modeling is a new advanced methodology for investigating brain functions. Realistic modeling is based on a detailed biophysical description of neurons and synapses, which can be integrated into microcircuits. The latter can, in turn, be further integrated to form large-scale brain networks and eventually to reconstruct complex brain systems. Here we provide a review of the realistic simulation strategy and use the cerebellar network as an example. This network has been carefully investigated at molecular and cellular level and has been the object of intense theoretical investigation. The cerebellum is thought to lie at the core of the forward controller operations of the brain and to implement timing and sensory prediction functions. The cerebellum is well described and provides a challenging field in which one of the most advanced realistic microcircuit models has been generated. We illustrate how these models can be elaborated and embedded into robotic control systems to gain insight into how the cellular properties of cerebellar neurons emerge in integrated behaviors. Realistic network modeling opens up new perspectives for the investigation of brain pathologies and for the neurorobotic field.

  6. Realistic modeling of neurons and networks: towards brain simulation

    PubMed Central

    D’Angelo, Egidio; Solinas, Sergio; Garrido, Jesus; Casellato, Claudia; Pedrocchi, Alessandra; Mapelli, Jonathan; Gandolfi, Daniela; Prestori, Francesca

    Summary Realistic modeling is a new advanced methodology for investigating brain functions. Realistic modeling is based on a detailed biophysical description of neurons and synapses, which can be integrated into microcircuits. The latter can, in turn, be further integrated to form large-scale brain networks and eventually to reconstruct complex brain systems. Here we provide a review of the realistic simulation strategy and use the cerebellar network as an example. This network has been carefully investigated at molecular and cellular level and has been the object of intense theoretical investigation. The cerebellum is thought to lie at the core of the forward controller operations of the brain and to implement timing and sensory prediction functions. The cerebellum is well described and provides a challenging field in which one of the most advanced realistic microcircuit models has been generated. We illustrate how these models can be elaborated and embedded into robotic control systems to gain insight into how the cellular properties of cerebellar neurons emerge in integrated behaviors. Realistic network modeling opens up new perspectives for the investigation of brain pathologies and for the neurorobotic field. PMID:24139652

  7. Estimating the relative weights of visual and auditory tau versus heuristic-based cues for time-to-contact judgments in realistic, familiar scenes by older and younger adults.

    PubMed

    Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel

    2017-04-01

    Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we evaluated TTC estimates by using a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
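
    To make the τ-based cue concrete, the sketch below (illustrative only; the function, scenario and numbers are hypothetical, not taken from the study) estimates time to contact as the ratio of an object's instantaneous optical size to its rate of optical expansion.

        import math

        def ttc_from_tau(theta, theta_prev, dt):
            """Tau-based TTC estimate from two samples of optical (angular) size, in radians."""
            dtheta_dt = (theta - theta_prev) / dt      # rate of optical expansion
            if dtheta_dt <= 0:                         # object is not approaching
                return float("inf")
            return theta / dtheta_dt                   # tau, in seconds

        # Example: a 2 m wide vehicle at 20 m closing at 10 m/s, sampled 50 ms apart.
        theta_now  = 2 * math.atan(1.0 / 20.0)         # angular size at 20 m
        theta_prev = 2 * math.atan(1.0 / 20.5)         # angular size 50 ms earlier, at 20.5 m
        print(ttc_from_tau(theta_now, theta_prev, 0.05))   # ~2 s, matching the true TTC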

  8. A Radiosity Approach to Realistic Image Synthesis

    DTIC Science & Technology

    1992-12-01

    AFIT/GCE/ENG/92D-09: A Radiosity Approach to Realistic Image Synthesis. Thesis by Richard L. Remington, Captain, USAF. Approved for public release; distribution unlimited. [Only acknowledgement text survives from the scanned record, crediting assistance in creating the input geometry file for the AWACS aircraft interior used in the diffuse radiosity implementation.]

  9. The first step toward genetic selection for host tolerance to infectious pathogens: obtaining the tolerance phenotype through group estimates

    PubMed Central

    Doeschl-Wilson, Andrea B.; Villanueva, Beatriz; Kyriazakis, Ilias

    2012-01-01

    Reliable phenotypes are paramount for meaningful quantification of genetic variation and for estimating individual breeding values on which genetic selection is based. In this paper, we assert that genetic improvement of host tolerance to disease, although desirable, may be first of all handicapped by the ability to obtain unbiased tolerance estimates at a phenotypic level. In contrast to resistance, which can be inferred by appropriate measures of within-host pathogen burden, tolerance is more difficult to quantify as it refers to change in performance with respect to changes in pathogen burden. For this reason, tolerance phenotypes have only been specified at the level of a group of individuals, where such phenotypes can be estimated using regression analysis. However, few studies have raised the potential bias in these estimates resulting from confounding effects between resistance and tolerance. Using a simulation approach, we demonstrate (i) how these group tolerance estimates depend on within-group variation and co-variation in resistance, tolerance, and vigor (performance in a pathogen-free environment); and (ii) how tolerance estimates are affected by changes in pathogen virulence over the time course of infection and by the timing of measurements. We found that in order to obtain reliable group tolerance estimates, it is important to account for individual variation in vigor, if present, and to ensure that all individuals are at the same stage of infection when measurements are taken. The latter requirement makes estimation of tolerance based on cross-sectional field data challenging, as individuals become infected at different time points and the individual onset of infection is unknown. Repeated individual measurements of within-host pathogen burden and performance would not only be valuable for inferring the infection status of individuals in field conditions, but would also provide tolerance estimates that capture the entire time course of infection. PMID

  10. An Optimal Estimation Method to Obtain Surface Layer Turbulent Fluxes from Profile Measurements

    NASA Astrophysics Data System (ADS)

    Kang, D.

    2015-12-01

    In the absence of direct turbulence measurements, the turbulence characteristics of the atmospheric surface layer are often derived from measurements of the surface layer mean properties based on Monin-Obukhov Similarity Theory (MOST). This approach requires two levels of the ensemble mean wind, temperature, and water vapor, from which the fluxes of momentum, sensible heat, and water vapor can be obtained. When only one measurement level is available, the roughness heights and the assumed properties of the corresponding variables at the respective roughness heights are used. In practice, the temporal mean over a large number of samples is used in place of the ensemble mean. However, in many situations the samples are taken from multiple levels. It is thus desirable to derive the boundary layer flux properties using all measurements. In this study, we used an optimal estimation approach to derive surface layer properties based on all available measurements. This approach assumes that the samples are taken from a population whose ensemble mean profile follows MOST. An optimized estimate is obtained when the results yield a minimum cost function, defined as a weighted summation of the error variance at each sample altitude. The weights are based on the sample data variance and the altitude of the measurements. This method was applied to measurements in the marine atmospheric surface layer from a small boat using a radiosonde on a tethered balloon, where temperature and relative humidity profiles in the lowest 50 m were made repeatedly in about 30 minutes. We will present the resultant fluxes and the derived MOST mean profiles using different sets of measurements. The advantage of this method over the 'traditional' methods will be illustrated. Some limitations of this optimization method will also be discussed. Its application to quantifying the effects of the marine surface layer environment on radar and communication signal propagation will be shown as well.
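
    The sketch below illustrates the idea of the cost-function fit in a deliberately simplified form (neutral stratification, wind only, stability corrections omitted; all heights, winds and weights are hypothetical): MOST profile parameters are chosen to minimize a weighted sum of squared errors over all measurement heights.

        import numpy as np
        from scipy.optimize import minimize

        KAPPA = 0.4  # von Karman constant

        def most_wind(z, u_star, z0):
            """Neutral log-law mean wind profile."""
            return (u_star / KAPPA) * np.log(z / z0)

        def cost(params, z, u_obs, weights):
            u_star, z0 = params
            if u_star <= 0 or z0 <= 0:
                return 1e12                                # keep the search in the physical region
            resid = u_obs - most_wind(z, u_star, z0)
            return np.sum(weights * resid**2)              # weighted sum of squared errors

        z       = np.array([2.0, 5.0, 10.0, 20.0, 40.0])   # measurement heights (m)
        u_obs   = np.array([4.1, 4.9, 5.5, 6.1, 6.8])      # hypothetical mean winds (m/s)
        weights = 1.0 / np.array([0.2, 0.2, 0.1, 0.1, 0.3])**2   # inverse sample error variances

        res = minimize(cost, x0=[0.3, 0.01], args=(z, u_obs, weights), method="Nelder-Mead")
        u_star, z0 = res.x
        print(f"u* = {u_star:.2f} m/s, z0 = {z0:.4f} m")   # the momentum flux then follows as -u*^2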

  11. Estimating Agricultural Water Use using the Operational Simplified Surface Energy Balance Evapotranspiration Estimation Method

    NASA Astrophysics Data System (ADS)

    Forbes, B. T.

    2015-12-01

    Due to the predominantly arid climate in Arizona, access to adequate water supply is vital to the economic development and livelihood of the State. Water supply has become increasingly important during periods of prolonged drought, which has strained reservoir water levels in the Desert Southwest over past years. Arizona's water use is dominated by agriculture, consuming about seventy-five percent of the total annual water demand. Tracking current agricultural water use is important for managers and policy makers so that current water demand can be assessed and current information can be used to forecast future demands. However, many croplands in Arizona are irrigated outside of areas where water use reporting is mandatory. To estimate irrigation withdrawals on these lands, we use a combination of field verification, evapotranspiration (ET) estimation, and irrigation system qualification. ET is typically estimated in Arizona using the Modified Blaney-Criddle method which uses meteorological data to estimate annual crop water requirements. The Modified Blaney-Criddle method assumes crops are irrigated to their full potential over the entire growing season, which may or may not be realistic. We now use the Operational Simplified Surface Energy Balance (SSEBop) ET data in a remote-sensing and energy-balance framework to estimate cropland ET. SSEBop data are of sufficient resolution (30m by 30m) for estimation of field-scale cropland water use. We evaluate our SSEBop-based estimates using ground-truth information and irrigation system qualification obtained in the field. Our approach gives the end user an estimate of crop consumptive use as well as inefficiencies in irrigation system performance—both of which are needed by water managers for tracking irrigated water use in Arizona.

  12. Magnetic resonance fingerprinting based on realistic vasculature in mice

    PubMed Central

    Pouliot, Philippe; Gagnon, Louis; Lam, Tina; Avti, Pramod K.; Bowen, Chris; Desjardins, Michèle; Kakkar, Ashok K.; Thorin, E.; Sakadzic, Sava; Boas, David A.; Lesage, Frédéric

    2017-01-01

    Magnetic resonance fingerprinting (MRF) was recently proposed as a novel strategy for MR data acquisition and analysis. A variant of MRF called vascular MRF (vMRF) followed, that extracted maps of three parameters of physiological importance: cerebral oxygen saturation (SatO2), mean vessel radius and cerebral blood volume (CBV). However, this estimation was based on idealized 2-dimensional simulations of vascular networks using random cylinders and the empirical Bloch equations convolved with a diffusion kernel. Here we focus on studying the vascular MR fingerprint using real mouse angiograms and physiological values as the substrate for the MR simulations. The MR signal is calculated ab initio with a Monte Carlo approximation, by tracking the accumulated phase from a large number of protons diffusing within the angiogram. We first study the identifiability of parameters in simulations, showing that parameters are fully estimable at realistically high signal-to-noise ratios (SNR) when the same angiogram is used for dictionary generation and parameter estimation, but that large biases in the estimates persist when the angiograms are different. Despite these biases, simulations show that differences in parameters remain estimable. We then applied this methodology to data acquired using the GESFIDE sequence with SPIONs injected into 9 young wild type and 9 old atherosclerotic mice. Both the pre injection signal and the ratio of post-to-pre injection signals were modeled, using 5-dimensional dictionaries. The vMRF methodology extracted significant differences in SatO2, mean vessel radius and CBV between the two groups, consistent across brain regions and dictionaries. Further validation work is essential before vMRF can gain wider application. PMID:28043909

  13. Magnetic resonance fingerprinting based on realistic vasculature in mice.

    PubMed

    Pouliot, Philippe; Gagnon, Louis; Lam, Tina; Avti, Pramod K; Bowen, Chris; Desjardins, Michèle; Kakkar, Ashok K; Thorin, Eric; Sakadzic, Sava; Boas, David A; Lesage, Frédéric

    2017-04-01

    Magnetic resonance fingerprinting (MRF) was recently proposed as a novel strategy for MR data acquisition and analysis. A variant of MRF called vascular MRF (vMRF) followed, that extracted maps of three parameters of physiological importance: cerebral oxygen saturation (SatO 2 ), mean vessel radius and cerebral blood volume (CBV). However, this estimation was based on idealized 2-dimensional simulations of vascular networks using random cylinders and the empirical Bloch equations convolved with a diffusion kernel. Here we focus on studying the vascular MR fingerprint using real mouse angiograms and physiological values as the substrate for the MR simulations. The MR signal is calculated ab initio with a Monte Carlo approximation, by tracking the accumulated phase from a large number of protons diffusing within the angiogram. We first study the identifiability of parameters in simulations, showing that parameters are fully estimable at realistically high signal-to-noise ratios (SNR) when the same angiogram is used for dictionary generation and parameter estimation, but that large biases in the estimates persist when the angiograms are different. Despite these biases, simulations show that differences in parameters remain estimable. We then applied this methodology to data acquired using the GESFIDE sequence with SPIONs injected into 9 young wild type and 9 old atherosclerotic mice. Both the pre injection signal and the ratio of post-to-pre injection signals were modeled, using 5-dimensional dictionaries. The vMRF methodology extracted significant differences in SatO 2 , mean vessel radius and CBV between the two groups, consistent across brain regions and dictionaries. Further validation work is essential before vMRF can gain wider application. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Estimates of the solar internal angular velocity obtained with the Mt. Wilson 60-foot solar tower

    NASA Technical Reports Server (NTRS)

    Rhodes, Edward J., Jr.; Cacciani, Alessandro; Woodard, Martin; Tomczyk, Steven; Korzennik, Sylvain

    1987-01-01

    Estimates are obtained of the solar internal angular velocity from measurements of the frequency splittings of p-mode oscillations. A 16-day time series of full-disk Dopplergrams obtained during July and August 1984 at the 60-foot tower telescope of the Mt. Wilson Observatory is analyzed. Power spectra were computed for all of the zonal, tesseral, and sectoral p-modes from l = 0 to 89 and for all of the sectoral p-modes from l = 90 to 200. A mean power spectrum was calculated for each degree up to 89. The frequency differences of all of the different nonzonal modes were calculated for these mean power spectra.

  15. Time-scale invariance as an emergent property in a perceptron with realistic, noisy neurons

    PubMed Central

    Buhusi, Catalin V.; Oprisan, Sorinel A.

    2013-01-01

    In most species, interval timing is time-scale invariant: errors in time estimation scale up linearly with the estimated duration. In mammals, time-scale invariance is ubiquitous over behavioral, lesion, and pharmacological manipulations. For example, dopaminergic drugs induce an immediate, whereas cholinergic drugs induce a gradual, scalar change in timing. Behavioral theories posit that time-scale invariance derives from particular computations, rules, or coding schemes. In contrast, we discuss a simple neural circuit, the perceptron, whose output neurons fire in a clockwise fashion (interval timing) based on the pattern of coincidental activation of its input neurons. We show numerically that time-scale invariance emerges spontaneously in a perceptron with realistic neurons, in the presence of noise. Under the assumption that dopaminergic drugs modulate the firing of input neurons, and that cholinergic drugs modulate the memory representation of the criterion time, we show that a perceptron with realistic neurons reproduces the pharmacological clock and memory patterns, and their time-scale invariance, in the presence of noise. These results suggest that rather than being a signature of higher-order cognitive processes or specific computations related to timing, time-scale invariance may spontaneously emerge in a massively-connected brain from the intrinsic noise of neurons and circuits, thus providing the simplest explanation for the ubiquity of scale invariance of interval timing. PMID:23518297

  16. Dynamic estimation of three-dimensional cerebrovascular deformation from rotational angiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Chong; Villa-Uriol, Maria-Cruz; De Craene, Mathieu

    2011-03-15

    Purpose: The objective of this study is to investigate the feasibility of detecting and quantifying 3D cerebrovascular wall motion from a single 3D rotational x-ray angiography (3DRA) acquisition within a clinically acceptable time, and of using the estimated motion field for further biomechanical modeling of the cerebrovascular wall. Methods: The whole motion cycle of the cerebral vasculature is modeled using a 4D B-spline transformation, which is estimated from a 4D to 2D+t image registration framework. The registration is performed by optimizing a single similarity metric between the entire 2D+t measured projection sequence and the corresponding forward projections of the deformed volume at their exact time instants. The joint use of two acceleration strategies, together with their implementation on graphics processing units, is also proposed so as to reach computation times close to clinical requirements. For further characterizing vessel wall properties, an approximation of the wall thickness changes is obtained through a strain calculation. Results: Evaluation on in silico and in vitro pulsating phantom aneurysms demonstrated an accurate estimation of wall motion curves. In general, the error was below 10% of the maximum pulsation, even when a substantial inhomogeneous intensity pattern was present. Experiments on in vivo data provided realistic aneurysm and vessel wall motion estimates, whereas in regions where motion was neither visible nor anatomically possible, no motion was detected. The use of the acceleration strategies enabled completing the estimation process for one entire cycle in 5-10 min without degrading the overall performance. The strain map extracted from our motion estimation provided a realistic deformation measure of the vessel wall. Conclusions: The authors' technique has demonstrated that it can provide accurate and robust 4D estimates of cerebrovascular wall motion within a clinically acceptable time

  17. Fast state estimation subject to random data loss in discrete-time nonlinear stochastic systems

    NASA Astrophysics Data System (ADS)

    Mahdi Alavi, S. M.; Saif, Mehrdad

    2013-12-01

    This paper focuses on the design of the standard observer in discrete-time nonlinear stochastic systems subject to random data loss. Under the assumption that the system response is incrementally bounded, two sufficient conditions are derived that guarantee exponential mean-square stability and fast convergence of the estimation error for the problem at hand. An efficient algorithm is also presented to obtain the observer gain. Finally, the proposed methodology is employed for monitoring a Continuous Stirred Tank Reactor (CSTR) via a wireless communication network. The effectiveness of the designed observer is extensively assessed using an experimental test-bed fabricated for performance evaluation of estimation techniques over wireless networks under realistic radio channel conditions.

  18. Accurate Ray-tracing of Realistic Neutron Star Atmospheres for Constraining Their Parameters

    NASA Astrophysics Data System (ADS)

    Vincent, Frederic H.; Bejger, Michał; Różańska, Agata; Straub, Odele; Paumard, Thibaut; Fortin, Morgane; Madej, Jerzy; Majczyna, Agnieszka; Gourgoulhon, Eric; Haensel, Paweł; Zdunik, Leszek; Beldycki, Bartosz

    2018-03-01

    Thermal-dominated X-ray spectra of neutron stars in quiescent, transient X-ray binaries and neutron stars that undergo thermonuclear bursts are sensitive to mass and radius. The mass–radius relation of neutron stars depends on the equation of state (EoS) that governs their interior. Constraining this relation accurately is therefore of fundamental importance to understand the nature of dense matter. In this context, we introduce a pipeline to calculate realistic model spectra of rotating neutron stars with hydrogen and helium atmospheres. An arbitrarily fast-rotating neutron star with a given EoS generates the spacetime in which the atmosphere emits radiation. We use the LORENE/NROTSTAR code to compute the spacetime numerically and the ATM24 code to solve the radiative transfer equations self-consistently. Emerging specific intensity spectra are then ray-traced through the neutron star's spacetime from the atmosphere to a distant observer with the GYOTO code. Here, we present and test our fully relativistic numerical pipeline. To discuss and illustrate the importance of realistic atmosphere models, we compare our model spectra to simpler models like the commonly used isotropic color-corrected blackbody emission. We highlight the importance of considering realistic model-atmosphere spectra together with relativistic ray-tracing to obtain accurate predictions. We also insist upon the crucial impact of the star's rotation on the observables. Finally, we close a controversy that has been ongoing in the literature in recent years regarding the validity of the ATM24 code.

  19. A Low-cost System for Generating Near-realistic Virtual Actors

    NASA Astrophysics Data System (ADS)

    Afifi, Mahmoud; Hussain, Khaled F.; Ibrahim, Hosny M.; Omar, Nagwa M.

    2015-06-01

    Generating virtual actors is one of the most challenging fields in computer graphics. The reconstruction of realistic virtual actors has received attention from both academic research and the film industry, with the goal of generating human-like virtual actors. Many movies have featured human-like virtual actors that audiences cannot distinguish from real ones. The synthesis of realistic virtual actors is considered a complex process, and the many techniques used to generate them usually require expensive hardware equipment. In this paper, a low-cost system that generates near-realistic virtual actors is presented. The facial features of the real actor are blended with a virtual head that is attached to the actor's body. Compared with other techniques for generating virtual actors, the proposed system is low-cost, requiring only a single camera that records the scene, without any expensive hardware equipment. The results show that the system generates convincing near-realistic virtual actors that can be used in many applications.

  20. Problem Posing with Realistic Mathematics Education Approach in Geometry Learning

    NASA Astrophysics Data System (ADS)

    Mahendra, R.; Slamet, I.; Budiyono

    2017-09-01

    One of the difficulties students face in the learning of geometry is the subject of the plane, which requires students to understand abstract matter. The aim of this research is to determine the effect of the Problem Posing learning model with a Realistic Mathematics Education approach on geometry learning. This quasi-experimental research was conducted in one of the junior high schools in Karanganyar, Indonesia. The sample was taken using a stratified cluster random sampling technique. The results indicate that the Problem Posing learning model with a Realistic Mathematics Education approach can significantly improve students' conceptual understanding in geometry learning, especially on plane topics. This is because students taught with Problem Posing and the Realistic Mathematics Education approach become active in constructing their knowledge, posing problems, and solving them in realistic contexts, which makes it easier for them to understand concepts and solve problems. Therefore, the Problem Posing learning model with a Realistic Mathematics Education approach is appropriate for mathematics learning, especially for geometry material. Furthermore, it can improve student achievement.

  1. Ambulatory estimation of mean step length during unconstrained walking by means of COG accelerometry.

    PubMed

    González, R C; Alvarez, D; López, A M; Alvarez, J C

    2009-12-01

    It has been reported that spatio-temporal gait parameters can be estimated using an accelerometer to calculate the vertical displacement of the body's centre of gravity. This method has the potential to produce realistic ambulatory estimations of those parameters during unconstrained walking. In this work, we want to evaluate the crude estimations of mean step length so obtained, for their possible application in the construction of an ambulatory walking distance measurement device. Two methods have been tested with a set of volunteers in 20 m excursions. Experimental results show that estimations of walking distance can be obtained with sufficient accuracy and precision for most practical applications (errors of 3.66 +/- 6.24 and 0.96 +/- 5.55%), the main difficulty being inter-individual variability (biggest deviations of 19.70 and 15.09% for each estimator). Also, the results indicate that an inverted pendulum model for the displacement during the single stance phase, and a constant displacement per step during double stance, constitute a valid model for the travelled distance with no need of further adjustments. It allows us to explain the main part of the erroneous distance estimations in different subjects as caused by fundamental limitations of the simple inverted pendulum approach.
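
    A minimal sketch of the model described above (assumed geometry and hypothetical numbers, not the authors' code): the single-stance displacement follows from inverted-pendulum geometry given the vertical excursion h of the centre of gravity, and a constant term is added for the double-stance phase.

        import math

        def step_length(h, leg_length, double_stance_const=0.0):
            """h: vertical COG excursion (m); leg_length: pendulum length (m)."""
            # An inverted pendulum of length l whose apex drops by h travels
            # 2*sqrt(2*l*h - h^2) horizontally during single stance.
            single_stance = 2.0 * math.sqrt(2.0 * leg_length * h - h**2)
            return single_stance + double_stance_const

        # Example: 3 cm vertical excursion, 0.9 m leg, 7 cm assumed double-stance travel.
        print(step_length(0.03, 0.9, 0.07))   # ~0.53 m per step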

  2. Novel high-fidelity realistic explosion damage simulation for urban environments

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoqing; Yadegar, Jacob; Zhu, Youding; Raju, Chaitanya; Bhagavathula, Jaya

    2010-04-01

    Realistic building damage simulation has a significant impact on modern modeling and simulation systems, especially in the diverse panoply of military and civil applications where these systems are widely used for personnel training, critical mission planning, disaster management, etc. Realistic building damage simulation should incorporate accurate physics-based explosion models, rubble generation, rubble flyout, and interactions between flying rubble and the surrounding entities. However, none of the existing building damage simulation systems faithfully realizes the degree of realism required for effective military applications. In this paper, we present a novel physics-based, high-fidelity and runtime-efficient explosion simulation system that realistically simulates destruction to buildings. In the proposed system, a family of novel blast models is applied to accurately and realistically simulate explosions based on static and/or dynamic detonation conditions. The system also accounts for rubble pile formation and applies a generic and scalable multi-component-based object representation to describe scene entities, together with a highly scalable agent-subsumption architecture and scheduler to schedule clusters of sequential and parallel events. The proposed system utilizes a highly efficient and scalable tetrahedral decomposition approach to realistically simulate rubble formation. Experimental results demonstrate that the proposed system is capable of realistically simulating rubble generation, rubble flyout and their primary and secondary impacts on surrounding objects including buildings, constructions, vehicles and pedestrians in clusters of sequential and parallel damage events.

  3. SMART-DS: Synthetic Models for Advanced, Realistic Testing: Distribution Systems and Scenarios

    Science.gov Websites

    Website summary (NREL, Grid Modernization): SMART-DS provides synthetic models for advanced, realistic testing of distribution systems and scenarios, including a statistical summary of U.S. distribution systems and world-class, high spatial/temporal resolution solar data.

  4. Quantum computation with realistic magic-state factories

    NASA Astrophysics Data System (ADS)

    O'Gorman, Joe; Campbell, Earl T.

    2017-03-01

    Leading approaches to fault-tolerant quantum computation dedicate a significant portion of the hardware to computational factories that churn out high-fidelity ancillas called magic states. Consequently, efficient and realistic factory design is of paramount importance. Here we present the most detailed resource assessment to date of magic-state factories within a surface code quantum computer, along the way introducing a number of techniques. We show that the block codes of Bravyi and Haah [Phys. Rev. A 86, 052329 (2012), 10.1103/PhysRevA.86.052329] have been systematically undervalued; we track correlated errors both numerically and analytically, providing fidelity estimates without appeal to the union bound. We also introduce a subsystem code realization of these protocols with constant time and low ancilla cost. Additionally, we confirm that magic-state factories have space-time costs that scale as a constant factor of surface code costs. We find that the magic-state factory required for postclassical factoring can be as small as 6.3 million data qubits, ignoring ancilla qubits, assuming 10^-4 error gates and the availability of long-range interactions.

  5. Time management: a realistic approach.

    PubMed

    Jackson, Valerie P

    2009-06-01

    Realistic time management and organization plans can improve productivity and the quality of life. However, these skills can be difficult to develop and maintain. The key elements of time management are goals, organization, delegation, and relaxation. The author addresses each of these components and provides suggestions for successful time management.

  6. Photo-Realistic Statistical Skull Morphotypes: New Exemplars for Ancestry and Sex Estimation in Forensic Anthropology.

    PubMed

    Caple, Jodi; Stephan, Carl N

    2017-05-01

    Graphic exemplars of cranial sex and ancestry are essential to forensic anthropology for standardizing casework, training analysts, and communicating group trends. To date, graphic exemplars have comprised hand-drawn sketches or photographs of individual specimens, which risks bias/subjectivity. Here, we performed quantitative analysis of photographic data to generate new photo-realistic and objective exemplars of skull form. Standardized anterior and left lateral photographs of skulls for each sex were analyzed in the computer graphics program Psychomorph for the following groups: South African Blacks, South African Whites, American Blacks, American Whites, and Japanese. The average cranial form was calculated for each photographic view, before the color information for every individual was warped to the average form and combined to produce statistical averages. These mathematically derived exemplars, and their statistical exaggerations or extremes, retain the high-resolution detail of the original photographic dataset, making them ideal casework and training reference standards. © 2016 American Academy of Forensic Sciences.

  7. Estimating hydrologic budgets for six Persian Gulf watersheds, Iran

    NASA Astrophysics Data System (ADS)

    Hosseini, Majid; Ghafouri, Mohammad; Tabatabaei, MahmoudReza; Goodarzi, Masoud; Mokarian, Zeinab

    2017-10-01

    Estimation of the major components of the hydrologic budget is important for determining the impacts on water supply and quality of either planned or proposed land management projects, vegetative changes, groundwater withdrawals, and reservoir management practices and plans. As acquisition of field data is costly and time consuming, models have been created to test various land use practices and their concomitant effects on the hydrologic budget of watersheds. To simulate such management scenarios realistically, a model should be able to simulate the individual components of the hydrologic budget. The main objective of this study is to apply the SWAT2012 model to estimate the hydrological budget in six subbasins of the Persian Gulf watershed (Golgol, Baghan, Marghab Shekastian, Tangebirim and Daragah), located in the south and southwest of Iran, for the period 1991-2009. In order to evaluate the performance of the model, hydrological data, a soil map, a land use map and a digital elevation model (DEM) were obtained and prepared for each catchment to run the model. SWAT-CUP with the SUFI2 program was used for simulation, uncertainty analysis and validation with 95 Percent Prediction Uncertainty. The coefficient of determination (R2) and the Nash-Sutcliffe coefficient (NS) were used to evaluate the model simulation results. Comparison of measured and predicted values demonstrated that each component of the model gave reasonable output and that the interaction among components was realistic. The study has produced a technique with reliable capability for estimating annual and monthly water budget components in the Persian Gulf watershed.
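
    For reference, the two goodness-of-fit measures used above can be computed as in the following minimal sketch (the monthly runoff values are hypothetical, not output from the study).

        import numpy as np

        def nash_sutcliffe(obs, sim):
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

        def r_squared(obs, sim):
            return np.corrcoef(obs, sim)[0, 1]**2

        obs = [12.0, 30.0, 25.0, 8.0, 15.0, 40.0]   # observed monthly runoff (illustrative)
        sim = [10.0, 27.0, 28.0, 9.0, 13.0, 36.0]   # simulated monthly runoff
        print(f"NS = {nash_sutcliffe(obs, sim):.2f}, R2 = {r_squared(obs, sim):.2f}")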

  8. Converging ligand-binding free energies obtained with free-energy perturbations at the quantum mechanical level.

    PubMed

    Olsson, Martin A; Söderhjelm, Pär; Ryde, Ulf

    2016-06-30

    In this article, the convergence of quantum mechanical (QM) free-energy simulations based on molecular dynamics simulations at the molecular mechanics (MM) level has been investigated. We have estimated relative free energies for the binding of nine cyclic carboxylate ligands to the octa-acid deep-cavity host, including the host, the ligand, and all water molecules within 4.5 Å of the ligand in the QM calculations (158-224 atoms). We use single-step exponential averaging (ssEA) and the non-Boltzmann Bennett acceptance ratio (NBB) methods to estimate QM/MM free energy with the semi-empirical PM6-DH2X method, both based on interaction energies. We show that ssEA with cumulant expansion gives a better convergence and uses half as many QM calculations as NBB, although the two methods give consistent results. With 720,000 QM calculations per transformation, QM/MM free-energy estimates with a precision of 1 kJ/mol can be obtained for all eight relative energies with ssEA, showing that this approach can be used to calculate converged QM/MM binding free energies for realistic systems and large QM partitions. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.
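
    The single-step exponential averaging estimator and its cumulant approximation can be sketched as follows (illustrative toy energies, not data from the study; in the paper these estimators are applied to QM-MM interaction-energy differences on MM-sampled snapshots).

        import numpy as np

        kT = 2.494  # kJ/mol at ~300 K

        def ssea(dE, kT=kT):
            """Exponential averaging: dA = -kT ln < exp(-dE/kT) >, evaluated stably."""
            dE = np.asarray(dE)
            shift = dE.min()                       # shift to avoid overflow in exp()
            return shift - kT * np.log(np.mean(np.exp(-(dE - shift) / kT)))

        def ssea_cumulant(dE, kT=kT):
            """Second-order cumulant expansion: <dE> - Var(dE) / (2 kT)."""
            dE = np.asarray(dE)
            return dE.mean() - dE.var(ddof=1) / (2.0 * kT)

        rng = np.random.default_rng(0)
        dE = rng.normal(loc=5.0, scale=2.0, size=5000)   # toy energy differences (kJ/mol)
        print(ssea(dE), ssea_cumulant(dE))               # for Gaussian dE both give ~4.2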

  9. Converging ligand‐binding free energies obtained with free‐energy perturbations at the quantum mechanical level

    PubMed Central

    Olsson, Martin A.; Söderhjelm, Pär

    2016-01-01

    In this article, the convergence of quantum mechanical (QM) free‐energy simulations based on molecular dynamics simulations at the molecular mechanics (MM) level has been investigated. We have estimated relative free energies for the binding of nine cyclic carboxylate ligands to the octa‐acid deep‐cavity host, including the host, the ligand, and all water molecules within 4.5 Å of the ligand in the QM calculations (158–224 atoms). We use single‐step exponential averaging (ssEA) and the non‐Boltzmann Bennett acceptance ratio (NBB) methods to estimate QM/MM free energy with the semi‐empirical PM6‐DH2X method, both based on interaction energies. We show that ssEA with cumulant expansion gives a better convergence and uses half as many QM calculations as NBB, although the two methods give consistent results. With 720,000 QM calculations per transformation, QM/MM free‐energy estimates with a precision of 1 kJ/mol can be obtained for all eight relative energies with ssEA, showing that this approach can be used to calculate converged QM/MM binding free energies for realistic systems and large QM partitions. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:27117350

  10. Realistic thermodynamic and statistical-mechanical measures for neural synchronization.

    PubMed

    Kim, Sang-Yoon; Lim, Woochang

    2014-04-15

    Synchronized brain rhythms, associated with diverse cognitive functions, have been observed in electrical recordings of brain activity. Neural synchronization may be well described by using the population-averaged global potential VG in computational neuroscience. The time-averaged fluctuation of VG plays the role of a "thermodynamic" order parameter O used for describing the synchrony-asynchrony transition in neural systems. Population spike synchronization may be well visualized in the raster plot of neural spikes. The degree of neural synchronization seen in the raster plot is well measured in terms of a "statistical-mechanical" spike-based measure Ms introduced by considering the occupation and the pacing patterns of spikes. The global potential VG is also used to give a reference global cycle for the calculation of Ms. Hence, VG becomes an important collective quantity because it is associated with calculation of both O and Ms. However, it is practically difficult to directly get VG in real experiments. To overcome this difficulty, instead of VG, we employ the instantaneous population spike rate (IPSR) which can be obtained in experiments, and develop realistic thermodynamic and statistical-mechanical measures, based on IPSR, to make practical characterization of the neural synchronization in both computational and experimental neuroscience. Particularly, more accurate characterization of weak sparse spike synchronization can be achieved in terms of realistic statistical-mechanical IPSR-based measure, in comparison with the conventional measure based on VG. Copyright © 2014. Published by Elsevier B.V.
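
    A rough sketch of the IPSR-based measure (illustrative only; the kernel width, toy spike data and normalisation are our assumptions, not the authors' implementation): the pooled spike train is kernel-smoothed into an instantaneous population spike rate, and the time-averaged fluctuation of that rate serves as an order parameter O.

        import numpy as np

        def ipsr(spike_times, n_neurons, t_max, dt=1e-3, sigma=5e-3):
            """Instantaneous population spike rate via Gaussian-kernel smoothing (Hz/neuron)."""
            t = np.arange(0.0, t_max, dt)
            counts, _ = np.histogram(spike_times, bins=np.append(t, t_max))
            kernel_t = np.arange(-4 * sigma, 4 * sigma, dt)
            kernel = np.exp(-kernel_t**2 / (2 * sigma**2))
            kernel /= kernel.sum() * dt                       # unit-area kernel
            return t, np.convolve(counts / (n_neurons * dt), kernel * dt, mode="same")

        def order_parameter(rate):
            return np.mean((rate - rate.mean())**2)           # time-averaged fluctuation of IPSR

        # Toy population: 100 neurons firing in loose 10 Hz volleys (synchronised case).
        rng = np.random.default_rng(0)
        spikes = np.concatenate([k * 0.1 + rng.normal(0, 0.004, 100) for k in range(1, 20)])
        t, rate = ipsr(spikes, n_neurons=100, t_max=2.0)
        print(order_parameter(rate))                          # large for synchrony, near zero if asynchronous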

  11. Hydrostatic Equilibria of Rotating Stars with Realistic Equation of State

    NASA Astrophysics Data System (ADS)

    Yasutake, Nobutoshi; Fujisawa, Kotaro; Okawa, Hirotada; Yamada, Shoichi

    Stars generally rotate, but it is a non-trivial issue to obtain hydrostatic equilibria for rapidly rotating stars theoretically, especially for baroclinic cases, in which the pressure depends not only on the density but also on the temperature and composition. Stellar structures with a realistic equation of state are clearly baroclinic, but there are not many studies of such equilibria. In this study, we propose two methods to obtain hydrostatic equilibria considering rotation and baroclinicity, namely the weak-solution method and the strong-solution method. The former is based on the variational principle, which is also applied to the calculation of the inhomogeneous phases, known as the pasta structures, in the crust of neutron stars. We found that this method might break the balance equation locally, and therefore introduce the strong-solution method. Note that our method is formulated in the mass coordinate, and it is hence appropriate for stellar evolution calculations.

  12. Application of the Probabilistic Dynamic Synthesis Method to Realistic Structures

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Ferri, Aldo A.

    1998-01-01

    The Probabilistic Dynamic Synthesis method is a technique for obtaining the statistics of a desired response engineering quantity for a structure with non-deterministic parameters. The method uses measured data from modal testing of the structure as the input random variables, rather than more "primitive" quantities like geometry or material variation. This modal information is much more comprehensive and easily measured than the "primitive" information. The probabilistic analysis is carried out using either response surface reliability methods or Monte Carlo simulation. In previous work, the feasibility of the PDS method applied to a simple seven degree-of-freedom spring-mass system was verified. In this paper, extensive issues involved with applying the method to a realistic three-substructure system are examined, and free and forced response analyses are performed. The results from using the method are promising, especially when the lack of alternatives for obtaining quantitative output for probabilistic structures is considered.

  13. Comparison of internal dose estimates obtained using organ-level, voxel S value, and Monte Carlo techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grimes, Joshua, E-mail: grimes.joshua@mayo.edu; Celler, Anna

    2014-09-15

    Purpose: The authors' objective was to compare internal dose estimates obtained using the Organ Level Dose Assessment with Exponential Modeling (OLINDA/EXM) software, the voxel S value technique, and Monte Carlo simulation. Monte Carlo dose estimates were used as the reference standard to assess the impact of patient-specific anatomy on the final dose estimate. Methods: Six patients injected with 99mTc-hydrazinonicotinamide-Tyr3-octreotide were included in this study. A hybrid planar/SPECT imaging protocol was used to estimate 99mTc time-integrated activity coefficients (TIACs) for kidneys, liver, spleen, and tumors. Additionally, TIACs were predicted for 131I, 177Lu, and 90Y assuming the same biological half-lives as the 99mTc-labeled tracer. The TIACs were used as input for OLINDA/EXM for organ-level dose calculation, and voxel-level dosimetry was performed using the voxel S value method and Monte Carlo simulation. Dose estimates for 99mTc, 131I, 177Lu, and 90Y distributions were evaluated by comparing (i) organ-level S values corresponding to each method, (ii) total tumor and organ doses, (iii) differences in right and left kidney doses, and (iv) voxelized dose distributions calculated by Monte Carlo and the voxel S value technique. Results: The S values for all investigated radionuclides used by OLINDA/EXM and the corresponding patient-specific S values calculated by Monte Carlo agreed within 2.3% on average for self-irradiation, and differed by as much as 105% for cross-organ irradiation. Total organ doses calculated by OLINDA/EXM and the voxel S value technique agreed with Monte Carlo results within approximately ±7%. Differences between right and left kidney doses determined by Monte Carlo were as high as 73%. Comparison of the Monte Carlo and voxel S value dose distributions showed that each method produced similar dose volume histograms with a minimum dose covering 90% of the volume
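
    The voxel S value step can be sketched as a 3D convolution, as below (toy arrays and made-up kernel values; real kernels are radionuclide- and voxel-size-specific).

        import numpy as np
        from scipy.signal import fftconvolve

        # Hypothetical cumulated-activity map (decays per voxel) with a small "tumour".
        cumulated_activity = np.zeros((32, 32, 32))
        cumulated_activity[14:18, 14:18, 14:18] = 1e9

        # Tiny illustrative voxel S value kernel (dose per decay, source -> target voxel).
        s_kernel = np.zeros((5, 5, 5))
        s_kernel[2, 2, 2] = 3e-8                     # self-dose voxel
        s_kernel[1:4, 1:4, 1:4] += 1e-9              # nearest-neighbour contribution

        dose = fftconvolve(cumulated_activity, s_kernel, mode="same")   # voxel dose map
        print(f"max voxel dose ~ {dose.max():.1f} (arbitrary dose units)")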

  14. Approaching bathymetry estimation from high resolution multispectral satellite images using a neuro-fuzzy technique

    NASA Astrophysics Data System (ADS)

    Corucci, Linda; Masini, Andrea; Cococcioni, Marco

    2011-01-01

    This paper addresses bathymetry estimation from high resolution multispectral satellite images by proposing an accurate supervised method based on a neuro-fuzzy approach. The method is applied to two Quickbird images of the same area, acquired in different years and meteorological conditions, and is validated using truth data. Performance is studied in different realistic situations of in situ data availability. The method achieves a mean standard deviation of 36.7 cm for estimated water depths in the range [-18, -1] m. When only data collected along a closed path are used as a training set, a mean STD of 45 cm is obtained. The effect of both meteorological conditions and training set size reduction on the overall performance is also investigated.

  15. Updated Magmatic Flux Rate Estimates for the Hawaii Plume

    NASA Astrophysics Data System (ADS)

    Wessel, P.

    2013-12-01

    Several studies have estimated the magmatic flux rate along the Hawaiian-Emperor Chain using a variety of methods and arriving at different results. These flux rate estimates have weaknesses because of incomplete data sets and different modeling assumptions, especially for the youngest portion of the chain (<3 Ma). While they generally agree on the 1st order features, there is less agreement on the magnitude and relative size of secondary flux variations. Some of these differences arise from the use of different methodologies, but the significance of this variability is difficult to assess due to a lack of confidence bounds on the estimates obtained with these disparate methods. All methods introduce some error, but to date there has been little or no quantification of error estimates for the inferred melt flux, making an assessment problematic. Here we re-evaluate the melt flux for the Hawaii plume with the latest gridded data sets (SRTM30+ and FAA 21.1) using several methods, including the optimal robust separator (ORS) and directional median filtering techniques (DiM). We also compute realistic confidence limits on the results. In particular, the DiM technique was specifically developed to aid in the estimation of surface loads that are superimposed on wider bathymetric swells and it provides error estimates on the optimal residuals. Confidence bounds are assigned separately for the estimated surface load (obtained from the ORS regional/residual separation techniques) and the inferred subsurface volume (from gravity-constrained isostasy and plate flexure optimizations). These new and robust estimates will allow us to assess which secondary features in the resulting melt flux curve are significant and should be incorporated when correlating melt flux variations with other geophysical and geochemical observations.

  16. Describing Site Amplification for Surface Waves in Realistic Basins

    NASA Astrophysics Data System (ADS)

    Bowden, D. C.; Tsai, V. C.

    2017-12-01

    Standard characterizations of site-specific site response assume a vertically-incident shear wave; given a 1D velocity profile, amplification and resonances can be calculated based on conservation of energy. A similar approach can be applied to surface waves, resulting in an estimate of amplification relative to a hard rock site that is different in terms of both amount of amplification and frequency. This prediction of surface-wave site amplification has been well validated through simple simulations, and in this presentation we explore the extent to which a 1D profile can explain observed amplifications in more realistic scenarios. Comparisons of various simple 2D and 3D simulations, for example, allow us to explore the effect of different basin shapes and the relative importance of effects such as focusing, conversion of wave-types and lateral surface wave resonances. Additionally, the 1D estimates for vertically-incident shear waves and for surface waves are compared to spectral ratios of historic events in deep sedimentary basins to demonstrate the appropriateness of the two different predictions. This difference in amplification responses between the wave types implies that a single measurement of site response, whether analytically calculated from 1D models or empirically observed, is insufficient for regions where surface waves play a strong role.
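
    As a point of reference, the conventional vertically-incident shear-wave amplification against which the surface-wave response is compared can be sketched from impedance (energy-conservation) arguments alone, as below (illustrative values; the surface-wave version requires the full 1D velocity profile and is not reproduced here).

        import math

        def sh_amplification(rho_rock, vs_rock, rho_site, vs_site):
            """Amplitude amplification relative to hard rock from the impedance ratio."""
            return math.sqrt((rho_rock * vs_rock) / (rho_site * vs_site))

        # Hard rock: 2600 kg/m3, 2500 m/s; soft basin sediments: 1900 kg/m3, 300 m/s.
        print(sh_amplification(2600, 2500, 1900, 300))   # ~3.4x amplification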

  17. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
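
    A minimal one-dimensional, two-component sketch of the fixed-point (step-size 1) iteration discussed above, with a relaxation factor omega standing in for the general step size (toy data; not the paper's derivation).

        import numpy as np

        def em_step(x, w, mu, var):
            """One successive-approximation update of weights, means and variances."""
            pdf = lambda m, v: np.exp(-(x - m)**2 / (2 * v)) / np.sqrt(2 * np.pi * v)
            resp = np.stack([w[k] * pdf(mu[k], var[k]) for k in range(2)])
            resp /= resp.sum(axis=0)                       # posterior component probabilities
            n_k = resp.sum(axis=1)
            w_new = n_k / x.size
            mu_new = (resp * x).sum(axis=1) / n_k
            var_new = (resp * (x - mu_new[:, None])**2).sum(axis=1) / n_k
            return w_new, mu_new, var_new

        def fit(x, omega=1.0, iters=200):
            w, mu, var = np.array([0.5, 0.5]), np.array([x.min(), x.max()]), np.array([1.0, 1.0])
            for _ in range(iters):
                w_t, mu_t, var_t = em_step(x, w, mu, var)
                # relaxed update; omega = 1 is the plain fixed-point iteration, and
                # local convergence is expected for step sizes between 0 and 2
                w = (1 - omega) * w + omega * w_t
                mu = (1 - omega) * mu + omega * mu_t
                var = (1 - omega) * var + omega * var_t
            return w, mu, var

        rng = np.random.default_rng(1)
        x = np.concatenate([rng.normal(-2, 1, 400), rng.normal(3, 0.5, 600)])
        print(fit(x, omega=1.0))   # weights ~ [0.4, 0.6], means ~ [-2, 3]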

  18. Realistic Evaluation of Titanium Dioxide Nanoparticle Exposure in Chewing Gum.

    PubMed

    Fiordaliso, Fabio; Foray, Claudia; Salio, Monica; Salmona, Mario; Diomede, Luisa

    2018-06-20

    There is growing concern about the presence of nanoparticles (NPs) in titanium dioxide (TiO2) used as a food additive (E171). To realistically estimate the number and the amount of TiO2 NPs ingested with food, we applied a transmission electron microscopy method combined with inductively coupled plasma optical emission spectrometry. Different percentages of TiO2 NPs (6-18%) were detected in E171 from various suppliers. In the eight chewing gums analyzed as food prototypes, TiO2 NPs were absent in one sample and ranged from 0.01 to 0.66 mg/gum, corresponding to 7-568 billion NPs/gum, in the other seven. We estimated that the mass-based TiO2 NPs ingested with chewing gums by the European population ranged from 0.28 to 112.40 μg/kg b.w./day, and children ingested more nanosized titanium than adolescents and adults. Although this level may appear negligible, it corresponds to 0.1-84 billion TiO2 NPs/kg b.w./day, raising important questions regarding their potential accumulation in the body, possibly causing long-term effects on consumers' health.
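
    As a back-of-envelope check of the mass-to-number conversion (assuming spherical particles of roughly 85 nm diameter and rutile density of about 4.2 g/cm3; these assumptions are ours, not values from the paper), the reported per-gum masses translate into particle counts of the right order.

        import math

        def particle_count(mass_mg, diameter_nm=85.0, density_g_cm3=4.2):
            """Approximate number of spherical NPs in a given mass of TiO2."""
            radius_cm = diameter_nm * 1e-7 / 2.0
            particle_mass_g = density_g_cm3 * (4.0 / 3.0) * math.pi * radius_cm**3
            return (mass_mg * 1e-3) / particle_mass_g

        print(f"{particle_count(0.66):.1e} particles")   # ~5e11, same order as the reported 568 billion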

  19. Realistic modeling of seismic input for megacities and large urban areas

    NASA Astrophysics Data System (ADS)

    Panza, G. F.; Unesco/Iugs/Igcp Project 414 Team

    2003-04-01

    The project addressed the problem of pre-disaster orientation: hazard prediction, risk assessment, and hazard mapping, in connection with seismic activity and man-induced vibrations. The definition of realistic seismic input has been obtained from the computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different source and structural models. The innovative modeling technique, which constitutes the common tool of the entire project, takes into account source, propagation and local site effects. This is done using first principles of physics about wave generation and propagation in complex media, and does not require resorting to convolutive approaches, which have proven quite unreliable, particularly when dealing with complex geological structures, the most interesting ones from the practical point of view. In fact, the several techniques that have been proposed to empirically estimate site effects, using observations convolved with theoretically computed signals corresponding to simplified models, supply reliable information only about the site response to non-interfering seismic phases; they are not adequate in most real cases, where the seismic wave train is formed by several interfering waves. The availability of realistic numerical simulations enables us to reliably estimate the amplification effects even in complex geological structures, exploiting the available geotechnical, lithological and geophysical parameters, the topography of the medium, tectonic, historical and palaeoseismological data, and seismotectonic models. The realistic modeling of the ground motion is a very important base of knowledge for the preparation of ground-shaking scenarios, which represent a valid and economical tool for seismic microzonation. This knowledge can be very fruitfully used by civil engineers in the design of new seismo-resistant constructions and in the reinforcement of the existing built environment, and, therefore

  20. An optimal pole-matching observer design for estimating tyre-road friction force

    NASA Astrophysics Data System (ADS)

    Faraji, Mohammad; Johari Majd, Vahid; Saghafi, Behrooz; Sojoodi, Mahdi

    2010-10-01

    In this paper, considering the dynamical model of tyre-road contacts, we design a nonlinear observer for the on-line estimation of tyre-road friction force using the average lumped LuGre model without any simplification. The design extends a previously proposed observer to allow a much more realistic estimation by considering the effect of the rolling resistance and a term related to the relative velocity in the observer. Our aim is not to introduce a new friction model, but to present a more accurate nonlinear observer for the assumed model. We derive linear matrix equality conditions to obtain an observer gain with minimum pole mismatch for the desired observer error dynamic system. We prove the convergence of the observer for the non-simplified model. Finally, we compare the performance of the proposed observer with that of the previously mentioned nonlinear observer; the comparison shows a significant improvement in the accuracy of estimation.
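
    For orientation, the sketch below integrates one common form of the lumped LuGre friction dynamics that such an observer estimates. The parameter values are illustrative assumptions, and the paper's non-simplified model additionally retains rolling-resistance and relative-velocity terms in the observer itself.

```python
import numpy as np

# Minimal sketch of (one common form of) the lumped LuGre friction dynamics
# estimated by the observer above. Parameter values and the forward-Euler
# integration are illustrative, not the paper's non-simplified model.
sigma0, sigma1, sigma2 = 100.0, 0.7, 0.01   # bristle stiffness / damping (assumed)
mu_c, mu_s, v_s = 0.8, 1.2, 5.0             # Coulomb / static friction, Stribeck velocity
Fn = 4000.0                                  # normal load [N] (assumed)

def g(vr):
    """Stribeck curve bounding the steady-state friction level."""
    return mu_c + (mu_s - mu_c) * np.exp(-(vr / v_s) ** 2)

def simulate(vr, t_end=0.5, dt=1e-4):
    """Integrate the internal bristle state z and return the friction force."""
    z, forces = 0.0, []
    for _ in range(int(t_end / dt)):
        zdot = vr - sigma0 * abs(vr) / g(vr) * z
        z += dt * zdot
        forces.append(Fn * (sigma0 * z + sigma1 * zdot + sigma2 * vr))
    return np.array(forces)

F = simulate(vr=2.0)      # constant relative (slip) velocity of 2 m/s
print(f"steady-state friction force ~ {F[-1]:.0f} N")
```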

  1. Realistic nurse-led policy implementation, optimization and evaluation: novel methodological exemplar.

    PubMed

    Noyes, Jane; Lewis, Mary; Bennett, Virginia; Widdas, David; Brombley, Karen

    2014-01-01

    To report the first large-scale realistic nurse-led implementation, optimization and evaluation of a complex children's continuing-care policy. Health policies are increasingly complex, involve multiple Government departments and frequently fail to translate into better patient outcomes. Realist methods have not yet been adapted for policy implementation. Research methodology - Evaluation using theory-based realist methods for policy implementation. An expert group developed the policy and supporting tools. Implementation and evaluation design integrated diffusion of innovation theory with multiple case study and adapted realist principles. Practitioners in 12 English sites worked with Consultant Nurse implementers to manipulate the programme theory and logic of new decision-support tools and care pathway to optimize local implementation. Methods included key-stakeholder interviews, developing practical diffusion of innovation processes using key-opinion leaders and active facilitation strategies and a mini-community of practice. New and existing processes and outcomes were compared for 137 children during 2007-2008. Realist principles were successfully adapted to a shorter policy implementation and evaluation time frame. Important new implementation success factors included facilitated implementation that enabled 'real-time' manipulation of programme logic and local context to best-fit evolving theories of what worked; using local experiential opinion to change supporting tools to more realistically align with local context and what worked; and having sufficient existing local infrastructure to support implementation. Ten mechanisms explained implementation success and differences in outcomes between new and existing processes. Realistic policy implementation methods have advantages over top-down approaches, especially where clinical expertise is low and unlikely to diffuse innovations 'naturally' without facilitated implementation and local optimization. © 2013

  2. Oracle estimation of parametric models under boundary constraints.

    PubMed

    Wong, Kin Yau; Goldberg, Yair; Fine, Jason P

    2016-12-01

    In many classical estimation problems, the parameter space has a boundary. In most cases, the standard asymptotic properties of the estimator do not hold when some of the underlying true parameters lie on the boundary. However, without knowledge of the true parameter values, confidence intervals constructed assuming that the parameters lie in the interior are generally over-conservative. A penalized estimation method is proposed in this article to address this issue. An adaptive lasso procedure is employed to shrink the parameters to the boundary, yielding oracle inference that adapts to whether or not the true parameters are on the boundary. When the true parameters are on the boundary, the inference is equivalent to that which would be achieved with a priori knowledge of the boundary, while if the converse is true, the inference is equivalent to that which is obtained in the interior of the parameter space. The method is demonstrated under two practical scenarios, namely the frailty survival model and linear regression with order-restricted parameters. Simulation studies and real data analyses show that the method performs well with realistic sample sizes and exhibits certain advantages over standard methods. © 2016, The International Biometric Society.

  3. Robust mode space approach for atomistic modeling of realistically large nanowire transistors

    NASA Astrophysics Data System (ADS)

    Huang, Jun Z.; Ilatikhameneh, Hesameddin; Povolotskyi, Michael; Klimeck, Gerhard

    2018-01-01

    Nanoelectronic transistors have reached 3D length scales in which the number of atoms is countable. Truly atomistic device representations are needed to capture the essential functionalities of the devices. Atomistic quantum transport simulations of realistically extended devices are, however, computationally very demanding. The widely used mode space (MS) approach can significantly reduce the numerical cost, but a good MS basis is usually very hard to obtain for atomistic full-band models. In this work, a robust and parallel algorithm is developed to optimize the MS basis for atomistic nanowires. This enables engineering-level, reliable tight binding non-equilibrium Green's function simulation of nanowire metal-oxide-semiconductor field-effect transistor (MOSFET) with a realistic cross section of 10 nm × 10 nm using a small computer cluster. This approach is applied to compare the performance of InGaAs and Si nanowire n-type MOSFETs (nMOSFETs) with various channel lengths and cross sections. Simulation results with full-band accuracy indicate that InGaAs nanowire nMOSFETs have no drive current advantage over their Si counterparts for cross sections up to about 10 nm × 10 nm.

  4. Estimation of bare soil evaporation using multifrequency airborne SAR

    NASA Technical Reports Server (NTRS)

    Soares, Joao V.; Shi, Jiancheng; Van Zyl, Jakob; Engman, E. T.

    1992-01-01

    It is shown that for homogeneous areas soil moisture can be derived from synthetic aperture radar (SAR) measurements, so that the use of microwave remote sensing can give realistic estimates of energy fluxes if coupled to a simple two-layer model representing the soil. The model simulates volumetric water content (Wg) using classical meteorological data, provided that some of the soil thermal and hydraulic properties are known. Only four parameters are necessary: mean water content, thermal conductivity and diffusivity, and soil resistance to evaporation. They may be derived if a minimal number of measured values of Wg and surface layer temperature (Tg) are available together with independent measurements of energy flux to compare with the estimated values. The estimated evaporation is shown to be realistic and in good agreement with drying stage theory, in which the transfer of water in the soil is in vapor form.

  5. Quantum mechanics without the projection postulate and its realistic interpretation

    NASA Astrophysics Data System (ADS)

    Dieks, D.

    1989-11-01

    It is widely held that quantum mechanics is the first scientific theory to present scientifically internal, fundamental difficulties for a realistic interpretation (in the philosophical sense). The standard (Copenhagen) interpretation of the quantum theory is often described as the inevitable instrumentalistic response. It is the purpose of the present article to argue that quantum theory does not present fundamental new problems to a realistic interpretation. The formalism of quantum theory has the same status—it will be argued—as the formalisms of older physical theories and is capable of the same kinds of philosophical interpretation. This result is reached via an analysis of what it means to give a realistic interpretation to a theory. The main point of difference between quantum mechanics and other theories—as far as the possibilities of interpretation are concerned—is the special treatment given to measurement by the “projection postulate.” But it is possible to do without this postulate. Moreover, rejection of the projection postulate does not, in spite of what is often maintained in the literature, automatically lead to the many-worlds interpretation of quantum mechanics. A realistic interpretation is possible in which only the reality of one (our) world is recognized. It is argued that the Copenhagen interpretation as expounded by Bohr is not in conflict with the here proposed realistic interpretation of quantum theory.

  6. Coupling of Realistic Rate Estimates with Genomics for Assessing Contaminant Attenuation and Long-Term Plume Containment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colwell, F. S.; Crawford, R. L.; Sorenson, K.

    2005-09-01

    Acceptance of monitored natural attenuation (MNA) as a preferred treatment technology saves significant site restoration costs for DOE. However, in order to be accepted, MNA requires direct evidence of which processes are responsible for the contaminant loss and also the rates of the contaminant loss. Our proposal aims to: 1) provide evidence for one example of MNA, namely the disappearance of the dissolved trichloroethylene (TCE) from the Snake River Plain aquifer (SRPA) at the Idaho National Laboratory’s Test Area North (TAN) site, 2) determine the rates at which aquifer microbes can co-metabolize TCE, and 3) determine whether there are other examples of natural attenuation of chlorinated solvents occurring at DOE sites. To this end, our research has several objectives. First, we have conducted studies to characterize the microbial processes that are likely responsible for the co-metabolic destruction of TCE in the aquifer at TAN (University of Idaho and INL). Second, we are investigating realistic rates of TCE co-metabolism at the low catabolic activities typical of microorganisms existing under aquifer conditions (INL). Using the co-metabolism rate parameters derived in low-growth bioreactors, we will complete the models that predict the time until background levels of TCE are attained in the aquifer at TAN and validate the long-term stewardship of this plume. Coupled with the research on low catabolic activities of co-metabolic microbes we are determining the patterns of functional gene expression by these cells, patterns that may be used to diagnose the co-metabolic activity in the SRPA or other aquifers. Third, we have systematically considered the aquifer contaminants at different locations in plumes at other DOE sites in order to determine whether MNA is a broadly applicable remediation strategy for chlorinated hydrocarbons (North Wind Inc.). Realistic terms for co-metabolism of TCE will provide marked improvements in DOE’s ability to predict

  7. Novel Micropatterned Cardiac Cell Cultures with Realistic Ventricular Microstructure

    PubMed Central

    Badie, Nima; Bursac, Nenad

    2009-01-01

    Systematic studies of cardiac structure-function relationships to date have been hindered by the intrinsic complexity and variability of in vivo and ex vivo model systems. Thus, we set out to develop a reproducible cell culture system that can accurately replicate the realistic microstructure of native cardiac tissues. Using cell micropatterning techniques, we aligned cultured cardiomyocytes at micro- and macroscopic spatial scales to follow local directions of cardiac fibers in murine ventricular cross sections, as measured by high-resolution diffusion tensor magnetic resonance imaging. To elucidate the roles of ventricular tissue microstructure in macroscopic impulse conduction, we optically mapped membrane potentials in micropatterned cardiac cultures with realistic tissue boundaries and natural cell orientation, cardiac cultures with realistic tissue boundaries but random cell orientation, and standard isotropic monolayers. At 2 Hz pacing, both microscopic changes in cell orientation and ventricular tissue boundaries independently and synergistically increased the spatial dispersion of conduction velocity, but not the action potential duration. The realistic variations in intramural microstructure created unique spatial signatures in micro- and macroscopic impulse propagation within ventricular cross-section cultures. This novel in vitro model system is expected to help bridge the existing gap between experimental structure-function studies in standard cardiac monolayers and intact heart tissues. PMID:19413993

  8. LC-MS/MS-based approach for obtaining exposure estimates of metabolites in early clinical trials using radioactive metabolites as reference standards.

    PubMed

    Zhang, Donglu; Raghavan, Nirmala; Chando, Theodore; Gambardella, Janice; Fu, Yunlin; Zhang, Duxi; Unger, Steve E; Humphreys, W Griffith

    2007-12-01

    An LC-MS/MS-based approach that employs authentic radioactive metabolites as reference standards was developed to estimate metabolite exposures in early drug development studies. This method is useful to estimate metabolite levels in studies done with non-radiolabeled compounds where metabolite standards are not available to allow standard LC-MS/MS assay development. A metabolite mixture obtained from an in vivo source treated with a radiolabeled compound was partially purified, quantified, and spiked into human plasma to provide metabolite standard curves. Metabolites were analyzed by LC-MS/MS using the specific mass transitions and an internal standard. The metabolite concentrations determined by this approach were found to be comparable to those determined by valid LC-MS/MS assays. This approach does not require synthesis of authentic metabolites or the knowledge of exact structures of metabolites, and therefore should provide a useful method to obtain early estimates of circulating metabolites in early clinical or toxicological studies.

  9. Dual respiratory and cardiac motion estimation in PET imaging: Methods design and quantitative evaluation.

    PubMed

    Feng, Tao; Wang, Jizhe; Tsui, Benjamin M W

    2018-04-01

    The goal of this study was to develop and evaluate four post-reconstruction respiratory and cardiac (R&C) motion vector field (MVF) estimation methods for cardiac 4D PET data. In Method 1, the dual R&C motions were estimated directly from the dual R&C gated images. In Method 2, respiratory motion (RM) and cardiac motion (CM) were separately estimated from the respiratory gated only and cardiac gated only images. The effects of RM on CM estimation were modeled in Method 3 by applying an image-based RM correction on the cardiac gated images before CM estimation; the effects of CM on RM estimation were neglected. Method 4 iteratively models the mutual effects of RM and CM during dual R&C motion estimation. Realistic simulation data were generated for quantitative evaluation of the four methods. Almost noise-free PET projection data were generated from the 4D XCAT phantom with realistic R&C MVF using Monte Carlo simulation. Poisson noise was added to the scaled projection data to generate additional datasets of two more different noise levels. All the projection data were reconstructed using a 4D image reconstruction method to obtain dual R&C gated images. The four dual R&C MVF estimation methods were applied to the dual R&C gated images and the accuracy of motion estimation was quantitatively evaluated using the root mean square error (RMSE) of the estimated MVFs. Results show that among the four estimation methods, Method 2 performed the worst for the noise-free case while Method 1 performed the worst for the noisy cases in terms of quantitative accuracy of the estimated MVF. Methods 3 and 4 showed comparable results and achieved RMSEs up to 35% lower than Method 1 for noisy cases. In conclusion, we have developed and evaluated four different post-reconstruction R&C MVF estimation methods for use in 4D PET imaging. Comparison of the performance of the four methods on simulated data indicates separate R&C estimation with modeling of RM before CM estimation (Method 3) to be

  10. Implementing Realistic Helicopter Physics in 3D Game Environments

    DTIC Science & Technology

    2002-09-01

    developed a highly realistic and innovative PC video game that puts you inside an Army unit. You’ll face your first tour of duty along with your fellow...helicopter physics. Many other video games include helicopters but omit realistic third person helicopter behaviors in their applications. Of the 48...to be too computationally expensive for a PC based video game . Generally, some basic parts of blade element theory are present in any attempt to

  11. Family Relationships in Realistic Young Adult Fiction, 1987 to 1991.

    ERIC Educational Resources Information Center

    Sampson, Cathie

    The purpose of this study was to determine how parents and family relationships are characterized in realistic young adult fiction. A random sample of 20 realistic young adult novels was selected from the American Library Association's Best Lists for the years 1987-1991. A content analysis of the novels focused on the following: (1) whether…

  12. Blend Shape Interpolation and FACS for Realistic Avatar

    NASA Astrophysics Data System (ADS)

    Alkawaz, Mohammed Hazim; Mohamad, Dzulkifli; Basori, Ahmad Hoirul; Saba, Tanzila

    2015-03-01

    The quest to develop realistic facial animation is ever-growing. The emergence of sophisticated algorithms, new graphical user interfaces, laser scans and advanced 3D tools has given further impetus to the rapid advancement of complex virtual human facial models. Face-to-face communication being the most natural form of human interaction, facial animation systems have become attractive in the information technology era for sundry applications. The production of computer-animated movies using synthetic actors, however, is still a challenging issue. A facial expression carries the signature of happiness, sadness, anger, cheerfulness, etc., and the mood of a particular person in the midst of a large group can immediately be identified via very subtle changes in facial expression. Facial expressions, being a very complex as well as important nonverbal communication channel, are tricky to synthesize realistically using computer graphics. Computer synthesis of practical facial expressions must deal with both the geometric representation of the human face and the control of the facial animation. We developed a new approach that integrates blend shape interpolation (BSI) and the facial action coding system (FACS) to create a realistic and expressive computer facial animation design. BSI is used to generate the natural face, while FACS is employed to reflect the exact facial muscle movements for four basic natural emotional expressions, namely anger, happiness, sadness and fear, with high fidelity. The resulting realistic facial expressions for virtual human emotions, based on facial skin color and texture, may contribute towards the development of virtual reality and game environments in computer-aided graphics animation systems.
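
    A minimal sketch of the blend shape interpolation step: the deformed face is the neutral mesh plus a weighted sum of per-expression displacements, with the weights playing the role of FACS-style action-unit activations. The tiny mesh, action-unit names and weights are illustrative only.

```python
import numpy as np

# Minimal sketch of blend shape interpolation (BSI): the deformed face is the
# neutral mesh plus a weighted sum of per-expression displacement fields.
# The 4-vertex "mesh" and FACS-style weights are illustrative, not real data.
neutral = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [1.0, 1.0, 0.0]])

# target shapes for two hypothetical action units (e.g. brow raiser, lip corner puller)
targets = {
    "AU1_brow_raiser": neutral + np.array([[0, 0, 0], [0, 0, 0], [0, 0.1, 0.05], [0, 0.1, 0.05]]),
    "AU12_lip_puller": neutral + np.array([[0.05, 0, 0.02], [-0.05, 0, 0.02], [0, 0, 0], [0, 0, 0]]),
}

def blend(weights):
    """Linear blend-shape interpolation driven by FACS-like activations in [0, 1]."""
    face = neutral.copy()
    for name, w in weights.items():
        face += w * (targets[name] - neutral)   # add the weighted displacement
    return face

# a "happy" configuration might activate the lip-corner puller strongly
happy = blend({"AU1_brow_raiser": 0.2, "AU12_lip_puller": 0.9})
print(happy)
```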

  13. A time-frequency analysis method to obtain stable estimates of magnetotelluric response function based on Hilbert-Huang transform

    NASA Astrophysics Data System (ADS)

    Cai, Jianhua

    2017-05-01

    The time-frequency analysis method represents a signal as a function of time and frequency, and it is considered a powerful tool for handling arbitrary non-stationary time series by using instantaneous frequency and instantaneous amplitude. It also provides a possible alternative to the analysis of the non-stationary magnetotelluric (MT) signal. Based on the Hilbert-Huang transform (HHT), a time-frequency analysis method is proposed to obtain stable estimates of the magnetotelluric response function. In contrast to conventional methods, the response function estimation is performed in the time-frequency domain using instantaneous spectra rather than in the frequency domain, which allows for imaging the response parameter content as a function of time and frequency. The theory of the method is presented, and the mathematical model and calculation procedure, which are used to estimate the response function based on the HHT time-frequency spectrum, are discussed. To evaluate the results, response function estimates are compared with estimates from a standard MT data processing method based on the Fourier transform. All results show that apparent resistivities and phases, which are calculated from the HHT time-frequency method, are generally more stable and reliable than those determined from the simple Fourier analysis. The proposed method overcomes the drawbacks of the traditional Fourier methods, and the resulting estimates minimise the bias caused by the non-stationary characteristics of the MT data.
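
    The Hilbert step of the HHT, which yields the instantaneous amplitude and frequency used in place of Fourier spectra, can be sketched as follows. The synthetic chirp stands in for an intrinsic mode function, and the empirical mode decomposition stage that would normally produce it is omitted.

```python
import numpy as np
from scipy.signal import hilbert

# Sketch of the Hilbert step of the HHT: given one intrinsic mode function
# (here a synthetic chirp standing in for an IMF from empirical mode
# decomposition), compute instantaneous amplitude and frequency. In the MT
# application, such instantaneous spectra of the field channels feed the
# response-function estimate instead of Fourier spectra.
fs = 100.0                                   # sampling frequency [Hz]
t = np.arange(0, 20, 1.0 / fs)
imf = (1.0 + 0.3 * np.sin(0.2 * np.pi * t)) * np.sin(2 * np.pi * (1.0 + 0.1 * t) * t)

analytic = hilbert(imf)
inst_amp = np.abs(analytic)                             # instantaneous amplitude
inst_phase = np.unwrap(np.angle(analytic))
inst_freq = np.gradient(inst_phase, t) / (2 * np.pi)    # instantaneous frequency [Hz]

print(f"median instantaneous frequency ~ {np.median(inst_freq):.2f} Hz")
```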

  14. Obtaining Parts

    Science.gov Websites

    The Cosmic Connection: parts for the Berkeley Detector. Suppliers (for example, Eljen Technology for the scintillator) are listed from which to obtain the components needed to build the Berkeley Detector; these companies have helped previous builders as of the last update. He estimates that the cost to build a detector varies from $1500 to $2700 depending …

  15. Realistic molecular model of kerogen's nanostructure

    NASA Astrophysics Data System (ADS)

    Bousige, Colin; Ghimbeu, Camélia Matei; Vix-Guterl, Cathie; Pomerantz, Andrew E.; Suleimenova, Assiya; Vaughan, Gavin; Garbarino, Gaston; Feygenson, Mikhail; Wildgruber, Christoph; Ulm, Franz-Josef; Pellenq, Roland J.-M.; Coasne, Benoit

    2016-05-01

    Despite kerogen's importance as the organic backbone for hydrocarbon production from source rocks such as gas shale, the interplay between kerogen's chemistry, morphology and mechanics remains unexplored. As the environmental impact of shale gas rises, identifying functional relations between its geochemical, transport, elastic and fracture properties from realistic molecular models of kerogens becomes all the more important. Here, by using a hybrid experimental-simulation method, we propose a panel of realistic molecular models of mature and immature kerogens that provide a detailed picture of kerogen's nanostructure without considering the presence of clays and other minerals in shales. We probe the models' strengths and limitations, and show that they predict essential features amenable to experimental validation, including pore distribution, vibrational density of states and stiffness. We also show that kerogen's maturation, which manifests itself as an increase in the sp2/sp3 hybridization ratio, entails a crossover from plastic-to-brittle rupture mechanisms.

  16. Realistic molecular model of kerogen's nanostructure.

    PubMed

    Bousige, Colin; Ghimbeu, Camélia Matei; Vix-Guterl, Cathie; Pomerantz, Andrew E; Suleimenova, Assiya; Vaughan, Gavin; Garbarino, Gaston; Feygenson, Mikhail; Wildgruber, Christoph; Ulm, Franz-Josef; Pellenq, Roland J-M; Coasne, Benoit

    2016-05-01

    Despite kerogen's importance as the organic backbone for hydrocarbon production from source rocks such as gas shale, the interplay between kerogen's chemistry, morphology and mechanics remains unexplored. As the environmental impact of shale gas rises, identifying functional relations between its geochemical, transport, elastic and fracture properties from realistic molecular models of kerogens becomes all the more important. Here, by using a hybrid experimental-simulation method, we propose a panel of realistic molecular models of mature and immature kerogens that provide a detailed picture of kerogen's nanostructure without considering the presence of clays and other minerals in shales. We probe the models' strengths and limitations, and show that they predict essential features amenable to experimental validation, including pore distribution, vibrational density of states and stiffness. We also show that kerogen's maturation, which manifests itself as an increase in the sp(2)/sp(3) hybridization ratio, entails a crossover from plastic-to-brittle rupture mechanisms.

  17. Estimation of the size of drug-like chemical space based on GDB-17 data.

    PubMed

    Polishchuk, P G; Madzhidov, T I; Varnek, A

    2013-08-01

    The goal of this paper is to estimate the number of realistic drug-like molecules which could ever be synthesized. Unlike previous studies based on exhaustive enumeration of molecular graphs or on combinatorial enumeration of preselected fragments, we used the results of constrained graph enumeration by Reymond to establish a correlation between the number of generated structures (M) and the number of heavy atoms (N): logM = 0.584 × N × logN + 0.356. The number of atoms limiting the drug-like chemical space of molecules which follow Lipinski's rules (N = 36) has been obtained from the analysis of the PubChem database. This results in M ≈ 10³³, which lies between the numbers estimated by Ertl (10²³) and by Bohacek (10⁶⁰).
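
    The quoted estimate can be reproduced directly from the correlation, reading log as base-10 (consistent with the stated result M ≈ 10³³):

```python
import math

# Quick check of the reported correlation logM = 0.584 * N * log10(N) + 0.356
# at the Lipinski-derived heavy-atom limit N = 36 obtained from PubChem.
N = 36
logM = 0.584 * N * math.log10(N) + 0.356
print(f"log10(M) ~ {logM:.1f}  ->  M ~ 10^{round(logM)}")   # about 10^33
```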

  18. Experimental Quasi-Microwave Whole-Body Averaged SAR Estimation Method Using Cylindrical-External Field Scanning

    NASA Astrophysics Data System (ADS)

    Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio

    The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and numerical analyses based on the finite-difference time-domain (FDTD) method. We consider anatomical European human phantoms and plane-wave exposure in the 2 GHz mobile phone frequency band. The measured whole-body averaged SAR results obtained by the proposed method are compared with the results of the FDTD analyses.

  19. Realistic loophole-free Bell test with atom-photon entanglement

    NASA Astrophysics Data System (ADS)

    Teo, C.; Araújo, M.; Quintino, M. T.; Minář, J.; Cavalcanti, D.; Scarani, V.; Terra Cunha, M.; França Santos, M.

    2013-07-01

    The establishment of nonlocal correlations, guaranteed through the violation of a Bell inequality, is not only important from a fundamental point of view but constitutes the basis for device-independent quantum information technologies. Although several nonlocality tests have been conducted so far, all of them suffered from either locality or detection loopholes. Among the proposals for overcoming these problems are the use of atom-photon entanglement and hybrid photonic measurements (for example, photodetection and homodyning). Recent studies have suggested that the use of atom-photon entanglement can lead to Bell inequality violations with moderate transmission and detection efficiencies. Here we combine these ideas and propose an experimental setup realizing a simple atom-photon entangled state that can be used to obtain nonlocality when considering realistic experimental parameters including detection efficiencies and losses due to required propagation distances.

  20. Using digital colour to increase the realistic appearance of SEM micrographs of bloodstains.

    PubMed

    Hortolà, Policarp

    2010-10-01

    Although in the scientific-research literature the micrographs from scanning electron microscopes (SEMs) are usually displayed in greyscale, the potential of the colour resources provided by SEM-coupled image-acquiring systems and, secondarily, by free image-manipulation software deserves to be explored as a tool for colouring SEM micrographs of bloodstains. After greyscale SEM micrographs of a human blood smear (dark red to the naked eye) on grey chert were acquired, red-toned versions were obtained manually using both the SEM-coupled image-acquiring system and free image-manipulation software, and thermal-tone versions were generated automatically using the SEM-coupled system. Red images obtained by the SEM-coupled system demonstrated lower visual-discrimination capability than the other coloured images, whereas the red images generated by the free software conveyed more visual information than those generated by the SEM-coupled system. Thermal-tone images, although further from the real sample colour than the red ones, not only increased the realistic appearance over the greyscale images, but also yielded the best visual-discrimination capability among all the coloured SEM micrographs, and fairly enhanced the relief effect of the SEM micrographs over both the greyscale and the red images. The application of digital colour by means of the facilities provided by an SEM-coupled image-acquiring system or, when required, by free image-manipulation software provides a user-friendly, quick and inexpensive way of obtaining coloured SEM micrographs of bloodstains, avoiding sophisticated, time-consuming colouring procedures. Although this work focused on bloodstains, other monochromatic or quasi-monochromatic samples can quite probably also gain realistic appearance by colouring them with the simple methods used in this study.

  1. Obtaining parsimonious hydraulic conductivity fields using head and transport observations: A Bayesian geostatistical parameter estimation approach

    NASA Astrophysics Data System (ADS)

    Fienen, M.; Hunt, R.; Krabbenhoft, D.; Clemo, T.

    2009-08-01

    Flow path delineation is a valuable tool for interpreting the subsurface hydrogeochemical environment. Different types of data, such as groundwater flow and transport, inform different aspects of hydrogeologic parameter values (hydraulic conductivity in this case) which, in turn, determine flow paths. This work combines flow and transport information to estimate a unified set of hydrogeologic parameters using the Bayesian geostatistical inverse approach. Parameter flexibility is allowed by using a highly parameterized approach with the level of complexity informed by the data. Despite the effort to adhere to the ideal of minimal a priori structure imposed on the problem, extreme contrasts in parameters can result in the need to censor correlation across hydrostratigraphic bounding surfaces. These partitions segregate parameters into facies associations. With an iterative approach in which partitions are based on inspection of initial estimates, flow path interpretation is progressively refined through the inclusion of more types of data. Head observations, stable oxygen isotopes (18O/16O ratios), and tritium are all used to progressively refine flow path delineation on an isthmus between two lakes in the Trout Lake watershed, northern Wisconsin, United States. Despite allowing significant parameter freedom by estimating many distributed parameter values, a smooth field is obtained.

  2. Obtaining parsimonious hydraulic conductivity fields using head and transport observations: A Bayesian geostatistical parameter estimation approach

    USGS Publications Warehouse

    Fienen, M.; Hunt, R.; Krabbenhoft, D.; Clemo, T.

    2009-01-01

    Flow path delineation is a valuable tool for interpreting the subsurface hydrogeochemical environment. Different types of data, such as groundwater flow and transport, inform different aspects of hydrogeologic parameter values (hydraulic conductivity in this case) which, in turn, determine flow paths. This work combines flow and transport information to estimate a unified set of hydrogeologic parameters using the Bayesian geostatistical inverse approach. Parameter flexibility is allowed by using a highly parameterized approach with the level of complexity informed by the data. Despite the effort to adhere to the ideal of minimal a priori structure imposed on the problem, extreme contrasts in parameters can result in the need to censor correlation across hydrostratigraphic bounding surfaces. These partitions segregate parameters into facies associations. With an iterative approach in which partitions are based on inspection of initial estimates, flow path interpretation is progressively refined through the inclusion of more types of data. Head observations, stable oxygen isotopes (18O/16O ratios), and tritium are all used to progressively refine flow path delineation on an isthmus between two lakes in the Trout Lake watershed, northern Wisconsin, United States. Despite allowing significant parameter freedom by estimating many distributed parameter values, a smooth field is obtained.

  3. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  4. Feasibility of Measuring Mean Vertical Motion for Estimating Advection. Chapter 6

    NASA Technical Reports Server (NTRS)

    Vickers, Dean; Mahrt, L.

    2005-01-01

    Numerous recent studies calculate horizontal and vertical advection terms for budget studies of net ecosystem exchange of carbon. One potential uncertainty in such studies is the estimate of mean vertical motion. This work addresses the reliability of vertical advection estimates by contrasting the vertical motion obtained from the standard practice of measuring the vertical velocity and applying a tilt correction with the vertical motion calculated from measurements of the horizontal divergence of the flow using a network of towers. Results are compared for three different tilt correction methods. Estimates of mean vertical motion are sensitive to the choice of tilt correction method. The short-term mean (10 to 60 minutes) vertical motion based on the horizontal divergence is more realistic than the estimates derived from the standard practice. The divergence shows long-term mean (days to months) sinking motion at the site, apparently due to the surface roughness change. Because all the tilt correction methods rely on the assumption that the long-term mean vertical motion is zero for a given wind direction, they fail to reproduce the vertical motion based on the divergence.
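
    The divergence-based estimate of mean vertical motion follows from incompressible mass continuity, w(z) = -integral from 0 to z of (du/dx + dv/dy) dz'. A minimal sketch with illustrative divergence values (standing in for the tower-network estimates) is given below.

```python
import numpy as np

# Minimal sketch of the divergence-based estimate of mean vertical motion:
# by incompressible mass continuity, w(z) = -int_0^z (du/dx + dv/dy) dz'.
# The horizontal divergence values are illustrative, standing in for
# estimates obtained from a network of towers around the flux site.
z = np.array([0.0, 10.0, 20.0, 30.0])                  # heights [m]
divergence = np.array([2e-4, 1.5e-4, 1.0e-4, 0.8e-4])  # du/dx + dv/dy [1/s] (assumed)

# cumulative trapezoidal integration of -divergence with height
w = -np.concatenate(([0.0], np.cumsum(0.5 * (divergence[1:] + divergence[:-1]) * np.diff(z))))
for zi, wi in zip(z, w):
    print(f"z = {zi:4.0f} m   w ~ {wi * 100:6.2f} cm/s")
```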

  5. Assessing methane emission estimation methods based on atmospheric measurements from oil and gas production using LES simulations

    NASA Astrophysics Data System (ADS)

    Saide, P. E.; Steinhoff, D.; Kosovic, B.; Weil, J.; Smith, N.; Blewitt, D.; Delle Monache, L.

    2017-12-01

    There are a wide variety of methods that have been proposed and used to estimate methane emissions from oil and gas production by using air composition and meteorology observations in conjunction with dispersion models. Although there has been some verification of these methodologies using controlled releases and concurrent atmospheric measurements, it is difficult to assess the accuracy of these methods for more realistic scenarios considering factors such as terrain, emissions from multiple components within a well pad, and time-varying emissions representative of typical operations. In this work we use a large-eddy simulation (LES) to generate controlled but realistic synthetic observations, which can be used to test multiple source term estimation methods, also known as an Observing System Simulation Experiment (OSSE). The LES is based on idealized simulations of the Weather Research & Forecasting (WRF) model at 10 m horizontal grid-spacing covering an 8 km by 7 km domain with terrain representative of a region located in the Barnett shale. Well pads are setup in the domain following a realistic distribution and emissions are prescribed every second for the components of each well pad (e.g., chemical injection pump, pneumatics, compressor, tanks, and dehydrator) using a simulator driven by oil and gas production volume, composition and realistic operational conditions. The system is setup to allow assessments under different scenarios such as normal operations, during liquids unloading events, or during other prescribed operational upset events. Methane and meteorology model output are sampled following the specifications of the emission estimation methodologies and considering typical instrument uncertainties, resulting in realistic observations (see Figure 1). We will show the evaluation of several emission estimation methods including the EPA Other Test Method 33A and estimates using the EPA AERMOD regulatory model. We will also show source estimation

  6. A methodological approach to a realistic evaluation of skin absorbed doses during manipulation of radioactive sources by means of GAMOS Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Italiano, Antonio; Amato, Ernesto; Auditore, Lucrezia; Baldari, Sergio

    2018-05-01

    The accurate evaluation of the radiation burden associated with radiation absorbed doses to the skin of the extremities during the manipulation of radioactive sources is a critical issue in operational radiological protection, deserving the most accurate calculation approaches available. Monte Carlo simulation of the radiation transport and interaction is the gold standard for the calculation of dose distributions in complex geometries and in presence of extended spectra of multi-radiation sources. We propose the use of Monte Carlo simulations in GAMOS, in order to accurately estimate the dose to the extremities during manipulation of radioactive sources. We report the results of these simulations for 90Y, 131I, 18F and 111In nuclides in water solutions enclosed in glass or plastic receptacles, such as vials or syringes. Skin equivalent doses at 70 μm of depth and dose-depth profiles are reported for different configurations, highlighting the importance of adopting a realistic geometrical configuration in order to get accurate dosimetric estimations. Due to the easiness of implementation of GAMOS simulations, case-specific geometries and nuclides can be adopted and results can be obtained in less than about ten minutes of computation time with a common workstation.

  7. Analysis and Modeling of Realistic Compound Channels in Transparent Relay Transmissions

    PubMed Central

    Kanjirathumkal, Cibile K.; Mohammed, Sameer S.

    2014-01-01

    Analytical approaches for the characterisation of the compound channels in transparent multihop relay transmissions over independent fading channels are considered in this paper. Compound channels with homogeneous links are considered first. Using the Mellin transform technique, exact expressions are derived for the moments of cascaded Weibull distributions. Subsequently, two performance metrics, namely the coefficient of variation and the amount of fade, are derived using the computed moments. These metrics quantify the possible variations of the channel gain and signal-to-noise ratio from their respective average values and can be used to characterise the achievable receiver performance. This approach is suitable for analysing more realistic compound-channel models that capture the scattering-density variations of the environment experienced in multihop relay transmissions. The performance metrics for such heterogeneous compound channels, having a distinct distribution in each hop, are computed and compared with those having identical constituent component distributions. The moments and the coefficient of variation computed are then used to develop computationally efficient estimators for the distribution parameters and the optimal hop count. The metrics and estimators proposed are complemented with numerical and simulation results to demonstrate the accuracy of the approaches. PMID:24701175
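
    The moment computation underlying these metrics can be sketched as follows: for independent Weibull hops, the n-th moment of the cascaded gain is the product of the per-hop moments, scale^n Gamma(1 + n/shape). The per-hop shape values and the exact metric definitions used here are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.special import gamma

# Sketch of moment-based metrics for a cascaded Weibull channel: the n-th
# moment of a product of independent Weibull gains is the product of the
# per-hop moments, E[X^n] = scale^n * Gamma(1 + n/shape). The shape/scale
# values and metric definitions below are illustrative assumptions.
def weibull_moment(n, shape, scale=1.0):
    return scale ** n * gamma(1.0 + n / shape)

def cascaded_moment(n, shapes, scales=None):
    scales = scales if scales is not None else [1.0] * len(shapes)
    return np.prod([weibull_moment(n, k, lam) for k, lam in zip(shapes, scales)])

shapes = [2.0, 2.5, 3.0]                     # per-hop Weibull shape parameters (assumed)
m1, m2, m4 = (cascaded_moment(n, shapes) for n in (1, 2, 4))

cov = np.sqrt(m2 / m1 ** 2 - 1.0)            # coefficient of variation of the cascaded gain
af = m4 / m2 ** 2 - 1.0                      # amount of fade of the received power (|h|^2)
print(f"coefficient of variation ~ {cov:.3f}, amount of fade ~ {af:.3f}")
```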

  8. Hyper-realistic face masks: a new challenge in person identification.

    PubMed

    Sanders, Jet Gabrielle; Ueda, Yoshiyuki; Minemoto, Kazusa; Noyes, Eilidh; Yoshikawa, Sakiko; Jenkins, Rob

    2017-01-01

    We often identify people using face images. This is true in occupational settings such as passport control as well as in everyday social environments. Mapping between images and identities assumes that facial appearance is stable within certain bounds. For example, a person's apparent age, gender and ethnicity change slowly, if at all. It also assumes that deliberate changes beyond these bounds (i.e., disguises) would be easy to spot. Hyper-realistic face masks overturn these assumptions by allowing the wearer to look like an entirely different person. If unnoticed, these masks break the link between facial appearance and personal identity, with clear implications for applied face recognition. However, to date, no one has assessed the realism of these masks, or specified conditions under which they may be accepted as real faces. Herein, we examined incidental detection of unexpected but attended hyper-realistic masks in both photographic and live presentations. Experiment 1 (UK; n = 60) revealed no evidence for overt detection of hyper-realistic masks among real face photos, and little evidence of covert detection. Experiment 2 (Japan; n = 60) extended these findings to different masks, mask-wearers and participant pools. In Experiment 3 (UK and Japan; n = 407), passers-by failed to notice that a live confederate was wearing a hyper-realistic mask and showed limited evidence of covert detection, even at close viewing distance (5 vs. 20 m). Across all of these studies, viewers accepted hyper-realistic masks as real faces. Specific countermeasures will be required if detection rates are to be improved.

  9. A convenient method of obtaining percentile norms and accompanying interval estimates for self-report mood scales (DASS, DASS-21, HADS, PANAS, and sAD).

    PubMed

    Crawford, John R; Garthwaite, Paul H; Lawrie, Caroline J; Henry, Julie D; MacDonald, Marie A; Sutherland, Jane; Sinha, Priyanka

    2009-06-01

    A series of recent papers have reported normative data from the general adult population for commonly used self-report mood scales. The aim was to bring together and supplement these data in order to provide a convenient means of obtaining percentile norms for the mood scales. A computer program was developed that provides point and interval estimates of the percentile rank corresponding to raw scores on the various self-report scales. The program can be used to obtain point and interval estimates of the percentile rank of an individual's raw scores on the DASS, DASS-21, HADS, PANAS, and sAD mood scales, based on normative sample sizes ranging from 758 to 3822. The interval estimates can be obtained using either classical or Bayesian methods as preferred. The computer program (which can be downloaded at www.abdn.ac.uk/~psy086/dept/MoodScore.htm) provides a convenient and reliable means of supplementing existing cut-off scores for self-report mood scales.
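
    The general idea of a point and interval estimate of a percentile rank can be sketched as below. The published program implements its own classical and Bayesian procedures, so this is only an illustration with made-up normative scores and a simple beta-based interval.

```python
import numpy as np
from scipy.stats import beta

# Illustrative point and interval estimate of a percentile rank from a
# normative sample; the published program uses its own classical and
# Bayesian procedures, so this is only a sketch of the general idea.
rng = np.random.default_rng(1)
norms = rng.normal(10, 4, size=1000).round()   # stand-in normative scores (assumed)
raw_score = 16

n = norms.size
k_below = np.sum(norms < raw_score)
k_equal = np.sum(norms == raw_score)
point = 100.0 * (k_below + 0.5 * k_equal) / n   # mid-probability percentile rank

# Clopper-Pearson style interval for the proportion scoring below the raw score
alpha, k = 0.05, k_below + 0.5 * k_equal
lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
print(f"percentile rank ~ {point:.1f} (95% interval {100*lower:.1f}-{100*upper:.1f})")
```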

  10. Lateral eddy diffusivity estimates from simulated and observed drifter trajectories: a case study for the Agulhas Current system

    NASA Astrophysics Data System (ADS)

    Rühs, Siren; Zhurbas, Victor; Durgadoo, Jonathan V.; Biastoch, Arne

    2017-04-01

    The Lagrangian description of fluid motion by sets of individual particle trajectories is extensively used to characterize connectivity between distinct oceanic locations. One important factor influencing the connectivity is the average rate of particle dispersal, generally quantified as Lagrangian diffusivity. In addition to Lagrangian observing programs, Lagrangian analyses are performed by advecting particles with the simulated flow field of ocean general circulation models (OGCMs). However, depending on the spatio-temporal model resolution, not all scale-dependent processes are explicitly resolved in the simulated velocity fields. Consequently, the dispersal of advected Lagrangian trajectories has been assumed to be insufficiently diffusive compared with observed particle spreading. In this study we present a detailed analysis of the spatially variable lateral eddy diffusivity characteristics of advective drifter trajectories simulated with realistically forced OGCMs and compare them with estimates based on observed drifter trajectories. The extended Agulhas Current system around South Africa, known for its intricate mesoscale dynamics, serves as a test case. We show that a state-of-the-art eddy-resolving OGCM indeed features theoretically derived dispersion characteristics for diffusive regimes and realistically represents Lagrangian eddy diffusivity characteristics obtained from observed surface drifter trajectories. The estimates for the maximum and asymptotic lateral single-particle eddy diffusivities obtained from the observed and simulated drifter trajectories show a good agreement in their spatial pattern and magnitude. We further assess the sensitivity of the simulated lateral eddy diffusivity estimates to the temporal and lateral OGCM output resolution and examine the impact of the different eddy diffusivity characteristics on the Lagrangian connectivity between the Indian Ocean and the South Atlantic.
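
    A common way to obtain the single-particle lateral eddy diffusivity from trajectories is K(t) = 0.5 d/dt <|x'(t)|^2>, with x' the displacement residual about the ensemble-mean drift. The sketch below applies this to synthetic random-flight trajectories rather than model or observed drifters; all parameter values are illustrative.

```python
import numpy as np

# Sketch of a single-particle lateral eddy diffusivity estimate,
# K(t) = 0.5 * d/dt <|x'(t)|^2>, from an ensemble of drifter trajectories.
# The trajectories are synthetic random-flight paths (illustrative only).
rng = np.random.default_rng(2)
n_drifters, n_steps, dt = 500, 200, 3600.0     # 200 hourly positions per drifter
sigma_u, t_l = 0.2, 2 * 86400.0                # eddy velocity scale [m/s], Lagrangian time scale [s]

# first-order autoregressive (random-flight) velocities, integrated to positions
a = np.exp(-dt / t_l)
u = np.zeros((n_drifters, n_steps))
for i in range(1, n_steps):
    u[:, i] = a * u[:, i - 1] + sigma_u * np.sqrt(1 - a ** 2) * rng.standard_normal(n_drifters)
x = np.cumsum(u * dt, axis=1)

resid = x - x.mean(axis=0)                      # remove the ensemble-mean drift
msd = (resid ** 2).mean(axis=0)                 # mean squared displacement
K = 0.5 * np.gradient(msd, dt)                  # single-particle diffusivity [m^2/s]
print(f"asymptotic K ~ {K[-20:].mean():.0f} m^2/s  (theory sigma_u^2 * T_L = {sigma_u**2 * t_l:.0f})")
```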

  11. Realistic Analytical Polyhedral MRI Phantoms

    PubMed Central

    Ngo, Tri M.; Fung, George S. K.; Han, Shuo; Chen, Min; Prince, Jerry L.; Tsui, Benjamin M. W.; McVeigh, Elliot R.; Herzka, Daniel A.

    2015-01-01

    Purpose Analytical phantoms have closed form Fourier transform expressions and are used to simulate MRI acquisitions. Existing 3D analytical phantoms are unable to accurately model shapes of biomedical interest. It is demonstrated that polyhedral analytical phantoms have closed form Fourier transform expressions and can accurately represent 3D biomedical shapes. Theory The derivations of the Fourier transform of a polygon and polyhedron are presented. Methods The Fourier transform of a polyhedron was implemented and its accuracy in representing faceted and smooth surfaces was characterized. Realistic anthropomorphic polyhedral brain and torso phantoms were constructed and their use in simulated 3D/2D MRI acquisitions was described. Results Using polyhedra, the Fourier transform of faceted shapes can be computed to within machine precision. Smooth surfaces can be approximated with increasing accuracy by increasing the number of facets in the polyhedron; the additional accumulated numerical imprecision of the Fourier transform of polyhedra with many faces remained small. Simulations of 3D/2D brain and 2D torso cine acquisitions produced realistic reconstructions free of high frequency edge aliasing as compared to equivalent voxelized/rasterized phantoms. Conclusion Analytical polyhedral phantoms are easy to construct and can accurately simulate shapes of biomedical interest. PMID:26479724

  12. Performance of Airborne Precision Spacing Under Realistic Wind Conditions and Limited Surveillance Range

    NASA Technical Reports Server (NTRS)

    Wieland, Frederick; Santos, Michel; Krueger, William; Houston, Vincent E.

    2011-01-01

    With the expected worldwide increase of air traffic during the coming decade, both the Federal Aviation Administration's (FAA's) Next Generation Air Transportation System (NextGen), as well as Eurocontrol's Single European Sky ATM Research (SESAR) program have, as part of their plans, air traffic management (ATM) solutions that can increase performance without requiring time-consuming and expensive infrastructure changes. One such solution involves the ability of both controllers and flight crews to deliver aircraft to the runway with greater accuracy than they can today. Previous research has shown that time-based spacing techniques, wherein the controller assigns a time spacing to each pair of arriving aircraft, can achieve this goal by providing greater runway delivery accuracy and producing a concomitant increase in system-wide performance. The research described herein focuses on one specific application of time-based spacing, called Airborne Precision Spacing (APS), which has evolved over the past ten years. This research furthers APS understanding by studying its performance with realistic wind conditions obtained from atmospheric sounding data and with realistic wind forecasts obtained from the Rapid Update Cycle (RUC) short-range weather forecast. In addition, this study investigates APS performance with limited surveillance range, as provided by the Automatic Dependent Surveillance-Broadcast (ADS-B) system, and with an algorithm designed to improve APS performance when ADS-B surveillance data is unavailable. The results presented herein quantify the runway threshold delivery accuracy of APS under these conditions, and also quantify resulting workload metrics such as the number of speed changes required to maintain spacing.

  13. Toward Realistic Acquisition Schedule Estimates

    DTIC Science & Technology

    2016-04-30

    Harrier successor. NASA and the UK both participated in this effort. It was merged by Congress with JAST in mid-1994.  CALF (Common Affordable...5th Generation revolution in military aviation. Space Daily. Sternstein, A. (2015, April 27). Here’s how you hack a military drone. Next Gov

  14. Making a Literature Methods Course "Realistic."

    ERIC Educational Resources Information Center

    Lewis, William J.

    Recognizing that it can be a challenge to make an undergraduate literature methods course realistic, a methods instructor at a Michigan university has developed three major and several minor activities that have proven effective in preparing pre-student teachers for the "real world" of teaching and, at the same time, have been challenging and…

  15. Dimits shift in realistic gyrokinetic plasma-turbulence simulations.

    PubMed

    Mikkelsen, D R; Dorland, W

    2008-09-26

    In simulations of turbulent plasma transport due to long wavelength (k⊥ρi ≤ 1) electrostatic drift-type instabilities, we find a persistent nonlinear up-shift of the effective threshold. Next-generation tokamaks will likely benefit from the higher effective threshold for turbulent transport, and transport models should incorporate suitable corrections to linear thresholds. The gyrokinetic simulations reported here are more realistic than previous reports of a Dimits shift because they include nonadiabatic electron dynamics, strong collisional damping of zonal flows, and finite electron and ion collisionality together with realistically shaped magnetic geometry. Reversing previously reported results based on idealized adiabatic electrons, we find that increasing collisionality reduces the heat flux because collisionality reduces the nonadiabatic electron microinstability drive.

  16. Uncertainty Estimates of Psychoacoustic Thresholds Obtained from Group Tests

    NASA Technical Reports Server (NTRS)

    Rathsam, Jonathan; Christian, Andrew

    2016-01-01

    Adaptive psychoacoustic test methods, in which the next signal level depends on the response to the previous signal, are the most efficient for determining psychoacoustic thresholds of individual subjects. In many tests conducted in the NASA psychoacoustic labs, the goal is to determine thresholds representative of the general population. To do this economically, non-adaptive testing methods are used in which three or four subjects are tested at the same time with predetermined signal levels. This approach requires us to identify techniques for assessing the uncertainty in the resulting group-average psychoacoustic thresholds. In this presentation we examine the Delta Method of frequentist statistics, the Generalized Linear Model (GLM), the Nonparametric Bootstrap (another frequentist method), and Markov Chain Monte Carlo Posterior Estimation, a Bayesian approach. Each technique is exercised on a manufactured, theoretical dataset and then on datasets from two psychoacoustics facilities at NASA. The Delta Method is the simplest to implement and accurate for the cases studied. The GLM is found to be the least robust, and the Bootstrap takes the longest to calculate. The Bayesian Posterior Estimate is the most versatile technique examined because it allows the inclusion of prior information.
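
    A minimal sketch of the Nonparametric Bootstrap option for group-threshold uncertainty, using synthetic yes/no data at fixed levels and a simple 50%-crossing threshold definition; both the data and the threshold definition are illustrative assumptions rather than the NASA datasets or analysis.

```python
import numpy as np

# Sketch of a nonparametric bootstrap for the uncertainty of a group-average
# psychoacoustic threshold from non-adaptive testing at fixed signal levels.
# The synthetic yes/no data and the interpolation-based threshold definition
# (level at 50% detection) are illustrative assumptions.
rng = np.random.default_rng(3)
levels = np.array([30.0, 35.0, 40.0, 45.0, 50.0])            # dB, predetermined
n_subjects, n_trials = 12, 20
true_p = 1.0 / (1.0 + np.exp(-(levels - 40.0) / 3.0))        # latent psychometric function
# responses[s, l] = detections out of n_trials for subject s at level l
responses = rng.binomial(n_trials, true_p, size=(n_subjects, levels.size))

def threshold(resp):
    """Group threshold: level where the pooled detection rate crosses 50%."""
    p = resp.sum(axis=0) / (resp.shape[0] * n_trials)
    return np.interp(0.5, p, levels)     # pooled rate is effectively monotone here

boot = np.array([threshold(responses[rng.integers(0, n_subjects, n_subjects)])
                 for _ in range(2000)])   # resample subjects with replacement
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"threshold ~ {threshold(responses):.1f} dB, 95% bootstrap CI [{lo:.1f}, {hi:.1f}] dB")
```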

  17. Incorporating diverse data and realistic complexity into demographic estimation procedures for sea otters

    USGS Publications Warehouse

    Tinker, M. Timothy; Doak, Daniel F.; Estes, James A.; Hatfield, Brian B.; Staedler, Michelle M.; Gross, Arthur

    2006-01-01

    Reliable information on historical and current population dynamics is central to understanding patterns of growth and decline in animal populations. We developed a maximum likelihood-based analysis to estimate spatial and temporal trends in age/sex-specific survival rates for the threatened southern sea otter (Enhydra lutris nereis), using annual population censuses and the age structure of salvaged carcass collections. We evaluated a wide range of possible spatial and temporal effects and used model averaging to incorporate model uncertainty into the resulting estimates of key vital rates and their variances. We compared these results to current demographic parameters estimated in a telemetry-based study conducted between 2001 and 2004. These results show that survival has decreased substantially from the early 1990s to the present and is generally lowest in the north-central portion of the population's range. The greatest temporal decrease in survival was for adult females, and variation in the survival of this age/sex class is primarily responsible for regulating population growth and driving population trends. Our results can be used to focus future research on southern sea otters by highlighting the life history stages and mortality factors most relevant to conservation. More broadly, we have illustrated how the powerful and relatively straightforward tools of information-theoretic-based model fitting can be used to sort through and parameterize quite complex demographic modeling frameworks. © 2006 by the Ecological Society of America.

  18. Two-Capacitor Problem: A More Realistic View.

    ERIC Educational Resources Information Center

    Powell, R. A.

    1979-01-01

    Discusses the two-capacitor problem by considering the self-inductance of the circuit used and by determining how well the usual series RC circuit approximates the two-capacitor problem when realistic values of L, C, and R are chosen. (GA)

  19. Satellite Maps Deliver More Realistic Gaming

    NASA Technical Reports Server (NTRS)

    2013-01-01

    When Redwood City, California-based Electronic Arts (EA) decided to make SSX, its latest snowboarding video game, it faced challenges in creating realistic-looking mountains. The solution was NASA's ASTER Global Digital Elevation Map, made available by the Jet Propulsion Laboratory, which EA used to create 28 real-life mountains from 9 different ranges for its award-winning game.

  20. MRXCAT: Realistic numerical phantoms for cardiovascular magnetic resonance

    PubMed Central

    2014-01-01

    Background Computer simulations are important for validating novel image acquisition and reconstruction strategies. In cardiovascular magnetic resonance (CMR), numerical simulations need to combine anatomical information and the effects of cardiac and/or respiratory motion. To this end, a framework for realistic CMR simulations is proposed and its use for image reconstruction from undersampled data is demonstrated. Methods The extended Cardiac-Torso (XCAT) anatomical phantom framework with various motion options was used as a basis for the numerical phantoms. Different tissue, dynamic contrast and signal models, multiple receiver coils and noise are simulated. Arbitrary trajectories and undersampled acquisition can be selected. The utility of the framework is demonstrated for accelerated cine and first-pass myocardial perfusion imaging using k-t PCA and k-t SPARSE. Results MRXCAT phantoms allow for realistic simulation of CMR including optional cardiac and respiratory motion. Example reconstructions from simulated undersampled k-t parallel imaging demonstrate the feasibility of simulated acquisition and reconstruction using the presented framework. Myocardial blood flow assessment from simulated myocardial perfusion images highlights the suitability of MRXCAT for quantitative post-processing simulation. Conclusion The proposed MRXCAT phantom framework enables versatile and realistic simulations of CMR including breathhold and free-breathing acquisitions. PMID:25204441

  1. The differential effect of realistic and unrealistic counterfactual thinking on regret.

    PubMed

    Sevdalis, Nick; Kokkinaki, Flora

    2006-06-01

    Research has established that realistic counterfactual thinking can determine the intensity and the content of people's affective reactions to decision outcomes and events. Not much is known, however, about the affective consequences of counterfactual thinking that is unrealistic (i.e., that does not correspond to the main causes of a negative outcome). In three experiments, we investigate the influence of realistic and unrealistic counterfactuals on experienced regret after negative outcomes. In Experiment 1, we found that participants who thought unrealistically about a poor outcome reported less regret than those who thought realistically about it. In Experiments 2a and 2b, we replicated this finding and we showed that the decrease in regret was associated with a shift in the causal attributions of the poor outcome. Participants who thought unrealistically attributed it more to external circumstances and less to their own behaviours than those who thought realistically about it. We discuss the implications of these findings for the role of counterfactuals as self-serving biases and the functionality of regret as a counterfactual emotion.

  2. Statistical estimation of ultrasonic propagation path parameters for aberration correction.

    PubMed

    Waag, Robert C; Astheimer, Jeffrey P

    2005-05-01

    Parameters in a linear filter model for ultrasonic propagation are found using statistical estimation. The model uses an inhomogeneous-medium Green's function that is decomposed into a homogeneous-transmission term and a path-dependent aberration term. Power and cross-power spectra of random-medium scattering are estimated over the frequency band of the transmit-receive system by using closely situated scattering volumes. The frequency-domain magnitude of the aberration is obtained from a normalization of the power spectrum. The corresponding phase is reconstructed from cross-power spectra of subaperture signals at adjacent receive positions by a recursion. The subapertures constrain the receive sensitivity pattern to eliminate measurement system phase contributions. The recursion uses a Laplacian-based algorithm to obtain phase from phase differences. Pulse-echo waveforms were acquired from a point reflector and a tissue-like scattering phantom through a tissue-mimicking aberration path from neighboring volumes having essentially the same aberration path. Propagation path aberration parameters calculated from the measurements of random scattering through the aberration phantom agree with corresponding parameters calculated for the same aberrator and array position by using echoes from the point reflector. The results indicate the approach describes, in addition to time shifts, waveform amplitude and shape changes produced by propagation through distributed aberration under realistic conditions.

  3. Climate Sensitivity to Realistic Solar Heating of Snow and Ice

    NASA Astrophysics Data System (ADS)

    Flanner, M.; Zender, C. S.

    2004-12-01

    Snow and ice-covered surfaces are highly reflective and play an integral role in the planetary radiation budget. However, GCMs typically prescribe snow reflection and absorption based on minimal knowledge of snow physical characteristics. We performed climate sensitivity simulations with the NCAR CCSM including a new physically-based multi-layer snow radiative transfer model. The model predicts the effects of vertically resolved heating, absorbing aerosol, and snowpack transparency on snowpack evolution and climate. These processes significantly reduce the model's near-infrared albedo bias over deep snowpacks. While the current CCSM implementation prescribes all solar radiative absorption to occur in the top 2 cm of snow, we estimate that about 65% occurs beneath this level. Accounting for the vertical distribution of snowpack heating and more realistic reflectance significantly alters snowpack depth, surface albedo, and surface air temperature over Northern Hemisphere regions. Implications for the strength of the ice-albedo feedback will be discussed.

  4. Estimating migratory game-bird productivity by integrating age ratio and banding data

    USGS Publications Warehouse

    Zimmerman, G.S.; Link, W.A.; Conroy, M.J.; Sauer, J.R.; Richkus, K.D.; Boomer, G. Scott

    2010-01-01

    Implications: Several national and international management strategies for migratory game birds in North America rely on measures of productivity from harvest survey parts collections, without a justification of the estimator or providing estimates of precision. We derive an estimator of productivity with realistic measures of uncertainty that can be directly incorporated into management plans or ecological studies across large spatial scales.

  5. Greenhouse gases inventory and carbon balance of two dairy systems obtained from two methane-estimation methods.

    PubMed

    Cunha, C S; Lopes, N L; Veloso, C M; Jacovine, L A G; Tomich, T R; Pereira, L G R; Marcondes, M I

    2016-11-15

    The adoption of carbon inventories for dairy farms in tropical countries based on models developed from animals and diets of temperate climates is questionable. Thus, the objectives of this study were to estimate enteric methane (CH4) emissions through the SF6 tracer gas technique and through equations proposed by the Intergovernmental Panel on Climate Change (IPCC) Tier 2 and to calculate the inventory of greenhouse gas (GHG) emissions from two dairy systems. In addition, the carbon balance of these properties was estimated using enteric CH4 emissions obtained with both methodologies. In trial 1, the CH4 emissions were estimated from seven Holstein dairy cattle categories based on the SF6 tracer gas technique and on IPCC equations. The categories used in the study were prepubertal heifers (n=6); pubertal heifers (n=4); pregnant heifers (n=5); high-producing (n=6); medium-producing (n=5); low-producing (n=4) and dry cows (n=5). Enteric methane emission was higher for the category comprising prepubertal heifers when estimated by the IPCC Tier 2 equations. However, higher CH4 emissions were estimated by the SF6 technique in the categories including medium- and high-producing cows and dry cows. Pubertal heifers, pregnant heifers, and low-producing cows had equal CH4 emissions as estimated by both methods. In trial 2, two dairy farms were monitored for one year to identify all activities that contributed in any way to GHG emissions. The total emission from Farm 1 was 3.21t CO2e/animal/yr, of which 1.63t corresponded to enteric CH4. Farm 2 emitted 3.18t CO2e/animal/yr, with 1.70t of enteric CH4. IPCC estimations can underestimate CH4 emissions from some categories while overestimating others. However, considering the whole property, these discrepancies are offset and we would submit that the equations suggested by the IPCC properly estimate the total CH4 emission and carbon balance of the properties. Thus, the IPCC equations should be utilized with
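
    For reference, a minimal sketch of the Tier 2 enteric-fermentation calculation as it is commonly stated (annual emission factor from gross energy intake GE, methane conversion factor Ym, and an energy content of 55.65 MJ per kg CH4); the cattle-category inputs below are placeholders, not the herd data used in the study.

```python
# Sketch of the IPCC Tier 2 enteric-fermentation emission factor as commonly stated:
# GE = gross energy intake (MJ/head/day), Ym = methane conversion factor (%),
# 55.65 MJ per kg CH4. Input values are illustrative only.

def tier2_enteric_ch4(ge_mj_per_day: float, ym_percent: float) -> float:
    """Annual enteric CH4 emission factor in kg CH4 per head per year."""
    return ge_mj_per_day * (ym_percent / 100.0) * 365.0 / 55.65

categories = {
    "high-producing cow": (280.0, 6.5),   # (GE, Ym) - hypothetical values
    "dry cow":            (170.0, 6.5),
    "prepubertal heifer": ( 90.0, 6.5),
}

for name, (ge, ym) in categories.items():
    print(f"{name:>20s}: {tier2_enteric_ch4(ge, ym):6.1f} kg CH4/head/yr")
```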

  6. Computing the electric field from extensive air showers using a realistic description of the atmosphere

    NASA Astrophysics Data System (ADS)

    Gaté, F.; Revenu, B.; García-Fernández, D.; Marin, V.; Dallier, R.; Escudié, A.; Martin, L.

    2018-03-01

    The composition of ultra-high energy cosmic rays is still poorly known and constitutes a very important topic in the field of high-energy astrophysics. Detection of ultra-high energy cosmic rays is carried out via the extensive air showers they create after interacting with the atmospheric constituents. The secondary electrons and positrons within the showers emit a detectable electric field in the kHz-GHz range. It is possible to use this radio signal for the estimation of the atmospheric depth of maximal development of the showers, Xmax, with good accuracy and a duty cycle close to 100%. This value of Xmax is strongly correlated with the nature of the primary cosmic ray that initiated the shower. We show in this paper the importance of using a realistic atmospheric model in order to correct for systematic errors that can prevent a correct and unbiased estimation of Xmax.

  7. Loss of conformational entropy in protein folding calculated using realistic ensembles and its implications for NMR-based calculations

    PubMed Central

    Baxa, Michael C.; Haddadian, Esmael J.; Jumper, John M.; Freed, Karl F.; Sosnick, Tobin R.

    2014-01-01

    The loss of conformational entropy is a major contribution to the thermodynamics of protein folding. However, accurate determination of the quantity has proven challenging. We calculate this loss using molecular dynamics simulations of both the native protein and a realistic denatured state ensemble. For ubiquitin, the total change in entropy is TΔS_Total = 1.4 kcal·mol⁻¹ per residue at 300 K, with only 20% from the loss of side-chain entropy. Our analysis exhibits mixed agreement with prior studies because of the use of more accurate ensembles and contributions from correlated motions. Buried side chains lose only a factor of 1.4 in the number of conformations available per rotamer upon folding (Ω_U/Ω_N). The entropy loss for helical and sheet residues differs due to the smaller motions of helical residues (TΔS_helix−sheet = 0.5 kcal·mol⁻¹), a property not fully reflected in the amide N-H and carbonyl C=O bond NMR order parameters. The results have implications for the thermodynamics of folding and binding, including estimates of solvent ordering and microscopic entropies obtained from NMR. PMID:25313044

  8. Estimation of dynamic time activity curves from dynamic cardiac SPECT imaging

    NASA Astrophysics Data System (ADS)

    Hossain, J.; Du, Y.; Links, J.; Rahmim, A.; Karakatsanis, N.; Akhbardeh, A.; Lyons, J.; Frey, E. C.

    2015-04-01

    Whole-heart coronary flow reserve (CFR) may be useful as an early predictor of cardiovascular disease or heart failure. Here we propose a simple method to extract the time-activity curve, an essential component needed for estimating the CFR, for a small number of compartments in the body, such as normal myocardium, blood pool, and ischemic myocardial regions, from SPECT data acquired with conventional cameras using slow rotation. We evaluated the method using a realistic simulation of 99mTc-teboroxime imaging. Uptake of 99mTc-teboroxime was modeled based on data from the literature. Data were simulated using the anatomically-realistic 3D NCAT phantom and an analytic projection code that realistically models attenuation, scatter, and the collimator-detector response. The proposed method was then applied to estimate time activity curves (TACs) for a set of 3D volumes of interest (VOIs) directly from the projections. We evaluated the accuracy and precision of estimated TACs and studied the effects of the presence of perfusion defects that were and were not modeled in the estimation procedure. The method produced good estimates of the myocardial and blood-pool TACs for the organ VOIs, with average weighted absolute biases of less than 5% for the myocardium and 10% for the blood pool when the true organ boundaries were known and the activity distributions in the organs were uniform. In the presence of unknown perfusion defects, the myocardial TAC was still estimated well (average weighted absolute bias <10%) when the total reduction in myocardial uptake (product of defect extent and severity) was ≤5%. This indicates that the method was robust to modest model mismatch such as the presence of moderate perfusion defects and uptake nonuniformities. With larger defects where the defect VOI was included in the estimation procedure, the estimated normal myocardial and defect TACs were accurate (average weighted absolute bias ≈5% for a defect with 25% extent and 100% severity).
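
    The following sketch illustrates the general idea of estimating a few VOI time-activity curves directly from projection data, here as a per-frame nonnegative least-squares fit with a made-up system matrix and simulated TACs; it is a toy analogue under stated assumptions, not the estimation procedure of the paper.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# Hypothetical setup: n_bins projection measurements per rotation "time frame",
# n_voi volumes of interest (e.g. normal myocardium, blood pool, defect).
n_bins, n_voi, n_frames = 200, 3, 12

# A[i, j] = contribution of unit activity in VOI j to projection bin i
# (in practice computed from the known VOI boundaries and the system model).
A = rng.random((n_bins, n_voi))

true_tacs = np.array([                           # simulated ground-truth TACs
    np.exp(-np.linspace(0, 2, n_frames)),        # myocardium-like washout
    np.exp(-np.linspace(0, 5, n_frames)),        # blood-pool-like fast clearance
    0.5 * np.exp(-np.linspace(0, 2, n_frames)),  # defect with reduced uptake
])

est_tacs = np.zeros_like(true_tacs)
for t in range(n_frames):
    proj = A @ true_tacs[:, t]
    proj = rng.poisson(proj * 50) / 50.0         # crude count noise
    est_tacs[:, t], _ = nnls(A, proj)            # nonnegative LS estimate per frame

print("mean absolute error per VOI:", np.abs(est_tacs - true_tacs).mean(axis=1))
```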

  9. The Performance of Chinese Primary School Students on Realistic Arithmetic Word Problems

    ERIC Educational Resources Information Center

    Xin, Ziqiang; Lin, Chongde; Zhang, Li; Yan, Rong

    2007-01-01

    Compared with standard arithmetic word problems demanding only the direct use of number operations and computations, realistic problems are harder to solve because children need to incorporate "real-world" knowledge into their solutions. Using the realistic word problem testing materials developed by Verschaffel, De Corte, and Lasure…

  10. The realist interpretation of the atmosphere

    NASA Astrophysics Data System (ADS)

    Anduaga, Aitor

    The discovery of a clearly stratified structure of layers in the upper atmosphere has been--and still is--invoked too often as the great paradigm of atmospheric sciences in the 20th century. Behind this vision lies an emphasis--or better, an overstatement--on the reality of the concept of layer. One of the few historians of physics who have not ignored this phenomenon of reification, C. Stewart Gillmor, attributed it to somewhat ambiguous cultural (or perhaps, more generally, contextual) factors, though he never specified their nature. In this essay, I aim to demonstrate that, in the interwar years, most radiophysicists and some atomic physicists, for reasons principally related to extrinsic influences and to a lesser extent to internal developments of their own science, fervidly embraced a realist interpretation of the ionosphere. We will focus on the historical circumstances in which a specific social and commercial environment came to exert a strong influence on upper atmospheric physicists, and in which realism arose as a product validating the "truth" of certain practices and beliefs. This realist commitment I attribute to the mutual reinforcement of atmospheric physics and commercial and imperial interests in long-distance communications.

  11. Epidemiology and causation: a realist view.

    PubMed Central

    Renton, A

    1994-01-01

    In this paper the controversy over how to decide whether associations between factors and diseases are causal is placed within a description of the public health and scientific relevance of epidemiology. It is argued that the rise in popularity of the Popperian view of science, together with a perception of the aims of epidemiology as being to identify appropriate public health interventions, have focussed this debate on unresolved questions of inferential logic, leaving largely unanalysed the notions of causation and of disease at the ontological level. A realist ontology of causation of disease and pathogenesis is constructed within the framework of "scientific materialism", and is shown to provide a coherent basis from which to decide causes and to deal with problems of confounding and interaction in epidemiological research. It is argued that a realist analysis identifies a richer role for epidemiology as an integral part of an ontologically unified medical science. It is this unified medical science as a whole rather than epidemiological observation or experiment which decides causes and, in turn, provides a key element to the foundations of rational public health decision making. PMID:8138775

  12. Order Matters: Sequencing Scale-Realistic Versus Simplified Models to Improve Science Learning

    NASA Astrophysics Data System (ADS)

    Chen, Chen; Schneps, Matthew H.; Sonnert, Gerhard

    2016-10-01

    Teachers choosing between different models to facilitate students' understanding of an abstract system must decide whether to adopt a model that is simplified and striking or one that is realistic and complex. Only recently have instructional technologies enabled teachers and learners to change presentations swiftly and to provide for learning based on multiple models, thus giving rise to questions about the order of presentation. Using disjoint individual growth modeling to examine the learning of astronomical concepts using a simulation of the solar system on tablets for 152 high school students (age 15), the authors detect both a model effect and an order effect in the use of the Orrery, a simplified model that exaggerates the scale relationships, and the True-to-scale, a proportional model that more accurately represents the realistic scale relationships. Specifically, earlier exposure to the simplified model resulted in diminution of the conceptual gain from the subsequent realistic model, but the realistic model did not impede learning from the following simplified model.

  13. A realistic evaluation: the case of protocol-based care

    PubMed Central

    2010-01-01

    Background 'Protocol based care' was envisioned by policy makers as a mechanism for delivering on the service improvement agenda in England. Realistic evaluation is an increasingly popular approach, but few published examples exist, particularly in implementation research. To fill this gap, within this paper we describe the application of a realistic evaluation approach to the study of protocol-based care, whilst sharing findings of relevance about standardising care through the use of protocols, guidelines, and pathways. Methods Situated between positivism and relativism, realistic evaluation is concerned with the identification of underlying causal mechanisms, how they work, and under what conditions. Fundamentally it focuses attention on finding out what works, for whom, how, and in what circumstances. Results In this research, we were interested in understanding the relationships between the type and nature of particular approaches to protocol-based care (mechanisms), within different clinical settings (context), and what impacts this resulted in (outcomes). An evidence review using the principles of realist synthesis resulted in a number of propositions, i.e., context, mechanism, and outcome threads (CMOs). These propositions were then 'tested' through multiple case studies, using multiple methods including non-participant observation, interviews, and document analysis through an iterative analysis process. The initial propositions (conjectured CMOs) only partially corresponded to the findings that emerged during analysis. From the iterative analysis process of scrutinising mechanisms, context, and outcomes we were able to draw out some theoretically generalisable features about what works, for whom, how, and in what circumstances in relation to the use of standardised care approaches (refined CMOs). Conclusions As one of the first studies to apply realistic evaluation in implementation research, it was a good fit, particularly given the growing emphasis on

  14. Evaluation of gravimetric techniques to estimate the microvascular filtration coefficient

    PubMed Central

    Dongaonkar, R. M.; Laine, G. A.; Stewart, R. H.

    2011-01-01

    Microvascular permeability to water is characterized by the microvascular filtration coefficient (Kf). Conventional gravimetric techniques to estimate Kf rely on data obtained from either transient or steady-state increases in organ weight in response to increases in microvascular pressure. Both techniques result in considerably different estimates and neither account for interstitial fluid storage and lymphatic return. We therefore developed a theoretical framework to evaluate Kf estimation techniques by 1) comparing conventional techniques to a novel technique that includes effects of interstitial fluid storage and lymphatic return, 2) evaluating the ability of conventional techniques to reproduce Kf from simulated gravimetric data generated by a realistic interstitial fluid balance model, 3) analyzing new data collected from rat intestine, and 4) analyzing previously reported data. These approaches revealed that the steady-state gravimetric technique yields estimates that are not directly related to Kf and are in some cases directly proportional to interstitial compliance. However, the transient gravimetric technique yields accurate estimates in some organs, because the typical experimental duration minimizes the effects of interstitial fluid storage and lymphatic return. Furthermore, our analytical framework reveals that the supposed requirement of tying off all draining lymphatic vessels for the transient technique is unnecessary. Finally, our numerical simulations indicate that our comprehensive technique accurately reproduces the value of Kf in all organs, is not confounded by interstitial storage and lymphatic return, and provides corroboration of the estimate from the transient technique. PMID:21346245

  15. Low resolution brain electromagnetic tomography in a realistic geometry head model: a simulation study

    NASA Astrophysics Data System (ADS)

    Ding, Lei; Lai, Yuan; He, Bin

    2005-01-01

    It is of importance to localize neural sources from scalp recorded EEG. Low resolution brain electromagnetic tomography (LORETA) has received considerable attention for localizing brain electrical sources. However, most such efforts have used spherical head models in representing the head volume conductor. Investigation of the performance of LORETA in a realistic geometry head model, as compared with the spherical model, will provide useful information guiding interpretation of data obtained by using the spherical head model. The performance of LORETA was evaluated by means of computer simulations. The boundary element method was used to solve the forward problem. A three-shell realistic geometry (RG) head model was constructed from MRI scans of a human subject. Dipole source configurations of a single dipole located at different regions of the brain with varying depth were used to assess the performance of LORETA in different regions of the brain. A three-sphere head model was also used to approximate the RG head model, and similar simulations performed, and results compared with the RG-LORETA with reference to the locations of the simulated sources. Multi-source localizations were discussed and examples given in the RG head model. Localization errors employing the spherical LORETA, with reference to the source locations within the realistic geometry head, were about 20-30 mm, for four brain regions evaluated: frontal, parietal, temporal and occipital regions. Localization errors employing the RG head model were about 10 mm over the same four brain regions. The present simulation results suggest that the use of the RG head model reduces the localization error of LORETA, and that the RG head model based LORETA is desirable if high localization accuracy is needed.
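
    A compact sketch of the type of linear inverse at the core of LORETA-like estimators: a discrete spatial Laplacian is used as the regularizer in a Tikhonov (smoothness-constrained minimum-norm) solution. The lead field below is random and one-dimensional for brevity, whereas the study builds it with the boundary element method on the realistic head model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy LORETA-style inverse: J_hat = argmin ||phi - K J||^2 + alpha ||L J||^2,
# where L is a discrete Laplacian enforcing spatial smoothness of the sources.
n_elec, n_src = 64, 300
K = rng.normal(size=(n_elec, n_src))             # placeholder lead-field matrix

L = np.eye(n_src) * -2 + np.eye(n_src, k=1) + np.eye(n_src, k=-1)   # 1-D Laplacian

j_true = np.zeros(n_src)
j_true[140:160] = np.hanning(20)                 # one smooth, focal source patch
phi = K @ j_true + rng.normal(0, 0.05, n_elec)   # scalp potentials with noise

alpha = 1.0
j_hat = np.linalg.solve(K.T @ K + alpha * (L.T @ L), K.T @ phi)

peak_err = abs(int(np.argmax(np.abs(j_hat))) - int(np.argmax(j_true)))
print(f"localization error: {peak_err} source grid points")
```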

  16. Realistic weight perception and body size assessment in a racially diverse community sample of dieters.

    PubMed

    Cachelin, F M; Striegel-Moore, R H; Elder, K A

    1998-01-01

    Recently, a shift in obesity treatment away from emphasizing ideal weight loss goals to establishing realistic weight loss goals has been proposed; yet, what constitutes "realistic" weight loss for different populations is not clear. This study examined notions of realistic shape and weight as well as body size assessment in a large community-based sample of African-American, Asian, Hispanic, and white men and women. Participants were 1893 survey respondents who were all dieters and primarily overweight. Groups were compared on various variables of body image assessment using silhouette ratings. No significant race differences were found in silhouette ratings, nor in perceptions of realistic shape or reasonable weight loss. Realistic shape and weight ratings by both women and men were smaller than current shape and weight but larger than ideal shape and weight ratings. Compared with male dieters, female dieters considered greater weight loss to be realistic. Implications of the findings for the treatment of obesity are discussed.

  17. New Radiation Dosimetry Estimates for [18F]FLT based on Voxelized Phantoms.

    PubMed

    Mendes, B M; Ferreira, A V; Nascimento, L T C; Ferreira, S M Z M D; Silveira, M B; Silva, J B

    2018-04-25

    3'-Deoxy-3'-[18F]fluorothymidine, or [18F]FLT, is a positron emission tomography (PET) tracer used in clinical studies for noninvasive assessment of proliferation activity in several types of cancer. Although the use of this PET tracer is expanding, to date, few studies concerning its dosimetry have been published. In this work, new [18F]FLT dosimetry estimates are determined for humans and mice using Monte Carlo simulations. Modern voxelized male and female phantoms and [18F]FLT biokinetic data, both published by the ICRP, were used for simulations of human cases. For most human organs/tissues the absorbed doses were higher than those reported in ICRP Publication 128. An effective dose of 1.70E-02 mSv/MBq to the whole body was determined, which is 13.5% higher than the ICRP reference value. These new human dosimetry estimates obtained using more realistic human phantoms represent an advance in the knowledge of [18F]FLT dosimetry. In addition, mice biokinetic data were obtained experimentally. These data and a previously developed voxelized mouse phantom were used for simulations of animal cases. Concerning animal dosimetry, absorbed doses for organs/tissues ranged from 4.47 ± 0.75 to 155.74 ± 59.36 mGy/MBq. The obtained set of organ/tissue radiation doses for healthy Swiss mice is a useful tool for application in animal experiment design.

  18. Realistic and efficient 2D crack simulation

    NASA Astrophysics Data System (ADS)

    Yadegar, Jacob; Liu, Xiaoqing; Singh, Abhishek

    2010-04-01

    Although numerical algorithms for 2D crack simulation have been studied in Modeling and Simulation (M&S) and computer graphics for decades, realism and computational efficiency are still major challenges. In this paper, we introduce a high-fidelity, scalable, adaptive and efficient/runtime 2D crack/fracture simulation system by applying the mathematically elegant Peano-Cesaro triangular meshing/remeshing technique to model the generation of shards/fragments. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level-of-detail. The generated binary decomposition tree also provides an efficient neighbor retrieval mechanism used for mesh element splitting and merging with minimal memory requirements essential for realistic 2D fragment formation. Upon load impact/contact/penetration, a number of factors including impact angle, impact energy, and material properties are all taken into account to produce the criteria of crack initiation, propagation, and termination leading to realistic fractal-like rubble/fragment formation. The aforementioned parameters are used as variables of probabilistic models of crack/shard formation, making the proposed solution highly adaptive by allowing machine learning mechanisms to learn the optimal values for the variables/parameters based on prior benchmark data generated by off-line physics based simulation solutions that produce accurate fractures/shards, though at a highly non-real-time pace. Crack/fracture simulation has been conducted on various load impacts with different initial locations at various impulse scales. The simulation results demonstrate that the proposed system has the capability to realistically and efficiently simulate 2D crack phenomena (such as window shattering and shards generation) with diverse potentials in military and civil M&S applications such as training and mission planning.

  19. Investigation of error sources in regional inverse estimates of greenhouse gas emissions in Canada

    NASA Astrophysics Data System (ADS)

    Chan, E.; Chan, D.; Ishizawa, M.; Vogel, F.; Brioude, J.; Delcloo, A.; Wu, Y.; Jin, B.

    2015-08-01

    model can help in the understanding of the posterior estimates and percentage errors. Stable and realistic sub-regional and monthly flux estimates for the western region of AB/SK can be obtained, but not for the eastern region of ON. This indicates that a real observation-based inversion for the annual provincial emissions is likely to work for the western region, whereas improvements to the current inversion setup are needed before a real inversion is performed for the eastern region.

  20. Variations of High-Latitude Geomagnetic Pulsation Frequencies: A Comparison of Time-of-Flight Estimates and IMAGE Magnetometer Observations

    NASA Astrophysics Data System (ADS)

    Sandhu, J. K.; Yeoman, T. K.; James, M. K.; Rae, I. J.; Fear, R. C.

    2018-01-01

    The fundamental eigenfrequencies of standing Alfvén waves on closed geomagnetic field lines are estimated for the region spanning 5.9≤L < 9.5 over all MLT (Magnetic Local Time). The T96 magnetic field model and a realistic empirical plasma mass density model are employed using the time-of-flight approximation, refining previous calculations that assumed a relatively simplistic mass density model. An assessment of the implications of using different mass density models in the time-of-flight calculations is presented. The calculated frequencies exhibit dependences on field line footprint magnetic latitude and MLT, which are attributed to both magnetic field configuration and spatial variations in mass density. In order to assess the validity of the time-of-flight calculated frequencies, the estimates are compared to observations of FLR (Field Line Resonance) frequencies. Using IMAGE (International Monitor for Auroral Geomagnetic Effects) ground magnetometer observations obtained between 2001 and 2012, an automated FLR identification method is developed, based on the cross-phase technique. The average FLR frequency is determined, including variations with footprint latitude and MLT, and compared to the time-of-flight analysis. The results show agreement in the latitudinal and local time dependences. Furthermore, with the use of the realistic mass density model in the time-of-flight calculations, closer agreement with the observed FLR frequencies is obtained. The study is limited by the latitudinal coverage of the IMAGE magnetometer array, and future work will aim to extend the ground magnetometer data used to include additional magnetometer arrays.
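
    A toy version of the time-of-flight estimate used above: the fundamental frequency is approximated as the reciprocal of twice the Alfvén travel time along the field line, f1 ≈ [2∫ds/vA]⁻¹, evaluated here on a dipole field line with a crude power-law mass density rather than the T96 field and the empirical density model of the paper.

```python
import numpy as np

# Toy time-of-flight estimate of the fundamental standing-Alfven-wave frequency on a
# dipole field line: f ~ [2 * integral(ds / v_A)]^-1, with v_A = B / sqrt(mu0 * rho).
# The dipole geometry is standard; the mass density model is a crude placeholder.

MU0, RE, B0 = 4e-7 * np.pi, 6.371e6, 3.1e-5     # SI units; B0 = equatorial surface field

def eigenfrequency(L, rho_eq_amu_cm3, alpha=1.0, n=2000):
    lam_max = np.arccos(np.sqrt(1.0 / L))        # footprint latitude (field line at r = 1 RE)
    lam = np.linspace(-lam_max, lam_max, n)
    r = L * RE * np.cos(lam) ** 2                # dipole field line radius (m)
    B = B0 * (RE / r) ** 3 * np.sqrt(1 + 3 * np.sin(lam) ** 2)
    rho = rho_eq_amu_cm3 * 1.66e-27 * 1e6 * (L * RE / r) ** alpha   # kg/m^3
    v_a = B / np.sqrt(MU0 * rho)
    ds = L * RE * np.cos(lam) * np.sqrt(1 + 3 * np.sin(lam) ** 2)   # ds/d(lambda)
    travel_time = np.trapz(ds / v_a, lam)        # foot-to-foot Alfven travel time (s)
    return 1.0 / (2.0 * travel_time)

for L in (6.0, 7.5, 9.0):
    print(f"L = {L:.1f}: f1 ~ {1e3 * eigenfrequency(L, rho_eq_amu_cm3=10.0):.1f} mHz")
```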

  1. Realistic anomaly-mediated supersymmetry breaking

    NASA Astrophysics Data System (ADS)

    Chacko, Zacharia; Luty, Markus A.; Maksymyk, Ivan; Pontón, Eduardo

    2000-03-01

    We consider supersymmetry breaking communicated entirely by the superconformal anomaly in supergravity. This scenario is naturally realized if supersymmetry is broken in a hidden sector whose couplings to the observable sector are suppressed by more than powers of the Planck scale, as occurs if supersymmetry is broken in a parallel universe living in extra dimensions. This scenario is extremely predictive: soft supersymmetry breaking couplings are completely determined by anomalous dimensions in the effective theory at the weak scale. Gaugino and scalar masses are naturally of the same order, and flavor-changing neutral currents are automatically suppressed. The most glaring problem with this scenario is that slepton masses are negative in the minimal supersymmetric standard model. We point out that this problem can be simply solved by coupling extra Higgs doublets to the leptons. Lepton flavor-changing neutral currents can be naturally avoided by approximate symmetries. We also describe more speculative solutions involving compositeness near the weak scale. We then turn to electroweak symmetry breaking. Adding an explicit μ term gives a value for Bμ that is too large by a factor of ~ 100. We construct a realistic model in which the μ term arises from the vacuum expectation value of a singlet field, so all weak-scale masses are directly related to m3/2. We show that fully realistic electroweak symmetry breaking can occur in this model with moderate fine-tuning.

  2. Faculty Development for Educators: A Realist Evaluation

    ERIC Educational Resources Information Center

    Sorinola, Olanrewaju O.; Thistlethwaite, Jill; Davies, David; Peile, Ed

    2015-01-01

    The effectiveness of faculty development (FD) activities for educators in UK medical schools remains underexplored. This study used a realist approach to evaluate FD and to test the hypothesis that motivation, engagement and perception are key mechanisms of effective FD activities. The authors observed and interviewed 33 course participants at one…

  3. Estimating the diversity of dinosaurs

    NASA Astrophysics Data System (ADS)

    Wang, Steve C.; Dodson, Peter

    2006-09-01

    Despite current interest in estimating the diversity of fossil and extant groups, little effort has been devoted to estimating the diversity of dinosaurs. Here we estimate the diversity of nonavian dinosaurs at ≈1,850 genera, including those that remain to be discovered. With 527 genera currently described, at least 71% of dinosaur genera thus remain unknown. Although known diversity declined in the last stage of the Cretaceous, estimated diversity was steady, suggesting that dinosaurs as a whole were not in decline in the 10 million years before their ultimate extinction. We also show that known diversity is biased by the availability of fossiliferous rock outcrop. Finally, by using a logistic model, we predict that 75% of discoverable genera will be known within 60-100 years and 90% within 100-140 years. Because of nonrandom factors affecting the process of fossil discovery (which preclude the possibility of computing realistic confidence bounds), our estimate of diversity is likely to be a lower bound.
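
    The logistic discovery-curve idea can be sketched as below: fit a logistic to the cumulative number of described genera and invert it to ask when a given fraction of the asymptotic total will be known. The discovery history here is simulated from assumed parameters, not the compilation analyzed in the paper, and no attempt is made to model the nonrandom discovery effects the authors caution about.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Cumulative number of described genera under a logistic discovery model."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Simulated discovery history (illustrative parameters, not the paper's data)
rng = np.random.default_rng(2)
years = np.arange(1820, 2010, 10, dtype=float)
counts = logistic(years, 1850.0, 0.045, 2015.0) * rng.normal(1.0, 0.05, years.size)

(K, r, t0), _ = curve_fit(logistic, years, counts, p0=(1000.0, 0.03, 2000.0), maxfev=20000)

def year_at_fraction(p):
    # invert the fitted logistic: year at which fraction p of K has been described
    return t0 - np.log(1.0 / p - 1.0) / r

print(f"fitted total diversity K ~ {K:.0f} genera")
print(f"75% known by ~ {year_at_fraction(0.75):.0f}, 90% by ~ {year_at_fraction(0.90):.0f}")
```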

  4. Comparison of Species Richness Estimates Obtained Using Nearly Complete Fragments and Simulated Pyrosequencing-Generated Fragments in 16S rRNA Gene-Based Environmental Surveys

    PubMed Central

    Youssef, Noha; Sheik, Cody S.; Krumholz, Lee R.; Najar, Fares Z.; Roe, Bruce A.; Elshahed, Mostafa S.

    2009-01-01

    Pyrosequencing-based 16S rRNA gene surveys are increasingly utilized to study highly diverse bacterial communities, with special emphasis on utilizing the large number of sequences obtained (tens to hundreds of thousands) for species richness estimation. However, it is not yet clear how the number of operational taxonomic units (OTUs) and, hence, species richness estimates determined using shorter fragments at different taxonomic cutoffs correlates with the number of OTUs assigned using longer, nearly complete 16S rRNA gene fragments. We constructed a 16S rRNA clone library from an undisturbed tallgrass prairie soil (1,132 clones) and used it to compare species richness estimates obtained using eight pyrosequencing candidate fragments (99 to 361 bp in length) and the nearly full-length fragment. Fragments encompassing the V1 and V2 (V1+V2) region and the V6 region (generated using primer pairs 8F-338R and 967F-1046R) overestimated species richness; fragments encompassing the V3, V7, and V7+V8 hypervariable regions (generated using primer pairs 338F-530R, 1046F-1220R, and 1046F-1392R) underestimated species richness; and fragments encompassing the V4, V5+V6, and V6+V7 regions (generated using primer pairs 530F-805R, 805F-1046R, and 967F-1220R) provided estimates comparable to those obtained with the nearly full-length fragment. These patterns were observed regardless of the alignment method utilized or the parameter used to gauge comparative levels of species richness (number of OTUs observed, slope of scatter plots of pairwise distance values for short and nearly complete fragments, and nonparametric and parametric species richness estimates). Similar results were obtained when analyzing three other datasets derived from soil, adult Zebrafish gut, and basaltic formations in the East Pacific Rise. Regression analysis indicated that these observed discrepancies in species richness estimates within various regions could readily be explained by the proportions of
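
    As one concrete example of the nonparametric species richness estimates being compared, the sketch below computes a bias-corrected Chao1 estimate from a vector of OTU abundances; the abundance distribution is synthetic and the OTU-clustering step itself is omitted.

```python
import numpy as np

def chao1(otu_counts):
    """Bias-corrected Chao1 richness estimate from a vector of OTU abundances."""
    counts = np.asarray(otu_counts)
    s_obs = np.count_nonzero(counts)            # observed OTUs
    f1 = np.count_nonzero(counts == 1)          # singletons
    f2 = np.count_nonzero(counts == 2)          # doubletons
    return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))

# Hypothetical OTU abundance table (e.g. OTUs defined at a 3% distance cutoff)
rng = np.random.default_rng(3)
abundances = rng.geometric(0.15, size=400)      # many rare OTUs, few abundant ones
print(f"observed OTUs: {np.count_nonzero(abundances)}, Chao1 estimate: {chao1(abundances):.0f}")
```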

  5. Electromagnetic Scattering from Realistic Targets

    NASA Technical Reports Server (NTRS)

    Lee, Shung-Wu; Jin, Jian-Ming

    1997-01-01

    The general goal of the project is to develop computational tools for calculating radar signature of realistic targets. A hybrid technique that combines the shooting-and-bouncing-ray (SBR) method and the finite-element method (FEM) for the radiation characterization of microstrip patch antennas in a complex geometry was developed. In addition, a hybridization procedure to combine moment method (MoM) solution and the SBR method to treat the scattering of waveguide slot arrays on an aircraft was developed. A list of journal articles and conference papers is included.

  6. Simulation of realistic retinoscopic measurement

    NASA Astrophysics Data System (ADS)

    Tan, Bo; Chen, Ying-Ling; Baker, K.; Lewis, J. W.; Swartz, T.; Jiang, Y.; Wang, M.

    2007-03-01

    Realistic simulation of ophthalmic measurements on normal and diseased eyes is presented. We use clinical data of ametropic and keratoconus patients to construct anatomically accurate three-dimensional eye models and simulate the measurement of a streak retinoscope with all the optical elements. The results show the clinical observations including the anomalous motion in high myopia and the scissors reflex in keratoconus. The demonstrated technique can be applied to other ophthalmic instruments and to other and more extensively abnormal eye conditions. It provides promising features for medical training and for evaluating and developing ocular instruments.

  7. Realistic finite temperature simulations of magnetic systems using quantum statistics

    NASA Astrophysics Data System (ADS)

    Bergqvist, Lars; Bergman, Anders

    2018-01-01

    We have performed realistic atomistic simulations at finite temperatures using Monte Carlo and atomistic spin dynamics simulations incorporating quantum (Bose-Einstein) statistics. The description is much improved at low temperatures compared to classical (Boltzmann) statistics normally used in these kinds of simulations, while at higher temperatures the classical statistics are recovered. This corrected low-temperature description is reflected in both the magnetization and the magnetic specific heat, the latter allowing for improved modeling of the magnetic contribution to free energies. A central property in the method is the magnon density of states at finite temperatures, and we have compared several different implementations for obtaining it. The method has no restrictions regarding chemical and magnetic order of the considered materials. This is demonstrated by applying the method to elemental ferromagnetic systems, including Fe and Ni, as well as Fe-Co random alloys and the ferrimagnetic system GdFe3.
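
    A toy sketch of the statistics correction at the heart of the method: given a magnon density of states (here an assumed Debye-like g(ω) ∝ √ω with an arbitrary cutoff, whereas the paper extracts it from the simulations themselves), the magnetic energy per mode is integrated with Bose-Einstein occupation and compared with the classical kT/(ħω) limit, from which the magnetic specific heat follows.

```python
import numpy as np

KB, HBAR = 1.380649e-23, 1.054571817e-34

# Assumed magnon density of states for a 3D ferromagnet (quadratic dispersion -> g ~ sqrt(w))
# with an arbitrary cutoff; in the paper g(w) comes from the simulations.
w_max = 2.0 * np.pi * 1.0e13                     # cutoff angular frequency (assumed)
w = np.linspace(1e-3 * w_max, w_max, 4000)
g = np.sqrt(w)
g /= np.trapz(g, w)                              # normalize to one mode

def magnon_energy(T, quantum=True):
    x = HBAR * w / (KB * T)
    occ = 1.0 / np.expm1(x) if quantum else 1.0 / x   # Bose-Einstein vs classical kT/(hbar*w)
    return np.trapz(g * HBAR * w * occ, w)

temps = np.linspace(10.0, 600.0, 60)
for label, flag in (("quantum", True), ("classical", False)):
    e = np.array([magnon_energy(T, flag) for T in temps])
    c = np.gradient(e, temps) / KB               # specific heat per mode, in units of kB
    print(f"{label:>9s}: C(10 K) = {c[0]:.3f} kB, C(600 K) = {c[-1]:.3f} kB")
```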

  8. Design for an efficient dynamic climate model with realistic geography

    NASA Technical Reports Server (NTRS)

    Suarez, M. J.; Abeles, J.

    1984-01-01

    Long-term climate sensitivity studies that include realistic atmospheric dynamics are severely restricted by the expense of integrating atmospheric general circulation models. Taking as examples models used at GSFC, an alternative is a dynamic model of much lower horizontal or vertical resolution. The model of Held and Suarez uses only two levels in the vertical and, although it has conventional grid resolution in the meridional direction, horizontal resolution is reduced by keeping only a few degrees of freedom in the zonal wavenumber spectrum. Without zonally asymmetric forcing this model simulates a day in roughly 1/2 second on a CRAY. The model under discussion is a fully finite-differenced, zonally asymmetric version of the Held-Suarez model. It is anticipated that speeds of a few seconds per simulated day can be obtained, roughly 50 times faster than moderate-resolution, multilayer GCMs.

  9. Realistic Planning for the Day Care Consumer.

    ERIC Educational Resources Information Center

    Emlen, Arthur C.

    This paper questions public attitudes of disparagement toward child care that is privately arranged in neighborhood homes, and cites research to show that the widespread non-use of organized facilities is based on realistic alternative patterns of day care behavior. Some determinants of day care use are discussed, and an understanding of…

  10. Improving Intuition Skills with Realistic Mathematics Education

    ERIC Educational Resources Information Center

    Hirza, Bonita; Kusumah, Yaya S.; Darhim; Zulkardi

    2014-01-01

    The intention of the present study was to see the improvement of students' intuitive skills. This improvement was seen by comparing the Realistic Mathematics Education (RME)-based instruction with the conventional mathematics instruction. The subject of this study was 164 fifth graders of elementary school in Palembang. The design of this study…

  11. Estimation of brittleness indices for pay zone determination in a shale-gas reservoir by using elastic properties obtained from micromechanics

    NASA Astrophysics Data System (ADS)

    Lizcano-Hernández, Edgar G.; Nicolás-López, Rubén; Valdiviezo-Mijangos, Oscar C.; Meléndez-Martínez, Jaime

    2018-04-01

    The brittleness indices (BI) of gas-shales are computed by using their effective mechanical properties obtained from micromechanical self-consistent modeling with the purpose of assisting in the identification of the more-brittle regions in shale-gas reservoirs, i.e., the so-called ‘pay zone’. The obtained BI are plotted in lambda-rho versus mu-rho (λρ-μρ) and Young’s modulus versus Poisson’s ratio (E-ν) ternary diagrams along with the estimated elastic properties from log data of three productive shale-gas wells where the pay zone is already known. A quantitative comparison between the obtained BI and the well log data allows for the delimitation of regions where BI values could indicate the best reservoir target in regions with the highest shale-gas exploitation potential. Therefore, a range of values for elastic properties and brittleness indices that can be used as a data source to support the well placement procedure is obtained.
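
    As a simple illustration of turning elastic properties into a brittleness index, the sketch below uses a commonly quoted min-max normalization of Young's modulus and Poisson's ratio (in the spirit of Rickman-type indices); it is not necessarily the exact BI definition used by the authors, and the interval values are hypothetical.

```python
import numpy as np

def brittleness_index(E, nu, E_min, E_max, nu_min, nu_max):
    """Average of min-max normalized Young's modulus and (inverted) Poisson's ratio, in %."""
    e_term  = (E - E_min) / (E_max - E_min)
    nu_term = (nu_max - nu) / (nu_max - nu_min)      # low Poisson's ratio -> more brittle
    return 50.0 * (e_term + nu_term)

# Hypothetical log-derived effective properties for a few depth intervals
E  = np.array([25.0, 42.0, 55.0])    # Young's modulus, GPa
nu = np.array([0.30, 0.24, 0.18])    # Poisson's ratio

# Normalization bounds would normally come from the full well interval under study.
bi = brittleness_index(E, nu, E_min=E.min(), E_max=E.max(), nu_min=nu.min(), nu_max=nu.max())
print("brittleness index per interval (%):", np.round(bi, 1))
```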

  12. Comparison of student's learning achievement through realistic mathematics education (RME) approach and problem solving approach on grade VII

    NASA Astrophysics Data System (ADS)

    Ilyas, Muhammad; Salwah

    2017-02-01

    This research was experimental. Its purpose was to determine the difference in, and the quality of, students' learning achievement between students taught through the Realistic Mathematics Education (RME) approach and students taught through a problem solving approach. The study was quasi-experimental with a non-equivalent experiment group design. The population was all grade VII students in one junior high school in Palembang, in the second semester of the 2015/2016 academic year. Two classes were selected purposively as the research sample: class VII-5 (28 students) as experiment group I and class VII-6 (23 students) as experiment group II. The treatment in experiment group I was learning through the RME approach, and in experiment group II learning through the problem solving approach. Data were collected by giving students a pretest and a posttest, and were analyzed using descriptive statistics and inferential statistics (t-test). Based on the descriptive analysis, the average mathematics score of students taught using the problem solving approach was similar to that of students taught using the RME approach, with both at the high category. It can also be concluded that (1) there was no difference in the mathematics learning outcomes of students taught using the RME approach and students taught using the problem solving approach, and (2) the quality of learning achievement of students who received RME instruction and problem solving instruction was the same, at the high category.

  13. Modeling of shallow and inefficient convection in the outer layers of the Sun using realistic physics

    NASA Technical Reports Server (NTRS)

    Kim, Yong-Cheol; Fox, Peter A.; Sofia, Sabatino; Demarque, Pierre

    1995-01-01

    In an attempt to understand the properties of convective energy transport in the solar convective zone, a numerical model has been constructed for turbulent flows in a compressible, radiation-coupled, nonmagnetic, gravitationally stratified medium using a realistic equation of state and realistic opacities. The time-dependent, three-dimensional hydrodynamic equations are solved with minimal simplifications. The statistical information obtained from the present simulation provides an improved understanding of solar photospheric convection. The characteristics of solar convection in shallow regions are parameterized and compared with the results of Chan & Sofia's (1989) simulations of deep and efficient convection. We assess the importance of the zones of partial ionization in the simulation and confirm that the radiative energy transfer is negligible throughout the region except in the uppermost scale heights of the convection zone, a region of very high superadiabaticity. When the effects of partial ionization are included, the dynamics of the flows are altered significantly. However, we confirm the Chan & Sofia result that kinetic energy flux is nonnegligible and can have a negative value in the convection zone.

  14. Bias Correction of MODIS AOD using DragonNET to obtain improved estimation of PM2.5

    NASA Astrophysics Data System (ADS)

    Gross, B.; Malakar, N. K.; Atia, A.; Moshary, F.; Ahmed, S. A.; Oo, M. M.

    2014-12-01

    MODIS AOD retrievals using the Dark Target algorithm are strongly affected by the underlying surface reflection properties. In particular, the operational algorithms make use of surface parameterizations trained on global datasets and therefore do not account properly for urban surface differences. This parameterization continues to show an underestimation of the surface reflection, which results in a general over-biasing in AOD retrievals. Recent results using the Dragon-Network datasets as well as high resolution retrievals in the NYC area illustrate that this is even more significant at the newest C006 3 km retrievals. In the past, we used AERONET observations at the City College site to obtain bias-corrected AOD, but the homogeneity assumption of using only one site for the region is clearly an issue. On the other hand, DragonNET observations provide ample opportunities to better tune the surface corrections while also providing better statistical validation. In this study we present a neural network method to obtain bias correction of the MODIS AOD using multiple factors including surface reflectivity at 2130nm, sun-view geometrical factors and land-class information. These corrected AODs are then used together with additional WRF meteorological factors to improve estimates of PM2.5. Efforts to explore the portability to other urban areas will be discussed. In addition, annual surface ratio maps will be developed illustrating that among the land classes, the urban pixels constitute the largest deviations from the operational model.
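
    A minimal sketch of the kind of neural-network bias correction described above, trained on synthetic matchups: the predictor set (2130 nm surface reflectance, a geometry term, an urban land-class fraction), the bias relationship, and the network size are all illustrative assumptions, not the operational DragonNET configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
n = 500

# Hypothetical collocated satellite / sun-photometer matchups (synthetic placeholders)
surf_2130   = rng.uniform(0.05, 0.30, n)        # surface reflectance at 2130 nm
scat_angle  = rng.uniform(100.0, 170.0, n)      # sun-view scattering angle (deg)
urban_frac  = rng.uniform(0.0, 1.0, n)          # land-class derived urban fraction
aeronet_aod = rng.gamma(2.0, 0.1, n)            # "truth" AOD from the ground network

bias = 0.15 * urban_frac + 0.3 * (surf_2130 - 0.1) + rng.normal(0, 0.02, n)
modis_aod = aeronet_aod + bias                  # over-biased satellite retrieval

X = np.column_stack([surf_2130, scat_angle, urban_frac])
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
model.fit(X, modis_aod - aeronet_aod)           # learn the retrieval bias

corrected = modis_aod - model.predict(X)
print("RMSE before:", np.sqrt(np.mean((modis_aod - aeronet_aod) ** 2)).round(3),
      "after:", np.sqrt(np.mean((corrected - aeronet_aod) ** 2)).round(3))
```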

  15. Waveform inversion of acoustic waves for explosion yield estimation

    DOE PAGES

    Kim, K.; Rodgers, A. J.

    2016-07-08

    We present a new waveform inversion technique to estimate the energy of near-surface explosions using atmospheric acoustic waves. Conventional methods often employ air blast models based on a homogeneous atmosphere, where the acoustic wave propagation effects (e.g., refraction and diffraction) are not taken into account, and therefore, their accuracy decreases with increasing source-receiver distance. In this study, three-dimensional acoustic simulations are performed with a finite difference method in realistic atmospheres and topography, and the modeled acoustic Green's functions are incorporated into the waveform inversion for the acoustic source time functions. The strength of the acoustic source is related to explosion yield based on a standard air blast model. The technique was applied to local explosions (<10 km) and provided reasonable yield estimates (<~30% error) in the presence of realistic topography and atmospheric structure. In conclusion, the presented method can be extended to explosions recorded at far distance provided proper meteorological specifications.
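
    A toy 1-D analogue of the inversion step: with a known propagation Green's function, the source time function can be recovered from the recorded trace by damped (water-level) frequency-domain deconvolution. Everything below (the Green's function, source pulse, noise level, and damping) is invented for illustration; the paper's Green's functions come from 3-D finite-difference simulations through realistic atmosphere and topography.

```python
import numpy as np

rng = np.random.default_rng(5)

# The recorded trace is modeled as the convolution of a propagation Green's function
# with the source time function (STF); both are synthetic placeholders here.
dt, n = 0.01, 2048
t = np.arange(n) * dt

green = np.zeros(n)
green[int(2.0 / dt)] = 1.0                      # direct arrival at 2 s
green[int(2.6 / dt)] = 0.4                      # a weaker secondary arrival

stf_true = np.exp(-((t - 0.3) / 0.05) ** 2)     # blast-like source pulse
data = np.convolve(green, stf_true)[:n] * dt
data += rng.normal(0, 0.002, n)                 # measurement noise

# Water-level (damped) frequency-domain deconvolution for the STF
G, D = np.fft.rfft(green) * dt, np.fft.rfft(data)
eps = 0.01 * np.max(np.abs(G)) ** 2
stf_est = np.fft.irfft(np.conj(G) * D / (np.abs(G) ** 2 + eps), n)

peak_ratio = stf_est.max() / stf_true.max()
print(f"recovered/true STF peak amplitude ratio ~ {peak_ratio:.2f}")
```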

  16. Waveform inversion of acoustic waves for explosion yield estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, K.; Rodgers, A. J.

    We present a new waveform inversion technique to estimate the energy of near-surface explosions using atmospheric acoustic waves. Conventional methods often employ air blast models based on a homogeneous atmosphere, where the acoustic wave propagation effects (e.g., refraction and diffraction) are not taken into account, and therefore, their accuracy decreases with increasing source-receiver distance. In this study, three-dimensional acoustic simulations are performed with a finite difference method in realistic atmospheres and topography, and the modeled acoustic Green's functions are incorporated into the waveform inversion for the acoustic source time functions. The strength of the acoustic source is related to explosion yield based on a standard air blast model. The technique was applied to local explosions (<10 km) and provided reasonable yield estimates (<~30% error) in the presence of realistic topography and atmospheric structure. In conclusion, the presented method can be extended to explosions recorded at far distance provided proper meteorological specifications.

  17. Regional 3-D Modeling of Ground Geoelectric Field for the Northeast United States due to Realistic Geomagnetic Disturbances

    NASA Astrophysics Data System (ADS)

    Ivannikova, E.; Kruglyakov, M.; Kuvshinov, A. V.; Rastaetter, L.; Pulkkinen, A. A.; Ngwira, C. M.

    2017-12-01

    During extreme space weather events electric currents in the Earth's magnetosphere and ionosphere experience large variations, which leads to dramatic intensification of the fluctuating magnetic field at the surface of the Earth. According to Faraday's law of induction, the fluctuating geomagnetic field in turn induces electric field that generates harmful currents (so-called "geomagnetically induced currents"; GICs) in grounded technological systems. Understanding (via modeling) of the spatio-temporal evolution of the geoelectric field during enhanced geomagnetic activity is a key consideration in estimating the hazard to technological systems from space weather. We present the results of ground geoelectric field modeling for the Northeast United States, which is performed with the use of our novel numerical tool based on integral equation approach. The tool exploits realistic regional three-dimensional (3-D) models of the Earth's electrical conductivity and realistic global models of the spatio-temporal evolution of the magnetospheric and ionospheric current systems responsible for geomagnetic disturbances. We also explore in detail the manifestation of the coastal effect (anomalous intensification of the geoelectric field near the coasts) in this region.

  18. Improving atomic displacement and replacement calculations with physically realistic damage models.

    PubMed

    Nordlund, Kai; Zinkle, Steven J; Sand, Andrea E; Granberg, Fredric; Averback, Robert S; Stoller, Roger; Suzudo, Tomoaki; Malerba, Lorenzo; Banhart, Florian; Weber, William J; Willaime, Francois; Dudarev, Sergei L; Simeone, David

    2018-03-14

    Atomic collision processes are fundamental to numerous advanced materials technologies such as electron microscopy, semiconductor processing and nuclear power generation. Extensive experimental and computer simulation studies over the past several decades provide the physical basis for understanding the atomic-scale processes occurring during primary displacement events. The current international standard for quantifying this energetic particle damage, the Norgett-Robinson-Torrens displacements per atom (NRT-dpa) model, now has several well-known limitations. In particular, the number of radiation defects produced in energetic cascades in metals is only ~1/3 the NRT-dpa prediction, while the number of atoms involved in atomic mixing is about a factor of 30 larger than the dpa value. Here we propose two new complementary estimators that extend the NRT-dpa: a displacement production measure (athermal recombination corrected dpa, arc-dpa) and an atomic mixing measure (replacements per atom, rpa). These functions provide more physically realistic descriptions of primary defect creation in materials and may become additional standard measures for radiation damage quantification.

  19. Robust fundamental frequency estimation in sustained vowels: Detailed algorithmic comparisons and information fusion with adaptive Kalman filtering

    PubMed Central

    Tsanas, Athanasios; Zañartu, Matías; Little, Max A.; Fox, Cynthia; Ramig, Lorraine O.; Clifford, Gari D.

    2014-01-01

    There has been consistent interest among speech signal processing researchers in the accurate estimation of the fundamental frequency (F0) of speech signals. This study examines ten F0 estimation algorithms (some well-established and some proposed more recently) to determine which of these algorithms is, on average, better able to estimate F0 in the sustained vowel /a/. Moreover, a robust method for adaptively weighting the estimates of individual F0 estimation algorithms based on quality and performance measures is proposed, using an adaptive Kalman filter (KF) framework. The accuracy of the algorithms is validated using (a) a database of 117 synthetic realistic phonations obtained using a sophisticated physiological model of speech production and (b) a database of 65 recordings of human phonations where the glottal cycles are calculated from electroglottograph signals. On average, the sawtooth waveform inspired pitch estimator and the nearly defect-free algorithms provided the best individual F0 estimates, and the proposed KF approach resulted in a ∼16% improvement in accuracy over the best single F0 estimation algorithm. These findings may be useful in speech signal processing applications where sustained vowels are used to assess vocal quality, when very accurate F0 estimation is required. PMID:24815269
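
    A stripped-down illustration of fusing several F0 trackers with a one-dimensional Kalman filter: each frame's estimates are folded in sequentially, with measurement variances standing in for the quality measures. The F0 contour, the per-algorithm error levels, and the variance assignment are invented for illustration, not taken from the paper's databases.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy fusion of several F0 trackers with a 1-D Kalman filter (illustrative stand-ins
# for the paper's algorithms and quality-based measurement variances).
n_frames = 200
true_f0 = 120.0 + 5.0 * np.sin(np.linspace(0, 4 * np.pi, n_frames))   # Hz

algo_noise = np.array([1.0, 2.5, 4.0])          # per-algorithm measurement std (Hz)
estimates = true_f0 + rng.normal(0, algo_noise[:, None], (3, n_frames))

x, P, q = estimates[:, 0].mean(), 25.0, 0.5     # initial state, variance, process noise
fused = np.empty(n_frames)
for k in range(n_frames):
    P += q                                      # predict: random-walk F0 model
    for z, r in zip(estimates[:, k], algo_noise ** 2):
        K = P / (P + r)                         # Kalman gain for this measurement
        x += K * (z - x)                        # sequentially fuse each algorithm
        P *= (1 - K)
    fused[k] = x

for name, est in (("best single", estimates[0]), ("fused", fused)):
    print(f"{name:>12s} RMSE: {np.sqrt(np.mean((est - true_f0) ** 2)):.2f} Hz")
```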

  20. Realistic terrain visualization based on 3D virtual world technology

    NASA Astrophysics Data System (ADS)

    Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai

    2009-09-01

    The rapid advances in information technologies, e.g., networking, graphics processing, and virtual worlds, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments that support geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographic visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, and explores the integration of realistic terrain with other geographic objects and phenomena of the natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation for constructing a mirror world or a sandbox model of the Earth's landscape and geographic environment. The capabilities for interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed on the foundation of realistic terrain visualization in virtual environments.

  1. Realistic terrain visualization based on 3D virtual world technology

    NASA Astrophysics Data System (ADS)

    Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai

    2010-11-01

    The rapid advances in information technologies, e.g., networking, graphics processing, and virtual worlds, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments that support geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographic visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, and explores the integration of realistic terrain with other geographic objects and phenomena of the natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation for constructing a mirror world or a sandbox model of the Earth's landscape and geographic environment. The capabilities for interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed on the foundation of realistic terrain visualization in virtual environments.

  2. Estimation of electric fields and current from ground-based magnetometer data

    NASA Technical Reports Server (NTRS)

    Kamide, Y.; Richmond, A. D.

    1984-01-01

    Recent advances in numerical algorithms for estimating ionospheric electric fields and currents from ground-based magnetometer data are reviewed and evaluated. Tests of the adequacy of one such algorithm in reproducing large-scale patterns of electrodynamic parameters in the high-latitude ionosphere have yielded generally positive results, at least for some simple cases. Some encouraging advances in producing realistic conductivity models, which are a critical input, are pointed out. When the algorithms are applied to extensive data sets, such as the ones from meridian chain magnetometer networks during the IMS, together with refined conductivity models, unique information on instantaneous electric field and current patterns can be obtained. Examples of electric potentials, ionospheric currents, field-aligned currents, and Joule heating distributions derived from ground magnetic data are presented. Possible directions for future improvements are also pointed out.

  3. Realistic Affective Forecasting: The Role of Personality

    PubMed Central

    Hoerger, Michael; Chapman, Ben; Duberstein, Paul

    2016-01-01

    Affective forecasting often drives decision making. Although affective forecasting research has often focused on identifying sources of error at the event level, the present investigation draws upon the ‘realistic paradigm’ in seeking to identify factors that similarly influence predicted and actual emotions, explaining their concordance across individuals. We hypothesized that the personality traits neuroticism and extraversion would account for variation in both predicted and actual emotional reactions to a wide array of stimuli and events (football games, an election, Valentine’s Day, birthdays, happy/sad film clips, and an intrusive interview). As hypothesized, individuals who were more introverted and neurotic anticipated, correctly, that they would experience relatively more unpleasant emotional reactions, and those who were more extraverted and less neurotic anticipated, correctly, that they would experience relatively more pleasant emotional reactions. Personality explained 30% of the concordance between predicted and actual emotional reactions. Findings suggest three purported personality processes implicated in affective forecasting, highlight the importance of individual-differences research in this domain, and call for more research on realistic affective forecasts. PMID:26212463

  4. Realistic affective forecasting: The role of personality.

    PubMed

    Hoerger, Michael; Chapman, Ben; Duberstein, Paul

    2016-11-01

    Affective forecasting often drives decision-making. Although affective forecasting research has often focused on identifying sources of error at the event level, the present investigation draws upon the "realistic paradigm" in seeking to identify factors that similarly influence predicted and actual emotions, explaining their concordance across individuals. We hypothesised that the personality traits neuroticism and extraversion would account for variation in both predicted and actual emotional reactions to a wide array of stimuli and events (football games, an election, Valentine's Day, birthdays, happy/sad film clips, and an intrusive interview). As hypothesised, individuals who were more introverted and neurotic anticipated, correctly, that they would experience relatively more unpleasant emotional reactions, and those who were more extraverted and less neurotic anticipated, correctly, that they would experience relatively more pleasant emotional reactions. Personality explained 30% of the concordance between predicted and actual emotional reactions. Findings suggest three purported personality processes implicated in affective forecasting, highlight the importance of individual-differences research in this domain, and call for more research on realistic affective forecasts.

  5. Toxicokinetics of perfluorooctane sulfonate in birds under environmentally realistic exposure conditions and development of a kinetic predictive model.

    PubMed

    Tarazona, J V; Rodríguez, C; Alonso, E; Sáez, M; González, F; San Andrés, M D; Jiménez, B; San Andrés, M I

    2015-01-22

    This article describes the toxicokinetics of perfluorooctane sulfonate (PFOS) in birds under low repeated dosing, equivalent to 0.085 μg/kg per day, representing environmentally realistic exposure conditions. The best fit was provided by a simple pseudo-monocompartmental first-order kinetic model, regulated by two rates, with a pseudo first-order dissipation half-life of 230 days, accounting for real elimination as well as binding of PFOS to non-exchangeable structures. The calculated assimilation efficiency was 0.66, with confidence intervals of 0.64 and 0.68. The model calculations confirmed that the measured maximum concentrations were still far from the steady-state situation, which for this dose regime was estimated at about 65 μg PFOS/L serum, achieved after a theoretical 210 weeks of continuous exposure. The results confirm kinetics very different from those observed in single-dose experiments, indicating clear dose-related differences in apparent elimination rates in birds, as described for humans and monkeys, and suggesting that a capacity-limited, saturable process should also be considered in the kinetic behavior of PFOS in birds. Pseudo first-order kinetic models are highly convenient and frequently used for predicting bioaccumulation of chemicals in livestock and wildlife; the study suggests that previous bioaccumulation models using half-lives obtained at high doses are expected to underestimate the biomagnification potential of PFOS. The toxicokinetic parameters presented here can be used for higher-tier bioaccumulation estimations of PFOS in chickens and as surrogate values for modeling PFOS kinetics in wild bird species. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
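
    A minimal numerical sketch of the pseudo first-order, one-compartment picture described above: with the reported 230-day dissipation half-life and the ~65 μg/L steady-state estimate, serum concentration under constant low-dose exposure follows C(t) = C_ss·(1 − exp(−kt)). Only the half-life and steady-state value are taken from the abstract; the evaluation times are illustrative.

```python
# Minimal sketch of the pseudo first-order, one-compartment accumulation described in
# the abstract: C(t) = C_ss*(1 - exp(-k*t)), with k from the reported 230 d half-life
# and C_ss from the reported ~65 ug PFOS/L serum steady-state estimate.
import math

HALF_LIFE_D = 230.0                          # pseudo first-order dissipation half-life [days]
C_SS = 65.0                                  # estimated steady-state serum level [ug/L]
K_PER_DAY = math.log(2.0) / HALF_LIFE_D      # first-order rate constant [1/day]

def serum_conc(t_days):
    """Serum PFOS concentration [ug/L] after t_days of constant low-dose exposure."""
    return C_SS * (1.0 - math.exp(-K_PER_DAY * t_days))

if __name__ == "__main__":
    for weeks in (10, 50, 100, 210):
        c = serum_conc(weeks * 7)
        print(f"{weeks:>3} weeks: {c:5.1f} ug/L ({100.0 * c / C_SS:4.1f}% of steady state)")
```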

  6. Obtaining continuous BrAC/BAC estimates in the field: A hybrid system integrating transdermal alcohol biosensor, Intellidrink smartphone app, and BrAC Estimator software tools.

    PubMed

    Luczak, Susan E; Hawkins, Ashley L; Dai, Zheng; Wichmann, Raphael; Wang, Chunming; Rosen, I Gary

    2018-08-01

    Biosensors have been developed to measure transdermal alcohol concentration (TAC), but converting TAC into interpretable indices of blood/breath alcohol concentration (BAC/BrAC) is difficult because of variations that occur in TAC across individuals, drinking episodes, and devices. We have developed mathematical models and the BrAC Estimator software for calibrating and inverting TAC into quantifiable BrAC estimates (eBrAC). The calibration protocol to determine the individualized parameters for a specific individual wearing a specific device requires a drinking session in which BrAC and TAC measurements are obtained simultaneously. This calibration protocol was originally conducted in the laboratory with breath analyzers used to produce the BrAC data. Here we develop and test an alternative calibration protocol using drinking diary data collected in the field with the smartphone app Intellidrink to produce the BrAC calibration data. We compared BrAC Estimator software results for 11 drinking episodes collected by an expert user when using Intellidrink versus breath analyzer measurements as BrAC calibration data. Inversion phase results indicated that the Intellidrink calibration protocol produced similar eBrAC curves and captured peak eBrAC to within 0.0003%, time of peak eBrAC to within 18 min, and area under the eBrAC curve to within 0.025% alcohol-hours, compared with the breath analyzer calibration protocol. This study provides evidence that drinking diary data can be used in place of breath analyzer data in the BrAC Estimator software calibration procedure, which can reduce participant and researcher burden and expand the potential software user pool beyond researchers studying participants who can drink in the laboratory. Copyright © 2017. Published by Elsevier Ltd.

  7. Coarse-grained versus atomistic simulations: realistic interaction free energies for real proteins.

    PubMed

    May, Ali; Pool, René; van Dijk, Erik; Bijlard, Jochem; Abeln, Sanne; Heringa, Jaap; Feenstra, K Anton

    2014-02-01

    To assess whether two proteins will interact under physiological conditions, information on the interaction free energy is needed. Statistical learning techniques and docking methods for predicting protein-protein interactions cannot quantitatively estimate binding free energies. Full atomistic molecular simulation methods do have this potential, but are completely unfeasible for large-scale applications in terms of computational cost required. Here we investigate whether applying coarse-grained (CG) molecular dynamics simulations is a viable alternative for complexes of known structure. We calculate the free energy barrier with respect to the bound state based on molecular dynamics simulations using both a full atomistic and a CG force field for the TCR-pMHC complex and the MP1-p14 scaffolding complex. We find that the free energy barriers from the CG simulations are of similar accuracy as those from the full atomistic ones, while achieving a speedup of >500-fold. We also observe that extensive sampling is extremely important to obtain accurate free energy barriers, which is only within reach for the CG models. Finally, we show that the CG model preserves biological relevance of the interactions: (i) we observe a strong correlation between evolutionary likelihood of mutations and the impact on the free energy barrier with respect to the bound state; and (ii) we confirm the dominant role of the interface core in these interactions. Therefore, our results suggest that CG molecular simulations can realistically be used for the accurate prediction of protein-protein interaction strength. The python analysis framework and data files are available for download at http://www.ibi.vu.nl/downloads/bioinformatics-2013-btt675.tgz.

  8. Beyond the realist turn: a socio-material analysis of heart failure self-care.

    PubMed

    McDougall, Allan; Kinsella, Elizabeth Anne; Goldszmidt, Mark; Harkness, Karen; Strachan, Patricia; Lingard, Lorelei

    2018-01-01

    For patients living with chronic illnesses, self-care has been linked with positive outcomes such as decreased hospitalisation, longer lifespan, and improved quality of life. However, despite calls for more and better self-care interventions, behaviour change trials have repeatedly fallen short on demonstrating effectiveness. The literature on heart failure (HF) stands as a case in point, and a growing body of HF studies advocate realist approaches to self-care research and policymaking. We label this trend the 'realist turn' in HF self-care. Realist evaluation and realist interventions emphasise that the relationship between self-care interventions and positive health outcomes is not fixed, but contingent on social context. This paper argues socio-materiality offers a productive framework to expand on the idea of social context in realist accounts of HF self-care. This study draws on 10 interviews as well as researcher reflections from a larger study exploring health care teams for patients with advanced HF. Leveraging insights from actor-network theory (ANT), this study provides two rich narratives about the contextual factors that influence HF self-care. These descriptions portray not self-care contexts but self-care assemblages, which we discuss in light of socio-materiality. © 2018 Foundation for the Sociology of Health & Illness.

  9. Estimating Temporal Causal Interaction between Spike Trains with Permutation and Transfer Entropy

    PubMed Central

    Li, Zhaohui; Li, Xiaoli

    2013-01-01

    Estimating the causal interaction between neurons is very important for better understanding the functional connectivity in neuronal networks. We propose a method called normalized permutation transfer entropy (NPTE) to evaluate the temporal causal interaction between spike trains, which quantifies the fraction of ordinal information in one neuron that is present in another. The performance of this method is evaluated with spike trains generated by an Izhikevich neuronal model. Results show that the NPTE method can effectively estimate the causal interaction between two neurons without influence of data length. Considering both the precision of the estimated time delay and the robustness of the estimated information flow against neuronal firing rate, the NPTE method is superior to other information-theoretic methods, including normalized transfer entropy, symbolic transfer entropy and permutation conditional mutual information. To test the performance of NPTE on simulated, biophysically realistic synapses, an Izhikevich cortical network based on the same neuronal model is employed. It is found that the NPTE method is able to characterize mutual interactions and identify spurious causality in a network of three neurons exactly. We conclude that the proposed method can obtain more reliable comparisons of interactions between different pairs of neurons and is a promising tool to uncover more details of neural coding. PMID:23940662
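
    The sketch below illustrates the ingredients behind a permutation-based transfer entropy (ordinal symbolization of each spike-count series followed by the standard transfer-entropy sum). It is a generic illustration of the idea, not the authors' normalized NPTE estimator; the embedding order, bin width and toy spike data are assumptions.

```python
# Hedged sketch of permutation-based transfer entropy between two spike trains.
# Spike counts are mapped to ordinal (permutation) symbols of order m, and the standard
# transfer entropy  TE(X->Y) = sum p(y+, y, x) * log2[ p(y+ | y, x) / p(y+ | y) ]
# is estimated from symbol counts. Generic illustration only, not the NPTE estimator.
from collections import Counter
from itertools import permutations
import numpy as np

def ordinal_symbols(series, m=3):
    """Map a 1-D series to ordinal-pattern indices of embedding dimension m."""
    lookup = {p: i for i, p in enumerate(permutations(range(m)))}
    sym = []
    for t in range(len(series) - m + 1):
        pattern = tuple(np.argsort(series[t:t + m], kind="stable"))
        sym.append(lookup[pattern])
    return np.array(sym)

def transfer_entropy(x, y, m=3):
    """Estimate TE(X->Y) in bits from two equally long discrete series."""
    sx, sy = ordinal_symbols(x, m), ordinal_symbols(y, m)
    n = len(sy) - 1
    joint3 = Counter((sy[t + 1], sy[t], sx[t]) for t in range(n))   # p(y+, y, x)
    joint2 = Counter((sy[t], sx[t]) for t in range(n))              # p(y, x)
    pair_y = Counter((sy[t + 1], sy[t]) for t in range(n))          # p(y+, y)
    marg_y = Counter(sy[t] for t in range(n))                       # p(y)
    te = 0.0
    for (yp, yc, xc), c in joint3.items():
        p_joint = c / n
        p_cond_full = c / joint2[(yc, xc)]
        p_cond_y = pair_y[(yp, yc)] / marg_y[yc]
        te += p_joint * np.log2(p_cond_full / p_cond_y)
    return te

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Toy example: y is a delayed, noisy copy of x (binned spike counts)
    x = rng.poisson(2.0, size=5000)
    y = np.roll(x, 1) + rng.poisson(0.5, size=5000)
    print("TE(X->Y) = %.3f bits, TE(Y->X) = %.3f bits"
          % (transfer_entropy(x, y), transfer_entropy(y, x)))
```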

  10. Test suite for image-based motion estimation of the brain and tongue

    NASA Astrophysics Data System (ADS)

    Ramsey, Jordan; Prince, Jerry L.; Gomez, Arnold D.

    2017-03-01

    Noninvasive analysis of motion has important uses as qualitative markers for organ function and to validate biomechanical computer simulations relative to experimental observations. Tagged MRI is considered the gold standard for noninvasive tissue motion estimation in the heart, and this has inspired multiple studies focusing on other organs, including the brain under mild acceleration and the tongue during speech. As with other motion estimation approaches, using tagged MRI to measure 3D motion includes several preprocessing steps that affect the quality and accuracy of estimation. Benchmarks, or test suites, are datasets of known geometries and displacements that act as tools to tune tracking parameters or to compare different motion estimation approaches. Because motion estimation was originally developed to study the heart, existing test suites focus on cardiac motion. However, many fundamental differences exist between the heart and other organs, such that parameter tuning (or other optimization) with respect to a cardiac database may not be appropriate. Therefore, the objective of this research was to design and construct motion benchmarks by adopting an "image synthesis" test suite to study brain deformation due to mild rotational accelerations, and a benchmark to model motion of the tongue during speech. To obtain a realistic representation of mechanical behavior, kinematics were obtained from finite-element (FE) models. These results were combined with an approximation of the acquisition process of tagged MRI (including tag generation, slice thickness, and inconsistent motion repetition). To demonstrate an application of the presented methodology, the effect of motion inconsistency on synthetic measurements of head-brain rotation and deformation was evaluated. The results indicated that acquisition inconsistency is roughly proportional to head rotation estimation error. Furthermore, when evaluating non-rigid deformation, the results suggest that

  11. Test Suite for Image-Based Motion Estimation of the Brain and Tongue

    PubMed Central

    Ramsey, Jordan; Prince, Jerry L.; Gomez, Arnold D.

    2017-01-01

    Noninvasive analysis of motion has important uses as qualitative markers for organ function and to validate biomechanical computer simulations relative to experimental observations. Tagged MRI is considered the gold standard for noninvasive tissue motion estimation in the heart, and this has inspired multiple studies focusing on other organs, including the brain under mild acceleration and the tongue during speech. As with other motion estimation approaches, using tagged MRI to measure 3D motion includes several preprocessing steps that affect the quality and accuracy of estimation. Benchmarks, or test suites, are datasets of known geometries and displacements that act as tools to tune tracking parameters or to compare different motion estimation approaches. Because motion estimation was originally developed to study the heart, existing test suites focus on cardiac motion. However, many fundamental differences exist between the heart and other organs, such that parameter tuning (or other optimization) with respect to a cardiac database may not be appropriate. Therefore, the objective of this research was to design and construct motion benchmarks by adopting an “image synthesis” test suite to study brain deformation due to mild rotational accelerations, and a benchmark to model motion of the tongue during speech. To obtain a realistic representation of mechanical behavior, kinematics were obtained from finite-element (FE) models. These results were combined with an approximation of the acquisition process of tagged MRI (including tag generation, slice thickness, and inconsistent motion repetition). To demonstrate an application of the presented methodology, the effect of motion inconsistency on synthetic measurements of head-brain rotation and deformation was evaluated. The results indicated that acquisition inconsistency is roughly proportional to head rotation estimation error. Furthermore, when evaluating non-rigid deformation, the results suggest that

  12. Assessing the Uncertainties on Seismic Source Parameters: Towards Realistic Estimates of Moment Tensor Determinations

    NASA Astrophysics Data System (ADS)

    Magnoni, F.; Scognamiglio, L.; Tinti, E.; Casarotti, E.

    2014-12-01

    Seismic moment tensor is one of the most important source parameters, defining the earthquake dimension and the style of the activated fault. Moment tensor catalogues are routinely used by geoscientists; however, few attempts have been made to assess the possible impacts of moment magnitude uncertainties upon their own analyses. The 2012 May 20 Emilia mainshock is a representative event, since it is defined in the literature with a moment magnitude value (Mw) spanning between 5.63 and 6.12. An uncertainty of ~0.5 units in magnitude leads to a controversial knowledge of the real size of the event. The uncertainty associated with this estimate could be critical for the inference of other seismological parameters, suggesting caution for seismic hazard assessment, Coulomb stress transfer determination and other analyses where self-consistency is important. In this work, we focus on the variability of the moment tensor solution, highlighting the effect of four different velocity models, different types and ranges of filtering, and two different methodologies. Using a larger dataset, to better quantify the source parameter uncertainty, we also analyze the variability of the moment tensor solutions depending on the number, the epicentral distance and the azimuth of the stations used. We stress that the estimate of seismic moment from moment tensor solutions, as well as the estimates of the other kinematic source parameters, cannot be considered absolute values and should be reported with the related uncertainties, within a reproducible framework characterized by disclosed assumptions and explicit processing workflows.

  13. Spatial Visualization by Realistic 3D Views

    ERIC Educational Resources Information Center

    Yue, Jianping

    2008-01-01

    In this study, the popular Purdue Spatial Visualization Test-Visualization by Rotations (PSVT-R) in isometric drawings was recreated with CAD software that allows 3D solid modeling and rendering to provide more realistic pictorial views. Both the original and the modified PSVT-R tests were given to students and their scores on the two tests were…

  14. Role-playing for more realistic technical skills training.

    PubMed

    Nikendei, C; Zeuch, A; Dieckmann, P; Roth, C; Schäfer, S; Völkl, M; Schellberg, D; Herzog, W; Jünger, J

    2005-03-01

    Clinical skills are an important and necessary part of clinical competence. Simulation plays an important role in many fields of medical education. Although role-playing is common in communication training, there are no reports about the use of student role-plays in the training of technical clinical skills. This article describes an educational intervention with analysis of pre- and post-intervention self-selected student survey evaluations. After one term of skills training, a thorough evaluation showed that the skills-lab training did not seem very realistic nor was it very demanding for trainees. To create a more realistic training situation and to enhance students' involvement, case studies and role-plays with defined roles for students (i.e. intern, senior consultant) were introduced into half of the sessions. Results of the evaluation in the second term showed that sessions with role-playing were rated significantly higher than sessions without role-playing.

  15. Application of the Probabilistic Dynamic Synthesis Method to the Analysis of a Realistic Structure

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Ferri, Aldo A.

    1998-01-01

    The Probabilistic Dynamic Synthesis method is a new technique for obtaining the statistics of a desired response engineering quantity for a structure with non-deterministic parameters. The method uses measured data from modal testing of the structure as the input random variables, rather than more "primitive" quantities like geometry or material variation. This modal information is much more comprehensive and easily measured than the "primitive" information. The probabilistic analysis is carried out using either response surface reliability methods or Monte Carlo simulation. A previous work verified the feasibility of the PDS method on a simple seven degree-of-freedom spring-mass system. In this paper, extensive issues involved with applying the method to a realistic three-substructure system are examined, and free and forced response analyses are performed. The results from using the method are promising, especially when the lack of alternatives for obtaining quantitative output for probabilistic structures is considered.

  16. Automatic Perceptual Color Map Generation for Realistic Volume Visualization

    PubMed Central

    Silverstein, Jonathan C.; Parsad, Nigel M.; Tsirline, Victor

    2008-01-01

    Advances in computed tomography imaging technology and inexpensive high-performance computer graphics hardware are making high-resolution, full color (24-bit) volume visualizations commonplace. However, many of the color maps used in volume rendering provide questionable value in knowledge representation and are non-perceptual, thus biasing data analysis or even obscuring information. These drawbacks, coupled with our need for realistic anatomical volume rendering for teaching and surgical planning, have motivated us to explore the auto-generation of color maps that combine natural colorization with the perceptual discriminating capacity of grayscale. As evidenced by the examples shown, which were created by the algorithm described, the merging of perceptually accurate and realistically colorized virtual anatomy appears to insightfully interpret and impartially enhance volume-rendered patient data. PMID:18430609

  17. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    NASA Astrophysics Data System (ADS)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for in vivo cancer imaging. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The proposed PET segmentation strategies were validated under ideal conditions (e.g., in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV that is feasible in clinical practice, and 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ phantom. Validation of the method was performed both in ideal (e.g., in spherical objects with uniform radioactivity concentration) and non-ideal (e.g., in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g., with irregular shape and non-homogeneous uptake) consisted of the combined use of commercially available anthropomorphic phantoms and irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
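
    As a generic, hedged illustration of the two components named above (a k-means background estimate feeding a background-corrected threshold), the sketch below segments a synthetic uptake volume. The threshold fraction and voxel size are placeholders; in the paper the method is calibrated on the NEMA IQ phantom, which this sketch does not reproduce.

```python
# Hedged sketch of the general idea: estimate the local background with a 2-class
# k-means over a volume of interest, then segment the lesion with a background-
# corrected threshold T = frac*(max - bg) + bg. The fraction (here 0.4) would in
# practice be calibrated; everything below is a generic illustration only.
import numpy as np

def kmeans_1d(values, n_iter=50):
    """Two-cluster 1-D k-means; returns (low_mean, high_mean)."""
    lo, hi = float(values.min()), float(values.max())
    for _ in range(n_iter):
        assign_hi = np.abs(values - hi) < np.abs(values - lo)
        new_lo, new_hi = values[~assign_hi].mean(), values[assign_hi].mean()
        if np.isclose(new_lo, lo) and np.isclose(new_hi, hi):
            break
        lo, hi = new_lo, new_hi
    return lo, hi

def segment_mtv(voi, frac=0.4, voxel_volume_ml=0.064):
    """Return (mask, metabolic volume in mL) for an uptake volume of interest."""
    bg, _ = kmeans_1d(voi.ravel())              # low cluster ~ background uptake
    threshold = frac * (voi.max() - bg) + bg    # background-corrected threshold
    mask = voi >= threshold
    return mask, mask.sum() * voxel_volume_ml

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    voi = rng.normal(1.0, 0.1, size=(32, 32, 32))            # background uptake
    voi[12:20, 12:20, 12:20] += 6.0                           # synthetic "lesion"
    mask, mtv = segment_mtv(voi)
    print(f"estimated MTV = {mtv:.1f} mL over {mask.sum()} voxels")
```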

  18. Probabilities and statistics for backscatter estimates obtained by a scatterometer

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    Methods for the recovery of winds near the surface of the ocean from measurements of the normalized radar backscattering cross section must recognize and make use of the statistics (i.e., the sampling variability) of the backscatter measurements. Radar backscatter values from a scatterometer are random variables with expected values given by a model. A model relates backscatter to properties of the waves on the ocean, which are in turn generated by the winds in the atmospheric marine boundary layer. The effective wind speed and direction at a known height for a neutrally stratified atmosphere are the values to be recovered from the model. The probability density function for the backscatter values is a normal probability distribution with the notable feature that the variance is a known function of the expected value. The sources of signal variability, the effects of this variability on the wind speed estimation, and criteria for the acceptance or rejection of models are discussed. A modified maximum likelihood method for estimating wind vectors is described. Ways to make corrections for the kinds of errors found for the Seasat SASS model function are described, and applications to a new scatterometer are given.
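
    The sketch below illustrates the maximum-likelihood idea summarized above: backscatter observations are modeled as Gaussian with a variance that is a known function of the expected value, and the wind vector is chosen to minimize the resulting negative log-likelihood. The toy model function and the variance coefficient are hypothetical placeholders, not the SASS model function discussed in the abstract.

```python
# Hedged sketch of maximum-likelihood wind retrieval from backscatter: measurements are
# Gaussian with variance a known function of the mean, and the negative log-likelihood
# is minimized over a grid of candidate wind speeds and directions. `sigma0_model` and
# `kp` are hypothetical placeholders, not a real geophysical model function.
import numpy as np

def sigma0_model(speed, chi):
    """Toy model function: backscatter vs wind speed and relative azimuth chi [rad]."""
    return 1e-3 * speed**1.6 * (1.0 + 0.4 * np.cos(2.0 * chi) + 0.2 * np.cos(chi))

def neg_log_likelihood(speed, direction, obs, antenna_azimuths, kp=0.1):
    chi = direction - antenna_azimuths
    mean = sigma0_model(speed, chi)
    var = (kp * mean) ** 2                    # variance is a known function of the mean
    return np.sum((obs - mean) ** 2 / var + np.log(var))

def estimate_wind(obs, antenna_azimuths):
    speeds = np.linspace(2.0, 25.0, 120)
    directions = np.radians(np.arange(0.0, 360.0, 2.0))
    best = (np.inf, None, None)
    for u in speeds:
        for d in directions:
            nll = neg_log_likelihood(u, d, obs, antenna_azimuths)
            if nll < best[0]:
                best = (nll, u, d)
    return best[1], np.degrees(best[2])

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    azimuths = np.radians([0.0, 45.0, 90.0, 135.0])          # assumed antenna looks
    true_speed, true_dir = 12.0, np.radians(60.0)
    truth = sigma0_model(true_speed, true_dir - azimuths)
    obs = truth * (1.0 + 0.1 * rng.standard_normal(truth.shape))
    u, d = estimate_wind(obs, azimuths)
    print(f"retrieved wind: {u:.1f} m/s, {d:.0f} deg (truth 12.0 m/s, 60 deg)")
```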

  19. Realistic Simulations of Coronagraphic Observations with Future Space Telescopes

    NASA Astrophysics Data System (ADS)

    Rizzo, M. J.; Roberge, A.; Lincowski, A. P.; Zimmerman, N. T.; Juanola-Parramon, R.; Pueyo, L.; Hu, M.; Harness, A.

    2017-11-01

    We present a framework to simulate realistic observations of future space-based coronagraphic instruments. This gathers state-of-the-art scientific and instrumental expertise allowing robust characterization of future instrument concepts.

  20. Assessing the performance of dynamical trajectory estimates

    NASA Astrophysics Data System (ADS)

    Bröcker, Jochen

    2014-06-01

    Estimating trajectories and parameters of dynamical systems from observations is a problem frequently encountered in various branches of science; geophysicists for example refer to this problem as data assimilation. Unlike in estimation problems with exchangeable observations, in data assimilation the observations cannot easily be divided into separate sets for estimation and validation; this creates serious problems, since simply using the same observations for estimation and validation might result in overly optimistic performance assessments. To circumvent this problem, a result is presented which allows us to estimate this optimism, thus allowing for a more realistic performance assessment in data assimilation. The presented approach becomes particularly simple for data assimilation methods employing a linear error feedback (such as synchronization schemes, nudging, incremental 3DVAR and 4DVar, and various Kalman filter approaches). Numerical examples considering a high gain observer confirm the theory.

  1. Assessing the performance of dynamical trajectory estimates.

    PubMed

    Bröcker, Jochen

    2014-06-01

    Estimating trajectories and parameters of dynamical systems from observations is a problem frequently encountered in various branches of science; geophysicists for example refer to this problem as data assimilation. Unlike in estimation problems with exchangeable observations, in data assimilation the observations cannot easily be divided into separate sets for estimation and validation; this creates serious problems, since simply using the same observations for estimation and validation might result in overly optimistic performance assessments. To circumvent this problem, a result is presented which allows us to estimate this optimism, thus allowing for a more realistic performance assessment in data assimilation. The presented approach becomes particularly simple for data assimilation methods employing a linear error feedback (such as synchronization schemes, nudging, incremental 3DVAR and 4DVar, and various Kalman filter approaches). Numerical examples considering a high gain observer confirm the theory.

  2. Shutterless solution for simultaneous focal plane array temperature estimation and nonuniformity correction in uncooled long-wave infrared camera.

    PubMed

    Cao, Yanpeng; Tisse, Christel-Loic

    2013-09-01

    In uncooled long-wave infrared (LWIR) microbolometer imaging systems, temperature fluctuations of the focal plane array (FPA) result in thermal drift and spatial nonuniformity. In this paper, we present a novel approach based on single-image processing to simultaneously estimate temperature variations of the FPA and compensate for the resulting temperature-dependent nonuniformity. Through well-controlled thermal calibrations, empirical behavioral models are derived to characterize the relationship between the microbolometer responses and FPA temperature variations. Then, under the assumption that strong dependency exists between spatially adjacent pixels, we estimate the optimal FPA temperature so as to minimize the global intensity variance across the entire thermal infrared image. We use the estimated FPA temperature to infer an appropriate nonuniformity correction (NUC) profile. The performance and robustness of the proposed temperature-adaptive NUC method are evaluated on realistic IR images obtained by a 640 × 512 pixel uncooled LWIR microbolometer imaging system operating in a significantly changing temperature environment.
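
    A hedged sketch of the single-image strategy described above: assuming per-pixel calibration coefficients that express the fixed-pattern offset as a function of FPA temperature, candidate temperatures are scanned and the one whose correction minimizes the global intensity variance of the frame is kept. The quadratic per-pixel model and all numeric values are assumptions, not the paper's calibrated behavioral model.

```python
# Hedged sketch: per-pixel offset modeled as a quadratic function of FPA temperature
# (hypothetical calibration), FPA temperature estimated by minimizing the spatial
# variance of the corrected frame. Illustration only, not the paper's model.
import numpy as np

def correct(frame, a, b, c, t_fpa):
    """Remove the modeled fixed-pattern offset for a candidate FPA temperature."""
    return frame - (a + b * t_fpa + c * t_fpa**2)

def estimate_fpa_temperature(frame, a, b, c, t_candidates):
    variances = [np.var(correct(frame, a, b, c, t)) for t in t_candidates]
    best = int(np.argmin(variances))
    return t_candidates[best], correct(frame, a, b, c, t_candidates[best])

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    shape = (128, 160)
    # Hypothetical calibration coefficients (would come from thermal-chamber data)
    a = rng.normal(0.0, 5.0, shape)
    b = rng.normal(1.0, 0.2, shape)
    c = rng.normal(0.0, 0.01, shape)
    scene = 300.0 + 2.0 * rng.standard_normal(shape)    # scene content
    true_t = 32.5                                       # true FPA temperature [deg C]
    raw = scene + a + b * true_t + c * true_t**2        # raw frame with fixed pattern
    t_est, corrected = estimate_fpa_temperature(raw, a, b, c, np.arange(20.0, 45.0, 0.1))
    print(f"estimated FPA temperature: {t_est:.1f} C (true {true_t} C), "
          f"residual non-uniformity std: {np.std(corrected - scene):.3f}")
```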

  3. The effect of a realistic thermal diffusivity on numerical model of a subducting slab

    NASA Astrophysics Data System (ADS)

    Maierova, P.; Steinle-Neumann, G.; Cadek, O.

    2010-12-01

    A number of numerical studies of subducting slabs assume simplified (constant or only depth-dependent) models of thermal conductivity. The available mineral physics data indicate, however, that thermal diffusivity is strongly temperature- and pressure-dependent and may also vary among different mantle materials. In the present study, we examine the influence of realistic thermal properties of mantle materials on the thermal state of the upper mantle and the dynamics of subducting slabs. On the basis of data published in the mineral physics literature we compile analytical relationships that approximate the pressure and temperature dependence of thermal diffusivity for major mineral phases of the mantle (olivine, wadsleyite, ringwoodite, garnet, clinopyroxenes, stishovite and perovskite). We propose a simplified composition of the mineral assemblages predominating in the subducting slab and the surrounding mantle (pyrolite, mid-ocean ridge basalt, harzburgite) and estimate their thermal diffusivity using the Hashin-Shtrikman bounds. The resulting complex formula for the diffusivity of each aggregate is then approximated by a simpler analytical relationship that is used in our numerical model as an input parameter. For the numerical modeling we use the Elmer software (open source finite element software for multiphysical problems, see http://www.csc.fi/english/pages/elmer). We set up a 2D Cartesian thermo-mechanical steady-state model of a subducting slab. The model is partly kinematic, as the flow is driven by a boundary condition on velocity that is prescribed on the top of the subducting lithospheric plate. The rheology of the material is non-linear and is coupled with the thermal equation. Using the realistic relationship for the thermal diffusivity of mantle materials, we compute the thermal and flow fields for different input velocities and ages of the subducting plate and compare the results against models assuming a constant thermal diffusivity. The importance of the
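
    The Hashin-Shtrikman step mentioned above can be illustrated with the standard two-phase bounds on an effective transport property; the sketch below evaluates them for an assumed two-phase aggregate and uses the mean of the bounds as a single estimate. The conductivity values and volume fractions are placeholders, not the mineral-physics data compiled by the authors.

```python
# Hedged sketch of Hashin-Shtrikman bounds for the effective conductivity of a
# two-phase isotropic aggregate; the phase values and fractions are illustrative
# placeholders, not the values used by the authors.
def hashin_shtrikman_bounds(k1, f1, k2, f2):
    """Lower/upper HS bounds on effective conductivity of a two-phase isotropic mix."""
    assert abs(f1 + f2 - 1.0) < 1e-9
    k_lo, k_hi = (k1, k2) if k1 <= k2 else (k2, k1)
    f_lo, f_hi = (f1, f2) if k1 <= k2 else (f2, f1)
    lower = k_lo + f_hi / (1.0 / (k_hi - k_lo) + f_lo / (3.0 * k_lo))
    upper = k_hi + f_lo / (1.0 / (k_lo - k_hi) + f_hi / (3.0 * k_hi))
    return lower, upper

if __name__ == "__main__":
    # e.g. a 60/40 mixture of two phases with conductivities 4.5 and 2.0 W/(m K)
    lo, hi = hashin_shtrikman_bounds(4.5, 0.6, 2.0, 0.4)
    print(f"HS bounds: {lo:.2f} - {hi:.2f} W/(m K), mean estimate {0.5 * (lo + hi):.2f}")
```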

  4. Entrepreneurial Education: A Realistic Alternative for Women and Minorities.

    ERIC Educational Resources Information Center

    Steward, James F.; Boyd, Daniel R.

    1989-01-01

    Entrepreneurial education is a valid, realistic occupational training alternative for minorities and women in business. Entrepreneurship requires that one become involved with those educational programs that contribute significantly to one's success. (Author)

  5. Does therapeutic writing help people with long-term conditions? Systematic review, realist synthesis and economic considerations.

    PubMed

    Nyssen, Olga P; Taylor, Stephanie J C; Wong, Geoff; Steed, Elizabeth; Bourke, Liam; Lord, Joanne; Ross, Carol A; Hayman, Sheila; Field, Victoria; Higgins, Ailish; Greenhalgh, Trisha; Meads, Catherine

    2016-04-01

    Writing therapy to improve physical or mental health can take many forms. The most researched model of therapeutic writing (TW) is unfacilitated, individual expressive writing (written emotional disclosure). Facilitated writing activities are less widely researched. The objectives were to review the clinical effectiveness and cost-effectiveness of TW for people with long-term conditions (LTCs) compared with no writing, or other controls, reporting any relevant clinical outcomes, and to conduct a realist synthesis to understand how TW might work, and for whom. Databases, including MEDLINE, EMBASE, PsycINFO, Linguistics and Language Behaviour Abstracts, Allied and Complementary Medicine Database and Cumulative Index to Nursing and Allied Health Literature, were searched from inception to March 2013 (updated January 2015). Four TW practitioners provided expert advice. Study procedures were conducted by one reviewer and checked by a second. Randomised controlled trials (RCTs) and non-randomised comparative studies were included. Quality was appraised using the Cochrane risk-of-bias tool. Unfacilitated and facilitated TW studies were analysed separately under International Classification of Diseases, Tenth Revision chapter headings. Meta-analyses were performed where possible using RevMan version 5.2.6 (RevMan 2012, The Cochrane Collaboration, The Nordic Cochrane Centre, Copenhagen, Denmark). Costs were estimated from a UK NHS perspective and three cost-consequence case studies were prepared. Realist synthesis followed Realist and Meta-narrative Evidence Synthesis: Evolving Standards guidelines. From 14,658 unique citations, 284 full-text papers were reviewed and 64 studies (59 RCTs) were included in the final effectiveness reviews. Five studies examined facilitated TW; these were extremely heterogeneous with unclear or high risk of bias but suggested that facilitated TW interventions may be beneficial in individual LTCs. Unfacilitated expressive writing was examined in 59 studies of variable

  6. [Investigation into the formation of proportions of "realistic thinking vs magical thinking" in paranoid schizophrenia].

    PubMed

    Jarosz, M; Pankiewicz, Z; Buczek, I; Poprawska, I; Rojek, J; Zaborowski, A

    1993-01-01

    Both magical thinking among healthy persons and magical and symbolic thinking in schizophrenia were discussed. The investigation covered 100 paranoid schizophrenics, who were also examined with respect to the formation of the remaining three proportions. Both "realistic thinking" and "magical thinking" scales were used. An ability to think realistically was preserved, to a varying degree, in all patients, with 50% of those examined having shown an explicit or very explicit ability to follow realistic thinking. These findings deviate from a simplified cognitive model within the discussed range. It was further confirmed that realistic thinking may coexist with magical thinking and, in some cases, concerns the same events. This type of disorder of thought content is referred to as magical-realistic interpenetration. The results, and particularly the high coefficient of negative correlation within the scales of the examined proportions, confirm the correctness of the assumption that the investigated modes of thinking form an antithetic bipolarity of proportions, aggregating antithetic values, and are therefore also complementary.

  7. A realistic treatment of geomagnetic Cherenkov radiation from cosmic ray air showers

    NASA Astrophysics Data System (ADS)

    Werner, Klaus; de Vries, Krijn D.; Scholten, Olaf

    2012-09-01

    We present a macroscopic calculation of coherent electromagnetic radiation from air showers initiated by ultra-high energy cosmic rays, based on currents obtained from three-dimensional Monte Carlo simulations of air showers in a realistic geomagnetic field. We discuss the importance of a correct treatment of the index of refraction in air, given by the law of Gladstone and Dale, which affects the pulses enormously for certain configurations compared to a simplified treatment using a constant index. We predict in particular a geomagnetic Cherenkov radiation, which provides strong signals at high frequencies (GHz) for certain geometries, together with "normal radiation" from the shower maximum, leading to a double-peak structure in the frequency spectrum. We also provide some information about the numerical procedures, referred to as EVA 1.0.
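
    A small sketch of the Gladstone-Dale ingredient highlighted above: air refractivity (n − 1) is taken proportional to density, here with an isothermal exponential atmosphere, and the Cherenkov angle for an ultra-relativistic particle is evaluated at a few altitudes. The sea-level refractivity and scale height are approximate textbook values, not the parameters of the EVA code.

```python
# Hedged sketch: Gladstone-Dale refractivity (n - 1 proportional to density) with an
# isothermal exponential atmosphere, and the resulting Cherenkov angle. The sea-level
# refractivity and scale height are approximate textbook values, for illustration only.
import math

N0_MINUS_1 = 2.9e-4        # approximate sea-level refractivity of dry air
SCALE_HEIGHT_M = 8000.0    # isothermal density scale height [m]

def refractive_index(h_m):
    """Gladstone-Dale: n - 1 proportional to density(h)."""
    return 1.0 + N0_MINUS_1 * math.exp(-h_m / SCALE_HEIGHT_M)

def cherenkov_angle_deg(h_m, beta=1.0):
    """Cherenkov angle from cos(theta) = 1/(n*beta)."""
    cos_theta = min(1.0, 1.0 / (refractive_index(h_m) * beta))
    return math.degrees(math.acos(cos_theta))

if __name__ == "__main__":
    for h in (0.0, 4000.0, 10000.0):
        print(f"h = {h / 1000:4.0f} km: n - 1 = {refractive_index(h) - 1:.2e}, "
              f"Cherenkov angle = {cherenkov_angle_deg(h):.2f} deg")
```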

  8. Protocol - realist and meta-narrative evidence synthesis: Evolving Standards (RAMESES)

    PubMed Central

    2011-01-01

    Background There is growing interest in theory-driven, qualitative and mixed-method approaches to systematic review as an alternative to (or to extend and supplement) conventional Cochrane-style reviews. These approaches offer the potential to expand the knowledge base in policy-relevant areas - for example by explaining the success, failure or mixed fortunes of complex interventions. However, the quality of such reviews can be difficult to assess. This study aims to produce methodological guidance, publication standards and training resources for those seeking to use the realist and/or meta-narrative approach to systematic review. Methods/design We will: [a] collate and summarise existing literature on the principles of good practice in realist and meta-narrative systematic review; [b] consider the extent to which these principles have been followed by published and in-progress reviews, thereby identifying how rigour may be lost and how existing methods could be improved; [c] using an online Delphi method with an interdisciplinary panel of experts from academia and policy, produce a draft set of methodological steps and publication standards; [d] produce training materials with learning outcomes linked to these steps; [e] pilot these standards and training materials prospectively on real reviews-in-progress, capturing methodological and other challenges as they arise; [f] synthesise expert input, evidence review and real-time problem analysis into more definitive guidance and standards; [g] disseminate outputs to audiences in academia and policy. The outputs of the study will be threefold: 1. Quality standards and methodological guidance for realist and meta-narrative reviews for use by researchers, research sponsors, students and supervisors 2. A 'RAMESES' (Realist and Meta-review Evidence Synthesis: Evolving Standards) statement (comparable to CONSORT or PRISMA) of publication standards for such reviews, published in an open-access academic journal. 3. A

  9. Realistic micromechanical modeling and simulation of two-phase heterogeneous materials

    NASA Astrophysics Data System (ADS)

    Sreeranganathan, Arun

    This dissertation research focuses on micromechanical modeling and simulations of two-phase heterogeneous materials exhibiting anisotropic and non-uniform microstructures with long-range spatial correlations. Completed work involves development of methodologies for realistic micromechanical analyses of materials using a combination of stereological techniques, two- and three-dimensional digital image processing, and finite element based modeling tools. The methodologies are developed via its applications to two technologically important material systems, namely, discontinuously reinforced aluminum composites containing silicon carbide particles as reinforcement, and boron modified titanium alloys containing in situ formed titanium boride whiskers. Microstructural attributes such as the shape, size, volume fraction, and spatial distribution of the reinforcement phase in these materials were incorporated in the models without any simplifying assumptions. Instrumented indentation was used to determine the constitutive properties of individual microstructural phases. Micromechanical analyses were performed using realistic 2D and 3D models and the results were compared with experimental data. Results indicated that 2D models fail to capture the deformation behavior of these materials and 3D analyses are required for realistic simulations. The effect of clustering of silicon carbide particles and associated porosity on the mechanical response of discontinuously reinforced aluminum composites was investigated using 3D models. Parametric studies were carried out using computer simulated microstructures incorporating realistic microstructural attributes. The intrinsic merit of this research is the development and integration of the required enabling techniques and methodologies for representation, modeling, and simulations of complex geometry of microstructures in two- and three-dimensional space facilitating better understanding of the effects of microstructural geometry

  10. Three-dimensional reconstruction from multiple reflected views within a realist painting: an application to Scott Fraser's "Three way vanitas"

    NASA Astrophysics Data System (ADS)

    Smith, Brandon M.; Stork, David G.; Zhang, Li

    2009-01-01

    The problem of reconstructing a three-dimensional scene from single or multiple views has been thoroughly studied in the computer vision literature, and recently has been applied to problems in the history of art. Criminisi pioneered the application of single-view metrology to reconstructing the fictive spaces in Renaissance paintings, such as the vault in Masaccio's Trinità and the plaza in Piero della Francesca's Flagellazione. While the vast majority of realist paintings provide but a single view, some provide multiple views, through mirrors depicted within their tableaus. The contemporary American realist Scott Fraser's Three way vanitas is a highly realistic still-life containing three mirrors; each mirror provides a new view of the objects in the tableau. We applied multiple-view reconstruction methods to the direct image and the images reflected by these mirrors to reconstruct the three-dimensional tableau. Our methods estimate virtual viewpoints for each view using the geometric constraints provided by the direct view of the mirror frames, along with the reflected images themselves. Moreover, our methods automatically discover inconsistencies between the different views, including ones that might elude careful scrutiny by eye, for example the fact that the height of the water in the glass differs between the direct view and that in the mirror at the right. We believe our work provides the first application of multiple-view reconstruction to a single painting and will have application to other paintings and questions in the history of art.

  11. Estimated Viscosities and Thermal Conductivities of Gases at High Temperatures

    NASA Technical Reports Server (NTRS)

    Svehla, Roger A.

    1962-01-01

    Viscosities and thermal conductivities, suitable for heat-transfer calculations, were estimated for about 200 gases in the ground state from 100 to 5000 K and 1-atmosphere pressure. Free radicals were included, but excited states and ions were not. Calculations for the transport coefficients were based upon the Lennard-Jones (12-6) potential for all gases. This potential was selected because: (1) It is one of the most realistic models available and (2) intermolecular force constants can be estimated from physical properties or by other techniques when experimental data are not available; such methods for estimating force constants are not as readily available for other potentials. When experimental viscosity data were available, they were used to obtain the force constants; otherwise the constants were estimated. These constants were then used to calculate both the viscosities and thermal conductivities tabulated in this report. For thermal conductivities of polyatomic gases an Eucken-type correction was made to correct for exchange between internal and translational energies. Though this correction may be rather poor at low temperatures, it becomes more satisfactory with increasing temperature. It was not possible to obtain force constants from experimental thermal conductivity data except for the inert atoms, because most conductivity data are available at low temperatures only (200 to 400 K), the temperature range where the Eucken correction is probably most in error. However, if the same set of force constants is used for both viscosity and thermal conductivity, there is a large degree of cancellation of error when these properties are used in heat-transfer equations such as the Dittus-Boelter equation. It is therefore concluded that the properties tabulated in this report are suitable for heat-transfer calculations of gaseous systems.
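
    The report's approach can be sketched with the first Chapman-Enskog approximation for viscosity from Lennard-Jones (12-6) force constants, using the Neufeld fit of the reduced collision integral, plus an Eucken-type correction for thermal conductivity. The force constants and heat capacity below are illustrative textbook values for N2, not the report's tabulated set.

```python
# Hedged sketch: Chapman-Enskog viscosity from Lennard-Jones (12-6) force constants
# (Neufeld et al. fit of the reduced collision integral) and an Eucken-type correction
# for thermal conductivity. LJ parameters and heat capacity are illustrative N2 values.
import math

def collision_integral_mu(t_star):
    """Neufeld et al. fit of the (2,2) reduced collision integral for LJ (12-6)."""
    return (1.16145 / t_star**0.14874
            + 0.52487 * math.exp(-0.77320 * t_star)
            + 2.16178 * math.exp(-2.43787 * t_star))

def lj_viscosity(temp_k, molar_mass_g, sigma_angstrom, eps_over_k):
    """First Chapman-Enskog approximation of viscosity [Pa s]."""
    omega = collision_integral_mu(temp_k / eps_over_k)
    return 2.6693e-6 * math.sqrt(molar_mass_g * temp_k) / (sigma_angstrom**2 * omega)

def eucken_conductivity(mu_pa_s, molar_mass_g, cp_j_per_mol_k):
    """Eucken-type correction for a polyatomic gas: k = (Cp + 1.25 R) * mu / M."""
    r_gas = 8.314
    return (cp_j_per_mol_k + 1.25 * r_gas) * mu_pa_s / (molar_mass_g * 1e-3)

if __name__ == "__main__":
    # Illustrative LJ parameters for N2: sigma ~ 3.667 A, eps/k ~ 99.8 K, Cp ~ 29.1 J/(mol K)
    for temp in (300.0, 1000.0, 3000.0):
        mu = lj_viscosity(temp, 28.0134, 3.667, 99.8)
        k = eucken_conductivity(mu, 28.0134, 29.1)
        print(f"T = {temp:6.0f} K: mu = {mu * 1e6:7.2f} uPa s, k = {k:6.4f} W/(m K)")
```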

  12. Visual difference metric for realistic image synthesis

    NASA Astrophysics Data System (ADS)

    Bolin, Mark R.; Meyer, Gary W.

    1999-05-01

    An accurate and efficient model of human perception has been developed to control the placement of samples in a realistic image synthesis algorithm. Previous sampling techniques have sought to spread the error equally across the image plane. However, this approach neglects the fact that the renderings are intended to be displayed for a human observer. The human visual system has a varying sensitivity to error that is based upon the viewing context. This means that equivalent optical discrepancies can be very obvious in one situation and imperceptible in another. It is ultimately the perceptibility of this error that governs image quality and should be used as the basis of a sampling algorithm. This paper focuses on a simplified version of the Lubin Visual Discrimination Metric (VDM) that was developed for insertion into an image synthesis algorithm. The sampling VDM makes use of a Haar wavelet basis for the cortical transform and a less severe spatial pooling operation. The model was extended for color, including the effects of chromatic aberration. Comparisons are made between the execution times and visual difference maps for the original Lubin and simplified visual difference metrics. Results for the realistic image synthesis algorithm are also presented.

  13. Radiation-Spray Coupling for Realistic Flow Configurations

    NASA Technical Reports Server (NTRS)

    El-Asrag, Hossam; Iannetti, Anthony C.

    2011-01-01

    Three Large Eddy Simulations (LES) of a lean-direct-injection (LDI) combustor are performed and compared. In addition to the cold flow simulation, the effect of radiation coupling with the multi-physics reactive flow is analyzed. The flamelet progress variable approach is used as a subgrid combustion model, combined with a stochastic subgrid model for spray atomization and an optically thin radiation model. For accurate chemistry modeling, a detailed Jet-A surrogate mechanism is utilized. To achieve realistic inflow, a simple recycling technique is performed at the inflow section upstream of the swirler. Good agreement is shown with the experimental mean and root-mean-square profiles. The effect of combustion is found to change the shape and size of the central recirculation zone. Radiation is found to change the spray dynamics and atomization by changing the heat release distribution and the local temperature values impacting the evaporation process. The simulation with radiation modeling shows a wider droplet size distribution by altering the evaporation rate. The current study demonstrates the importance of radiation modeling for accurate prediction in realistic spray combustion configurations, even for low-pressure systems.

  14. Estimating Evaporative Fraction From Readily Obtainable Variables in Mangrove Forests of the Everglades, U.S.A.

    NASA Technical Reports Server (NTRS)

    Yagci, Ali Levent; Santanello, Joseph A.; Jones, John; Barr, Jordan

    2017-01-01

    A remote-sensing-based model to estimate evaporative fraction (EF) – the ratio of latent heat (LE; the energy equivalent of evapotranspiration, ET) to total available energy – from easily obtainable remotely-sensed and meteorological parameters is presented. This research specifically addresses the shortcomings of existing ET retrieval methods, such as calibration requirements of extensive accurate in situ micro-meteorological and flux tower observations, or of a large set of coarse-resolution or model-derived input datasets. The trapezoid model is capable of generating spatially varying EF maps from standard products such as land surface temperature (Ts), normalized difference vegetation index (NDVI), and daily maximum air temperature (Ta). The 2009 model results were validated at an eddy-covariance tower (Fluxnet ID: US-Skr) in the Everglades using Ts and NDVI products from Landsat as well as the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors. Results indicate that the model accuracy is within the range of instrument uncertainty, and is dependent on the spatial resolution and selection of end-members (i.e., wet/dry edge). The most accurate results were achieved with the Ts from Landsat relative to the Ts from the MODIS flown on the Terra and Aqua platforms, due to the fine spatial resolution of Landsat (30 m). The bias, mean absolute percentage error and root mean square percentage error were as low as 2.9% (3.0%), 9.8% (13.3%), and 12.1% (16.1%) for Landsat-based (MODIS-based) EF estimates, respectively. Overall, this methodology shows promise for bridging the gap between temporally limited ET estimates at Landsat scales and more complex and difficult-to-constrain global ET remote-sensing models.

  15. Estimating evaporative fraction from readily obtainable variables in mangrove forests of the Everglades, U.S.A.

    USGS Publications Warehouse

    Yagci, Ali Levent; Santanello, Joseph A.; Jones, John W.; Barr, Jordan G.

    2017-01-01

    A remote-sensing-based model to estimate evaporative fraction (EF) – the ratio of latent heat (LE; energy equivalent of evapotranspiration –ET–) to total available energy – from easily obtainable remotely-sensed and meteorological parameters is presented. This research specifically addresses the shortcomings of existing ET retrieval methods such as calibration requirements of extensive accurate in situ micrometeorological and flux tower observations or of a large set of coarse-resolution or model-derived input datasets. The trapezoid model is capable of generating spatially varying EF maps from standard products such as land surface temperature (Ts), normalized difference vegetation index (NDVI), and daily maximum air temperature (Ta). The 2009 model results were validated at an eddy-covariance tower (Fluxnet ID: US-Skr) in the Everglades using Ts and NDVI products from Landsat as well as the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors. Results indicate that the model accuracy is within the range of instrument uncertainty, and is dependent on the spatial resolution and selection of end-members (i.e. wet/dry edge). The most accurate results were achieved with the Ts from Landsat relative to the Ts from the MODIS flown on the Terra and Aqua platforms due to the fine spatial resolution of Landsat (30 m). The bias, mean absolute percentage error and root mean square percentage error were as low as 2.9% (3.0%), 9.8% (13.3%), and 12.1% (16.1%) for Landsat-based (MODIS-based) EF estimates, respectively. Overall, this methodology shows promise for bridging the gap between temporally limited ET estimates at Landsat scales and more complex and difficult-to-constrain global ET remote-sensing models.
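
    The trapezoid idea in the two records above can be sketched as follows: for each pixel, EF is the position of its land-surface temperature between a scene-derived dry edge (warmest pixels as a linear function of NDVI) and a wet edge tied here to air temperature. This is a generic illustration of a Ts-NDVI trapezoid, not the exact end-member definitions or calibration used in the paper.

```python
# Hedged sketch of a Ts-NDVI trapezoid: EF = (T_dry(NDVI) - Ts) / (T_dry(NDVI) - T_wet),
# with the dry edge fit to the warmest pixels per NDVI bin and the wet edge assumed
# close to the daily maximum air temperature. Generic illustration only.
import numpy as np

def dry_edge(ts, ndvi, n_bins=20):
    """Fit a line through the maximum Ts in each NDVI bin (the dry edge)."""
    edges = np.linspace(ndvi.min(), ndvi.max(), n_bins + 1)
    centers, maxima = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (ndvi >= lo) & (ndvi < hi)
        if sel.any():
            centers.append(0.5 * (lo + hi))
            maxima.append(ts[sel].max())
    slope, intercept = np.polyfit(centers, maxima, 1)
    return slope, intercept

def evaporative_fraction(ts, ndvi, t_air_max):
    slope, intercept = dry_edge(ts.ravel(), ndvi.ravel())
    t_dry = slope * ndvi + intercept          # per-pixel dry-edge temperature
    t_wet = t_air_max                         # wet edge assumed close to air temperature
    ef = (t_dry - ts) / np.maximum(t_dry - t_wet, 1e-6)
    return np.clip(ef, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(11)
    ndvi = np.clip(rng.normal(0.6, 0.2, (100, 100)), 0.05, 0.95)
    ts = 305.0 - 12.0 * ndvi + rng.normal(0.0, 1.0, ndvi.shape)   # synthetic scene [K]
    ef = evaporative_fraction(ts, ndvi, t_air_max=295.0)
    print(f"mean EF = {ef.mean():.2f}, range {ef.min():.2f} - {ef.max():.2f}")
```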

  16. Towards realistic Holocene land cover scenarios: integration of archaeological, palynological and geomorphological records and comparison to global land cover scenarios.

    NASA Astrophysics Data System (ADS)

    De Brue, Hanne; Verstraeten, Gert; Broothaerts, Nils; Notebaert, Bastiaan

    2016-04-01

    Accurate and spatially explicit landscape reconstructions for distinct time periods in human history are essential for the quantification of the effect of anthropogenic land cover changes on, e.g., global biogeochemical cycles, ecology, and geomorphic processes, and to improve our understanding of the interaction between humans and the environment in general. A long-term perspective covering Mid and Late Holocene land use changes is recommended in this context, as it provides a baseline to evaluate human impact in more recent periods. Previous efforts to assess the evolution and intensity of agricultural land cover in past centuries or millennia have predominantly focused on palynological records. An increasing number of quantitative techniques has been developed during the last two decades to transfer palynological data to land cover estimates. However, these techniques have to deal with equifinality issues and, furthermore, do not sufficiently allow reconstruction of spatial patterns of past land cover. On the other hand, several continental and global databases of historical anthropogenic land cover changes based on estimates of global population and the required agricultural land per capita have been developed in the past decade. However, at such long temporal and spatial scales, reconstruction of past anthropogenic land cover intensities and spatial patterns necessarily involves many uncertainties and assumptions as well. Here, we present a novel approach that combines archaeological, palynological and geomorphological data for the Dijle catchment in the central Belgian Loess Belt in order to arrive at more realistic Holocene land cover histories. Multiple land cover scenarios (>60,000) are constructed using probabilistic rules and used as input into a sediment delivery model (WaTEM/SEDEM). Model outcomes are confronted with a detailed geomorphic dataset on Holocene sediment fluxes and with REVEALS-based estimates of vegetation cover using palynological data from

  17. A time-responsive tool for informing policy making: rapid realist review.

    PubMed

    Saul, Jessie E; Willis, Cameron D; Bitz, Jennifer; Best, Allan

    2013-09-05

    A realist synthesis attempts to provide policy makers with a transferable theory that suggests a certain program is more or less likely to work in certain respects, for particular subjects, in specific kinds of situations. Yet realist reviews can require considerable and sustained investment over time, which does not always suit the time-sensitive demands of many policy decisions. 'Rapid Realist Review' methodology (RRR) has been developed as a tool for applying a realist approach to a knowledge synthesis process in order to produce a product that is useful to policy makers in responding to time-sensitive and/or emerging issues, while preserving the core elements of realist methodology. Using examples from completed RRRs, we describe key features of the RRR methodology, the resources required, and the strengths and limitations of the process. All aspects of an RRR are guided by both a local reference group, and a group of content experts. Involvement of knowledge users and external experts ensures both the usability of the review products, as well as their links to current practice. RRRs have proven useful in providing evidence for and making explicit what is known on a given topic, as well as articulating where knowledge gaps may exist. From the RRRs completed to date, findings broadly adhere to four (often overlapping) classifications: guiding rules for policy-making; knowledge quantification (i.e., the amount of literature available that identifies context, mechanisms, and outcomes for a given topic); understanding tensions/paradoxes in the evidence base; and, reinforcing or refuting beliefs and decisions taken. 'Traditional' realist reviews and RRRs have some key differences, which allow policy makers to apply each type of methodology strategically to maximize its utility within a particular local constellation of history, goals, resources, politics and environment. In particular, the RRR methodology is explicitly designed to engage knowledge users and review

  18. Spatio-Temporal Fluctuations of the Earthquake Magnitude Distribution: Robust Estimation and Predictive Power

    NASA Astrophysics Data System (ADS)

    Olsen, S.; Zaliapin, I.

    2008-12-01

    We establish positive correlation between the local spatio-temporal fluctuations of the earthquake magnitude distribution and the occurrence of regional earthquakes. In order to accomplish this goal, we develop a sequential Bayesian statistical estimation framework for the b-value (slope of the Gutenberg-Richter's exponential approximation to the observed magnitude distribution) and for the ratio a(t) between the earthquake intensities in two non-overlapping magnitude intervals. The time-dependent dynamics of these parameters is analyzed using Markov Chain Models (MCM). The main advantage of this approach over the traditional window-based estimation is its "soft" parameterization, which allows one to obtain stable results with realistically small samples. We furthermore discuss a statistical methodology for establishing lagged correlations between continuous and point processes. The developed methods are applied to the observed seismicity of California, Nevada, and Japan on different temporal and spatial scales. We report an oscillatory dynamics of the estimated parameters, and find that the detected oscillations are positively correlated with the occurrence of large regional earthquakes, as well as with small events with magnitudes as low as 2.5. The reported results have important implications for further development of earthquake prediction and seismic hazard assessment methods.
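
    For comparison with the sequential Bayesian estimator described here, the classical Aki/Utsu maximum-likelihood b-value over a sample of magnitudes can be written in a few lines; the catalogue, the completeness magnitude and the absence of a magnitude-binning correction are simplifying assumptions of this sketch, not part of the study.

        import numpy as np

        def b_value_mle(magnitudes, m_c):
            """Aki (1965) maximum-likelihood b-value for events with M >= m_c.
            Binned catalogues normally need the Utsu half-bin correction, omitted here."""
            m = np.asarray(magnitudes, float)
            m = m[m >= m_c]
            if m.size < 2:
                raise ValueError("not enough events above the completeness magnitude")
            return np.log10(np.e) / (m.mean() - m_c)

        # Toy catalogue drawn from a Gutenberg-Richter distribution with b = 1.
        rng = np.random.default_rng(0)
        mags = 2.5 + rng.exponential(scale=1.0 / np.log(10.0), size=5000)
        print(b_value_mle(mags, m_c=2.5))   # close to 1.0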

  19. Using GPU parallelization to perform realistic simulations of the LPCTrap experiments

    NASA Astrophysics Data System (ADS)

    Fabian, X.; Mauger, F.; Quéméner, G.; Velten, Ph.; Ban, G.; Couratin, C.; Delahaye, P.; Durand, D.; Fabre, B.; Finlay, P.; Fléchard, X.; Liénard, E.; Méry, A.; Naviliat-Cuncic, O.; Pons, B.; Porobic, T.; Severijns, N.; Thomas, J. C.

    2015-11-01

    The LPCTrap setup is a sensitive tool to measure the β–ν angular correlation coefficient, a_βν, which can yield the mixing ratio ρ of a β decay transition. The latter enables the extraction of the Cabibbo-Kobayashi-Maskawa (CKM) matrix element V_ud. In such a measurement, the most relevant observable is the energy distribution of the recoiling daughter nuclei following the nuclear β decay, which is obtained using a time-of-flight technique. In order to maximize the precision, one can reduce the systematic errors through a thorough simulation of the whole set-up, especially with a correct model of the trapped ion cloud. This paper presents such a simulation package and focuses on the ion cloud features; particular attention is therefore paid to realistic descriptions of trapping field dynamics, buffer gas cooling and the N-body space charge effects.

  20. Upper Limb Posture Estimation in Robotic and Virtual Reality-Based Rehabilitation

    PubMed Central

    Cortés, Camilo; Ardanza, Aitor; Molina-Rueda, F.; Cuesta-Gómez, A.; Ruiz, Oscar E.

    2014-01-01

    New motor rehabilitation therapies include virtual reality (VR) and robotic technologies. In limb rehabilitation, limb posture is required to (1) provide a realistic representation of the limb in VR games and (2) assess the patient's improvement. When exoskeleton devices are used in the therapy, the measurements of their joint angles cannot be directly used to represent the posture of the patient limb, since the human and exoskeleton kinematic models differ. In response to this shortcoming, we propose a method to estimate the posture of the human limb attached to the exoskeleton. We use the exoskeleton joint angle measurements and the constraints of the exoskeleton on the limb to estimate the human limb joint angles. This paper presents (a) the mathematical formulation and solution to the problem, (b) the implementation of the proposed solution on a commercial exoskeleton system for upper limb rehabilitation, (c) its integration into a rehabilitation VR game platform, and (d) the quantitative assessment of the method during elbow and wrist analytic training. Results show that this method properly estimates the limb posture to (i) animate avatars that represent the patient in VR games and (ii) obtain kinematic data for the patient assessment during elbow and wrist analytic rehabilitation. PMID:25110698

  1. Estimation of the neural drive to the muscle from surface electromyograms

    NASA Astrophysics Data System (ADS)

    Hofmann, David

    Muscle force is highly correlated with the standard deviation of the surface electromyogram (sEMG) produced by the active muscle. Correctly estimating this quantity for the non-stationary sEMG and understanding its relation to neural drive and muscle force is of paramount importance. The single constituents of the sEMG are called motor unit action potentials, whose biphasic waveforms can interfere (so-called amplitude cancellation), potentially affecting the standard deviation (Keenan et al. 2005). However, when certain conditions are met, the Campbell-Hardy theorem suggests that amplitude cancellation does not affect the standard deviation. By simulation of the sEMG, we verify the applicability of this theorem to myoelectric signals and investigate deviations from its conditions to obtain a more realistic setting. We find no difference in estimated standard deviation with and without interference, standing in stark contrast to previous results (Keenan et al. 2008, Farina et al. 2010). Furthermore, since the theorem provides us with the functional relationship between standard deviation and neural drive, we conclude that complex methods based on high density electrode arrays and blind source separation might not bear substantial advantages for neural drive estimation (Farina and Holobar 2016). Funded by NIH Grant Number 1 R01 EB022872 and NSF Grant Number 1208126.
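
    The quantity at issue, the standard deviation of a superposition of motor unit action potentials, can be checked against Campbell's theorem (variance = firing rate times the integral of the squared waveform) in a toy simulation; the waveform, sampling rate and Poisson firing below are stand-ins chosen for illustration and are much simpler than the sEMG model used in the study.

        import numpy as np

        rng = np.random.default_rng(1)
        fs = 2048.0                               # sampling rate (Hz), assumed
        t = np.arange(0, 0.02, 1.0 / fs)          # 20 ms toy biphasic MUAP
        muap = np.sin(2 * np.pi * 100 * t) * np.exp(-t / 0.005)

        def simulated_semg(rate_hz, duration_s=10.0):
            """Superimpose the toy MUAP at Poisson times; overlapping positive and
            negative phases cancel (amplitude cancellation)."""
            n = int(duration_s * fs)
            signal = np.zeros(n + muap.size)
            for k in rng.integers(0, n, size=rng.poisson(rate_hz * duration_s)):
                signal[k:k + muap.size] += muap
            return signal[:n]

        for rate in (100.0, 400.0, 1600.0):
            semg = simulated_semg(rate)
            campbell = np.sqrt(rate * np.sum(muap ** 2) / fs)   # Campbell prediction
            print(f"rate {rate:6.0f} Hz: empirical std {semg.std():.4f}, Campbell {campbell:.4f}")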

  2. Standardization in software conversion of (ROM) estimating

    NASA Technical Reports Server (NTRS)

    Roat, G. H.

    1984-01-01

    Technical problems and their solutions comprise by far the majority of work involved in space simulation engineering. Fixed price contracts with schedule award fees are becoming more and more prevalent. Accurate estimation of these jobs is critical to maintain costs within limits and to predict realistic contract schedule dates. Computerized estimating may hold the answer to these new problems, though up to now computerized estimating has been complex, expensive, and geared to the business world, not to technical people. The objective of this effort was to provide a simple program on a desk top computer capable of providing a Rough Order of Magnitude (ROM) estimate in a short time. This program is not intended to provide a highly detailed breakdown of costs to a customer, but to provide a number which can be used as a rough estimate on short notice. With more debugging and fine tuning, a more detailed estimate can be made.

  3. Realistic Fireteam Movement in Urban Environments

    DTIC Science & Technology

    2010-10-01

    ...is largely consumed by the data transfer from the GPU to the CPU of the color and stencil buffers. Since this operation would only need to be... The Threat Probability Model update cost (Intel Q6600, Table 4): 1112 waypoints, mean 1.25 ms, std dev 0.09 ms; 3785 waypoints, mean 4.07 ms, std dev 0.20 ms.

  4. Hierarchical Bayesian Approach To Reduce Uncertainty in the Aquatic Effect Assessment of Realistic Chemical Mixtures.

    PubMed

    Oldenkamp, Rik; Hendriks, Harrie W M; van de Meent, Dik; Ragas, Ad M J

    2015-09-01

    Species in the aquatic environment differ in their toxicological sensitivity to the various chemicals they encounter. In aquatic risk assessment, this interspecies variation is often quantified via species sensitivity distributions. Because the information available for the characterization of these distributions is typically limited, optimal use of information is essential to reduce uncertainty involved in the assessment. In the present study, we show that the credibility intervals on the estimated potentially affected fraction of species after exposure to a mixture of chemicals at environmentally relevant surface water concentrations can be extremely wide if a classical approach is followed, in which each chemical in the mixture is considered in isolation. As an alternative, we propose a hierarchical Bayesian approach, in which knowledge on the toxicity of chemicals other than those assessed is incorporated. A case study with a mixture of 13 pharmaceuticals demonstrates that this hierarchical approach results in more realistic estimations of the potentially affected fraction, as a result of reduced uncertainty in species sensitivity distributions for data-poor chemicals.
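
    The shrinkage idea behind the hierarchical approach can be illustrated with a toy normal-normal model: each chemical's species sensitivity distribution (SSD) is log-normal, and its log10 mean is partially pooled towards a prior learned from other chemicals. All numbers and names, and the assumption of a known common spread, are placeholders; the paper's model is a full hierarchical Bayesian analysis across chemicals, not this conjugate sketch.

        import numpy as np
        from scipy.stats import norm

        sigma, mu0, tau = 0.8, 1.0, 0.5            # hypothetical SSD spread and prior (log10 ug/L)
        obs = {"chem_A": [0.2, 0.5, 0.9, 1.4],     # data-rich chemical
               "chem_B": [2.3]}                    # data-poor chemical

        def posterior_mu(y, sigma, mu0, tau):
            """Conjugate normal-normal posterior for one chemical's SSD mean."""
            y = np.asarray(y, float)
            prec = y.size / sigma**2 + 1.0 / tau**2
            mean = (y.sum() / sigma**2 + mu0 / tau**2) / prec
            return mean, np.sqrt(1.0 / prec)

        for name, y in obs.items():
            m, s = posterior_mu(y, sigma, mu0, tau)
            paf = norm.cdf((np.log10(1.0) - m) / sigma)   # affected fraction at 1 ug/L
            print(f"{name}: mu = {m:.2f} +/- {s:.2f}, PAF(1 ug/L) = {paf:.2f}")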

  5. Path integrals with higher order actions: Application to realistic chemical systems

    NASA Astrophysics Data System (ADS)

    Lindoy, Lachlan P.; Huang, Gavin S.; Jordan, Meredith J. T.

    2018-02-01

    Quantum thermodynamic parameters can be determined using path integral Monte Carlo (PIMC) simulations. These simulations, however, become computationally demanding as the quantum nature of the system increases, although their efficiency can be improved by using higher order approximations to the thermal density matrix, specifically the action. Here we compare the standard, primitive approximation to the action (PA) and three higher order approximations, the Takahashi-Imada action (TIA), the Suzuki-Chin action (SCA) and the Chin action (CA). The resulting PIMC methods are applied to two realistic potential energy surfaces, for H2O and HCN-HNC, both of which are spectroscopically accurate and contain three-body interactions. We further numerically optimise, for each potential, the SCA parameter and the two free parameters in the CA, obtaining more significant improvements in efficiency than seen previously in the literature. For both H2O and HCN-HNC, accounting for all required potential and force evaluations, the optimised CA formalism is approximately twice as efficient as the TIA formalism and approximately an order of magnitude more efficient than the PA. The optimised SCA formalism shows similar efficiency gains to the CA for HCN-HNC but has similar efficiency to the TIA for H2O at low temperature. In H2O and HCN-HNC systems, the optimal value of the a1 CA parameter is approximately 1/3, corresponding to an equal weighting of all force terms in the thermal density matrix, and similar to previous studies, the optimal α parameter in the SCA was ~0.31. Importantly, poor choice of parameter significantly degrades the performance of the SCA and CA methods. In particular, for the CA, setting a1 = 0 is not efficient: the reduction in convergence efficiency is not offset by the lower number of force evaluations. We also find that the harmonic approximation to the CA parameters, whilst providing a fourth order approximation to the action, is not optimal for these
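
    For orientation, the primitive approximation factorises each imaginary-time slice, while the Takahashi-Imada action keeps that factorisation but adds a gradient correction to the potential. A commonly quoted single-particle form is sketched below; conventions for the time step and for many-particle systems vary, so this should be read as a schematic rather than the expressions optimised in the paper:

        \[
        e^{-\varepsilon \hat H} \;\approx\; e^{-\varepsilon \hat T}\, e^{-\varepsilon \hat V},
        \qquad
        V_{\mathrm{TIA}}(\mathbf{r}) \;=\; V(\mathbf{r}) + \frac{\varepsilon^{2}\hbar^{2}}{24\,m}\,\bigl|\nabla V(\mathbf{r})\bigr|^{2},
        \qquad
        \varepsilon = \beta/P,
        \]

    where P is the number of imaginary-time slices.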

  6. Estimating uncertainties in complex joint inverse problems

    NASA Astrophysics Data System (ADS)

    Afonso, Juan Carlos

    2016-04-01

    Sources of uncertainty affecting geophysical inversions can be classified either as reflective (i.e. the practitioner is aware of her/his ignorance) or non-reflective (i.e. the practitioner does not know that she/he does not know!). Although we should be always conscious of the latter, the former are the ones that, in principle, can be estimated either empirically (by making measurements or collecting data) or subjectively (based on the experience of the researchers). For complex parameter estimation problems in geophysics, subjective estimation of uncertainty is the most common type. In this context, probabilistic (aka Bayesian) methods are commonly claimed to offer a natural and realistic platform from which to estimate model uncertainties. This is because in the Bayesian approach, errors (whatever their nature) can be naturally included as part of the global statistical model, the solution of which represents the actual solution to the inverse problem. However, although we agree that probabilistic inversion methods are the most powerful tool for uncertainty estimation, the common claim that they produce "realistic" or "representative" uncertainties is not always justified. Typically, ALL UNCERTAINTY ESTIMATES ARE MODEL DEPENDENT, and therefore, besides a thorough characterization of experimental uncertainties, particular care must be paid to the uncertainty arising from model errors and input uncertainties. We recall here two quotes by G. Box and M. Gunzburger, respectively, of special significance for inversion practitioners and for this session: "…all models are wrong, but some are useful" and "computational results are believed by no one, except the person who wrote the code". In this presentation I will discuss and present examples of some problems associated with the estimation and quantification of uncertainties in complex multi-observable probabilistic inversions, and how to address them. Although the emphasis will be on sources of uncertainty related

  7. Hope in Janusz Korczak's Pedagogy of Realistic Idealism

    ERIC Educational Resources Information Center

    Silverman, Marc

    2017-01-01

    This article explores the approach of "Realistic Idealism" to moral education developed by the humanist-progressive moral educator Janusz Korczak, and the role hope plays in it. This pair of terms seems to be an oxymoron. However, their employment is intentional and the article will demonstrate their dialectical interdependence:…

  8. Full Quantum Dynamics Simulation of a Realistic Molecular System Using the Adaptive Time-Dependent Density Matrix Renormalization Group Method.

    PubMed

    Yao, Yao; Sun, Ke-Wei; Luo, Zhen; Ma, Haibo

    2018-01-18

    The accurate theoretical interpretation of ultrafast time-resolved spectroscopy experiments relies on full quantum dynamics simulations for the investigated system, which is nevertheless computationally prohibitive for realistic molecular systems with a large number of electronic and/or vibrational degrees of freedom. In this work, we propose a unitary transformation approach for realistic vibronic Hamiltonians, which can be coped with using the adaptive time-dependent density matrix renormalization group (t-DMRG) method to efficiently evolve the nonadiabatic dynamics of a large molecular system. We demonstrate the accuracy and efficiency of this approach with an example of simulating the exciton dissociation process within an oligothiophene/fullerene heterojunction, indicating that t-DMRG can be a promising method for full quantum dynamics simulation in large chemical systems. Moreover, it is also shown that the proper vibronic features in the ultrafast electronic process can be obtained by simulating the two-dimensional (2D) electronic spectrum by virtue of the high computational efficiency of the t-DMRG method.

  9. Age structure and mortality of walleyes in Kansas reservoirs: Use of mortality caps to establish realistic management objectives

    USGS Publications Warehouse

    Quist, M.C.; Stephen, J.L.; Guy, C.S.; Schultz, R.D.

    2004-01-01

    Age structure, total annual mortality, and mortality caps (maximum mortality thresholds established by managers) were investigated for walleye Sander vitreus (formerly Stizostedion vitreum) populations sampled from eight Kansas reservoirs during 1991-1999. We assessed age structure by examining the relative frequency of different ages in the population; total annual mortality of age-2 and older walleyes was estimated by use of a weighted catch curve. To evaluate the utility of mortality caps, we modeled threshold values of mortality by varying growth rates and management objectives. Estimated mortality thresholds were then compared with observed growth and mortality rates. The maximum age of walleyes varied from 5 to 11 years across reservoirs. Age structure was dominated (≥72%) by walleyes age 3 and younger in all reservoirs, corresponding to ages that were not yet vulnerable to harvest. Total annual mortality rates varied from 40.7% to 59.5% across reservoirs and averaged 51.1% overall (SE = 2.3). Analysis of mortality caps indicated that a management objective of 500 mm for the mean length of walleyes harvested by anglers was realistic for all reservoirs with a 457-mm minimum length limit but not for those with a 381-mm minimum length limit. For a 500-mm mean length objective to be realized for reservoirs with a 381-mm length limit, managers must either reduce mortality rates (e.g., through restrictive harvest regulations) or increase growth of walleyes. When the assumed objective was to maintain the mean length of harvested walleyes at current levels, the observed annual mortality rates were below the mortality cap for all reservoirs except one. Mortality caps also provided insight on management objectives expressed in terms of proportional stock density (PSD). Results indicated that a PSD objective of 20-40 was realistic for most reservoirs. This study provides important walleye mortality information that can be used for monitoring or for inclusion into
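
    The catch-curve step referred to above has a compact textbook form: regress the natural log of catch-at-age on age for fully recruited ages, take Z as the negative slope, and convert to total annual mortality A = 1 - exp(-Z). The counts below are hypothetical and the regression is unweighted, whereas the study used a weighted catch curve.

        import numpy as np

        ages  = np.array([3, 4, 5, 6, 7, 8])          # fully recruited ages
        catch = np.array([420, 205, 98, 52, 24, 13])  # hypothetical catch-at-age

        slope, intercept = np.polyfit(ages, np.log(catch), 1)
        Z = -slope                      # instantaneous total mortality
        A = 1.0 - np.exp(-Z)            # total annual mortality
        print(f"Z = {Z:.2f} per year, annual mortality A = {A:.1%}")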

  10. Fracture Toughness Determination of Cracked Chevron Notched Brazilian Disc Rock Specimen via Griffith Energy Criterion Incorporating Realistic Fracture Profiles

    NASA Astrophysics Data System (ADS)

    Xu, Yuan; Dai, Feng; Zhao, Tao; Xu, Nu-wen; Liu, Yi

    2016-08-01

    The cracked chevron notched Brazilian disc (CCNBD) specimen has been suggested by the International Society for Rock Mechanics to measure the mode I fracture toughness of rocks, and has been widely adopted in laboratory tests. Nevertheless, a certain discrepancy has been observed in results when compared with those derived from methods using straight through cracked specimens, which might be due to the fact that the fracture profiles of rock specimens cannot match the straight through crack front as assumed in the measuring principle. In this study, the progressive fracturing of the CCNBD specimen is numerically investigated using the discrete element method (DEM), aiming to evaluate the impact of the realistic cracking profiles on the mode I fracture toughness measurements. The obtained results validate the curved fracture fronts throughout the fracture process, as reported in the literature. The fracture toughness is subsequently determined via the proposed G-method originated from Griffith's energy theory, in which the evolution of the realistic fracture profile as well as the accumulated fracture energy is quantified by DEM simulation. A comparison between the numerical tests and the experimental results derived from both the CCNBD and the semi-circular bend (SCB) specimens verifies that the G-method incorporating realistic fracture profiles can contribute to narrowing down the gap between the fracture toughness values measured via the CCNBD and the SCB method.
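
    The energy bookkeeping behind the proposed G-method can be summarised by the Griffith/Irwin relations below; how the accumulated fracture energy and the evolving curved fracture area are extracted from the DEM simulation is specific to the paper and is not reproduced here, and the plane-strain form is an assumption of this sketch:

        \[
        G_{\mathrm{Ic}} \;\approx\; \frac{\Delta U_{\mathrm{fracture}}}{\Delta A_{\mathrm{fracture}}},
        \qquad
        K_{\mathrm{Ic}} \;=\; \sqrt{\frac{E\, G_{\mathrm{Ic}}}{1-\nu^{2}}}\quad(\text{plane strain}),
        \]

    where ΔU is the fracture energy accumulated up to a given load step, ΔA the newly created fracture area, E Young's modulus and ν Poisson's ratio.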

  11. Estimation of time averages from irregularly spaced observations - With application to coastal zone color scanner estimates of chlorophyll concentration

    NASA Technical Reports Server (NTRS)

    Chelton, Dudley B.; Schlax, Michael G.

    1991-01-01

    The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average formed from the simple average of all observations within the averaging period and the optimal estimate formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes it a viable practical alternative to the composite average method generally employed at present.
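
    The optimal estimate described here is a Gauss-Markov (minimum mean-squared-error) weighting of the irregular observations. The sketch below shows the mechanics with an assumed Gaussian signal autocorrelation and hypothetical scales; the actual CZCS signal and error statistics used in the paper are different.

        import numpy as np

        rng = np.random.default_rng(2)

        # Hypothetical setup: 12 irregular observation times (days) in a 30-day window,
        # a signal with Gaussian autocorrelation, and white measurement noise.
        t_obs = np.sort(rng.uniform(0.0, 30.0, size=12))
        sig2, noise2, L = 1.0, 0.3, 8.0        # signal variance, noise variance, corr. scale

        def signal_cov(t1, t2):
            return sig2 * np.exp(-0.5 * ((t1[:, None] - t2[None, :]) / L) ** 2)

        C_dd = signal_cov(t_obs, t_obs) + noise2 * np.eye(t_obs.size)  # obs-obs covariance
        t_fine = np.linspace(0.0, 30.0, 601)
        C_dA = signal_cov(t_obs, t_fine).mean(axis=1)                  # obs-average covariance
        var_A = signal_cov(t_fine, t_fine).mean()                      # variance of the average

        w_opt = np.linalg.solve(C_dd, C_dA)              # Gauss-Markov (optimal) weights
        w_comp = np.full(t_obs.size, 1.0 / t_obs.size)   # composite-average weights

        mse_opt = var_A - C_dA @ w_opt
        mse_comp = var_A - 2.0 * w_comp @ C_dA + w_comp @ C_dd @ w_comp
        print(f"expected MSE: optimal {mse_opt:.3f}, composite {mse_comp:.3f}")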

  12. Depigmented skin and phantom color measurements for realistic prostheses.

    PubMed

    Tanner, Paul; Leachman, Sancy; Boucher, Kenneth; Ozçelik, Tunçer Burak

    2014-02-01

    The purpose of this study was to test the hypothesis that regardless of human skin phototype, areas of depigmented skin, as seen in vitiligo, are optically indistinguishable among skin phototypes. The average of the depigmented skin measurements can be used to develop the base color of realistic prostheses. Data was analyzed from 20 of 32 recruited vitiligo study participants. Diffuse reflectance spectroscopy measurements were made from depigmented skin and adjacent pigmented skin, then compared with 66 pigmented polydimethylsiloxane phantoms to determine pigment concentrations in turbid media for making realistic facial prostheses. The Area Under spectral intensity Curve (AUC) was calculated for average spectroscopy measurements of pigmented sites in relation to skin phototype (P = 0.0505) and depigmented skin in relation to skin phototype (P = 0.59). No significant relationship exists between skin phototypes and depigmented skin spectroscopy measurements. The average of the depigmented skin measurements (AUC 19,129) was the closest match to phantom 6.4 (AUC 19,162). Areas of depigmented skin are visibly indistinguishable per skin phototype, yet spectrometry shows that depigmented skin measurements varied and were unrelated to skin phototype. Possible sources of optical variation of depigmented skin include age, body site, blood flow, quantity/quality of collagen, and other chromophores. The average of all depigmented skin measurements can be used to derive the pigment composition and concentration for realistic facial prostheses. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  13. Exposure Render: An Interactive Photo-Realistic Volume Rendering Framework

    PubMed Central

    Kroes, Thomas; Post, Frits H.; Botha, Charl P.

    2012-01-01

    The field of volume visualization has undergone rapid development during the past years, both due to advances in suitable computing hardware and due to the increasing availability of large volume datasets. Recent work has focused on increasing the visual realism in Direct Volume Rendering (DVR) by integrating a number of visually plausible but often effect-specific rendering techniques, for instance modeling of light occlusion and depth of field. Besides yielding more attractive renderings, especially the more realistic lighting has a positive effect on perceptual tasks. Although these new rendering techniques yield impressive results, they exhibit limitations in terms of their flexibility and their performance. Monte Carlo ray tracing (MCRT), coupled with physically based light transport, is the de-facto standard for synthesizing highly realistic images in the graphics domain, although usually not from volumetric data. Due to the stochastic sampling of MCRT algorithms, numerous effects can be achieved in a relatively straight-forward fashion. For this reason, we have developed a practical framework that applies MCRT techniques also to direct volume rendering (DVR). With this work, we demonstrate that a host of realistic effects, including physically based lighting, can be simulated in a generic and flexible fashion, leading to interactive DVR with improved realism. In the hope that this improved approach to DVR will see more use in practice, we have made available our framework under a permissive open source license. PMID:22768292

  14. Bipartite qutrit local realist inequalities and the robustness of their quantum mechanical violation

    NASA Astrophysics Data System (ADS)

    Das, Debarshi; Datta, Shounak; Goswami, Suchetana; Majumdar, A. S.; Home, Dipankar

    2017-10-01

    Distinct from the type of local realist inequality (known as the Collins-Gisin-Linden-Massar-Popescu or CGLMP inequality) usually used for bipartite qutrit systems, we formulate a new set of local realist inequalities for bipartite qutrits by generalizing Wigner's argument that was originally formulated for the bipartite qubit singlet state. This treatment assumes existence of the overall joint probability distributions in the underlying stochastic hidden variable space for the measurement outcomes pertaining to the relevant trichotomic observables, satisfying the locality condition and yielding the measurable marginal probabilities. Such generalized Wigner inequalities (GWI) do not reduce to Bell-CHSH type inequalities by clubbing any two outcomes, and are violated by quantum mechanics (QM) for both the bipartite qutrit isotropic and singlet states using trichotomic observables defined by six-port beam splitter as well as by the spin-1 component observables. The efficacy of GWI is then probed in these cases by comparing the QM violation of GWI with that obtained for the CGLMP inequality. This comparison is done by incorporating white noise in the singlet and isotropic qutrit states. It is found that for the six-port beam splitter observables, QM violation of GWI is more robust than that of the CGLMP inequality for singlet qutrit states, while for isotropic qutrit states, QM violation of the CGLMP inequality is more robust. On the other hand, for the spin-1 component observables, QM violation of GWI is more robust for both the types of states considered.

  15. Helping With All Your Heart: Realistic Heart Stimulus and Compliance With an Organ Donation Request.

    PubMed

    Jacob, Céline; Guéguen, Nicolas

    2015-01-01

    Pictures and images are important aspects in fundraising advertising and could generate more donations. In two experimental studies, we examined the effect of various pictures of hearts on compliance with a request for organ donations. The solicitor wore a white tee shirt on which various forms of hearts were printed: symbolic versus realistic (first experiment), none versus symbolic versus realistic (second experiment). Results showed that more compliance was found in the realistic heart condition, whereas the symbolic heart form had no significant effect.

  16. A Realistic Experimental Design and Statistical Analysis Project

    ERIC Educational Resources Information Center

    Muske, Kenneth R.; Myers, John A.

    2007-01-01

    A realistic applied chemical engineering experimental design and statistical analysis project is documented in this article. This project has been implemented as part of the professional development and applied statistics courses at Villanova University over the past five years. The novel aspects of this project are that the students are given a…

  17. Improved Estimation of Orbits and Physical Properties of Objects in GEO

    NASA Astrophysics Data System (ADS)

    Bradley, B.; Axelrad, P.

    2013-09-01

    Orbital debris is a major concern for satellite operators, both commercial and military. Debris in the geosynchronous (GEO) belt is of particular concern because this unique region is such a valuable, limited resource, and, from the ground, we cannot reliably track and characterize GEO objects smaller than 1 meter in diameter. Space-based space surveillance (SBSS) is required to observe GEO objects without weather restriction and with improved viewing geometry. SBSS satellites have thus far been placed in Sun-synchronous orbits. This paper investigates the benefits to GEO orbit determination (including the estimation of mass, area, and shape) that arise from placing observing satellites in geosynchronous transfer orbit (GTO) and a sub-GEO orbit. Recently, several papers have reported on simulation studies to estimate orbits and physical properties; however, these studies use simulated objects and ground-based measurements, often with dense and long data arcs. While this type of simulation provides valuable insight into what is possible, as far as state estimation goes, it is not a very realistic observing scenario and thus may not yield meaningful accuracies. Our research improves upon simulations published to date by utilizing publicly available ephemerides for the WAAS satellites (Anik F1R and Galaxy 15), accurate at the meter level. By simulating and deliberately degrading right ascension and declination observations, consistent with these ephemerides, a realistic assessment of the achievable orbit determination accuracy using GTO and sub-GEO SBSS platforms is performed. Our results show that orbit accuracy is significantly improved as compared to a Sun-synchronous platform. Physical property estimation is also performed using simulated astrometric and photometric data taken from GTO and sub-GEO sensors. Simulations of SBSS-only as well as combined SBSS and ground-based observation tracks are used to study the improvement in area, mass, and shape estimation

  18. Improving atomic displacement and replacement calculations with physically realistic damage models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nordlund, Kai; Zinkle, Steven J.; Sand, Andrea E.

    Atomic collision processes are fundamental to numerous advanced materials technologies such as electron microscopy, semiconductor processing and nuclear power generation. Extensive experimental and computer simulation studies over the past several decades provide the physical basis for understanding the atomic-scale processes occurring during primary displacement events. The current international standard for quantifying this energetic particle damage, the Norgett-Robinson-Torrens displacements per atom (NRT-dpa) model, has nowadays several well-known limitations. In particular, the number of radiation defects produced in energetic cascades in metals is only ~1/3 the NRT-dpa prediction, while the number of atoms involved in atomic mixing is about a factor of 30 larger than the dpa value. Here we propose two new complementary displacement production estimators (athermal recombination corrected dpa, arc-dpa) and atomic mixing (replacements per atom, rpa) functions that extend the NRT-dpa by providing more physically realistic descriptions of primary defect creation in materials and may become additional standard measures for radiation damage quantification.
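
    The NRT count and the athermal-recombination correction mentioned here take a simple closed form; the efficiency-function shape below follows the power-law form the authors propose, but the threshold energy and the fit constants b and c are placeholders, not the material-specific values reported in the paper.

        def nrt_dpa(t_dam_ev, e_d_ev):
            """Norgett-Robinson-Torrens displacement count for damage energy t_dam (eV)."""
            if t_dam_ev < e_d_ev:
                return 0.0
            if t_dam_ev < 2.0 * e_d_ev / 0.8:
                return 1.0
            return 0.8 * t_dam_ev / (2.0 * e_d_ev)

        def arc_dpa(t_dam_ev, e_d_ev, b, c):
            """Athermal-recombination-corrected count; b, c are material-specific fits."""
            n_nrt = nrt_dpa(t_dam_ev, e_d_ev)
            if t_dam_ev < 2.0 * e_d_ev / 0.8:
                return n_nrt
            xi = (1.0 - c) * (2.0 * e_d_ev / 0.8) ** (-b) * t_dam_ev ** b + c
            return xi * n_nrt   # xi -> c at high energy, reproducing the ~1/3 efficiency

        # Example: 20 keV damage energy, threshold 40 eV, placeholder b and c.
        print(nrt_dpa(20e3, 40.0), arc_dpa(20e3, 40.0, b=-0.57, c=0.3))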

  19. Improving atomic displacement and replacement calculations with physically realistic damage models

    DOE PAGES

    Nordlund, Kai; Zinkle, Steven J.; Sand, Andrea E.; ...

    2018-03-14

    Atomic collision processes are fundamental to numerous advanced materials technologies such as electron microscopy, semiconductor processing and nuclear power generation. Extensive experimental and computer simulation studies over the past several decades provide the physical basis for understanding the atomic-scale processes occurring during primary displacement events. The current international standard for quantifying this energetic particle damage, the Norgett-Robinson-Torrens displacements per atom (NRT-dpa) model, has nowadays several well-known limitations. In particular, the number of radiation defects produced in energetic cascades in metals is only ~1/3 the NRT-dpa prediction, while the number of atoms involved in atomic mixing is about a factor of 30 larger than the dpa value. Here we propose two new complementary displacement production estimators (athermal recombination corrected dpa, arc-dpa) and atomic mixing (replacements per atom, rpa) functions that extend the NRT-dpa by providing more physically realistic descriptions of primary defect creation in materials and may become additional standard measures for radiation damage quantification.

  20. Critical Reflections on Realist Review: Insights from Customizing the Methodology to the Needs of Participatory Research Assessment

    ERIC Educational Resources Information Center

    Jagosh, Justin; Pluye, Pierre; Wong, Geoff; Cargo, Margaret; Salsberg, Jon; Bush, Paula L.; Herbert, Carol P.; Green, Lawrence W.; Greenhalgh, Trish; Macaulay, Ann C.

    2014-01-01

    Realist review has increased in popularity as a methodology for complex intervention assessment. Our experience suggests that the process of designing a realist review requires its customization to areas under investigation. To elaborate on this idea, we first describe the logic underpinning realist review and then present critical reflections on…

  1. Impact of data assimilation on Eulerian versus Lagrangian estimates of upper ocean transport

    NASA Astrophysics Data System (ADS)

    Sperrevik, Ann Kristin; Röhrs, Johannes; Christensen, Kai Håkon

    2017-07-01

    Using four-dimensional variational analysis, we produce an estimate of the state of a coastal region in Northern Norway during the late winter and spring in 1984. We use satellite sea surface temperature and in situ observations from a series of intensive field campaigns, and obtain a more realistic distribution of water masses both in the horizontal and the vertical than a pure downscaling approach can achieve. Although the distribution of Eulerian surface current speeds are similar, we find that they are more variable and less dependent on model bathymetry in our reanalysis compared to a hindcast produced using the same modeling system. Lagrangian drift currents on the other hand are significantly changed, with overall higher kinetic energy levels in the reanalysis than in the hindcast, particularly in the superinertial frequency band.

  2. Student Work Experience: A Realistic Approach to Merchandising Education.

    ERIC Educational Resources Information Center

    Horridge, Patricia; And Others

    1980-01-01

    Relevant and realistic experiences are needed to prepare the student for a future career. Addresses the results of a survey of colleges and universities in the United States in regard to their student work experience (SWE) in fashion merchandising. (Author)

  3. Estimation of the sea surface's two-scale backscatter parameters

    NASA Technical Reports Server (NTRS)

    Wentz, F. J.

    1978-01-01

    The relationship between the sea-surface normalized radar cross section and the friction velocity vector is determined using a parametric two-scale scattering model. The model parameters are found from a nonlinear maximum likelihood estimation. The estimation is based on aircraft scatterometer measurements and the sea-surface anemometer measurements collected during the JONSWAP '75 experiment. The estimates of the ten model parameters converge to realistic values that are in good agreement with the available oceanographic data. The rms discrepancy between the model and the cross section measurements is 0.7 dB, which is the rms sum of a 0.3 dB average measurement error and a 0.6 dB modeling error.

  4. Teaching Learning Disabled Adolescents to Set Realistic Goals.

    ERIC Educational Resources Information Center

    Tollefson, Nona; And Others

    Sixty-one learning disabled (LD) adolescents in four junior high schools were randomly assigned to experimental or control groups as part of an effort to teach LD students to set realistic goals so they might experience success and satisfaction in school. Ss in the experimental group made achievement contracts and predicted their performance in…

  5. Accurate determinations of alpha(s) from realistic lattice QCD.

    PubMed

    Mason, Q; Trottier, H D; Davies, C T H; Foley, K; Gray, A; Lepage, G P; Nobes, M; Shigemitsu, J

    2005-07-29

    We obtain a new value for the QCD coupling constant by combining lattice QCD simulations with experimental data for hadron masses. Our lattice analysis is the first to (1) include vacuum polarization effects from all three light-quark flavors (using MILC configurations), (2) include third-order terms in perturbation theory, (3) systematically estimate fourth and higher-order terms, (4) use an unambiguous lattice spacing, and (5) use an O(a²)-accurate QCD action. We use 28 different (but related) short-distance quantities to obtain alpha_MS-bar^(5)(M_Z) = 0.1170(12).

  6. Decoding tactile afferent activity to obtain an estimate of instantaneous force and torque applied to the fingerpad

    PubMed Central

    Birznieks, Ingvars; Redmond, Stephen J.

    2015-01-01

    Dexterous manipulation is not possible without sensory information about object properties and manipulative forces. Fundamental neuroscience has been unable to demonstrate how information about multiple stimulus parameters may be continuously extracted, concurrently, from a population of tactile afferents. This is the first study to demonstrate this, using spike trains recorded from tactile afferents innervating the monkey fingerpad. A multiple-regression model, requiring no a priori knowledge of stimulus-onset times or stimulus combination, was developed to obtain continuous estimates of instantaneous force and torque. The stimuli consisted of a normal-force ramp (to a plateau of 1.8, 2.2, or 2.5 N), on top of which −3.5, −2.0, 0, +2.0, or +3.5 mNm torque was applied about the normal to the skin surface. The model inputs were sliding windows of binned spike counts recorded from each afferent. Models were trained and tested by 15-fold cross-validation to estimate instantaneous normal force and torque over the entire stimulation period. With the use of the spike trains from 58 slow-adapting type I and 25 fast-adapting type I afferents, the instantaneous normal force and torque could be estimated with small error. This study demonstrated that instantaneous force and torque parameters could be reliably extracted from a small number of tactile afferent responses in a real-time fashion with stimulus combinations that the model had not been exposed to during training. Analysis of the model weights may reveal how interactions between stimulus parameters could be disentangled for complex population responses and could be used to test neurophysiologically relevant hypotheses about encoding mechanisms. PMID:25948866
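
    The decoding step (sliding windows of binned spike counts mapped to instantaneous force by multiple regression with 15-fold cross-validation) can be sketched with synthetic data; the number of afferents, bin width, window length, ridge penalty and the Poisson stand-in for real spike trains are all assumptions of this example, not the study's recordings.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        n_bins, n_afferents, window = 600, 80, 5

        # Synthetic stand-in: spike counts whose rates follow a normal-force ramp.
        force = np.clip(np.linspace(0.0, 2.2, n_bins) + 0.05 * rng.normal(size=n_bins), 0, None)
        gains = rng.uniform(0.5, 3.0, size=n_afferents)
        counts = rng.poisson(np.outer(force, gains))

        # Design matrix: for each bin, the last `window` bins of counts from every afferent.
        padded = np.vstack([np.zeros((window - 1, n_afferents)), counts])
        X = np.hstack([padded[i:i + n_bins] for i in range(window)])

        scores = cross_val_score(Ridge(alpha=1.0), X, force, cv=15, scoring="r2")
        print(f"15-fold cross-validated R^2: {scores.mean():.2f}")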

  7. Realistic Radio Communications in Pilot Simulator Training

    NASA Technical Reports Server (NTRS)

    Burki-Cohen, Judith; Kendra, Andrew J.; Kanki, Barbara G.; Lee, Alfred T.

    2000-01-01

    Simulators used for total training and evaluation of airline pilots must satisfy stringent criteria in order to assure their adequacy for training and checking maneuvers. Air traffic control and company radio communications simulation, however, may still be left to role-play by the already taxed instructor/evaluators in spite of their central importance in every aspect of the flight environment. The underlying premise of this research is that providing a realistic radio communications environment would increase safety by enhancing pilot training and evaluation. This report summarizes the first-year efforts of assessing the requirement and feasibility of simulating radio communications automatically. A review of the training and crew resource/task management literature showed both practical and theoretical support for the need for realistic radio communications simulation. A survey of 29 instructor/evaluators from 14 airlines revealed that radio communications are mainly role-played by the instructor/evaluators. This increases instructor/evaluators' own workload while unrealistically lowering pilot communications load compared to actual operations, with a concomitant loss in training/evaluation effectiveness. A technology review searching for an automated means of providing radio communications to and from aircraft with minimal human effort showed that while promising, the technology is still immature. Further research and the need for establishing a proof-of-concept are also discussed.

  8. Estimation of parameters of dose volume models and their confidence limits

    NASA Astrophysics Data System (ADS)

    van Luijk, P.; Delvigne, T. C.; Schilstra, C.; Schippers, J. M.

    2003-07-01

    Predictions of the normal-tissue complication probability (NTCP) for the ranking of treatment plans are based on fits of dose-volume models to clinical and/or experimental data. In the literature several different fit methods are used. In this work frequently used methods and techniques to fit NTCP models to dose response data for establishing dose-volume effects, are discussed. The techniques are tested for their usability with dose-volume data and NTCP models. Different methods to estimate the confidence intervals of the model parameters are part of this study. From a critical-volume (CV) model with biologically realistic parameters a primary dataset was generated, serving as the reference for this study and describable by the NTCP model. The CV model was fitted to this dataset. From the resulting parameters and the CV model, 1000 secondary datasets were generated by Monte Carlo simulation. All secondary datasets were fitted to obtain 1000 parameter sets of the CV model. Thus the 'real' spread in fit results due to statistical spreading in the data is obtained and has been compared with estimates of the confidence intervals obtained by different methods applied to the primary dataset. The confidence limits of the parameters of one dataset were estimated using the methods, employing the covariance matrix, the jackknife method and directly from the likelihood landscape. These results were compared with the spread of the parameters, obtained from the secondary parameter sets. For the estimation of confidence intervals on NTCP predictions, three methods were tested. Firstly, propagation of errors using the covariance matrix was used. Secondly, the meaning of the width of a bundle of curves that resulted from parameters that were within the one standard deviation region in the likelihood space was investigated. Thirdly, many parameter sets and their likelihood were used to create a likelihood-weighted probability distribution of the NTCP. It is concluded that for the
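
    The Monte Carlo procedure described (generate secondary datasets from the fitted model, refit each, and take the spread of the refitted parameters as the reference uncertainty) is illustrated below with a two-parameter logistic dose-response standing in for the critical-volume NTCP model; the parameterisation, sample size and true values are assumptions of this sketch.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit

        rng = np.random.default_rng(4)

        def ntcp(dose, d50, gamma):
            """Toy logistic NTCP model (not the paper's critical-volume model)."""
            return expit(4.0 * gamma * (dose / d50 - 1.0))

        def neg_log_lik(params, dose, resp):
            p = np.clip(ntcp(dose, *params), 1e-9, 1.0 - 1e-9)
            return -np.sum(resp * np.log(p) + (1 - resp) * np.log(1.0 - p))

        # "Primary" dataset generated from known parameters, then fitted.
        dose = rng.uniform(20.0, 80.0, size=200)
        resp = rng.random(200) < ntcp(dose, 55.0, 1.5)
        fit = minimize(neg_log_lik, x0=[50.0, 1.0], args=(dose, resp), method="Nelder-Mead")

        # Secondary (Monte Carlo) datasets from the fitted model, refitted one by one.
        refits = []
        for _ in range(200):
            resp_b = rng.random(dose.size) < ntcp(dose, *fit.x)
            refits.append(minimize(neg_log_lik, x0=fit.x, args=(dose, resp_b),
                                   method="Nelder-Mead").x)
        print("fit:", np.round(fit.x, 2), "Monte Carlo spread:", np.round(np.std(refits, axis=0), 2))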

  9. The application of parameter estimation to flight measurements to obtain lateral-directional stability derivatives of an augmented jet-flap STOL airplane

    NASA Technical Reports Server (NTRS)

    Stephenson, J. D.

    1983-01-01

    Flight experiments with an augmented jet flap STOL aircraft provided data from which the lateral directional stability and control derivatives were calculated by applying a linear regression parameter estimation procedure. The tests, which were conducted with the jet flaps set at a 65 deg deflection, covered a large range of angles of attack and engine power settings. The effect of changing the angle of the jet thrust vector was also investigated. Test results are compared with stability derivatives that had been predicted. The roll damping derived from the tests was significantly larger than had been predicted, whereas the other derivatives were generally in agreement with the predictions. Results obtained using a maximum likelihood estimation procedure are compared with those from the linear regression solutions.

  10. An algorithm for the estimation of bounds on the emissivity and temperatures from thermal multispectral airborne remotely sensed data

    NASA Technical Reports Server (NTRS)

    Jaggi, S.; Quattrochi, D.; Baskin, R.

    1992-01-01

    The effective flux incident upon the detectors of a thermal sensor, after it has been corrected for atmospheric effects, is a function of a non-linear combination of the emissivity of the target for that channel and the temperature of the target. The sensor system cannot separate the contribution from the emissivity and the temperature that constitute the flux value. A method that estimates the bounds on these temperatures and emissivities from thermal data is described. This method is then tested with remotely sensed data obtained from NASA's Thermal Infrared Multispectral Scanner (TIMS) - a 6 channel thermal sensor. Since this is an under-determined set of equations, i.e., there are 7 unknowns (6 emissivities and 1 temperature) and 6 equations (corresponding to the 6 channel fluxes), there exists, in theory, an infinite number of combinations of emissivity and temperature values that can satisfy these equations. Using some realistic bounds on the emissivities, bounds on the temperature are calculated. These bounds on the temperature are refined to estimate a tighter bound on the emissivity of the source. An error analysis is also carried out to quantitatively determine the extent of uncertainty introduced in the estimate of these parameters. This method is useful only when a realistic set of bounds can be obtained for the emissivities of the data. In the case of water the lower and upper bounds were set at 0.97 and 1.00 respectively. Five flights were flown in succession at altitudes of 2 km (low), 6 km (mid), 12 km (high), and then back again at 6 km and 2 km. The area selected was the Ross Barnett Reservoir near Jackson, Mississippi. The mission was flown during the predawn hours of 1 Feb. 1992. Radiosonde data was collected for that duration to profile the characteristics of the atmosphere. Ground truth temperatures using thermometers and radiometers were also obtained over an area of the reservoir. The results of two independent runs of the radiometer data averaged
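
    The core inversion step (with the emissivity confined to known bounds, each channel's atmosphere-corrected radiance brackets the surface temperature) reduces to inverting Planck's law at the two emissivity extremes; the wavelength and radiance below are hypothetical, and the intersection of bounds across the six TIMS channels is not shown.

        import numpy as np

        H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23   # SI constants

        def planck(wl_m, temp_k):
            """Spectral radiance B(lambda, T), W m^-2 sr^-1 m^-1."""
            a = 2.0 * H * C**2 / wl_m**5
            return a / np.expm1(H * C / (wl_m * KB * temp_k))

        def inv_planck(wl_m, radiance):
            """Temperature giving the requested radiance at this wavelength."""
            a = 2.0 * H * C**2 / wl_m**5
            return H * C / (wl_m * KB * np.log1p(a / radiance))

        wl = 10.5e-6                             # hypothetical channel wavelength
        measured = 0.985 * planck(wl, 300.0)     # stand-in atmosphere-corrected radiance
        eps_lo, eps_hi = 0.97, 1.00              # emissivity bounds for water, as in the text

        # radiance = eps * B(lambda, T): low emissivity -> high temperature bound, and vice versa.
        print(f"T between {inv_planck(wl, measured / eps_hi):.2f} K "
              f"and {inv_planck(wl, measured / eps_lo):.2f} K")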

  11. Seismic shaking scenarios in realistic 3D crustal model of Northern Italy

    NASA Astrophysics Data System (ADS)

    Molinari, I.; Morelli, A.; Basini, P.; Berbellini, A.

    2013-12-01

    Simulation of seismic wave propagation in realistic crustal structures is a fundamental tool to evaluate earthquake-generated ground shaking and assess seismic hazard. Current-generation numerical codes, and modern HPC infrastructures, allow for realistic simulations in complex 3D geologic structures. We apply such methodology to the Po Plain in Northern Italy -- a region with relatively rare earthquakes but having large property and industrial exposure, as it became clear during the two M~6 events of May 20-29, 2012. Historical seismicity is well known in this region, with maximum magnitudes estimates reaching M~7, and wave field amplitudes may be significantly amplified by the presence of the very thick sedimentary basin. Our goal is to produce estimates of expected ground shaking in Northern Italy through detailed deterministic simulations of ground motion due to expected earthquakes. We defined a three-dimensional model of the earth's crust using geo-statistical tools to merge the abundant information existing in the form of borehole data and seismic reflection profiles that had been shot in the '70s and the '80s for hydrocarbon exploration. Such information, that has been used by geologists to infer the deep structural setup, had never been merged to build a 3D model to be used for seismological simulations. We implement the model in SPECFEM3D_Cartesian and a hexahedral mesh with elements of ~2km, that allows us to simulate waves with minimum period of ~2 seconds. The model has then been optimized through comparison between simulated and recorded seismograms for the ~20 moderate-magnitude events (Mw > 4.5) that have been instrumentally recorded in the last 15 years. Realistic simulations in the frequency band of most common engineering relevance -- say, ~1 Hz -- at such a large scale would require an extremely detailed structural model, currently not available, and prohibitive computational resources. However, an interest is growing in longer period ground

  12. Estimate of the uncertainty in measurement for the determination of mercury in seafood by TDA AAS.

    PubMed

    Torres, Daiane Placido; Olivares, Igor R B; Queiroz, Helena Müller

    2015-01-01

    An approach is proposed for estimating the uncertainty in measurement for the determination of mercury in seafood by thermal decomposition/amalgamation atomic absorption spectrometry (TDA AAS), considering both the individual sources related to the different steps of the method under evaluation and the uncertainties estimated from the validation data. The method has been fully optimized and validated in an official laboratory of the Ministry of Agriculture, Livestock and Food Supply of Brazil, in order to comply with national and international food regulations and quality assurance, and has been accredited under the ISO/IEC 17025 norm since 2010. The present approach to estimating the uncertainty in measurement was based on six sources of uncertainty for mercury determination in seafood by TDA AAS, identified during the validation process: linear least-squares regression, repeatability, intermediate precision, correction factor of the analytical curve, sample mass, and standard reference solution. Those that most influenced the uncertainty in measurement were sample mass, repeatability, intermediate precision and calibration curve. The obtained result for the estimate of uncertainty in measurement in the present work reached a value of 13.39%, which complies with the European Regulation EC 836/2011. This figure represents a very realistic estimate of the routine conditions, since it fairly encompasses the dispersion obtained from the value attributed to the sample and the value measured by the laboratory analysts. From this outcome, it is possible to infer that the validation data (based on calibration curve, recovery and precision), together with the variation on sample mass, can offer a proper estimate of uncertainty in measurement.
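
    For readers unfamiliar with the bookkeeping, the usual GUM-style combination of relative standard uncertainties is a root sum of squares followed by a coverage factor; the six component values below are placeholders for the sources named in the abstract (the laboratory's actual budget and any correlations are not reproduced), and independence plus k = 2 are assumptions of this sketch.

        import math

        components_pct = {                       # hypothetical relative standard uncertainties
            "linear least-squares regression": 2.5,
            "repeatability": 4.0,
            "intermediate precision": 3.5,
            "calibration-curve correction": 2.0,
            "sample mass": 3.0,
            "standard reference solution": 1.0,
        }

        u_c = math.sqrt(sum(u ** 2 for u in components_pct.values()))   # combined (uncorrelated)
        U = 2.0 * u_c                                                   # expanded, k = 2 (~95 %)
        print(f"combined u_c = {u_c:.2f} %, expanded U = {U:.2f} %")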

  13. Species traits outweigh nested structure in driving the effects of realistic biodiversity loss on productivity.

    PubMed

    Wolfi, Amelia A; Zavaleta, Erika S

    2015-01-01

    While most studies of the relationship between biodiversity and ecosystem functioning have examined randomized diversity losses, several recent experiments have employed nested, realistic designs and found that realistic species losses had larger consequences than random losses for ecosystem functioning. Progressive, realistic, biodiversity losses are generally strongly nested, but this nestedness is a potentially confounding effect. Here, we address whether nonrandom trait loss or degree of nestedness drives the relationship between diversity and productivity in a realistic biodiversity-loss experiment. We isolated the effect of nestedness through post hoc analyses of data from an experimental biodiversity manipulation in a California serpentine grassland. We found that the order in which plant traits are lost as diversity declines influences the diversity-productivity relationship more than the degree of nestedness does. Understanding the relationship between the expected order of species loss and functional traits is becoming increasingly important in the face of ongoing biodiversity loss worldwide. Our findings illustrate the importance of species composition and the order of species loss, rather than nestedness per se, for understanding the mechanisms underlying the effects of realistic species losses on ecosystem functioning.

  14. Using remotely sensed data and stochastic models to simulate realistic flood hazard footprints across the continental US

    NASA Astrophysics Data System (ADS)

    Bates, P. D.; Quinn, N.; Sampson, C. C.; Smith, A.; Wing, O.; Neal, J. C.

    2017-12-01

    Remotely sensed data has transformed the field of large scale hydraulic modelling. New digital elevation, hydrography and river width data has allowed such models to be created for the first time, and remotely sensed observations of water height, slope and water extent have allowed them to be calibrated and tested. As a result, we are now able to conduct flood risk analyses at national, continental or even global scales. However, continental scale analyses have significant additional complexity compared to typical flood risk modelling approaches. Traditional flood risk assessment uses frequency curves to define the magnitude of extreme flows at gauging stations. The flow values for given design events, such as the 1 in 100 year return period flow, are then used to drive hydraulic models in order to produce maps of flood hazard. Such an approach works well for single gauge locations and local models because over relatively short river reaches (say 10-60km) one can assume that the return period of an event does not vary. At regional to national scales and across multiple river catchments this assumption breaks down, and for a given flood event the return period will be different at different gauging stations, a pattern known as the event 'footprint'. Despite this, many national scale risk analyses still use 'constant in space' return period hazard layers (e.g. the FEMA Special Flood Hazard Areas) in their calculations. Such an approach can estimate potential exposure, but will over-estimate risk and cannot determine likely flood losses over a whole region or country. We address this problem by using a stochastic model to simulate many realistic extreme event footprints based on observed gauged flows and the statistics of gauge to gauge correlations. We take the entire USGS gauge data catalogue for sites with > 45 years of record and use a conditional approach for multivariate extreme values to generate sets of flood events with realistic return period variation in
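
    The notion of a spatially varying return period within a single event footprint can be illustrated with a toy Gaussian-copula simulation over three gauges; the correlation matrix is invented, and this is a deliberate simplification of the conditional multivariate extreme-value model (fitted to USGS gauge records) actually used in the study.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        corr = np.array([[1.0, 0.7, 0.4],      # assumed inter-gauge dependence of annual maxima
                         [0.7, 1.0, 0.6],
                         [0.4, 0.6, 1.0]])
        chol = np.linalg.cholesky(corr)

        z = rng.normal(size=(5, 3)) @ chol.T    # 5 synthetic annual events at 3 gauges
        u = stats.norm.cdf(z)                   # uniform marginals (Gaussian copula)
        rp = 1.0 / (1.0 - u)                    # per-gauge return period (years)
        for i, event in enumerate(rp):
            print(f"event {i}: return periods at the three gauges =", np.round(event, 1))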

  15. Rethinking Mathematics Teaching in Liberia: Realistic Mathematics Education

    ERIC Educational Resources Information Center

    Stemn, Blidi S.

    2017-01-01

    In some African cultures, the concept of division does not necessarily mean sharing money or an item equally. How an item is shared might depend on the ages of the individuals involved. This article describes the use of the Realistic Mathematics Education (RME) approach to teach division word problems involving money in a 3rd-grade class in…

  16. Engendering Anthropocentrism: Lessons from Children's Realistic Animal Stories.

    ERIC Educational Resources Information Center

    Johnson, Kathleen R.

    In children's realistic stories about animals, a number of wholly and unambiguously anthropocentric assumptions are at work. For instance, in one sampling of 50 stories, most of the books (81%) involve a pet or the process of domesticating a wild animal. In most cases the primary animal character is a dog or horse. The predominance of…

  17. Construction of hexahedral finite element mesh capturing realistic geometries of a petroleum reserve

    DOE PAGES

    Park, Byoung Yoon; Roberts, Barry L.; Sobolik, Steven R.

    2017-07-27

    The three-dimensional finite element mesh capturing realistic geometries of the Bayou Choctaw site has been constructed using the sonar and seismic survey data obtained from the field. The mesh consists of hexahedral elements because the salt constitutive model is coded using hexahedral elements. Various ideas and techniques to construct finite element meshes capturing artificially and naturally formed geometries are provided. Techniques to reduce the number of elements as much as possible, saving computer run time while maintaining computational accuracy, are also introduced. The steps and methodologies could be applied to construct the meshes of the Big Hill, Bryan Mound, and West Hackberry strategic petroleum reserve sites. The methodology could also be applied to complex-shaped masses for various civil and geological structures.

  18. Construction of hexahedral finite element mesh capturing realistic geometries of a petroleum reserve

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Byoung Yoon; Roberts, Barry L.; Sobolik, Steven R.

    The three-dimensional finite element mesh capturing realistic geometries of the Bayou Choctaw site has been constructed using the sonar and seismic survey data obtained from the field. The mesh consists of hexahedral elements because the salt constitutive model is coded using hexahedral elements. Various ideas and techniques to construct finite element meshes capturing artificially and naturally formed geometries are provided. Techniques to reduce the number of elements as much as possible, saving computer run time while maintaining computational accuracy, are also introduced. The steps and methodologies could be applied to construct the meshes of the Big Hill, Bryan Mound, and West Hackberry strategic petroleum reserve sites. The methodology could also be applied to complex-shaped masses for various civil and geological structures.

  19. New Fetal Dose Estimates from 18F-FDG Administered During Pregnancy: Standardization of Dose Calculations and Estimations with Voxel-Based Anthropomorphic Phantoms.

    PubMed

    Zanotti-Fregonara, Paolo; Chastan, Mathieu; Edet-Sanson, Agathe; Ekmekcioglu, Ozgul; Erdogan, Ezgi Basak; Hapdey, Sebastien; Hindie, Elif; Stabin, Michael G

    2016-11-01

    Data from the literature show that the fetal absorbed dose from 18F-FDG administration to the pregnant mother ranges from 0.5E-2 to 4E-2 mGy/MBq. These figures were, however, obtained using different quantification techniques and with basic geometric anthropomorphic phantoms. The aim of this study was to refine the fetal dose estimates of published as well as new cases using realistic voxel-based phantoms. The 18F-FDG doses to the fetus (n = 19; 5-34 wk of pregnancy) were calculated with new voxel-based anthropomorphic phantoms of the pregnant woman. The image-derived fetal time-integrated activity values were combined with those of the mothers' organs from the International Commission on Radiological Protection publication 106 and the dynamic bladder model with a 1-h bladder-voiding interval. The dose to the uterus was used as a proxy for early pregnancy (up to 10 wk). The time-integrated activities were entered into OLINDA/EXM 1.1 to derive the dose with the classic anthropomorphic phantoms of pregnant women, then into OLINDA/EXM 2.0 to assess the dose using new voxel-based phantoms. The average fetal doses (mGy/MBq) with OLINDA/EXM 2.0 were 2.5E-02 in early pregnancy, 1.3E-02 in the late part of the first trimester, 8.5E-03 in the second trimester, and 5.1E-03 in the third trimester. The differences compared with the doses calculated with OLINDA/EXM 1.1 were +7%, +70%, +35%, and -8%, respectively. Except in late pregnancy, the doses estimated with realistic voxelwise anthropomorphic phantoms are higher than the doses derived from old geometric phantoms. The doses remain, however, well below the threshold for any deterministic effects. Thus, pregnancy is not an absolute contraindication of a clinically justified 18F-FDG PET scan. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
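
    The dose arithmetic behind such estimates can be illustrated with a toy calculation: the fetal dose per unit administered activity is the sum, over source regions, of the time-integrated activity multiplied by the corresponding S value. All numbers below are placeholders, not values from OLINDA/EXM or ICRP publication 106.

        # Placeholder time-integrated activities (MBq·h per MBq administered) and
        # S values (mGy per MBq·h) for a handful of source regions; illustrative only.
        time_integrated_activity = {"bladder_contents": 0.20, "liver": 0.13, "fetus": 0.05}
        s_value_to_fetus = {"bladder_contents": 1.0e-2, "liver": 3.0e-3, "fetus": 1.0e-1}

        fetal_dose = sum(time_integrated_activity[src] * s_value_to_fetus[src]
                         for src in time_integrated_activity)  # mGy per MBq administered
        print(f"Fetal dose ~ {fetal_dose:.1e} mGy/MBq")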

  20. Estimation of cardiac motion in cine-MRI sequences by correlation transform optical flow of monogenic features distance

    NASA Astrophysics Data System (ADS)

    Gao, Bin; Liu, Wanyu; Wang, Liang; Liu, Zhengjun; Croisille, Pierre; Delachartre, Philippe; Clarysse, Patrick

    2016-12-01

    Cine-MRI is widely used for the analysis of cardiac function in clinical routine, because of its high soft tissue contrast and relatively short acquisition time in comparison with other cardiac MRI techniques. The gray level distribution in cardiac cine-MRI is relatively homogeneous within the myocardium, and can therefore make motion quantification difficult. To ensure that the motion estimation problem is well posed, more image features have to be considered. This work is inspired by a method previously developed for color image processing. The monogenic signal provides a framework to estimate the local phase, orientation, and amplitude of an image, three features which locally characterize the 2D intensity profile. The independent monogenic features are combined into a 3D matrix for motion estimation. To improve motion estimation accuracy, we chose the zero-mean normalized cross-correlation as a matching measure, and implemented a bilateral filter for denoising and edge-preservation. The monogenic features distance is used in lieu of the color space distance in the bilateral filter. Results obtained from four realistic simulated sequences outperformed two other state-of-the-art methods even in the presence of noise. The motion estimation errors (end-point error) using our proposed method were reduced by about 20% in comparison with those obtained by the other tested methods. The new methodology was evaluated on four clinical sequences from patients presenting with cardiac motion dysfunctions and one healthy volunteer. The derived strain fields compared favorably in their ability to identify myocardial regions with impaired motion.
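
    The matching measure named here, zero-mean normalized cross-correlation, is simple to state; the sketch below is a generic implementation for two equally sized patches and is not taken from the authors' code.

        import numpy as np

        def zncc(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
            """Zero-mean normalized cross-correlation between two equally sized patches."""
            a = patch_a.astype(float) - patch_a.mean()
            b = patch_b.astype(float) - patch_b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return float((a * b).sum() / denom) if denom > 0 else 0.0

        # Identical patches give 1.0; an inverted patch gives -1.0.
        ref = np.arange(25.0).reshape(5, 5)
        print(zncc(ref, ref), zncc(ref, -ref))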

  1. Demonstrating a Realistic IP Mission Prototype

    NASA Technical Reports Server (NTRS)

    Rash, James; Ferrer, Arturo B.; Goodman, Nancy; Ghazi-Tehrani, Samira; Polk, Joe; Johnson, Lorin; Menke, Greg; Miller, Bill; Criscuolo, Ed; Hogie, Keith

    2003-01-01

    Flight software and hardware and realistic space communications environments were elements of recent demonstrations of the Internet Protocol (IP) mission concept in the lab. The Operating Missions as Nodes on the Internet (OMNI) Project and the Flight Software Branch at NASA/GSFC collaborated to build the prototype of a representative space mission that employed unmodified off-the-shelf Internet protocols and technologies for end-to-end communications between the spacecraft/instruments and the ground system/users. The realistic elements used in the prototype included an RF communications link simulator and components of the TRIANA mission flight software and ground support system. A web-enabled camera connected to the spacecraft computer via an Ethernet LAN represented an on-board instrument creating image data. In addition to the protocols at the link layer (HDLC), transport layer (UDP, TCP), and network (IP) layer, a reliable file delivery protocol (MDP) at the application layer enabled reliable data delivery both to and from the spacecraft. The standard Network Time Protocol (NTP) performed on-board clock synchronization with a ground time standard. The demonstrations of the prototype mission illustrated some of the advantages of using Internet standards and technologies for space missions, but also helped identify issues that must be addressed. These issues include applicability to embedded real-time systems on flight-qualified hardware, range of applicability of TCP, and liability for and maintenance of commercial off-the-shelf (COTS) products. The NASA Earth Science Technology Office (ESTO) funded the collaboration to build and demonstrate the prototype IP mission.

  2. Towards realistic string vacua from branes at singularities

    NASA Astrophysics Data System (ADS)

    Conlon, Joseph P.; Maharana, Anshuman; Quevedo, Fernando

    2009-05-01

    We report on progress towards constructing string models incorporating both realistic D-brane matter content and moduli stabilisation with dynamical low-scale supersymmetry breaking. The general framework is that of local D-brane models embedded into the LARGE volume approach to moduli stabilisation. We review quiver theories on del Pezzo n (dPn) singularities including both D3 and D7 branes. We provide supersymmetric examples with three quark/lepton families and the gauge symmetries of the Standard, Left-Right Symmetric, Pati-Salam and Trinification models, without unwanted chiral exotics. We describe how the singularity structure leads to family symmetries governing the Yukawa couplings which may give mass hierarchies among the different generations. We outline how these models can be embedded into compact Calabi-Yau compactifications with LARGE volume moduli stabilisation, and state the minimal conditions for this to be possible. We study the general structure of soft supersymmetry breaking. At the singularity all leading order contributions to the soft terms (both gravity- and anomaly-mediation) vanish. We enumerate subleading contributions and estimate their magnitude. We also describe model-independent physical implications of this scenario. These include the masses of anomalous and non-anomalous U(1)'s and the generic existence of a new hyperweak force under which leptons and/or quarks could be charged. We propose that such a gauge boson could be responsible for the ghost muon anomaly recently found at the Tevatron's CDF detector.

  3. Experimental test of nonlocal realistic theories without the rotational symmetry assumption.

    PubMed

    Paterek, Tomasz; Fedrizzi, Alessandro; Gröblacher, Simon; Jennewein, Thomas; Zukowski, Marek; Aspelmeyer, Markus; Zeilinger, Anton

    2007-11-23

    We analyze the class of nonlocal realistic theories that was originally considered by Leggett [Found. Phys. 33, 1469 (2003)10.1023/A:1026096313729] and tested by us in a recent experiment [Nature (London) 446, 871 (2007)10.1038/nature05677]. We derive an incompatibility theorem that works for finite numbers of polarizer settings and that does not require the previously assumed rotational symmetry of the two-particle correlation functions. The experimentally measured case involves seven different measurement settings. Using polarization-entangled photon pairs, we exclude this broader class of nonlocal realistic models by experimentally violating a new Leggett-type inequality by 80 standard deviations.

  4. Estimating secular velocities from GPS data contaminated by postseismic motion at sites with limited pre-earthquake data

    NASA Astrophysics Data System (ADS)

    Murray, J. R.; Svarc, J. L.

    2016-12-01

    Constant secular velocities estimated from Global Positioning System (GPS)-derived position time series are a central input for modeling interseismic deformation in seismically active regions. Both postseismic motion and temporally correlated noise produce long-period signals that are difficult to separate from secular motion and can bias velocity estimates. For GPS sites installed post-earthquake it is especially challenging to uniquely estimate velocities and postseismic signals and to determine when the postseismic transient has decayed sufficiently to enable use of subsequent data for estimating secular rates. Within 60 km of the 2003 M6.5 San Simeon and 2004 M6 Parkfield earthquakes in California, 16 continuous GPS sites (group 1) were established prior to mid-2001, and 52 stations (group 2) were installed following the events. We use group 1 data to investigate how early in the post-earthquake time period one may reliably begin using group 2 data to estimate velocities. For each group 1 time series, we obtain eight velocity estimates using observation time windows with successively later start dates (2006 - 2013) and a parameterization that includes constant velocity, annual, and semi-annual terms but no postseismic decay. We compare these to velocities estimated using only pre-San Simeon data to find when the pre- and post-earthquake velocities match within uncertainties. To obtain realistic velocity uncertainties, for each time series we optimize a temporally correlated noise model consisting of white, flicker, random walk, and, in some cases, band-pass filtered noise contributions. Preliminary results suggest velocities can be reliably estimated using data from 2011 to the present. Ongoing work will assess velocity bias as a function of epicentral distance and length of post-earthquake time series as well as explore spatio-temporal filtering of detrended group 1 time series to provide empirical corrections for postseismic motion in group 2 time series.
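
    A parameterization matching the one described (constant velocity plus annual and semi-annual terms, no postseismic decay) can be fit by ordinary least squares, as in the hypothetical sketch below; realistic velocity uncertainties additionally require the correlated (flicker/random-walk) noise model mentioned in the abstract, which plain least squares ignores.

        import numpy as np

        def secular_velocity(t_yr: np.ndarray, pos_mm: np.ndarray) -> float:
            """Fit offset + trend + annual + semi-annual terms; return the trend (mm/yr)."""
            w = 2.0 * np.pi  # one cycle per year
            design = np.column_stack([
                np.ones_like(t_yr), t_yr,
                np.sin(w * t_yr), np.cos(w * t_yr),
                np.sin(2 * w * t_yr), np.cos(2 * w * t_yr),
            ])
            coeffs, *_ = np.linalg.lstsq(design, pos_mm, rcond=None)
            return float(coeffs[1])

        # Synthetic daily series: 25 mm/yr trend plus a 3 mm annual cycle and noise.
        t = np.arange(0.0, 8.0, 1.0 / 365.25)
        rng = np.random.default_rng(0)
        y = 25.0 * t + 3.0 * np.sin(2 * np.pi * t) + rng.normal(0.0, 2.0, t.size)
        print(secular_velocity(t, y))  # close to 25 mm/yr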

  5. International Management: Creating a More Realistic Global Planning Environment.

    ERIC Educational Resources Information Center

    Waldron, Darryl G.

    2000-01-01

    Discusses the need for realistic global planning environments in international business education, introducing a strategic planning model that has teams interacting with teams to strategically analyze a selected multinational company. This dynamic process must result in a single integrated written analysis that specifies an optimal strategy for…

  6. Conjugate problems of transport phenomena under quasi-steady microaccelerations in realistic spaceflight.

    PubMed

    Polezhaev, V I; Nikitin, S A

    2009-04-01

    A new model for spatial convective transport processes conjugated with the measured or calculated realistic quasi-steady microaccelerations is presented. Rotation around the mass center, including accelerated rotation, gravity gradient, and aerodynamical drag are taken into account. New results of the effect on mixing and concentration inhomogeneities of the elementary convective processes are presented. The mixing problem in spacecraft enclosures, concentration inhomogeneities due to convection induced by body forces in realistic spaceflight, and the coupling of this kind of convection with thermocapillary convection on the basis of this model are discussed.

  7. Realistic full wave modeling of focal plane array pixels

    DOE PAGES

    Campione, Salvatore; Warne, Larry K.; Jorgenson, Roy E.; ...

    2017-11-01

    Here, we investigate full-wave simulations of realistic implementations of multifunctional nanoantenna-enabled detectors (NEDs). We focus on a 2x2 pixelated array structure that supports two wavelengths of operation. We design each resonating structure independently using full-wave simulations with periodic boundary conditions mimicking the whole infinite array. We then construct a supercell made of a 2x2 pixelated array with periodic boundary conditions mimicking the full NED; in this case, however, each pixel comprises 10-20 antennas per side. In this way, the cross-talk between contiguous pixels is accounted for in our simulations. We observe that, even though there are finite extent effects, the pixels work as designed, each responding at the respective wavelength of operation. This allows us to stress that realistic simulations of multifunctional NEDs need to be performed to verify the design functionality by taking into account finite extent and cross-talk effects.

  8. Neuronize: a tool for building realistic neuronal cell morphologies

    PubMed Central

    Brito, Juan P.; Mata, Susana; Bayona, Sofia; Pastor, Luis; DeFelipe, Javier; Benavides-Piccione, Ruth

    2013-01-01

    This study presents a tool, Neuronize, for building realistic three-dimensional models of neuronal cells from the morphological information extracted through computer-aided tracing applications. Neuronize consists of a set of methods designed to build 3D neural meshes that approximate the cell membrane at different resolution levels, allowing a balance to be reached between the complexity and the quality of the final model. The main contribution of the present study is the proposal of a novel approach to build a realistic and accurate 3D shape of the soma from the incomplete information stored in the digitally traced neuron, which usually consists of a 2D cell body contour. This technique is based on the deformation of an initial shape driven by the position and thickness of the first order dendrites. The addition of a set of spines along the dendrites completes the model, building a final 3D neuronal cell suitable for its visualization in a wide range of 3D environments. PMID:23761740

  9. Neuronize: a tool for building realistic neuronal cell morphologies.

    PubMed

    Brito, Juan P; Mata, Susana; Bayona, Sofia; Pastor, Luis; Defelipe, Javier; Benavides-Piccione, Ruth

    2013-01-01

    This study presents a tool, Neuronize, for building realistic three-dimensional models of neuronal cells from the morphological information extracted through computer-aided tracing applications. Neuronize consists of a set of methods designed to build 3D neural meshes that approximate the cell membrane at different resolution levels, allowing a balance to be reached between the complexity and the quality of the final model. The main contribution of the present study is the proposal of a novel approach to build a realistic and accurate 3D shape of the soma from the incomplete information stored in the digitally traced neuron, which usually consists of a 2D cell body contour. This technique is based on the deformation of an initial shape driven by the position and thickness of the first order dendrites. The addition of a set of spines along the dendrites completes the model, building a final 3D neuronal cell suitable for its visualization in a wide range of 3D environments.

  10. Coupling of Realistic Rate Estimates with Genomics for Assessing Contaminant Attenuation and Long-Term Plume Containment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colwell, F.S.; Crawford, R.L.; Sorenson, K.

    2005-09-01

    Acceptance of monitored natural attenuation (MNA) as a preferred treatment technology saves significant site restoration costs for DOE. However, in order to be accepted, MNA requires direct evidence of which processes are responsible for the contaminant loss and also the rates of the contaminant loss. Our proposal aims to: 1) provide evidence for one example of MNA, namely the disappearance of the dissolved trichloroethylene (TCE) from the Snake River Plain aquifer (SRPA) at the Idaho National Laboratory’s Test Area North (TAN) site, 2) determine the rates at which aquifer microbes can co-metabolize TCE, and 3) determine whether there are other examples of natural attenuation of chlorinated solvents occurring at DOE sites. To this end, our research has several objectives. First, we have conducted studies to characterize the microbial processes that are likely responsible for the co-metabolic destruction of TCE in the aquifer at TAN (University of Idaho and INL). Second, we are investigating realistic rates of TCE co-metabolism at the low catabolic activities typical of microorganisms existing under aquifer conditions (INL). Using the co-metabolism rate parameters derived in low-growth bioreactors, we will complete the models that predict the time until background levels of TCE are attained in the aquifer at TAN and validate the long-term stewardship of this plume. Coupled with the research on low catabolic activities of co-metabolic microbes, we are determining the patterns of functional gene expression by these cells, patterns that may be used to diagnose the co-metabolic activity in the SRPA or other aquifers.

  11. Comparison of maximum runup through analytical and numerical approaches for different fault parameters estimates

    NASA Astrophysics Data System (ADS)

    Kanoglu, U.; Wronna, M.; Baptista, M. A.; Miranda, J. M. A.

    2017-12-01

    The one-dimensional analytical runup theory, in combination with near-shore synthetic waveforms, is a promising tool for tsunami rapid early warning systems. Its application to realistic cases with complex bathymetry and initial wave conditions from inverse modelling has shown that maximum runup values can be estimated reasonably well. In this study we generate simplified bathymetric domains that resemble realistic near-shore features. We investigate the sensitivity of the analytical runup formulae to variations in fault source parameters and near-shore bathymetric features. To do this we systematically vary the fault plane parameters to compute the initial tsunami wave condition. Subsequently, we use the initial conditions to run the numerical tsunami model on a coupled system of four nested grids and compare the results to the analytical estimates. Varying the dip angle of the fault plane showed that the analytical estimates differ by less than 10% for angles of 5-45 degrees in a simple bathymetric domain. These results show that the use of analytical formulae for fast runup estimates constitutes a very promising approach in a simple bathymetric domain and might be implemented in hazard mapping and early warning.

  12. A comparison of low back kinetic estimates obtained through posture matching, rigid link modeling and an EMG-assisted model.

    PubMed

    Parkinson, R J; Bezaire, M; Callaghan, J P

    2011-07-01

    This study examined errors introduced by a posture matching approach (3DMatch) relative to dynamic three-dimensional rigid link and EMG-assisted models. Eighty-eight lifting trials of various combinations of heights (floor, 0.67, 1.2 m), asymmetry (left, right and center) and mass (7.6 and 9.7 kg) were videotaped while spine postures, ground reaction forces, segment orientations and muscle activations were documented and used to estimate joint moments and forces (L5/S1). Posture matching over-predicted peak and cumulative extension moment (p < 0.0001 for all variables). There was no difference between peak compression estimates obtained with posture matching or EMG-assisted approaches (p = 0.7987). Posture matching over-predicted cumulative (p < 0.0001) compressive loading due to a bias in standing; however, individualized bias correction eliminated the differences. Therefore, posture matching provides a method to analyze industrial lifting exposures that will predict kinetic values similar to those of more sophisticated models, provided necessary corrections are applied. Copyright © 2010 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  13. Understanding how appraisal of doctors produces its effects: a realist review protocol.

    PubMed

    Brennan, Nicola; Bryce, Marie; Pearson, Mark; Wong, Geoff; Cooper, Chris; Archer, Julian

    2014-06-23

    UK doctors are now required to participate in revalidation to maintain their licence to practise. Appraisal is a fundamental component of revalidation. However, objective evidence of appraisal changing doctors' behaviour and directly resulting in improved patient care is limited. In particular, it is not clear how the process of appraisal is supposed to change doctors' behaviour and improve clinical performance. The aim of this research is to understand how and why appraisal of doctors is supposed to produce its effect. Realist review is a theory-driven interpretive approach to evidence synthesis. It applies realist logic of inquiry to produce an explanatory analysis of an intervention, that is, what works, for whom, in what circumstances, in what respects. Using a realist review approach, an initial programme theory of appraisal will be developed by consulting with key stakeholders in doctors' appraisal in expert panels (ethical approval is not required), and by searching the literature to identify relevant existing theories. The search strategy will have a number of phases including a combination of: (1) electronic database searching, for example, EMBASE, MEDLINE, the Cochrane Library, ASSIA, (2) 'cited by' articles search, (3) citation searching, (4) contacting authors and (5) grey literature searching. The search for evidence will be iteratively extended and refocused as the review progresses. Studies will be included based on their ability to provide data that enable testing of the programme theory. Data extraction will be conducted, for example, by note taking and annotation at different review stages as is consistent with the realist approach. The evidence will be synthesised using realist logic to interrogate the final programme theory of the impact of appraisal on doctors' performance. The synthesis results will be written up according to RAMESES guidelines and disseminated through peer-reviewed publication and presentations. The protocol is registered with

  14. Understanding how appraisal of doctors produces its effects: a realist review protocol

    PubMed Central

    Brennan, Nicola; Bryce, Marie; Pearson, Mark; Wong, Geoff; Cooper, Chris; Archer, Julian

    2014-01-01

    Introduction UK doctors are now required to participate in revalidation to maintain their licence to practise. Appraisal is a fundamental component of revalidation. However, objective evidence of appraisal changing doctors’ behaviour and directly resulting in improved patient care is limited. In particular, it is not clear how the process of appraisal is supposed to change doctors’ behaviour and improve clinical performance. The aim of this research is to understand how and why appraisal of doctors is supposed to produce its effect. Methods and analysis Realist review is a theory-driven interpretive approach to evidence synthesis. It applies realist logic of inquiry to produce an explanatory analysis of an intervention, that is, what works, for whom, in what circumstances, in what respects. Using a realist review approach, an initial programme theory of appraisal will be developed by consulting with key stakeholders in doctors’ appraisal in expert panels (ethical approval is not required), and by searching the literature to identify relevant existing theories. The search strategy will have a number of phases including a combination of: (1) electronic database searching, for example, EMBASE, MEDLINE, the Cochrane Library, ASSIA, (2) ‘cited by’ articles search, (3) citation searching, (4) contacting authors and (5) grey literature searching. The search for evidence will be iteratively extended and refocused as the review progresses. Studies will be included based on their ability to provide data that enable testing of the programme theory. Data extraction will be conducted, for example, by note taking and annotation at different review stages as is consistent with the realist approach. The evidence will be synthesised using realist logic to interrogate the final programme theory of the impact of appraisal on doctors’ performance. The synthesis results will be written up according to RAMESES guidelines and disseminated through peer-reviewed publication and

  15. Simulating the value of electric-vehicle-grid integration using a behaviourally realistic model

    NASA Astrophysics Data System (ADS)

    Wolinetz, Michael; Axsen, Jonn; Peters, Jotham; Crawford, Curran

    2018-02-01

    Vehicle-grid integration (VGI) uses the interaction between electric vehicles and the electrical grid to provide benefits that may include reducing the cost of using intermittent renewable electricity or providing a financial incentive for electric vehicle ownership. However, studies that estimate the value of VGI benefits have largely ignored how consumer behaviour will affect the magnitude of the impact. Here, we simulate the long-term impact of VGI using behaviourally realistic and empirically derived models of vehicle adoption and charging combined with an electricity system model. We focus on the case where a central entity manages the charging rate and timing for participating electric vehicles. VGI is found not to increase the adoption of electric vehicles, but does have a small beneficial impact on electricity prices. By 2050, VGI reduces wholesale electricity prices by 0.6-0.7% (0.7 MWh-1, 2010 CAD) relative to an equivalent scenario without VGI. Excluding consumer behaviour from the analysis inflates the value of VGI.

  16. A GRASS GIS module to obtain an estimation of glacier behavior under climate change: A pilot study on Italian glacier

    NASA Astrophysics Data System (ADS)

    Strigaro, Daniele; Moretti, Massimiliano; Mattavelli, Matteo; Frigerio, Ivan; Amicis, Mattia De; Maggi, Valter

    2016-09-01

    The aim of this work is to integrate the Minimal Glacier Model in a Geographic Information System Python module in order to obtain spatial simulations of glacier retreat and to assess future scenarios with a spatial representation. Minimal Glacier Models are a simple yet effective way of estimating glacier response to climate fluctuations. This module can be useful for the scientific and glaciological community to evaluate glacier behavior driven by climate forcing. The module, called r.glacio.model, is developed in a GRASS GIS (GRASS Development Team, 2016) environment using the Python programming language combined with different libraries such as GDAL, OGR, CSV, math, etc. The module is applied and validated on the Rutor glacier, a glacier in the south-western region of the Italian Alps. This glacier is very large in size and features rather regular and lively dynamics. The simulation is calibrated by reconstructing the 3-dimensional flow line dynamics and analyzing the difference between the simulated flow line length variations and the observed glacier fronts coming from orthophotos and DEMs. These simulations are driven by the past mass balance record. Afterwards, the future assessment is estimated using climatic drivers provided by a set of General Circulation Models participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5) effort. The approach devised in r.glacio.model can be applied to most alpine glaciers to obtain a first-order spatial representation of glacier behavior under climate change.
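
    As a rough illustration of how a minimal glacier model turns a mass-balance series into front-length variations, the sketch below uses a generic linear-response formulation with placeholder parameters; it is not the formulation implemented in r.glacio.model and is not calibrated to the Rutor glacier.

        import numpy as np

        def glacier_length(mass_balance_m_we, L0=9000.0, dL_dB=2500.0, tau_yr=30.0):
            """Toy linear-response model: the equilibrium length shifts with the annual
            mass balance (m w.e.) and the terminus relaxes toward it over tau_yr years."""
            L = L0
            out = []
            for B in mass_balance_m_we:
                L_eq = L0 + dL_dB * B       # equilibrium length for this year's balance (m)
                L += (L_eq - L) / tau_yr    # one-year relaxation step
                out.append(L)
            return np.array(out)

        # A century of slightly negative balances produces a progressive retreat.
        balances = np.full(100, -0.4)
        print(glacier_length(balances)[[0, 49, 99]].round(0))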

  17. Critical Realist Review: Exploring the Real, beyond the Empirical

    ERIC Educational Resources Information Center

    Edgley, Alison; Stickley, Theodore; Timmons, Stephen; Meal, Andy

    2016-01-01

    This article defines the "critical realist review", a literature-based methodological approach to critical analysis of health care studies (or any discipline charged with social interventions) that is robust, insightful and essential for the complexities of twenty-first century evidence-based health and social care. We argue that this…

  18. Realist Evaluation: An Emerging Theory in Support of Practice.

    ERIC Educational Resources Information Center

    Henry, Gary T., Ed.; Julnes, George, Ed.; Mark, Melvin M., Ed.

    1998-01-01

    The five articles of this sourcebook, organized around the five-component framework for evaluation described by W. Shadish, T. Cook, and L. Leviton (1991), present a new theory of realist evaluation that captures the sensemaking contributions of postpositivism and the sensitivity to values from the constructivist traditions. (SLD)

  19. Improving Mathematics Teaching in Kindergarten with Realistic Mathematical Education

    ERIC Educational Resources Information Center

    Papadakis, Stamatios; Kalogiannakis, Michail; Zaranis, Nicholas

    2017-01-01

    The present study investigates and compares the influence of teaching Realistic Mathematics on the development of mathematical competence in kindergarten. The sample consisted of 231 Greek kindergarten students. For the implementation of the survey, we conducted an intervention, which included one experimental and one control group. Children in…

  20. A conformally flat realistic anisotropic model for a compact star

    NASA Astrophysics Data System (ADS)

    Ivanov, B. V.

    2018-04-01

    A physically realistic stellar model with a simple expression for the energy density and conformally flat interior is found. The relations between the different conditions are used without graphic proofs. It may represent a real pulsar.

  1. The three stages of building and testing mid-level theories in a realist RCT: a theoretical and methodological case-example.

    PubMed

    Jamal, Farah; Fletcher, Adam; Shackleton, Nichola; Elbourne, Diana; Viner, Russell; Bonell, Chris

    2015-10-15

    Randomised controlled trials (RCTs) of social interventions are often criticised as failing to open the 'black box' whereby they only address questions about 'what works' without explaining the underlying processes of implementation and mechanisms of action, and how these vary by contextual characteristics of person and place. Realist RCTs are proposed as an approach to evaluation science that addresses these gaps while preserving the strengths of RCTs in providing evidence with strong internal validity in estimating effects. In the context of growing interest in designing and conducting realist trials, there is an urgent need to offer a worked example to provide guidance on how such an approach might be practically taken forward. The aim of this paper is to outline a three-staged theoretical and methodological process of undertaking a realist RCT using the example of the evaluation of a whole-school restorative intervention aiming to reduce aggression and bullying in English secondary schools. First, informed by the findings of our initial pilot trial and sociological theory, we elaborate our theory of change and specific a priori hypotheses about how intervention mechanisms interact with context to produce outcomes. Second, we describe how we will use emerging findings from the integral process evaluation within the RCT to refine, and add to, these a priori hypotheses before the collection of quantitative, follow-up data. Third, we will test our hypotheses using a combination of process and outcome data via quantitative analyses of effect mediation (examining mechanisms) and moderation (examining contextual contingencies). The results are then used to refine and further develop the theory of change. The aim of the realist RCT approach is thus not merely to assess whether the intervention is effective or not, but to develop empirically informed mid-range theory through a three-stage process. There are important implications for those involved with reporting and

  2. Protocol—the RAMESES II study: developing guidance and reporting standards for realist evaluation

    PubMed Central

    Greenhalgh, Trisha; Wong, Geoff; Jagosh, Justin; Greenhalgh, Joanne; Manzano, Ana; Westhorp, Gill; Pawson, Ray

    2015-01-01

    Introduction Realist evaluation is an increasingly popular methodology in health services research. For realist evaluations (RE) this project aims to: develop quality and reporting standards and training materials; build capacity for undertaking and critically evaluating them; produce resources and training materials for lay participants, and those seeking to involve them. Methods To achieve our aims, we will: (1) Establish management and governance infrastructure; (2) Recruit an interdisciplinary Delphi panel of 35 participants with diverse relevant experience of RE; (3) Summarise current literature and expert opinion on best practice in RE; (4) Run an online Delphi panel to generate and refine items for quality and reporting standards; (5) Capture ‘real world’ experiences and challenges of RE, for example, by providing ongoing support to realist evaluations, hosting the RAMESES JISCmail list on realist research, and feeding problems and insights from these into the deliberations of the Delphi panel; (6) Produce quality and reporting standards; (7) Collate examples of the learning and training needs of researchers, students, reviewers and lay members in relation to RE; (8) Develop, deliver and evaluate training materials for RE and deliver training workshops; (9) Develop and evaluate information and resources for patients and other lay participants in RE (eg, draft template information sheets and model consent forms); and (10) Disseminate training materials and other resources. Planned outputs: (1) Quality and reporting standards and training materials for RE. (2) Methodological support for RE. (3) Increase in capacity to support and evaluate RE. (4) Accessible, plain-English resources for patients and the public participating in RE. Discussion Realist evaluation is a relatively new approach to evaluation and its overall place is not yet fully established. As with all primary research approaches, guidance on quality assurance and uniform

  3. A fast analytical undulator model for realistic high-energy FEL simulations

    NASA Astrophysics Data System (ADS)

    Tatchyn, R.; Cremer, T.

    1997-02-01

    A number of leading FEL simulation codes used for modeling gain in the ultralong undulators required for SASE saturation in the <100 Å range employ simplified analytical models both for field and error representations. Although it is recognized that both the practical and theoretical validity of such codes could be enhanced by incorporating realistic undulator field calculations, the computational cost of doing this can be prohibitive, especially for point-to-point integration of the equations of motion through each undulator period. In this paper we describe a simple analytical model suitable for modeling realistic permanent magnet (PM), hybrid/PM, and non-PM undulator structures, and discuss selected techniques for minimizing computation time.

  4. Realistic Gamow shell model for resonance and continuum in atomic nuclei

    NASA Astrophysics Data System (ADS)

    Xu, F. R.; Sun, Z. H.; Wu, Q.; Hu, B. S.; Dai, S. J.

    2018-02-01

    The Gamow shell model can describe resonance and continuum states in atomic nuclei. The model is established in the complex-momentum (complex-k) plane of the Berggren coordinates, in which bound, resonant and continuum states are treated on an equal footing self-consistently. In the present work, the realistic nuclear force, CD-Bonn, has been used. We have developed the full \hat{Q}-box folded-diagram method to derive the realistic effective interaction in a model space which is nondegenerate and contains resonance and continuum channels. The CD-Bonn potential is renormalized using the V low-k method. Choosing 16O as the inert core, we have applied the Gamow shell model to the oxygen isotopes.

  5. An evaluation of data-driven motion estimation in comparison to the usage of external-surrogates in cardiac SPECT imaging

    PubMed Central

    Mukherjee, Joyeeta Mitra; Hutton, Brian F; Johnson, Karen L; Pretorius, P Hendrik; King, Michael A

    2014-01-01

    Motion estimation methods in single photon emission computed tomography (SPECT) can be classified into methods which depend on just the emission data (data-driven), or those that use some other source of information such as an external surrogate. The surrogate-based methods estimate the motion exhibited externally which may not correlate exactly with the movement of organs inside the body. The accuracy of data-driven strategies on the other hand is affected by the type and timing of motion occurrence during acquisition, the source distribution, and various degrading factors such as attenuation, scatter, and system spatial resolution. The goal of this paper is to investigate the performance of two data-driven motion estimation schemes based on the rigid-body registration of projections of motion-transformed source distributions to the acquired projection data for cardiac SPECT studies. Comparison is also made of six intensity based registration metrics to an external surrogate-based method. In the data-driven schemes, a partially reconstructed heart is used as the initial source distribution. The partially-reconstructed heart has inaccuracies due to limited angle artifacts resulting from using only a part of the SPECT projections acquired while the patient maintained the same pose. The performance of different cost functions in quantifying consistency with the SPECT projection data in the data-driven schemes was compared for clinically realistic patient motion occurring as discrete pose changes, one or two times during acquisition. The six intensity-based metrics studied were mean-squared difference (MSD), mutual information (MI), normalized mutual information (NMI), pattern intensity (PI), normalized cross-correlation (NCC) and entropy of the difference (EDI). Quantitative and qualitative analysis of the performance is reported using Monte-Carlo simulations of a realistic heart phantom including degradation factors such as attenuation, scatter and system spatial

  6. Magical Realist Pathways into and under the Psychotherapeutic Imaginary

    ERIC Educational Resources Information Center

    Speedy, Jane

    2011-01-01

    My experience of people's life stories from my work as a narrative therapist consistently destabilised distinctions between imagined/magical and real experiences. I came to realise that the day-to-day magical realist juxtapositions I came upon were encounters with people's daily lives, as lived, that have remained unacknowledged within the…

  7. Creating photo-realistic works in a 3D scene using layers styles to create an animation

    NASA Astrophysics Data System (ADS)

    Avramescu, A. M.

    2015-11-01

    Creating realistic objects in a 3D scene is not an easy task; the creation must be very detailed. Even without prior experience in producing photo-realistic work, the right techniques and a good reference photo make it possible to create a remarkable amount of detail and realism. This article presents some of these detailed methods, from which the techniques necessary to make beautiful and realistic objects in a scene can be learned. More precisely, we present how to create a 3D animated scene, mainly using the Pen Tool and Blending Options. The work is based on teaching simple ways of using Layer Styles to create convincing shadows, lights and textures and a realistic sense of three dimensions. It also shows how the illuminating and rendering options can be used to create a realistic effect in a scene, and how to create photo-realistic 3D models from a digital image. The present work describes how to use Illustrator paths, texturing, basic lighting and rendering, how to apply textures, and how to parent the building and object components. We also propose using this approach to recreate smaller details or 3D objects from a 2D image. After a critical art-review stage, we present the architecture of a design method for creating an animation. The aim is to provide a conceptual and methodological tutorial that addresses this issue both scientifically and in practice. This objective also includes proposing, on a strong scientific basis, a model that gives a better understanding of the techniques necessary to create a realistic animation.

  8. Modeling of ultrasonic wave propagation in composite laminates with realistic discontinuity representation.

    PubMed

    Zelenyak, Andreea-Manuela; Schorer, Nora; Sause, Markus G R

    2018-02-01

    This paper presents a method for embedding realistic defect geometries of a fiber reinforced material in a finite element modeling environment in order to simulate active ultrasonic inspection. When ultrasonic inspection is used experimentally to investigate the presence of defects in composite materials, the microscopic defect geometry may cause signal characteristics that are difficult to interpret. Hence, modeling of this interaction is key to improving our understanding and interpretation of the acquired ultrasonic signals. To model the true interaction of the ultrasonic wave field with such defect structures as pores, cracks or delamination, a realistic three-dimensional geometry reconstruction is required. We present a 3D-image based reconstruction process which converts computed tomography data into adequate surface representations ready to be embedded for processing with finite element methods. Subsequent modeling using these geometries uses a multi-scale and multi-physics simulation approach which results in quantitative A-Scan ultrasonic signals which can be directly compared with experimental signals. Therefore, besides the properties of the composite material, a full transducer implementation, piezoelectric conversion and simultaneous modeling of the attached circuit are applied. Comparison between simulated and experimental signals provides very good agreement in electrical voltage amplitude and the signal arrival time and thus validates the proposed modeling approach. Simulating ultrasound wave propagation in a medium with a realistic shape of the geometry clearly shows a difference in how the disturbance of the waves takes place and finally allows more realistic modeling of A-scans. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Calibration of phoswich-based lung counting system using realistic chest phantom.

    PubMed

    Manohari, M; Mathiyarasu, R; Rajagopal, V; Meenakshisundaram, V; Indira, R

    2011-03-01

    A phoswich detector, housed inside a low-background steel room and coupled with state-of-the-art pulse shape discrimination (PSD) electronics, was recently established at the Radiological Safety Division of IGCAR for in vivo monitoring of actinides. The various parameters of the PSD electronics were optimised to achieve efficient background reduction in low-energy regions. The PSD with optimised parameters reduced the steel room background from 9.5 to 0.28 cps in the 17 keV region and from 5.8 to 0.3 cps in the 60 keV region. The Figure of Merit for the timing spectrum of the system is 3.0. The true signal loss due to PSD was found to be less than 2%. The phoswich system was calibrated with the Lawrence Livermore National Laboratory realistic chest phantom loaded with a (241)Am-tagged lung set. Calibration factors for varying chest wall composition and chest wall thickness, in terms of muscle-equivalent chest wall thickness, were established. The (241)Am activity in the JAERI phantom, which was received as part of an IAEA inter-comparison exercise, was estimated. This paper presents the optimisation of the PSD electronics and the salient results of the calibration.

  10. New method for estimating low-earth-orbit collision probabilities

    NASA Technical Reports Server (NTRS)

    Vedder, John D.; Tabor, Jill L.

    1991-01-01

    An unconventional but general method is described for estimating the probability of collision between an earth-orbiting spacecraft and orbital debris. This method uses a Monte Carlo simulation of the orbital motion of the target spacecraft and each discrete debris object to generate an empirical set of distances, each distance representing the separation between the spacecraft and the nearest debris object at random times. Using concepts from the asymptotic theory of extreme order statistics, an analytical density function is fitted to this set of minimum distances. From this function, it is possible to generate realistic collision estimates for the spacecraft.
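
    The estimation step described here, fitting an analytical density to a set of Monte Carlo minimum distances and reading off a small-separation probability, can be sketched as below; the gamma-distributed "simulation output" and the 10 m threshold are placeholders standing in for the actual orbital propagation.

        import numpy as np
        from scipy.stats import weibull_min

        rng = np.random.default_rng(1)

        # Stand-in for the Monte Carlo orbital simulation: sampled distances (km)
        # from the spacecraft to the nearest debris object at random times.
        min_distances_km = rng.gamma(shape=4.0, scale=5.0, size=20_000)

        # Fit an extreme-value-type density for minima (here a Weibull) and evaluate
        # the probability that the nearest object falls within a 10 m radius.
        c, loc, scale = weibull_min.fit(min_distances_km, floc=0.0)
        p_within_10m = weibull_min.cdf(0.01, c, loc=loc, scale=scale)
        print(f"P(nearest debris within 10 m) ~ {p_within_10m:.1e}")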

  11. A realist evaluation of the management of a well-performing regional hospital in Ghana

    PubMed Central

    2010-01-01

    Background Realist evaluation offers an interesting approach to evaluation of interventions in complex settings, but has been little applied in health care. We report on a realist case study of a well performing hospital in Ghana and show how such a realist evaluation design can help to overcome the limited external validity of a traditional case study. Methods We developed a realist evaluation framework for hypothesis formulation, data collection, data analysis and synthesis of the findings. Focusing on the role of human resource management in hospital performance, we formulated our hypothesis around the high commitment management concept. Mixed methods were used in data collection, including individual and group interviews, observations and document reviews. Results We found that the human resource management approach (the actual intervention) included induction of new staff, training and personal development, good communication and information sharing, and decentralised decision-making. We identified 3 additional practices: ensuring optimal physical working conditions, access to top managers and managers' involvement on the work floor. Teamwork, recognition and trust emerged as key elements of the organisational climate. Interviewees reported high levels of organisational commitment. The analysis unearthed perceived organisational support and reciprocity as underlying mechanisms that link the management practices with commitment. Methodologically, we found that realist evaluation can be fruitfully used to develop detailed case studies that analyse how management interventions work and in which conditions. Analysing the links between intervention, mechanism and outcome increases the explaining power, while identification of essential context elements improves the usefulness of the findings for decision-makers in other settings (external validity). We also identified a number of practical difficulties and priorities for further methodological development

  12. A realist evaluation of the management of a well-performing regional hospital in Ghana.

    PubMed

    Marchal, Bruno; Dedzo, McDamien; Kegels, Guy

    2010-01-25

    Realist evaluation offers an interesting approach to evaluation of interventions in complex settings, but has been little applied in health care. We report on a realist case study of a well performing hospital in Ghana and show how such a realist evaluation design can help to overcome the limited external validity of a traditional case study. We developed a realist evaluation framework for hypothesis formulation, data collection, data analysis and synthesis of the findings. Focusing on the role of human resource management in hospital performance, we formulated our hypothesis around the high commitment management concept. Mixed methods were used in data collection, including individual and group interviews, observations and document reviews. We found that the human resource management approach (the actual intervention) included induction of new staff, training and personal development, good communication and information sharing, and decentralised decision-making. We identified 3 additional practices: ensuring optimal physical working conditions, access to top managers and managers' involvement on the work floor. Teamwork, recognition and trust emerged as key elements of the organisational climate. Interviewees reported high levels of organisational commitment. The analysis unearthed perceived organisational support and reciprocity as underlying mechanisms that link the management practices with commitment. Methodologically, we found that realist evaluation can be fruitfully used to develop detailed case studies that analyse how management interventions work and in which conditions. Analysing the links between intervention, mechanism and outcome increases the explaining power, while identification of essential context elements improves the usefulness of the findings for decision-makers in other settings (external validity). We also identified a number of practical difficulties and priorities for further methodological development. This case suggests that a well

  13. The Development of Semantic Knowledge Systems for Realistic Goals.

    ERIC Educational Resources Information Center

    Goldman, Susan R.

    This study investigates age differences in children's semantic expectations regarding causal relations in stories about three realistic goal situations (being friendly, getting a dog, and doing chores). Twenty children at each of three age levels (ages 6, 9, and 12) were asked to produce stories and answer probe questions about wanting and not…

  14. Comparison of estimates of left ventricular ejection fraction obtained from gated blood pool imaging, different software packages and cameras.

    PubMed

    Steyn, Rachelle; Boniaszczuk, John; Geldenhuys, Theodore

    2014-01-01

    The aim was to determine how two software packages, supplied by Siemens and Hermes, for processing gated blood pool (GBP) studies should be used in our department and whether the use of different cameras for the acquisition of raw data influences the results. The study had two components. For the first component, 200 studies were acquired on a General Electric (GE) camera and processed three times by three operators using the Siemens and Hermes software packages. For the second part, 200 studies were acquired on two different cameras (GE and Siemens). The matched pairs of raw data were processed by one operator using the Siemens and Hermes software packages. The Siemens method consistently gave estimates that were 4.3% higher than the Hermes method (p < 0.001). The differences were not associated with any particular level of left ventricular ejection fraction (LVEF). There was no difference in the estimates of LVEF obtained by the three operators (p = 0.1794). The reproducibility of estimates was good. In 95% of patients, using the Siemens method, the SD of the three estimates of LVEF by operator 1 was ≤ 1.7, operator 2 was ≤ 2.1 and operator 3 was ≤ 1.3. The corresponding values for the Hermes method were ≤ 2.5, ≤ 2.0 and ≤ 2.1. There was no difference in the results of matched pairs of data acquired on different cameras (p = 0.4933). CONCLUSION: Software packages for processing GBP studies are not interchangeable. The report should include the name and version of the software package used. Wherever possible, the same package should be used for serial studies. If this is not possible, the report should include the limits of agreement of the different packages. Data acquisition on different cameras did not influence the results.
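
    One standard way to report the "limits of agreement" mentioned in the conclusion is a Bland-Altman analysis of paired estimates; the helper below is a generic sketch with hypothetical numbers, not code or data from the study.

        import numpy as np

        def limits_of_agreement(lvef_a, lvef_b):
            """Bland-Altman bias and 95% limits of agreement between paired LVEF estimates."""
            diff = np.asarray(lvef_a, float) - np.asarray(lvef_b, float)
            bias = diff.mean()
            sd = diff.std(ddof=1)
            return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

        # Hypothetical paired estimates (%) from two software packages.
        siemens = [55.0, 62.0, 48.0, 70.0, 59.0]
        hermes = [51.0, 57.0, 45.0, 65.0, 55.0]
        print(limits_of_agreement(siemens, hermes))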

  15. Realistic Many-Body Quantum Systems vs. Full Random Matrices: Static and Dynamical Properties

    NASA Astrophysics Data System (ADS)

    Karp, Jonathan; Torres-Herrera, Jonathan; Távora, Marco; Santos, Lea

    We study the static and dynamical properties of isolated spin 1/2 systems as prototypes of many-body quantum systems and compare the results to those of full random matrices from a Gaussian orthogonal ensemble. Full random matrices do not represent realistic systems, because they imply that all particles interact at the same time, as opposed to realistic Hamiltonians, which are sparse and have only few-body interactions. Nevertheless, with full random matrices we can derive analytical results that can be used as references and bounds for the corresponding properties of realistic systems. In particular, we show that the results for the Shannon information entropy are very similar to those for the von Neumann entanglement entropy, with the former being computationally less expensive. We also discuss the behavior of the survival probability of the initial state at different time scales and show that it contains more information about the system than the entropies. Support from the NSF Grant No. DMR-1147430.
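
    The comparison of eigenstate Shannon entropies against the full-random-matrix limit can be reproduced in a few lines; the sketch below draws a single GOE matrix and compares the mean entropy with the commonly quoted GOE average of roughly ln(0.48 D). The matrix size is an arbitrary choice, not one from the study.

        import numpy as np

        rng = np.random.default_rng(2)
        D = 512

        # One full random matrix from the Gaussian orthogonal ensemble (GOE).
        A = rng.standard_normal((D, D))
        H = (A + A.T) / 2.0
        _, eigvecs = np.linalg.eigh(H)

        # Shannon information entropy of each eigenstate in the computational basis.
        p = eigvecs ** 2
        shannon = -(p * np.log(p + 1e-300)).sum(axis=0)

        print(shannon.mean(), np.log(0.48 * D))  # mean entropy vs. the GOE estimate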

  16. Blind estimation of blur in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Zhang, Mo; Vozel, Benoit; Chehdi, Kacem; Uss, Mykhail; Abramov, Sergey; Lukin, Vladimir

    2017-10-01

    Hyperspectral images acquired by remote sensing systems are generally degraded by noise and can sometimes be more severely degraded by blur. When no knowledge is available about the degradations present in the original image, only blind restoration methods can be considered. By blind, we mean no knowledge of the blur point spread function (PSF), the original latent channel, or the noise level. In this study, we address the blind restoration of the degraded channels component-wise, according to a sequential scheme. For each degraded channel, the sequential scheme estimates the blur PSF in a first stage and deconvolves the degraded channel in a second and final stage using the previously estimated PSF. We propose a new component-wise blind method for effectively and accurately estimating the blur PSF. This method follows recent approaches suggesting the detection, selection and use of sufficiently salient edges in the currently processed channel to support the regularized blur PSF estimation. Several modifications are beneficially introduced in our work. A new selection of salient edges, obtained by adequately thresholding the cumulative distribution of their gradient magnitudes, is introduced. In addition, quasi-automatic and spatially adaptive tuning of the involved regularization parameters is considered. To prove the applicability and higher efficiency of the proposed method, we compare it against the method it originates from and four representative edge-sparsifying regularized methods from the literature already assessed in a previous work. Our attention is mainly paid to the objective analysis (via the l1-norm) of the blur PSF estimation error. The tests are performed on a synthetic hyperspectral image. This synthetic hyperspectral image has been built from various samples from classified areas of a real-life hyperspectral image, in order to benefit from realistic spatial
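
    The edge-selection step described above can be sketched as keeping only pixels whose gradient magnitude lies in the upper tail of the channel's gradient-magnitude distribution; the percentile cutoff and the synthetic test channel below are illustrative assumptions, not the tuned values from the paper.

```python
# Minimal sketch: select salient edges by thresholding the cumulative
# distribution of gradient magnitudes in one channel. The 99th-percentile
# cutoff is an illustrative assumption, not the paper's tuned value.
import numpy as np

def salient_edge_mask(channel, percentile=99.0):
    gy, gx = np.gradient(channel.astype(float))
    gmag = np.hypot(gx, gy)
    threshold = np.percentile(gmag, percentile)  # upper tail of the distribution
    return gmag >= threshold

# Illustrative use on a synthetic channel containing a sharp step edge:
channel = np.zeros((64, 64))
channel[:, 32:] = 1.0
mask = salient_edge_mask(channel)
print("selected edge pixels:", int(mask.sum()))
```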

  17. Steering Microbubbles in Physiologically Realistic Flows Using the Bjerknes Force

    NASA Astrophysics Data System (ADS)

    Clark, Alicia; Aliseda, Alberto

    2017-11-01

    Ultrasound contrast agents (UCAs) are lipid-coated microbubbles that are used to increase contrast in ultrasound imaging due to their ability to scatter sound. Additionally, UCAs can be used in conjunction with ultrasound in medical applications such as targeted drug delivery and thrombolysis. These applications utilize the Bjerknes force, an ultrasound-induced force caused by the phase difference between the incoming ultrasound pressure wave and the microbubble volume oscillations. The dynamics of microbubbles under ultrasound excitation have been studied thoroughly in stagnant fluid baths; however, understanding of the fundamental physics of microbubbles in physiologically realistic flows is lacking. An in vitro experiment that reproduces the dynamics (Reynolds and Womersley numbers) of a medium-sized blood vessel was used to explore the behavior of microbubbles. Using Lagrangian tracking, the trajectory of each individual bubble was reconstructed using information obtained from high-speed imaging. The balance of hydrodynamic forces (lift, drag, added mass, etc.) against the primary Bjerknes force was analyzed. The results show that the Bjerknes force increases linearly with the ultrasound pulse repetition frequency and quadratically with the amplitude of the excitation.

  18. Effects of nasal drug delivery device and its orientation on sprayed particle deposition in a realistic human nasal cavity.

    PubMed

    Tong, Xuwen; Dong, Jingliang; Shang, Yidan; Inthavong, Kiao; Tu, Jiyuan

    2016-10-01

    In this study, the effects of the nasal drug delivery device and spray nozzle orientation on sprayed droplet deposition in a realistic human nasal cavity were numerically studied. Prior to the numerical investigation, an in-house designed automated actuation system representing the mean adult actuation force was developed to produce a realistic spray plume. The spray plume development was then filmed by a high-speed photography system, and spray characteristics such as spray cone angle, break-up length, and average droplet velocity were obtained through off-line image analysis. These experimental data were used as boundary conditions in the subsequent numerical spray simulations of a commercially available nasal spray device inserted into a realistic adult nasal passage with external facial features. By varying the particle release direction, the deposition fractions of selected particle sizes on the main nasal passage for targeted drug delivery were compared. The results demonstrated that the middle spray direction showed superior spray efficiency compared with the upper or lower directions, and that 10 µm agents were the most suitable particle size, as the majority of sprayed agents could be delivered to the targeted area, the main passage. This study elaborates a comprehensive approach to better understand the nasal spray mechanism and evaluate its performance for existing nasal delivery practices. The results of this study can assist the pharmaceutical industry in improving the current design of nasal drug delivery devices and ultimately benefit more patients through optimized medication delivery. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. On Obtaining Estimates of the Fraction of Missing Information from Full Information Maximum Likelihood

    ERIC Educational Resources Information Center

    Savalei, Victoria; Rhemtulla, Mijke

    2012-01-01

    Fraction of missing information λ_j is a useful measure of the impact of missing data on the quality of estimation of a particular parameter. This measure can be computed for all parameters in the model, and it communicates the relative loss of efficiency in the estimation of a particular parameter due to missing data. It has…

  20. Mice and the A-Bomb: Irradiation Systems for Realistic Exposure Scenarios.

    PubMed

    Garty, Guy; Xu, Yanping; Elliston, Carl; Marino, Stephen A; Randers-Pehrson, Gerhard; Brenner, David J

    2017-04-01

    Validation of biodosimetry assays is normally performed with acute exposures to uniform external photon fields. Realistically, exposure to a radiological dispersal device or reactor leak will include exposure to low dose rates and likely exposure to ingested radionuclides. An improvised nuclear device will likely include a significant neutron component in addition to a mixture of high- and low-dose-rate photons and ingested radionuclides. We present here several novel irradiation systems developed at the Center for High Throughput Minimally Invasive Radiation Biodosimetry to provide more realistic exposures for testing of novel biodosimetric assays. These irradiators provide a wide range of dose rates (from Gy/s to Gy/week) as well as mixed neutron/photon fields mimicking an improvised nuclear device.

  1. Mice and the A-Bomb: Irradiation Systems for Realistic Exposure Scenarios

    PubMed Central

    Garty, Guy; Xu, Yanping; Elliston, Carl; Marino, Stephen A.; Randers-Pehrson, Gerhard; Brenner, David J.

    2017-01-01

    Validation of biodosimetry assays is normally performed with acute exposures to uniform external photon fields. Realistically, exposure to a radiological dispersal device or reactor leak will include exposure to low dose rates and likely exposure to ingested radionuclides. An improvised nuclear device will likely include a significant neutron component in addition to a mixture of high- and low-dose-rate photons and ingested radionuclides. We present here several novel irradiation systems developed at the Center for High Throughput Minimally Invasive Radiation Biodosimetry to provide more realistic exposures for testing of novel biodosimetric assays. These irradiators provide a wide range of dose rates (from Gy/s to Gy/week) as well as mixed neutron/photon fields mimicking an improvised nuclear device. PMID:28211757

  2. Optimized methods for epilepsy therapy development using an etiologically realistic model of focal epilepsy in the rat

    PubMed Central

    Eastman, Clifford L.; Fender, Jason S.; Temkin, Nancy R.; D’Ambrosio, Raimondo

    2015-01-01

    Conventionally developed antiseizure drugs fail to control epileptic seizures in about 30% of patients, and no treatment prevents epilepsy. New etiologically realistic, syndrome-specific epilepsy models are expected to identify better treatments by capturing currently unknown ictogenic and epileptogenic mechanisms that operate in the corresponding patient populations. Additionally, the use of electrocorticography permits better monitoring of epileptogenesis and the full spectrum of acquired seizures, including focal nonconvulsive seizures that are typically difficult to treat in humans. Thus, the combined use of etiologically realistic models and electrocorticography may improve our understanding of the genesis and progression of epilepsy, and facilitate discovery and translation of novel treatments. However, this approach is labor intensive and must be optimized. To this end, we used an etiologically realistic rat model of posttraumatic epilepsy, in which the initiating fluid percussion injury closely replicates contusive closed-head injury in humans, and has been adapted to maximize epileptogenesis and focal non-convulsive seizures. We obtained week-long 5-electrode electrocorticography 1 month post-injury, and used a Monte-Carlo-based non-parametric bootstrap strategy to test the impact of electrode montage design, duration-based seizure definitions, group size and duration of recordings on the assessment of posttraumatic epilepsy, and on statistical power to detect antiseizure and antiepileptogenic treatment effects. We found that use of seizure definition based on clinical criteria rather than event duration, and of recording montages closely sampling the activity of epileptic foci, maximize the power to detect treatment effects. Detection of treatment effects was marginally improved by prolonged recording, and 24 h recording epochs were sufficient to provide 80% power to detect clinically interesting seizure control or prevention of seizures with small groups
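
    The Monte-Carlo/bootstrap power analysis described above can be sketched as repeatedly resampling recorded seizure counts into simulated control and treated groups and counting how often a rank test detects an assumed treatment effect; the counts, the 50% effect size, and the choice of test below are illustrative assumptions, not values from the study.

```python
# Minimal sketch: bootstrap estimate of power to detect an antiseizure effect
# for a given group size. Seizure counts and the assumed 50% effect size are
# illustrative placeholders.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Seizures per 24 h epoch for a pool of untreated animals (placeholder values).
observed_counts = np.array([0, 2, 3, 5, 8, 1, 4, 0, 6, 12, 7, 3])

def estimated_power(n_per_group, effect=0.5, n_sim=2000, alpha=0.05):
    hits = 0
    for _ in range(n_sim):
        control = rng.choice(observed_counts, size=n_per_group, replace=True)
        # Assumed treatment: each seizure independently suppressed with probability `effect`.
        treated = rng.binomial(rng.choice(observed_counts, size=n_per_group, replace=True),
                               1.0 - effect)
        if mannwhitneyu(control, treated, alternative="greater").pvalue < alpha:
            hits += 1
    return hits / n_sim

for n in (6, 8, 12):
    print(f"group size {n}: estimated power = {estimated_power(n):.2f}")
```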

  3. An Algorithm for Obtaining the Distribution of 1-Meter Lightning Channel Segment Altitudes for Application in Lightning NOx Production Estimation

    NASA Technical Reports Server (NTRS)

    Peterson, Harold; Koshak, William J.

    2009-01-01

    An algorithm has been developed to estimate the altitude distribution of one-meter lightning channel segments. The algorithm is required as part of a broader objective that involves improving the lightning NOx emission inventories of both regional air quality and global chemistry/climate models. The algorithm was tested and applied to VHF signals detected by the North Alabama Lightning Mapping Array (NALMA). The accuracy of the algorithm was characterized by comparing algorithm output to the plots of individual discharges whose lengths were computed by hand; VHF source amplitude thresholding and smoothing were applied to optimize results. Several thousand lightning flashes within 120 km of the NALMA network centroid were gathered from all four seasons, and were analyzed by the algorithm. The mean, standard deviation, and median statistics were obtained for all the flashes, the ground flashes, and the cloud flashes. One-meter channel segment altitude distributions were also obtained for the different seasons.

  4. Incident CTS in a large pooled cohort study: associations obtained by a Job Exposure Matrix versus associations obtained from observed exposures.

    PubMed

    Dale, Ann Marie; Ekenga, Christine C; Buckner-Petty, Skye; Merlino, Linda; Thiese, Matthew S; Bao, Stephen; Meyers, Alysha Rose; Harris-Adamson, Carisa; Kapellusch, Jay; Eisen, Ellen A; Gerr, Fred; Hegmann, Kurt T; Silverstein, Barbara; Garg, Arun; Rempel, David; Zeringue, Angelique; Evanoff, Bradley A

    2018-03-29

    There is growing use of a job exposure matrix (JEM) to provide exposure estimates in studies of work-related musculoskeletal disorders; few studies have examined the validity of such estimates or compared associations obtained with a JEM with those obtained using other exposure measures. This study estimated upper extremity exposures using a JEM derived from a publicly available data set (Occupational Network, O*NET), and compared exposure-disease associations for incident carpal tunnel syndrome (CTS) with those obtained using observed physical exposure measures in a large prospective study. A total of 2393 workers from several industries were followed for up to 2.8 years (5.5 person-years). Standard Occupational Classification (SOC) codes were assigned to the job at enrolment. SOC codes linked to physical exposures for forceful hand exertion and repetitive activities were extracted from O*NET. We used multivariable Cox proportional hazards regression models to describe exposure-disease associations for incident CTS for individually observed physical exposures and JEM exposures from O*NET. Both exposure methods found associations between incident CTS and exposures of force and repetition, with evidence of dose-response. Observed associations were similar across the two methods, with somewhat wider CIs for HRs calculated using the JEM method. Exposures estimated using a JEM provided similar exposure-disease associations for CTS when compared with associations obtained using the 'gold standard' method of individual observation. While JEMs have a number of limitations, in some studies they can provide useful exposure estimates in the absence of individual-level observed exposures. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
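
    A minimal sketch of the survival-modelling step, assuming the lifelines package: a Cox proportional hazards model is fitted with follow-up time, incident CTS status, and an exposure score (from a JEM or from observation). Column names and values are illustrative placeholders, not the pooled-cohort variables.

```python
# Minimal sketch: Cox proportional hazards regression for incident CTS versus
# an exposure score. Assumes the `lifelines` package; column names and values
# are illustrative placeholders. Additional covariates would be added as extra
# columns of the same data frame.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "followup_years": [2.1, 0.8, 2.8, 1.5, 2.4, 0.6, 1.9, 2.2],
    "incident_cts":   [0,   1,   0,   1,   0,   1,   1,   0],
    "force_score":    [1.2, 3.4, 2.5, 2.9, 1.0, 3.8, 1.5, 3.0],  # JEM or observed exposure
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="incident_cts")
cph.print_summary()  # hazard ratio for force_score with confidence interval
```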

  5. Unsteady transonic flow calculations for realistic aircraft configurations

    NASA Technical Reports Server (NTRS)

    Batina, John T.; Seidel, David A.; Bland, Samuel R.; Bennett, Robert M.

    1987-01-01

    A transonic unsteady aerodynamic and aeroelasticity code has been developed for application to realistic aircraft configurations. The new code is called CAP-TSD which is an acronym for Computational Aeroelasticity Program - Transonic Small Disturbance. The CAP-TSD code uses a time-accurate approximate factorization (AF) algorithm for solution of the unsteady transonic small-disturbance equation. The AF algorithm is very efficient for solution of steady and unsteady transonic flow problems. It can provide accurate solutions in only several hundred time steps yielding a significant computational cost savings when compared to alternative methods. The new code can treat complete aircraft geometries with multiple lifting surfaces and bodies including canard, wing, tail, control surfaces, launchers, pylons, fuselage, stores, and nacelles. Applications are presented for a series of five configurations of increasing complexity to demonstrate the wide range of geometrical applicability of CAP-TSD. These results are in good agreement with available experimental steady and unsteady pressure data. Calculations for the General Dynamics one-ninth scale F-16C aircraft model are presented to demonstrate application to a realistic configuration. Unsteady results for the entire F-16C aircraft undergoing a rigid pitching motion illustrated the capability required to perform transonic unsteady aerodynamic and aeroelastic analyses for such configurations.

  6. Electrical Wave Propagation in a Minimally Realistic Fiber Architecture Model of the Left Ventricle

    NASA Astrophysics Data System (ADS)

    Song, Xianfeng; Setayeshgar, Sima

    2006-03-01

    Experimental results indicate a nested, layered geometry for the fiber surfaces of the left ventricle, where fiber directions are approximately aligned in each surface and gradually rotate through the thickness of the ventricle. Numerical and analytical results have highlighted the importance of this rotating anisotropy and its possible destabilizing role on the dynamics of scroll waves in excitable media with application to the heart. Based on the work of Peskin [1] and Peskin and McQueen [2], we present a minimally realistic model of the left ventricle that adequately captures the geometry and anisotropic properties of the heart as a conducting medium while being easily parallelizable, and computationally more tractable than fully realistic anatomical models. Complementary to fully realistic and anatomically-based computational approaches, studies using such a minimal model with the addition of successively realistic features, such as excitation-contraction coupling, should provide unique insight into the basic mechanisms of formation and obliteration of electrical wave instabilities. We describe our construction, implementation and validation of this model. [1] C. S. Peskin, Communications on Pure and Applied Mathematics 42, 79 (1989). [2] C. S. Peskin and D. M. McQueen, in Case Studies in Mathematical Modeling: Ecology, Physiology, and Cell Biology, 309 (1996).

  7. Realistic diversity loss and variation in soil depth independently affect community-level plant nitrogen use.

    PubMed

    Selmants, Paul C; Zavaleta, Erika S; Wolf, Amelia A

    2014-01-01

    Numerous experiments have demonstrated that diverse plant communities use nitrogen (N) more completely and efficiently, with implications for how species conservation efforts might influence N cycling and retention in terrestrial ecosystems. However, most such experiments have randomly manipulated species richness and minimized environmental heterogeneity, two design aspects that may reduce applicability to real ecosystems. Here we present results from an experiment directly comparing how realistic and randomized plant species losses affect plant N use across a gradient of soil depth in a native-dominated serpentine grassland in California. We found that the strength of the species richness effect on plant N use did not increase with soil depth in either the realistic or randomized species loss scenarios, indicating that the increased vertical heterogeneity conferred by deeper soils did not lead to greater complementarity among species in this ecosystem. Realistic species losses significantly reduced plant N uptake and altered N-use efficiency, while randomized species losses had no effect on plant N use. Increasing soil depth positively affected plant N uptake in both loss order scenarios but had a weaker effect on plant N use than did realistic species losses. Our results illustrate that realistic species losses can have functional consequences that differ distinctly from randomized losses, and that species diversity effects can be independent of and outweigh those of environmental heterogeneity on ecosystem functioning. Our findings also support the value of conservation efforts aimed at maintaining biodiversity to help buffer ecosystems against increasing anthropogenic N loading.

  8. Automated Finger Spelling by Highly Realistic 3D Animation

    ERIC Educational Resources Information Center

    Adamo-Villani, Nicoletta; Beni, Gerardo

    2004-01-01

    We present the design of a new 3D animation tool for self-teaching (signing and reading) finger spelling, the first basic component in learning any sign language. We have designed a highly realistic hand with natural animation of the finger motions. Smoothness of motion (in real time) is achieved via programmable blending of animation segments. The…

  9. Estimating acreage by double sampling using LANDSAT data

    NASA Technical Reports Server (NTRS)

    Pont, F.; Horwitz, H.; Kauth, R. (Principal Investigator)

    1982-01-01

    Double sampling techniques employing LANDSAT data for estimating the acreage of corn and soybeans were investigated and evaluated. The evaluation was based on estimated costs and correlations between two existing procedures having differing cost/variance characteristics, and included consideration of their individual merits when coupled with a fictional 'perfect' procedure of zero bias and variance. Two features of the analysis are: (1) the simultaneous estimation of two or more crops; and (2) the imposition of linear cost constraints among two or more types of resource. A reasonably realistic operational scenario was postulated. The costs were estimated from current experience with the measurement procedures involved, and the correlations were estimated from a set of 39 LACIE-type sample segments located in the U.S. Corn Belt. For a fixed variance of the estimate, double sampling with the two existing LANDSAT measurement procedures can result in a 25% or 50% cost reduction. Double sampling which included the fictional perfect procedure results in a more cost-effective combination when it is used with the lower-cost/higher-variance representative of the existing procedures.
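
    A minimal sketch of the double-sampling (two-phase) regression estimator that underlies this kind of acreage estimation: a cheap LANDSAT-based measurement is available for a large first-phase sample, an accurate measurement for a small second-phase subsample, and the regression estimator combines the two. All numbers are simulated placeholders.

```python
# Minimal sketch: two-phase (double) sampling regression estimator.
# x = cheap LANDSAT-derived crop proportion (large phase-1 sample),
# y = accurate "ground" proportion (small phase-2 subsample).
# Data are simulated placeholders.
import numpy as np

rng = np.random.default_rng(1)

x_phase1 = rng.uniform(0.2, 0.6, size=400)             # cheap measurement only
idx = rng.choice(400, size=40, replace=False)           # phase-2 subsample
x_phase2 = x_phase1[idx]
y_phase2 = x_phase2 + rng.normal(0.0, 0.03, size=40)    # accurate measurement

b = np.polyfit(x_phase2, y_phase2, 1)[0]                # regression slope from phase 2
y_dd = y_phase2.mean() + b * (x_phase1.mean() - x_phase2.mean())

print("double-sampling regression estimate of the mean proportion:", round(y_dd, 4))
```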

  10. Data-Adaptive Bias-Reduced Doubly Robust Estimation.

    PubMed

    Vermeulen, Karel; Vansteelandt, Stijn

    2016-05-01

    Doubly robust estimators have now been proposed for a variety of target parameters in the causal inference and missing data literature. These consistently estimate the parameter of interest under a semiparametric model when one of two nuisance working models is correctly specified, regardless of which. The recently proposed bias-reduced doubly robust estimation procedure aims to partially retain this robustness in more realistic settings where both working models are misspecified. These so-called bias-reduced doubly robust estimators make use of special (finite-dimensional) nuisance parameter estimators that are designed to locally minimize the squared asymptotic bias of the doubly robust estimator in certain directions of these finite-dimensional nuisance parameters under misspecification of both parametric working models. In this article, we extend this idea to incorporate the use of data-adaptive estimators (infinite-dimensional nuisance parameters), by exploiting the bias reduction estimation principle in the direction of only one nuisance parameter. We additionally provide an asymptotic linearity theorem which gives the influence function of the proposed doubly robust estimator under correct specification of a parametric nuisance working model for the missingness mechanism/propensity score but a possibly misspecified (finite- or infinite-dimensional) outcome working model. Simulation studies confirm the desirable finite-sample performance of the proposed estimators relative to a variety of other doubly robust estimators.
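
    For orientation, the sketch below implements the standard doubly robust (augmented inverse-probability-weighted) estimator of an outcome mean under missingness, the basic form that bias-reduced variants modify; the working models are ordinary logistic and linear regressions and the data are simulated, so this is not the proposed bias-reduced procedure itself.

```python
# Minimal sketch: doubly robust (AIPW) estimate of an outcome mean with
# missing data, using simple parametric working models. Data are simulated.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 1))
y = 1.0 + 2.0 * x[:, 0] + rng.normal(size=n)
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + x[:, 0])))    # missingness depends on x
r = rng.binomial(1, p_obs)                         # 1 = outcome observed

# Working model 1: propensity score (probability of being observed).
ps = LogisticRegression().fit(x, r).predict_proba(x)[:, 1]
# Working model 2: outcome regression fit on observed cases only.
m = LinearRegression().fit(x[r == 1], y[r == 1]).predict(x)

# AIPW estimator of E[Y]; unobserved y values contribute only through m.
aipw = np.mean(r * y / ps - (r - ps) / ps * m)
print("AIPW estimate of E[Y]:", round(aipw, 3), "(true value 1.0)")
```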

  11. EIT forward problem parallel simulation environment with anisotropic tissue and realistic electrode models.

    PubMed

    De Marco, Tommaso; Ries, Florian; Guermandi, Marco; Guerrieri, Roberto

    2012-05-01

    Electrical impedance tomography (EIT) is an imaging technology based on impedance measurements. To retrieve meaningful insights from these measurements, EIT relies on detailed knowledge of the underlying electrical properties of the body. This is obtained from numerical models of current flows therein. The nonhomogeneous and anisotropic electric properties of human tissues make accurate modeling and simulation very challenging, leading to a tradeoff between physical accuracy and technical feasibility, which at present severely limits the capabilities of EIT. This work presents a complete algorithmic flow for an accurate EIT modeling environment featuring high anatomical fidelity with a spatial resolution equal to that provided by an MRI and a novel realistic complete electrode model implementation. At the same time, we demonstrate that current graphics processing unit (GPU)-based platforms provide enough computational power that a domain discretized with five million voxels can be numerically modeled in about 30 s.

  12. An analysis of thrust of a realistic solar sail with focus on a flight validation mission in a geocentric orbit

    NASA Astrophysics Data System (ADS)

    Campbell, Bruce A.

    Several scientifically important space flight missions have been identified that, at this time, can only be practically achieved using a solar sail propulsion system. These missions take advantage of the potentially continuous force on the sail, provided by solar radiation, to produce significant changes in the spacecraft's velocity, in magnitude and/or direction, without the need for carrying the enormous amount of fuel that conventional propulsion systems would require to provide the same performance. However, to provide thrust levels that would support these missions requires solar sail areas of (tens of) thousands of square meters. To realize this, many technical areas must be developed further and demonstrated in space before solar sails will be accepted as a viable space mission propulsion system. One of these areas concerns understanding the propulsion performance of a realistic solar sail well enough for mission planning. Without this understanding, solar sail orbits could not be predicted well enough to meet defined mission requirements, such as rendezvous or station-keeping, and solar sail orbit optimization, such as minimizing flight time, could be close to impossible. In most mission studies, either an "ideal" sail's performance is used for mission planning, or some top-level assumptions of certain nonideal sail characteristics are incorporated to give a slightly better estimate of the sail performance. This paper identifies the major sources of solar sail thrust performance uncertainty, and analyzes the most significant ones to provide a more comprehensive understanding of thrust generation by a "realistic" solar sail. With this understanding, mission planners will be able to more confidently and accurately estimate the capabilities of such a system. The first solar sail mission will likely be a system validation mission, using a relatively small sail in a geocentric (Earth-centered) orbit. The author has been involved in conceptual

  13. Estimating cellular parameters through optimization procedures: elementary principles and applications.

    PubMed

    Kimura, Akatsuki; Celani, Antonio; Nagao, Hiromichi; Stasevich, Timothy; Nakamura, Kazuyuki

    2015-01-01

    Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus, obtain mechanistic insights into phenomena of interest.
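
    A minimal sketch of the SSE-minimization step described above: two parameters of a hypothetical exponential-decay model are estimated from simulated data with a local optimizer; the model, data, and starting values are illustrative assumptions.

```python
# Minimal sketch: estimate model parameters by minimizing the sum of squared
# errors (SSE) with a local optimizer. Model and data are hypothetical.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
t = np.linspace(0.0, 10.0, 50)
true_a, true_k = 2.0, 0.3
data = true_a * np.exp(-true_k * t) + rng.normal(0.0, 0.05, size=t.size)

def sse(params):
    a, k = params
    return np.sum((data - a * np.exp(-k * t)) ** 2)

result = minimize(sse, x0=[1.0, 1.0], method="Nelder-Mead")
print("estimated (a, k):", result.x)  # should land near (2.0, 0.3)
```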

  14. A real-time signal combining system for Ka-band feed arrays using maximum-likelihood weight estimates

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V. A.; Rodemich, E. R.

    1990-01-01

    A real-time digital signal combining system for use with Ka-band feed arrays is proposed. The combining system attempts to compensate for signal-to-noise ratio (SNR) loss resulting from antenna deformations induced by gravitational and atmospheric effects. The combining weights are obtained directly from the observed samples by using a sliding-window implementation of a vector maximum-likelihood parameter estimator. It is shown that with averaging times of about 0.1 second, combining loss for a seven-element array can be limited to about 0.1 dB in a realistic operational environment. This result suggests that the real-time combining system proposed here is capable of recovering virtually all of the signal power captured by the feed array, even in the presence of severe wind gusts and similar disturbances.

  15. Poster - Thur Eve - 11: A realistic respiratory trace generator and its application to respiratory management techniques.

    PubMed

    Quirk, S; Becker, N; Smith, W L

    2012-07-01

    Respiratory motion complicates radiotherapy treatment of thoracic and abdominal tumours. Simplified respiratory motions such as sinusoidal and single patient traces are often used to determine the impact of motion on respiratory management techniques in radiotherapy. Such simplifications only accurately model a small portion of patients, as most patients exhibit variability and irregularity beyond these models. We have performed a comprehensive analysis of respiratory motion and developed a software tool that allows for explicit inclusion of variability. We utilize our realistic respiratory generator to customize respiratory traces to test the robustness of the estimate of internal gross target volumes (IGTV) by 4DCT and CBCT. We confirmed that good agreement is found between 4DCT and CBCT for regular breathing motion. When amplitude variability was introduced, the accuracy of the estimate decreased slightly, but the absolute differences were still < 3 mm for both modalities. Poor agreement was shown with the addition of baseline drifts. Both modalities were found to underestimate the IGTV by as much as 30% for 4DCT and 25% for CBCT. Both large and small drifts deteriorated the estimate accuracy. The respiratory trace generator was advantageous for examining the difference between 4DCT and CBCT IGTV estimation under variable motions. It provided a practical means of testing specific attributes of respiratory motion and detected issues that were not seen in the regular-motion studies. This is just one example of how the respiratory trace generator can be utilized to test applications of respiratory management techniques. © 2012 American Association of Physicists in Medicine.
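
    In the spirit of the generator described above, the sketch below produces a respiratory-like trace with cycle-to-cycle amplitude variability and a slow baseline drift; the waveform shape and parameter values are illustrative assumptions and do not reproduce the actual generator.

```python
# Minimal sketch: respiratory-like trace with per-cycle amplitude variability
# and a slow baseline drift. Waveform and parameters are illustrative.
import numpy as np

def respiratory_trace(duration_s=60.0, fs=25.0, period_s=4.0,
                      amp_mean=10.0, amp_sd=1.5, drift_mm_per_min=3.0, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration_s, 1.0 / fs)
    n_cycles = int(np.ceil(duration_s / period_s))
    amps = rng.normal(amp_mean, amp_sd, size=n_cycles)           # one amplitude per cycle
    cycle = np.minimum((t / period_s).astype(int), n_cycles - 1)
    breathing = amps[cycle] * np.sin(np.pi * t / period_s) ** 2  # simple periodic shape
    drift = drift_mm_per_min * t / 60.0                          # slow baseline drift
    return t, breathing + drift

t, trace = respiratory_trace()
print("peak-to-peak excursion over one minute (mm):",
      round(float(trace.max() - trace.min()), 1))
```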

  16. Thermal noise calculation method for precise estimation of the signal-to-noise ratio of ultra-low-field MRI with an atomic magnetometer.

    PubMed

    Yamashita, Tatsuya; Oida, Takenori; Hamada, Shoji; Kobayashi, Tetsuo

    2012-02-01

    In recent years, there has been considerable interest in developing an ultra-low-field magnetic resonance imaging (ULF-MRI) system using an optically pumped atomic magnetometer (OPAM). However, a precise estimation of the signal-to-noise ratio (SNR) of ULF-MRI has not been carried out. Conventionally, to calculate the SNR of an MR image, thermal noise, also called Nyquist noise, has been estimated by considering a resistor that is electrically equivalent to a biological-conductive sample and is connected in series to a pickup coil. However, this method has major limitations in that the receiver has to be a coil and that it cannot be applied directly to a system using OPAM. In this paper, we propose a method to estimate the thermal noise of an MRI system using OPAM. We calculate the thermal noise from the variance of the magnetic sensor output produced by current-dipole moments that simulate thermally fluctuating current sources in a biological sample. We assume that the random magnitude of the current dipole in each volume element of the biological sample is described by the Maxwell-Boltzmann distribution. The sensor output produced by each current-dipole moment is calculated either by an analytical formula or a numerical method based on the boundary element method. We validate the proposed method by comparing our results with those obtained by conventional methods that consider resistors connected in series to a pickup coil using single-layered sphere, multi-layered sphere, and realistic head models. Finally, we apply the proposed method to the ULF-MRI model using OPAM as the receiver with multi-layered sphere and realistic head models and estimate their SNR. Copyright © 2011 Elsevier Inc. All rights reserved.
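
    A heavily simplified Monte Carlo sketch of the variance-based estimate described above: current dipoles with Maxwell-Boltzmann-distributed magnitudes and random orientations are projected through per-voxel lead-field vectors, and the noise estimate is the spread of the summed sensor output. The lead fields here are random placeholders standing in for the analytical or boundary-element computation, and all units are arbitrary.

```python
# Heavily simplified Monte Carlo sketch of the variance-based thermal-noise
# estimate. Lead fields are random placeholders; units are arbitrary.
import numpy as np
from scipy.stats import maxwell

rng = np.random.default_rng(0)
n_voxels, n_trials = 2000, 500

# Placeholder lead fields: sensor output per unit dipole moment, per voxel.
lead = rng.normal(scale=1e-9, size=(n_voxels, 3))

outputs = np.empty(n_trials)
for i in range(n_trials):
    mags = maxwell.rvs(scale=1.0, size=n_voxels, random_state=rng)  # Maxwell-Boltzmann magnitudes
    dirs = rng.normal(size=(n_voxels, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    outputs[i] = np.sum(lead * (mags[:, None] * dirs))              # summed sensor output

print("thermal-noise estimate (standard deviation of sensor output):", outputs.std())
```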

  17. What Today's Educational Technology Needs: Defensible Evaluations and Realistic Implementation.

    ERIC Educational Resources Information Center

    Roweton, William E.; And Others

    It is argued that in order to make computer assisted instruction effective in the schools, educators should pay more attention to implementation issues (including modifying teacher attitudes, changing classroom routines, and offering realistic technical training and support) and to producing understandable product and performance evaluations.…

  18. Simple calculator to estimate the medical cost of diabetes in sub-Saharan Africa

    PubMed Central

    Alouki, Koffi; Delisle, Hélène; Besançon, Stéphane; Baldé, Naby; Sidibé-Traoré, Assa; Drabo, Joseph; Djrolo, François; Mbanya, Jean-Claude; Halimi, Serge

    2015-01-01

    AIM: To design a medical cost calculator and show that diabetes care is beyond the reach of the majority, particularly patients with complications. METHODS: Out-of-pocket expenditures of patients for medical treatment of type-2 diabetes were estimated based on price data collected in Benin, Burkina Faso, Guinea and Mali. A detailed protocol for realistic medical care of diabetes and its complications in the African context was defined. Care components were based on existing guidelines, published data and clinical experience. Prices were obtained in public and private health facilities. The cost calculator used Excel. The cost for basic management of uncomplicated diabetes was calculated per person and per year. Incremental costs were also computed per annum for chronic complications and per episode for acute complications. RESULTS: Wide variations of estimated care costs were observed among countries and between the public and private healthcare system. The minimum estimated cost for the treatment of uncomplicated diabetes (in the public sector) would amount to 21%-34% of the country’s gross national income per capita, 26%-47% in the presence of retinopathy, and above 70% for nephropathy, the most expensive complication. CONCLUSION: The study provided objective evidence for the exorbitant medical cost of diabetes considering that no medical insurance is available in the study countries. Although the calculator only estimates the cost of inaction, it is innovative and of interest for several stakeholders. PMID:26617974
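
    A minimal sketch of the calculator's arithmetic (annual cost of basic management plus incremental costs for chronic complications and per-episode costs for acute ones); all prices are illustrative placeholders rather than the collected country data.

```python
# Minimal sketch of the cost-calculator logic. All prices are illustrative
# placeholders, not the prices collected in the study countries.
BASIC_ANNUAL = 260.0                        # uncomplicated type-2 diabetes, per year
CHRONIC_INCREMENT = {"retinopathy": 90.0, "nephropathy": 700.0}   # per year
ACUTE_EPISODE = {"severe_hypoglycaemia": 40.0}                    # per episode

def annual_cost(chronic=(), acute_episodes=None):
    cost = BASIC_ANNUAL + sum(CHRONIC_INCREMENT[c] for c in chronic)
    for name, n_episodes in (acute_episodes or {}).items():
        cost += ACUTE_EPISODE[name] * n_episodes
    return cost

print(annual_cost(chronic=["retinopathy"],
                  acute_episodes={"severe_hypoglycaemia": 2}))
```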

  19. Comparison of Sun-Induced Chlorophyll Fluorescence Estimates Obtained from Four Portable Field Spectroradiometers

    NASA Technical Reports Server (NTRS)

    Julitta, Tommaso; Corp, Lawrence A.; Rossini, Micol; Burkart, Andreas; Cogliati, Sergio; Davies, Neville; Hom, Milton; Mac Arthur, Alasdair; Middleton, Elizabeth M.; Rascher, Uwe

    2016-01-01

    Remote Sensing of Sun-Induced Chlorophyll Fluorescence (SIF) is a research field of growing interest because it offers the potential to quantify actual photosynthesis and to monitor plant status. New satellite missions from the European Space Agency, such as the Earth Explorer 8 FLuorescence EXplorer (FLEX) mission (scheduled to launch in 2022 and aiming at SIF mapping), and from the National Aeronautics and Space Administration (NASA), such as the Orbiting Carbon Observatory-2 (OCO-2) sampling mission launched in July 2014, provide the capability to estimate SIF from space. The detection of the SIF signal from airborne and satellite platforms is difficult, and reliable ground-level data are needed for calibration/validation. Several commercially available spectroradiometers are currently used to retrieve SIF in the field. This study presents a comparison exercise for evaluating the capability of four spectroradiometers to retrieve SIF. The results show that an accurate far-red SIF estimation can be achieved using spectroradiometers with an ultrafine resolution (less than 1 nm), while the red SIF estimation requires even higher spectral resolution (less than 0.5 nm). Moreover, it is shown that the Signal to Noise Ratio (SNR) plays a significant role in the precision of the far-red SIF measurements.

  20. Estimation of strait transport in the East China Sea

    NASA Astrophysics Data System (ADS)

    Moon, J.; Hirose, N.; Usui, N.; Tsujino, H.

    2010-12-01

    Estimates of volume transport through the major channels still differ widely among realistic eddy-resolving models. For instance, time-mean transport through the Tokara Strait is predicted as 16.9 Sv by Maltrud and McClean (2005) or 36-72 Sv by Hurlburt et al. (1996). However, the difference may be reduced by constraining the models with measurement data, i.e., by data assimilation. The assimilated estimates from two different systems of the Meteorological Research Institute and Kyushu University (MOVE-WNP and DREAMS_B) show realistic averages of 22-23 Sv through the Tokara Strait. Inverse estimation of adjustable parameters implies that reduction of wind stress and strong vertical viscosity are crucial to prevent excessive transport and associated instabilities in a forward model. It is noted that both of the assimilated results show a deep northward flow of 3-4 Sv through the Kerama Gap in the Ryukyu Islands. The core depth (~500 m) of this subsurface current is similar to that of the Ryukyu Current. Further analysis shows coherent changes of the Soya and Tsushima Warm Currents, which is consistent with the Okhotsk wind theory of Tsujino et al. (2008). On the other hand, the changes of Tsushima Strait transport are nearly independent of the Kuroshio or the Taiwan Warm Current.

  1. Estimating Driving Performance Based on EEG Spectrum Analysis

    NASA Astrophysics Data System (ADS)

    Lin, Chin-Teng; Wu, Ruei-Cheng; Jung, Tzyy-Ping; Liang, Sheng-Fu; Huang, Teng-Yi

    2005-12-01

    The growing number of traffic accidents in recent years has become a serious concern to society. Accidents caused by the driver's drowsiness behind the steering wheel have a high fatality rate because of the marked decline in the driver's perception, recognition, and vehicle-control abilities while sleepy. Preventing such accidents is highly desirable but requires techniques for continuously detecting, estimating, and predicting the level of alertness of drivers and delivering effective feedback to maintain their maximum performance. This paper proposes an EEG-based drowsiness estimation system that combines the electroencephalogram (EEG) log subband power spectrum, correlation analysis, principal component analysis, and linear regression models to indirectly estimate the driver's drowsiness level in a virtual-reality-based driving simulator. Our results demonstrated that it is feasible to accurately and quantitatively estimate driving performance, expressed as the deviation between the center of the vehicle and the center of the cruising lane, in a realistic driving simulator.
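
    A minimal sketch of the estimation chain described above, log subband power features reduced with PCA and regressed onto a driving-performance measure, using simulated features; the feature layout, dimensions, and train/test split are illustrative assumptions.

```python
# Minimal sketch: log band-power features -> PCA -> linear regression onto a
# driving-performance measure (e.g., lane deviation). Simulated data only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_epochs, n_features = 600, 60             # e.g., 30 channels x 2 subbands
band_power = rng.lognormal(mean=0.0, sigma=0.5, size=(n_epochs, n_features))
X = np.log(band_power)                     # log subband power spectrum
# Simulated performance: driven by a few features plus noise.
y = X[:, :5].mean(axis=1) + 0.1 * rng.normal(size=n_epochs)

model = make_pipeline(PCA(n_components=10), LinearRegression())
model.fit(X[:400], y[:400])                # train on the first part of the session
r = np.corrcoef(model.predict(X[400:]), y[400:])[0, 1]
print("correlation with held-out performance:", round(r, 2))
```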

  2. Texture and savoury taste influences on food intake in a realistic hot lunch time meal.

    PubMed

    Forde, C G; van Kuijk, N; Thaler, T; de Graaf, C; Martin, N

    2013-01-01

    Previous studies with model foods have shown that softer textures lead to higher eating rates and higher ad libitum food intake, and higher intensity of salt taste has been shown to result in a lower ad libitum food intake. These observations have yet to be replicated in the context of realistic solid hot meal components. The objective of the present study was to assess the effect of texture and taste on the ad libitum intake of a realistic hot lunchtime meal. The meals consisted of potatoes, carrots, steak and gravy varied according to a 2 (texture: mashed vs. whole) × 2 (taste: standard taste vs. strong taste) design. The texture dimension referred to mashed potatoes, mashed carrots and pieces of steak vs. whole boiled potatoes, whole boiled carrots and whole steak. The taste was varied by manipulating the taste intensity of the gravy to be either standard or high intensity savoury taste. The current study used a between-groups, single-course ad libitum design whereby subjects were recruited for a one-off meal study, during which their food intake was measured. The four groups consisted of about 40 subjects (mashed, standard, n=37; mashed, savoury n=39; whole, standard n=40; and whole, savoury n=41) matched for age (average age=44.8 ± 5.3), gender (on average 19 males and 20 females), normal BMI (average 22.6 ± 1.7) and dietary restraint score (DEBQ score=1.74 ± 0.6). The results showed that the estimated mean intake in the two mashed conditions was 563.2 ± 20.3 g and intake of the whole meal was 527.5 ± 20.0 g (p=0.23). The texture effect was significant in the higher savoury condition with an average of 91 g less food consumed in the solid-savoury meal than in the mashed-savoury meal. This effect was not replicated in the standard gravy condition, with no significant difference between solid and mashed textures. This was reflected in an interaction effect that was approaching significance (p=0.051). The estimated mean eating rate in the two mashed

  3. Towards a Realist Sociology of Education: A Polyphonic Review Essay

    ERIC Educational Resources Information Center

    Grenfell, Michael; Hood, Susan; Barrett, Brian D.; Schubert, Dan

    2017-01-01

    This review essay evaluates Karl Maton's "Knowledge and Knowers: Towards a Realist Sociology of Education" as a recent examination of the sociological causes and effects of education in the tradition of the French social theorist Pierre Bourdieu and the British educational sociologist Basil Bernstein. Maton's book synthesizes the…

  4. A realistic chemical system presenting a self-organized critical behavior

    NASA Astrophysics Data System (ADS)

    Gaveau, Bernard; Latrémolière, Daniel; Moreau, Michel

    2003-04-01

    We consider a realistic example of chemical system which presents self-organized criticality. We can study the kinetic equations analytically, and show that the conditions for self-organized criticality are satisfied. We find power relaxation laws for certain variables near the critical state, confirming the self-organized critical behavior.

  5. Representations of Adoption in Contemporary Realistic Fiction for Young Adults

    ERIC Educational Resources Information Center

    Parsons, Sue Christian; Fuxa, Robin; Kander, Faryl; Hardy, Dana

    2017-01-01

    In this critical content analysis of thirty-seven contemporary realistic fiction books about adoption, the authors examine how adoption and adoptive families are depicted in young adult (YA) literature. The critical literacy theoretical frame brings into focus significant social implications of these depictions as the researchers illuminate and…

  6. Extending the Precipitation Map Offshore Using Daily and 3-Hourly Combined Precipitation Estimates

    NASA Technical Reports Server (NTRS)

    Huffman, George J.; Adler, Robert F.; Bolvin, David T.; Curtis, Scott; Einaudi, Franco (Technical Monitor)

    2001-01-01

    One of the difficulties in studying landfalling extratropical cyclones along the Pacific Coast is the lack of antecedent data over the ocean, including precipitation. Recent research on combining various satellite-based precipitation estimates opens the possibility of realistic precipitation estimates on a global 1 deg. x 1 deg. latitude-longitude grid at the daily or even 3-hourly interval. The goal in this work is to provide quantitative precipitation estimates that correctly represent the precipitation-related variables in the hydrological cycle: surface accumulations (fresh-water flux into oceans), frequency and duration statistics, net latent heating, etc.

  7. Electron percolation in realistic models of carbon nanotube networks

    NASA Astrophysics Data System (ADS)

    Simoneau, Louis-Philippe; Villeneuve, Jérémie; Rochefort, Alain

    2015-09-01

    The influence of penetrable and curved carbon nanotubes (CNT) on charge percolation in three-dimensional disordered CNT networks has been studied with Monte-Carlo simulations. By considering carbon nanotubes as solid objects whose electron-cloud overlap can be controlled, we observed that the structural characteristics of networks containing lower aspect ratio CNT are highly sensitive to the degree of penetration between crossed nanotubes. Following our efficient strategy of displacing CNT to different positions to create more realistic statistical models, we conclude that the connectivity between objects increases with the hard-core/soft-shell radii ratio. In contrast, the presence of curved CNT in the random networks leads to an increasing percolation threshold and to a decreasing electrical conductivity at saturation. The waviness of CNT decreases the effective distance between the nanotube extremities, hence reducing their connectivity and degrading their electrical properties. We present the results of our simulations in terms of the thickness of the CNT network, from which simple structural parameters such as the volume fraction or the carbon nanotube density can be accurately evaluated with our more realistic models.
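
    A deliberately simplified Monte Carlo sketch of the percolation question: penetrable spheres stand in for the hard-core/soft-shell nanotube model, spheres are connected when their soft shells overlap, and a union-find pass tests whether a cluster spans the box. Geometry, density, and radii are illustrative assumptions; real CNT networks require spherocylinder (stick) intersection tests.

```python
# Simplified Monte Carlo percolation sketch: penetrable spheres stand in for
# hard-core/soft-shell nanotubes; union-find tests for a spanning cluster.
# Geometry, density, and radii are illustrative assumptions.
import numpy as np

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path compression
        i = parent[i]
    return i

def union(parent, i, j):
    parent[find(parent, i)] = find(parent, j)

def spans(n=400, box=10.0, soft_radius=0.6, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, box, size=(n, 3))
    parent = list(range(n))
    for i in range(n):
        d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        for j in np.nonzero(d < 2.0 * soft_radius)[0]:   # soft shells overlap
            union(parent, i, i + 1 + j)
    roots_left = {find(parent, i) for i in range(n) if pos[i, 0] < soft_radius}
    roots_right = {find(parent, i) for i in range(n) if pos[i, 0] > box - soft_radius}
    return bool(roots_left & roots_right)

# Crude spanning probability at one density, averaged over realizations:
print("spanning fraction:", np.mean([spans(seed=s) for s in range(20)]))
```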

  8. Simulation of Combustion Systems with Realistic g-jitter

    NASA Technical Reports Server (NTRS)

    Mell, William E.; McGrattan, Kevin B.; Baum, Howard R.

    2003-01-01

    In this project a transient, fully three-dimensional computer simulation code was developed to simulate the effects of realistic g-jitter on a number of combustion systems. The simulation code is capable of simulating flame spread on a solid and nonpremixed or premixed gaseous combustion in nonturbulent flow with simple combustion models. Simple combustion models were used to preserve computational efficiency since this is meant to be an engineering code. Also, the use of sophisticated turbulence models was not pursued (a simple Smagorinsky type model can be implemented if deemed appropriate) because if flow velocities are large enough for turbulence to develop in a reduced gravity combustion scenario it is unlikely that g-jitter disturbances (in NASA's reduced gravity facilities) will play an important role in the flame dynamics. Acceleration disturbances of realistic orientation, magnitude, and time dependence can be easily included in the simulation. The simulation algorithm was based on techniques used in an existing large eddy simulation code which has successfully simulated fire dynamics in complex domains. A series of simulations with measured and predicted acceleration disturbances on the International Space Station (ISS) are presented. The results of this series of simulations suggested a passive isolation system and appropriate scheduling of crew activity would provide a sufficiently "quiet" acceleration environment for spherical diffusion flames.

  9. Music therapy for palliative care: A realist review.

    PubMed

    McConnell, Tracey; Porter, Sam

    2017-08-01

    Music therapy has experienced a rising demand as an adjunct therapy for symptom management among palliative care patients. We conducted a realist review of the literature to develop a greater understanding of how music therapy might benefit palliative care patients and the contextual mechanisms that promote or inhibit its successful implementation. We searched electronic databases (CINAHL, Embase, Medline, and PsychINFO) for literature containing information on music therapy for palliative care. In keeping with the realist approach, we examined all relevant literature to develop theories that could explain how music therapy works. A total of 51 articles were included in the review. Music therapy was found to have a therapeutic effect on the physical, psychological, emotional, and spiritual suffering of palliative care patients. We also identified program mechanisms that help explain music therapy's therapeutic effects, along with facilitating contexts for implementation. Music therapy may be an effective nonpharmacological approach to managing distressing symptoms in palliative care patients. The findings also suggest that group music therapy may be a cost-efficient and effective way to support staff caring for palliative care patients. We encourage others to continue developing the evidence base in order to expand our understanding of how music therapy works, with the aim of informing and improving the provision of music therapy for palliative care patients.

  10. Lower limb estimation from sparse landmarks using an articulated shape model.

    PubMed

    Zhang, Ju; Fernandez, Justin; Hislop-Jambrich, Jacqui; Besier, Thor F

    2016-12-08

    Rapid generation of lower limb musculoskeletal models is essential for clinically applicable patient-specific gait modeling. Estimation of muscle and joint contact forces requires accurate representation of bone geometry and pose, as well as their muscle attachment sites, which define muscle moment arms. Motion-capture is a routine part of gait assessment but contains relatively sparse geometric information. Standard methods for creating customized models from motion-capture data scale a reference model without considering natural shape variations. We present an articulated statistical shape model of the left lower limb with embedded anatomical landmarks and muscle attachment regions. This model is used in an automatic workflow, implemented in an easy-to-use software application, that robustly and accurately estimates realistic lower limb bone geometry, pose, and muscle attachment regions from seven commonly used motion-capture landmarks. Estimated bone models were validated on noise-free marker positions to have a lower (p=0.001) surface-to-surface root-mean-squared error of 4.28mm, compared to 5.22mm using standard isotropic scaling. Errors at a variety of anatomical landmarks were also lower (8.6mm versus 10.8mm, p=0.001). We improve upon standard lower limb model scaling methods with shape model-constrained realistic bone geometries, regional muscle attachment sites, and higher accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Cooperative Vehicular Traffic Monitoring in Realistic Low Penetration Scenarios: The COLOMBO Experience

    PubMed Central

    Caselli, Federico; Corradi, Antonio

    2018-01-01

    The relevance of effective and efficient solutions for vehicle traffic surveillance is widely recognized in order to enable advanced strategies for traffic management, e.g., based on dynamically adaptive and decentralized traffic light management. However, most related solutions in the literature, based on the powerful enabler of cooperative vehicular communications, assume the complete penetration rate of connectivity/communication technologies (and willingness to participate in the collaborative surveillance service) over the targeted vehicle population, thus making them not applicable today. The paper proposes an innovative solution for cooperative traffic surveillance based on vehicular communications that is capable of (i) working with low penetration rates of the proposed technology and (ii) collecting a large set of monitoring data about vehicle mobility in targeted areas of interest. The paper presents insights and lessons learnt from the design and implementation work of the proposed solution. Moreover, it reports extensive performance evaluation results collected on realistic simulation scenarios based on the usage of iTETRIS with real traces of vehicular traffic of the city of Bologna. The reported results show the capability of our proposal to consistently estimate the real vehicular traffic even with low penetration rates of our solution (only 10%). PMID:29522427

  12. Alternative supply specifications and estimates of regional supply and demand for stumpage.

    Treesearch

    Kent P. Connaughton; David H. Jackson; Gerard A. Majerus

    1988-01-01

    Four plausible sets of stumpage supply and demand equations were developed and estimated; the demand equation was the same for each set, although the supply equation differed. The supply specifications varied from the model of regional excess demand in which National Forest harvest levels were assumed fixed to a more realistic model in which the harvest on the National...

  13. Deconvolution of continuous paleomagnetic data from pass-through magnetometer: A new algorithm to restore geomagnetic and environmental information based on realistic optimization

    NASA Astrophysics Data System (ADS)

    Oda, Hirokuni; Xuan, Chuang

    2014-10-01

    The development of pass-through superconducting rock magnetometers (SRM) has greatly promoted the collection of paleomagnetic data from continuous long-core samples. The output of a pass-through measurement is smoothed and distorted due to convolution of the magnetization with the magnetometer sensor response. Although several studies have restored high-resolution paleomagnetic signals through deconvolution of pass-through measurements, difficulties in accurately measuring the magnetometer sensor response have hindered the application of deconvolution. We acquired a reliable sensor response of an SRM at Oregon State University based on repeated measurements of a precisely fabricated magnetic point source. In addition, we present an improved deconvolution algorithm based on Akaike's Bayesian Information Criterion (ABIC) minimization, incorporating new parameters to account for errors in sample measurement position and length. The new algorithm was tested using synthetic data constructed by convolving a "true" paleomagnetic signal containing an "excursion" with the sensor response. Realistic noise was added to the synthetic measurement using a Monte Carlo method based on the measurement noise distribution acquired from 200 repeated measurements of a u-channel sample. Deconvolution of 1000 synthetic measurements with realistic noise closely resembled the "true" magnetization, and successfully restored fine-scale magnetization variations including the "excursion." Our analyses show that inaccuracy in sample measurement position and length significantly affects the deconvolution estimate, and can be resolved using the new deconvolution algorithm. Optimized deconvolution of 20 repeated measurements of a u-channel sample yielded highly consistent deconvolution results and estimates of the error in sample measurement position and length, demonstrating the reliability of the new deconvolution algorithm for real pass-through measurements.
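
    The ABIC-based optimization itself is not reproduced here, but the underlying deconvolution problem can be sketched with a simpler frequency-domain (Wiener-style) inverse, assuming the sensor response is known; the response width, noise level, and synthetic "excursion" below are illustrative.

```python
# Minimal sketch: recover a high-resolution signal from a smoothed pass-through
# measurement by Wiener-style deconvolution with a known sensor response.
# A simpler stand-in for the ABIC-based optimization; all values illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 512
z = np.arange(n) * 0.1                        # depth in cm (illustrative spacing)

# "True" magnetization with a short excursion, and a Gaussian sensor response.
true = np.where((z > 20.0) & (z < 22.0), -1.0, 1.0)
response = np.exp(-0.5 * ((z - z[n // 2]) / 0.5) ** 2)   # illustrative 0.5 cm width
response /= response.sum()

measured = np.convolve(true, response, mode="same") + rng.normal(0.0, 0.01, n)

# Wiener-style deconvolution in the frequency domain.
H = np.fft.fft(np.fft.ifftshift(response))
Y = np.fft.fft(measured)
snr = 100.0                                   # assumed signal-to-noise ratio
recovered = np.real(np.fft.ifft(np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr) * Y))

print("rms difference from the true signal:",
      round(float(np.sqrt(np.mean((recovered - true) ** 2))), 3))
```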

  14. Landslides in Colorado, USA--Impacts and loss estimation for 2010

    USGS Publications Warehouse

    Highland, Lynn M.

    2012-01-01

    The focus of this study is to investigate landslides and consequent losses which affected Colorado in the year 2010. By obtaining landslide reports from a variety of sources, this report will demonstrate the feasibility of creating a profile of landslides and their effects on communities. A short overview of the current status of landslide-loss studies for the United States is introduced, followed by a compilation of landslide occurrence and associated losses and impacts which affected Colorado for the year 2010. Direct costs are summarized in descriptive and tabular form, and where possible, indirect costs are also noted or estimated. Total direct costs of landslides in Colorado for the year 2010 were approximately $9,149,335.00 (2010 U.S. dollars). (Since not all data for damages and costs were obtained, this figure realistically could be considerably higher.) Indirect costs were noted where available but are not totaled due to the fact that most indirect costs were not obtainable for various reasons outlined later in this report. Casualty data are considered as being within the scope of loss evaluation, and are reported in Appendix 1, but are not assigned dollar losses. More details on the source material for loss data not found in the reference section are reported in Appendix 2, and Appendix 3 summarizes notes on landslide-loss investigations in general and lessons learned during the process of loss-data collection.

  15. Empirical Evidence for Niss' "Implemented Anticipation" in Mathematising Realistic Situations

    ERIC Educational Resources Information Center

    Stillman, Gloria; Brown, Jill P.

    2012-01-01

    Mathematisation of realistic situations is an on-going focus of research. Classroom data from a Year 9 class participating in a program of structured modelling of real situations was analysed for evidence of Niss's theoretical construct, implemented anticipation, during mathematisation. Evidence was found for two of three proposed aspects. In…

  16. Rehand: Realistic electric prosthetic hand created with a 3D printer.

    PubMed

    Yoshikawa, Masahiro; Sato, Ryo; Higashihara, Takanori; Ogasawara, Tsukasa; Kawashima, Noritaka

    2015-01-01

    Myoelectric prosthetic hands provide an appearance with five fingers and a grasping function to forearm amputees. However, they have problems in weight, appearance, and cost. This paper reports on the Rehand, a realistic electric prosthetic hand created with a 3D printer. It provides a realistic appearance that is the same as a cosmetic prosthetic hand, together with a grasping function. A simple link mechanism with one linear actuator for grasping and 3D printed parts achieve low cost, light weight, and ease of maintenance. An operating system based on a distance sensor provides a natural operability equivalent to the myoelectric control system. A supporter socket allows amputees to wear the prosthetic hand easily. An evaluation using the Southampton Hand Assessment Procedure (SHAP) demonstrated that an amputee was able to handle various objects and perform everyday activities with the Rehand.

  17. Large-System Transformation in Health Care: A Realist Review

    PubMed Central

    Best, Allan; Greenhalgh, Trisha; Lewis, Steven; Saul, Jessie E; Carroll, Simon; Bitz, Jennifer

    2012-01-01

    Context An evidence base that addresses issues of complexity and context is urgently needed for large-system transformation (LST) and health care reform. Fundamental conceptual and methodological challenges also must be addressed. The Saskatchewan Ministry of Health in Canada requested a six-month synthesis project to guide four major policy development and strategy initiatives focused on patient- and family-centered care, primary health care renewal, quality improvement, and surgical wait lists. The aims of the review were to analyze examples of successful and less successful transformation initiatives, to synthesize knowledge of the underlying mechanisms, to clarify the role of government, and to outline options for evaluation. Methods We used realist review, whose working assumption is that a particular intervention triggers particular mechanisms of change. Mechanisms may be more or less effective in producing their intended outcomes, depending on their interaction with various contextual factors. We explain the variations in outcome as the interplay between context and mechanisms. We nested this analytic approach in a macro framing of complex adaptive systems (CAS). Findings Our rapid realist review identified five “simple rules” of LST that were likely to enhance the success of the target initiatives: (1) blend designated leadership with distributed leadership; (2) establish feedback loops; (3) attend to history; (4) engage physicians; and (5) include patients and families. These principles play out differently in different contexts affecting human behavior (and thereby contributing to change) through a wide range of different mechanisms. Conclusions Realist review methodology can be applied in combination with a complex system lens on published literature to produce a knowledge synthesis that informs a prospective change effort in large-system transformation. A collaborative process engaging both research producers and research users contributes to local

  18. Stationary echo canceling in velocity estimation by time-domain cross-correlation.

    PubMed

    Jensen, J A

    1993-01-01

    The application of stationary echo canceling to ultrasonic estimation of blood velocities using time-domain cross-correlation is investigated. Expressions are derived that show the influence from the echo canceler on the signals that enter the cross-correlation estimator. It is demonstrated that the filtration results in a velocity-dependent degradation of the signal-to-noise ratio. An analytic expression is given for the degradation for a realistic pulse. The probability of correct detection at low signal-to-noise ratios is influenced by signal-to-noise ratio, transducer bandwidth, center frequency, number of samples in the range gate, and number of A-lines employed in the estimation. Quantitative results calculated by a simple simulation program are given for the variation in probability from these parameters. An index reflecting the reliability of the estimate at hand can be calculated from the actual cross-correlation estimate by a simple formula and used in rejecting poor estimates or in displaying the reliability of the velocity estimated.
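
    The core estimator discussed above can be sketched in a few lines: the axial shift between two successive RF A-lines is located by time-domain cross-correlation and converted to a velocity via v = c * fprf * lag / (2 * fs). The pulse, noise level, and acquisition parameters below are synthetic placeholders, not the paper's simulation settings.

```python
import numpy as np

# Minimal sketch of time-domain cross-correlation velocity estimation between
# two successive RF A-lines. Signals and parameters are synthetic stand-ins.

fs = 100e6            # RF sampling frequency (Hz)
f0 = 5e6              # transducer center frequency (Hz)
c = 1540.0            # speed of sound (m/s)
fprf = 5e3            # pulse repetition frequency (Hz)
v_true = 0.3          # axial velocity (m/s), away from the transducer

t = np.arange(0, 4e-6, 1 / fs)
pulse = np.sin(2 * np.pi * f0 * t) * np.hanning(t.size)

# Second line is a delayed copy of the first: t_shift = 2 * v * Tprf / c
shift_samples = 2 * v_true / (c * fprf) * fs
line1 = pulse + np.random.normal(0, 0.05, t.size)
line2 = np.interp(np.arange(t.size) - shift_samples,
                  np.arange(t.size), pulse) + np.random.normal(0, 0.05, t.size)

# Cross-correlate and locate the peak lag
xcorr = np.correlate(line2, line1, mode="full")
lags = np.arange(-t.size + 1, t.size)
peak = lags[np.argmax(xcorr)]

v_est = peak * c * fprf / (2 * fs)
print(f"true {v_true:.3f} m/s, estimated {v_est:.3f} m/s")
```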

  19. Using realist synthesis to understand the mechanisms of interprofessional teamwork in health and social care.

    PubMed

    Hewitt, Gillian; Sims, Sarah; Harris, Ruth

    2014-11-01

    Realist synthesis offers a novel and innovative way to interrogate the large literature on interprofessional teamwork in health and social care teams. This article introduces realist synthesis and its approach to identifying and testing the underpinning processes (or "mechanisms") that make an intervention work, the contexts that trigger those mechanisms and their subsequent outcomes. A realist synthesis of the evidence on interprofessional teamwork is described. Thirteen mechanisms were identified in the synthesis and findings for one mechanism, called "Support and value" are presented in this paper. The evidence for the other twelve mechanisms ("collaboration and coordination", "pooling of resources", "individual learning", "role blurring", "efficient, open and equitable communication", "tactical communication", "shared responsibility and influence", "team behavioural norms", "shared responsibility and influence", "critically reviewing performance and decisions", "generating and implementing new ideas" and "leadership") are reported in a further three papers in this series. The "support and value" mechanism referred to the ways in which team members supported one another, respected other's skills and abilities and valued each other's contributions. "Support and value" was present in some, but far from all, teams and a number of contexts that explained this variation were identified. The article concludes with a discussion of the challenges and benefits of undertaking this realist synthesis.

  20. The effectiveness and cost-effectiveness of shared care: protocol for a realist review.

    PubMed

    Hardwick, Rebecca; Pearson, Mark; Byng, Richard; Anderson, Rob

    2013-02-12

    Shared care (an enhanced information exchange over and above routine outpatient letters) is commonly used to improve care coordination and communication between a specialist and primary care services for people with long-term conditions. Evidence of the effectiveness and cost-effectiveness of shared care is mixed. Informed decision-making for targeting shared care requires a greater understanding of how it works, for whom it works, in what contexts and why. This protocol outlines how realist review methods can be used to synthesise evidence on shared care for long-term conditions. A further aim of the review is to explore economic evaluations of shared care. Economic evaluations are difficult to synthesise due to problems in accounting for contextual differences that impact on resource use and opportunity costs. Realist review methods have been suggested as a way to overcome some of these issues, so this review will also assess whether realist review methods are amenable to synthesising economic evidence. Database and web searching will be carried out in order to find relevant evidence to develop and test programme theories about how shared care works. The review will have two phases. Phase 1 will concentrate on the contextual conditions and mechanisms that influence how shared care works, in order to develop programme theories, which partially explain how it works. Phase 2 will focus on testing these programme theories. A Project Reference Group made up of health service professionals and people with actual experience of long-term conditions will be used to ground the study in real-life experience. Review findings will be disseminated through local and sub-national networks for integrated care and long-term conditions. This realist review will explore why and for whom shared care works, in order to support decision-makers working to improve the effectiveness of care for people outside hospital. The development of realist review methods to take into account cost and

  1. Realistic absorption coefficient of each individual film in a multilayer architecture

    NASA Astrophysics Data System (ADS)

    Cesaria, M.; Caricato, A. P.; Martino, M.

    2015-02-01

    A spectrophotometric strategy, termed multilayer-method (ML-method), is presented and discussed to realistically calculate the absorption coefficient of each individual layer embedded in multilayer architectures without reverse engineering, numerical refinements and assumptions about the layer homogeneity and thickness. The strategy extends in a non-straightforward way a consolidated route, already published by the authors and here termed basic-method, able to accurately characterize an absorbing film covering transparent substrates. The ML-method inherently accounts for non-measurable contribution of the interfaces (including multiple reflections), describes the specific film structure as determined by the multilayer architecture and used deposition approach and parameters, exploits simple mathematics, and has a wide range of applicability (high-to-weak absorption regions, thick-to-ultrathin films). Reliability tests are performed on films and multilayers based on a well-known material (indium tin oxide) by deliberately changing the film structural quality through doping, thickness-tuning and underlying supporting-film. Results are found consistent with information obtained by standard (optical and structural) analysis, the basic-method and band gap values reported in the literature. The discussed example-applications demonstrate the ability of the ML-method to overcome the drawbacks commonly limiting an accurate description of multilayer architectures.
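
    For orientation, the sketch below applies only the textbook single-film relation alpha ≈ (1/d) * ln[(1 - R)^2 / T] to synthetic transmittance and reflectance spectra; it neglects the interface and multiple-reflection contributions that the ML-method is designed to capture, and all spectra and thicknesses are placeholders.

```python
import numpy as np

# Illustrative only: single-pass estimate of the absorption coefficient of a
# film of thickness d from toy transmittance (T) and reflectance (R) spectra.
# Not the ML-method of the paper.

d = 200e-7                                           # 200 nm film thickness, in cm
wavelength = np.linspace(300, 800, 501)              # nm
T = np.clip(0.05 + 0.9 / (1 + np.exp((380 - wavelength) / 15)), 1e-4, 1.0)  # toy T
R = np.full_like(wavelength, 0.12)                   # toy R

# Clip at zero where the toy spectra would make the logarithm slightly negative
alpha = np.maximum(np.log((1 - R) ** 2 / T) / d, 0.0)   # cm^-1

print("alpha at 350 nm ≈ %.2e cm^-1" % np.interp(350.0, wavelength, alpha))
```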

  2. Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments

    PubMed Central

    Slater, Mel

    2009-01-01

    In this paper, I address the question as to why participants tend to respond realistically to situations and events portrayed within an immersive virtual reality system. The idea is put forward, based on the experience of a large number of experimental studies, that there are two orthogonal components that contribute to this realistic response. The first is ‘being there’, often called ‘presence’, the qualia of having a sensation of being in a real place. We call this place illusion (PI). Second, plausibility illusion (Psi) refers to the illusion that the scenario being depicted is actually occurring. In the case of both PI and Psi the participant knows for sure that they are not ‘there’ and that the events are not occurring. PI is constrained by the sensorimotor contingencies afforded by the virtual reality system. Psi is determined by the extent to which the system can produce events that directly relate to the participant, the overall credibility of the scenario being depicted in comparison with expectations. We argue that when both PI and Psi occur, participants will respond realistically to the virtual reality. PMID:19884149

  3. Realistic Simulations of Coronagraphic Observations with WFIRST

    NASA Astrophysics Data System (ADS)

    Rizzo, Maxime; Zimmerman, Neil; Roberge, Aki; Lincowski, Andrew; Arney, Giada; Stark, Chris; Jansen, Tiffany; Turnbull, Margaret; WFIRST Science Investigation Team (Turnbull)

    2018-01-01

    We present a framework to simulate observing scenarios with the WFIRST Coronagraphic Instrument (CGI). The Coronagraph and Rapid Imaging Spectrograph in Python (crispy) is an open-source package that can be used to create CGI data products for analysis and development of post-processing routines. The software convolves time-varying coronagraphic PSFs with realistic astrophysical scenes which contain a planetary architecture, a consistent dust structure, and a background field composed of stars and galaxies. The focal plane can be read out by a WFIRST electron-multiplying CCD model directly, or passed through a WFIRST integral field spectrograph model first. Several elementary post-processing routines are provided as part of the package.
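
    The central operation, convolving a coronagraphic PSF with an astrophysical scene to form a focal-plane frame, can be sketched generically as below; this is not the crispy API, and the PSF shape, contrasts, and scene content are placeholders.

```python
import numpy as np
from scipy.signal import fftconvolve

# Generic sketch (not the crispy package): convolve a coronagraphic PSF with a
# sparse astrophysical scene (stellar residual + planet + faint background) to
# form a noiseless focal-plane frame. All values are placeholders.

npix = 256
y, x = np.mgrid[:npix, :npix] - npix // 2

# Toy PSF: diffraction core approximated by a narrow Gaussian
psf = np.exp(-0.5 * (x ** 2 + y ** 2) / 1.5 ** 2)
psf /= psf.sum()

scene = np.zeros((npix, npix))
scene[npix // 2, npix // 2] = 1e-9              # suppressed stellar residual (contrast units)
scene[npix // 2 + 20, npix // 2 + 5] = 5e-10    # planet
scene += 1e-12 * np.random.rand(npix, npix)     # smooth background/dust placeholder

frame = fftconvolve(scene, psf, mode="same")

# Detector and photon noise for a given exposure would be applied to a scaled
# version of this frame; omitted here to keep the sketch minimal.
print("peak planet signal:", frame[npix // 2 + 20, npix // 2 + 5])
```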

  4. Development of a Shipboard Remote Control and Telemetry Experimental System for Large-Scale Model’s Motions and Loads Measurement in Realistic Sea Waves

    PubMed Central

    Jiao, Jialong; Ren, Huilong; Adenya, Christiaan Adika; Chen, Chaohe

    2017-01-01

    Wave-induced motion and load responses are important criteria for ship performance evaluation. Physical experiments have long been an indispensable tool in the prediction of a ship's navigation state, speed, motions, accelerations, sectional loads and wave impact pressure. Currently, the majority of experiments are conducted in a laboratory tank environment, where the wave environments differ from realistic sea waves. In this paper, a laboratory tank testing system for ship motions and loads measurement is reviewed and reported first. Then, a novel large-scale model measurement technique is developed based on the laboratory testing foundations to obtain accurate motion and load responses of ships in realistic sea conditions. For this purpose, an advanced remote control and telemetry experimental system was developed in-house to allow for the implementation of large-scale model seakeeping measurement at sea. The experimental system includes a series of sensors, e.g., the Global Position System/Inertial Navigation System (GPS/INS) module, course top, optical fiber sensors, strain gauges, pressure sensors and accelerometers. The developed measurement system was tested by field experiments in coastal seas, which indicates that the proposed large-scale model testing scheme is feasible. Meaningful data including ocean environment parameters, ship navigation state, motions and loads were obtained through the sea trial campaign. PMID:29109379

  5. Developing Skills: Realistic Work Environments in Further Education. FEDA Reports.

    ERIC Educational Resources Information Center

    Armstrong, Paul; Hughes, Maria

    To establish the prevalence and perceived value of realistic work environments (RWEs) in colleges and their use as learning resources, all further education (FE) sector colleges in Great Britain were surveyed in the summer of 1998. Of 175 colleges that responded to 2 questionnaires for senior college managers and RWE managers, 127 had at least 1…

  6. A new class of methods for functional connectivity estimation

    NASA Astrophysics Data System (ADS)

    Lin, Wutu

    Measuring functional connectivity from neural recordings is important in understanding processing in cortical networks. Covariance-based methods are the current gold standard for functional connectivity estimation. However, the link between the pair-wise correlations and the physiological connections inside the neural network is unclear. Therefore, the power of inferring the physiological basis from functional connectivity estimation is limited. To build a stronger tie and better understand the relationship between functional connectivity and the physiological neural network, we need (1) a realistic model to simulate different types of neural recordings with known ground truth for benchmarking; (2) a new functional connectivity method that produces estimates closely reflecting the physiological basis. In this thesis, (1) I tune a spiking neural network model to match human sleep EEG data, (2) introduce a new class of methods for estimating connectivity from different kinds of neural signals and provide a theoretical proof of its superiority, (3) apply it to simulated fMRI data as an application.
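
    The covariance/correlation baseline referred to above amounts to the few lines below; the synthetic three-channel example shows that the zero-lag correlation matrix flags an instantaneous coupling but says nothing about the underlying physiological wiring.

```python
import numpy as np

# Baseline covariance/correlation estimate of "functional connectivity" on
# synthetic data: channel 2 is instantaneously coupled to channel 0, so the
# zero-lag correlation matrix flags the link, but nothing in it identifies the
# physiological (directed) connection. Data are random placeholders.

rng = np.random.default_rng(0)
n_samples = 5000
x0 = rng.normal(size=n_samples)
x1 = rng.normal(size=n_samples)                    # independent channel
x2 = 0.8 * x0 + 0.2 * rng.normal(size=n_samples)   # driven by channel 0

signals = np.vstack([x0, x1, x2])                  # channels x time
fc = np.corrcoef(signals)                          # functional connectivity matrix
print(np.round(fc, 2))
```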

  7. An MLE method for finding LKB NTCP model parameters using Monte Carlo uncertainty estimates

    NASA Astrophysics Data System (ADS)

    Carolan, Martin; Oborn, Brad; Foo, Kerwyn; Haworth, Annette; Gulliford, Sarah; Ebert, Martin

    2014-03-01

    The aims of this work were to establish a program to fit NTCP models to clinical data with multiple toxicity endpoints, to test the method using a realistic test dataset, to compare three methods for estimating confidence intervals for the fitted parameters and to characterise the speed and performance of the program.
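
    For context, the LKB NTCP model named in the title maps a dose-volume histogram to a complication probability through the generalized equivalent uniform dose, NTCP = Phi((gEUD - TD50) / (m * TD50)). The sketch below fits (TD50, m, n) by maximum likelihood to simulated binary outcomes; it is a single-endpoint toy, not the multi-endpoint program of the paper, and all data are randomly generated.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# Sketch of the Lyman-Kutcher-Burman (LKB) NTCP model and a simple maximum-
# likelihood fit of (TD50, m, n) to binary toxicity outcomes. DVHs and outcomes
# are randomly generated placeholders.

def gEUD(dose_bins, vol_fracs, n):
    """Generalized equivalent uniform dose for one differential DVH."""
    return np.sum(vol_fracs * dose_bins ** (1.0 / n)) ** n

def ntcp(dose_bins, vol_fracs, TD50, m, n):
    t = (gEUD(dose_bins, vol_fracs, n) - TD50) / (m * TD50)
    return norm.cdf(t)

rng = np.random.default_rng(1)
n_pat = 60
dose_bins = np.linspace(1, 70, 35)                       # Gy
dvhs = rng.dirichlet(np.ones(35), size=n_pat)            # fractional volumes
true = dict(TD50=45.0, m=0.18, n=0.7)
p = np.array([ntcp(dose_bins, v, **true) for v in dvhs])
tox = rng.random(n_pat) < p                              # observed toxicity

def neg_log_lik(params):
    TD50, m, n = params
    pr = np.clip(np.array([ntcp(dose_bins, v, TD50, m, n) for v in dvhs]),
                 1e-9, 1 - 1e-9)
    return -np.sum(tox * np.log(pr) + (~tox) * np.log(1 - pr))

fit = minimize(neg_log_lik, x0=[50.0, 0.3, 0.5],
               bounds=[(20, 80), (0.05, 1.0), (0.05, 3.0)], method="L-BFGS-B")
print("fitted TD50, m, n:", np.round(fit.x, 3))
```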

  8. Toxicity of environmentally realistic concentrations of chlorpyrifos and terbuthylazine in indoor microcosms.

    PubMed

    Pereira, Ana Santos; Cerejeira, Maria José; Daam, Michiel A

    2017-09-01

    Few studies have been conducted into the evaluation of environmentally realistic pesticide mixtures using model ecosystems. In the present study, the effects of single and combined environmentally realistic concentrations of the herbicide terbuthylazine and the insecticide chlorpyrifos were evaluated using laboratory microcosms. Direct toxic effects of chlorpyrifos were noted on copepod nauplii and cladocerans and the recovery of the latter was likely related with the decrease observed in rotifer abundances. Terbuthylazine potentiated the effect of chlorpyrifos on feeding rates of Daphnia magna, presumably by triggering the transformation of chlorpyrifos to more toxic oxon-analogs. Possible food-web interactions resulting from multiple chemical (and other) stressors likely to be present in edge-of-field water bodies need to be further evaluated. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. A task-related and resting state realistic fMRI simulator for fMRI data validation

    NASA Astrophysics Data System (ADS)

    Hill, Jason E.; Liu, Xiangyu; Nutter, Brian; Mitra, Sunanda

    2017-02-01

    After more than 25 years of published functional magnetic resonance imaging (fMRI) studies, careful scrutiny reveals that most of the reported results lack fully decisive validation. The complex nature of fMRI data generation and acquisition results in unavoidable uncertainties in the true estimation and interpretation of both task-related activation maps and resting state functional connectivity networks, despite the use of various statistical data analysis methodologies. The goal of developing the proposed STANCE (Spontaneous and Task-related Activation of Neuronally Correlated Events) simulator is to generate realistic task-related and/or resting-state 4D blood oxygenation level dependent (BOLD) signals, given the experimental paradigm and scan protocol, by using digital phantoms of twenty normal brains available from BrainWeb (http://brainweb.bic.mni.mcgill.ca/brainweb/). The proposed simulator will include estimated system and modelled physiological noise as well as motion to serve as a reference to measured brain activities. In its current form, STANCE is a MATLAB toolbox with command line functions serving as an open-source add-on to SPM8 (http://www.fil.ion.ucl.ac.uk/spm/software/spm8/). The STANCE simulator has been designed in a modular framework so that the hemodynamic response (HR) and various noise models can be iteratively improved to include evolving knowledge about such models.
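
    The basic ingredient of such a simulator, a task-related BOLD time course formed by convolving the experimental paradigm with a haemodynamic response and adding noise and drift, can be sketched as follows; the double-gamma HRF parameters and noise levels are conventional placeholders, not STANCE's models.

```python
import numpy as np
from math import gamma

# Minimal sketch of a task-related BOLD time course: a block-design paradigm is
# convolved with a canonical double-gamma haemodynamic response function (HRF),
# then white noise and a slow drift are added. Parameters are common defaults.

TR = 2.0                                     # repetition time (s)
n_vols = 150
t = np.arange(n_vols) * TR

# Block design: 20 s off / 20 s on, repeated
stimulus = ((t // 20) % 2 == 1).astype(float)

def hrf(tt, a1=6.0, a2=16.0, b=1.0, c=1.0 / 6.0):
    """Canonical double-gamma HRF sampled at times tt (s)."""
    g = lambda a: tt ** (a - 1) * np.exp(-tt / b) / (b ** a * gamma(a))
    return g(a1) - c * g(a2)

h = hrf(np.arange(0.0, 32.0, TR))
bold = np.convolve(stimulus, h)[:n_vols]

signal = 100.0 + 1.5 * bold                    # roughly 1-2% activation on baseline 100
signal += 0.01 * t                             # slow scanner drift
signal += np.random.normal(0.0, 0.8, n_vols)   # noise proxy (thermal + physiological)
print(np.round(signal[:5], 2))
```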

  10. Monte Carlo simulation of the operational quantities at the realistic mixed neutron-photon radiation fields CANEL and SIGMA.

    PubMed

    Lacoste, V; Gressier, V

    2007-01-01

    The Institute for Radiological Protection and Nuclear Safety owns two facilities producing realistic mixed neutron-photon radiation fields, CANEL, an accelerator driven moderator modular device, and SIGMA, a graphite moderated americium-beryllium assembly. These fields are representative of some of those encountered at nuclear workplaces, and the corresponding facilities are designed and used for calibration of various instruments, such as survey meters, personal dosimeters or spectrometric devices. In the framework of the European project EVIDOS, irradiations of personal dosimeters were performed at CANEL and SIGMA. Monte Carlo calculations were performed to estimate the reference values of the personal dose equivalent at both facilities. The Hp(10) values were calculated for three different angular positions, 0 degrees, 45 degrees and 75 degrees, of an ICRU phantom located at the position of irradiation.

  11. Power handling of a segmented bulk W tile for JET under realistic plasma scenarios

    NASA Astrophysics Data System (ADS)

    Mertens, Ph.; Coenen, J. W.; Eich, T.; Huber, A.; Jachmich, S.; Nicolai, D.; Riccardo, V.; Senik, K.; Samm, U.; JET-EFDA Contributors

    2011-08-01

    A solid tungsten divertor row has been designed for JET in the frame of the ITER-like Wall project (ILW). The plasma-facing tiles are segmented in four stacks of tungsten lamellae oriented in the toroidal direction. Earlier estimations of the expected tile performance were carried out mostly for engineering purposes, to compare the permissible heat load with the power density of 7 MW/m2 originally specified for the ILW as a uniform load for 10 s. The global thermal model developed for the W modules delivers results for more realistic plasma footprints: the poloidal extension of the outer strike point was reduced from the full lamella width of 62 mm to ⩾15 mm. Model validation is given by the experimental exposure of a 1:1 prototype stack in the ion beam facility MARION (incidence ~6°, load E ⩽ 66 MJ/m2 on the wetted surface). Spreading the deposited energy by appropriate sweeping over one or several stacks in the torus is beneficial for the tungsten lamellae and for the support structure.

  12. Estimation of chromatic errors from broadband images for high contrast imaging: sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Sirbu, Dan; Belikov, Ruslan

    2016-01-01

    Many concepts have been proposed to enable direct imaging of planets around nearby stars, which would enable spectroscopic observations of their atmospheres and the potential discovery of biomarkers. The main technical challenge associated with direct imaging of exoplanets is to effectively control both the diffraction and scattered light from the star so that the dim planetary companion can be seen. Usage of an internal coronagraph with an adaptive optical system for wavefront correction is one of the most mature methods and is being developed as an instrument addition to the WFIRST-AFTA space mission. In addition, such instruments as GPI and SPHERE are already being used on the ground and are yielding spectra of giant planets. For the deformable mirror (DM) to recover a dark hole region with sufficiently high contrast in the image plane, mid-spatial frequency wavefront errors must be estimated. To date, most broadband lab demonstrations use narrowband filters to obtain an estimate of the chromaticity of the wavefront error, and this can consume a large percentage of the total integration time. Previously, we have proposed a method to estimate the chromaticity of wavefront errors using only broadband images; we have demonstrated that under idealized conditions wavefront errors can be estimated from images composed of discrete wavelengths. This is achieved by using DM probes with sufficient spatially-localized chromatic diversity. Here we report on the results of a study of the performance of this method with respect to realistic broadband images including noise. Additionally, we study optimal probe patterns that enable reduction of the number of probes used and compare the integration time with narrowband and IFS estimation methods.

  13. Standardized Patients Provide Realistic and Worthwhile Experiences for Athletic Training Students

    ERIC Educational Resources Information Center

    Walker, Stacy E.; Weidner, Thomas G.

    2010-01-01

    Context: Standardized patients are more prominently used to both teach and evaluate students' clinical skills and abilities. Objective: To investigate whether athletic training students perceived an encounter with a standardized patient (SP) as realistic and worthwhile and to determine their perceived comfort in future lower extremity evaluations…

  14. Grid occupancy estimation for environment perception based on belief functions and PCR6

    NASA Astrophysics Data System (ADS)

    Moras, Julien; Dezert, Jean; Pannetier, Benjamin

    2015-05-01

    In this contribution, we propose to improve the grid map occupancy estimation method developed so far based on belief function modeling and the classical Dempster's rule of combination. A grid map offers a useful representation of the perceived world for mobile robotics navigation. It will play a major role in the safety (obstacle avoidance) of the next generations of terrestrial vehicles, as well as for future autonomous navigation systems. In a grid map, the occupancy of each cell representing a small piece of the surrounding area of the robot must be estimated at first from sensor measurements (typically LIDAR, or camera), and then it must also be classified into different classes in order to get a complete and precise perception of the dynamic environment where the robot moves. So far, the estimation and the grid map updating have been done using fusion techniques based on the probabilistic framework, or on the classical belief function framework thanks to an inverse model of the sensors, mainly because the latter offers an interesting management of uncertainties when the quality of available information is low and when the sources of information are conflicting. To improve the performances of the grid map estimation, we propose in this paper to replace Dempster's rule of combination by the PCR6 rule (Proportional Conflict Redistribution rule #6) proposed in DSmT (Dezert-Smarandache) Theory. As an illustrating scenario, we consider a platform moving in a dynamic area and we compare our new realistic simulation results (based on a LIDAR sensor) with those obtained by the probabilistic and the classical belief-based approaches.
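
    The contrast between Dempster's rule and PCR6 can be made concrete for a single occupancy cell with the frame {occupied, free}; the sketch below combines two deliberately conflicting mass functions both ways. The PCR6 expression shown reduces to PCR5 because only two sources are combined, and the input masses are illustrative only.

```python
# Sketch: combine two occupancy-cell mass functions over the frame
# {O (occupied), F (free)} with masses on O, F and the ignorance O∪F, using
# (a) Dempster's rule and (b) the PCR6 rule (identical to PCR5 for two
# sources), which redistributes conflicting mass proportionally instead of
# normalising it away. Input masses are illustrative values only.

def conjunctive(m1, m2):
    """Non-normalised conjunctive combination; returns masses and conflict."""
    c = {"O": m1["O"] * m2["O"] + m1["O"] * m2["OF"] + m1["OF"] * m2["O"],
         "F": m1["F"] * m2["F"] + m1["F"] * m2["OF"] + m1["OF"] * m2["F"],
         "OF": m1["OF"] * m2["OF"]}
    conflict = m1["O"] * m2["F"] + m1["F"] * m2["O"]
    return c, conflict

def dempster(m1, m2):
    c, k = conjunctive(m1, m2)
    return {a: v / (1.0 - k) for a, v in c.items()}

def pcr6(m1, m2):
    c, _ = conjunctive(m1, m2)
    # redistribute each partial conflict back to the two hypotheses involved
    for x, y in [("O", "F"), ("F", "O")]:
        if m1[x] * m2[y] > 0:
            c[x] += m1[x] ** 2 * m2[y] / (m1[x] + m2[y])
            c[y] += m2[y] ** 2 * m1[x] / (m1[x] + m2[y])
    return c

# Highly conflicting sensor evidences for one grid cell
m_lidar = {"O": 0.8, "F": 0.1, "OF": 0.1}
m_prior = {"O": 0.1, "F": 0.8, "OF": 0.1}
print("Dempster:", dempster(m_lidar, m_prior))
print("PCR6    :", pcr6(m_lidar, m_prior))
```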

  15. The estimation of 3D SAR distributions in the human head from mobile phone compliance testing data for epidemiological studies

    NASA Astrophysics Data System (ADS)

    Wake, Kanako; Varsier, Nadège; Watanabe, Soichi; Taki, Masao; Wiart, Joe; Mann, Simon; Deltour, Isabelle; Cardis, Elisabeth

    2009-10-01

    A worldwide epidemiological study called 'INTERPHONE' has been conducted to estimate the hypothetical relationship between brain tumors and mobile phone use. In this study, we proposed a method to estimate 3D distribution of the specific absorption rate (SAR) in the human head due to mobile phone use to provide the exposure gradient for epidemiological studies. 3D SAR distributions due to exposure to an electromagnetic field from mobile phones are estimated from mobile phone compliance testing data for actual devices. The data for compliance testing are measured only on the surface in the region near the device and in a small 3D region around the maximum on the surface in a homogeneous phantom with a specific shape. The method includes an interpolation/extrapolation and a head shape conversion. With the interpolation/extrapolation, SAR distributions in the whole head are estimated from the limited measured data. 3D SAR distributions in the numerical head models, where the tumor location is identified in the epidemiological studies, are obtained from measured SAR data with the head shape conversion by projection. Validation of the proposed method was performed experimentally and numerically. It was confirmed that the proposed method provided good estimation of 3D SAR distribution in the head, especially in the brain, which is the tissue of major interest in epidemiological studies. We conclude that it is possible to estimate 3D SAR distributions in a realistic head model from the data obtained by compliance testing measurements to provide a measure for the exposure gradient in specific locations of the brain for the purpose of exposure assessment in epidemiological studies. The proposed method has been used in several studies in the INTERPHONE.

  16. The estimation of 3D SAR distributions in the human head from mobile phone compliance testing data for epidemiological studies.

    PubMed

    Wake, Kanako; Varsier, Nadège; Watanabe, Soichi; Taki, Masao; Wiart, Joe; Mann, Simon; Deltour, Isabelle; Cardis, Elisabeth

    2009-10-07

    A worldwide epidemiological study called 'INTERPHONE' has been conducted to estimate the hypothetical relationship between brain tumors and mobile phone use. In this study, we proposed a method to estimate 3D distribution of the specific absorption rate (SAR) in the human head due to mobile phone use to provide the exposure gradient for epidemiological studies. 3D SAR distributions due to exposure to an electromagnetic field from mobile phones are estimated from mobile phone compliance testing data for actual devices. The data for compliance testing are measured only on the surface in the region near the device and in a small 3D region around the maximum on the surface in a homogeneous phantom with a specific shape. The method includes an interpolation/extrapolation and a head shape conversion. With the interpolation/extrapolation, SAR distributions in the whole head are estimated from the limited measured data. 3D SAR distributions in the numerical head models, where the tumor location is identified in the epidemiological studies, are obtained from measured SAR data with the head shape conversion by projection. Validation of the proposed method was performed experimentally and numerically. It was confirmed that the proposed method provided good estimation of 3D SAR distribution in the head, especially in the brain, which is the tissue of major interest in epidemiological studies. We conclude that it is possible to estimate 3D SAR distributions in a realistic head model from the data obtained by compliance testing measurements to provide a measure for the exposure gradient in specific locations of the brain for the purpose of exposure assessment in epidemiological studies. The proposed method has been used in several studies in the INTERPHONE.

  17. Realistic Modeling of Multi-Scale MHD Dynamics of the Solar Atmosphere

    NASA Technical Reports Server (NTRS)

    Kitiashvili, Irina; Mansour, Nagi N.; Wray, Alan; Couvidat, Sebastian; Yoon, Seokkwan; Kosovichev, Alexander

    2014-01-01

    Realistic 3D radiative MHD simulations open new perspectives for understanding the turbulent dynamics of the solar surface, its coupling to the atmosphere, and the physical mechanisms of generation and transport of non-thermal energy. Traditionally, plasma eruptions and wave phenomena in the solar atmosphere are modeled by prescribing artificial driving mechanisms using magnetic or gas pressure forces that might arise from magnetic field emergence or reconnection instabilities. In contrast, our 'ab initio' simulations provide a realistic description of solar dynamics naturally driven by solar energy flow. By simulating the upper convection zone and the solar atmosphere, we can investigate in detail the physical processes of turbulent magnetoconvection, generation and amplification of magnetic fields, excitation of MHD waves, and plasma eruptions. We present recent simulation results of the multi-scale dynamics of quiet-Sun regions, and energetic effects in the atmosphere and compare with observations. For the comparisons we calculate synthetic spectro-polarimetric data to model observational data of SDO, Hinode, and New Solar Telescope.

  18. Battery state-of-charge estimation using approximate least squares

    NASA Astrophysics Data System (ADS)

    Unterrieder, C.; Zhang, C.; Lunglmayr, M.; Priewasser, R.; Marsili, S.; Huemer, M.

    2015-03-01

    In recent years, much effort has been spent to extend the runtime of battery-powered electronic applications. In order to improve the utilization of the available cell capacity, high precision estimation approaches for battery-specific parameters are needed. In this work, an approximate least squares estimation scheme is proposed for the estimation of the battery state-of-charge (SoC). The SoC is determined based on the prediction of the battery's electromotive force. The proposed approach allows for an improved re-initialization of the Coulomb counting (CC) based SoC estimation method. Experimental results for an implementation of the estimation scheme on a fuel gauge system on chip are illustrated. Implementation details and design guidelines are presented. The performance of the presented concept is evaluated for realistic operating conditions (temperature effects, aging, standby current, etc.). For the considered test case of a GSM/UMTS load current pattern of a mobile phone, the proposed method is able to re-initialize the CC-method with a high accuracy, while state-of-the-art methods fail to perform a re-initialization.
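
    A minimal sketch of the idea, re-initialising Coulomb counting from a least-squares prediction of the electromotive force, is given below; the affine EMF-vs-SoC map, internal resistance model, and load profile are assumptions for illustration, not the scheme implemented on the fuel-gauge chip.

```python
import numpy as np

# Sketch of SoC estimation by re-initialising Coulomb counting from a
# least-squares prediction of the electromotive force (EMF). The cell model
# (v = EMF - R*i, affine EMF-vs-SoC map) and all numbers are illustrative.

capacity_As = 3.0 * 3600                       # 3 Ah cell
a, b = 0.9, 3.3                                # assumed EMF model: emf = a*soc + b

# Terminal voltage/current pairs sampled under a varying load
i = np.array([0.1, 0.5, 1.0, 1.5, 2.0])        # A (discharge positive)
true_R, true_soc = 0.05, 0.68
v = (a * true_soc + b) - true_R * i + np.random.normal(0, 0.002, i.size)

# Least squares for [emf, R] in v = emf - R*i
A = np.column_stack([np.ones_like(i), -i])
(emf_hat, R_hat), *_ = np.linalg.lstsq(A, v, rcond=None)
soc_reinit = (emf_hat - b) / a
print("estimated EMF %.3f V, R %.3f ohm, SoC %.3f" % (emf_hat, R_hat, soc_reinit))

# Coulomb counting continues from the re-initialised SoC
dt = 1.0
load = np.tile(np.r_[np.full(10, 2.0), np.full(40, 0.2)], 20)   # GSM-like bursts, A
soc_trace = soc_reinit - np.cumsum(load) * dt / capacity_As
print("SoC after load pattern: %.3f" % soc_trace[-1])
```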

  19. Modeling the Hyperdistribution of Item Parameters To Improve the Accuracy of Recovery in Estimation Procedures.

    ERIC Educational Resources Information Center

    Matthews-Lopez, Joy L.; Hombo, Catherine M.

    The purpose of this study was to examine the recovery of item parameters in simulated Automatic Item Generation (AIG) conditions, using Markov chain Monte Carlo (MCMC) estimation methods to attempt to recover the generating distributions. To do this, variability in item and ability parameters was manipulated. Realistic AIG conditions were…

  20. Flexible functional regression methods for estimating individualized treatment regimes.

    PubMed

    Ciarleglio, Adam; Petkova, Eva; Tarpey, Thaddeus; Ogden, R Todd

    2016-01-01

    A major focus of personalized medicine is on the development of individualized treatment rules. Good decision rules have the potential to significantly advance patient care and reduce the burden of a host of diseases. Statistical methods for developing such rules are progressing rapidly, but few methods have considered the use of pre-treatment functional data to guide decision-making. Furthermore, those methods that do allow for the incorporation of functional pre-treatment covariates typically make strong assumptions about the relationships between the functional covariates and the response of interest. We propose two approaches for using functional data to select an optimal treatment that address some of the shortcomings of previously developed methods. Specifically, we combine the flexibility of functional additive regression models with Q-learning or A-learning in order to obtain treatment decision rules. Properties of the corresponding estimators are discussed. Our approaches are evaluated in several realistic settings using synthetic data and are applied to real data arising from a clinical trial comparing two treatments for major depressive disorder in which baseline imaging data are available for subjects who are subsequently treated.
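
    A much-simplified, single-stage analogue of the approach can be sketched as follows: the functional covariate is reduced to basis scores, a linear Q-model with treatment-by-score interactions is fitted by ordinary least squares, and the rule picks the treatment with the larger predicted outcome. This linear toy stands in for the functional additive regression models of the paper; all data are simulated.

```python
import numpy as np

# Simplified single-stage Q-learning sketch with a functional pre-treatment
# covariate reduced to two cosine-basis scores. A linear stand-in, not the
# functional additive models of the paper; all data are simulated.

rng = np.random.default_rng(3)
n, n_grid = 300, 50
tgrid = np.linspace(0, 1, n_grid)
basis = np.vstack([np.cos(np.pi * tgrid), np.cos(2 * np.pi * tgrid)])

# Functional covariates and their projections onto the basis
curves = rng.normal(size=(n, 2)) @ basis + 0.1 * rng.normal(size=(n, n_grid))
scores = curves @ basis.T / n_grid                  # n x 2 summary features

a = rng.integers(0, 2, n)                           # randomised treatment 0/1
# Outcome: treatment 1 helps when the first score is positive
y = 0.5 * scores[:, 0] + a * (1.5 * scores[:, 0]) + rng.normal(0, 0.5, n)

# Q-model: y ~ scores + a + a*scores (ordinary least squares)
X = np.column_stack([np.ones(n), scores, a, a[:, None] * scores])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def q_value(s, treat):
    x = np.r_[1.0, s, treat, treat * s]
    return x @ beta

rule = np.array([int(q_value(s, 1) > q_value(s, 0)) for s in scores])
print("fraction assigned to treatment 1:", rule.mean())
```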

  1. A statistical test of the stability assumption inherent in empirical estimates of economic depreciation.

    PubMed

    Shriver, K A

    1986-01-01

    Realistic estimates of economic depreciation are required for analyses of tax policy, economic growth and production, and national income and wealth. The purpose of this paper is to examine the stability assumption underlying the econometric derivation of empirical estimates of economic depreciation for industrial machinery and equipment. The results suggest that rates of decline in economic depreciation may be reasonably stable over time. Thus, the assumption of a constant rate of economic depreciation may be a reasonable approximation for further empirical economic analyses.

  2. Software phantom with realistic speckle modeling for validation of image analysis methods in echocardiography

    NASA Astrophysics Data System (ADS)

    Law, Yuen C.; Tenbrinck, Daniel; Jiang, Xiaoyi; Kuhlen, Torsten

    2014-03-01

    Computer-assisted processing and interpretation of medical ultrasound images is one of the most challenging tasks within image analysis. Physical phenomena in ultrasonographic images, e.g., the characteristic speckle noise and shadowing effects, make the majority of standard methods from image analysis non-optimal. Furthermore, validation of adapted computer vision methods proves to be difficult due to missing ground truth information. There is no widely accepted software phantom in the community and existing software phantoms are not flexible enough to support the use of specific speckle models for different tissue types, e.g., muscle and fat tissue. In this work we propose an anatomical software phantom with a realistic speckle pattern simulation to fill this gap and provide a flexible tool for validation purposes in medical ultrasound image analysis. We discuss the generation of speckle patterns and perform statistical analysis of the simulated textures to obtain quantitative measures of the realism and accuracy of the resulting textures.
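
    The fully developed speckle model underlying such simulations can be sketched in a few lines: summing many random phasors per resolution cell yields a Rayleigh-distributed envelope whose point signal-to-noise ratio is close to the theoretical value of about 1.91. The image size and scatterer density below are arbitrary, and the phantom's tissue-specific speckle models are not reproduced.

```python
import numpy as np

# Sketch of the standard fully-developed speckle model: the envelope of the sum
# of many random scatterer contributions is Rayleigh distributed. Different
# tissue classes could be given different scatterer densities or mean
# backscatter; this illustrates only the principle.

rng = np.random.default_rng(7)
h, w, n_scat = 128, 128, 30          # image size and scatterers per resolution cell

# Complex scattering amplitude per pixel: sum of random phasors
phases = rng.uniform(0, 2 * np.pi, size=(h, w, n_scat))
amps = rng.rayleigh(1.0, size=(h, w, n_scat))
field = np.sum(amps * np.exp(1j * phases), axis=-1)

envelope = np.abs(field)             # Rayleigh-distributed envelope (speckle)
log_img = 20 * np.log10(envelope / envelope.max() + 1e-6)   # log-compressed display

# Statistical check: envelope SNR of fully developed speckle ≈ 1.91
print("envelope SNR:", envelope.mean() / envelope.std())
print("display dynamic range (dB):", log_img.min())
```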

  3. Performance of internal covariance estimators for cosmic shear correlation functions

    DOE PAGES

    Friedrich, O.; Seitz, S.; Eifler, T. F.; ...

    2015-12-31

    Data re-sampling methods such as the delete-one jackknife are a common tool for estimating the covariance of large scale structure probes. In this paper we investigate the concepts of internal covariance estimation in the context of cosmic shear two-point statistics. We demonstrate how to use log-normal simulations of the convergence field and the corresponding shear field to carry out realistic tests of internal covariance estimators and find that most estimators such as jackknife or sub-sample covariance can reach a satisfactory compromise between bias and variance of the estimated covariance. In a forecast for the complete, 5-year DES survey we show that internally estimated covariance matrices can provide a large fraction of the true uncertainties on cosmological parameters in a 2D cosmic shear analysis. The volume inside contours of constant likelihood in the $\Omega_m$-$\sigma_8$ plane as measured with internally estimated covariance matrices is on average $\gtrsim 85\%$ of the volume derived from the true covariance matrix. The uncertainty on the parameter combination $\Sigma_8 \sim \sigma_8 \Omega_m^{0.5}$ derived from internally estimated covariances is $\sim 90\%$ of the true uncertainty.
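
    The delete-one jackknife estimator mentioned above can be sketched as follows for a generic binned data vector; the per-patch data are synthetic, and the Hartlap factor shown is the standard de-biasing correction applied when inverting a noisy covariance estimate.

```python
import numpy as np

# Sketch of a delete-one jackknife covariance estimate for a binned data vector
# (e.g., a cosmic shear correlation function). Per-patch data are synthetic; in
# practice each "patch" is a sky sub-region removed in turn.

rng = np.random.default_rng(11)
n_patches, n_bins = 40, 8
patch_data = rng.normal(size=(n_patches, n_bins))       # per-patch measurements

# Delete-one resamples: mean of the remaining patches
full_sum = patch_data.sum(axis=0)
jk_samples = (full_sum - patch_data) / (n_patches - 1)

diff = jk_samples - jk_samples.mean(axis=0)
cov_jk = (n_patches - 1) / n_patches * (diff.T @ diff)  # jackknife covariance

# Hartlap factor: standard de-biasing when inverting a noisy covariance estimate
hartlap = (n_patches - n_bins - 2) / (n_patches - 1)
precision = hartlap * np.linalg.inv(cov_jk)
print("condition number of the jackknife covariance:", np.linalg.cond(cov_jk))
```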

  4. The influence of random element displacement on DOA estimates obtained with (Khatri-Rao-)root-MUSIC.

    PubMed

    Inghelbrecht, Veronique; Verhaevert, Jo; van Hecke, Tanja; Rogier, Hendrik

    2014-11-11

    Although a wide range of direction of arrival (DOA) estimation algorithms has been described for a diverse range of array configurations, no specific stochastic analysis framework has been established to assess the probability density function of the error on DOA estimates due to random errors in the array geometry. Therefore, we propose a stochastic collocation method that relies on a generalized polynomial chaos expansion to connect the statistical distribution of random position errors to the resulting distribution of the DOA estimates. We apply this technique to the conventional root-MUSIC and the Khatri-Rao-root-MUSIC methods. According to Monte-Carlo simulations, this novel approach yields a speedup by a factor of more than 100 in terms of CPU-time for a one-dimensional case and by a factor of 56 for a two-dimensional case.
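
    To illustrate how random element-position errors propagate into DOA estimates, the sketch below runs conventional spectral MUSIC (not root-MUSIC or the Khatri-Rao variant, and without the polynomial-chaos machinery of the paper) on a uniform linear array whose true element positions are perturbed while the receiver assumes the nominal geometry; the array size, noise, and error levels are assumptions.

```python
import numpy as np

# Sketch: conventional (spectral) MUSIC on a uniform linear array whose element
# positions carry random errors, illustrating how geometry perturbations bias
# DOA estimates. All parameters are illustrative.

rng = np.random.default_rng(5)
n_elem, n_snap = 8, 200
wavelength = 1.0
d = 0.5 * wavelength
true_doa = np.deg2rad(20.0)

nominal_pos = np.arange(n_elem) * d
actual_pos = nominal_pos + rng.normal(0, 0.02 * wavelength, n_elem)  # position error

def steering(pos, theta):
    return np.exp(-2j * np.pi * pos * np.sin(theta) / wavelength)

# Received snapshots: one far-field source plus noise, using the *actual* geometry
s = (rng.normal(size=n_snap) + 1j * rng.normal(size=n_snap)) / np.sqrt(2)
X = np.outer(steering(actual_pos, true_doa), s)
X += 0.1 * (rng.normal(size=X.shape) + 1j * rng.normal(size=X.shape))

# MUSIC with the *nominal* geometry assumed by the receiver
R = X @ X.conj().T / n_snap
eigval, eigvec = np.linalg.eigh(R)
En = eigvec[:, :-1]                      # noise subspace (one source assumed)

thetas = np.deg2rad(np.linspace(-90, 90, 1801))
spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(nominal_pos, t)) ** 2
                     for t in thetas])
doa_est = np.rad2deg(thetas[np.argmax(spectrum)])
print("true 20.0 deg, estimated %.2f deg" % doa_est)
```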

  5. Lessons learned in using realist evaluation to assess maternal and newborn health programming in rural Bangladesh.

    PubMed

    Adams, Alayne; Sedalia, Saroj; McNab, Shanon; Sarker, Malabika

    2016-03-01

    Realist evaluation furnishes valuable insight to public health practitioners and policy makers about how and why interventions work or don't work. Moving beyond binary measures of success or failure, it provides a systematic approach to understanding what goes on in the 'Black Box' and how implementation decisions in real life contexts can affect intervention effectiveness. This paper reflects on an experience in applying the tenets of realist evaluation to identify optimal implementation strategies for scale-up of Maternal and Newborn Health (MNH) programmes in rural Bangladesh. Supported by UNICEF, the three MNH programmes under consideration employed different implementation models to deliver similar services and meet similar MNH goals. Programme targets included adoption of recommended antenatal, post-natal and essential newborn care practices; health systems strengthening through improved referral, accountability and administrative systems, and increased community knowledge. Drawing on focused examples from this research, seven steps for operationalizing the realist evaluation approach are offered, while emphasizing the need to iterate and innovate in terms of methods and analysis strategies. The paper concludes by reflecting on lessons learned in applying realist evaluation, and the unique insights it yields regarding implementation strategies for successful MNH programming. © The Author 2015. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine.

  6. Energy-efficient quantum frequency estimation

    NASA Astrophysics Data System (ADS)

    Liuzzo-Scorpo, Pietro; Correa, Luis A.; Pollock, Felix A.; Górecka, Agnieszka; Modi, Kavan; Adesso, Gerardo

    2018-06-01

    The problem of estimating the frequency of a two-level atom in a noisy environment is studied. Our interest is to minimise both the energetic cost of the protocol and the statistical uncertainty of the estimate. In particular, we prepare a probe in a ‘GHZ-diagonal’ state by means of a sequence of qubit gates applied on an ensemble of n atoms in thermal equilibrium. Noise is introduced via a phenomenological time-non-local quantum master equation, which gives rise to a phase-covariant dissipative dynamics. After an interval of free evolution, the n-atom probe is globally measured at an interrogation time chosen to minimise the error bars of the final estimate. We model explicitly a measurement scheme which becomes optimal in a suitable parameter range, and are thus able to calculate the total energetic expenditure of the protocol. Interestingly, we observe that scaling up our multipartite entangled probes offers no precision enhancement when the total available energy $\mathcal{E}$ is limited. This is in stark contrast with standard frequency estimation, where larger probes, which are more sensitive but also more ‘expensive’ to prepare, are always preferred. Replacing $\mathcal{E}$ by the resource that places the most stringent limitation on each specific experimental setup would thus help to formulate more realistic metrological prescriptions.

  7. Attitude Estimation for Large Field-of-View Sensors

    NASA Technical Reports Server (NTRS)

    Cheng, Yang; Crassidis, John L.; Markley, F. Landis

    2005-01-01

    The QUEST measurement noise model for unit vector observations has been widely used in spacecraft attitude estimation for more than twenty years. It was derived under the approximation that the noise lies in the tangent plane of the respective unit vector and is axially symmetrically distributed about the vector. For large field-of-view sensors, however, this approximation may be poor, especially when the measurement falls near the edge of the field of view. In this paper a new measurement noise model is derived based on a realistic noise distribution in the focal-plane of a large field-of-view sensor, which shows significant differences from the QUEST model for unit vector observations far away from the sensor boresight. An extended Kalman filter for attitude estimation is then designed with the new measurement noise model. Simulation results show that with the new measurement model the extended Kalman filter achieves better estimation performance using large field-of-view sensor observations.

  8. Investigations of a Complex, Realistic Task: Intentional, Unsystematic, and Exhaustive Experimenters

    ERIC Educational Resources Information Center

    McElhaney, Kevin W.; Linn, Marcia C.

    2011-01-01

    This study examines how students' experimentation with a virtual environment contributes to their understanding of a complex, realistic inquiry problem. We designed a week-long, technology-enhanced inquiry unit on car collisions. The unit uses new technologies to log students' experimentation choices. Physics students (n = 148) in six diverse high…

  9. Effects of Minute Contextual Experience on Realistic Assessment of Proportional Reasoning

    ERIC Educational Resources Information Center

    Matney, Gabriel; Jackson, Jack L., II; Bostic, Jonathan

    2013-01-01

    This mixed methods study describes the effects of a "minute contextual experience" on students' ability to solve a realistic assessment problem involving scale drawings and proportional reasoning. Minute contextual experience (MCE) is defined to be a brief encounter with a context in which aspects of the context are explored openly. The…

  10. Accuracy in parameter estimation for targeted effects in structural equation modeling: sample size planning for narrow confidence intervals.

    PubMed

    Lai, Keke; Kelley, Ken

    2011-06-01

    In addition to evaluating a structural equation model (SEM) as a whole, often the model parameters are of interest and confidence intervals for those parameters are formed. Given a model with a good overall fit, it is entirely possible for the targeted effects of interest to have very wide confidence intervals, thus giving little information about the magnitude of the population targeted effects. With the goal of obtaining sufficiently narrow confidence intervals for the model parameters of interest, sample size planning methods for SEM are developed from the accuracy in parameter estimation approach. One method plans for the sample size so that the expected confidence interval width is sufficiently narrow. An extended procedure ensures that the obtained confidence interval will be no wider than desired, with some specified degree of assurance. A Monte Carlo simulation study was conducted that verified the effectiveness of the procedures in realistic situations. The methods developed have been implemented in the MBESS package in R so that they can be easily applied by researchers. © 2011 American Psychological Association

  11. Minimum-norm cortical source estimation in layered head models is robust against skull conductivity error

    PubMed Central

    Stenroos, Matti; Hauk, Olaf

    2013-01-01

    The conductivity profile of the head has a major effect on EEG signals, but unfortunately the conductivity for the most important compartment, skull, is only poorly known. In dipole modeling studies, errors in modeled skull conductivity have been considered to have a detrimental effect on EEG source estimation. However, as dipole models are very restrictive, those results cannot be generalized to other source estimation methods. In this work, we studied the sensitivity of EEG and combined MEG + EEG source estimation to errors in skull conductivity using a distributed source model and minimum-norm (MN) estimation. We used a MEG/EEG modeling set-up that reflected state-of-the-art practices of experimental research. Cortical surfaces were segmented and realistically-shaped three-layer anatomical head models were constructed, and forward models were built with the Galerkin boundary element method while varying the skull conductivity. Lead-field topographies and MN spatial filter vectors were compared across conductivities, and the localization and spatial spread of the MN estimators were assessed using intuitive resolution metrics. The results showed that the MN estimator is robust against errors in skull conductivity: the conductivity had a moderate effect on amplitudes of lead fields and spatial filter vectors, but the effect on corresponding morphologies was small. The localization performance of the EEG or combined MEG + EEG MN estimator was only minimally affected by the conductivity error, while the spread of the estimate varied slightly. Thus, the uncertainty with respect to skull conductivity should not prevent researchers from applying minimum-norm estimation to EEG or combined MEG + EEG data. Comparing our results to those obtained earlier with dipole models shows that general judgment on the performance of an imaging modality should not be based on analysis with one source estimation method only. PMID:23639259
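
    The minimum-norm estimator at the heart of the study has a compact closed form, W = L^T (L L^T + lambda^2 C)^{-1}; the sketch below applies it with a random placeholder lead field rather than one computed with a boundary element head model, and the regularisation choice is an assumption.

```python
import numpy as np

# Minimal sketch of the L2 minimum-norm (MN) estimator: given a lead-field
# matrix L (sensors x sources), the MN spatial filter is
# W = L^T (L L^T + lambda^2 * C)^(-1), applied to sensor data to obtain source
# estimates. The lead field here is a random placeholder, not one computed
# with a boundary element head model.

rng = np.random.default_rng(2)
n_sens, n_src = 64, 500
L = rng.normal(size=(n_sens, n_src))            # placeholder lead field
C = np.eye(n_sens)                              # noise covariance (assumed white)

# Regularisation set from an assumed signal-to-noise ratio
snr = 3.0
lam2 = np.trace(L @ L.T) / (n_sens * snr ** 2)
W = L.T @ np.linalg.inv(L @ L.T + lam2 * C)     # MN spatial filters (sources x sensors)

# Apply to a single time sample of sensor data produced by one active source
j_true = np.zeros(n_src)
j_true[123] = 1.0
y = L @ j_true + 0.05 * rng.normal(size=n_sens)
j_hat = W @ y
print("peak estimated source index:", np.argmax(np.abs(j_hat)))
```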

  12. Development of Contextual Mathematics teaching Material integrated related sciences and realistic for students grade xi senior high school

    NASA Astrophysics Data System (ADS)

    Helma, H.; Mirna, M.; Edizon, E.

    2018-04-01

    Mathematics is often applied in physics, chemistry, economics, engineering, and other fields. Besides that, mathematics is also used in everyday life. Learning mathematics in school should therefore be associated with other sciences and everyday life. In this way, the learning of mathematics is more realistic, interesting, and meaningful. A needs analysis shows that contextual mathematics teaching materials integrated with related sciences and realistic contexts are required for learning mathematics. The purpose of this research is to produce a valid and practical contextual mathematics teaching material integrated with related sciences and realistic contexts. This research is development research. The result of this research is a valid and practical contextual mathematics teaching material integrated with related sciences and realistic contexts.

  13. Protocol for a realist review of workplace learning in postgraduate medical education and training.

    PubMed

    Wiese, Anel; Kilty, Caroline; Bergin, Colm; Flood, Patrick; Fu, Na; Horgan, Mary; Higgins, Agnes; Maher, Bridget; O'Kane, Grainne; Prihodova, Lucia; Slattery, Dubhfeasa; Bennett, Deirdre

    2017-01-19

    Postgraduate medical education and training (PGMET) is a complex social process which happens predominantly during the delivery of patient care. The clinical learning environment (CLE), the context for PGMET, shapes the development of the doctors who learn and work within it, ultimately impacting the quality and safety of patient care. Clinical workplaces are complex, dynamic systems in which learning emerges from non-linear interactions within a network of related factors and activities. Those tasked with the design and delivery of postgraduate medical education and training need to understand the relationship between the processes of medical workplace learning and these contextual elements in order to optimise conditions for learning. We propose to conduct a realist synthesis of the literature to address the overarching questions: how, why and in what circumstances do doctors learn in clinical environments? This review is part of a funded project with the overall aim of producing guidelines and recommendations for the design of high-quality clinical learning environments for postgraduate medical education and training. We have chosen realist synthesis as a methodology because of its suitability for researching complexity and producing answers useful to policymakers and practitioners. This realist synthesis will follow the steps and procedures outlined by Wong et al. in the RAMESES Publication Standards for Realist Synthesis and the Realist Synthesis RAMESES Training Materials. The core research team is a multi-disciplinary group of researchers, clinicians and health professions educators. The wider research group includes experts in organisational behaviour and human resources management as well as the key stakeholders; doctors in training, patient representatives and providers of PGMET. This study will draw from the published literature and programme, and substantive, theories of workplace learning, to describe context, mechanism and outcome configurations for

  14. CHARMM-GUI Membrane Builder toward realistic biological membrane simulations.

    PubMed

    Wu, Emilia L; Cheng, Xi; Jo, Sunhwan; Rui, Huan; Song, Kevin C; Dávila-Contreras, Eder M; Qi, Yifei; Lee, Jumin; Monje-Galvan, Viviana; Venable, Richard M; Klauda, Jeffery B; Im, Wonpil

    2014-10-15

    CHARMM-GUI Membrane Builder, http://www.charmm-gui.org/input/membrane, is a web-based user interface designed to interactively build all-atom protein/membrane or membrane-only systems for molecular dynamics simulations through an automated optimized process. In this work, we describe the new features and major improvements in Membrane Builder that allow users to robustly build realistic biological membrane systems, including (1) addition of new lipid types, such as phosphoinositides, cardiolipin (CL), sphingolipids, bacterial lipids, and ergosterol, yielding more than 180 lipid types, (2) enhanced building procedure for lipid packing around protein, (3) reliable algorithm to detect lipid tail penetration to ring structures and protein surface, (4) distance-based algorithm for faster initial ion displacement, (5) CHARMM inputs for P21 image transformation, and (6) NAMD equilibration and production inputs. The robustness of these new features is illustrated by building and simulating a membrane model of the polar and septal regions of E. coli membrane, which contains five lipid types: CL lipids with two types of acyl chains and phosphatidylethanolamine lipids with three types of acyl chains. It is our hope that CHARMM-GUI Membrane Builder becomes a useful tool for simulation studies to better understand the structure and dynamics of proteins and lipids in realistic biological membrane environments. Copyright © 2014 Wiley Periodicals, Inc.

  15. Highly Realistic Training for Navy Corpsmen: A Follow-up Assessment

    DTIC Science & Technology

    2017-10-12

    Highly Realistic Training for Navy Corpsmen: A Follow-Up Assessment. Stephanie Booth-Kewley, PhD, and colleagues; Naval Health Research Center, 140 Sylvester Road, San Diego, California (research conducted under human subjects protocol NHRC.2013.0019).

  16. Population of 224 realistic human subject-based computational breast phantoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, David W.; Wells, Jered R., E-mail: jered.wells@duke.edu; Sturgeon, Gregory M.

    Purpose: To create a database of highly realistic and anatomically variable 3D virtual breast phantoms based on dedicated breast computed tomography (bCT) data. Methods: A tissue classification and segmentation algorithm was used to create realistic and detailed 3D computational breast phantoms based on 230+ dedicated bCT datasets from normal human subjects. The breast volume was identified using a coarse three-class fuzzy C-means segmentation algorithm which accounted for and removed motion blur at the breast periphery. Noise in the bCT data was reduced through application of a postreconstruction 3D bilateral filter. A 3D adipose nonuniformity (bias field) correction was then applied followed by glandular segmentation using a 3D bias-corrected fuzzy C-means algorithm. Multiple tissue classes were defined including skin, adipose, and several fractional glandular densities. Following segmentation, a skin mask was produced which preserved the interdigitated skin, adipose, and glandular boundaries of the skin interior. Finally, surface modeling was used to produce digital phantoms with methods complementary to the XCAT suite of digital human phantoms. Results: After rejecting some datasets due to artifacts, 224 virtual breast phantoms were created which emulate the complex breast parenchyma of actual human subjects. The volume breast density (with skin) ranged from 5.5% to 66.3% with a mean value of 25.3% ± 13.2%. Breast volumes ranged from 25.0 to 2099.6 ml with a mean value of 716.3 ± 386.5 ml. Three breast phantoms were selected for imaging with digital compression (using finite element modeling) and simple ray-tracing, and the results show promise in their potential to produce realistic simulated mammograms. Conclusions: This work provides a new population of 224 breast phantoms based on in vivo bCT data for imaging research. Compared to previous studies based on only a few prototype cases, this dataset provides a rich source of new cases spanning a

  17. Population of 224 realistic human subject-based computational breast phantoms

    PubMed Central

    Erickson, David W.; Wells, Jered R.; Sturgeon, Gregory M.; Dobbins, James T.; Segars, W. Paul; Lo, Joseph Y.

    2016-01-01

    Purpose: To create a database of highly realistic and anatomically variable 3D virtual breast phantoms based on dedicated breast computed tomography (bCT) data. Methods: A tissue classification and segmentation algorithm was used to create realistic and detailed 3D computational breast phantoms based on 230+ dedicated bCT datasets from normal human subjects. The breast volume was identified using a coarse three-class fuzzy C-means segmentation algorithm which accounted for and removed motion blur at the breast periphery. Noise in the bCT data was reduced through application of a postreconstruction 3D bilateral filter. A 3D adipose nonuniformity (bias field) correction was then applied followed by glandular segmentation using a 3D bias-corrected fuzzy C-means algorithm. Multiple tissue classes were defined including skin, adipose, and several fractional glandular densities. Following segmentation, a skin mask was produced which preserved the interdigitated skin, adipose, and glandular boundaries of the skin interior. Finally, surface modeling was used to produce digital phantoms with methods complementary to the XCAT suite of digital human phantoms. Results: After rejecting some datasets due to artifacts, 224 virtual breast phantoms were created which emulate the complex breast parenchyma of actual human subjects. The volume breast density (with skin) ranged from 5.5% to 66.3% with a mean value of 25.3% ± 13.2%. Breast volumes ranged from 25.0 to 2099.6 ml with a mean value of 716.3 ± 386.5 ml. Three breast phantoms were selected for imaging with digital compression (using finite element modeling) and simple ray-tracing, and the results show promise in their potential to produce realistic simulated mammograms. Conclusions: This work provides a new population of 224 breast phantoms based on in vivo bCT data for imaging research. Compared to previous studies based on only a few prototype cases, this dataset provides a rich source of new cases spanning a wide range

  18. Comparison of different incremental analysis update schemes in a realistic assimilation system with Ensemble Kalman Filter

    NASA Astrophysics Data System (ADS)

    Yan, Y.; Barth, A.; Beckers, J. M.; Brankart, J. M.; Brasseur, P.; Candille, G.

    2017-07-01

    In this paper, three incremental analysis update schemes (IAU 0, IAU 50 and IAU 100) are compared in the same assimilation experiments with a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The difference between the three IAU schemes lies in the position of the increment update window. The relevance of each IAU scheme is evaluated through analyses on both thermohaline and dynamical variables. The validation of the assimilation results is performed according to both deterministic and probabilistic metrics against different sources of observations. For deterministic validation, the ensemble mean and the ensemble spread are compared to the observations. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system according to reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centred random variable (RCRV) score. The obtained results show that (1) the IAU 50 scheme has the same performance as the IAU 100 scheme; (2) the IAU 50/100 schemes outperform the IAU 0 scheme in error covariance propagation for thermohaline variables in relatively stable regions, while the IAU 0 scheme outperforms the IAU 50/100 schemes in the estimation of dynamical variables in dynamically active regions; and (3) with a sufficient number of observations and good error specification, the impact of the IAU schemes is negligible. The differences between the IAU 0 scheme and the IAU 50/100 schemes are mainly due to different model integration times and different instabilities (density inversion, large vertical velocity, etc.) induced by the increment update. The longer model integration time with the IAU 50/100 schemes, especially the free model integration, on the one hand allows for better re-establishment of the equilibrium model state and, on the other hand, smooths the strong gradients in dynamically active regions.

  19. RAId_DbS: Peptide Identification using Database Searches with Realistic Statistics

    PubMed Central

    Alves, Gelio; Ogurtsov, Aleksey Y; Yu, Yi-Kuo

    2007-01-01

    Background The key to mass-spectrometry-based proteomics is peptide identification. A major challenge in peptide identification is to obtain realistic E-values when assigning statistical significance to candidate peptides. Results Using a simple scoring scheme, we propose a database search method with theoretically characterized statistics. Taking into account possible skewness in the random variable distribution and the effect of finite sampling, we provide a theoretical derivation for the tail of the score distribution. For every experimental spectrum examined, we collect the scores of peptides in the database and find good agreement between the collected score statistics and our theoretical distribution. Using Student's t-tests, we quantify the degree of agreement between the theoretical distribution and the collected score statistics. These t-tests may be used to measure the reliability of the reported statistics. When combined with the reported P-value for a peptide hit under a score distribution model, this new measure prevents exaggerated statistics. Another feature of RAId_DbS is its capability of detecting multiple co-eluted peptides. The peptide identification performance and statistical accuracy of RAId_DbS are assessed and compared with several other search tools. The executables and data related to RAId_DbS are freely available upon request. PMID:17961253

  20. Education in the Anthropocene: Ethico-Moral Dimensions and Critical Realist Openings

    ERIC Educational Resources Information Center

    Olvitt, Lausanne Laura

    2017-01-01

    Human-induced changes in planetary bio-geo-chemical processes have tipped earth into a newly-proposed geological epoch: the Anthropocene, which places moral and ethical demands on people regarding who should take responsibility for the well-being of people and planet, how, and why. Drawing generally on critical realist ontology, and more…

  1. Estimating sales and sales market share from sales rank data for consumer appliances

    NASA Astrophysics Data System (ADS)

    Touzani, Samir; Van Buskirk, Robert

    2016-06-01

    Our motivation in this work is to find an adequate probability distribution to fit sales volumes of different appliances. This distribution allows for the translation of sales rank into sales volume. This paper shows that the log-normal distribution, and specifically its truncated version, is well suited for this purpose. We demonstrate that sales proxies derived from a calibrated truncated log-normal distribution function can be used to produce realistic estimates of market-average product prices and product attributes. We show that the market averages calculated with the sales proxies derived from the calibrated, truncated log-normal distribution provide better market average estimates than sales proxies estimated with simpler distribution functions.
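    A minimal sketch of how such a rank-to-volume translation could be implemented, assuming purely for illustration that item sales follow a truncated log-normal distribution and that a product's sales rank maps onto the corresponding upper quantile of that distribution; the parameters, truncation bounds, and market size below are placeholders rather than the calibrated values from the paper:

      # Illustrative sketch (not the authors' code): sales rank -> sales-volume proxy
      # via quantiles of an assumed truncated log-normal sales distribution.
      import numpy as np
      from scipy import stats

      mu, sigma = 3.0, 1.2        # assumed log-scale parameters
      lower, upper = 1.0, 1e5     # assumed truncation bounds on sales volume
      n_products = 500            # assumed number of ranked products

      dist = stats.lognorm(s=sigma, scale=np.exp(mu))
      F_lo, F_hi = dist.cdf(lower), dist.cdf(upper)

      def sales_proxy(rank):
          """Map a sales rank (1 = best seller) to a sales-volume proxy."""
          q = 1.0 - (rank - 0.5) / n_products          # rank 1 -> highest quantile
          return dist.ppf(F_lo + q * (F_hi - F_lo))    # invert the truncated CDF

      proxies = np.array([sales_proxy(r) for r in range(1, n_products + 1)])
      shares = proxies / proxies.sum()                 # sales-proxy market shares

    The proxy shares can then serve as weights, e.g. a market-average price would be the share-weighted mean of the individual product prices.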

  2. Critical Realism and Realist Review: Analyzing Complexity in Educational Restructuring and the Limits of Generalizing Program Theories Across Borders

    ERIC Educational Resources Information Center

    De Souza, Denise E.

    2016-01-01

    This article focuses on the design of a critical realist review that deployed Bhaskar's resolution, redescribing, retroduction, eliminating, identifying, and correcting schema and Pawson and Tilley's Context-Mechanism-Outcome configuration underpinned by realist social theory. Methodologically, the review examined the relationship between…

  3. Shape-based approach for the estimation of individual facial mimics in craniofacial surgery planning

    NASA Astrophysics Data System (ADS)

    Gladilin, Evgeny; Zachow, Stefan; Deuflhard, Peter; Hege, Hans-Christian

    2002-05-01

    Besides the static soft tissue prediction, the estimation of basic facial emotion expressions is another important criterion for the evaluation of craniofacial surgery planning. For a realistic simulation of facial mimics, an adequate biomechanical model of soft tissue including the mimic musculature is needed. In this work, we present an approach for the modeling of arbitrarily shaped muscles and the estimation of basic individual facial mimics, which is based on the geometrical model derived from the individual tomographic data and the general finite element modeling of soft tissue biomechanics.

  4. Refinement of the timing-based estimator of pulsar magnetic fields

    NASA Astrophysics Data System (ADS)

    Biryukov, Anton; Astashenok, Artyom; Beskin, Gregory

    2017-04-01

    Numerical simulations of realistic non-vacuum magnetospheres of isolated neutron stars have shown that pulsar spin-down luminosities depend weakly on the magnetic obliquity α. In particular, L ∝ B²(1 + sin²α), where B is the magnetic field strength at the star surface. Being the most accurate expression to date, this result provides the opportunity to estimate B for a given radio pulsar with quite a high accuracy. In the current work, we present a refinement of the classical 'magneto-dipolar' formula for pulsar magnetic fields, B_md = (3.2 × 10^19 G) √(P Ṗ), where P is the neutron star spin period. The new, robust timing-based estimator is introduced as log B = log B_md + ΔB(M, α), where the correction ΔB depends on the equation of state (EOS) of dense matter, the individual pulsar obliquity α and the mass M. Adopting state-of-the-art statistics for M and α, we calculate the distributions of ΔB for a representative subset of 22 EOSs that do not contradict observations. It has been found that ΔB is distributed nearly normally, with the average in the range -0.5 to -0.25 dex and standard deviation σ[ΔB] ≈ 0.06 to 0.09 dex, depending on the adopted EOS. The latter quantity represents a formal uncertainty of the corrected estimation of log B because ΔB is weakly correlated with log B_md. At the same time, if it is assumed that every considered EOS has the same chance of occurring in nature, then another, more generalized, estimator B* ≈ 3B_md/7 can be introduced, providing an unbiased value of the pulsar surface magnetic field with ~30 per cent uncertainty at 68 per cent confidence. Finally, we discuss the possible impact of pulsar timing irregularities on the timing-based estimation of B and review the astrophysical applications of the obtained results.
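    As a quick numerical illustration of the two estimators quoted above (the example spin period and period derivative are assumed values, not taken from the paper):

      # Worked example of the magnetic-field estimators quoted in the abstract.
      import math

      P    = 0.1     # spin period in seconds (assumed example)
      Pdot = 1e-14   # period derivative (dimensionless, assumed example)

      B_md   = 3.2e19 * math.sqrt(P * Pdot)   # classical 'magneto-dipolar' estimate, gauss
      B_star = 3.0 * B_md / 7.0               # generalized estimator B* ~ 3*B_md/7

      print(f"B_md = {B_md:.2e} G, B* = {B_star:.2e} G (quoted ~30% uncertainty at 68% confidence)")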

  5. Comparison between remote sensing and a dynamic vegetation model for estimating terrestrial primary production of Africa.

    PubMed

    Ardö, Jonas

    2015-12-01

    Africa is an important part of the global carbon cycle. It is also a continent facing potential problems due to increasing resource demand in combination with climate change-induced changes in resource supply. Quantifying the pools and fluxes constituting the terrestrial African carbon cycle is a challenge, because of uncertainties in meteorological driver data, lack of validation data, and potentially uncertain representation of important processes in major ecosystems. In this paper, terrestrial primary production estimates derived from remote sensing and a dynamic vegetation model are compared and quantified for major African land cover types. Continental gross primary production estimates derived from remote sensing were higher than corresponding estimates derived from a dynamic vegetation model. However, estimates of continental net primary production from remote sensing were lower than corresponding estimates from the dynamic vegetation model. Variation was found among land cover classes, and the largest differences in gross primary production were found in the evergreen broadleaf forest. Average carbon use efficiency (NPP/GPP) was 0.58 for the vegetation model and 0.46 for the remote sensing method. Validation versus in situ data of aboveground net primary production revealed significant positive relationships for both methods. A combination of the remote sensing method with the dynamic vegetation model did not strongly affect this relationship. Observed significant differences in estimated vegetation productivity may have several causes, including model design and temperature sensitivity. Differences in carbon use efficiency reflect underlying model assumptions. Integrating the realistic process representation of dynamic vegetation models with the high resolution observational strength of remote sensing may support realistic estimation of components of the carbon cycle and enhance resource monitoring, providing suitable validation data is available.

  6. Analysis of the Impact of Realistic Wind Size Parameter on the Delft3D Model

    NASA Astrophysics Data System (ADS)

    Washington, M. H.; Kumar, S.

    2017-12-01

    The wind size parameter, which is the distance from the center of the storm to the location of the maximum winds, is currently a constant in the Delft3D model. As a result, the Delft3D model's predictions of water levels during a storm surge are inaccurate compared to the observed data. To address this issue, an algorithm to calculate a realistic wind size parameter for a given hurricane was designed and implemented using the observed water-level data for Hurricane Matthew. A performance evaluation experiment was conducted to demonstrate the accuracy of the model's prediction of water levels using the realistic wind size input parameter compared to the default constant wind size parameter for Hurricane Matthew, with water-level data observed from October 4, 2016 to October 9, 2016 from the National Oceanic and Atmospheric Administration (NOAA) as a baseline. The experimental results demonstrate that the Delft3D water-level output for the realistic wind size parameter matches the NOAA reference water-level data more accurately than the output for the default constant wind size parameter.

  7. A simulation of small to giant Antarctic iceberg evolution: differential impact on climatology estimates

    NASA Astrophysics Data System (ADS)

    Rackow, Thomas; Wesche, Christine; Timmermann, Ralph; Hellmer, Hartmut H.; Juricke, Stephan; Jung, Thomas

    2017-04-01

    We present a simulation of Antarctic iceberg drift and melting that includes small (<2.2 km), medium-sized, and giant tabular icebergs with lengths of more than 10 km. The model is initialized with a realistic size distribution obtained from satellite observations. Our study highlights the necessity to account for larger and giant icebergs in order to obtain accurate melt climatologies. Taking iceberg modeling a step further, we simulate drift and melting using iceberg-draft averaged ocean currents, temperature, and salinity. A new basal melting scheme, originally applied in ice shelf melting studies, uses in situ temperature, salinity, and relative velocities at an iceberg's keel. The climatology estimates of Antarctic iceberg melting based on simulations of small, 'small-to-medium'-sized, and small-to-giant icebergs (including icebergs > 10 km) exhibit differential characteristics: successive inclusion of larger icebergs leads to a reduced seasonality of the iceberg meltwater flux and a shift of the mass input to the area north of 58°S, while less meltwater is released into the coastal areas. This suggests that estimates of meltwater input solely based on the simulation of small icebergs introduce a systematic meridional bias; they underestimate the northward mass transport and are, thus, closer to the rather crude treatment of iceberg melting as coastal runoff in models without an interactive iceberg model. Future ocean simulations will benefit from the improved meridional distribution of iceberg melt, especially in climate change scenarios where the impact of iceberg melt is likely to increase due to increased calving from the Antarctic ice sheet.

  8. Effect of Anatomically Realistic Full-Head Model on Activation of Cortical Neurons in Subdural Cortical Stimulation—A Computational Study

    NASA Astrophysics Data System (ADS)

    Seo, Hyeon; Kim, Donghyeon; Jun, Sung Chan

    2016-06-01

    Electrical brain stimulation (EBS) is an emerging therapy for the treatment of neurological disorders, and computational modeling studies of EBS have been used to determine the optimal parameters for highly cost-effective electrotherapy. Recent notable growth in computing capability has enabled researchers to consider an anatomically realistic head model that represents the full head and complex geometry of the brain rather than the previous simplified partial head model (extruded slab) that represents only the precentral gyrus. In this work, subdural cortical stimulation (SuCS) was found to offer a better understanding of the differential activation of cortical neurons in the anatomically realistic full-head model than in the simplified partial-head models. We observed that layer 3 pyramidal neurons had comparable stimulation thresholds in both head models, while layer 5 pyramidal neurons showed a notable discrepancy between the models; in particular, layer 5 pyramidal neurons demonstrated asymmetry in the thresholds and action potential initiation sites in the anatomically realistic full-head model. Overall, the anatomically realistic full-head model may offer a better understanding of layer 5 pyramidal neuronal responses. Accordingly, the effects of using the realistic full-head model in SuCS are compelling in computational modeling studies, even though this modeling requires substantially more effort.

  9. Basketball lay-up - foot loading characteristics and the number of trials necessary to obtain stable plantar pressure variables.

    PubMed

    Chua, YaoHui K; Quek, Raymond K K; Kong, Pui W

    2017-03-01

    This study aimed (1) to profile the plantar loading characteristics when performing the basketball lay-up in a realistic setting and (2) to determine the number of trials necessary to establish a stable mean for plantar loading variables during the lay-up. Thirteen university male basketball players [age: 23.0 (1.4) years, height: 1.75 (0.05) m, mass: 68.4 (8.6) kg] performed ten successful basketball lay-ups from a stationary position. Plantar loading variables were recorded using the Novel Pedar-X in-shoe system. Loading variables including peak force, peak pressure, and pressure-time integral were extracted from eight foot regions. Performance stability of plantar loading variables during the take-off and landing steps was assessed using the sequential averaging technique and the intra-class correlation coefficient (ICC). High plantar loadings were experienced at the heel during the take-off steps, and at both the heel and forefoot regions upon landing. The sequential averaging technique revealed that five to eight trials were required to achieve a stable mean across all plantar loading variables, whereas the ICC analysis was insensitive to inter-trial differences of repeated lay-up performances. Future studies and performance evaluation protocols on plantar loading during basketball lay-ups should include at least eight trials to ensure that the measurements obtained are sufficiently stable.
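    A minimal sketch of a sequential averaging check of the kind referred to above: the number of trials needed is taken as the first trial after which the cumulative mean stays within a fixed band around the overall mean; the 0.25-standard-deviation band width and the data values are assumptions for illustration, not the paper's criterion or data:

      # Sequential averaging check: first trial after which the cumulative mean
      # stays within +/- band_sd standard deviations of the overall mean.
      import numpy as np

      def trials_to_stability(values, band_sd=0.25):
          values = np.asarray(values, dtype=float)
          overall_mean, overall_sd = values.mean(), values.std(ddof=1)
          cum_means = np.cumsum(values) / np.arange(1, len(values) + 1)
          within = np.abs(cum_means - overall_mean) <= band_sd * overall_sd
          for k in range(len(values)):
              if within[k:].all():          # stable from trial k+1 onwards
                  return k + 1
          return len(values)

      peak_force = [1020, 1100, 980, 1055, 1040, 1070, 1010, 1065, 1030, 1050]  # hypothetical values (N)
      print(trials_to_stability(peak_force))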

  10. Team training in obstetric and neonatal emergencies using highly realistic simulation in Mexico: impact on process indicators.

    PubMed

    Walker, Dilys; Cohen, Susanna; Fritz, Jimena; Olvera, Marisela; Lamadrid-Figueroa, Hector; Cowan, Jessica Greenberg; Hernandez, Dolores Gonzalez; Dettinger, Julia C; Fahey, Jenifer O

    2014-11-20

    Ineffective management of obstetric emergencies contributes significantly to maternal and neonatal morbidity and mortality in Mexico. PRONTO (Programa de Rescate Obstétrico y Neonatal: Tratamiento Óptimo y Oportuno) is a highly realistic, low-tech simulation-based obstetric and neonatal emergency training program. A pair-matched, hospital-based controlled implementation trial was undertaken in three states in Mexico, with pre/post measurement of process indicators at intervention hospitals. This report assesses the impact of PRONTO simulation training on process indicators using the pre/post study design. Data were collected in twelve intervention facilities on process indicators, including pre/post changes in knowledge and self-efficacy of obstetric emergencies and neonatal resuscitation, achievement of strategic planning goals established during training, and changes in teamwork scores. The authors fit a longitudinal fixed-effects linear regression model to estimate changes in knowledge and self-efficacy and a logistic regression to assess goal achievement. A total of 450 professionals in interprofessional teams were trained. Significant increases in knowledge and self-efficacy were noted for both physicians and nurses (p < 0.001-0.009) in all domains. Teamwork scores improved and were maintained over a three-month period. A mean of 58.8% of strategic planning goals per team in each hospital was achieved. There was no association between high goal achievement and knowledge, self-efficacy, proportion of doctors or nurses in training, state, or teamwork score. These results suggest that PRONTO's highly realistic, locally appropriate simulation and team training in maternal and neonatal emergency care may be a promising avenue for optimizing emergency response and improving quality of facility-based obstetric and neonatal care in resource-limited settings. NCT01477554.

  11. Patient-specific parameter estimation in single-ventricle lumped circulation models under uncertainty

    PubMed Central

    Schiavazzi, Daniele E.; Baretta, Alessia; Pennati, Giancarlo; Hsia, Tain-Yen; Marsden, Alison L.

    2017-01-01

    Summary Computational models of cardiovascular physiology can inform clinical decision-making, providing a physically consistent framework to assess vascular pressures and flow distributions, and aiding in treatment planning. In particular, lumped parameter network (LPN) models that make an analogy to electrical circuits offer a fast and surprisingly realistic method to reproduce the circulatory physiology. The complexity of LPN models can vary significantly to account, for example, for cardiac and valve function, respiration, autoregulation, and time-dependent hemodynamics. More complex models provide insight into detailed physiological mechanisms, but their utility is maximized if one can quickly identify patient specific parameters. The clinical utility of LPN models with many parameters will be greatly enhanced by automated parameter identification, particularly if parameter tuning can match non-invasively obtained clinical data. We present a framework for automated tuning of 0D lumped model parameters to match clinical data. We demonstrate the utility of this framework through application to single ventricle pediatric patients with Norwood physiology. Through a combination of local identifiability, Bayesian estimation and maximum a posteriori simplex optimization, we show the ability to automatically determine physiologically consistent point estimates of the parameters and to quantify uncertainty induced by errors and assumptions in the collected clinical data. We show that multi-level estimation, that is, updating the parameter prior information through sub-model analysis, can lead to a significant reduction in the parameter marginal posterior variance. We first consider virtual patient conditions, with clinical targets generated through model solutions, and second application to a cohort of four single-ventricle patients with Norwood physiology. PMID:27155892

  12. Teaching Poor Ethnic Minority Students: A Critical Realist Interpretation of Disempowerment

    ERIC Educational Resources Information Center

    Stylianou, Areti; Scott, David

    2018-01-01

    This article aims to supplement the literature on the role of school context with regards to the disempowerment of teachers in their work with poor ethnic minority students. We use a critical realist framework to analyse the empirical data collected for an in-depth school case study and we suggest the existence of real, interrelated, emergent and…

  13. Estimating catchment-scale groundwater dynamics from recession analysis - enhanced constraining of hydrological models

    NASA Astrophysics Data System (ADS)

    Skaugen, Thomas; Mengistu, Zelalem

    2016-12-01

    In this study, we propose a new formulation of subsurface water storage dynamics for use in rainfall-runoff models. Under the assumption of a strong relationship between storage and runoff, the temporal distribution of catchment-scale storage is considered to have the same shape as the distribution of observed recessions (measured as the difference between the logs of runoff values). The mean subsurface storage is estimated as the storage at steady state, where moisture input equals the mean annual runoff. An important contribution of the new formulation is that its parameters are derived directly from observed recession data and the mean annual runoff. The parameters are hence estimated prior to model calibration against runoff. The new storage routine is implemented in the parameter-parsimonious distance distribution dynamics (DDD) model and has been tested for 73 catchments in Norway of varying size, mean elevation and landscape type. Runoff simulations for the 73 catchments from two model structures (DDD with calibrated subsurface storage and DDD with the new estimated subsurface storage) were compared. Little loss in precision of runoff simulations was found using the new estimated storage routine. For the 73 catchments, an average Nash-Sutcliffe efficiency of 0.73 was obtained using the new estimated storage routine, compared with 0.75 using the calibrated storage routine. The average Kling-Gupta efficiency was 0.80 and 0.81 for the new and old storage routines, respectively. Runoff recessions are more realistically modelled using the new approach, since the root mean square error between the means of observed and simulated recession characteristics was reduced by almost 50% using the new storage routine. The parameters of the proposed storage routine are found to be significantly correlated with catchment characteristics, which is potentially useful for predictions in ungauged basins.
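    For reference, the two efficiency criteria reported above can be computed as follows (standard definitions; the observed and simulated runoff series here are hypothetical):

      # Nash-Sutcliffe efficiency (NSE) and Kling-Gupta efficiency (KGE).
      import numpy as np

      def nse(obs, sim):
          obs, sim = np.asarray(obs, float), np.asarray(sim, float)
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def kge(obs, sim):
          obs, sim = np.asarray(obs, float), np.asarray(sim, float)
          r = np.corrcoef(obs, sim)[0, 1]             # correlation
          alpha = sim.std(ddof=0) / obs.std(ddof=0)   # variability ratio
          beta = sim.mean() / obs.mean()              # bias ratio
          return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

      obs = np.array([2.1, 3.4, 5.0, 4.2, 3.1, 2.5])   # hypothetical runoff (mm/day)
      sim = np.array([2.0, 3.1, 5.4, 4.0, 3.3, 2.2])
      print(nse(obs, sim), kge(obs, sim))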

  14. Modeling the Performance Limitations and Prospects of Perovskite/Si Tandem Solar Cells under Realistic Operating Conditions

    PubMed Central

    2017-01-01

    Perovskite/Si tandem solar cells have the potential to considerably out-perform conventional solar cells. Under standard test conditions, perovskite/Si tandem solar cells already outperform the Si single junction. Under realistic conditions, however, as we show, tandem solar cells made from current record cells are hardly more efficient than the Si cell alone. We model the performance of realistic perovskite/Si tandem solar cells under real-world climate conditions, by incorporating parasitic cell resistances, nonradiative recombination, and optical losses into the detailed-balance limit. We show quantitatively that when optimizing these parameters in the perovskite top cell, perovskite/Si tandem solar cells could reach efficiencies above 38% under realistic conditions, even while leaving the Si cell untouched. Despite the rapid efficiency increase of perovskite solar cells, our results emphasize the need for further material development, careful device design, and light management strategies, all necessary for highly efficient perovskite/Si tandem solar cells. PMID:28920081

  15. Modeling the Performance Limitations and Prospects of Perovskite/Si Tandem Solar Cells under Realistic Operating Conditions.

    PubMed

    Futscher, Moritz H; Ehrler, Bruno

    2017-09-08

    Perovskite/Si tandem solar cells have the potential to considerably out-perform conventional solar cells. Under standard test conditions, perovskite/Si tandem solar cells already outperform the Si single junction. Under realistic conditions, however, as we show, tandem solar cells made from current record cells are hardly more efficient than the Si cell alone. We model the performance of realistic perovskite/Si tandem solar cells under real-world climate conditions, by incorporating parasitic cell resistances, nonradiative recombination, and optical losses into the detailed-balance limit. We show quantitatively that when optimizing these parameters in the perovskite top cell, perovskite/Si tandem solar cells could reach efficiencies above 38% under realistic conditions, even while leaving the Si cell untouched. Despite the rapid efficiency increase of perovskite solar cells, our results emphasize the need for further material development, careful device design, and light management strategies, all necessary for highly efficient perovskite/Si tandem solar cells.

  16. Functional consequences of realistic biodiversity changes in a marine ecosystem

    PubMed Central

    Bracken, Matthew E. S.; Friberg, Sara E.; Gonzalez-Dorantes, Cirse A.; Williams, Susan L.

    2008-01-01

    Declines in biodiversity have prompted concern over the consequences of species loss for the goods and services provided by natural ecosystems. However, relatively few studies have evaluated the functional consequences of realistic, nonrandom changes in biodiversity. Instead, most designs have used randomly selected assemblages from a local species pool to construct diversity gradients. It is therefore difficult, based on current evidence, to predict the functional consequences of realistic declines in biodiversity. In this study, we used tide pool microcosms to demonstrate that the effects of real-world changes in biodiversity may be very different from those of random diversity changes. Specifically, we measured the relationship between the diversity of a seaweed assemblage and its ability to use nitrogen, a key limiting nutrient in nearshore marine systems. We quantified nitrogen uptake using both experimental and model seaweed assemblages and found that natural increases in diversity resulted in enhanced rates of nitrogen use, whereas random diversity changes had no effect on nitrogen uptake. Our results suggest that understanding the real-world consequences of declining biodiversity will require addressing changes in species performance along natural diversity gradients and understanding the relationships between species' susceptibility to loss and their contributions to ecosystem functioning. PMID:18195375

  17. Fiberfox: facilitating the creation of realistic white matter software phantoms.

    PubMed

    Neher, Peter F; Laun, Frederik B; Stieltjes, Bram; Maier-Hein, Klaus H

    2014-11-01

    Phantom-based validation of diffusion-weighted image processing techniques is an important key to innovation in the field and is widely used. Openly available and user friendly tools for the flexible generation of tailor-made datasets for the specific tasks at hand can greatly facilitate the work of researchers around the world. We present an open-source framework, Fiberfox, that enables (1) the intuitive definition of arbitrary artificial white matter fiber tracts, (2) signal generation from those fibers by means of the most recent multi-compartment modeling techniques, and (3) simulation of the actual MR acquisition that allows for the introduction of realistic MRI-related effects into the final image. We show that real acquisitions can be closely approximated by simulating the acquisition of the well-known FiberCup phantom. We further demonstrate the advantages of our framework by evaluating the effects of imaging artifacts and acquisition settings on the outcome of 12 tractography algorithms. Our findings suggest that experiments on a realistic software phantom might change the conclusions drawn from earlier hardware phantom experiments. Fiberfox may find application in validating and further developing methods such as tractography, super-resolution, diffusion modeling or artifact correction. Copyright © 2013 Wiley Periodicals, Inc.

  18. From grid cells to place cells with realistic field sizes

    PubMed Central

    2017-01-01

    While grid cells in the medial entorhinal cortex (MEC) of rodents have multiple, regularly arranged firing fields, place cells in the cornu ammonis (CA) regions of the hippocampus mostly have single spatial firing fields. Since there are extensive projections from MEC to the CA regions, many models have suggested that a feedforward network can transform grid cell firing into robust place cell firing. However, these models generate place fields that are consistently too small compared to those recorded in experiments. Here, we argue that it is implausible that grid cell activity alone can be transformed into place cells with robust place fields of realistic size in a feedforward network. We propose two solutions to this problem. Firstly, weakly spatially modulated cells, which are abundant throughout EC, provide input to downstream place cells along with grid cells. This simple model reproduces many place cell characteristics as well as results from lesion studies. Secondly, the recurrent connections between place cells in the CA3 network generate robust and realistic place fields. Both mechanisms could work in parallel in the hippocampal formation and this redundancy might account for the robustness of place cell responses to a range of disruptions of the hippocampal circuitry. PMID:28750005

  19. Autumn Algorithm-Computation of Hybridization Networks for Realistic Phylogenetic Trees.

    PubMed

    Huson, Daniel H; Linz, Simone

    2018-01-01

    A minimum hybridization network is a rooted phylogenetic network that displays two given rooted phylogenetic trees using a minimum number of reticulations. Previous mathematical work on their calculation has usually assumed the input trees to be bifurcating, correctly rooted, or that they both contain the same taxa. These assumptions do not hold in biological studies and "realistic" trees have multifurcations, are difficult to root, and rarely contain the same taxa. We present a new algorithm for computing minimum hybridization networks for a given pair of "realistic" rooted phylogenetic trees. We also describe how the algorithm might be used to improve the rooting of the input trees. We introduce the concept of "autumn trees", a nice framework for the formulation of algorithms based on the mathematics of "maximum acyclic agreement forests". While the main computational problem is hard, the run-time depends mainly on how different the given input trees are. In biological studies, where the trees are reasonably similar, our parallel implementation performs well in practice. The algorithm is available in our open source program Dendroscope 3, providing a platform for biologists to explore rooted phylogenetic networks. We demonstrate the utility of the algorithm using several previously studied data sets.

  20. Critical reflections on realist review: insights from customizing the methodology to the needs of participatory research assessment.

    PubMed

    Jagosh, Justin; Pluye, Pierre; Wong, Geoff; Cargo, Margaret; Salsberg, Jon; Bush, Paula L; Herbert, Carol P; Green, Lawrence W; Greenhalgh, Trish; Macaulay, Ann C

    2014-06-01

    Realist review has increased in popularity as a methodology for complex intervention assessment. Our experience suggests that the process of designing a realist review requires its customization to areas under investigation. To elaborate on this idea, we first describe the logic underpinning realist review and then present critical reflections on our application experience, organized in seven areas. These are the following: (1) the challenge of identifying middle range theory; (2) addressing heterogeneity and lack of conceptual clarity; (3) the challenge of appraising the quality of complex evidence; (4) the relevance of capturing unintended outcomes; (5) understanding the process of context, mechanism, and outcome (CMO) configuring; (6) incorporating middle-range theory in the CMO configuration process; and (7) using middle range theory to advance the conceptualization of outcomes - both visible and seemingly 'hidden'. One conclusion from our experience is that the degree of heterogeneity of the evidence base will determine whether theory can drive the development of review protocols from the outset, or will follow only after an intense period of data immersion. We hope that presenting a critical reflection on customizing realist review will convey how the methodology can be tailored to the often complex and idiosyncratic features of health research, leading to innovative evidence syntheses. Copyright © 2013 John Wiley & Sons, Ltd.

  1. Protocol: realist synthesis of the impact of unemployment insurance policies on poverty and health.

    PubMed

    Molnar, Agnes; O'Campo, Patricia; Ng, Edwin; Mitchell, Christiane; Muntaner, Carles; Renahy, Emilie; St John, Alexander; Shankardass, Ketan

    2015-02-01

    Unemployment insurance is an important social protection policy that buffers unemployed workers against poverty and poor health. Most unemployment insurance studies focus on whether increases in unemployment insurance generosity are predictive of poverty and health outcomes. Less work has used theory-driven approaches to understand and explain how and why unemployment insurance works, for whom, and under what circumstances. Given this, we present a realist synthesis protocol that seeks to unpack how contextual influences trigger relevant mechanisms to generate poverty and health outcomes. In this protocol, we conceptualize unemployment insurance as a key social protection policy; provide a supporting rationale on the need for a realist synthesis; and describe our process for identifying context-mechanism-outcome pattern configurations. Six methodological steps are described: initial theory development; search strategy; selection and appraisal of documents; data extraction; analysis and synthesis process; and presentation and dissemination of revised theory. Our forthcoming realist synthesis will be the first to build and test theory on the intended and unintended outcomes of unemployment insurance policies. Anticipated findings will allow policymakers to move beyond 'black box' approaches to consider 'mechanism-based' explanations that explicate the logic of how and why unemployment insurance matters. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. High accuracy mantle convection simulation through modern numerical methods - II: realistic models and problems

    NASA Astrophysics Data System (ADS)

    Heister, Timo; Dannberg, Juliane; Gassmöller, Rene; Bangerth, Wolfgang

    2017-08-01

    Computations have helped elucidate the dynamics of Earth's mantle for several decades already. The numerical methods that underlie these simulations have greatly evolved within this time span, and today include dynamically changing and adaptively refined meshes, sophisticated and efficient solvers, and parallelization to large clusters of computers. At the same time, many of the methods - discussed in detail in a previous paper in this series - were developed and tested primarily using model problems that lack many of the complexities that are common to the realistic models our community wants to solve today. With several years of experience solving complex and realistic models, we here revisit some of the algorithm designs of the earlier paper and discuss the incorporation of more complex physics. In particular, we re-consider time stepping and mesh refinement algorithms, evaluate approaches to incorporate compressibility, and discuss dealing with strongly varying material coefficients, latent heat, and how to track chemical compositions and heterogeneities. Taken together and implemented in a high-performance, massively parallel code, the techniques discussed in this paper then allow for high resolution, 3-D, compressible, global mantle convection simulations with phase transitions, strongly temperature dependent viscosity and realistic material properties based on mineral physics data.

  3. Sparse EEG/MEG source estimation via a group lasso

    PubMed Central

    Lim, Michael; Ales, Justin M.; Cottereau, Benoit R.; Hastie, Trevor

    2017-01-01

    Non-invasive recordings of human brain activity through electroencephalography (EEG) or magnetoencephalography (MEG) are of value for both basic science and clinical applications in sensory, cognitive, and affective neuroscience. Here we introduce a new approach to estimating the intra-cranial sources of EEG/MEG activity measured from extra-cranial sensors. The approach is based on the group lasso, a sparse-prior inverse that has been adapted to take advantage of functionally-defined regions of interest for the definition of physiologically meaningful groups within a functionally-based common space. Detailed simulations using realistic source geometries and data from a human Visual Evoked Potential experiment demonstrate that the group-lasso method has improved performance over traditional ℓ2 minimum-norm methods. In addition, we show that pooling source estimates across subjects over functionally defined regions of interest results in improvements in the accuracy of source estimates for both the group-lasso and minimum-norm approaches. PMID:28604790

  4. Statistical Methods and Sampling Design for Estimating Step Trends in Surface-Water Quality

    USGS Publications Warehouse

    Hirsch, Robert M.

    1988-01-01

    This paper addresses two components of the problem of estimating the magnitude of step trends in surface water quality. The first is finding a robust estimator appropriate to the data characteristics expected in water-quality time series. The J. L. Hodges-E. L. Lehmann class of estimators is found to be robust in comparison to other nonparametric and moment-based estimators. A seasonal Hodges-Lehmann estimator is developed and shown to have desirable properties. Second, the effectiveness of various sampling strategies is examined using Monte Carlo simulation coupled with application of this estimator. The simulation is based on a large set of total phosphorus data from the Potomac River. To assure that the simulated records have realistic properties, the data are modeled in a multiplicative fashion incorporating flow, hysteresis, seasonal, and noise components. The results demonstrate the importance of balancing the length of the two sampling periods and balancing the number of data values between the two periods.
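    A compact sketch of the Hodges-Lehmann step-trend estimator discussed above: the estimated step change is the median of all pairwise differences between post-change and pre-change observations, and one common seasonal variant restricts the pairing to observations from the same season (the data values below are hypothetical):

      # Hodges-Lehmann estimator of a step trend (median of pairwise differences).
      import numpy as np
      from itertools import product

      def hodges_lehmann_step(pre, post):
          return np.median([b - a for a, b in product(pre, post)])

      def seasonal_hodges_lehmann_step(pre, post, pre_seasons, post_seasons):
          diffs = [b - a
                   for (a, sa), (b, sb) in product(zip(pre, pre_seasons), zip(post, post_seasons))
                   if sa == sb]              # pair only observations from the same season
          return np.median(diffs)

      pre  = [1.2, 1.5, 0.9, 1.1]   # hypothetical pre-period concentrations
      post = [0.8, 1.0, 0.7, 0.9]   # hypothetical post-period concentrations
      print(hodges_lehmann_step(pre, post))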

  5. A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.

    PubMed

    Yu, Jun; Wang, Zeng-Fu

    2015-05-01

    A multiple-input-driven realistic facial animation system based on a 3-D virtual head for human-machine interfaces is proposed. The system can be driven independently by video, text, and speech, and thus can interact with humans through diverse interfaces. The combination of a parameterized model and a muscular model is used to obtain a tradeoff between computational efficiency and high realism of 3-D facial animation. The online appearance model is used to track 3-D facial motion from video in the framework of particle filtering, and multiple measurements, i.e., the pixel color values of the input image and the Gabor wavelet coefficients of the illumination ratio image, are fused to reduce the influence of lighting and person dependence in the construction of the online appearance model. The tri-phone model is used to reduce the computational consumption of visual co-articulation in speech-synchronized viseme synthesis without sacrificing performance. Objective and subjective experiments show that the system is suitable for human-machine interaction.

  6. Multiple Component Event-Related Potential (mcERP) Estimation

    NASA Technical Reports Server (NTRS)

    Knuth, K. H.; Clanton, S. T.; Shah, A. S.; Truccolo, W. A.; Ding, M.; Bressler, S. L.; Trejo, L. J.; Schroeder, C. E.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We show how model-based estimation of the neural sources responsible for transient neuroelectric signals can be improved by the analysis of single-trial data. Previously, we showed that a multiple component event-related potential (mcERP) algorithm can extract the responses of individual sources from recordings of a mixture of multiple, possibly interacting, neural ensembles. McERP also estimated single-trial amplitudes and onset latencies, thus allowing more accurate estimation of ongoing neural activity during an experimental trial. The mcERP algorithm is related to infomax independent component analysis (ICA); however, the underlying signal model is more physiologically realistic in that a component is modeled as a stereotypic waveshape varying both in amplitude and onset latency from trial to trial. The result is a model that reflects quantities of interest to the neuroscientist. Here we demonstrate that the mcERP algorithm provides more accurate results than more traditional methods such as factor analysis and the more recent ICA. Whereas factor analysis assumes the sources are orthogonal and ICA assumes the sources are statistically independent, the mcERP algorithm makes no such assumptions, thus allowing investigators to examine interactions among components by estimating the properties of single-trial responses.

  7. Fast rotating neutron stars with realistic nuclear matter equation of state

    NASA Astrophysics Data System (ADS)

    Cipolletta, F.; Cherubini, C.; Filippi, S.; Rueda, J. A.; Ruffini, R.

    2015-07-01

    We construct equilibrium configurations of uniformly rotating neutron stars for selected relativistic mean-field nuclear matter equations of state (EOS). We compute, in particular, the gravitational mass (M), equatorial (R_eq) and polar (R_pol) radii, eccentricity, angular momentum (J), moment of inertia (I) and quadrupole moment (M_2) of neutron stars stable against mass shedding and secular axisymmetric instability. By constructing the constant frequency sequence f = 716 Hz of the fastest observed pulsar, PSR J1748-2446ad, and constraining it to be within the stability region, we obtain a lower mass bound for the pulsar, M_min = 1.2-1.4 M⊙, for the EOS employed. Moreover, we give a fitting formula relating the baryonic mass (M_b) and gravitational mass of nonrotating neutron stars, M_b/M⊙ = M/M⊙ + (13/200)(M/M⊙)² [or M/M⊙ = M_b/M⊙ - (1/20)(M_b/M⊙)²], which is independent of the EOS. We also obtain a fitting formula, although not EOS independent, relating the gravitational mass and the angular momentum of neutron stars along the secular axisymmetric instability line for each EOS. We compute the maximum value of the dimensionless angular momentum, a/M ≡ cJ/(GM²) (or "Kerr parameter"), (a/M)_max ≈ 0.7, found to be also independent of the EOS. We then compare and contrast the quadrupole moment of rotating neutron stars with the one predicted by the Kerr exterior solution for the same values of mass and angular momentum. Finally, we show that, although the mass quadrupole moment of realistic neutron stars never reaches the Kerr value, the latter is closely approached from above at the maximum mass value, as physically expected from the no-hair theorem. In particular, the stiffer the EOS, the closer the mass quadrupole moment approaches the value of the Kerr solution.
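    A short numerical illustration of the EOS-independent fitting formula and the dimensionless angular momentum ("Kerr parameter") quoted above (the example mass is arbitrary, not a result from the paper):

      # Baryonic/gravitational mass fits and Kerr parameter a/M = c J / (G M^2).
      G, c, M_sun = 6.674e-8, 2.998e10, 1.989e33   # cgs units

      def baryonic_from_gravitational(M):          # masses in solar units
          return M + (13.0 / 200.0) * M ** 2

      def gravitational_from_baryonic(Mb):
          return Mb - (1.0 / 20.0) * Mb ** 2

      def kerr_parameter(J_cgs, M_solar):
          return c * J_cgs / (G * (M_solar * M_sun) ** 2)

      M = 1.4
      Mb = baryonic_from_gravitational(M)
      print(Mb, gravitational_from_baryonic(Mb))   # ~1.53 and ~1.41: the two fits are near-inverses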

  8. Obtaining appropriate interval estimates for age when multiple indicators are used: evaluation of an ad-hoc procedure.

    PubMed

    Fieuws, Steffen; Willems, Guy; Larsen-Tangmose, Sara; Lynnerup, Niels; Boldsen, Jesper; Thevissen, Patrick

    2016-03-01

    When an estimate of age is needed, typically multiple indicators are present, as found in skeletal or dental information. There exists a vast literature on approaches to estimate age from such multivariate data. Application of Bayes' rule has been proposed to overcome drawbacks of classical regression models but becomes less trivial as soon as the number of indicators increases. Each of the age indicators can lead to a different point estimate ("the most plausible value for age") and a different prediction interval ("the range of possible values"). The major challenge in the combination of multiple indicators is not the calculation of a combined point estimate for age but the construction of an appropriate prediction interval. Ignoring the correlation between the age indicators results in intervals that are too narrow. Boldsen et al. (2002) presented an ad-hoc procedure to construct an approximate confidence interval without the need to model the multivariate correlation structure between the indicators. The aim of the present paper is to draw attention to this pragmatic approach and to evaluate its performance in a practical setting. This is all the more needed since recent publications ignore the need for interval estimation. To illustrate and evaluate the method, Köhler et al. (1995) third molar scores are used to estimate age in a dataset of 3200 male subjects in the juvenile age range.
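    To make the interval problem concrete, the sketch below combines several indicators on an age grid under a naive conditional-independence assumption (with hypothetical likelihoods); as noted above, ignoring the correlation between indicators in this way tends to give prediction intervals that are too narrow, which is precisely what the ad-hoc procedure of Boldsen et al. (2002) is designed to avoid:

      # Grid-based Bayesian combination of age indicators assuming independence
      # (illustration of the naive approach, not the paper's ad-hoc procedure).
      import numpy as np
      from scipy import stats

      ages = np.linspace(10, 30, 2001)                  # candidate ages (years)
      prior = np.ones_like(ages) / ages.size            # flat prior on the grid

      # hypothetical per-indicator likelihoods p(score | age)
      likelihoods = [stats.norm(18, 2.5).pdf(ages),
                     stats.norm(19, 3.0).pdf(ages),
                     stats.norm(17, 2.0).pdf(ages)]

      posterior = prior * np.prod(likelihoods, axis=0)
      posterior /= np.trapz(posterior, ages)
      cdf = np.cumsum(posterior) * (ages[1] - ages[0])

      point = ages[np.argmax(posterior)]                # MAP age
      lo, hi = ages[np.searchsorted(cdf, 0.025)], ages[np.searchsorted(cdf, 0.975)]
      print(point, (lo, hi))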

  9. Stochastic estimation of human shoulder impedance with robots: an experimental design.

    PubMed

    Park, Kyungbin; Chang, Pyung Hun

    2011-01-01

    Previous studies assumed the shoulder to be a hinge joint during human arm impedance measurement. This is obviously a vast simplification, since the shoulder is a complex of several joints with multiple degrees of freedom. In the present work, a practical methodology for a more general and realistic estimation of human shoulder impedance is proposed and validated with a spring array. It includes a gravity compensation scheme, which was developed and used for the experiments with a spatial three-degrees-of-freedom PUMA-type robot. The experimental results were accurate and reliable, demonstrating the strong potential of the proposed methodology for estimating human shoulder impedance. © 2011 IEEE

  10. Global Paleobathymetry Reconstruction with Realistic Shelf-Slope and Sediment Wedge

    NASA Astrophysics Data System (ADS)

    Goswami, A.; Hinnov, L. A.; Gnanadesikan, A.; Olson, P.

    2013-12-01

    We present paleo-ocean bathymetry reconstructions at 0.1° × 0.1° resolution, using simple geophysical models (the Plate Model Equation for oceanic lithosphere), published ages of the ocean floor (Müller et al. 2008), and modern world sediment thickness data (Divins 2003). The motivation is to create realistic paleobathymetry to understand the effect of ocean floor roughness on tides and heat transport in paleoclimate simulations. The values for the parameters in the Plate Model Equation are deduced from Crosby et al. (2006) and are used together with ocean floor age to model Depth to Basement. On top of the Depth to Basement, we added an isostatically adjusted multilayer sediment layer, as indicated by sediment thickness data for the modern oceans and marginal seas (Divins 2003). We also created another version of the sediment layer from the Müller et al. dataset. The Depth to Basement and the appropriate sediment layer together represent a realistic paleobathymetry. A Sediment Wedge was modeled to complement the reconstructed paleobathymetry by extending it to the coastlines. In this process we added a modeled Continental Shelf and Continental Slope to match the extent of the reconstructed paleobathymetry. The Sediment Wedge was prepared by studying the modern ocean, where a complete history of seafloor spreading is preserved (north, south and central Atlantic Ocean, Southern Ocean between Australia-Antarctica, and the Pacific Ocean off the west coast of South America). The model takes into account the modern continental shelf-slope structure (as evident from ETOPO1/ETOPO5), tectonic margin type (active vs. passive margin) and age of the latest tectonic activity (USGS & CGMW). Once the complete ocean bathymetry is modeled, we combine it with PALEOMAP (Scotese, 2011) continental reconstructions to produce global paleoworld elevation-bathymetry maps. Modern time (00 Ma) was assumed as a test case. Using the above-described methodology we reconstructed modern ocean
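    A minimal sketch of the depth-to-basement step, using for simplicity the half-space cooling approximation d = d_ridge + k * sqrt(age) with illustrative constants (the study itself uses the Plate Model Equation with parameters deduced from Crosby et al. (2006), which flattens for old seafloor), followed by an Airy-type isostatic adjustment for a sediment layer:

      # Depth-to-basement from seafloor age plus an isostatic sediment adjustment.
      import numpy as np

      D_RIDGE = 2600.0   # ridge depth, m (illustrative)
      K_SUBS  = 345.0    # subsidence coefficient, m per sqrt(Myr) (illustrative)

      def depth_to_basement(age_myr):
          return D_RIDGE + K_SUBS * np.sqrt(np.asarray(age_myr, dtype=float))

      def seafloor_with_sediment(basement_depth_m, sed_thickness_m,
                                 rho_sed=2000.0, rho_mantle=3300.0, rho_water=1030.0):
          """Water depth to the sediment surface after Airy isostatic loading."""
          return basement_depth_m - sed_thickness_m * (rho_mantle - rho_sed) / (rho_mantle - rho_water)

      ages = np.array([0.0, 10.0, 50.0, 100.0])
      print(depth_to_basement(ages))
      print(seafloor_with_sediment(depth_to_basement(100.0), 500.0))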

  11. Estimation of density of mongooses with capture-recapture and distance sampling

    USGS Publications Warehouse

    Corn, J.L.; Conroy, M.J.

    1998-01-01

    We captured mongooses (Herpestes javanicus) in live traps arranged in trapping webs in Antigua, West Indies, and used capture-recapture and distance sampling to estimate density. Distance estimation and program DISTANCE were used to provide estimates of density from the trapping-web data. Mean density based on trapping webs was 9.5 mongooses/ha (range, 5.9-10.2/ha); estimates had coefficients of variation ranging from 29.82% to 31.58% (mean = 30.46%). Mark-recapture models were used to estimate abundance, which was converted to density using estimates of effective trap area. Tests of model assumptions provided by CAPTURE indicated pronounced heterogeneity in capture probabilities and some indication of behavioral response and variation over time. Mean estimated density was 1.80 mongooses/ha (range, 1.37-2.15/ha) with estimated coefficients of variation of 4.68% to 11.92% (mean = 7.46%). Estimates of density based on mark-recapture data depended heavily on assumptions about animal home ranges; variances of densities also may be underestimated, leading to unrealistically narrow confidence intervals. Estimates based on trap webs require fewer assumptions, and estimated variances may be a more realistic representation of sampling variation. Because trap webs are established easily and provide adequate data for estimation in a few sample occasions, the method should be efficient and reliable for estimating densities of mongooses.
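    A small sketch of the conversion from a mark-recapture abundance estimate to a density estimate via an effective trapped area, with the coefficients of variation combined by the delta method (all numbers and the home-range buffer approach are hypothetical illustrations, not values from the study):

      # Abundance -> density using an effective trap area, with a delta-method CV.
      import math

      N_hat, cv_N = 50.0, 0.10            # hypothetical abundance estimate and its CV
      web_radius_m = 100.0                # trapping-web radius (hypothetical)
      buffer_m = 25.0                     # assumed home-range buffer around the web
      cv_area = 0.20                      # assumed CV of the effective area

      effective_area_ha = math.pi * (web_radius_m + buffer_m) ** 2 / 1e4
      density = N_hat / effective_area_ha
      cv_density = math.sqrt(cv_N ** 2 + cv_area ** 2)   # independent-ratio approximation

      print(f"D = {density:.2f}/ha, CV = {cv_density:.1%}")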

  12. The Estimation of Precisions in the Planning of Uas Photogrammetric Surveys

    NASA Astrophysics Data System (ADS)

    Passoni, D.; Federici, B.; Ferrando, I.; Gagliolo, S.; Sguerso, D.

    2018-05-01

    The Unmanned Aerial System (UAS) is widely used in photogrammetric surveys both of structures and of small areas. Geomatics focuses attention on the metric quality of the final products of the survey, and several 3D modelling applications are created from UAS images. As is widely known, the quality of the results derives from the quality of the image acquisition phase, which needs an a priori estimation of the expected precisions. The planning phase is typically managed using dedicated tools adapted from traditional aerial-photogrammetric flight planning, but a UAS flight has features completely different from a traditional one. Hence, the use of UAS for photogrammetric applications today requires improved knowledge in planning. The basic idea of this research is to provide a drone photogrammetric flight planning tool that considers the required metric precisions, given a priori the classical parameters of photogrammetric planning: flight altitude, overlaps, and the geometric parameters of the camera. The created "office suite" allows realistic planning of a photogrammetric survey, starting from an approximate knowledge of the Digital Surface Model (DSM) and the effective attitude parameters, which change along the route. The planning products are the overlap of the images, the Ground Sample Distance (GSD) and the precision of each pixel, taking into account the real geometry. The different procedures tested, the results obtained, and the solution proposed for the a priori estimation of precisions in the particular case of UAS surveys are reported here.
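    The core planning relations such a tool builds on are the standard photogrammetric formulas for ground sample distance, image footprint, and exposure spacing from the desired overlaps; the camera and flight parameters in the sketch below are hypothetical examples:

      # Basic UAS flight-planning relations (GSD, footprint, base, strip spacing).
      def plan_block(flight_height_m, focal_mm, pixel_um, img_w_px, img_h_px,
                     forward_overlap=0.8, side_overlap=0.7):
          gsd_m = (pixel_um * 1e-6) * flight_height_m / (focal_mm * 1e-3)  # ground sample distance
          footprint_w = gsd_m * img_w_px                   # across-track footprint
          footprint_h = gsd_m * img_h_px                   # along-track footprint
          base = footprint_h * (1.0 - forward_overlap)     # distance between exposures
          spacing = footprint_w * (1.0 - side_overlap)     # distance between strips
          return {"GSD_cm": gsd_m * 100.0,
                  "footprint_m": (footprint_w, footprint_h),
                  "base_m": base, "strip_spacing_m": spacing}

      print(plan_block(flight_height_m=80, focal_mm=8.8, pixel_um=2.4,
                       img_w_px=5472, img_h_px=3648))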

  13. What works for whom in pharmacist-led smoking cessation support: realist review.

    PubMed

    Greenhalgh, Trisha; Macfarlane, Fraser; Steed, Liz; Walton, Robert

    2016-12-16

    New models of primary care are needed to address funding and staffing pressures. We addressed the research question "what works for whom in what circumstances in relation to the role of community pharmacies in providing lifestyle interventions to support smoking cessation?" This is a realist review conducted according to RAMESES standards. We began with a sample of 103 papers included in a quantitative review of community pharmacy intervention trials identified through systematic searching of seven databases. We supplemented this with additional papers: studies that had been excluded from the quantitative review but which provided rigorous and relevant additional data for realist theorising; citation chaining (pursuing reference lists and Google Scholar forward tracking of key papers); the 'search similar citations' function on PubMed. After mapping what research questions had been addressed by these studies and how, we undertook a realist analysis to identify and refine candidate theories about context-mechanism-outcome configurations. Our final sample consisted of 66 papers describing 74 studies (12 systematic reviews, 6 narrative reviews, 18 RCTs, 1 process detail of a RCT, 1 cost-effectiveness study, 12 evaluations of training, 10 surveys, 8 qualitative studies, 2 case studies, 2 business models, 1 development of complex intervention). Most studies had been undertaken in the field of pharmacy practice (pharmacists studying what pharmacists do) and demonstrated the success of pharmacist training in improving confidence, knowledge and (in many but not all studies) patient outcomes. Whilst a few empirical studies had applied psychological theories to account for behaviour change in pharmacists or people attempting to quit, we found no studies that had either developed or tested specific theoretical models to explore how pharmacists' behaviour may be affected by organisational context. Because of the nature of the empirical data, only a provisional realist analysis

  14. Realistic versus Schematic Interactive Visualizations for Learning Surveying Practices: A Comparative Study

    ERIC Educational Resources Information Center

    Dib, Hazar; Adamo-Villani, Nicoletta; Garver, Stephen

    2014-01-01

    Many benefits have been claimed for visualizations, a general assumption being that learning is facilitated. However, several researchers argue that little is known about the cognitive value of graphical representations, be they schematic visualizations, such as diagrams or more realistic, such as virtual reality. The study reported in the paper…

  15. Order Matters: Sequencing Scale-Realistic versus Simplified Models to Improve Science Learning

    ERIC Educational Resources Information Center

    Chen, Chen; Schneps, Matthew H.; Sonnert, Gerhard

    2016-01-01

    Teachers choosing between different models to facilitate students' understanding of an abstract system must decide whether to adopt a model that is simplified and striking or one that is realistic and complex. Only recently have instructional technologies enabled teachers and learners to change presentations swiftly and to provide for learning…

  16. Return period estimates for European windstorm clusters: a multi-model perspective

    NASA Astrophysics Data System (ADS)

    Renggli, Dominik; Zimmerli, Peter

    2017-04-01

    Clusters of storms over Europe can lead to very large aggregated losses. Realistic return period estimates for such clusters are therefore of vital interest to the (re)insurance industry. Such return period estimates are usually derived from historical storm activity statistics of the last 30 to 40 years. However, climate models provide an alternative source, potentially representing thousands of simulated storm seasons. In this study, we made use of decadal hindcast data from eight different climate models in the CMIP5 archive. We used an objective tracking algorithm to identify individual windstorms in the climate model data. The algorithm also computes a (population density weighted) Storm Severity Index (SSI) for each of the identified storms (both on a continental and more regional basis). We derived return period estimates for the cluster seasons 1990, 1999, 2013/2014 and 1884 in the following way: For each climate model, we extracted two different exceedance frequency curves. The first describes the exceedance frequency (or the return period as the inverse of it) of a given SSI level due to an individual storm occurrence. The second describes the exceedance frequency of the seasonally aggregated SSI level (i.e. the sum of the SSI values of all storms in a given season). Starting from appropriate return period assumptions for each individual storm of a historical cluster (e.g. Anatol, Lothar and Martin in 1999) and using the first curve, we extracted the SSI levels at the corresponding return periods. Summing these SSI values results in the seasonally aggregated SSI value. Combining this with the second (aggregated) exceedance frequency curve results in a return period estimate of the historical cluster season. Since we do this for each model separately, we obtain eight different return period estimates for each historical cluster. In this way, we obtained the following return period estimates: 50 to 80 years for the 1990 season, 20 to 45 years for the 1999
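
    The two-curve procedure described above can be illustrated with a short sketch; the exceedance-frequency curves, SSI values and return periods below are synthetic and purely illustrative.

```python
# Hedged sketch of the two-curve procedure: per-storm SSI at assumed return
# periods is summed, then the aggregated curve is inverted to get the cluster
# season return period. All curves and numbers are invented for illustration.
import numpy as np

def ssi_at_return_period(rp_years, curve_rp, curve_ssi):
    """Interpolate the per-storm exceedance curve: SSI level at a given return period."""
    return np.interp(np.log(rp_years), np.log(curve_rp), curve_ssi)

def cluster_return_period(storm_rps, storm_curve, season_curve):
    curve_rp, curve_ssi = storm_curve
    agg_ssi = sum(ssi_at_return_period(rp, curve_rp, curve_ssi) for rp in storm_rps)
    season_rp, season_ssi = season_curve
    # invert the aggregated (seasonal) curve: return period at the summed SSI level
    return float(np.exp(np.interp(agg_ssi, season_ssi, np.log(season_rp)))), agg_ssi

# synthetic monotonic curves (return period in years vs SSI level)
storm_curve  = (np.array([1, 2, 5, 10, 20, 50, 100]),
                np.array([1.0, 1.8, 3.0, 4.2, 5.6, 7.5, 9.0]))
season_curve = (np.array([1, 2, 5, 10, 20, 50, 100]),
                np.array([2.0, 3.5, 6.0, 8.5, 11.5, 16.0, 20.0]))

rp, agg = cluster_return_period([30, 20, 15], storm_curve, season_curve)
print(f"aggregated SSI = {agg:.1f}, cluster season return period = {rp:.0f} years")
```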

  17. Biomass particle models with realistic morphology and resolved microstructure for simulations of intraparticle transport phenomena

    DOE PAGES

    Ciesielski, Peter N.; Crowley, Michael F.; Nimlos, Mark R.; ...

    2014-12-09

    Biomass exhibits a complex microstructure of directional pores that impact how heat and mass are transferred within biomass particles during conversion processes. However, models of biomass particles used in simulations of conversion processes typically employ oversimplified geometries such as spheres and cylinders and neglect intraparticle microstructure. In this study, we develop 3D models of biomass particles with size, morphology, and microstructure based on parameters obtained from quantitative image analysis. We obtain measurements of particle size and morphology by analyzing large ensembles of particles that result from typical size reduction methods, and we delineate several representative size classes. Microstructural parameters, including cell wall thickness and cell lumen dimensions, are measured directly from micrographs of sectioned biomass. A general constructive solid geometry algorithm is presented that produces models of biomass particles based on these measurements. Next, we employ the parameters obtained from image analysis to construct models of three different particle size classes from two different feedstocks representing a hardwood poplar species (Populus tremuloides, quaking aspen) and a softwood pine (Pinus taeda, loblolly pine). Finally, we demonstrate the utility of the models and the effects of explicit microstructure by performing finite-element simulations of intraparticle heat and mass transfer, and the results are compared to similar simulations using traditional simplified geometries. In conclusion, we show how the behavior of particle models with more realistic morphology and explicit microstructure departs from that of spherical models in simulations of transport phenomena and that species-dependent differences in microstructure impact simulation results in some cases.

  18. Fast, Automated, Photo realistic, 3D Modeling of Building Interiors

    DTIC Science & Technology

    2016-09-12

    In this project, we developed two algorithmic pipelines for GPS-denied indoor mobile 3D mapping using an ambulatory backpack system. By mounting scanning equipment on a backpack system, a human operator can traverse the interior of a building to produce a high-quality 3D reconstruction. [Final Report: Fast, Automated, Photo-realistic, 3D Modeling of Building Interiors, 1 May 2011 to 30 June 2015.]

  19. Generation of realistic virtual nodules based on three-dimensional spatial resolution in lung computed tomography: A pilot phantom study.

    PubMed

    Narita, Akihiro; Ohkubo, Masaki; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi

    2017-10-01

    The aim of this feasibility study using phantoms was to propose a novel method for obtaining computer-generated realistic virtual nodules in lung computed tomography (CT). In the proposed methodology, pulmonary nodule images obtained with a CT scanner are deconvolved with the point spread function (PSF) in the scan plane and slice sensitivity profile (SSP) measured for the scanner; the resultant images are referred to as nodule-like object functions. Next, by convolving the nodule-like object function with the PSF and SSP of another (target) scanner, the virtual nodule can be generated so that it has the characteristics of the spatial resolution of the target scanner. To validate the methodology, the authors applied physical nodules of 5-, 7- and 10-mm-diameter (uniform spheres) included in a commercial CT test phantom. The nodule-like object functions were calculated from the sphere images obtained with two scanners (Scanner A and Scanner B); these functions were referred to as nodule-like object functions A and B, respectively. From these, virtual nodules were generated based on the spatial resolution of another scanner (Scanner C). By investigating the agreement of the virtual nodules generated from the nodule-like object functions A and B, the equivalence of the nodule-like object functions obtained from different scanners could be assessed. In addition, these virtual nodules were compared with the real (true) sphere images obtained with Scanner C. As a practical validation, five types of laboratory-made physical nodules with various complicated shapes and heterogeneous densities, similar to real lesions, were used. The nodule-like object functions were calculated from the images of these laboratory-made nodules obtained with Scanner A. From them, virtual nodules were generated based on the spatial resolution of Scanner C and compared with the real images of laboratory-made nodules obtained with Scanner C. Good agreement of the virtual nodules generated from
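
    A minimal 1-D sketch of the resolution-transfer idea summarized above: deconvolve the measured nodule profile with the acquiring scanner's PSF to obtain a nodule-like object function, then convolve with the target scanner's PSF. The Gaussian PSFs and the Wiener-style regularization are assumptions for illustration; the study works with measured in-plane PSFs and slice sensitivity profiles.

```python
# 1-D illustration only; the method in the abstract operates on 3-D CT data.
import numpy as np

def gaussian_psf(n, fwhm):
    x = np.arange(n) - n // 2
    sigma = fwhm / 2.355
    psf = np.exp(-0.5 * (x / sigma) ** 2)
    return psf / psf.sum()

def change_resolution(profile, psf_src, psf_target, eps=1e-3):
    """Wiener-style deconvolution with the source PSF, then convolution with the target PSF."""
    P = np.fft.fft(profile)
    Hs = np.fft.fft(np.fft.ifftshift(psf_src))
    Ht = np.fft.fft(np.fft.ifftshift(psf_target))
    obj = P * np.conj(Hs) / (np.abs(Hs) ** 2 + eps)   # "nodule-like object function"
    return np.real(np.fft.ifft(obj * Ht))             # virtual nodule as seen by the target scanner

n = 256
true_nodule = (np.abs(np.arange(n) - n // 2) < 10).astype(float)     # idealized 1-D "sphere" profile
blur = np.fft.fft(np.fft.ifftshift(gaussian_psf(n, fwhm=6.0)))
measured = np.real(np.fft.ifft(np.fft.fft(true_nodule) * blur))      # profile as imaged by scanner A
virtual = change_resolution(measured, gaussian_psf(n, 6.0), gaussian_psf(n, 3.0))
print(virtual[n // 2 - 12: n // 2 + 12].round(2))
```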

  20. A novel wide-field-of-view display method with higher central resolution for hyper-realistic head dome projector

    NASA Astrophysics Data System (ADS)

    Hotta, Aira; Sasaki, Takashi; Okumura, Haruhiko

    2007-02-01

    In this paper, we propose a novel display method to realize a high-resolution image in a central visual field for a hyper-realistic head dome projector. The method uses image processing based on the characteristics of human vision, namely, high central visual acuity and low peripheral visual acuity, and pixel shift technology, which is one of the resolution-enhancing technologies for projectors. The projected image with our method is a fine wide-viewing-angle image with high definition in the central visual field. We evaluated the psychological effects of the projected images with our method in terms of sensation of reality. According to the result, we obtained 1.5 times higher resolution in the central visual field and a greater sensation of reality by using our method.

  1. Estimation and impact assessment of input and parameter uncertainty in predicting groundwater flow with a fully distributed model

    NASA Astrophysics Data System (ADS)

    Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke

    2017-04-01

    Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.

  2. Estimating sturgeon abundance in the Carolinas using side-scan sonar

    USGS Publications Warehouse

    Flowers, H. Jared; Hightower, Joseph E.

    2015-01-01

    Sturgeons (Acipenseridae) are one of the most threatened taxa worldwide, including species in North Carolina and South Carolina. Populations of Atlantic Sturgeon Acipenser oxyrinchus in the Carolinas have been significantly reduced from historical levels by a combination of intense fishing and habitat loss. There is a need for estimates of current abundance, to describe status, and for estimates of historical abundance in order to provide realistic recovery goals. In this study we used N-mixture and distance models with data acquired from side-scan sonar surveys to estimate abundance of sturgeon in six major sturgeon rivers in North Carolina and South Carolina. Estimated abundances of sturgeon greater than 1 m TL in the Carolina distinct population segment (DPS) were 2,031 using the count model and 1,912 via the distance model. The Pee Dee River had the highest overall abundance of any river at 1,944 (count model) or 1,823 (distance model). These estimates do not account for sturgeon less than 1 m TL or occurring in riverine reaches not surveyed or in marine waters. Comparing the two models, the N-mixture model produced similar estimates using less data than the distance model with only a slight reduction of estimated precision.

  3. Eruption mass estimation using infrasound waveform inversion and ash and gas measurements: Evaluation at Sakurajima Volcano, Japan

    NASA Astrophysics Data System (ADS)

    Fee, David; Izbekov, Pavel; Kim, Keehoon; Yokoo, Akihiko; Lopez, Taryn; Prata, Fred; Kazahaya, Ryunosuke; Nakamichi, Haruhisa; Iguchi, Masato

    2017-12-01

    Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here we present a robust full-waveform acoustic inversion method, and use it to calculate eruption flow rates and masses from 49 explosions from Sakurajima Volcano, Japan. Six infrasound stations deployed from 12-20 February 2015 recorded the explosions. We compute numerical Green's functions using 3-D Finite Difference Time Domain modeling and a high-resolution digital elevation model. The inversion, assuming a simple acoustic monopole source, provides realistic eruption masses and excellent fit to the data for the majority of the explosions. The inversion results are compared to independent eruption masses derived from ground-based ash collection and volcanic gas measurements. Assuming realistic flow densities, our infrasound-derived eruption masses for ash-rich eruptions compare favorably to the ground-based estimates, with agreement ranging from within a factor of two to one order of magnitude. Uncertainties in the time-dependent flow density and acoustic propagation likely contribute to the mismatch between the methods. Our results suggest that realistic and accurate infrasound-based eruption mass and mass flow rate estimates can be computed using the method employed here. If accurate volcanic flow parameters are known, application of

  4. Blind test of methods for obtaining 2-D near-surface seismic velocity models from first-arrival traveltimes

    USGS Publications Warehouse

    Zelt, Colin A.; Haines, Seth; Powers, Michael H.; Sheehan, Jacob; Rohdewald, Siegfried; Link, Curtis; Hayashi, Koichi; Zhao, Don; Zhou, Hua-wei; Burton, Bethany L.; Petersen, Uni K.; Bonal, Nedra D.; Doll, William E.

    2013-01-01

    Seismic refraction methods are used in environmental and engineering studies to image the shallow subsurface. We present a blind test of inversion and tomographic refraction analysis methods using a synthetic first-arrival-time dataset that was made available to the community in 2010. The data are realistic in terms of the near-surface velocity model, shot-receiver geometry and the data's frequency and added noise. Fourteen estimated models were determined by ten participants using eight different inversion algorithms, with the true model unknown to the participants until it was revealed at a session at the 2011 SAGEEP meeting. The estimated models are generally consistent in terms of their large-scale features, demonstrating the robustness of refraction data inversion in general, and the eight inversion algorithms in particular. When compared to the true model, all of the estimated models contain a smooth expression of its two main features: a large offset in the bedrock and the top of a steeply dipping low-velocity fault zone. The estimated models do not contain a subtle low-velocity zone and other fine-scale features, in accord with conventional wisdom. Together, the results support confidence in the reliability and robustness of modern refraction inversion and tomographic methods.

  5. Satisfaction and sustainability: a realist review of decentralized models of perinatal surgery for rural women.

    PubMed

    Kornelsen, Jude; McCartney, Kevin; Williams, Kim

    2016-01-01

    This article was developed as part of a larger realist review investigating the viability and efficacy of decentralized models of perinatal surgical services for rural women in the context of recent and ongoing service centralization witnessed in many developed nations. The larger realist review was commissioned by the British Columbia Ministry of Health and Perinatal Services of British Columbia, Canada. Findings from that review are addressed in this article specific to the sustainability of rural perinatal surgical sites and the satisfaction of providers that underpins their recruitment to and retention at such sites. A realist method was used in the selection and analysis of literature with the intention to iteratively develop a sophisticated understanding of how perinatal surgical services can best meet the needs of women who live in rural and remote environments. The goal of a realist review is to examine what works for whom under what circumstances and why. The high sensitivity search used language (English) and year (since 1990) limiters in keeping with both a realist and rapid review tradition of using reasoned contextual boundaries. No exclusions were made based on methodology or methodological approach in keeping with a realist review. Databases searched included MEDLINE, PubMed, EBSCO, CINAHL, EBM Reviews, NHS Economic Evaluation Database and PAIS International for literature in December 2013. Database searching produced 103 included academic articles. A further 59 resources were added through pearling and 13 grey literature reports were added on recommendation from the commissioner. A total of 42 of these 175 articles were included in this article as specific to provider satisfaction and service sustainability. Operative perinatal practice was found to be a lynchpin of sustainable primary and surgical services in rural communities. Rural shortages of providers, including challenges with recruitment and retention, were found to be a complex issue, with

  6. The Effect of Realistic Versus Imaginary Aggressive Models of Children's Interpersonal Play

    ERIC Educational Resources Information Center

    Hapkiewicz, Walter G.; Stone, Robert D.

    1974-01-01

    One hundred eighty elementary school children were randomly assigned to same sex pairs and randomly assigned to one of three treatment groups: real-life aggressive film, aggressive cartoon, or nonaggressive film. Results reveal that boys who viewed the realistic aggressive film were significantly more aggressive in play than boys who viewed the…

  7. Vector velocity volume flow estimation: Sources of error and corrections applied for arteriovenous fistulas.

    PubMed

    Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo; Hansen, Peter Møller; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt

    2016-08-01

    A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo. This paper investigates errors from estimating volumetric flow using a commercial ultrasound scanner and the common assumptions made in the literature. The theoretical model shows, e.g., that volume flow is underestimated by 15% when the scan plane is off-axis from the vessel center by 28% of the vessel radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied for correcting the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of fistulas. When fitting an ellipse to cross-sectional scans of the fistulas, the major axis was on average 10.2 mm, which is 8.6% larger than the minor axis. The ultrasound beam was on average 1.5 mm from the vessel center, corresponding to 28% of the semi-major axis in an average fistula. Estimating volume flow with an elliptical, rather than circular, vessel area and correcting the ultrasound beam for being off-axis gave a significant (p = 0.008) reduction in error from 31.2% to 24.3%. The error is relative to the Ultrasound Dilution Technique, which is considered the gold standard for volume flow estimation for dialysis patients. The study shows the importance of correcting for volume flow errors, which are often made in clinical practice. Copyright © 2016 Elsevier B.V. All rights reserved.
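
    A hedged sketch of the two corrections mentioned above, assuming a circular vessel and a parabolic velocity profile; the derivation of the off-axis bias factor is illustrative and not taken from the paper, although it reproduces the quoted 15% underestimation at 28% off-axis.

```python
# Illustrative assumptions: circular vessel, parabolic (Poiseuille) profile.
import math

def offaxis_bias(d_over_r):
    """Estimated/true volume flow when the scan plane misses the center by d = d_over_r * R.

    Under a parabolic profile the mean chord velocity scales as (1 - (d/R)^2), and a
    vessel area inferred from the chord length scales the same way, so the combined
    bias is (1 - (d/R)^2)^2.
    """
    return (1.0 - d_over_r ** 2) ** 2

def volume_flow_ml_min(mean_velocity_m_s, major_axis_mm, minor_axis_mm, d_over_r=0.0):
    area_m2 = math.pi * (major_axis_mm / 2e3) * (minor_axis_mm / 2e3)   # elliptical cross-section
    q_m3_s = mean_velocity_m_s * area_m2 / offaxis_bias(d_over_r)       # off-axis correction
    return q_m3_s * 1e6 * 60                                            # m^3/s -> mL/min

print(f"bias at 28% off-axis: {1 - offaxis_bias(0.28):.0%} underestimation")
print(f"corrected flow: {volume_flow_ml_min(0.5, 10.2, 9.4, d_over_r=0.28):.0f} mL/min")
```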

  8. Access to primary care for socio-economically disadvantaged older people in rural areas: exploring realist theory using structural equation modelling in a linked dataset.

    PubMed

    Ford, John A; Jones, Andy; Wong, Geoff; Clark, Allan; Porter, Tom; Steel, Nick

    2018-06-19

    Realist approaches seek to answer questions such as 'how?', 'why?', 'for whom?', 'in what circumstances?' and 'to what extent?' interventions 'work' using context-mechanism-outcome (CMO) configurations. Quantitative methods are not well-established in realist approaches, but structural equation modelling (SEM) may be useful to explore CMO configurations. Our aim was to assess the feasibility and appropriateness of SEM to explore CMO configurations and, if appropriate, make recommendations based on our access to primary care research. Our specific objectives were to map variables from two large population datasets to CMO configurations from our realist review looking at access to primary care, generate latent variables where needed, and use SEM to quantitatively test the CMO configurations. A linked dataset was created by merging individual patient data from the English Longitudinal Study of Ageing and practice data from the GP Patient Survey. Patients registered in rural practices and who were in the highest deprivation tertile were included. Three latent variables were defined using confirmatory factor analysis. SEM was used to explore the nine full CMOs. All models were estimated using robust maximum likelihoods and accounted for clustering at practice level. Ordinal variables were treated as continuous to ensure convergence. We successfully explored our CMO configurations, but analysis was limited because of data availability. Two hundred seventy-six participants were included. We found a statistically significant direct (context to outcome) or indirect effect (context to outcome via mechanism) for two of nine CMOs. The strongest association was between 'ease of getting through to the surgery' and 'being able to get an appointment' with an indirect mediated effect through convenience (proportion of the indirect effect of the total was 21%). Healthcare experience was not directly associated with getting an appointment, but there was a statistically significant

  9. HackAttack: Game-Theoretic Analysis of Realistic Cyber Conflicts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferragut, Erik M; Brady, Andrew C; Brady, Ethan J

    Game theory is appropriate for studying cyber conflict because it allows for an intelligent and goal-driven adversary. Applications of game theory have led to a number of results regarding optimal attack and defense strategies. However, the overwhelming majority of applications explore overly simplistic games, often ones in which each participant's actions are visible to every other participant. These simplifications strip away the fundamental properties of real cyber conflicts: probabilistic alerting, hidden actions, unknown opponent capabilities. In this paper, we demonstrate that it is possible to analyze a more realistic game, one in which different resources have different weaknesses, players have different exploits, and moves occur in secrecy, but they can be detected. Certainly, more advanced and complex games are possible, but the game presented here is more realistic than any other game we know of in the scientific literature. While optimal strategies can be found for simpler games using calculus, case-by-case analysis, or, for stochastic games, Q-learning, our more complex game is more naturally analyzed using the same methods used to study other complex games, such as checkers and chess. We define a simple evaluation function and employ multi-step searches to create strategies. We show that such scenarios can be analyzed, and find that in cases of extreme uncertainty, it is often better to ignore one's opponent's possible moves. Furthermore, we show that a simple evaluation function in a complex game can lead to interesting and nuanced strategies.
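
    A toy sketch of the analysis style described above: a depth-limited game-tree search driven by a simple evaluation function. The state, moves and scoring are invented for illustration and are far simpler than the hidden-information cyber-conflict game in the paper.

```python
# Toy minimax-style search; everything here is a hypothetical stand-in.
def evaluate(state):
    """Simple evaluation: attacker's compromised resources minus a penalty for raised alerts."""
    return state["compromised"] - 2 * state["alerts"]

def moves(state, player):
    if player == "attacker":
        # quiet exploit (no alert) vs noisy exploit (raises one alert)
        return [{"compromised": state["compromised"] + 1, "alerts": state["alerts"] + a}
                for a in (0, 1)]
    # defender patches one resource, or does nothing
    return [{"compromised": max(0, state["compromised"] - 1), "alerts": state["alerts"]},
            state]

def search(state, depth, player):
    """Depth-limited search: attacker maximizes, defender minimizes the evaluation."""
    if depth == 0:
        return evaluate(state), None
    best = None
    for nxt in moves(state, player):
        value, _ = search(nxt, depth - 1, "defender" if player == "attacker" else "attacker")
        if best is None or (player == "attacker") == (value > best[0]):
            best = (value, nxt)
    return best

value, move = search({"compromised": 0, "alerts": 0}, depth=4, player="attacker")
print("look-ahead value:", value, "recommended first move:", move)
```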

  10. A generic framework to simulate realistic lung, liver and renal pathologies in CT imaging

    NASA Astrophysics Data System (ADS)

    Solomon, Justin; Samei, Ehsan

    2014-11-01

    Realistic three-dimensional (3D) mathematical models of subtle lesions are essential for many computed tomography (CT) studies focused on performance evaluation and optimization. In this paper, we develop a generic mathematical framework that describes the 3D size, shape, contrast, and contrast-profile characteristics of a lesion, as well as a method to create lesion models based on CT data of real lesions. Further, we implemented a technique to insert the lesion models into CT images in order to create hybrid CT datasets. This framework was used to create a library of realistic lesion models and corresponding hybrid CT images. The goodness of fit of the models was assessed using the coefficient of determination (R2) and the visual appearance of the hybrid images was assessed with an observer study using images of both real and simulated lesions and receiver operator characteristic (ROC) analysis. The average R2 of the lesion models was 0.80, implying that the models provide a good fit to real lesion data. The area under the ROC curve was 0.55, implying that the observers could not readily distinguish between real and simulated lesions. Therefore, we conclude that the lesion-modeling framework presented in this paper can be used to create realistic lesion models and hybrid CT images. These models could be instrumental in performance evaluation and optimization of novel CT systems.

  11. Evaluating impact of clinical guidelines using a realist evaluation framework.

    PubMed

    Reddy, Sandeep; Wakerman, John; Westhorp, Gill; Herring, Sally

    2015-12-01

    The Remote Primary Health Care Manuals (RPHCM) project team manages the development and publication of clinical protocols and procedures for primary care clinicians practicing in remote Australia. The Central Australian Rural Practitioners Association Standard Treatment Manual, the flagship manual of the RPHCM suite, has been evaluated for accessibility and acceptability in remote clinics three times in its 20-year history. These evaluations did not consider a theory-based framework or a programme theory, resulting in some limitations with the evaluation findings. With the RPHCM having an aim of enabling evidence-based practice in remote clinics and anecdotally reported to do so, testing this empirically for the full suite is vital for both stakeholders and future editions of the RPHCM. The project team utilized a realist evaluation framework to assess how, why and for what the RPHCM were being used by remote practitioners. A theory regarding the circumstances in which the manuals have and have not enabled evidence-based practice in the remote clinical context was tested. The project assessed this theory for all the manuals in the RPHCM suite, across government and aboriginal community-controlled clinics, in three regions of Australia. Implementing a realist evaluation framework to generate robust findings in this context has required innovation in the evaluation design and adaptation by researchers. This article captures the RPHCM team's experience in designing this evaluation. © 2015 John Wiley & Sons, Ltd.

  12. Local Estimators for Spacecraft Formation Flying

    NASA Technical Reports Server (NTRS)

    Fathpour, Nanaz; Hadaegh, Fred Y.; Mesbahi, Mehran; Nabi, Marzieh

    2011-01-01

    A formation estimation architecture for formation flying builds upon the local information exchange among multiple local estimators. Spacecraft formation flying involves the coordination of states among multiple spacecraft through relative sensing, inter-spacecraft communication, and control. Most existing formation flying estimation algorithms can only be supported via highly centralized, all-to-all, static relative sensing. New algorithms are needed that are scalable, modular, and robust to variations in the topology and link characteristics of the formation exchange network. These distributed algorithms should rely on a local information-exchange network, relaxing the assumptions on existing algorithms. In this research, it was shown that only local observability is required to design a formation estimator and control law. The approach relies on breaking up the overall information-exchange network into a sequence of local subnetworks, and invoking an agreement-type filter to reach consensus among local estimators within each local network. State estimates were obtained by a set of local measurements that were passed through a set of communicating Kalman filters to reach an overall state estimation for the formation. An optimization approach was also presented by means of which diffused estimates over the network can be incorporated in the local estimates obtained by each estimator via local measurements. This approach compares favorably with that obtained by a centralized Kalman filter, which requires complete knowledge of the raw measurement available to each estimator.

  13. A continuous family of realistic SUSY SU(5) GUTs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bajc, Borut, E-mail: borut.bajc@ijs.si

    2016-06-21

    It is shown that the minimal renormalizable supersymmetric SU(5) is still realistic provided the supersymmetric scale is at least a few tens of TeV or large R-parity violating terms are considered. In the first case the vacuum is metastable, and different consistency constraints can give a bounded allowed region in the tan β − m_SUSY plane. In the second case the mass eigenstate electron (down quark) is a linear combination of the original electron (down quark) and Higgsino (heavy colour triplet), and the mass ratio of bino and wino is determined. Both limits lead to light gravitino dark matter.

  14. Profile of a leader. Mary Agnes Snively: realistic optimist.

    PubMed

    Mansell, D

    1999-01-01

    This paper examines the leadership Mary Agnes Snively gave to Canadian nursing during the late-nineteenth and early-twentieth century with a particular focus on her practical views regarding nursing education. Although surrounded by the Victorian values of her day, Snively developed a vision of nursing education that was both optimistic and realistic. This investigation of Snively's ideas as they were articulated in papers she presented to the American Society of Superintendents of Training Schools for Nurses in 1895 and 1898, is further testament to the validity of the accolade, "Mother of Nurses in Canada," given her in 1924 by her biographer.

  15. 21 CFR 1315.34 - Obtaining an import quota.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 9 2010-04-01 2010-04-01 false Obtaining an import quota. 1315.34 Section 1315.34 Food and Drugs DRUG ENFORCEMENT ADMINISTRATION, DEPARTMENT OF JUSTICE IMPORTATION AND PRODUCTION QUOTAS... imports, the estimated medical, scientific, and industrial needs of the United States, the establishment...

  16. Realists, Radicals, and Rainbows. The Twenty-Eighth Amy Morris Homans Lecture 1994.

    ERIC Educational Resources Information Center

    Bennett, Roberta S.

    1995-01-01

    Challenges physical education professionals to be realists who name the conditions around them that divide according to group identity and thus perpetuate injustice; to be radicals who work to change conditions; and to build and follow a rainbow path to a future where social justice, human rights, and the human condition are first priorities. (JB)

  17. Modelling of seasonal influenza and estimation of the burden in Tunisia.

    PubMed

    Chlif, S; Aissi, W; Bettaieb, J; Kharroubi, G; Nouira, M; Yazidi, R; El Moussi, A; Maazaoui, L; Slim, A; Salah, A Ben

    2016-10-02

    The burden of influenza was estimated from surveillance data in Tunisia using epidemiological parameters of transmission with WHO classical tools and mathematical modelling. The incidence rates of influenza-associated influenza-like illness (ILI) per 100 000 were 18 735 in 2012/2013 season; 5536 in 2013/14 and 12 602 in 2014/15. The estimated proportions of influenza-associated ILI in the total outpatient load were 3.16%; 0.86% and 1.98% in the 3 seasons respectively. Distribution of influenza viruses among positive patients was: A(H3N2) 15.5%; A(H1N1)pdm2009 39.2%; and B virus 45.3% in 2014/2015 season. From the estimated numbers of symptomatic cases, we estimated that the critical proportions of the population that should be vaccinated were 15%, 4% and 10% respectively. Running the model for the different values of R0, we quantified the number of symptomatic clinical cases, the clinical attack rates, the symptomatic clinical attack rates and the number of deaths. More realistic versions of this model and improved estimates of parameters from surveillance data will strengthen the estimation of the burden of influenza.
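
    A worked example of the textbook relation between the basic reproduction number and the critical vaccination fraction, p_c = 1 − 1/R0 (assuming a fully effective vaccine). The R0 values below are illustrative assumptions; the abstract reports only the resulting critical proportions (15%, 4% and 10%).

```python
# Textbook herd-immunity threshold; R0 values are hypothetical.
def critical_vaccination_fraction(r0, vaccine_efficacy=1.0):
    """Fraction of the population to vaccinate so that effective R drops below 1."""
    return max(0.0, (1.0 - 1.0 / r0) / vaccine_efficacy)

for season, r0 in [("2012/13", 1.18), ("2013/14", 1.04), ("2014/15", 1.11)]:
    p_c = critical_vaccination_fraction(r0)
    print(f"{season}: R0 = {r0:.2f} -> vaccinate at least {p_c:.0%}")
```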

  18. Uncertainties in obtaining high reliability from stress-strength models

    NASA Technical Reports Server (NTRS)

    Neal, Donald M.; Matthews, William T.; Vangel, Mark G.

    1992-01-01

    There has been recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences of incorrectly assuming a particular statistical distribution for the stress or strength data used in obtaining high reliability values are identified. The computation of the reliability is defined as the probability of the strength being greater than the stress over the range of stress values. This method is often referred to as the stress-strength model. A sensitivity analysis was performed involving a comparison of reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
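
    A minimal Monte Carlo sketch of the stress-strength model discussed above, where reliability is the probability that strength exceeds stress. The normal and Weibull parameter choices are illustrative assumptions; the point is that high-reliability estimates are sensitive to the assumed tail.

```python
# Monte Carlo estimate of P(strength > stress) under two assumed strength distributions.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
stress = rng.normal(loc=400.0, scale=30.0, size=n)                  # e.g. MPa (hypothetical)
strength_normal = rng.normal(loc=600.0, scale=40.0, size=n)
strength_weibull = 620.0 * rng.weibull(a=20.0, size=n)              # similar mean, different tail

for name, strength in [("normal strength", strength_normal),
                       ("Weibull strength", strength_weibull)]:
    r = np.mean(strength > stress)
    print(f"{name}: reliability = {r:.6f} (P(failure) = {1 - r:.2e})")
```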

  19. Orbiting passive microwave sensor simulation applied to soil moisture estimation

    NASA Technical Reports Server (NTRS)

    Newton, R. W. (Principal Investigator); Clark, B. V.; Pitchford, W. M.; Paris, J. F.

    1979-01-01

    A sensor/scene simulation program was developed and used to determine the effects of scene heterogeneity, resolution, frequency, look angle, and surface and temperature relations on the performance of a spaceborne passive microwave system designed to estimate soil water information. The ground scene is based on classified LANDSAT images which provide realistic ground classes, as well as geometries. It was determined that the average sensitivity of antenna temperature to soil moisture improves as the antenna footprint size increased. Also, the precision (or variability) of the sensitivity changes as a function of resolution.

  20. Quest for a Realistic In Vivo Test Method for Antimicrobial Hand-Rub Agents: Introduction of a Low-Volume Hand Contamination Procedure

    PubMed Central

    Macinga, David R.; Beausoleil, Christopher M.; Campbell, Esther; Mulberry, Gayle; Brady, Ann; Edmonds, Sarah L.; Arbogast, James W.

    2011-01-01

    A novel method has been developed for the evaluation of alcohol-based hand rubs (ABHR) that employs a hand contamination procedure that more closely simulates the in-use conditions of ABHR. Hands of human subjects were contaminated with 0.2 ml of a concentrated suspension of Serratia marcescens (ATCC 14756) to achieve baseline contamination between 8 and 9 log10 CFU/hand while allowing product to be applied to dry hands with minimal soil load. Evaluation of 1.5 ml of an ABHR gel containing 62% ethanol produced log10 reductions of 2.66 ± 0.96, 2.40 ± 0.50, 2.41 ± 0.61, and 2.33 ± 0.49 (means ± standard deviations) after 1, 3, 7, and 10 successive contamination/product application cycles. In a study comparing this low-volume contamination (LVC) method to ASTM E1174, product dry times were more realistic and log10 reductions achieved by the ABHR were significantly greater when LVC was employed (P < 0.05). These results indicate that a novel low-volume hand contamination procedure, which more closely represents ABHR use conditions, provides more realistic estimates of in-use ABHR efficacies. Based on the LVC method, log10 reductions produced by ABHR were strongly dependent on the test product application volume (P < 0.0001) but were not influenced by the alcohol concentration when it was within the range of 62 to 85% (P = 0.378). PMID:22003004

  1. The ACCE method: an approach for obtaining quantitative or qualitative estimates of residual confounding that includes unmeasured confounding

    PubMed Central

    Smith, Eric G.

    2015-01-01

    Background:  Nonrandomized studies typically cannot account for confounding from unmeasured factors.  Method:  A method is presented that exploits the recently-identified phenomenon of  “confounding amplification” to produce, in principle, a quantitative estimate of total residual confounding resulting from both measured and unmeasured factors.  Two nested propensity score models are constructed that differ only in the deliberate introduction of an additional variable(s) that substantially predicts treatment exposure.  Residual confounding is then estimated by dividing the change in treatment effect estimate between models by the degree of confounding amplification estimated to occur, adjusting for any association between the additional variable(s) and outcome. Results:  Several hypothetical examples are provided to illustrate how the method produces a quantitative estimate of residual confounding if the method’s requirements and assumptions are met.  Previously published data is used to illustrate that, whether or not the method routinely provides precise quantitative estimates of residual confounding, the method appears to produce a valuable qualitative estimate of the likely direction and general size of residual confounding. Limitations:  Uncertainties exist, including identifying the best approaches for: 1) predicting the amount of confounding amplification, 2) minimizing changes between the nested models unrelated to confounding amplification, 3) adjusting for the association of the introduced variable(s) with outcome, and 4) deriving confidence intervals for the method’s estimates (although bootstrapping is one plausible approach). Conclusions:  To this author’s knowledge, it has not been previously suggested that the phenomenon of confounding amplification, if such amplification is as predictable as suggested by a recent simulation, provides a logical basis for estimating total residual confounding. The method's basic approach is
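
    A toy arithmetic sketch of the estimation step summarized above: the shift in the treatment-effect estimate between the two nested propensity-score models is divided by the estimated amplification of residual confounding. The numbers and the simple additive bias model are illustrative assumptions, not data from the paper.

```python
# Hypothetical numbers; assumes an additive bias that the amplified model inflates by factor A.
def residual_confounding(effect_base, effect_amplified, amplification_factor,
                         outcome_association_adjustment=0.0):
    """Estimate residual confounding b in the base model.

    If baseline residual confounding b is amplified to A*b in the second model,
    the observed change in the effect estimate is b*(A - 1), so b = change / (A - 1).
    """
    change = (effect_amplified - effect_base) - outcome_association_adjustment
    return change / (amplification_factor - 1.0)

bias = residual_confounding(effect_base=1.40, effect_amplified=1.52, amplification_factor=1.6)
print(f"estimated residual confounding = {bias:+.2f} (on the effect-estimate scale)")
print(f"approximately deconfounded effect = {1.40 - bias:.2f}")
```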

  2. Oxygen transfer rate estimation in oxidation ditches from clean water measurements.

    PubMed

    Abusam, A; Keesman, K J; Meinema, K; Van Straten, G

    2001-06-01

    Standard methods for the determination of oxygen transfer rate are based on assumptions that are not valid for oxidation ditches. This paper presents a realistic and simple new method to be used in the estimation of oxygen transfer rate in oxidation ditches from clean water measurements. The new method uses a loop-of-CSTRs model, which can be easily incorporated within control algorithms, for modelling oxidation ditches. Further, this method assumes zero oxygen transfer rates (KLa) in the unaerated CSTRs. Application of a formal estimation procedure to real data revealed that the aeration constant (k = KLaVA, where VA is the volume of the aerated CSTR) can be determined significantly more accurately than KLa and VA. Therefore, the new method estimates k instead of KLa. From application to real data, this method proved to be more accurate than the commonly used Dutch standard method (STORA, 1980).
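
    A hedged sketch of a loop-of-CSTRs clean-water reaeration model of the kind described above, with oxygen transfer only in the aerated tank and the aeration constant k = KLa·VA recovered by a coarse grid search; all tank, flow and noise parameters are illustrative assumptions.

```python
# Loop of equal CSTRs with aeration in one tank; parameters are hypothetical.
import numpy as np

def simulate_do(k, n_tanks=8, v_total=1000.0, q=2000.0, cs=9.1,
                c0=0.0, t_end=4.0, dt=1e-3):
    """Return (t, DO in the aerated tank) for a closed loop of equal CSTRs (units: m3, m3/h, g/m3, h)."""
    v = v_total / n_tanks
    c = np.full(n_tanks, c0)
    times, out = [], []
    for step in range(int(t_end / dt)):
        inflow = np.roll(c, 1)                      # each tank receives the previous tank's water
        dcdt = q * (inflow - c) / v
        dcdt[0] += k * (cs - c[0]) / v              # aeration only in tank 0, k = KLa * V_A
        c = c + dt * dcdt
        times.append(step * dt); out.append(c[0])
    return np.array(times), np.array(out)

# "measured" data generated with a true k, then recovered by a coarse grid search
t_obs, c_obs = simulate_do(k=600.0)
c_obs = c_obs + np.random.default_rng(1).normal(0, 0.05, c_obs.size)
grid = np.arange(400.0, 801.0, 10.0)
sse = [np.sum((simulate_do(k)[1] - c_obs) ** 2) for k in grid]
print(f"estimated k = KLa*V_A = {grid[int(np.argmin(sse))]:.0f} m3/h (true value 600)")
```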

  3. Analyzing the Effect of Multi-fuel and Practical Constraints on Realistic Economic Load Dispatch using Novel Two-stage PSO

    NASA Astrophysics Data System (ADS)

    Chintalapudi, V. S.; Sirigiri, Sivanagaraju

    2017-04-01

    In power system restructuring, pricing electrical power plays a vital role in cost allocation between suppliers and consumers. In the optimal power dispatch problem, not only the cost of active power generation but also the cost of reactive power generated by the generators should be considered to increase the effectiveness of the formulation. As the characteristics of the reactive power cost curve are similar to those of the active power cost curve, a nonconvex reactive power cost function is formulated. In this paper, a more realistic multi-fuel total cost objective is formulated by considering the active and reactive power costs of generators. The formulated cost function is optimized, subject to equality, inequality and practical constraints, using the proposed uniformly distributed two-stage particle swarm optimization. The proposed algorithm combines a uniform distribution of control variables (to start the iterative process from a good initial value) with a two-stage initialization process (to reach the best final value in fewer iterations), which enhances the convergence characteristics. The results obtained for the standard test functions and electrical systems considered indicate the effectiveness of the proposed algorithm, which obtains efficient solutions compared with existing methods. Hence, the proposed method is promising and can be easily applied to optimize power system objectives.

  4. A nonuniform popularity-similarity optimization (nPSO) model to efficiently generate realistic complex networks with communities

    NASA Astrophysics Data System (ADS)

    Muscoloni, Alessandro; Vittorio Cannistraci, Carlo

    2018-05-01

    The investigation of the hidden metric space behind complex network topologies is a fervid topic in current network science, and the hyperbolic space is one of the most studied because it seems associated with the structural organization of many real complex systems. The popularity-similarity-optimization (PSO) model simulates how random geometric graphs grow in the hyperbolic space, generating realistic networks with clustering, small-worldness, scale-freeness and rich-clubness. However, it fails to reproduce an important feature of real complex networks, which is the community organization. The geometrical-preferential-attachment (GPA) model was recently developed in order to confer on the PSO also a soft community structure, which is obtained by forcing different angular regions of the hyperbolic disk to have a variable level of attractiveness. However, the number and size of the communities cannot be explicitly controlled in the GPA, which is a clear limitation for real applications. Here, we introduce the nonuniform PSO (nPSO) model. Differently from GPA, the nPSO generates synthetic networks in the hyperbolic space where heterogeneous angular node attractiveness is forced by sampling the angular coordinates from a tailored nonuniform probability distribution (for instance, a mixture of Gaussians). The nPSO differs from GPA in three other aspects: it allows one to explicitly fix the number and size of communities; it allows one to tune their mixing property by means of the network temperature; and it is efficient at generating networks with high clustering. Several tests on the detectability of the community structure in nPSO synthetic networks and wide investigations of their structural properties confirm that the nPSO is a valid and efficient model to generate realistic complex networks with communities.
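
    A minimal sketch of the distinguishing ingredient of the nPSO named above: angular coordinates drawn from a mixture of Gaussians, one component per community, so that the number and size of communities are fixed explicitly. Component means, widths and community sizes are illustrative assumptions; the radial (popularity) part of the model is omitted.

```python
# Angular sampling only; component placement and widths are hypothetical.
import numpy as np

def sample_angles(community_sizes, spread=0.15, seed=0):
    """Return (angles, community labels) with one Gaussian component per community."""
    rng = np.random.default_rng(seed)
    n_comm = len(community_sizes)
    centers = (np.arange(n_comm) + 0.5) * 2.0 * np.pi / n_comm     # evenly spaced component means
    angles, labels = [], []
    for k, size in enumerate(community_sizes):
        theta = rng.normal(loc=centers[k], scale=spread, size=size)
        angles.append(np.mod(theta, 2.0 * np.pi))                  # wrap onto the circle
        labels.append(np.full(size, k))
    return np.concatenate(angles), np.concatenate(labels)

angles, labels = sample_angles([50, 30, 20])   # three communities of chosen sizes
for k in range(3):
    sel = labels == k
    print(f"community {k}: n = {sel.sum()}, mean angle = {np.degrees(angles[sel].mean()):.0f} deg")
```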

  5. Helioseismology of a Realistic Magnetoconvective Sunspot Simulation

    NASA Technical Reports Server (NTRS)

    Braun, D. C.; Birch, A. C.; Rempel, M.; Duvall, T. L., Jr.

    2012-01-01

    We compare helioseismic travel-time shifts measured from a realistic magnetoconvective sunspot simulation using both helioseismic holography and time-distance helioseismology, and measured from real sunspots observed with the Helioseismic and Magnetic Imager instrument on board the Solar Dynamics Observatory and the Michelson Doppler Imager instrument on board the Solar and Heliospheric Observatory. We find remarkable similarities in the travel-time shifts measured between the methodologies applied and between the simulated and real sunspots. Forward modeling of the travel-time shifts using either Born or ray approximation kernels and the sound-speed perturbations present in the simulation indicates major disagreements with the measured travel-time shifts. These findings do not substantially change with the application of a correction for the reduction of wave amplitudes in the simulated and real sunspots. Overall, our findings demonstrate the need for new methods for inferring the subsurface structure of sunspots through helioseismic inversions.

  6. Spatial performance of RegEM climate field reconstruction techniques in a realistic pseudoproxy context

    NASA Astrophysics Data System (ADS)

    Wang, J.; Emile-Geay, J.; Guillot, D.

    2011-12-01

    Several methods of climate field reconstructions (CFRs) have been introduced in the past few years to estimate past climate variability from proxy data over the Common Era. The pseudoproxy framework has become a tool of choice for assessing the relative merits of such methods. Here we compare four variants of the RegEM algorithm [Schneider, 2001], using a pseudoproxy network mimicking the key spatio-temporal characteristics of the network of Mann et al., 2008 (hereinafter M08); the methods are (1) RegEM TTLS (2) RegEM iTTLS (3) GraphEM and (4) RegEM iRIDGE. To ensure continuity with previous work [Smerdon et al. 2011], pseudoproxy series are designed as a white-noise degraded version of the simulated temperature field [Amman et al. 2007] over 850-1980 C.E. colocated with 1138 M08 proxies. We use signal-to-noise ratios (SNRs) of: ∞ (no noise), 1.0, 0.5 and 0.25, to simulate differences in proxy quality. Two novelties in pseudoproxy design are introduced here: (1) the decrease in proxy availability over time follows that found in M08, (2) a realistic case where the SNR is empirically derived from correlations between each M08 proxy and the HadCRUT3v temperature field. It is found that this realistic SNR is clustered around 0.3, but ranges from 0.1 to 0.8. Verification statistics such as RE, CE, r2, bias, standard deviation ratio and RMSE are presented for each method at each SNR level. The results show that all methods perform relatively well at SNR levels higher than 0.5, but display drastically different performances at lower SNR levels. Compared with results using pseudoproxy network of Mann et al., 1998, (hereinafter MBH98), the reconstruction skill of the M08 network is relatively improved, in line with the findings of Smerdon et al., 2011. Overall, we find that GraphEM and iTTLS tend to produce more robust estimates of the temperature field at low SNR levels than other schemes, while preserving a higher amount of variance in the target field. Ammann, C. M., F
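
    A small sketch of the pseudoproxy construction described above: each proxy is the colocated temperature series plus white noise set by a target signal-to-noise ratio (SNR defined as the ratio of signal to noise standard deviations, the usual pseudoproxy convention). The synthetic "temperature" series stands in for the simulated field and is an assumption.

```python
# White-noise pseudoproxies at prescribed SNR levels; the target series is synthetic.
import numpy as np

def make_pseudoproxy(temperature_series, snr, rng):
    if np.isinf(snr):                       # SNR = infinity means a perfect (noise-free) proxy
        return temperature_series.copy()
    noise_std = np.std(temperature_series) / snr
    return temperature_series + rng.normal(0.0, noise_std, temperature_series.size)

rng = np.random.default_rng(42)
years = np.arange(850, 1981)
temp = 0.2 * np.sin(2 * np.pi * (years - 850) / 60.0) + rng.normal(0, 0.1, years.size)

for snr in [np.inf, 1.0, 0.5, 0.25]:
    proxy = make_pseudoproxy(temp, snr, rng)
    r = np.corrcoef(temp, proxy)[0, 1]
    print(f"SNR = {snr}: correlation with target = {r:.2f}")
```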

  7. Realist identification of group-level latent variables for perinatal social epidemiology theory building.

    PubMed

    Eastwood, John Graeme; Jalaludin, Bin Badrudin; Kemp, Lynn Ann; Phung, Hai Ngoc

    2014-01-01

    We have previously reported in this journal on an ecological study of perinatal depressive symptoms in South Western Sydney. In that article, we briefly reported on a factor analysis that was utilized to identify empirical indicators for analysis. In this article, we report on the mixed method approach that was used to identify those latent variables. Social epidemiology has been slow to embrace a latent variable approach to the study of social, political, economic, and cultural structures and mechanisms, partly for philosophical reasons. Critical realist ontology and epistemology have been advocated as an appropriate methodological approach to both theory building and theory testing in the health sciences. We describe here an emergent mixed method approach that uses qualitative methods to identify latent constructs followed by factor analysis using empirical indicators chosen to measure identified qualitative codes. Comparative analysis of the findings is reported together with a limited description of realist approaches to abstract reasoning.

  8. Estimation of genealogical coancestry in plant species using a pedigree reconstruction algorithm and application to an oil palm breeding population.

    PubMed

    Cros, David; Sánchez, Leopoldo; Cochard, Benoit; Samper, Patrick; Denis, Marie; Bouvet, Jean-Marc; Fernández, Jesús

    2014-04-01

    Explicit pedigree reconstruction by simulated annealing gave reliable estimates of genealogical coancestry in plant species, especially when selfing rate was lower than 0.6, using a realistic number of markers. Genealogical coancestry information is crucial in plant breeding to estimate genetic parameters and breeding values. The approach of Fernández and Toro (Mol Ecol 15:1657-1667, 2006) to estimate genealogical coancestries from molecular data through pedigree reconstruction was limited to species with separate sexes. In this study it was extended to plants, allowing hermaphroditism and monoecy, with possible selfing. Moreover, some improvements were made to take previous knowledge on the population demographic history into account. The new method was validated using simulated and real datasets. Simulations showed that accuracy of estimates was high with 30 microsatellites, with the best results obtained for selfing rates below 0.6. In these conditions, the root mean square error (RMSE) between the true and estimated genealogical coancestry was small (<0.07), although the number of ancestors was overestimated and the selfing rate could be biased. Simulations also showed that linkage disequilibrium between markers and departure from the Hardy-Weinberg equilibrium in the founder population did not affect the efficiency of the method. Real oil palm data confirmed the simulation results, with a high correlation between the true and estimated genealogical coancestry (>0.9) and a low RMSE (<0.08) using 38 markers. The method was applied to the Deli oil palm population for which pedigree data were scarce. The estimated genealogical coancestries were highly correlated (>0.9) with the molecular coancestries using 100 markers. Reconstructed pedigrees were used to estimate effective population sizes. In conclusion, this method gave reliable genealogical coancestry estimates. The strategy was implemented in the software MOLCOANC 3.0.

  9. Effect of survey design and catch rate estimation on total catch estimates in Chinook salmon fisheries

    USGS Publications Warehouse

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2012-01-01

    Roving–roving and roving–access creel surveys are the primary techniques used to obtain information on harvest of Chinook salmon Oncorhynchus tshawytscha in Idaho sport fisheries. Once interviews are conducted using roving–roving or roving–access survey designs, mean catch rate can be estimated with the ratio-of-means (ROM) estimator, the mean-of-ratios (MOR) estimator, or the MOR estimator with exclusion of short-duration (≤0.5 h) trips. Our objective was to examine the relative bias and precision of total catch estimates obtained from use of the two survey designs and three catch rate estimators for Idaho Chinook salmon fisheries. Information on angling populations was obtained by direct visual observation of portions of Chinook salmon fisheries in three Idaho river systems over an 18-d period. Based on data from the angling populations, Monte Carlo simulations were performed to evaluate the properties of the catch rate estimators and survey designs. Among the three estimators, the ROM estimator provided the most accurate and precise estimates of mean catch rate and total catch for both roving–roving and roving–access surveys. On average, the root mean square error of simulated total catch estimates was 1.42 times greater and relative bias was 160.13 times greater for roving–roving surveys than for roving–access surveys. Length-of-stay bias and nonstationary catch rates in roving–roving surveys both appeared to affect catch rate and total catch estimates. Our results suggest that use of the ROM estimator in combination with an estimate of angler effort provided the least biased and most precise estimates of total catch for both survey designs. However, roving–access surveys were more accurate than roving–roving surveys for Chinook salmon fisheries in Idaho.
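
    A tiny worked example of the two catch-rate estimators named above, applied to hypothetical interview data (hours fished and fish caught per angler trip); the effort total and all numbers are invented for illustration.

```python
# Ratio-of-means vs mean-of-ratios catch rate estimators on hypothetical interviews.
import numpy as np

hours = np.array([0.5, 1.0, 2.0, 3.5, 4.0, 6.0])
catch = np.array([1,   0,   1,   3,   2,   5  ])

rom = catch.sum() / hours.sum()                    # ratio of means: total catch / total effort
mor = np.mean(catch / hours)                       # mean of ratios: average per-trip catch rate
mor_excl = np.mean((catch / hours)[hours > 0.5])   # mean of ratios, short (<= 0.5 h) trips excluded

total_effort_hours = 12_000                        # hypothetical independent effort estimate
print(f"ROM = {rom:.2f} fish/h -> total catch = {rom * total_effort_hours:.0f}")
print(f"MOR = {mor:.2f} fish/h -> total catch = {mor * total_effort_hours:.0f}")
print(f"MOR (excluding <= 0.5 h trips) = {mor_excl:.2f} fish/h")
```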

  10. Realistic Goals and Processes for Future Space Astronomy Portfolio Planning

    NASA Astrophysics Data System (ADS)

    Morse, Jon

    2015-08-01

    It is generally recognized that international participation and coordination is highly valuable for maximizing the scientific impact of modern space science facilities, as well as for cost-sharing reasons. Indeed, all large space science missions, and most medium and small missions, are international, even if one country or space agency has a clear leadership role and bears most of the development costs. International coordination is a necessary aspect of future mission planning, but how that coordination is done remains debatable. I propose that the community's scientific vision is generally homogeneous enough to permit international coordination of decadal-scale strategic science goals. However, the timing and budget allocation/funding mechanisms of individual countries and/or space agencies are too disparate for effective long-term strategic portfolio planning via a single international process. Rather, I argue that coordinated space mission portfolio planning is a natural consequence of international collaboration on individual strategic missions. I review the process and outcomes of the U.S. 2010 decadal survey in astronomy & astrophysics from the perspective of a government official who helped craft the survey charter and transmitted guidance to the scientific community on behalf of a sponsoring agency (NASA), while continuing to manage the current portfolio that involved ongoing negotiations with other space agencies. I analyze the difficulties associated with projecting long-term budgets, obtaining realistic mission costs (including the additional cost burdens of international partnerships), and developing new (possibly transformational) technologies. Finally, I remark on the future role that privately funded space science missions can have in accomplishing international science community goals.

  11. Design principles and optimal performance for molecular motors under realistic constraints

    NASA Astrophysics Data System (ADS)

    Tu, Yuhai; Cao, Yuansheng

    2018-02-01

    The performance of a molecular motor, characterized by its power output and energy efficiency, is investigated in the motor design space spanned by the stepping rate function and the motor-track interaction potential. Analytic results and simulations show that a gating mechanism that restricts forward stepping in a narrow window in configuration space is needed for generating high power at physiologically relevant loads. By deriving general thermodynamics laws for nonequilibrium motors, we find that the maximum torque (force) at stall is less than its theoretical limit for any realistic motor-track interactions due to speed fluctuations. Our study reveals a tradeoff for the motor-track interaction: while a strong interaction generates a high power output for forward steps, it also leads to a higher probability of wasteful spontaneous back steps. Our analysis and simulations show that this tradeoff sets a fundamental limit to the maximum motor efficiency in the presence of spontaneous back steps, i.e., loose-coupling. Balancing this tradeoff leads to an optimal design of the motor-track interaction for achieving a maximum efficiency close to 1 for realistic motors that are not perfectly coupled with the energy source. Comparison with existing data and suggestions for future experiments are discussed.

  12. A new framework for estimating return levels using regional frequency analysis

    NASA Astrophysics Data System (ADS)

    Winter, Hugo; Bernardara, Pietro; Clegg, Georgina

    2017-04-01

    We propose a new framework for incorporating more spatial and temporal information into the estimation of extreme return levels. Currently, most studies use extreme value models applied to data from a single site; an approach which is inefficient statistically and leads to return level estimates that are less physically realistic. We aim to highlight the benefits that could be obtained by using methodology based upon regional frequency analysis as opposed to classic single site extreme value analysis. This motivates a shift in thinking, which permits the evaluation of local and regional effects and makes use of the wide variety of data that are now available on high temporal and spatial resolutions. The recent winter storms over the UK during the winters of 2013-14 and 2015-16, which have caused wide-ranging disruption and damaged important infrastructure, provide the main motivation for the current work. One of the most impactful natural hazards is flooding, which is often initiated by extreme precipitation. In this presentation, we focus on extreme rainfall, but shall discuss other meteorological variables alongside potentially damaging hazard combinations. To understand the risks posed by extreme precipitation, we need reliable statistical models which can be used to estimate quantities such as the T-year return level, i.e. the level which is expected to be exceeded once every T-years. Extreme value theory provides the main collection of statistical models that can be used to estimate the risks posed by extreme precipitation events. Broadly, at a single site, a statistical model is fitted to exceedances of a high threshold and the model is used to extrapolate to levels beyond the range of the observed data. However, when we have data at many sites over a spatial domain, fitting a separate model for each separate site makes little sense and it would be better if we could incorporate all this information to improve the reliability of return level estimates. Here
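
    The single-site building block referred to here is the peaks-over-threshold (generalized Pareto) model; a minimal sketch of the T-year return level it implies is given below, with all parameter values hypothetical.

```python
import numpy as np

def gpd_return_level(u, sigma, xi, rate_per_year, T):
    """T-year return level from a peaks-over-threshold (GPD) fit.
    u: threshold, sigma: GPD scale, xi: GPD shape,
    rate_per_year: mean number of threshold exceedances per year."""
    m = rate_per_year * T  # expected number of exceedances in T years
    if abs(xi) < 1e-8:
        return u + sigma * np.log(m)
    return u + (sigma / xi) * (m ** xi - 1.0)

# Hypothetical daily-rainfall fit: threshold 30 mm, scale 12 mm, shape 0.1,
# about 5 exceedances per year; 100-year return level:
print(gpd_return_level(30.0, 12.0, 0.1, 5.0, 100.0))
```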

  13. XCAT/DRASIM: a realistic CT/human-model simulation package

    NASA Astrophysics Data System (ADS)

    Fung, George S. K.; Stierstorfer, Karl; Segars, W. Paul; Taguchi, Katsuyuki; Flohr, Thomas G.; Tsui, Benjamin M. W.

    2011-03-01

    The aim of this research is to develop a complete CT/human-model simulation package by integrating the 4D eXtended CArdiac-Torso (XCAT) phantom, a computer generated NURBS surface based phantom that provides a realistic model of human anatomy and respiratory and cardiac motions, and the DRASIM (Siemens Healthcare) CT-data simulation program. Unlike other CT simulation tools which are based on simple mathematical primitives or voxelized phantoms, this new simulation package has the advantages of utilizing a realistic model of human anatomy and physiological motions without voxelization and with accurate modeling of the characteristics of clinical Siemens CT systems. First, we incorporated the 4D XCAT anatomy and motion models into DRASIM by implementing a new library which consists of functions to read-in the NURBS surfaces of anatomical objects and their overlapping order and material properties in the XCAT phantom. Second, we incorporated an efficient ray-tracing algorithm for line integral calculation in DRASIM by computing the intersection points of the rays cast from the x-ray source to the detector elements through the NURBS surfaces of the multiple XCAT anatomical objects along the ray paths. Third, we evaluated the integrated simulation package by performing a number of sample simulations of multiple x-ray projections from different views followed by image reconstruction. The initial simulation results were found to be promising by qualitative evaluation. In conclusion, we have developed a unique CT/human-model simulation package which has great potential as a tool in the design and optimization of CT scanners, and the development of scanning protocols and image reconstruction methods for improving CT image quality and reducing radiation dose.
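
    Once the ray-surface intersection points are known, the line-integral step described above reduces to summing attenuation times segment length along each ray; the fragment below illustrates that accumulation with made-up intersection distances and material coefficients, not the DRASIM implementation.

```python
def line_integral(intersections, mu_between):
    """Accumulate the attenuation line integral along one ray.
    intersections: sorted distances (from the source) where the ray crosses surfaces.
    mu_between: attenuation coefficient of the material between consecutive crossings
                (len(mu_between) == len(intersections) - 1)."""
    total = 0.0
    for i, mu in enumerate(mu_between):
        total += mu * (intersections[i + 1] - intersections[i])
    return total

# Hypothetical ray crossing soft tissue, then bone, then soft tissue again:
print(line_integral([10.0, 12.5, 13.1, 16.0], [0.02, 0.05, 0.02]))
```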

  14. Dealing with Complex Causality in Realist Synthesis: The Promise of Qualitative Comparative Analysis

    ERIC Educational Resources Information Center

    Sager, Fritz; Andereggen, Celine

    2012-01-01

    In this article, the authors state two arguments: first, that the four categories of context, politics, polity, and policy make an adequate framework for systematic review being both exhaustive and parsimonious; second, that the method of qualitative comparative analysis (QCA) is an appropriate methodical approach for gaining realistic results…

  15. Group Augmentation in Realistic Visual-Search Decisions via a Hybrid Brain-Computer Interface.

    PubMed

    Valeriani, Davide; Cinel, Caterina; Poli, Riccardo

    2017-08-10

    Groups have increased sensing and cognition capabilities that typically allow them to make better decisions. However, factors such as communication biases and time constraints can lead to less-than-optimal group decisions. In this study, we use a hybrid Brain-Computer Interface (hBCI) to improve the performance of groups undertaking a realistic visual-search task. Our hBCI extracts neural information from EEG signals and combines it with response times to build an estimate of the decision confidence. This is used to weigh individual responses, resulting in improved group decisions. We compare the performance of hBCI-assisted groups with the performance of non-BCI groups using standard majority voting, and non-BCI groups using weighted voting based on reported decision confidence. We also investigate the impact on group performance of a computer-mediated form of communication between members. Results across three experiments suggest that the hBCI provides significant advantages over non-BCI decision methods in all cases. We also found that our form of communication increases individual error rates by almost 50% compared to non-communicating observers, which also results in worse group performance. Communication also makes reported confidence uncorrelated with the decision correctness, thereby nullifying its value in weighing votes. In summary, best decisions are achieved by hBCI-assisted, non-communicating groups.
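
    The decision rule outlined above, weighting each member's binary response by an estimated confidence, can be written compactly; the snippet below is a generic illustration with hypothetical confidence values, not the authors' hBCI pipeline.

```python
import numpy as np

def group_decision(responses, confidences):
    """Confidence-weighted vote for a binary visual-search decision.
    responses: +1 ("target present") / -1 ("target absent") per member.
    confidences: non-negative weights, e.g. derived from EEG features and response times."""
    score = np.dot(responses, confidences)
    return 1 if score >= 0 else -1

responses = np.array([+1, -1, +1, +1, -1])
confidences = np.array([0.9, 0.2, 0.6, 0.4, 0.3])  # hypothetical confidence estimates
print(group_decision(responses, confidences))       # weighted vote
print(1 if responses.sum() >= 0 else -1)            # plain majority, for comparison
```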

  16. Interactions Between Energetic Electrons and Realistic Whistler Mode Waves in the Jovian Magnetosphere

    NASA Astrophysics Data System (ADS)

    de Soria-Santacruz Pich, M.; Drozdov, A.; Menietti, J. D.; Garrett, H. B.; Kellerman, A. C.; Shprits, Y. Y.

    2016-12-01

    The radiation belts of Jupiter are the most intense of all the planets in the solar system. Their source is not well understood but they are believed to be the result of inward radial transport beyond the orbit of Io. In the case of Earth, the radiation belts are the result of local acceleration and radial diffusion from whistler waves, and it has been suggested that this type of acceleration may also be significant in the magnetosphere of Jupiter. Multiple diffusion codes have been developed to study the dynamics of the Earth's magnetosphere and characterize the interaction between relativistic electrons and whistler waves; in the present paper we adapt one of these codes, the two-dimensional version of the Versatile Electron Radiation Belt (VERB) computer code, to the case of the Jovian magnetosphere. We use realistic parameters to determine the importance of whistler emissions in the acceleration and loss of electrons in the Jovian magnetosphere. More specifically, we use an extensive wave survey from the Galileo spacecraft and initial conditions derived from the Galileo Interim Radiation Electron Model version 2 (GIRE2) to estimate the pitch angle and energy diffusion of the electron population due to lower and upper band whistlers as a function of latitude and radial distance from the planet, and we calculate the decay rates that result from this interaction.

  17. Real-time state estimation in a flight simulator using fNIRS.

    PubMed

    Gateau, Thibault; Durantin, Gautier; Lancelot, Francois; Scannella, Sebastien; Dehais, Frederic

    2015-01-01

    Working memory is a key executive function for flying an aircraft. This function is particularly critical when pilots have to recall series of air traffic control instructions. However, working memory limitations may jeopardize flight safety. Since the functional near-infrared spectroscopy (fNIRS) method seems promising for assessing working memory load, our objective is to implement an on-line fNIRS-based inference system that integrates two complementary estimators. The first estimator is a real-time state estimation MACD-based algorithm dedicated to identifying the pilot's instantaneous mental state (not-on-task vs. on-task). It does not require a calibration process to perform its estimation. The second estimator is an on-line SVM-based classifier that is able to discriminate task difficulty (low working memory load vs. high working memory load). These two estimators were tested with 19 pilots who were placed in a realistic flight simulator and were asked to recall air traffic control instructions. We found that the estimated pilot's mental state matched significantly better than chance with the pilot's real state (62% global accuracy, 58% specificity, and 72% sensitivity). The second estimator, dedicated to assessing single trial working memory loads, led to 80% classification accuracy, 72% specificity, and 89% sensitivity. These two estimators establish reusable blocks for further fNIRS-based passive brain computer interface development.
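
    The first estimator is described as MACD-based; the fragment below shows a generic moving-average convergence divergence computation applied to a single fNIRS channel, with window lengths and the threshold chosen arbitrarily for illustration.

```python
import numpy as np

def ema(x, span):
    """Exponential moving average with smoothing factor 2 / (span + 1)."""
    alpha = 2.0 / (span + 1.0)
    out = np.empty(len(x), dtype=float)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1.0 - alpha) * out[i - 1]
    return out

def macd_state(signal, fast=12, slow=26, threshold=0.0):
    """Label each sample on-task (True) when the MACD line exceeds a threshold."""
    macd_line = ema(signal, fast) - ema(signal, slow)
    return macd_line > threshold

# Hypothetical oxygenation time series: flat baseline followed by a slow rise.
sig = np.concatenate([np.zeros(50), np.linspace(0, 1, 50)]) + 0.05 * np.random.randn(100)
print(macd_state(sig).sum(), "samples labelled on-task")
```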

  19. A model for estimating pathogen variability in shellfish and predicting minimum depuration times.

    PubMed

    McMenemy, Paul; Kleczkowski, Adam; Lees, David N; Lowther, James; Taylor, Nick

    2018-01-01

    Norovirus is a major cause of viral gastroenteritis, with shellfish consumption being identified as one potential norovirus entry point into the human population. Minimising shellfish norovirus levels is therefore important for both the consumer's protection and the shellfish industry's reputation. One method used to reduce microbiological risks in shellfish is depuration; however, this process also presents additional costs to industry. Providing a mechanism to estimate norovirus levels during depuration would therefore be useful to stakeholders. This paper presents a mathematical model of the depuration process and its impact on norovirus levels found in shellfish. Two fundamental stages of norovirus depuration are considered: (i) the initial distribution of norovirus loads within a shellfish population and (ii) the way in which the initial norovirus loads evolve during depuration. Realistic assumptions are made about the dynamics of norovirus during depuration, and mathematical descriptions of both stages are derived and combined into a single model. Parameters to describe the depuration effect and norovirus load values are derived from existing norovirus data obtained from U.K. harvest sites. However, obtaining population estimates of norovirus variability is time-consuming and expensive; this model addresses the issue by assuming a 'worst case scenario' for variability of pathogens, which is independent of mean pathogen levels. The model is then used to predict minimum depuration times required to achieve norovirus levels which fall within possible risk management levels, as well as predictions of minimum depuration times for other water-borne pathogens found in shellfish. Times for Escherichia coli predicted by the model all fall within the minimum 42 hours required for class B harvest sites, whereas minimum depuration times for norovirus and FRNA+ bacteriophage are substantially longer. Thus this study provides relevant information and tools to assist
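
    Under the common simplifying assumption of first-order (exponential) decay during depuration, the minimum time needed to bring an initial pathogen load below a management level follows directly; the sketch below uses entirely hypothetical numbers and is not the paper's full two-stage model.

```python
import math

def min_depuration_time(initial_load, target_load, decay_rate_per_hour):
    """Hours needed for first-order decay N(t) = N0 * exp(-k t) to reach the target level."""
    if initial_load <= target_load:
        return 0.0
    return math.log(initial_load / target_load) / decay_rate_per_hour

# Hypothetical: 1000 genome copies/g down to 200 copies/g at k = 0.01 per hour.
print(min_depuration_time(1000.0, 200.0, 0.01))  # ~161 h, i.e. well beyond 42 h
```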

  20. A Role for MST Neurons in Heading Estimation

    NASA Technical Reports Server (NTRS)

    Stone, L. S.; Perrone, J. A.

    1994-01-01

    A template model of human visual self-motion perception, which uses neurophysiologically realistic "heading detectors", is consistent with numerous human psychophysical results including the failure of humans to estimate their heading (direction of forward translation) accurately under certain visual conditions. We tested the model detectors with stimuli used by others in single-unit studies. The detectors showed emergent properties similar to those of MST neurons: (1) sensitivity to non-preferred flow: each detector is tuned to a specific combination of flow components, and its response is systematically reduced by the addition of non-preferred flow; and (2) position invariance: the detectors maintain their apparent preference for particular flow components over large regions of their receptive fields. It has been argued that this latter property is incompatible with MST playing a role in heading perception. The model, however, demonstrates how neurons with the above response properties could still support accurate heading estimation within extrastriate cortical maps.

  1. EEG minimum-norm estimation compared with MEG dipole fitting in the localization of somatosensory sources at S1.

    PubMed

    Komssi, S; Huttunen, J; Aronen, H J; Ilmoniemi, R J

    2004-03-01

    Dipole models, which are frequently used in attempts to solve the electromagnetic inverse problem, require explicit a priori assumptions about the cerebral current sources. This is not the case for solutions based on minimum-norm estimates. In the present study, we evaluated the spatial accuracy of the L2 minimum-norm estimate (MNE) in realistic noise conditions by assessing its ability to localize sources of evoked responses at the primary somatosensory cortex (SI). Multichannel somatosensory evoked potentials (SEPs) and magnetic fields (SEFs) were recorded in 5 subjects while stimulating the median and ulnar nerves at the left wrist. A Tikhonov-regularized L2-MNE, constructed on a spherical surface from the SEP signals, was compared with an equivalent current dipole (ECD) solution obtained from the SEFs. Primarily tangential current sources accounted for both SEP and SEF distributions at around 20 ms (N20/N20m) and 70 ms (P70/P70m), which deflections were chosen for comparative analysis. The distances between the locations of the maximum current densities obtained from MNE and the locations of ECDs were on the average 12-13 mm for both deflections and nerves stimulated. In accordance with the somatotopical order of SI, both the MNE and ECD tended to localize median nerve activation more laterally than ulnar nerve activation for the N20/N20m deflection. Simulation experiments further indicated that, with a proper estimate of the source depth and with a good fit of the head model, the MNE can reach a mean accuracy of 5 mm in 0.2-microV root-mean-square noise. When compared with previously reported localizations based on dipole modelling of SEPs, it appears that equally accurate localization of S1 can be obtained with the MNE. MNE can be used to verify parametric source modelling results. Having a relatively good localization accuracy and requiring minimal assumptions, the MNE may be useful for the localization of poorly known activity distributions and for tracking
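
    For readers unfamiliar with the estimator, the Tikhonov-regularized L2 minimum-norm solution has a closed form given a lead-field (forward) matrix; the sketch below uses a random lead field purely for illustration.

```python
import numpy as np

def l2_mne(leadfield, data, lam):
    """Tikhonov-regularized L2 minimum-norm estimate of source currents.
    leadfield: (n_sensors, n_sources) forward matrix; data: (n_sensors,) measurements."""
    n_sensors = leadfield.shape[0]
    gram = leadfield @ leadfield.T + lam * np.eye(n_sensors)
    return leadfield.T @ np.linalg.solve(gram, data)

rng = np.random.default_rng(0)
L = rng.standard_normal((32, 500))          # hypothetical 32 electrodes, 500 sources
x_true = np.zeros(500); x_true[100] = 1.0   # single active source
b = L @ x_true + 0.01 * rng.standard_normal(32)
x_hat = l2_mne(L, b, lam=1.0)
print(int(np.argmax(np.abs(x_hat))))        # should be at or near index 100
```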

  2. Combining Campbell Standards and the Realist Evaluation Approach: The Best of Two Worlds?

    ERIC Educational Resources Information Center

    van der Knaap, Leontien M.; Leeuw, Frans L.; Bogaerts, Stefan; Nijssen, Laura T. J.

    2008-01-01

    This article presents an approach to systematic reviews that combines the Campbell Collaboration Crime and Justice standards and the realist notion of contexts-mechanisms-outcomes (CMO) configurations. Both approaches have their advantages and drawbacks, and the authors will make a case for combining both approaches to profit from their advantages…

  3. Realist Evaluation in Wraparound: A New Approach in Social Work Evidence-Based Practice

    ERIC Educational Resources Information Center

    Kazi, Mansoor A. F.; Pagkos, Brian; Milch, Heidi A.

    2011-01-01

    Objectives: The purpose of this study was to develop a realist evaluation paradigm in social work evidence-based practice. Method: Wraparound (at Gateway-Longview Inc., New York) used a reliable outcome measure and an electronic database to systematically collect and analyze data on the interventions, the client demographics and circumstances, and…

  4. Multimodal person authentication on a smartphone under realistic conditions

    NASA Astrophysics Data System (ADS)

    Morris, Andrew C.; Jassim, Sabah; Sellahewa, Harin; Allano, Lorene; Ehlers, Johan; Wu, Dalei; Koreman, Jacques; Garcia-Salicetti, Sonia; Ly-Van, Bao; Dorizzi, Bernadette

    2006-05-01

    Verification of a person's identity by the combination of more than one biometric trait strongly increases the robustness of person authentication in real applications. This is particularly the case in applications involving signals of degraded quality, as for person authentication on mobile platforms. The context of mobility generates degradations of input signals due to the variety of environments encountered (ambient noise, lighting variations, etc.), while the sensors' lower quality further contributes to a decrease in system performance. Our aim in this work is to combine traits from the three biometric modalities of speech, face and handwritten signature in a concrete application, performing non-intrusive biometric verification on a personal mobile device (smartphone/PDA). Most available biometric databases have been acquired in more or less controlled environments, which makes it difficult to predict performance in a real application. Our experiments are performed on a database acquired on a PDA as part of the SecurePhone project (IST-2002-506883 project "Secure Contracts Signed by Mobile Phone"). This database contains 60 virtual subjects balanced in gender and age. Virtual subjects are obtained by coupling audio-visual signals from real English-speaking subjects with signatures from other subjects captured on the touch screen of the PDA. Video data for the PDA database were recorded in 2 recording sessions separated by at least one week. Each session comprises 4 acquisition conditions: 2 indoor and 2 outdoor recordings (with, in each case, a good and a degraded-quality recording). Handwritten signatures were captured in one session in realistic conditions. Different scenarios of matching between training and test conditions are tested to measure the resistance of various fusion systems to different types of variability and different amounts of enrolment data.

  5. Mechanical stabilization of the Levitron's realistic model

    NASA Astrophysics Data System (ADS)

    Olvera, Arturo; De la Rosa, Abraham; Giordano, Claudia M.

    2016-11-01

    The stability of the magnetic levitation exhibited by the Levitron was studied by M.V. Berry as a six-degrees-of-freedom Hamiltonian system using an adiabatic approximation. Further, H.R. Dullin found critical spin rate bounds where the levitation persists, and R.F. Gans et al. offered numerical results regarding the initial-conditions manifold where this occurs. In line with this series of works, we first extend the equations of motion to include dissipation for a more realistic model, and then introduce a mechanical forcing to inject energy into the system in order to prevent the Levitron from falling. A systematic study of the flying time as a function of the forcing parameters is carried out, which yields detailed bifurcation diagrams showing an Arnold's tongues structure. The stability of these solutions was studied with the help of MEGNO, a novel method to compute the maximum Lyapunov exponent. The bifurcation diagrams for MEGNO reproduce the same Arnold's tongue structure.

  6. Optimal Bandwidth for Multitaper Spectrum Estimation

    DOE PAGES

    Haley, Charlotte L.; Anitescu, Mihai

    2017-07-04

    A systematic method for bandwidth parameter selection is desired for Thomson multitaper spectrum estimation. We give a method for determining the optimal bandwidth based on a mean squared error (MSE) criterion. When the true spectrum has a second-order Taylor series expansion, one can express quadratic local bias as a function of the curvature of the spectrum, which can be estimated by using a simple spline approximation. This is combined with a variance estimate, obtained by jackknifing over individual spectrum estimates, to produce an estimated MSE for the log spectrum estimate for each choice of time-bandwidth product. The bandwidth that minimizes the estimated MSE then gives the desired spectrum estimate. Additionally, the bandwidth obtained using our method is also optimal for cepstrum estimates. We give an example of a damped oscillatory (Lorentzian) process in which the approximate optimal bandwidth can be written as a function of the damping parameter. Furthermore, the true optimal bandwidth agrees well with that given by minimizing the estimated MSE in these examples.
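
    The main ingredients of the bandwidth-selection rule, a multitaper spectrum estimate for a given time-bandwidth product and a jackknife variance over the individual eigenspectra, can be assembled as follows; the spline-based curvature (bias) term of the paper is omitted, so this is only a partial sketch.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_logspectrum(x, NW, K):
    """Average of K eigenspectra plus a delete-one jackknife variance of the log spectrum."""
    N = len(x)
    tapers = dpss(N, NW, Kmax=K)                       # (K, N) Slepian tapers
    eigspec = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    log_full = np.log(eigspec.mean(axis=0))
    loo = np.array([np.log(np.delete(eigspec, k, axis=0).mean(axis=0)) for k in range(K)])
    jk_var = (K - 1) / K * ((loo - loo.mean(axis=0)) ** 2).sum(axis=0)
    return log_full, jk_var

x = np.random.default_rng(1).standard_normal(1024)
logS, var = multitaper_logspectrum(x, NW=4, K=7)
print(logS.shape, var.mean())
```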

  7. Spatial design and strength of spatial signal: Effects on covariance estimation

    USGS Publications Warehouse

    Irvine, Kathryn M.; Gitelman, Alix I.; Hoeting, Jennifer A.

    2007-01-01

    In a spatial regression context, scientists are often interested in a physical interpretation of components of the parametric covariance function. For example, spatial covariance parameter estimates in ecological settings have been interpreted to describe spatial heterogeneity or “patchiness” in a landscape that cannot be explained by measured covariates. In this article, we investigate the influence of the strength of spatial dependence on maximum likelihood (ML) and restricted maximum likelihood (REML) estimates of covariance parameters in an exponential-with-nugget model, and we also examine these influences under different sampling designs—specifically, lattice designs and more realistic random and cluster designs—at differing intensities of sampling (n=144 and 361). We find that neither ML nor REML estimates perform well when the range parameter and/or the nugget-to-sill ratio is large—ML tends to underestimate the autocorrelation function and REML produces highly variable estimates of the autocorrelation function. The best estimates of both the covariance parameters and the autocorrelation function come under the cluster sampling design and large sample sizes. As a motivating example, we consider a spatial model for stream sulfate concentration.
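
    The covariance model referred to above combines a nugget with an exponential decay in distance; a minimal sketch of that function, with illustrative parameter values, is given below.

```python
import numpy as np

def exp_nugget_cov(h, partial_sill, range_, nugget):
    """Exponential-with-nugget covariance: C(0) = partial_sill + nugget,
    C(h) = partial_sill * exp(-h / range_) for h > 0."""
    h = np.asarray(h, dtype=float)
    cov = partial_sill * np.exp(-h / range_)
    return np.where(h == 0.0, cov + nugget, cov)

# Hypothetical parameters: nugget-to-sill ratio 0.25, range 10 distance units.
print(exp_nugget_cov([0.0, 1.0, 10.0, 50.0], partial_sill=0.75, range_=10.0, nugget=0.25))
```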

  8. Lean and leadership practices: development of an initial realist program theory.

    PubMed

    Goodridge, Donna; Westhorp, Gill; Rotter, Thomas; Dobson, Roy; Bath, Brenna

    2015-09-07

    Lean as a management system has been increasingly adopted in health care settings in an effort to enhance quality, capacity and safety, while simultaneously containing or reducing costs. The Ministry of Health in the province of Saskatchewan, Canada has made a multi-million dollar investment in Lean initiatives to create "better health, better value, better care, and better teams", affording a unique opportunity to advance our understanding of the way in which Lean philosophy, principles and tools work in health care. In order to address the questions, "What changes in leadership practices are associated with the implementation of Lean?" and "When leadership practices change, how do the changed practices contribute to subsequent outcomes?", we used a qualitative, multi-stage approach to work towards developing an initial realist program theory. We describe the implications of realist assumptions for evaluation of this Lean initiative. Formal theories, including Normalization Process Theory, theories of double-loop and organizational learning, and the Theory of Cognitive Dissonance, help to understand this initial rough program theory. Data collection included: key informant consultation; a stakeholder workshop; documentary review; 26 audiotaped and transcribed interviews with health region personnel; and team discussions. A set of seven initial hypotheses regarding the manner in which Lean changes leadership practices was developed from our data. We hypothesized that Lean, as implemented in this particular setting, changes leadership practices in the following ways. Lean: a) aligns the aims and objectives of health regions; b) authorizes attention and resources to quality improvement and change management; c) provides an integrated set of tools for particular tasks; d) changes leaders' attitudes or beliefs about appropriate leadership and management styles and behaviors; e) demands increased levels of expertise, accountability and commitment from leaders; f) measures and

  9. Porosity estimation by semi-supervised learning with sparsely available labeled samples

    NASA Astrophysics Data System (ADS)

    Lima, Luiz Alberto; Görnitz, Nico; Varella, Luiz Eduardo; Vellasco, Marley; Müller, Klaus-Robert; Nakajima, Shinichi

    2017-09-01

    This paper addresses the porosity estimation problem from seismic impedance volumes and porosity samples located in a small group of exploratory wells. Regression methods, trained on the impedance as inputs and the porosity as output labels, generally suffer from extremely expensive (and hence sparsely available) porosity samples. To make optimal use of the valuable porosity data, a semi-supervised machine learning method, Transductive Conditional Random Field Regression (TCRFR), was proposed and shown to perform well (Görnitz et al., 2017). TCRFR, however, still requires more labeled data than are usually available, which creates a gap when applying the method to the porosity estimation problem in realistic situations. In this paper, we aim to fill this gap by introducing two graph-based preprocessing techniques, which adapt the original TCRFR to extremely weakly supervised scenarios. Our new method outperforms the previous automatic estimation methods on synthetic data and provides a result comparable to the manual, labor-intensive and time-consuming geostatistics approach on real data, proving its potential as a practical industrial tool.

  10. Statistical multi-path exposure method for assessing the whole-body SAR in a heterogeneous human body model in a realistic environment.

    PubMed

    Vermeeren, Günter; Joseph, Wout; Martens, Luc

    2013-04-01

    Assessing the whole-body absorption in a human in a realistic environment requires a statistical approach covering all possible exposure situations. This article describes the development of a statistical multi-path exposure method for heterogeneous realistic human body models. The method is applied for the 6-year-old Virtual Family boy (VFB) exposed to the GSM downlink at 950 MHz. It is shown that the whole-body SAR does not differ significantly over the different environments at an operating frequency of 950 MHz. Furthermore, the whole-body SAR in the VFB for multi-path exposure exceeds the whole-body SAR for worst-case single-incident plane wave exposure by 3.6%. Moreover, the ICNIRP reference levels are not conservative with the basic restrictions in 0.3% of the exposure samples for the VFB at the GSM downlink of 950 MHz. The homogeneous spheroid with the dielectric properties of the head suggested by the IEC underestimates the absorption compared to realistic human body models. Moreover, the variation in the whole-body SAR for realistic human body models is larger than for homogeneous spheroid models. This is mainly due to the heterogeneity of the tissues and the irregular shape of the realistic human body model compared to homogeneous spheroid human body models. Copyright © 2012 Wiley Periodicals, Inc.

  11. A feasibility test to estimate the duration of phytoextraction of heavy metals from polluted soils.

    PubMed

    Japenga, J; Koopmans, G F; Song, J; Römkens, P F A M

    2007-01-01

    The practical applicability of heavy metal (HM) phytoextraction depends heavily on its duration. Phytoextraction duration is the main cost factor for phytoextraction, referring both to the recurring economic costs incurred during the process and to the cost of the soil having no economic value while it is under way. An experiment is described here, which is meant as a preliminary feasibility test before starting a phytoextraction scheme in practice, to obtain a more realistic estimate of the phytoextraction duration of a specific HM-polluted soil. In the experiment, HM-polluted soil is mixed at different ratios with unpolluted soil of comparable composition to mimic the gradual decrease of the HM content in the target HM-polluted soil during phytoextraction. After equilibrating the soil mixtures, one cropping cycle is carried out with the plant species of interest. At harvest, the adsorbed HM contents in the soil and the HM contents in the plant shoots are determined. The adsorbed HM contents in the soil are then related to the HM contents in the plant shoots by a log-log linear relationship that can then be used to estimate the phytoextraction duration of a specific HM-polluted soil. This article describes and evaluates the merits of such a feasibility experiment. Potential drawbacks regarding the accuracy of the described approach are discussed, and a greenhouse-field extrapolation procedure is proposed.
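
    The fitted log-log relationship can be turned into a rough duration estimate by iterating one cropping cycle at a time until the soil reaches its target content; the sketch below is purely illustrative, with made-up coefficients, yields and soil mass.

```python
import math

def phytoextraction_years(c_soil, c_target, a, b, shoot_yield_kg, soil_mass_kg, max_cycles=500):
    """Iterate annual cropping cycles until the soil HM content reaches the target.
    Assumed fit from the feasibility test: log10(shoot HM, mg/kg) = a + b * log10(soil HM, mg/kg)."""
    cycles = 0
    while c_soil > c_target and cycles < max_cycles:
        c_shoot = 10 ** (a + b * math.log10(c_soil))    # mg HM per kg shoot dry matter
        removed_mg = c_shoot * shoot_yield_kg           # mg HM exported in one harvest
        c_soil -= removed_mg / soil_mass_kg             # mg/kg drop in the plough layer
        cycles += 1
    return cycles

# Hypothetical: 12 -> 3 mg/kg Cd, fitted a=0.5, b=0.8, 10 t/ha shoots, 2,000 t/ha soil layer.
print(phytoextraction_years(12.0, 3.0, 0.5, 0.8, 10_000.0, 2_000_000.0))
```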

  12. Transport properties in a monolayer graphene modulated by the realistic magnetic field and the Schottky metal stripe

    NASA Astrophysics Data System (ADS)

    Lu, Jian-Duo; Li, Yun-Bao; Liu, Hong-Yu; Peng, Shun-Jin; Zhao, Fei-Xiang

    2016-09-01

    Based on the transfer-matrix method, a systematic investigation of electron transport properties is done in a monolayer graphene modulated by the realistic magnetic field and the Schottky metal stripe. The strong dependence of the electron transmission and the conductance on the incident angle of carriers is clearly seen. The height, position as well as width of the barrier also play an important role on the electron transport properties. These interesting results are very useful for understanding the tunneling mechanism in the monolayer graphene and helpful for designing the graphene-based electrical device modulated by the realistic magnetic field and the electrical barrier.

  13. Assessing Outcomes of a Realistic Major Preview in an Introductory Sport Management Course

    ERIC Educational Resources Information Center

    Pierce, David; Wanless, Elizabeth; Johnson, James

    2014-01-01

    This paper assessed the outcomes of a field experience assignment (FEA) in an introductory sport management course designed as a realistic major preview. Student learning outcomes assessed were commitment to the major, intent to pursue the major, expectation of a career in sports, and perceived preparation for a career in sports. A…

  14. A realist evaluation of community-based participatory research: partnership synergy, trust building and related ripple effects.

    PubMed

    Jagosh, Justin; Bush, Paula L; Salsberg, Jon; Macaulay, Ann C; Greenhalgh, Trish; Wong, Geoff; Cargo, Margaret; Green, Lawrence W; Herbert, Carol P; Pluye, Pierre

    2015-07-30

    Community-Based Participatory Research (CBPR) is an approach in which researchers and community stakeholders form equitable partnerships to tackle issues related to community health improvement and knowledge production. Our 2012 realist review of CBPR outcomes reported long-term effects that were touched upon but not fully explained in the retained literature. To further explore such effects, interviews were conducted with academic and community partners of partnerships retained in the review. Realist methodology was used to increase the understanding of what supports partnership synergy in successful long-term CBPR partnerships, and to further document how equitable partnerships can result in numerous benefits including the sustainability of relationships, research and solutions. Building on our previous realist review of CBPR, we contacted the authors of longitudinal studies of academic-community partnerships retained in the review. Twenty-four participants (community members and researchers) from 11 partnerships were interviewed. Realist logic of analysis was used, involving middle-range theory, context-mechanism-outcome configuration (CMOcs) and the concept of the 'ripple effect'. The analysis supports the central importance of developing and strengthening partnership synergy through trust. The ripple effect concept in conjunction with CMOcs showed that a sense of trust amongst CBPR members was a prominent mechanism leading to partnership sustainability. This in turn resulted in population-level outcomes including: (a) sustaining collaborative efforts toward health improvement; (b) generating spin-off projects; and (c) achieving systemic transformations. These results add to other studies on improving the science of CBPR in partnerships with a high level of power-sharing and co-governance. Our results suggest sustaining CBPR and achieving unanticipated benefits likely depend on trust-related mechanisms and a continuing commitment to power-sharing. These

  15. Evaluation of Wavelet Denoising Methods for Small-Scale Joint Roughness Estimation Using Terrestrial Laser Scanning

    NASA Astrophysics Data System (ADS)

    Bitenc, M.; Kieffer, D. S.; Khoshelham, K.

    2015-08-01

    The precision of Terrestrial Laser Scanning (TLS) data depends mainly on the inherent random range error, which hinders extraction of small details from TLS measurements. New post-processing algorithms have been developed that reduce or eliminate the noise and therefore enable modelling details at a smaller scale than one would traditionally expect. The aim of this research is to find the optimum denoising method such that the corrected TLS data provide a reliable estimation of small-scale rock joint roughness. Two wavelet-based denoising methods are considered, namely the Discrete Wavelet Transform (DWT) and the Stationary Wavelet Transform (SWT), in combination with different thresholding procedures. The question is which technique provides more accurate roughness estimates, considering (i) the wavelet transform (SWT or DWT), (ii) the thresholding method (fixed-form or penalised low) and (iii) the thresholding mode (soft or hard). The performance of the denoising methods is tested by two analyses, namely method noise and method sensitivity to noise. The reference data are precise Advanced TOpometric Sensor (ATOS) measurements obtained on a 20 × 30 cm rock joint sample, which, for the second analysis, are corrupted by different levels of noise. With such controlled noise-level experiments it is possible to evaluate the methods' performance for different amounts of noise, which might be present in TLS data. Qualitative visual checks of denoised surfaces and quantitative parameters such as grid height and roughness are considered in a comparative analysis of denoising methods. Results indicate that the preferred method for realistic roughness estimation is DWT with penalised-low hard thresholding.
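
    The DWT branch of the comparison, with soft or hard thresholding of the detail coefficients, looks roughly like the following sketch (using PyWavelets and a fixed, universal-style threshold); the SWT branch and the penalised-low rule are analogous but omitted here.

```python
import numpy as np
import pywt

def dwt_denoise(profile, wavelet="db4", level=4, mode="hard"):
    """Denoise a 1-D roughness profile with the discrete wavelet transform.
    Detail coefficients are thresholded (soft or hard) with a fixed, universal-style threshold."""
    coeffs = pywt.wavedec(profile, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise scale from the finest details
    thr = sigma * np.sqrt(2.0 * np.log(len(profile)))     # fixed-form ("universal") threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode=mode) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(profile)]

# Hypothetical noisy profile: smooth undulation plus TLS-like range noise.
x = np.linspace(0, 1, 512)
profile = 0.5 * np.sin(6 * np.pi * x) + 0.1 * np.random.default_rng(2).standard_normal(512)
print(dwt_denoise(profile, mode="soft").shape)
```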

  16. Realistic absorption coefficient of ultrathin films

    NASA Astrophysics Data System (ADS)

    Cesaria, M.; Caricato, A. P.; Martino, M.

    2012-10-01

    A theoretical algorithm and an experimental procedure are discussed for a new route to determining the absorption/scattering properties of thin films deposited on transparent substrates. Notably, the non-measurable contribution of the film-substrate interface is inherently accounted for. While the experimental procedure exploits only measurable spectra combined according to a very simple algorithm, the theoretical derivation does not require numerical handling of the acquired spectra or any assumption on the film homogeneity and substrate thickness. The film absorption response is estimated by subtracting the measured absorption spectrum of the bare substrate from that of the film-on-substrate structure, but in a non-straightforward way. In fact, an assumption about the absorption profile of the overall structure is introduced, together with a corrective factor accounting for the relative film-to-substrate thickness. The method is tested on films of a well-known material (ITO) as a function of the film structural quality and the influence of the film-substrate interface, both deliberately changed by thickness tuning and doping. The results are fully consistent with information obtained by standard optical analysis and with band gap values reported in the literature. Additionally, comparison with a conventional method demonstrates that our route is generally more accurate, even though it is particularly suited to very thin films.

  17. Estimating Building Age with 3d GIS

    NASA Astrophysics Data System (ADS)

    Biljecki, F.; Sindram, M.

    2017-10-01

    Building datasets (e.g. footprints in OpenStreetMap and 3D city models) are becoming increasingly available worldwide. However, the thematic (attribute) aspect is not always given attention, as many of such datasets are lacking in completeness of attributes. A prominent attribute of buildings is the year of construction, which is useful for some applications, but its availability may be scarce. This paper explores the potential of estimating the year of construction (or age) of buildings from other attributes using random forest regression. The developed method has a two-fold benefit: enriching datasets and quality control (verification of existing attributes). Experiments are carried out on a semantically rich LOD1 dataset of Rotterdam in the Netherlands using 9 attributes. The results are mixed: the accuracy in the estimation of building age depends on the available information used in the regression model. In the best scenario we have achieved predictions with an RMSE of 11 years, but in more realistic situations with limited knowledge about buildings the error is much larger (RMSE = 26 years). Hence the main conclusion of the paper is that inferring building age with 3D city models is possible to a certain extent because it reveals the approximate period of construction, but precise estimations remain a difficult task.
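
    The regression set-up described above is standard; a minimal scikit-learn sketch with synthetic stand-in attributes is shown below (the study itself uses 9 attributes from a semantically rich LOD1 model of Rotterdam).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic stand-ins for building attributes (e.g. footprint area, height, number of units).
rng = np.random.default_rng(3)
X = rng.uniform(size=(2000, 3)) * [500.0, 30.0, 20.0]
year = 1900 + 0.1 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(0, 10, 2000)  # fake construction years

X_tr, X_te, y_tr, y_te = train_test_split(X, year, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
print(round(rmse, 1), "years RMSE on the synthetic data")
```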

  18. Transforming the patient care environment with Lean Six Sigma and realistic evaluation.

    PubMed

    Black, Jason

    2009-01-01

    Lean Six Sigma (LSS) is a structured methodology for transforming processes, but it does not fully consider the complex social interactions that cause processes to form in hospital organizations. By combining LSS implementations with the concept of Realistic Evaluation, a methodology that promotes change by assessing and considering the individual characteristics of an organization's social environment, successful and sustainable process improvement is more likely.

  19. A Critical Approach to School Mathematical Knowledge: The Case of "Realistic" Problems in Greek Primary School Textbooks for Seven-Year-Old Pupils

    ERIC Educational Resources Information Center

    Zacharos, Konstantinos; Koustourakis, Gerassimos

    2011-01-01

    The reference contexts that accompany the "realistic" problems chosen for teaching mathematical concepts in the first school grades play a major educational role. However, choosing "realistic" problems in teaching is a complex process that must take into account various pedagogical, sociological and psychological parameters.…

  20. Precision and accuracy of age estimates obtained from anal fin spines, dorsal fin spines, and sagittal otoliths for known-age largemouth bass

    USGS Publications Warehouse

    Klein, Zachary B.; Bonvechio, Timothy F.; Bowen, Bryant R.; Quist, Michael C.

    2017-01-01

    Sagittal otoliths are the preferred aging structure for Micropterus spp. (black basses) in North America because of the accurate and precise results produced. Typically, fisheries managers are hesitant to use lethal aging techniques (e.g., otoliths) to age rare species, trophy-size fish, or when sampling in small impoundments where populations are small. Therefore, we sought to evaluate the precision and accuracy of 2 non-lethal aging structures (i.e., anal fin spines, dorsal fin spines) in comparison to that of sagittal otoliths from known-age Micropterus salmoides (Largemouth Bass; n = 87) collected from the Ocmulgee Public Fishing Area, GA. Sagittal otoliths exhibited the highest concordance with true ages of all structures evaluated (coefficient of variation = 1.2; percent agreement = 91.9). Similarly, the low coefficient of variation (0.0) and high between-reader agreement (100%) indicate that age estimates obtained from sagittal otoliths were the most precise. Relatively high agreement between readers for anal fin spines (84%) and dorsal fin spines (81%) suggested the structures were relatively precise. However, age estimates from anal fin spines and dorsal fin spines exhibited low concordance with true ages. Although use of sagittal otoliths is a lethal technique, this method will likely remain the standard for aging Largemouth Bass and other similar black bass species.
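
    The two precision metrics reported above, the coefficient of variation and percent agreement between readers, are straightforward to reproduce; the snippet below uses hypothetical paired age estimates rather than the study's data.

```python
import numpy as np

def ageing_precision(reader1, reader2):
    """Mean coefficient of variation (Chang-style) and percent agreement
    between two sets of age estimates for the same fish."""
    ages = np.column_stack([reader1, reader2]).astype(float)
    mean_age = ages.mean(axis=1)
    cv_per_fish = 100.0 * ages.std(axis=1, ddof=1) / mean_age
    percent_agreement = 100.0 * np.mean(reader1 == reader2)
    return cv_per_fish.mean(), percent_agreement

r1 = np.array([2, 3, 3, 4, 5, 2, 6])   # hypothetical otolith ages, reader 1
r2 = np.array([2, 3, 4, 4, 5, 2, 6])   # hypothetical otolith ages, reader 2
print(ageing_precision(r1, r2))
```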