Sample records for good initial estimates

  1. Accurate Initial State Estimation in a Monocular Visual–Inertial SLAM System

    PubMed Central

    Chen, Jing; Zhou, Zixiang; Leng, Zhen; Fan, Lei

    2018-01-01

    The fusion of monocular visual and inertial cues has become popular in robotics, unmanned vehicles and augmented reality fields. Recent results have shown that optimization-based fusion strategies outperform filtering strategies. Robust state estimation is the core capability for optimization-based visual–inertial Simultaneous Localization and Mapping (SLAM) systems. As a result of the nonlinearity of visual–inertial systems, the performance heavily relies on the accuracy of the initial values (visual scale, gravity, velocity and Inertial Measurement Unit (IMU) biases). Therefore, this paper proposes a more accurate initial state estimation method. On the basis of the known gravity magnitude, we propose an approach to refine the estimated gravity vector by optimizing the two-dimensional (2D) error state on its tangent space, and then estimate the accelerometer bias separately, since it is difficult to distinguish from gravity under small rotations. Additionally, we propose an automatic termination criterion to determine when the initialization is successful. Once the initial state estimation converges, the initial estimated values are used to launch the nonlinear tightly coupled visual–inertial SLAM system. We have tested our approaches with the public EuRoC dataset. Experimental results show that the proposed methods achieve good initial state estimation, that the gravity refinement approach efficiently speeds up the convergence of the estimated gravity vector, and that the termination criterion performs well. PMID:29419751
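
    The gravity-refinement step lends itself to a compact illustration. The sketch below is a loose Python analogue, not the authors' implementation: it runs Gauss-Newton over a two-dimensional error state on the tangent space of the gravity vector while pinning the magnitude to the known value, and a toy residual stands in for the real visual-inertial alignment residuals. All names and values are illustrative assumptions.

    ```python
    import numpy as np

    G_MAG = 9.81    # known gravity magnitude (m/s^2)

    def tangent_basis(g):
        # Two unit vectors spanning the plane orthogonal to g.
        g = g / np.linalg.norm(g)
        ref = np.array([1.0, 0.0, 0.0]) if abs(g[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        b1 = np.cross(g, ref)
        b1 /= np.linalg.norm(b1)
        return b1, np.cross(g, b1)

    def refine_gravity(g_init, residual, iters=10, eps=1e-6):
        # Gauss-Newton over the 2D error state (w1, w2) on the tangent space,
        # keeping |g| fixed at the known magnitude throughout.
        g = G_MAG * np.asarray(g_init, float) / np.linalg.norm(g_init)
        for _ in range(iters):
            b1, b2 = tangent_basis(g)

            def perturbed(w):
                p = g + w[0] * b1 + w[1] * b2
                return G_MAG * p / np.linalg.norm(p)

            r0 = residual(g)
            # Finite-difference Jacobian of the residual w.r.t. (w1, w2).
            J = np.column_stack([(residual(perturbed([eps, 0.0])) - r0) / eps,
                                 (residual(perturbed([0.0, eps])) - r0) / eps])
            dw, *_ = np.linalg.lstsq(J, -r0, rcond=None)
            g = perturbed(dw)
        return g

    # Toy residual: mismatch against a 'true' gravity implied by synthetic data.
    g_true = G_MAG * np.array([0.05, -0.02, -1.0]) / np.linalg.norm([0.05, -0.02, -1.0])
    print(refine_gravity([0.0, 0.0, -1.0], lambda g: g - g_true))
    ```

    Because the magnitude is fixed, only two parameters are free, which is what keeps the refinement well conditioned even before the accelerometer bias is known.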

  2. ROI Analysis of the System Architecture Virtual Integration Initiative

    DTIC Science & Technology

    2018-04-01

    The ROI analysis uses conservative estimates of costs and benefits, especially for those parameters that have a proven, strong correlation to overall...formula: • In Section 3, we discuss the exponential growth of avionics software systems in terms of SLOC by analyzing the historical data to correlate ...which implies that the system has good structure (high cohesion, low coupling), good application clarity (good correlation between program and

  3. On curve and surface stretching in turbulent flow

    NASA Technical Reports Server (NTRS)

    Etemadi, Nassrollah

    1989-01-01

    Cocke (1969) proved that in incompressible, isotropic turbulence the average material line (material surface) elements increase in comparison with their initial values. Good estimates of how much they increase in terms of the eigenvalues of the Green deformation tensor were rigorously obtained.

  4. The MAP Spacecraft Angular State Estimation After Sensor Failure

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, Richard R.

    2003-01-01

    This work describes two algorithms for computing the angular rate and attitude in case of a gyro and a Star Tracker failure in the Microwave Anisotropy Probe (MAP) satellite, which was placed in the L2 parking point from where it collects data to determine the origin of the universe. The nature of the problem is described, two algorithms are suggested, an observability study is carried out and real MAP data are used to determine the merit of the algorithms. It is shown that one of the algorithms yields a good estimate of the rates but not of the attitude whereas the other algorithm yields a good estimate of the rate as well as two of the three attitude angles. The estimation of the third angle depends on the initial state estimate. There is a contradiction between this result and the outcome of the observability analysis. An explanation of this contradiction is given in the paper. Although this work treats a particular spacecraft, the conclusions have far-reaching consequences.

  5. The Effect of Sensor Failure on the Attitude and Rate Estimation of MAP Spacecraft

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, Richard R.

    2003-01-01

    This work describes two algorithms for computing the angular rate and attitude in case of a gyro and a Star Tracker failure in the Microwave Anisotropy Probe (MAP) satellite, which was placed in the L2 parking point from where it collects data to determine the origin of the universe. The nature of the problem is described, two algorithms are suggested, an observability study is carried out and real MAP data are used to determine the merit of the algorithms. It is shown that one of the algorithms yields a good estimate of the rates but not of the attitude whereas the other algorithm yields a good estimate of the rate as well as two of the three attitude angles. The estimation of the third angle depends on the initial state estimate. There is a contradiction between this result and the outcome of the observability analysis. An explanation of this contradiction is given in the paper. Although this work treats a particular spacecraft, its conclusions are more general.

  6. Levels-of-growing-stock cooperative study in Douglas-fir: report no. 09—Some comparisons of DFSIM estimates with growth in the levels-of-growing-stock study.

    Treesearch

    Robert O. Curtis

    1987-01-01

    Initial stand statistics for the levels-of-growing-stock study installations were projected by the Douglas-fir stand simulation program (DFSIM) over the available periods of observation. Estimates were compared with observed volume and basal area growth, diameter change, and mortality. Overall agreement was reasonably good, although results indicate some biases and a...

  7. Development of FWIGPR, an open-source package for full-waveform inversion of common-offset GPR data

    NASA Astrophysics Data System (ADS)

    Jazayeri, S.; Kruse, S.

    2017-12-01

    We introduce a package for full-waveform inversion (FWI) of Ground Penetrating Radar (GPR) data based on a combination of open-source programs. The FWI requires a good starting model, based on direct knowledge of field conditions or on traditional ray-based inversion methods. With a good starting model, the FWI can improve the resolution of selected subsurface features. The package will be made available for general use in educational and research activities. The FWIGPR package consists of four main components: 3D-to-2D data conversion, source wavelet estimation, forward modeling, and inversion. (These four components additionally require the development, by the user, of a good starting model.) A major challenge with GPR data is the unknown form of the waveform emitted by the transmitter held close to the ground surface. We apply a blind deconvolution method to estimate the source wavelet, based on a sparsity assumption about the reflectivity series of the subsurface model (Gholami and Sacchi 2012). The estimated wavelet is deconvolved from the data, yielding the sparsest reflectivity series with the fewest reflectors. The gprMax code (www.gprmax.com) is used as the forward modeling tool and the PEST parameter estimation package (www.pesthomepage.com) for the inversion. To reduce computation time, the field data are converted to an effective 2D equivalent, and the gprMax code can be run in 2D mode. In the first step, the user must create a good starting model of the data, presumably using ray-based methods. This estimated model is introduced to the FWI process as an initial model. Next, the 3D data are converted to 2D, and then the user estimates the source wavelet that best fits the observed data under the sparsity assumption on the earth's response. Last, PEST runs gprMax with the initial model, calculates the misfit between the synthetic and observed data, and, using an iterative algorithm that calls gprMax several times in each iteration, finds successive models that better fit the data. To gauge whether the iterative process has arrived at a local or a global minimum, the process can be repeated with a range of starting models. Tests have shown that this package can successfully improve estimates of selected subsurface model parameters for simple synthetic and real data. Ongoing research will focus on FWI of more complex scenarios.
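
    A toy inversion loop conveys the workflow: a cheap two-reflector convolution model stands in for gprMax, and SciPy's least-squares solver stands in for PEST. The wavelet, model parameterization, and numbers below are illustrative assumptions, not part of the FWIGPR package.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def ricker(t, f0):
        # Ricker wavelet (t in ns, f0 in GHz), a common stand-in for the
        # unknown GPR source wavelet.
        a = (np.pi * f0 * t) ** 2
        return (1.0 - 2.0 * a) * np.exp(-a)

    t = np.linspace(0.0, 40.0, 400)   # 40 ns trace

    def forward(m):
        # Toy forward model replacing the gprMax run: two reflectors with
        # arrival times m[0], m[1] (ns) and amplitudes m[2], m[3].
        return m[2] * ricker(t - m[0], 0.4) + m[3] * ricker(t - m[1], 0.4)

    rng = np.random.default_rng(0)
    d_obs = forward([12.0, 25.0, 1.0, -0.6]) + 0.01 * rng.standard_normal(t.size)

    m0 = [11.0, 26.0, 0.8, -0.4]      # 'ray-based' starting model, close to truth
    fit = least_squares(lambda m: forward(m) - d_obs, m0)
    print(np.round(fit.x, 2))         # recovers ~[12, 25, 1.0, -0.6]
    ```

    If the starting arrival times are off by more than about half a wavelet period, the solver lands in a local minimum (cycle skipping), which is why repeating the inversion from a range of starting models, as the abstract suggests, is good practice.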

  8. Reciprocal Sliding Friction Model for an Electro-Deposited Coating and Its Parameter Estimation Using Markov Chain Monte Carlo Method

    PubMed Central

    Kim, Kyungmok; Lee, Jaewook

    2016-01-01

    This paper describes a sliding friction model for an electro-deposited coating. Reciprocating sliding tests using a ball-on-flat-plate test apparatus are performed to determine the evolution of the kinetic friction coefficient. The evolution of the friction coefficient is classified into the initial running-in period, steady-state sliding, and the transition to higher friction. The friction coefficient during the initial running-in period and steady-state sliding is expressed as a simple linear function. The friction coefficient in the transition to higher friction is described with a mathematical model derived from a Kachanov-type damage law. The model parameters are then estimated using the Markov Chain Monte Carlo (MCMC) approach. The friction coefficients estimated with the MCMC approach are found to be in good agreement with the measured ones. PMID:28773359

  9. Automatic selection of landmarks in T1-weighted head MRI with regression forests for image registration initialization.

    PubMed

    Wang, Jianing; Liu, Yuan; Noble, Jack H; Dawant, Benoit M

    2017-10-01

    Medical image registration establishes a correspondence between images of biological structures, and it is at the core of many applications. Commonly used deformable image registration methods depend on a good preregistration initialization. We develop a learning-based method to automatically find a set of robust landmarks in three-dimensional MR image volumes of the head. These landmarks are then used to compute a thin plate spline-based initialization transformation. The process involves two steps: (1) identifying a set of landmarks that can be reliably localized in the images and (2) selecting among them the subset that leads to a good initial transformation. To validate our method, we use it to initialize five well-established deformable registration algorithms that are subsequently used to register an atlas to MR images of the head. We compare our proposed initialization method with a standard approach that involves estimating an affine transformation with an intensity-based approach. We show that for all five registration algorithms the final registration results are statistically better when they are initialized with the method that we propose than when a standard approach is used. The technique that we propose is generic and could be used to initialize nonrigid registration algorithms for other applications.
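
    Assuming matched landmark pairs are already in hand, the thin-plate-spline initialization step can be sketched with SciPy's RBFInterpolator. The landmark coordinates below are synthetic, and the paper's regression-forest localization is not reproduced.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Matched landmarks (x, y, z) in atlas and subject space; in practice these
    # come from the learned landmark localization. Values here are synthetic.
    atlas = np.array([[30, 40, 20], [80, 45, 22], [55, 90, 35],
                      [50, 60, 60], [25, 70, 45], [75, 75, 50]], float)
    subject = atlas + np.array([2.0, -3.0, 1.5]) \
              + 0.05 * (atlas - atlas.mean(axis=0))   # synthetic shift + scaling

    # One smooth thin-plate-spline map per output coordinate, atlas -> subject.
    tps = RBFInterpolator(atlas, subject, kernel='thin_plate_spline')

    # Warp atlas-space points to initialize the deformable registration.
    print(tps(np.array([[40.0, 55.0, 30.0], [60.0, 70.0, 40.0]])))
    ```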

  10. Hyperspectral data discrimination methods

    NASA Astrophysics Data System (ADS)

    Casasent, David P.; Chen, Xuewen

    2000-12-01

    Hyperspectral data provide spectral response information that gives a detailed chemical, moisture, and other description of the constituent parts of an item. These new sensor data are useful in USDA product inspection. However, such data introduce problems such as the curse of dimensionality, the need to reduce the number of features used to accommodate realistic small training set sizes, and the need to employ discriminatory features and still achieve good generalization (comparable training and test set performance). Several two-step methods are compared to a new and preferable single-step spectral decomposition algorithm. Initial results on hyperspectral data for good/bad almonds and for good/bad (aflatoxin-infested) corn kernels are presented. The hyperspectral application addressed differs greatly from prior USDA work (PLS), in which the level of a specific channel constituent in food was estimated. A validation set (separate from the test set) is used in selecting algorithm parameters. Threshold parameters are varied to select the best Pc operating point. Initial results show that nonlinear features yield improved performance.

  11. A novel fluorescence microscopy approach to estimate quality loss of stored fruit fillings as a result of browning.

    PubMed

    Cropotova, Janna; Tylewicz, Urszula; Cocci, Emiliano; Romani, Santina; Dalla Rosa, Marco

    2016-03-01

    The aim of the present study was to estimate the quality deterioration of apple fillings during storage. Moreover, the potential of a novel time-saving and non-invasive method based on fluorescence microscopy for promptly detecting the initiation of non-enzymatic browning in fruit fillings was investigated. Apple filling samples were obtained by mixing different quantities of fruit and stabilizing agents (inulin, pectin and gellan gum), thermally processed and stored for 6 months. The preservation of antioxidant capacity (determined by the DPPH method) in apple fillings was indirectly correlated with the decrease in total polyphenol content, which varied from 34±22 to 56±17%, and the concomitant accumulation of 5-hydroxymethylfurfural (HMF), ranging from 3.4±0.1 to 8±1 mg/kg in comparison to the initial apple puree values. The mean intensity of the fluorescence emission spectra of the apple filling samples and the initial apple puree was highly correlated (R(2)>0.95) with the HMF content, showing the good potential of the fluorescence microscopy method to estimate non-enzymatic browning.

  12. Pre- and postprocessing techniques for determining goodness of computational meshes

    NASA Technical Reports Server (NTRS)

    Oden, J. Tinsley; Westermann, T.; Bass, J. M.

    1993-01-01

    Research in error estimation, mesh conditioning, and solution enhancement for finite element, finite difference, and finite volume methods has been incorporated into AUDITOR, a modern, user-friendly code, which operates on 2D and 3D unstructured neutral files to improve the accuracy and reliability of computational results. Residual error estimation capabilities provide local and global estimates of solution error in the energy norm. Higher order results for derived quantities may be extracted from initial solutions. Within the X-MOTIF graphical user interface, extensive visualization capabilities support critical evaluation of results in linear elasticity, steady state heat transfer, and both compressible and incompressible fluid dynamics.

  13. Identification and characterization of kidney transplants with good glomerular filtration rate at 1 year but subsequent progressive loss of renal function.

    PubMed

    Park, Walter D; Larson, Timothy S; Griffin, Matthew D; Stegall, Mark D

    2012-11-15

    After the first year after kidney transplantation, 3% to 5% of grafts fail each year, but detailed studies of how grafts progress to failure are lacking. This study aimed to analyze the functional stability of kidney transplants between 1 and 5 years after transplantation and to identify initially well-functioning grafts with progressive decline in allograft function. The study included 788 adult conventional kidney transplants performed at the Mayo Clinic Rochester between January 2000 and December 2005 with a minimum graft survival and follow-up of 2.6 years. The Modification of Diet in Renal Disease equation for estimating glomerular filtration rate (eGFR(MDRD)) was used to calculate the slope of renal function over time using all available serum creatinine values between 1 and 5 years after transplantation. Most transplants demonstrated good function (eGFR(MDRD) ≥40 mL/min) at 1 year with a positive eGFR(MDRD) slope between 1 and 5 years after transplantation. However, a subset of grafts with 1-year eGFR(MDRD) ≥40 mL/min exhibited a strongly negative eGFR(MDRD) slope between 1 and 5 years, suggestive of progressive loss of graft function. Forty-one percent of this subset reached graft failure during follow-up, accounting for 69% of allograft failures occurring after 2.5 years after transplantation. This pattern of progressive decline in estimated glomerular filtration rate despite good early function was associated with, but not fully attributable to, factors suggestive of enhanced antidonor immunity. Longitudinal analysis of serial estimated glomerular filtration rate measurements identifies initially well-functioning kidney transplants at high risk for subsequent graft loss. For this subset, further studies are needed to identify modifiable causes of functional decline.
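
    As a hedged illustration of the slope analysis (not the study's exact covariate handling), the sketch below computes eGFR with the common 4-variable, IDMS-traceable form of the MDRD equation from an invented serial creatinine series and fits a linear slope.

    ```python
    import numpy as np

    def egfr_mdrd(scr_mg_dl, age, female=False, black=False):
        # 4-variable MDRD study equation (IDMS-traceable form), mL/min/1.73 m^2.
        egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
        if female:
            egfr *= 0.742
        if black:
            egfr *= 1.212
        return egfr

    # Illustrative serial creatinine values between 1 and 5 years post-transplant.
    years = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])
    scr = np.array([1.3, 1.3, 1.4, 1.5, 1.6, 1.8, 1.9, 2.1, 2.3])
    egfr = egfr_mdrd(scr, age=50)

    # A strongly negative slope despite good 1-year function flags the
    # high-risk subset described in the abstract.
    slope = np.polyfit(years, egfr, 1)[0]
    print(f"1-year eGFR {egfr[0]:.0f}, slope {slope:.1f} per year")
    ```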

  14. An approach for estimating item sensitivity to within-person change over time: An illustration using the Alzheimer's Disease Assessment Scale-Cognitive subscale (ADAS-Cog).

    PubMed

    Dowling, N Maritza; Bolt, Daniel M; Deng, Sien

    2016-12-01

    When assessments are primarily used to measure change over time, it is important to evaluate items according to their sensitivity to change, specifically. Items that demonstrate good sensitivity to between-person differences at baseline may not show good sensitivity to change over time, and vice versa. In this study, we applied a longitudinal factor model of change to a widely used cognitive test designed to assess global cognitive status in dementia, and contrasted the relative sensitivity of items to change. Statistically nested models were estimated introducing distinct latent factors related to initial status differences between test-takers and within-person latent change across successive time points of measurement. Models were estimated using all available longitudinal item-level data from the Alzheimer's Disease Assessment Scale-Cognitive subscale, including participants representing the full spectrum of disease status who were enrolled in the multisite Alzheimer's Disease Neuroimaging Initiative. Five of the 13 Alzheimer's Disease Assessment Scale-Cognitive items demonstrated noticeably higher loadings with respect to sensitivity to change. Attending to performance change on only these 5 items yielded a clearer picture of cognitive decline more consistent with theoretical expectations in comparison to the full 13-item scale. Items that show good psychometric properties in cross-sectional studies are not necessarily the best items at measuring change over time, such as cognitive decline. Applications of the methodological approach described and illustrated in this study can advance our understanding regarding the types of items that best detect fine-grained early pathological changes in cognition.

  15. Adults with an epilepsy history fare significantly worse on positive mental and physical health than adults with other common chronic conditions-Estimates from the 2010 National Health Interview Survey and Patient Reported Outcome Measurement System (PROMIS) Global Health Scale.

    PubMed

    Kobau, Rosemarie; Cui, Wanjun; Zack, Matthew M

    2017-07-01

    Healthy People 2020, a national health promotion initiative, calls for increasing the proportion of U.S. adults who self-report good or better health. The Patient-Reported Outcomes Measurement Information System (PROMIS) Global Health Scale (GHS) was identified as a reliable and valid set of items of self-reported physical and mental health to monitor these two domains across the decade. The purpose of this study was to examine the percentage of adults with an epilepsy history who met the Healthy People 2020 target for self-reported good or better health and to compare these percentages to adults with a history of other common chronic conditions. Using the 2010 National Health Interview Survey, we estimated and compared the age-standardized prevalence of reporting good or better physical and mental health among adults with five selected chronic conditions: epilepsy, diabetes, heart disease, cancer, and hypertension. We examined response patterns for the physical and mental health scales among adults with these five conditions. The percentages of adults with epilepsy who reported good or better physical health (52%) or mental health (54%) were significantly below the Healthy People 2020 target estimate of 80% for both outcomes. Significantly smaller percentages of adults with an epilepsy history reported good or better physical health than adults with heart disease, cancer, or hypertension. Significantly smaller percentages of adults with an epilepsy history reported good or better mental health than adults with all four other conditions. Health and social service providers can implement and enhance existing evidence-based clinical interventions and public health programs and strategies shown to improve outcomes in epilepsy. These estimates can be used to assess improvements in the Healthy People 2020 Health-Related Quality of Life and Well-Being Objective throughout the decade.

  16. Estimation and Simulation of Slow Crack Growth Parameters from Constant Stress Rate Data

    NASA Technical Reports Server (NTRS)

    Salem, Jonathan A.; Weaver, Aaron S.

    2003-01-01

    Closed form, approximate functions for estimating the variances and degrees-of-freedom associated with the slow crack growth parameters n, D, B, and A(sup *) as measured using constant stress rate ('dynamic fatigue') testing were derived by using propagation of errors. Estimates made with the resulting functions and slow crack growth data for a sapphire window were compared to the results of Monte Carlo simulations. The functions for estimation of the variances of the parameters were derived both with and without logarithmic transformation of the initial slow crack growth equations. The transformation was performed to make the functions both more linear and more normal. Comparison of the Monte Carlo results and the closed-form expressions derived with propagation of errors indicated that linearization is not required for good estimates of the variances of parameters n and D by the propagation of errors method. However, good estimates of the variances of the parameters B and A(sup *) could only be made when the starting slow crack growth equation was transformed and the coefficients of variation of the input parameters were not too large. This was partially a result of the skewed distributions of B and A(sup *). Parametric variation of the input parameters was used to determine an acceptable range for using closed form approximate equations derived from propagation of errors.
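
    The propagation-of-errors idea can be illustrated on the usual dynamic-fatigue relation, log10(sigma_f) = log10(D) + log10(rate)/(n+1): fit the log-log line, then push the regression covariance through the nonlinear maps to n and D with a first-order (delta-method) expansion. The data below are invented, and the paper's closed-form variance expressions are not reproduced.

    ```python
    import numpy as np

    # Synthetic constant-stress-rate data: failure stress vs stress rate.
    rates = np.array([0.1, 1.0, 10.0, 100.0])            # MPa/s
    strengths = np.array([310.0, 335.0, 362.0, 390.0])   # MPa (illustrative)

    # log10(sigma_f) = log10(D) + log10(rate)/(n+1): linear in log-log space.
    x, y = np.log10(rates), np.log10(strengths)
    (slope, intercept), cov = np.polyfit(x, y, 1, cov=True)

    n = 1.0 / slope - 1.0
    D = 10.0 ** intercept

    # First-order propagation of errors from the regression covariance.
    var_n = cov[0, 0] / slope ** 4                # |dn/dslope| = 1/slope^2
    var_D = (D * np.log(10.0)) ** 2 * cov[1, 1]   # |dD/dintercept| = D ln(10)
    print(f"n = {n:.1f} +/- {np.sqrt(var_n):.1f}, "
          f"D = {D:.0f} +/- {np.sqrt(var_D):.0f}")
    ```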

  17. Image registration based on subpixel localization and Cauchy-Schwarz divergence

    NASA Astrophysics Data System (ADS)

    Ge, Yongxin; Yang, Dan; Zhang, Xiaohong; Lu, Jiwen

    2010-07-01

    We define a new matching metric, the corner Cauchy-Schwarz divergence (CCSD), and present a new approach based on the proposed CCSD and subpixel localization for image registration. First, we detect the corners in an image by a multiscale Harris operator and take them as initial interest points. Then, a subpixel localization technique is applied to determine the locations of the corners and eliminate the false and unstable corners. After that, the CCSD is applied to obtain the initial matching corners. Finally, we use random sample consensus to robustly estimate the parameters based on the initial matching. The experimental results demonstrate that the proposed algorithm has good performance in terms of both accuracy and efficiency.
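
    An OpenCV-based sketch of the overall pipeline (Harris corners, putative matching, robust estimation) is given below. OpenCV does not implement the CCSD metric, so pyramidal Lucas-Kanade tracking stands in for the matching stage, and the file names are placeholders.

    ```python
    import cv2
    import numpy as np

    # Placeholder file names for the fixed and moving images.
    img1 = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("moving.png", cv2.IMREAD_GRAYSCALE)

    # Harris corners as initial interest points; the paper adds multiscale
    # detection and subpixel refinement (cf. cv2.cornerSubPix).
    pts1 = cv2.goodFeaturesToTrack(img1, maxCorners=500, qualityLevel=0.01,
                                   minDistance=10, useHarrisDetector=True)

    # Pyramidal Lucas-Kanade tracking stands in for the CCSD-based matching.
    pts2, status, _err = cv2.calcOpticalFlowPyrLK(img1, img2, pts1, None)
    m1, m2 = pts1[status.ravel() == 1], pts2[status.ravel() == 1]

    # RANSAC robustly estimates the transform from the putative matches.
    H, inlier_mask = cv2.findHomography(m1, m2, cv2.RANSAC, 3.0)
    print(H)
    print(int(inlier_mask.sum()), "inliers")
    ```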

  18. Cosmological perturbation theory and the spherical collapse model - II. Non-Gaussian initial conditions

    NASA Astrophysics Data System (ADS)

    Gaztanaga, Enrique; Fosalba, Pablo

    1998-12-01

    In Paper I of this series, we introduced the spherical collapse (SC) approximation in Lagrangian space as a way of estimating the cumulants ξ_J of density fluctuations in cosmological perturbation theory (PT). Within this approximation, the dynamics is decoupled from the statistics of the initial conditions, so we are able to present here the cumulants for generic non-Gaussian initial conditions, which can be estimated to arbitrary order including the smoothing effects. The SC model turns out to recover the exact leading-order non-linear contributions up to terms involving non-local integrals of the J-point functions. We argue that for the hierarchical ratios S_J, these non-local terms are subdominant and tend to compensate each other. The resulting predictions show a non-trivial time evolution that can be used to discriminate between models of structure formation. We compare these analytic results with non-Gaussian N-body simulations, which turn out to be in very good agreement up to scales where σ ≲ 1.
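
    For readers unfamiliar with the notation, the hierarchical ratios are conventionally defined by scaling the smoothed J-point cumulants with powers of the variance:

    ```latex
    % Hierarchical ratios: smoothed J-point cumulants scaled by the variance.
    S_J \equiv \frac{\bar{\xi}_J}{\bar{\xi}_2^{\,J-1}}, \qquad J \ge 3,
    ```

    so that at leading (tree-level) order in perturbation theory the S_J are independent of the overall fluctuation amplitude.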

  19. Monitoring of Batch Industrial Crystallization with Growth, Nucleation, and Agglomeration. Part 2: Structure Design for State Estimation with Secondary Measurements

    PubMed Central

    2017-01-01

    This work investigates the design of alternative monitoring tools based on state estimators for industrial crystallization systems with nucleation, growth, and agglomeration kinetics. The estimation problem is regarded as a structure design problem where the estimation model and the set of innovated states have to be chosen; the estimator is driven by the available measurements of secondary variables. On the basis of Robust Exponential estimability arguments, it is found that the concentration is distinguishable with temperature and solid fraction measurements while the crystal size distribution (CSD) is not. Accordingly, a state estimator structure is selected such that (i) the concentration (and other distinguishable states) are innovated by means of the secondary measurements processed with the geometric estimator (GE), and (ii) the CSD is estimated by means of a rigorous model in open loop mode. The proposed estimator has been tested through simulations showing good performance in the case of mismatch in the initial conditions, parametric plant-model mismatch, and noisy measurements. PMID:28890604

  20. Monitoring of Batch Industrial Crystallization with Growth, Nucleation, and Agglomeration. Part 2: Structure Design for State Estimation with Secondary Measurements.

    PubMed

    Porru, Marcella; Özkan, Leyla

    2017-08-30

    This work investigates the design of alternative monitoring tools based on state estimators for industrial crystallization systems with nucleation, growth, and agglomeration kinetics. The estimation problem is regarded as a structure design problem where the estimation model and the set of innovated states have to be chosen; the estimator is driven by the available measurements of secondary variables. On the basis of Robust Exponential estimability arguments, it is found that the concentration is distinguishable with temperature and solid fraction measurements while the crystal size distribution (CSD) is not. Accordingly, a state estimator structure is selected such that (i) the concentration (and other distinguishable states) are innovated by means of the secondary measurements processed with the geometric estimator (GE), and (ii) the CSD is estimated by means of a rigorous model in open loop mode. The proposed estimator has been tested through simulations showing good performance in the case of mismatch in the initial conditions, parametric plant-model mismatch, and noisy measurements.

  1. Coordinates of features on the Galilean satellites

    NASA Technical Reports Server (NTRS)

    Davies, M. E.; Katayama, F. Y.

    1980-01-01

    The coordinate systems of each of the Galilean satellites are defined, and the coordinates of features seen in the Voyager pictures of these satellites are presented. The control nets of the satellites were computed by means of single-block analytical triangulations. The normal equations were solved by the conjugate iterative method, which is convenient and converges rapidly because the initial estimates of the parameters are very good.

  2. Cost and cost-effectiveness of computerized vs. in-person motivational interventions in the criminal justice system.

    PubMed

    Cowell, Alexander J; Zarkin, Gary A; Wedehase, Brendan J; Lerch, Jennifer; Walters, Scott T; Taxman, Faye S

    2018-04-01

    Although substance use is common among probationers in the United States, treatment initiation remains an ongoing problem. Among the explanations for low treatment initiation are that probationers are insufficiently motivated to seek treatment, and that probation staff have insufficient training and resources to use evidence-based strategies such as motivational interviewing. A web-based intervention based on motivational enhancement principles may address some of the challenges of initiating treatment but has not been tested to date in probation settings. The current study evaluated the cost-effectiveness of a computerized intervention, Motivational Assessment Program to Initiate Treatment (MAPIT), relative to face-to-face Motivational Interviewing (MI) and supervision as usual (SAU), delivered at the outset of probation. The intervention took place in probation departments in two U.S. cities. The baseline sample comprised 316 participants (MAPIT = 104, MI = 103, and SAU = 109), 90% (n = 285) of whom completed the 6-month follow-up. Costs were estimated from study records and time logs kept by interventionists. The effectiveness outcome was self-reported initiation into any treatment (formal or informal) within 2 and 6 months of the baseline interview. The cost-effectiveness analysis involved assessing dominance and computing incremental cost-effectiveness ratios and cost-effectiveness acceptability curves. Implementation costs were used in the base case of the cost-effectiveness analysis, which excludes both a hypothetical license fee to recoup development costs and startup costs. An intent-to-treat approach was taken. MAPIT cost $79.37 per participant, which was ~$55 lower than the MI cost of $134.27 per participant. Appointment reminders comprised a large proportion of the cost of the MAPIT and MI intervention arms. In the base case, relative to SAU, MAPIT cost $6.70 per percentage point increase in the probability of initiating treatment. If a decision-maker is willing to pay $15 or more to improve the probability of initiating treatment by 1%, estimates suggest she can be 70% confident that MAPIT is good value relative to SAU at the 2-month follow-up and 90% confident that MAPIT is good value at the 6-month follow-up. Web-based MAPIT may be good value compared to in-person delivered alternatives. This conclusion is qualified because the results are not robust to narrowing the outcome to initiating formal treatment only. Further work should explore ways to improve access to efficacious treatment in probation settings.
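
    The headline figure is an incremental cost-effectiveness ratio (ICER): incremental cost divided by incremental effectiveness. The sketch below reproduces only the arithmetic shape; the effectiveness gain is a hypothetical value chosen so the ratio matches the reported $6.70 under the assumption that SAU adds no incremental cost, and is not a number from the study.

    ```python
    # ICER = incremental cost / incremental effectiveness.
    cost_mapit, cost_sau = 79.37, 0.0   # SAU assumed to add no incremental cost
    effect_gain = 11.85                 # hypothetical percentage-point gain vs SAU

    icer = (cost_mapit - cost_sau) / effect_gain
    print(f"ICER: ${icer:.2f} per percentage-point gain in treatment initiation")
    ```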

  3. MAPPING INDUCED POLARIZATION WITH NATURAL ELECTROMAGNETIC FIELDS FOR EXPLORATION AND RESOURCES CHARACTERIZATION BY THE MINING INDUSTRY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edward Nichols

    2002-05-03

    In this quarter we continued the processing of the Safford IP survey data. The processing identified a time-shift problem between the sites that was caused by a GPS firmware error. A software procedure was developed to identify and correct the shift, and this was applied to the data. Preliminary estimates were made of the remote-referenced MT parameters, and an initial data quality assessment showed that the data quality was good for most of the line. The multi-site robust processing code of Egbert was linked to the new data and processing was initiated.

  4. State of Charge estimation of lithium ion battery based on extended Kalman filtering algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Feng, Yiming; Pan, Binbiao; Wan, Renzhuo; Wang, Jun

    2017-08-01

    Accurate estimation of state-of-charge (SOC) for lithium ion batteries is crucial for real-time diagnosis and prognosis in green energy vehicles. In this paper, a state space model of the battery based on the Thevenin model is adopted. A strategy for estimating the state of charge (SOC) based on the extended Kalman filter is presented, combined with the ampere-hour counting (AH) and open circuit voltage (OCV) methods. The comparison between simulation and experiments indicates that the model's performance matches well with that of the lithium ion battery. The extended Kalman filter algorithm maintains good accuracy, and is less dependent on its initial value, over the full range of SOC, which proves it suitable for online SOC estimation.
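
    A minimal sketch of the strategy the abstract outlines: ampere-hour counting drives the EKF prediction and an OCV-based terminal-voltage measurement drives the correction, here on a 1-RC Thevenin model with an illustrative linear OCV curve and invented parameters.

    ```python
    import numpy as np

    # 1-RC Thevenin model parameters (illustrative, not from the paper).
    Q = 2.0 * 3600.0                 # capacity (A s)
    R0, R1, C1 = 0.05, 0.03, 1000.0  # ohmic and RC-branch parameters
    dt = 1.0
    a = np.exp(-dt / (R1 * C1))

    def ocv(soc):                    # toy linear open-circuit-voltage curve
        return 3.2 + 0.7 * soc

    def ekf_soc(currents, voltages, x0, P0):
        # State x = [SOC, V_RC]; ampere-hour counting in the prediction,
        # OCV-based terminal-voltage measurement in the correction.
        x, P = np.asarray(x0, float), np.asarray(P0, float)
        F = np.array([[1.0, 0.0], [0.0, a]])
        Qn, Rn = np.diag([1e-10, 1e-8]), 1e-4
        out = []
        for i_k, v_k in zip(currents, voltages):
            x = np.array([x[0] - i_k * dt / Q, a * x[1] + R1 * (1.0 - a) * i_k])
            P = F @ P @ F.T + Qn
            H = np.array([[0.7, -1.0]])      # d(OCV)/dSOC = 0.7 for the toy curve
            y = v_k - (ocv(x[0]) - R0 * i_k - x[1])
            S = (H @ P @ H.T)[0, 0] + Rn
            K = (P @ H.T) / S
            x = x + K[:, 0] * y
            P = (np.eye(2) - K @ H) @ P
            out.append(x[0])
        return np.array(out)

    # 1 A constant discharge; deliberately wrong initial SOC guess of 0.5.
    n = 600
    true_soc = 0.9 - np.arange(n) * dt / Q
    v_meas = ocv(true_soc) - (R0 + R1) * 1.0   # steady-state terminal voltage
    soc = ekf_soc(np.ones(n), v_meas, x0=[0.5, 0.0], P0=np.eye(2) * 0.1)
    print(f"final estimate {soc[-1]:.3f} vs true {true_soc[-1]:.3f}")
    ```

    Starting the filter from a deliberately wrong SOC of 0.5 against a true value near 0.9 shows the weak dependence on the initial value that the abstract reports: the estimate converges within a few steps.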

  5. Fast auto-focus scheme based on optical defocus fitting model

    NASA Astrophysics Data System (ADS)

    Wang, Yeru; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting; Cen, Min

    2018-04-01

    An optical defocus fitting model-based (ODFM) auto-focus scheme is proposed. Starting from basic optical defocus principles, the optical defocus fitting model is derived to approximate the potential-focus position. With this accurate modelling, the proposed auto-focus scheme can make the stepping motor approach the focal plane more accurately and rapidly. Two fitting positions are first determined for an arbitrary initial stepping motor position. Three images (the initial image and two fitting images) at these positions are then collected to estimate the potential-focus position based on the proposed ODFM method. Around the estimated potential-focus position, two reference images are recorded. The auto-focus procedure is then completed by processing these two reference images and the potential-focus image to confirm the in-focus position using a contrast-based method. Experimental results show that the proposed scheme can complete auto-focus within only 5 to 7 steps, with good performance even under low-light conditions.

  6. Ultimate Longitudinal Strength of Composite Ship Hulls

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangming; Huang, Lingkai; Zhu, Libao; Tang, Yuhang; Wang, Anwen

    2017-01-01

    A simple analytical model to estimate the longitudinal strength of ship hulls in composite materials under buckling, material failure and ultimate collapse is presented in this paper. Ship hulls are regarded as assemblies of stiffened panels, which are idealized as groups of plate-stiffener combinations. The ultimate strain of the plate-stiffener combination is predicted under buckling or material failure with composite beam-column theory. The effects of the initial imperfection of the ship hull and the eccentricity of the load are included. The corresponding longitudinal strengths of the ship hull are derived in a straightforward manner. A longitudinally framed ship hull made of symmetrically stacked unidirectional plies under sagging is analyzed. The results indicate that the present analytical results are in good agreement with the FEM results. The initial deflection of the ship hull and the eccentricity of the load can dramatically reduce the bending capacity of the ship hull. The proposed formulations provide a simple but useful tool for longitudinal strength estimation in practical design.

  7. Joint Estimation of Source Range and Depth Using a Bottom-Deployed Vertical Line Array in Deep Water

    PubMed Central

    Li, Hui; Yang, Kunde; Duan, Rui; Lei, Zhixiong

    2017-01-01

    This paper presents a joint estimation method of source range and depth using a bottom-deployed vertical line array (VLA). The method utilizes the information on the arrival angle of direct (D) path in space domain and the interference characteristic of D and surface-reflected (SR) paths in frequency domain. The former is related to a ray tracing technique to backpropagate the rays and produces an ambiguity surface of source range. The latter utilizes Lloyd’s mirror principle to obtain an ambiguity surface of source depth. The acoustic transmission duct is the well-known reliable acoustic path (RAP). The ambiguity surface of the combined estimation is a dimensionless ad hoc function. Numerical efficiency and experimental verification show that the proposed method is a good candidate for initial coarse estimation of source position. PMID:28590442
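
    The depth cue the method exploits is Lloyd's mirror interference between the D and SR paths: the spacing of the interference nulls in frequency is set by the path-length difference, which in turn depends on the source depth. Below is a constant-sound-speed sketch with invented geometry, not the paper's processing chain.

    ```python
    import numpy as np

    c = 1500.0                                # sound speed (m/s), idealized constant
    r, z_s, z_r = 20_000.0, 50.0, 4_000.0     # range, source depth, receiver depth (m)

    R_d = np.hypot(r, z_r - z_s)              # direct path length
    R_sr = np.hypot(r, z_r + z_s)             # surface-reflected path (image source)

    f = np.linspace(50.0, 500.0, 2000)
    k = 2.0 * np.pi * f / c
    # The pressure-release sea surface flips the sign of the reflected arrival.
    p = np.exp(1j * k * R_d) / R_d - np.exp(1j * k * R_sr) / R_sr

    mag = np.abs(p)
    nulls = f[1:-1][(mag[1:-1] < mag[:-2]) & (mag[1:-1] < mag[2:])]
    print("first interference nulls (Hz):", np.round(nulls[:4], 1))
    print("predicted spacing c/(R_sr - R_d):", round(c / (R_sr - R_d), 1), "Hz")
    ```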

  8. Feed Safe: a multidisciplinary partnership approach results in a successful mobile application for breastfeeding mothers.

    PubMed

    White, Becky; White, James; Giglia, Roslyn; Tawia, Susan

    2016-05-30

    Issue addressed: Mobile applications are increasingly being used in health promotion initiatives. Although there is evidence that developing these mobile health applications in multidisciplinary teams is good practice, there is a gap in the literature with respect to evaluation of the process of this partnership model and how best to disseminate the application into the community. The aim of this paper is twofold, to describe the partnership model in which the Feed Safe application was developed and to investigate what worked in terms of dissemination. Methods: The process of working in partnership was measured using the VicHealth partnership analysis tool for health promotion. The dissemination strategy and reach of the application was measured using both automated analytics data and estimates of community-initiated promotion. Results: The combined average score from the partnership analysis tool was 138 out of a possible 175. A multipronged dissemination strategy led to good uptake of the application among Australian women. Conclusions: Multidisciplinary partnership models are important in the development of health promotion mobile applications. Recognising and utilising the skills of each partner organisation can help expand the reach of mobile health applications into the Australian population and aid in good uptake of health promotion resources. So what?: Developing mobile applications in multidisciplinary partnerships is good practice and can lead to wide community uptake of the health promotion resource.

  9. Autonomous optical navigation using nanosatellite-class instruments: a Mars approach case study

    NASA Astrophysics Data System (ADS)

    Enright, John; Jovanovic, Ilija; Kazemi, Laila; Zhang, Harry; Dzamba, Tom

    2018-02-01

    This paper examines the effectiveness of small star trackers for orbital estimation. Autonomous optical navigation has been used for some time to provide local estimates of orbital parameters during close approach to celestial bodies. These techniques have been used extensively on spacecraft dating back to the Voyager missions, but often rely on long exposures and large instrument apertures. Using a hyperbolic Mars approach as a reference mission, we present an EKF-based navigation filter suitable for nanosatellite missions. Observations of Mars and its moons allow the estimator to correct initial errors in both position and velocity. Our results show that nanosatellite-class star trackers can produce good quality navigation solutions with low position (<300 m) and velocity (<0.15 m/s) errors as the spacecraft approaches periapse.

  10. Flexural testing on carbon fibre laminates taking into account their different behaviour under tension and compression

    NASA Astrophysics Data System (ADS)

    Serna Moreno, M. C.; Romero Gutierrez, A.; Martínez Vicente, J. L.

    2016-07-01

    An analytical model has been derived for describing the results of three-point-bending tests in materials with different behaviour under tension and compression. The shift of the neutral plane and the damage initiation mode and its location have been defined. The validity of the equations has been checked by testing carbon fibre-reinforced polymers (CFRP), typically employed in different weight-critical applications. Both unidirectional and cross-ply laminates have been studied. The initial failure mode depends directly on the beam span-to-thickness ratio. Therefore, specimens with different thicknesses have been analysed to examine the damage initiation due to either the bending moment or the out-of-plane shear load. The damage initiation and evolution have been described experimentally by means of optical microscopy. The good agreement between the analytical estimations and the experimental results shows the validity of the analytical model presented.

  11. Simple estimation of linear 1+1 D tsunami run-up

    NASA Astrophysics Data System (ADS)

    Fuentes, M.; Campos, J. A.; Riquelme, S.

    2016-12-01

    An analytical expression is derived for the linear run-up of any given initial wave generated over a sloping bathymetry. Due to the simplicity of the linear formulation, complex transformations are unnecessary, because the shoreline motion is directly obtained in terms of the initial wave. This analytical result not only supports the invariance of the maximum run-up between the linear and non-linear theories, but also provides the time evolution of the shoreline motion and velocity. The results exhibit good agreement with the non-linear theory. The present formulation also allows computing the shoreline motion numerically from a customised initial waveform, including non-smooth functions. This is useful for numerical tests, laboratory experiments or realistic cases in which the initial disturbance might be retrieved from seismic data rather than from a theoretical model. It is also shown that the real case studied is consistent with the field observations.

  12. Maximum current density and beam brightness achievable by laser-driven electron sources

    NASA Astrophysics Data System (ADS)

    Filippetto, D.; Musumeci, P.; Zolotorev, M.; Stupakov, G.

    2014-02-01

    This paper discusses the extension of the Child-Langmuir law for the maximum achievable current density in electron guns to different electron beam aspect ratios. Using a simple model, we derive quantitative formulas in good agreement with simulation codes. The new scaling laws for the peak current density of temporally long and transversely narrow initial beam distributions can be used to estimate the maximum beam brightness and suggest new paths for injector optimization.
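
    For reference, the classical one-dimensional Child-Langmuir law that the paper generalizes gives the space-charge-limited current density between parallel plates at potential difference V and spacing d:

    ```latex
    J_{\mathrm{CL}} = \frac{4\,\varepsilon_0}{9}\sqrt{\frac{2e}{m_e}}\;\frac{V^{3/2}}{d^{2}}.
    ```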

  13. FracFit: A Robust Parameter Estimation Tool for Anomalous Transport Problems

    NASA Astrophysics Data System (ADS)

    Kelly, J. F.; Bolster, D.; Meerschaert, M. M.; Drummond, J. D.; Packman, A. I.

    2016-12-01

    Anomalous transport cannot be adequately described with classical Fickian advection-dispersion equations (ADE). Rather, fractional calculus models may be used, which capture non-Fickian behavior (e.g. skewness and power-law tails). FracFit is a robust parameter estimation tool based on space- and time-fractional models used to model anomalous transport. Currently, four fractional models are supported: 1) the space-fractional advection-dispersion equation (sFADE), 2) the time-fractional dispersion equation with drift (TFDE), 3) the fractional mobile-immobile equation (FMIE), and 4) the tempered fractional mobile-immobile equation (TFMIE); additional models may be added in the future. Model solutions using pulse initial conditions and continuous injections are evaluated using stable distribution PDFs and CDFs or subordination integrals. Parameter estimates are extracted from measured breakthrough curves (BTCs) using a weighted nonlinear least squares (WNLS) algorithm. Optimal weights for BTCs for pulse initial conditions and continuous injections are presented, facilitating the estimation of power-law tails. Two sample applications are analyzed: 1) continuous injection laboratory experiments using natural organic matter and 2) pulse injection BTCs in the Selke river. Model parameters are compared across models and goodness-of-fit metrics are presented, assisting model evaluation. The sFADE and time-fractional models are compared using space-time duality (Baeumer et al., 2009), which links the two paradigms.
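
    A sketch of the weighted nonlinear least-squares idea on a stable-distribution breakthrough-curve model, in the spirit of (but not identical to) FracFit: observation-proportional weights keep the low-concentration power-law tail from being swamped by the peak. The model choice, bounds, and data below are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import levy_stable

    def btc_model(t, alpha, mu, scale):
        # One-sided (beta = 1) stable PDF as an sFADE-like breakthrough model.
        return levy_stable.pdf(t, alpha, 1.0, loc=mu, scale=scale)

    t = np.linspace(0.5, 30.0, 120)
    rng = np.random.default_rng(1)
    c_obs = btc_model(t, 1.6, 5.0, 1.2) * (1.0 + 0.05 * rng.standard_normal(t.size))

    # WNLS: sigma proportional to the observation weights the power-law tail
    # as strongly as the peak (curve_fit minimizes sum(((f - y) / sigma)**2)).
    sigma = np.maximum(c_obs, 1e-4)
    popt, _ = curve_fit(btc_model, t, c_obs, p0=[1.8, 4.0, 1.0],
                        sigma=sigma, bounds=([1.1, 0.0, 0.1], [1.99, 10.0, 5.0]))
    print(np.round(popt, 2))   # ~[1.6, 5.0, 1.2]
    ```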

  14. Development of water movement model as a module of moisture content simulation in static pile composting.

    PubMed

    Seng, Bunrith; Kaneko, Hidehiro; Hirayama, Kimiaki; Katayama-Hirayama, Keiko

    2012-01-01

    This paper presents a mathematical model of vertical water movement and a performance evaluation of the model in static pile composting operated with neither air supply nor turning. The vertical moisture content (MC) model was developed with consideration of evaporation (internal and external evaporation), diffusion (liquid and vapour diffusion) and percolation, whereas additional water from substrate decomposition and irrigation was not taken into account. The evaporation term in the model was established on the basis of the reference evaporation of the materials at known temperature, MC and relative humidity of the air. Diffusion of water vapour was estimated as a function of relative humidity and temperature, whereas diffusion of liquid water was obtained empirically from experiment by adopting Fick's law. Percolation was estimated following Darcy's law. The model was applied to a column of composting wood chips with an initial MC of 60%. The simulation program was run for four weeks with a calculation time step of 1 s. The simulated results were in reasonably good agreement with the experimental results. Only the top layer (less than 20 cm) had a considerable MC reduction; the deeper layers were comparable to the initial MC, and the bottom layer was higher than the initial MC. This model is a useful tool to estimate the MC profile throughout the composting period, and could be incorporated into a biodegradation kinetic simulation of composting.
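
    A schematic finite-difference version of such a moisture balance (Fickian diffusion, upwind percolation, surface evaporation) reproduces the qualitative behaviour reported, with the top layer drying while deeper layers stay near the initial MC. All coefficients below are invented, not the paper's calibration.

    ```python
    import numpy as np

    # Explicit finite-difference sketch of the vertical moisture balance.
    nz, dz, dt = 30, 0.02, 60.0     # 60 cm column, 1-minute steps
    D = 1e-8                        # effective moisture diffusivity (m^2/s)
    v = 2e-9                        # percolation velocity scale (m/s)
    e_surf = 1e-7                   # surface evaporation rate constant (1/s)
    theta = np.full(nz, 0.60)       # initial MC of 60%; z increases downward

    for _ in range(4 * 7 * 24 * 60):        # four weeks of one-minute steps
        lap = np.zeros(nz)                  # Fickian diffusion term
        lap[1:-1] = (theta[2:] - 2.0 * theta[1:-1] + theta[:-2]) / dz ** 2
        adv = np.zeros(nz)                  # upwind gradient for downward flow
        adv[1:] = (theta[1:] - theta[:-1]) / dz
        theta += dt * (D * lap - v * adv)
        theta[0] -= dt * e_surf * theta[0]  # evaporation from the top layer
        theta = np.clip(theta, 0.0, 1.0)

    print("top:", theta[:3].round(3), " bottom:", theta[-3:].round(3))
    ```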

  15. Too Much of a Good Thing? Exploring the Impact of Wealth on Weight.

    PubMed

    Au, Nicole; Johnston, David W

    2015-11-01

    Obesity, like many health conditions, is more prevalent among the socioeconomically disadvantaged. In our data, very poor women are three times more likely to be obese and five times more likely to be severely obese than rich women. Despite this strong correlation, it remains unclear whether higher wealth causes lower obesity. In this paper, we use nationally representative panel data and exogenous wealth shocks (primarily inheritances and lottery wins) to shed light on this issue. Our estimates show that wealth improvements increase weight for women, but not men. This effect differs by initial wealth and weight: an average-sized wealth shock received by initially poor and obese women is estimated to increase weight by almost 10 lb. Importantly, for some females, the effects appear permanent. We also find that a change in diet is the most likely explanation for the weight gain. Overall, the results suggest that additional wealth may exacerbate rather than alleviate weight problems.

  16. The initiation of boiling during pressure transients. [water boiling on metal surfaces

    NASA Technical Reports Server (NTRS)

    Weisman, J.; Bussell, G.; Jashnani, I. L.; Hsieh, T.

    1973-01-01

    The initiation of boiling of water on metal surfaces during pressure transients has been investigated. The data were obtained by a new technique in which light beam fluctuations and a pressure signal were simultaneously recorded on a dual beam oscilloscope. The results obtained agreed with those obtained using high speed photography. It was found that, for water temperatures between 90 and 150 °C, the wall superheat required to initiate boiling during a rapid pressure transient was significantly higher than that required when the pressure was slowly reduced. This result is explained by assuming that a finite time is necessary for vapor to fill the cavity at which the bubble originates. Experimental measurements of this time are in reasonably good agreement with calculations based on the proposed theory. The theory includes a new procedure for estimating the coefficient of vaporization.

  17. Contrast-enhanced spectral mammography versus MRI: Initial results in the detection of breast cancer and assessment of tumour size.

    PubMed

    Fallenberg, E M; Dromain, C; Diekmann, F; Engelken, F; Krohn, M; Singh, J M; Ingold-Heppner, B; Winzer, K J; Bick, U; Renz, D M

    2014-01-01

    To compare mammography (MG), contrast-enhanced spectral mammography (CESM), and magnetic resonance imaging (MRI) in the detection and size estimation of histologically proven breast cancers using postoperative histology as the gold standard. After ethical approval, 80 women with newly diagnosed breast cancer underwent MG, CESM, and MRI examinations. CESM was reviewed by an independent experienced radiologist, and the maximum dimension of suspicious lesions was measured. For MG and MRI, routine clinical reports of breast specialists, with judgment based on the BI-RADS lexicon, were used. Results of each imaging technique were correlated to define the index cancer. Fifty-nine cases could be compared to postoperative histology for size estimation. Breast cancer was visible in 66/80 MG, 80/80 CESM, and 77/79 MRI examinations. The average largest lesion dimension was 27.31 mm (SD 22.18) in MG, 31.62 mm (SD 24.41) in CESM, and 27.72 mm (SD 21.51) in MRI, versus 32.51 mm (SD 29.03) in postoperative histology. No significant difference was found between lesion size measurements on MRI and CESM compared with histopathology. Our initial results show a better sensitivity of CESM and MRI in breast cancer detection than MG and a good correlation with postoperative histology in size assessment. • Contrast-enhanced spectral mammography (CESM) is slowly being introduced into clinical practice. • Access to breast MRI is limited by availability and lack of reimbursement. • Initial results show a better sensitivity of CESM and MRI than conventional mammography. • CESM showed a good correlation with postoperative histology in size assessment. • Contrast-enhanced spectral mammography offers promise, seemingly providing information comparable to MRI.

  18. Using Satellite Observations to Evaluate the AeroCOM Volcanic Emissions Inventory and the Dispersal of Volcanic SO2 Clouds in MERRA

    NASA Technical Reports Server (NTRS)

    Hughes, Eric J.; Krotkov, Nickolay; da Silva, Arlindo; Colarco, Peter

    2015-01-01

    Simulation of volcanic emissions in climate models requires information that describes how the emissions were injected into the atmosphere. While the total amount of gases and aerosols released from a volcanic eruption can be readily estimated from satellite observations, the source parameters, like injection altitude, eruption time and duration, are often not directly known. The AeroCOM volcanic emissions inventory provides estimates of eruption source parameters and has been used to initialize volcanic emissions in reanalysis projects, like MERRA. The AeroCOM volcanic emission inventory provides an eruption's daily SO2 flux and plume top altitude, yet an eruption can be very short lived, lasting only a few hours, and emit clouds at multiple altitudes. Case studies comparing the satellite-observed dispersal of volcanic SO2 clouds to simulations in MERRA have shown mixed results. Some cases show good agreement with observations, e.g. Okmok (2008), while for other eruptions the observed initial SO2 mass is half of that in the simulations, e.g. Sierra Negra (2005). In other cases, the initial SO2 amount agrees with the observations but shows very different dispersal rates, e.g. Soufriere Hills (2006). In the aviation hazards community, deriving accurate source terms is crucial for monitoring and short-term forecasting (24-h) of volcanic clouds. Back trajectory methods have been developed which use satellite observations and transport models to estimate the injection altitude, eruption time, and eruption duration of observed volcanic clouds. These methods can provide eruption timing estimates at a 2-hour temporal resolution and estimate the altitude and depth of a volcanic cloud. To better understand the differences between MERRA simulations and volcanic SO2 observations, back trajectory methods are used to estimate the source term parameters for a few volcanic eruptions, and the results are compared to the corresponding entries in the AeroCOM volcanic emission inventory. The nature of these mixed results is discussed with respect to the source term estimates.

  19. 12 CFR 1024.7 - Good faith estimate.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    12 CFR § 1024.7 (Regulation X), Mortgage Settlement and Escrow Accounts: Good faith estimate. (a) Lender to provide. (1)..., 2014. For the convenience of the user, the revised text is set forth as follows: § 1024.7 Good faith...

  20. Risk Estimation for Lung Cancer in Libya: Analysis Based on Standardized Morbidity Ratio, Poisson-Gamma Model, BYM Model and Mixture Model

    PubMed

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-03-01

    Cancer is the most rapidly spreading disease in the world, especially in developing countries, including Libya. Cancer represents a significant burden on patients, families, and their societies. This disease can be controlled if detected early. Therefore, disease mapping has recently become an important method in the fields of public health research and disease epidemiology. The correct choice of statistical model is a very important step in producing a good map of a disease. Libya was selected for this work, in which its geographical variation in the incidence of lung cancer is examined. The objective of this paper is to estimate the relative risk for lung cancer. Four statistical models and population censuses of the study area for the period 2006 to 2011 were used to estimate the relative risk for lung cancer: the Standardized Morbidity Ratio (SMR), the most popular statistic in the field of disease mapping; the Poisson-gamma model, one of the earliest applications of Bayesian methodology; the Besag, York and Mollie (BYM) model; and the Mixture model. As an initial step, this study provides a review of all the proposed models, which are then applied to the lung cancer data for Libya. Maps, tables, graphs, and goodness-of-fit (GOF) statistics, which are commonly used in statistical modelling to compare fitted models, were used to compare and present the preliminary results. The main results show that the Poisson-gamma, BYM, and Mixture models can overcome the problem of the first model (SMR) when there is no observed lung cancer case in certain districts. Results show that the Mixture model is the most robust and provides better relative risk estimates across the range of models.
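
    The zero-count weakness of the SMR and the shrinkage behaviour of the Poisson-gamma model are easy to see numerically. In the sketch below the counts and prior hyperparameters are illustrative; with a Gamma(a, b) prior on the relative risk and O ~ Poisson(E·RR), the posterior mean is (O + a)/(E + b).

    ```python
    import numpy as np

    # Observed (O) and expected (E) lung cancer counts per district (toy values).
    O = np.array([12.0, 0.0, 7.0, 25.0, 3.0])
    E = np.array([10.2, 1.1, 6.5, 18.0, 4.4])

    smr = O / E            # collapses to 0 wherever no case was observed

    a, b = 2.0, 2.0        # illustrative Gamma prior hyperparameters
    rr_pg = (O + a) / (E + b)   # Poisson-gamma posterior mean relative risk

    for i, (s, r) in enumerate(zip(smr, rr_pg)):
        print(f"district {i}: SMR {s:.2f}   Poisson-gamma RR {r:.2f}")
    ```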

  1. Risk Estimation for Lung Cancer in Libya: Analysis Based on Standardized Morbidity Ratio, Poisson-Gamma Model, BYM Model and Mixture Model

    PubMed Central

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-01-01

    Cancer is the most rapidly spreading disease in the world, especially in developing countries, including Libya. Cancer represents a significant burden on patients, families, and their societies. This disease can be controlled if detected early. Therefore, disease mapping has recently become an important method in the fields of public health research and disease epidemiology. The correct choice of statistical model is a very important step in producing a good map of a disease. Libya was selected for this work, in which its geographical variation in the incidence of lung cancer is examined. The objective of this paper is to estimate the relative risk for lung cancer. Four statistical models and population censuses of the study area for the period 2006 to 2011 were used to estimate the relative risk for lung cancer: the Standardized Morbidity Ratio (SMR), the most popular statistic in the field of disease mapping; the Poisson-gamma model, one of the earliest applications of Bayesian methodology; the Besag, York and Mollie (BYM) model; and the Mixture model. As an initial step, this study provides a review of all the proposed models, which are then applied to the lung cancer data for Libya. Maps, tables, graphs, and goodness-of-fit (GOF) statistics, which are commonly used in statistical modelling to compare fitted models, were used to compare and present the preliminary results. The main results show that the Poisson-gamma, BYM, and Mixture models can overcome the problem of the first model (SMR) when there is no observed lung cancer case in certain districts. Results show that the Mixture model is the most robust and provides better relative risk estimates across the range of models. PMID:28440974

  2. Bromate formation in a hybrid ozonation-ceramic membrane filtration system.

    PubMed

    Moslemi, Mohammadreza; Davies, Simon H; Masten, Susan J

    2011-11-01

    The effect of pH, ozone mass injection rate, initial bromide concentration, and membrane molecular weight cut-off (MWCO) on bromate formation in a hybrid membrane filtration-ozonation reactor was studied. Decreasing the pH significantly reduced bromate formation. Bromate formation increased with increasing gaseous ozone mass injection rate, owing to the resulting increase in dissolved ozone concentration. Greater initial bromide concentrations resulted in higher bromate concentrations. An increase in bromate concentration was also observed when the MWCO was reduced, which led to a concomitant increase in the retention time in the system. A model to estimate the rate of bromate formation was developed, and good correlation between the model simulation and the experimental data was achieved.

  3. A high-accuracy two-position alignment inertial navigation system for lunar rovers aided by a star sensor with a calibration and positioning function

    NASA Astrophysics Data System (ADS)

    Lu, Jiazhen; Lei, Chaohua; Yang, Yanqiang; Liu, Ming

    2016-12-01

    An integrated inertial/celestial navigation system (INS/CNS) has wide applicability in lunar rovers as it provides accurate and autonomous navigational information. Initialization is particularly vital for an INS. This paper proposes a two-position initialization method based on a standard Kalman filter, in which the difference between the computed star vector and the measured star vector serves as the measurement. With the aid of a star sensor and the two positions, the attitudinal and positional errors can be greatly reduced, and the biases of the three gyros and accelerometers can also be estimated. Semi-physical simulation results show that the attitudinal and positional errors converge within 0.07″ and 0.1 m, respectively, when the given initial positional error is 1 km and the attitudinal error is 10°. These good results show that the proposed method can accomplish alignment, positioning and calibration simultaneously. The proposed two-position initialization method thus has potential for application in lunar rover navigation.
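
    The correction step at the heart of such a scheme is the standard Kalman filter measurement update. The sketch below is a minimal, generic version, assuming the computed-minus-measured star-vector difference has already been cast as a linear measurement z = Hx + v; the state layout and matrices are illustrative, not those of the paper.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One standard Kalman filter measurement update.
    x, P : prior state estimate and covariance
    z    : measurement (e.g. computed-minus-measured star vector)
    H, R : measurement matrix and measurement noise covariance"""
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_post = x + K @ (z - H @ x)      # corrected state
    P_post = (np.eye(len(x)) - K @ H) @ P
    return x_post, P_post
```

    In the two-position scheme, x would collect the attitude, position, and gyro/accelerometer bias errors, and the update would be applied repeatedly to star observations taken at both positions.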

  4. Thermal fatigue behaviour for a 316 L type steel

    NASA Astrophysics Data System (ADS)

    Fissolo, A.; Marini, B.; Nais, G.; Wident, P.

    1996-10-01

    This paper deals with the initiation and growth of cracks produced by thermal fatigue loading of 316 L steel, a reference material for the first wall of the next fusion reactor, ITER. Two types of facilities have been built. As with real components, thermal cycles are repeatedly applied to the surface of the specimen. The first facility is mainly concerned with initiation, which is detected with a light microscope. The second allows the propagation of a single crack to be determined. Crack initiation is analyzed using the French RCC-MR code procedure and the strain-controlled isothermal fatigue curves. To predict crack growth, a model previously proposed by Haigh and Skelton is applied. This is based on the determination of effective stress intensity factors, which take into account both plastic strain and crack closure phenomena. Estimates obtained with these methodologies are shown to be in good agreement with the experimental data.
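
    To make the crack-growth prediction step concrete, the sketch below integrates a Paris-type law da/dN = C(ΔK_eff)^m, with the effective stress intensity range built from an effective stress range in the spirit of the Haigh-Skelton approach. The constants, geometry factor and crack sizes are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def crack_growth_cycles(a0, af, C, m, dsigma_eff, Y=1.12):
    """Cycles to grow a crack from a0 to af (metres) under a
    Paris-type law da/dN = C * (dK_eff)**m, with
    dK_eff = Y * dsigma_eff * sqrt(pi * a)  [MPa*sqrt(m)]."""
    a = np.linspace(a0, af, 10_000)
    dK = Y * dsigma_eff * np.sqrt(np.pi * a)
    dN_da = 1.0 / (C * dK**m)              # cycles per metre of growth
    # trapezoidal integration of dN/da over a0..af
    return float(np.sum(0.5 * (dN_da[1:] + dN_da[:-1]) * np.diff(a)))

# Illustrative 316L-like constants only (not the paper's calibration)
N = crack_growth_cycles(a0=1e-4, af=5e-3, C=1e-11, m=3.0, dsigma_eff=200.0)
print(f"{N:.3e} cycles")
```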

  5. 12 CFR 1024.7 - Good faith estimate.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 8 2012-01-01 2012-01-01 false Good faith estimate. 1024.7 Section 1024.7 Banks and Banking BUREAU OF CONSUMER FINANCIAL PROTECTION REAL ESTATE SETTLEMENT PROCEDURES ACT (REGULATION X) § 1024.7 Good faith estimate. (a) Lender to provide. (1) Except as otherwise provided in...

  6. 12 CFR 1024.7 - Good faith estimate.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 8 2013-01-01 2013-01-01 false Good faith estimate. 1024.7 Section 1024.7 Banks and Banking BUREAU OF CONSUMER FINANCIAL PROTECTION REAL ESTATE SETTLEMENT PROCEDURES ACT (REGULATION X) § 1024.7 Good faith estimate. (a) Lender to provide. (1) Except as otherwise provided in...

  7. 24 CFR 3500.7 - Good faith estimate.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 5 2013-04-01 2013-04-01 false Good faith estimate. 3500.7 Section 3500.7 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued... DEVELOPMENT REAL ESTATE SETTLEMENT PROCEDURES ACT § 3500.7 Good faith estimate. (a) Lender to provide. (1...

  8. 24 CFR 3500.7 - Good faith estimate.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 5 2011-04-01 2011-04-01 false Good faith estimate. 3500.7 Section 3500.7 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued... DEVELOPMENT REAL ESTATE SETTLEMENT PROCEDURES ACT § 3500.7 Good faith estimate. (a) Lender to provide. (1...

  9. 24 CFR 3500.7 - Good faith estimate.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 5 2014-04-01 2014-04-01 false Good faith estimate. 3500.7 Section 3500.7 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued... DEVELOPMENT REAL ESTATE SETTLEMENT PROCEDURES ACT § 3500.7 Good faith estimate. (a) Lender to provide. (1...

  10. 24 CFR 3500.7 - Good faith estimate.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 5 2012-04-01 2012-04-01 false Good faith estimate. 3500.7 Section 3500.7 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued... DEVELOPMENT REAL ESTATE SETTLEMENT PROCEDURES ACT § 3500.7 Good faith estimate. (a) Lender to provide. (1...

  11. Verification of the Velocity Structure in Mexico Basin Using the H/V Spectral Ratio of Microtremors

    NASA Astrophysics Data System (ADS)

    Matsushima, S.; Sanchez-Sesma, F. J.; Nagashima, F.; Kawase, H.

    2011-12-01

    The authors have proposed a new theory for calculating the horizontal-to-vertical (H/V) spectral ratio of microtremors, assuming that the wave field is completely diffuse, and have applied it to interpret observed microtremor data. This new theory is expected to be applicable to detecting the subsurface velocity structure beneath urban areas. Precise information about the subsurface velocity structure is essential for predicting strong ground motion accurately, which is necessary to mitigate seismic disasters. The Mexico basin, which suffered severe damage during the 1985 Michoacán Earthquake (Ms 8.1) despite being several hundred kilometers from the source region, is a location where reassessment of soil properties is urgent; because of subsidence, improved estimates of these properties are mandatory. In order to estimate possible changes in the velocity structure of the Mexico basin, we measured microtremors at strong motion observation sites in Mexico City, where information about the velocity profiles is available. Using the obtained data, we derived the observed H/V spectral ratio and compared it with the theoretical H/V spectral ratio to gauge the goodness of our new theory. First we compared the observed H/V spectral ratios at five stations to see the diverse characteristics of this measurement. Then we compared the observed H/V spectral ratios with the theoretical predictions to confirm our theory, taking the velocity model from previous surveys at the strong motion observation sites as an initial model. We were able to closely fit both the peak frequency and the amplitude of the observed H/V spectral ratio with the theoretical H/V spectral ratio calculated by our new method. These results show that we have a good initial model. However, the theoretical estimates need some improvement to fit the observed H/V spectral ratio perfectly, which may indicate that the initial model needs some adjustment. We explore how to improve the velocity model based on the comparison between observations and theory.
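
    On the observational side, the H/V spectral ratio is obtained from the amplitude spectra of the three components of a microtremor record. The sketch below is a minimal version of that step, assuming detrended NS/EW/UD traces with a common sampling interval dt; the windowing and the quadratic combination of the horizontals are common choices, not necessarily those of the authors.

```python
import numpy as np

def hv_ratio(ns, ew, ud, dt):
    """H/V spectral ratio from a three-component microtremor record.
    ns, ew, ud : equal-length, detrended traces; dt : sampling interval (s).
    Returns frequencies and the ratio of the quadratic-mean horizontal
    amplitude spectrum to the vertical amplitude spectrum."""
    n = len(ud)
    w = np.hanning(n)                      # taper to limit leakage
    freqs = np.fft.rfftfreq(n, dt)
    a_ns = np.abs(np.fft.rfft(ns * w))
    a_ew = np.abs(np.fft.rfft(ew * w))
    a_ud = np.abs(np.fft.rfft(ud * w))
    horiz = np.sqrt((a_ns**2 + a_ew**2) / 2.0)
    return freqs[1:], horiz[1:] / a_ud[1:]  # drop the DC bin
```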

  12. Modeling the distribution of extreme share return in Malaysia using Generalized Extreme Value (GEV) distribution

    NASA Astrophysics Data System (ADS)

    Hasan, Husna; Radi, Noor Fadhilah Ahmad; Kassim, Suraiya

    2012-05-01

    Extreme share returns in Malaysia are studied. The monthly, quarterly, half-yearly and yearly maximum returns are fitted to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests are performed to test for stationarity, while the Mann-Kendall (MK) test checks for the presence of a monotonic trend. Maximum Likelihood Estimation (MLE) is used to estimate the parameters, while L-moments estimates (LMOM) are used to initialize the MLE optimization routine for the stationary model. A likelihood ratio test is performed to determine the best model. Sherman's goodness-of-fit test is used to assess the quality of convergence of the monthly, quarterly, half-yearly and yearly maxima to the GEV distribution. Return levels are then estimated for prediction and planning purposes. The results show that the maximum returns for all selection periods are stationary. The Mann-Kendall test indicates the existence of a trend, so non-stationary models are fitted as well. Model 2, in which the location parameter increases with time, is the best for all selection intervals. Sherman's goodness-of-fit test shows that the monthly, quarterly, half-yearly and yearly maxima converge to the GEV distribution. From the results, it seems reasonable to conclude that the yearly maximum is better for convergence to the GEV distribution, especially if longer records are available. The return level estimate, i.e., the return amount that is expected to be exceeded, on average, once every t time periods, starts to appear in the confidence interval at T = 50 for the quarterly, half-yearly and yearly maxima.
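
    A minimal sketch of the stationary fitting step is given below, using scipy's genextreme. A crude moment-based starting point stands in for the L-moments initialization described in the abstract; that substitution is an assumption of this sketch, not the authors' procedure.

```python
import numpy as np
from scipy.stats import genextreme

def fit_gev(block_maxima):
    """Maximum likelihood fit of a stationary GEV to block maxima.
    A Gumbel-like moment guess initializes the optimizer. Note scipy's
    shape convention: c = -xi relative to the usual GEV shape xi."""
    x = np.asarray(block_maxima, float)
    scale0 = 0.78 * x.std()            # sqrt(6)/pi * std (Gumbel moments)
    loc0 = x.mean() - 0.5772 * scale0  # mean - Euler_gamma * scale
    shape, loc, scale = genextreme.fit(x, 0.0, loc=loc0, scale=scale0)
    return shape, loc, scale

def return_level(T, shape, loc, scale):
    """Level exceeded on average once every T blocks."""
    return genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
```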

  13. Evaluating rainfall kinetic energy - intensity relationships with observed disdrometric data

    NASA Astrophysics Data System (ADS)

    Angulo-Martinez, Marta; Begueria, Santiago; Latorre, Borja

    2016-04-01

    Rainfall kinetic energy is required for determining erosivity, the ability of rainfall to detach soil particles and initiate erosion. Its determination relies on the use of disdrometers, i.e., devices capable of measuring the size distribution and fall velocities of raindrops. In the absence of such devices, rainfall kinetic energy is usually estimated with empirical expressions relating rainfall energy and intensity. We evaluated the performance of 14 rainfall energy equations in estimating one-minute rainfall energy and event total energy, in comparison with data from 821 rainfall episodes (more than 100 thousand one-minute observations) observed by means of an optical disdrometer. In addition, two sources of bias arising when using such relationships were evaluated: i) the influence of using theoretical terminal raindrop fall velocities instead of measured values; and ii) the influence of time aggregation (rainfall intensity data every 5, 10, 15, 30, and 60 minutes). The empirical relationships did a relatively good job when complete events were considered (R2 > 0.82), but offered poorer results for within-event (one-minute resolution) variation. Systematic biases were also large for many equations. When the raindrop size distribution was known, estimating the terminal fall velocities by empirical laws produced good results even at fine time resolution. The influence of time aggregation on the estimated kinetic energy was very high, although linear scaling may allow empirical correction. These results stress the importance of considering all these effects when rainfall energy must be estimated from more standard precipitation records, and recommend the use of disdrometer data to determine rainfall kinetic energy locally.
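
    For readers who want to reproduce such a comparison, the sketch below evaluates one widely used exponential kinetic energy-intensity form, e(I) = e_max(1 - a exp(-bI)). The default parameter values are illustrative and are not the calibrations evaluated in the paper.

```python
import numpy as np

def ke_exponential(intensity, e_max=28.3, a=0.52, b=0.042):
    """Exponential KE-intensity relation e(I) = e_max*(1 - a*exp(-b*I)),
    with e in J m^-2 mm^-1 and I in mm h^-1. Default parameters are
    illustrative, not the calibrations tested in the paper."""
    I = np.asarray(intensity, float)
    return e_max * (1.0 - a * np.exp(-b * I))

# Unit kinetic energy at one-minute intensities of 5, 20 and 60 mm/h
print(ke_exponential([5.0, 20.0, 60.0]))
```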

  14. Effect of forward speed on the roll damping of three small fishing vessels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haddara, M.R.; Zhang, S.

    1994-05-01

    An extensive experimental program has been carried out to estimate roll damping parameters for three models of fishing vessels having different hull shapes and moving with forward speed. Roll damping parameters are determined using a novel method. This method combines the energy method and the modulating function method. The effect of forward speed, initial heel angle and the natural frequency on damping is discussed. A modification of Ikeda's formula for lift damping prediction is suggested. The modified formula produces results which are in good agreement with the experiments.

  15. Estimation of temperature in micromaser-type systems

    NASA Astrophysics Data System (ADS)

    Farajollahi, B.; Jafarzadeh, M.; Rangani Jahromi, H.; Amniat-Talab, M.

    2018-06-01

    We address the estimation of the number of photons and temperature in a micromaser-type system with Fock state and thermal fields. We analyze the behavior of the quantum Fisher information (QFI) for both fields. In particular, we show that in the Fock state field model, the QFI for a non-entangled initial state of the atoms increases monotonically with time, while for an entangled initial state of the atoms it shows oscillatory behavior, indicating non-Markovian dynamics. Moreover, it is observed that the QFI, the entropy of entanglement, and the fidelity exhibit collapse and revival behavior. Focusing on each period in which the collapses and revivals occur, we see that the optimal points of the QFI and the entanglement coincide. In addition, when the evolved-state fidelity of one of the subsystems becomes maximal, the QFI also achieves its maximum. We also identify the evolved fidelity with respect to the initial state as a good witness of non-Markovianity. Moreover, we find, interestingly, that the entropy of the composite system can be used as a witness of non-Markovian evolution of the subsystems. For the thermal field model, we similarly investigate the relation among the QFI associated with the temperature, the von Neumann entropy, and the fidelity. In particular, it is found that at the instants when the maximum values of the QFI are achieved, the entanglement between the two-qubit system and the environment is maximized while the entanglement between the probe and its environment is minimized. Moreover, we show that the thermometry may lead to optimal estimation of practical temperatures. Extending our computation to the two-qubit system, we find that using a two-qubit probe generally leads to more effective estimation than the one-qubit scenario. Finally, we show that initial state entanglement plays a key role in the advent of non-Markovianity and in determining its strength in the composite system and its subsystems.

  16. Left ventricular volume estimation in cardiac three-dimensional ultrasound: a semiautomatic border detection approach.

    PubMed

    van Stralen, Marijn; Bosch, Johan G; Voormolen, Marco M; van Burken, Gerard; Krenning, Boudewijn J; van Geuns, Robert-Jan M; Lancée, Charles T; de Jong, Nico; Reiber, Johan H C

    2005-10-01

    We propose a semiautomatic endocardial border detection method for three-dimensional (3D) time series of cardiac ultrasound (US) data, based on pattern matching and dynamic programming operating on two-dimensional (2D) slices of the 3D-plus-time data, for the estimation of full-cycle left ventricular volume with minimal user interaction. The presented method is generally applicable to 3D US data and is evaluated on data acquired with the Fast Rotating Ultrasound (FRU) Transducer, developed by Erasmus Medical Center (Rotterdam, the Netherlands), a conventional phased-array transducer rotating at very high speed around its image axis. The detection is based on endocardial edge pattern matching using dynamic programming, constrained by a 3D-plus-time shape model. It is applied to an automatically selected subset of 2D images of the original data set, typically 10 equidistant rotation angles and 16 cardiac phases (160 images). Initialization requires four contours per patient to be drawn manually. We evaluated the method on 14 patients against MRI end-diastolic (ED) and end-systolic (ES) volumes. The semiautomatic border detection approach shows good correlation with MRI ED/ES volumes (r = 0.938) and low interobserver variability (y = 1.005x - 16.7, r = 0.943) over full-cycle volume estimations. It shows high consistency in tracking the user-defined initial borders over space and time. We show that the ease of acquisition using the FRU transducer and the semiautomatic endocardial border detection method together provide a way to quickly estimate the left ventricular volume over the full cardiac cycle with little user interaction.

  17. Standard and goodness-of-fit parameter estimation methods for the three-parameter lognormal distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kane, V.E.

    1982-01-01

    A class of goodness-of-fit estimators is found to provide a useful alternative, in certain situations, to the standard maximum likelihood method, which has some undesirable characteristics when estimating the parameters of the three-parameter lognormal distribution. The goodness-of-fit tests considered include the Shapiro-Wilk and Filliben tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Robustness of the procedures is examined and example data sets are analyzed.
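
    A minimal sketch of the goodness-of-fit estimation idea follows: scan candidate threshold values gamma and keep the one that maximizes the Shapiro-Wilk statistic of log(x - gamma); the remaining two parameters then follow from the transformed sample. The grid construction assumes positive data and is illustrative, not the paper's algorithm.

```python
import numpy as np
from scipy.stats import shapiro

def fit_lognormal3_gof(x, n_grid=200):
    """Goodness-of-fit estimation for the three-parameter lognormal:
    pick the threshold gamma maximizing the Shapiro-Wilk W statistic
    of log(x - gamma), then take mu and sigma from the transformed
    sample. Assumes positive data; the grid choice is illustrative."""
    x = np.sort(np.asarray(x, float))
    gammas = np.linspace(0.0, 0.999 * x[0], n_grid)
    best = max(gammas, key=lambda g: shapiro(np.log(x - g))[0])
    z = np.log(x - best)
    return best, z.mean(), z.std(ddof=1)   # gamma, mu, sigma
```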

  18. Why the MDGs need good governance in pharmaceutical systems to promote global health.

    PubMed

    Kohler, Jillian Clare; Mackey, Tim Ken; Ovtcharenko, Natalia

    2014-01-21

    Corruption in the health sector can hurt health outcomes. Improving good governance can in turn help prevent health-related corruption. We understand good governance as having the following characteristics: it is consensus-oriented, accountable, transparent, responsive, equitable and inclusive, effective and efficient, follows the rule of law, is participatory and should in theory be less vulnerable to corruption. By focusing on the pharmaceutical system, we explore some of the key lessons learned from existing initiatives in good governance. As the development community begins to identify post-2015 Millennium Development Goals targets, it is essential to evaluate programs in good governance in order to build on these results and establish sustainable strategies. This discussion on the pharmaceutical system illuminates why. Considering pharmaceutical governance initiatives such as those launched by the World Bank, World Health Organization, and the Global Fund, we argue that country ownership of good governance initiatives is essential but also any initiative must include the participation of impartial stakeholders. Understanding the political context of any initiative is also vital so that potential obstacles are identified and the design of any initiative is flexible enough to make adjustments in programming as needed. Finally, the inherent challenge which all initiatives face is adequately measuring outcomes from any effort. However in fairness, determining the precise relationship between good governance and health outcomes is rarely straightforward. Challenges identified in pharmaceutical governance initiatives manifest in different forms depending on the nature and structure of the initiative, but their regular occurrence and impact on population-based health demonstrates growing importance of addressing pharmaceutical governance as a key component of the post-2015 Millennium Development Goals. Specifically, these challenges need to be acknowledged and responded to with global cooperation and innovation to establish localized and evidence-based metrics for good governance to promote global pharmaceutical safety.

  19. Why the MDGs need good governance in pharmaceutical systems to promote global health

    PubMed Central

    2014-01-01

    Background Corruption in the health sector can hurt health outcomes. Improving good governance can in turn help prevent health-related corruption. We understand good governance as having the following characteristics: it is consensus-oriented, accountable, transparent, responsive, equitable and inclusive, effective and efficient, follows the rule of law, is participatory and should in theory be less vulnerable to corruption. By focusing on the pharmaceutical system, we explore some of the key lessons learned from existing initiatives in good governance. As the development community begins to identify post-2015 Millennium Development Goals targets, it is essential to evaluate programs in good governance in order to build on these results and establish sustainable strategies. This discussion on the pharmaceutical system illuminates why. Discussion Considering pharmaceutical governance initiatives such as those launched by the World Bank, World Health Organization, and the Global Fund, we argue that country ownership of good governance initiatives is essential but also any initiative must include the participation of impartial stakeholders. Understanding the political context of any initiative is also vital so that potential obstacles are identified and the design of any initiative is flexible enough to make adjustments in programming as needed. Finally, the inherent challenge which all initiatives face is adequately measuring outcomes from any effort. However in fairness, determining the precise relationship between good governance and health outcomes is rarely straightforward. Summary Challenges identified in pharmaceutical governance initiatives manifest in different forms depending on the nature and structure of the initiative, but their regular occurrence and impact on population-based health demonstrates growing importance of addressing pharmaceutical governance as a key component of the post-2015 Millennium Development Goals. Specifically, these challenges need to be acknowledged and responded to with global cooperation and innovation to establish localized and evidence-based metrics for good governance to promote global pharmaceutical safety. PMID:24447600

  20. 24 CFR Appendix C to Part 3500 - Instructions for Completing Good Faith Estimate (GFE) Form

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 5 2012-04-01 2012-04-01 false Instructions for Completing Good Faith Estimate (GFE) Form C Appendix C to Part 3500 Housing and Urban Development Regulations Relating.... 3500, App. C Appendix C to Part 3500—Instructions for Completing Good Faith Estimate (GFE) Form The...

  1. Fast and Easy 3D Reconstruction with the Help of Geometric Constraints and Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Annich, Afafe; El Abderrahmani, Abdellatif; Satori, Khalid

    2017-09-01

    The purpose of the work presented in this paper is to describe a new method of 3D reconstruction from one or more uncalibrated images. This method is based on two important concepts: geometric constraints and genetic algorithms (GAs). First, we discuss the proposed combination of bundle adjustment and GAs to improve the efficiency and success of 3D reconstruction. We use GAs to improve the fitness of the initial values used in the optimization problem, which reliably increases the convergence rate. Extracted geometric constraints are first used to obtain an estimated value of the focal length, which helps in the initialization step. Matching of homologous points, together with the constraints, is used to estimate the 3D model. Our new method offers several advantages: it reduces the number of estimated parameters in the optimization step, decreases the number of images required, saves time, and stabilizes the quality of the 3D results. In the end, without any prior information about the 3D scene, we obtain an accurate calibration of the cameras and a realistic 3D model that strictly respects the geometric constraints defined beforehand, in an easy way. Various data and examples are used to highlight the efficiency and competitiveness of the present approach.

  2. A method for estimating abundance of mobile populations using telemetry and counts of unmarked animals

    USGS Publications Warehouse

    Clement, Matthew; O'Keefe, Joy M; Walters, Brianne

    2015-01-01

    While numerous methods exist for estimating abundance when detection is imperfect, these methods may not be appropriate due to logistical difficulties or unrealistic assumptions. In particular, if highly mobile taxa are frequently absent from survey locations, methods that estimate a probability of detection conditional on presence will generate biased abundance estimates. Here, we propose a new estimator for estimating abundance of mobile populations using telemetry and counts of unmarked animals. The estimator assumes that the target population conforms to a fission-fusion grouping pattern, in which the population is divided into groups that frequently change in size and composition. If assumptions are met, it is not necessary to locate all groups in the population to estimate abundance. We derive an estimator, perform a simulation study, conduct a power analysis, and apply the method to field data. The simulation study confirmed that our estimator is asymptotically unbiased with low bias, narrow confidence intervals, and good coverage, given a modest survey effort. The power analysis provided initial guidance on survey effort. When applied to small data sets obtained by radio-tracking Indiana bats, abundance estimates were reasonable, although imprecise. The proposed method has the potential to improve abundance estimates for mobile species that have a fission-fusion social structure, such as Indiana bats, because it does not condition detection on presence at survey locations and because it avoids certain restrictive assumptions.

  3. Determination of time zero from a charged particle detector

    DOEpatents

    Green, Jesse Andrew [Los Alamos, NM

    2011-03-15

    A method, system and computer program are used to determine the linear track that best fits the most likely or expected path of a charged particle passing through a charged particle detector having a plurality of drift cells. Hit signals from the charged particle detector are associated with a particular charged particle track. An initial estimate of time zero is made from these hit signals, and linear tracks are then fit to the drift radii for each particular time-zero estimate. The linear track having the best fit is selected, and errors in the fit and tracking parameters are computed. By adopting this method and system, the large and expensive fast detectors otherwise needed to determine time zero in charged particle detectors can be avoided.
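
    A minimal sketch of the search described above: for each candidate time zero, convert hit times to drift radii, fit a straight line, and keep the candidate with the smallest residual. The geometry is collapsed to one dimension for illustration; the patented method operates on the full drift-cell geometry.

```python
import numpy as np

def best_time_zero(hit_times, positions, v_drift, t0_candidates):
    """Grid search for time zero: for each candidate t0, compute drift
    radii r = v_drift*(t - t0), fit a straight line r(x) by least
    squares, and keep the t0 with the lowest residual sum of squares."""
    t = np.asarray(hit_times, float)
    x = np.asarray(positions, float)
    best_t0, best_rss, best_fit = None, np.inf, None
    for t0 in t0_candidates:
        r = v_drift * (t - t0)
        coeffs, rss, *_ = np.polyfit(x, r, 1, full=True)
        rss = rss[0] if len(rss) else 0.0
        if rss < best_rss:
            best_t0, best_rss, best_fit = t0, rss, coeffs
    return best_t0, best_fit, best_rss
```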

  4. Estimating Dense Cardiac 3D Motion Using Sparse 2D Tagged MRI Cross-sections*

    PubMed Central

    Ardekani, Siamak; Gunter, Geoffrey; Jain, Saurabh; Weiss, Robert G.; Miller, Michael I.; Younes, Laurent

    2015-01-01

    In this work, we describe a new method, an extension of the Large Deformation Diffeomorphic Metric Mapping to estimate three-dimensional deformation of tagged Magnetic Resonance Imaging Data. Our approach relies on performing non-rigid registration of tag planes that were constructed from set of initial reference short axis tag grids to a set of deformed tag curves. We validated our algorithm using in-vivo tagged images of normal mice. The mapping allows us to compute root mean square distance error between simulated tag curves in a set of long axis image planes and the acquired tag curves in the same plane. Average RMS error was 0.31±0.36(SD) mm, which is approximately 2.5 voxels, indicating good matching accuracy. PMID:25571140

  5. Statins in Acute Ischemic Stroke: A Systematic Review

    PubMed Central

    Hong, Keun-Sik; Lee, Ji Sung

    2015-01-01

    Background and Purpose Statins have pleiotropic effects of potential neuroprotection. However, because of the lack of large randomized clinical trials, current guidelines do not provide specific recommendations on statin initiation in acute ischemic stroke (AIS). The current study aims to systematically review the statin effect in AIS. Methods From a literature review, we identified articles exploring the prestroke and immediate post-stroke statin effect on imaging surrogate markers, initial stroke severity, functional outcome, and short-term mortality in human AIS. We summarize a descriptive overview. In addition, for subjects with available data from publications, we conducted a meta-analysis to provide pooled estimates. Results In total, we identified 70 relevant articles including 6 meta-analyses. Surrogate imaging marker studies suggested that statins might enhance collaterals and reperfusion. Our updated meta-analysis indicated that prestroke statin use was associated with milder initial stroke severity (odds ratio [OR] [95% confidence interval], 1.24 [1.05-1.48]; P=0.013), good functional outcome (1.50 [1.29-1.75]; P<0.001), and lower mortality (0.42 [0.21-0.82]; P=0.0108). In-hospital statin use was associated with good functional outcome (1.31 [1.12-1.53]; P=0.001) and lower mortality (0.41 [0.29-0.58]; P<0.001). In contrast, statin withdrawal was associated with poor functional outcome (1.83 [1.01-3.30]; P=0.045). In patients treated with thrombolysis, statin use was associated with good functional outcome (1.44 [1.10-1.89]; P=0.001), despite an increased risk of symptomatic hemorrhagic transformation (1.63 [1.04-2.56]; P=0.035). Conclusions The current study findings support the use of statins in AIS. However, the findings were mostly driven by observational studies at risk of bias, and large randomized clinical trials would therefore provide confirmatory evidence. PMID:26437994
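
    Pooled estimates like those quoted are typically produced by inverse-variance weighting of log odds ratios. The sketch below shows the fixed-effect version, recovering each study's standard error from its 95% confidence interval; it is generic meta-analysis machinery, not the authors' exact model.

```python
import numpy as np

def pooled_or(ors, ci_lo, ci_hi):
    """Fixed-effect inverse-variance pooling of odds ratios. Each
    study's SE is recovered from its 95% CI on the log scale:
    se = (ln(hi) - ln(lo)) / (2 * 1.96)."""
    log_or = np.log(ors)
    se = (np.log(ci_hi) - np.log(ci_lo)) / (2 * 1.96)
    w = 1.0 / se**2
    pooled = np.sum(w * log_or) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    return np.exp([pooled, pooled - 1.96 * pooled_se,
                   pooled + 1.96 * pooled_se])   # OR and 95% CI
```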

  6. Improving the quality of parameter estimates obtained from slug tests

    USGS Publications Warehouse

    Butler, J.J.; McElwee, C.D.; Liu, W.

    1996-01-01

    The slug test is one of the most commonly used field methods for obtaining in situ estimates of hydraulic conductivity. Despite its prevalence, this method has received criticism from many quarters in the ground-water community. This criticism emphasizes the poor quality of the estimated parameters, a condition that is primarily a product of the somewhat casual approach that is often employed in slug tests. Recently, the Kansas Geological Survey (KGS) has pursued research directed at improving methods for the performance and analysis of slug tests. Based on extensive theoretical and field research, a series of guidelines have been proposed that should enable the quality of parameter estimates to be improved. The most significant of these guidelines are: (1) three or more slug tests should be performed at each well during a given test period; (2) two or more different initial displacements (Ho) should be used at each well during a test period; (3) the method used to initiate a test should enable the slug to be introduced in a near-instantaneous manner and should allow a good estimate of Ho to be obtained; (4) data-acquisition equipment that enables a large quantity of high-quality data to be collected should be employed; (5) if an estimate of the storage parameter is needed, an observation well other than the test well should be employed; (6) the method chosen for analysis of the slug-test data should be appropriate for site conditions; (7) use of pre- and post-analysis plots should be an integral component of the analysis procedure; and (8) appropriate well construction parameters should be employed. Data from slug tests performed at a number of KGS field sites demonstrate the importance of these guidelines.

  7. Angiographic assessment of initial balloon angioplasty results.

    PubMed

    Gardiner, Geoffrey A; Sullivan, Kevin L; Halpern, Ethan J; Parker, Laurence; Beck, Margaret; Bonn, Joseph; Levin, David C

    2004-10-01

    To determine the influence of three factors involved in the angiographic assessment of balloon angioplasty (interobserver variability, operator bias, and the definition used to determine success) on the primary (technical) results of angioplasty in the peripheral arteries. Percent stenosis in 107 lesions in lower-extremity arteries was graded by three independent, experienced vascular radiologists ("observers") before and after balloon angioplasty, and their estimates were compared with the initial interpretations reported by the physician performing the procedure ("operator") and with an automated quantitative computer analysis. Observer variability was measured with use of intraclass correlation coefficients and SD. Differences among the operator, observers, and the computer were analyzed with use of the Wilcoxon signed-rank test and analysis of variance. For each evaluator, the results in this series of lesions were interpreted with three different definitions of success. Estimates of residual stenosis varied by an average range of 22.76%, with an average SD of 8.99. The intraclass correlation coefficients averaged 0.59 for residual stenosis after angioplasty for the three observers but decreased to 0.36 when the operator was included as the fourth evaluator. There was good to very good agreement among the three independent observers and the computer, but poor correlation with the operator (P

  8. Does a satisfactory relationship with her mother influence when a 16-year-old begins to have sex?

    PubMed

    Kovar, Cheryl L; Salsberry, Pamela J

    2012-01-01

    To examine aspects of the mother-daughter relationship as perceived by the 16-year-old (cohesion, flexibility, communication, monitoring, and satisfaction with time spent together) as they relate to when the daughter began having sex. A secondary analysis using data from the National Longitudinal Survey of Youth Child (1992-2000) and Young Adult (1996-2004) surveys were analyzed (N = 1,592). Logistic regression models estimated reports of cohesion, flexibility, communication, monitoring, and satisfaction with time spent together with sexual initiation by age 16. All models controlled for the mother's sociodemographic characteristics, lack of independence due to sisters in the sample, and extended time away from mother. Girls who reported being satisfied with the amount of time spent with their mother were less likely to report early sexual initiation. In addition, these girls were three times more likely to report good communication and four times more likely to report high levels of cohesion with their mothers. Individually, in addition to satisfaction with time spent together, high levels of cohesion and good communication were also associated with lower reports of sexual initiation by age 16. The feeling of being satisfied with the time spent together appears to be a global measure of the individual dimensions of cohesion and communication. Efforts in delaying sexual initiation in adolescents need to be directed at the mother-daughter relationship. Interventions to develop these dimensions within the relationship during early adolescence, as compared to interventions when sexual activity may have already occurred, are warranted.

  9. Diagnostic accuracy of refractometry for assessing bovine colostrum quality: A systematic review and meta-analysis.

    PubMed

    Buczinski, S; Vandeweerd, J M

    2016-09-01

    Provision of good quality colostrum [i.e., immunoglobulin G (IgG) concentration ≥50 g/L] is the first step toward ensuring proper passive transfer of immunity to young calves. Precise quantification of colostrum IgG levels cannot be easily performed on the farm. Assessment of the refractive index on a Brix scale with a refractometer has been described as being highly correlated with the IgG concentration in colostrum. The aim of this study was to perform a systematic review of the diagnostic accuracy of Brix refractometry for diagnosing good quality colostrum. From 101 references initially obtained, 11 were included in the systematic review and meta-analysis, representing 4,251 colostrum samples. The prevalence of good colostrum samples with IgG ≥50 g/L varied from 67.3 to 92.3% (median 77.9%). Specific estimates of accuracy [sensitivity (Se) and specificity (Sp)] were obtained for the different reported cut-points using a hierarchical summary receiver operating characteristic curve model. For the cut-point of 22% (n=8 studies), Se=80.2% (95% CI: 71.1-87.0%) and Sp=82.6% (71.4-90.0%). Decreasing the cut-point to 18% increased Se [96.1% (91.8-98.2%)] and decreased Sp [54.5% (26.9-79.6%)]. Modeling the effect of these Brix accuracy estimates using a stochastic simulation and Bayes' theorem showed that a positive result at the 22% Brix cut-point can be used to diagnose good quality colostrum [posttest probability of good colostrum: 94.3% (90.7-96.9%)]. The posttest probability of good colostrum with a Brix value <18% was only 22.7% (12.3-39.2%). Based on this study, the 2 cut-points could be used in combination to select good quality colostrum (samples with Brix ≥22%) or to discard poor quality colostrum (samples with Brix <18%). When sample results fall between these 2 values, colostrum supplementation should be considered.
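
    The Bayes' theorem step reported above is easy to reproduce. The sketch below computes the posttest probability of good colostrum from the sensitivity, specificity, and prevalence point estimates quoted in the abstract; the full analysis used a stochastic simulation, so this is only the point-estimate version.

```python
def posttest_probability(prevalence, sensitivity, specificity):
    """P(good colostrum | positive Brix test) by Bayes' theorem:
    Se*p / (Se*p + (1 - Sp)*(1 - p))."""
    p = prevalence
    true_pos = sensitivity * p
    false_pos = (1.0 - specificity) * (1.0 - p)
    return true_pos / (true_pos + false_pos)

# Point estimates from the abstract: 22% cut-point, prevalence 77.9%
print(f"{posttest_probability(0.779, 0.802, 0.826):.3f}")  # ~0.94
```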

  10. Legal Provisions, Educational Services and Health Care Across the Lifespan for Autism Spectrum Disorders in India.

    PubMed

    Barua, Merry; Kaushik, Jaya Shankar; Gulati, Sheffali

    2017-01-01

    India is estimated to have over 10 million persons with autism. Rising awareness of autism in India over the last decade, with ready access to information, has led to an increase in prevalence and earlier diagnosis, the creation of services, and some policy initiatives. However, there remains a gaping chasm between policy and implementation. The reach and quality of services remain sketchy and uneven, especially in the area of education. The present review discusses existing legal provisions for children and adults with autism in India. It also discusses governmental efforts and lacunae in existing health care facilities and education services in India. While there are examples of good practice and stories of hope, strong policy initiatives have to support grassroots action to improve the condition of persons with autism in India.

  11. Back-Face Strain for Monitoring Stable Crack Extension in Precracked Flexure Specimens

    NASA Technical Reports Server (NTRS)

    Salem, Jonathan A.; Ghosn, Louis J.

    2010-01-01

    Calibrations relating back-face strain to crack length in precracked flexure specimens were developed for different strain gage sizes. The functions were verified via experimental compliance measurements of notched and precracked ceramic beams. Good agreement between the functions and the experiments was obtained, and fracture toughness was calculated via several operational methods: maximum test load and optically measured precrack length; load at 2 percent crack extension and optical precrack length; and maximum load and back-face-strain crack length. All the methods gave very comparable results. The initiation toughness, K(sub Ii), was also estimated from the initial compliance and load. The results demonstrate that stability of precracked ceramic specimens tested in four-point flexure is a common occurrence, and that methods such as remotely monitored load-point displacement are only adequate for detecting stable extension of relatively deep cracks.

  12. Evaluation of high fidelity patient simulator in assessment of performance of anaesthetists.

    PubMed

    Weller, J M; Bloch, M; Young, S; Maze, M; Oyesola, S; Wyner, J; Dob, D; Haire, K; Durbridge, J; Walker, T; Newble, D

    2003-01-01

    There is increasing emphasis on performance-based assessment of clinical competence. The High Fidelity Patient Simulator (HPS) may be useful for assessment of clinical practice in anaesthesia, but needs formal evaluation of validity, reliability, feasibility and effect on learning. We set out to assess the reliability of a global rating scale for scoring simulator performance in crisis management. Using a global rating scale, three judges independently rated videotapes of anaesthetists in simulated crises in the operating theatre. Five anaesthetists then independently rated subsets of these videotapes. There was good agreement between raters for medical management, behavioural attributes and overall performance. Agreement was high for both the initial judges and the five additional raters. Using a global scale to assess simulator performance, we found good inter-rater reliability for scoring performance in a crisis. We estimate that two judges should provide a reliable assessment. High fidelity simulation should be studied further for assessing clinical performance.

  13. Improvements in aircraft extraction programs

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.; Maine, R. E.

    1976-01-01

    Flight data from an F-8 Corsair and a Cessna 172 were analyzed to demonstrate specific improvements in the LRC parameter extraction computer program. The Cramer-Rao bounds were shown to provide a satisfactory relative measure of the goodness of parameter estimates. They were not used as an absolute measure because of an inherent uncertainty within a multiplicative factor, traced in turn to the uncertainty in the noise bandwidth in the statistical theory of parameter estimation. The measure was also derived on an entirely nonstatistical basis, thereby also yielding an interpretation of the significance of off-diagonal terms in the dispersion matrix. The distinction between linear and nonlinear coefficients was shown to be important in its implications for a recommended order of parameter iteration. Techniques for improving convergence in general were developed and tested on flight data. In particular, an easily implemented modification incorporating a gradient search was shown to improve initial estimates and thus remove a common cause of nonconvergence.

  14. A Bayesian model for estimating population means using a link-tracing sampling design.

    PubMed

    St Clair, Katherine; O'Connell, Daniel

    2012-03-01

    Link-tracing sampling designs can be used to study human populations that contain "hidden" groups who tend to be linked together by a common social trait. These links can be used to increase the sampling intensity of a hidden domain by tracing links from individuals selected in an initial wave of sampling to additional domain members. Chow and Thompson (2003, Survey Methodology 29, 197-205) derived a Bayesian model to estimate the size or proportion of individuals in the hidden population for certain link-tracing designs. We propose an addition to their model that will allow for the modeling of a quantitative response. We assess properties of our model using a constructed population and a real population of at-risk individuals, both of which contain two domains of hidden and nonhidden individuals. Our results show that our model can produce good point and interval estimates of the population mean and domain means when our population assumptions are satisfied.

  15. A novel approach for estimating sugar and alcohol concentrations in wines using refractometer and hydrometer.

    PubMed

    Son, H S; Hong, Y S; Park, W M; Yu, M A; Lee, C H

    2009-03-01

    To estimate true Brix and alcoholic strength of must and wines without distillation, a novel approach using a refractometer and a hydrometer was developed. Initial Brix (I.B.), apparent refractometer Brix (A.R.), and apparent hydrometer Brix (A.H.) of must were measured by refractometer and hydrometer, respectively. Alcohol content (A) was determined with a hydrometer after distillation and true Brix (T.B.) was measured in distilled wines using a refractometer. Strong proportional correlations among A.R., A.H., T.B., and A in sugar solutions containing varying alcohol concentrations were observed in preliminary experiments. Similar proportional relationships among the parameters were also observed in must, which is a far more complex system than the sugar solution. To estimate T.B. and A of must during alcoholic fermentation, a total of 6 planar equations were empirically derived from the relationships among the experimental parameters. The empirical equations were then tested to estimate T.B. and A in 17 wine products, and resulted in good estimations of both quality factors. This novel approach was rapid, easy, and practical for use in routine analyses or for monitoring quality of must during fermentation and final wine products in a winery and/or laboratory.
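
    The planar equations described above can be fitted by ordinary least squares. The sketch below fits one such plane, target ≈ c0 + c1·A.R. + c2·A.H.; the variable names follow the abstract, but the functional form and the toy calibration numbers are assumptions standing in for the paper's six empirical equations.

```python
import numpy as np

def fit_planar(ar, ah, target):
    """Least-squares fit of target ~ c0 + c1*AR + c2*AH, one plane of
    the kind described in the abstract (coefficients are not theirs)."""
    A = np.column_stack([np.ones_like(ar), ar, ah])
    coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
    return coeffs

# Toy calibration set: refractometer Brix (ar), hydrometer Brix (ah)
# and distillation-measured true Brix (tb) of fermenting musts
ar = np.array([18.2, 15.1, 10.4, 6.3])
ah = np.array([17.5, 13.0, 7.2, 2.1])
tb = np.array([18.0, 14.2, 8.9, 4.5])
c = fit_planar(ar, ah, tb)
print(c, c[0] + c[1] * 12.0 + c[2] * 9.0)  # fitted plane, one prediction
```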

  16. Reduction of uncertainty for estimating runoff with the NRCS CN model by the adaptation to local climatic conditions

    NASA Astrophysics Data System (ADS)

    Durán-Barroso, Pablo; González, Javier; Valdés, Juan B.

    2016-04-01

    Rainfall-runoff quantification is one of the most important tasks in both engineering and watershed management, as it allows one to identify, forecast and explain watershed response. For that purpose, the Natural Resources Conservation Service Curve Number method (NRCS CN) is the most widely recognized conceptual lumped model in the field of rainfall-runoff estimation. There is, however, still an ongoing discussion about the procedure used to determine the portion of rainfall retained in the watershed before runoff is generated, known as the initial abstraction. This quantity is computed as a ratio (λ) of the potential maximum soil retention S of the watershed. Initially, this ratio was assumed to be 0.2, but it has since been proposed that it be revised to 0.05. However, the existing procedures for converting NRCS CN model parameters obtained under one hypothesis about λ to another do not incorporate any adaptation to the climatic conditions of each watershed. For this reason, we propose a new, simple method for computing model parameters that adapts to local conditions by taking regional climate patterns into account. After checking the goodness of this procedure against the existing ones in 34 different watersheds located in Ohio and Texas (United States), we conclude that this novel methodology is the most accurate and efficient alternative for refitting the initial abstraction ratio.
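
    For reference, the underlying NRCS CN runoff equation with the initial abstraction ratio λ as an explicit parameter is sketched below; the storm depth and retention values are illustrative.

```python
def scs_cn_runoff(P, S, lam=0.2):
    """NRCS CN direct runoff depth:
    Q = (P - lam*S)^2 / (P - lam*S + S) for P > lam*S, else 0.
    P and S in consistent depth units; lam is the initial abstraction
    ratio (0.2 classically, 0.05 in the proposed revision)."""
    Ia = lam * S
    return (P - Ia) ** 2 / (P - Ia + S) if P > Ia else 0.0

# Same storm and retention, two abstraction ratios (illustrative values)
P, S = 60.0, 80.0  # mm
print(scs_cn_runoff(P, S, lam=0.2), scs_cn_runoff(P, S, lam=0.05))
```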

  17. Automated external defibrillators in schools?

    PubMed

    Cornelis, Charlotte; Calle, Paul; Mpotos, Nicolas; Monsieurs, Koenraad

    2015-06-01

    Automated external defibrillators (AEDs) placed in public locations can save the lives of cardiac arrest victims. In this paper, we estimate the cost-effectiveness of AED placement in Belgian schools, which would allow school policy makers to make an evidence-based decision about an on-site AED project. We developed a simple mathematical model containing literature data on the incidence of cardiac arrest with a shockable rhythm, the feasibility and effectiveness of defibrillation by on-site AEDs, and the survival benefit. This was coupled to a rough estimate of the minimal costs of initiating an AED project. According to this model, AED projects in all Belgian schools would save 5 patients annually. A rough estimate of the minimal cost of initiating an AED project is 660 EUR per year. As there are about 6,000 schools in Belgium, a national AED project in all schools would imply an annual cost of at least 3,960,000 EUR, resulting in 5 lives saved. As our literature survey shows that AED use in schools is feasible and effective, the placement of these devices in all Belgian schools is certainly to be considered. The major counter-arguments are the very low incidence and the high cost of setting up a school-based AED programme. Our review may fuel the discussion about whether school-based AED projects represent good value for money and should be preferred over other health care interventions.
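
    The cost arithmetic in the abstract is simple enough to write out directly:

```python
# Worked version of the abstract's cost arithmetic
schools = 6000
annual_cost_per_school = 660        # EUR, minimal estimate
lives_saved_per_year = 5

total_cost = schools * annual_cost_per_school
print(f"Annual programme cost: {total_cost:,} EUR")          # 3,960,000 EUR
print(f"Cost per life saved:   {total_cost / lives_saved_per_year:,.0f} EUR")
```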

  18. Bearings Only Air-to-Air Ranging

    DTIC Science & Technology

    1988-07-25

    directly in front of the observer when first detected, more time will be needed for a good estimate. A sound approach then is for the observer, having...altitude angle to provide an estimate of the z component. Moving targets commonly require some 60 seconds for good estimates of target location and...fixed target case, where a good strategy for the observer can be determined a priori, highly effective maneuvers for the observer in the case of a moving

  19. Estimation of arterial input by a noninvasive image derived method in brain H2 15O PET study: confirmation of arterial location using MR angiography

    NASA Astrophysics Data System (ADS)

    Muinul Islam, Muhammad; Tsujikawa, Tetsuya; Mori, Tetsuya; Kiyono, Yasushi; Okazawa, Hidehiko

    2017-06-01

    A noninvasive method to estimate the input function directly from H2 15O brain PET data for the measurement of cerebral blood flow (CBF) is proposed in this study. The image-derived input function (IDIF) method extracted the time-activity curves (TAC) of the major cerebral arteries at the skull base from the dynamic PET data. The extracted primordial IDIF showed almost the same radioactivity as the arterial input function (AIF) from sampled blood at the plateau in the later phase, but significantly lower radioactivity in the initial arterial phase compared with the AIF-TAC. To correct the initial part of the IDIF, a dispersion function was applied, and the two constants for the correction were determined by fitting to the individual AIF in 15 patients with unilateral arterial steno-occlusive lesions. The areas under the curves (AUC) of the two input functions showed good agreement, with a mean AUC(IDIF)/AUC(AIF) ratio of 0.92 ± 0.09. The final products, CBF and arterial-to-capillary vascular volume (V0), obtained from the IDIF and the AIF showed no difference and were highly correlated.
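
    One common mono-exponential dispersion model gives a simple correction of the form AIF(t) ≈ IDIF(t) + τ·dIDIF/dt. The sketch below implements that version; whether it matches the exact dispersion function and the two fitted constants used in the paper is an assumption.

```python
import numpy as np

def dispersion_correct(idif, dt, tau):
    """Correct an image-derived input function for mono-exponential
    dispersion d(t) = (1/tau)*exp(-t/tau): since the measured curve is
    the true input convolved with d, the inverse is
    AIF(t) = IDIF(t) + tau * dIDIF/dt."""
    return idif + tau * np.gradient(idif, dt)
```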

  20. A state space based approach to localizing single molecules from multi-emitter images.

    PubMed

    Vahid, Milad R; Chao, Jerry; Ward, E Sally; Ober, Raimund J

    2017-01-28

    Single molecule super-resolution microscopy is a powerful tool that enables imaging at sub-diffraction-limit resolution. In this technique, subsets of stochastically photoactivated fluorophores are imaged over a sequence of frames and accurately localized, and the estimated locations are used to construct a high-resolution image of the cellular structures labeled by the fluorophores. Available localization methods typically first determine the regions of the image that contain emitting fluorophores through a process referred to as detection. Then, the locations of the fluorophores are estimated accurately in an estimation step. We propose a novel localization method which combines the detection and estimation steps. The method models the given image as the frequency response of a multi-order system obtained with a balanced state space realization algorithm based on the singular value decomposition of a Hankel matrix, and determines the locations of intensity peaks in the image as the pole locations of the resulting system. The locations of the most significant peaks correspond to the locations of single molecules in the original image. Although the accuracy of the location estimates is reasonably good, we demonstrate that, by using the estimates as the initial conditions for a maximum likelihood estimator, refined estimates can be obtained that have a standard deviation close to the Cramér-Rao lower bound-based limit of accuracy. We validate our method using both simulated and experimental multi-emitter images.

  1. The Initial Mass Function in the Nearest Strong Lenses from SNELLS: Assessing the Consistency of Lensing, Dynamical, and Spectroscopic Constraints

    NASA Astrophysics Data System (ADS)

    Newman, Andrew B.; Smith, Russell J.; Conroy, Charlie; Villaume, Alexa; van Dokkum, Pieter

    2017-08-01

    We present new observations of the three nearest early-type galaxy (ETG) strong lenses discovered in the SINFONI Nearby Elliptical Lens Locator Survey (SNELLS). Based on their lensing masses, these ETGs were inferred to have a stellar initial mass function (IMF) consistent with that of the Milky Way, not the bottom-heavy IMF that has been reported as typical for high-σ ETGs based on lensing, dynamical, and stellar population synthesis techniques. We use these unique systems to test the consistency of IMF estimates derived from different methods. We first estimate the stellar M */L using lensing and stellar dynamics. We then fit high-quality optical spectra of the lenses using an updated version of the stellar population synthesis models developed by Conroy & van Dokkum. When examined individually, we find good agreement among these methods for one galaxy. The other two galaxies show 2-3σ tension with lensing estimates, depending on the dark matter contribution, when considering IMFs that extend to 0.08 M ⊙. Allowing a variable low-mass cutoff or a nonparametric form of the IMF reduces the tension among the IMF estimates to <2σ. There is moderate evidence for a reduced number of low-mass stars in the SNELLS spectra, but no such evidence in a composite spectrum of matched-σ ETGs drawn from the SDSS. Such variation in the form of the IMF at low stellar masses (m ≲ 0.3 M ⊙), if present, could reconcile lensing/dynamical and spectroscopic IMF estimates for the SNELLS lenses and account for their lighter M */L relative to the mean matched-σ ETG. We provide the spectra used in this study to facilitate future comparisons.

  2. The Initial Mass Function in the Nearest Strong Lenses from SNELLS: Assessing the Consistency of Lensing, Dynamical, and Spectroscopic Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, Andrew B.; Smith, Russell J.; Conroy, Charlie

    2017-08-20

    We present new observations of the three nearest early-type galaxy (ETG) strong lenses discovered in the SINFONI Nearby Elliptical Lens Locator Survey (SNELLS). Based on their lensing masses, these ETGs were inferred to have a stellar initial mass function (IMF) consistent with that of the Milky Way, not the bottom-heavy IMF that has been reported as typical for high-σ ETGs based on lensing, dynamical, and stellar population synthesis techniques. We use these unique systems to test the consistency of IMF estimates derived from different methods. We first estimate the stellar M*/L using lensing and stellar dynamics. We then fit high-quality optical spectra of the lenses using an updated version of the stellar population synthesis models developed by Conroy and van Dokkum. When examined individually, we find good agreement among these methods for one galaxy. The other two galaxies show 2-3σ tension with lensing estimates, depending on the dark matter contribution, when considering IMFs that extend to 0.08 M⊙. Allowing a variable low-mass cutoff or a nonparametric form of the IMF reduces the tension among the IMF estimates to <2σ. There is moderate evidence for a reduced number of low-mass stars in the SNELLS spectra, but no such evidence in a composite spectrum of matched-σ ETGs drawn from the SDSS. Such variation in the form of the IMF at low stellar masses (m ≲ 0.3 M⊙), if present, could reconcile lensing/dynamical and spectroscopic IMF estimates for the SNELLS lenses and account for their lighter M*/L relative to the mean matched-σ ETG. We provide the spectra used in this study to facilitate future comparisons.

  3. 49 CFR 375.409 - May household goods brokers provide estimates?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... REGULATIONS TRANSPORTATION OF HOUSEHOLD GOODS IN INTERSTATE COMMERCE; CONSUMER PROTECTION REGULATIONS... there is a written agreement between the broker and you, the carrier, adopting the broker's estimate as...

  4. Leachate concentrations from water leach and column leach tests on fly ash-stabilized soils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bin-Shafique, S.; Benson, C.H.; Edil, T.B.

    2006-01-15

    Batch water leaching tests (WLTs) and column leaching tests (CLTs) were conducted on coal-combustion fly ashes, soil, and soil-fly ash mixtures to characterize leaching of Cd, Cr, Se, and Ag. The concentrations of these metals were also measured in the field at two sites where soft fine-grained soils were mechanically stabilized with fly ash. Concentrations in leachate from the WLTs on soil-fly ash mixtures are different from those on fly ash alone and cannot be accurately estimated based on linear dilution calculations using concentrations from WLTs on fly ash alone. The concentration varies nonlinearly with fly ash content due to the variation in pH with fly ash content. Leachate concentrations are low when the pH of the leachate or the cation exchange capacity (CEC) of the soil is high. Initial concentrations from CLTs are higher than concentrations from WLTs due to differences in solid-liquid ratio, pH, and solid-liquid contact. However, both exhibit similar trends with fly ash content, leachate pH, and soil properties. Scaling factors can be applied to WLT concentrations (50 for Ag and Cd, 10 for Cr and Se) to estimate initial concentrations for CLTs. Concentrations in leachate collected from the field sites were generally similar or slightly lower than concentrations measured in CLTs on the same materials. Thus, CLTs appear to provide a good indication of conditions that occur in the field provided that the test conditions mimic the field conditions. In addition, initial concentrations in the field can be conservatively estimated from WLT concentrations using the aforementioned scaling factors provided that the pH of the infiltrating water is near neutral.

  5. Assessment of initial soil moisture conditions for event-based rainfall-runoff modelling

    NASA Astrophysics Data System (ADS)

    Tramblay, Yves; Bouvier, Christophe; Martin, Claude; Didon-Lescot, Jean-François; Todorovik, Dragana; Domergue, Jean-Marc

    2010-06-01

    Flash floods are the most destructive natural hazards that occur in the Mediterranean region. Rainfall-runoff models can be very useful for flash flood forecasting and prediction. Event-based models are very popular for operational purposes, but there is a need to reduce the uncertainties related to the initial moisture conditions estimation prior to a flood event. This paper aims to compare several soil moisture indicators: local Time Domain Reflectometry (TDR) measurements of soil moisture, modelled soil moisture through the Interaction-Sol-Biosphère-Atmosphère (ISBA) component of the SIM model (Météo-France), antecedent precipitation and base flow. A modelling approach based on the Soil Conservation Service-Curve Number method (SCS-CN) is used to simulate the flood events in a small headwater catchment in the Cevennes region (France). The model involves two parameters: one for the runoff production, S, and one for the routing component, K. The S parameter can be interpreted as the maximal water retention capacity, and acts as the initial condition of the model, depending on the antecedent moisture conditions. The model was calibrated from a 20-flood sample, and led to a median Nash value of 0.9. The local TDR measurements in the deepest layers of soil (80-140 cm) were found to be the best predictors for the S parameter. TDR measurements averaged over the whole soil profile, outputs of the SIM model, and the logarithm of base flow also proved to be good predictors, whereas antecedent precipitations were found to be less efficient. The good correlations observed between the TDR predictors and the calibrated S values indicate that monitoring soil moisture could help set the initial conditions for simplified event-based models in small basins.
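
    For readers unfamiliar with the runoff-production step, here is a minimal sketch of the standard SCS-CN relation, with S acting as the moisture-dependent initial condition described above; the 0.2·S initial abstraction is the usual SCS assumption, and the input values are illustrative rather than taken from the Cevennes catchment.

        def scs_cn_runoff(rainfall_mm, s_mm):
            """Direct runoff depth Q (mm) from event rainfall P and retention capacity S."""
            ia = 0.2 * s_mm                      # initial abstraction (standard SCS assumption)
            if rainfall_mm <= ia:
                return 0.0
            return (rainfall_mm - ia) ** 2 / (rainfall_mm + 0.8 * s_mm)

        print(scs_cn_runoff(rainfall_mm=120.0, s_mm=80.0))  # ~58.8 mm of runoff

    In an event-based setting, S would be set before each flood from an antecedent moisture indicator such as the deep-layer TDR readings the study found most predictive.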

  6. Estimation of Length-Scales in Soils by MRI

    NASA Technical Reports Server (NTRS)

    Daidzic, N. E.; Altobelli, S.; Alexander, J. I. D.

    2004-01-01

    Soil is best described as an unconsolidated granular medium that forms a porous structure. The present macroscopic theory of water transport in porous media rests upon the continuum hypothesis that the physical properties of porous media can be associated with continuous, twice-differentiable field variables whose spatial domain is a set of centroids of Representative Elementary Volume (REV) elements. MRI is an ideal technique to estimate various length-scales in porous media. A 0.267 T permanent magnet at NASA GRC was used for this study. 2D and 3D spatially resolved porosity distributions were obtained from the NMR signal strength from each voxel and the spin-lattice relaxation time. Classical spin-warp imaging with Multiple Spin Echoes (MSE) was used to evaluate proton density in each voxel. The initial resolution of 256 x 256 was subsequently reduced by averaging neighboring voxels, and the porosity convergence was observed. A number of engineered "space candidate" soils such as Isolite™, Zeoponics™, Turface™, and Profile™ were used. Glass beads in the size range between 50 microns and 2 mm were used as well. Initial results with saturated porous samples have shown a good estimate of the average porosity, consistent with the gravimetric porosity measurement results. For Profile™ samples with particle sizes ranging from 0.25 to 1 mm and a characteristic interparticle pore size of 100 microns, the characteristic Darcy scale was estimated to be about δ_REV = 10 mm. Glass bead porosity shows clear convergence toward a definite REV, which stays constant throughout a homogeneous sample. Additional information is included in the original extended abstract.
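
    A hedged sketch of the REV-convergence idea described above: average a voxel-level porosity map over progressively larger blocks and watch the spread of block porosities collapse toward a single value. The random field below stands in for the MRI-derived per-voxel porosity.

        import numpy as np

        rng = np.random.default_rng(0)
        porosity = rng.normal(0.40, 0.08, size=(256, 256)).clip(0.0, 1.0)

        for block in (2, 4, 8, 16, 32):
            n = 256 // block
            coarse = porosity.reshape(n, block, n, block).mean(axis=(1, 3))
            print(f"block {block:>2}: mean = {coarse.mean():.3f}, std = {coarse.std():.4f}")
        # The block size at which the spread becomes negligible indicates delta_REV.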

  7. A simple Lagrangian forecast system with aviation forecast potential

    NASA Technical Reports Server (NTRS)

    Petersen, R. A.; Homan, J. H.

    1983-01-01

    A trajectory forecast procedure is developed which uses geopotential tendency fields obtained from a simple, multiple layer, potential vorticity conservative isentropic model. This model can objectively account for short-term advective changes in the mass field when combined with fine-scale initial analyses. This procedure for producing short-term, upper-tropospheric trajectory forecasts employs a combination of a detailed objective analysis technique, an efficient mass advection model, and a diagnostically proven trajectory algorithm, none of which require extensive computer resources. Results of initial tests are presented, which indicate an exceptionally good agreement for trajectory paths entering the jet stream and passing through an intensifying trough. It is concluded that this technique not only has potential for aiding in route determination, fuel use estimation, and clear air turbulence detection, but also provides an example of the types of short range forecasting procedures which can be applied at local forecast centers using simple algorithms and a minimum of computer resources.
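
    A hedged toy of the advection step: integrate a parcel through a wind field with an inexpensive two-stage (Heun) update, in keeping with the abstract's emphasis on simple algorithms and minimal computer resources. The analytic wind field and all parameters are invented for illustration.

        import numpy as np

        def wind(p):
            """Idealized wind (m/s): a jet whose u-component grows with y (km)."""
            x, y = p
            return np.array([20.0 + 0.5 * y, 2.0 * np.sin(x / 500.0)])

        def advect(p0, dt_s=600.0, n_steps=36):
            p = np.array(p0, dtype=float)          # position in km
            for _ in range(n_steps):
                k1 = wind(p)
                k2 = wind(p + dt_s * k1 / 1000.0)  # predictor step (m -> km)
                p += 0.5 * dt_s * (k1 + k2) / 1000.0
            return p

        print(advect([0.0, 10.0]))                 # parcel position after 6 h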

  8. Veterans Affairs facility performance on Washington Circle indicators and casemix-adjusted effectiveness.

    PubMed

    Harris, Alex H S; Humphreys, Keith; Finney, John W

    2007-12-01

    Self-administered Addiction Severity Index (ASI) data were collected on 5,723 patients who received substance abuse treatment in 1 of 110 programs located at 73 Veterans Affairs facilities. The associations between each of three Washington Circle (WC) performance indicator scores (identification, initiation, and engagement) and their casemix-adjusted facility-level improvement in ASI drug and alcohol composites 7 months after intake were estimated. Higher initiation rates were not associated with facility-level improvement in ASI alcohol composite scores but were modestly associated with greater improvements in ASI drug composite scores. Identification and engagement rates were unrelated to 7-month outcomes. WC indicators focused on the early stages of treatment may tap necessary but insufficient processes for patients with substance use disorder to achieve good posttreatment outcomes. Ideally, the WC indicators would be supplemented with other measures of treatment quality.

  9. An Introduction to Goodness of Fit for PMU Parameter Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riepnieks, Artis; Kirkham, Harold

    2017-10-01

    New results of measurements of phasor-like signals are presented based on our previous work on the topic. In this document an improved estimation method is described. The algorithm (which is realized in MATLAB software) is discussed. We examine the effect of noisy and distorted signals on the Goodness of Fit metric. The estimation method is shown to perform very well with clean data, with a measurement window as short as half a cycle and as few as 5 samples per cycle. The Goodness of Fit decreases predictably with added phase noise, and seems to be acceptable even with visible distortion in the signal. While the exact results we obtain are specific to our method of estimation, the Goodness of Fit method could be implemented in any phasor measurement unit.
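
    The abstract does not spell out the estimator, so the sketch below is a generic stand-in: a fixed-frequency sinusoid is linear in its in-phase/quadrature amplitudes, so ordinary least squares recovers the phasor even from one cycle at 5 samples per cycle, and the residual supplies a goodness-of-fit style metric.

        import numpy as np

        rng = np.random.default_rng(0)
        f0, fs = 50.0, 250.0                       # 5 samples per cycle
        t = np.arange(5) / fs                      # one-cycle window
        x = 1.2 * np.cos(2 * np.pi * f0 * t + 0.7) + 0.01 * rng.standard_normal(t.size)

        # x = a*cos(wt) - b*sin(wt) is linear in (a, b)
        A = np.column_stack([np.cos(2 * np.pi * f0 * t), -np.sin(2 * np.pi * f0 * t)])
        (a, b), *_ = np.linalg.lstsq(A, x, rcond=None)
        amplitude, phase = np.hypot(a, b), np.arctan2(b, a)
        residual = x - A @ np.array([a, b])
        gof = 1.0 - np.linalg.norm(residual) / np.linalg.norm(x)   # crude fit metric
        print(amplitude, phase, gof)               # ~1.2, ~0.7, close to 1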

  10. Estimating Premorbid Cognitive Abilities in Low-Educated Populations

    PubMed Central

    Apolinario, Daniel; Brucki, Sonia Maria Dozzi; Ferretti, Renata Eloah de Lucena; Farfel, José Marcelo; Magaldi, Regina Miksian; Busse, Alexandre Leopold; Jacob-Filho, Wilson

    2013-01-01

    Objective To develop an informant-based instrument that would provide a valid estimate of premorbid cognitive abilities in low-educated populations. Methods A questionnaire was drafted by focusing on the premorbid period with a 10-year time frame. The initial pool of items was submitted to classical test theory and a factorial analysis. The resulting instrument, named the Premorbid Cognitive Abilities Scale (PCAS), is composed of questions addressing educational attainment, major lifetime occupation, reading abilities, reading habits, writing abilities, calculation abilities, use of widely available technology, and the ability to search for specific information. The validation sample was composed of 132 older Brazilian adults from the following three demographically matched groups: normal cognitive aging (n = 72), mild cognitive impairment (n = 33), and mild dementia (n = 27). The scores of a reading test and a neuropsychological battery were adopted as construct criteria. Post-mortem inter-informant reliability was tested in a sub-study with two relatives from each deceased individual. Results All items presented good discriminative power, with corrected item-total correlation varying from 0.35 to 0.74. The summed score of the instrument presented high correlation coefficients with global cognitive function (r = 0.73) and reading skills (r = 0.82). Cronbach's alpha was 0.90, showing optimal internal consistency without redundancy. The scores did not decrease across the progressive levels of cognitive impairment, suggesting that the goal of evaluating the premorbid state was achieved. The intraclass correlation coefficient was 0.96, indicating excellent inter-informant reliability. Conclusion The instrument developed in this study has shown good properties and can be used as a valid estimate of premorbid cognitive abilities in low-educated populations. The applicability of the PCAS, both as an estimate of premorbid intelligence and cognitive reserve, is discussed. PMID:23555894
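
    For reference, two of the reliability statistics reported above can be computed in a few lines; the sketch below uses a hypothetical 132-respondent, 8-item response matrix with a shared latent trait, not the actual PCAS data.

        import numpy as np

        def cronbach_alpha(items):
            """items: (n_respondents, n_items) score matrix."""
            k = items.shape[1]
            return k / (k - 1) * (1.0 - items.var(axis=0, ddof=1).sum()
                                  / items.sum(axis=1).var(ddof=1))

        def corrected_item_total(items, j):
            """Correlation of item j with the sum of the remaining items."""
            rest = np.delete(items, j, axis=1).sum(axis=1)
            return np.corrcoef(items[:, j], rest)[0, 1]

        rng = np.random.default_rng(4)
        trait = rng.normal(0.0, 1.0, size=(132, 1))            # shared latent ability
        scores = trait + rng.normal(0.0, 0.7, size=(132, 8))   # 8 PCAS-like items
        print(cronbach_alpha(scores), corrected_item_total(scores, 0))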

  11. TRUE MASSES OF RADIAL-VELOCITY EXOPLANETS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Robert A., E-mail: rbrown@stsci.edu

    We study the task of estimating the true masses of known radial-velocity (RV) exoplanets by means of direct astrometry on coronagraphic images to measure the apparent separation between exoplanet and host star. Initially, we assume perfect knowledge of the RV orbital parameters and that all errors are due to photon statistics. We construct design reference missions (DRMs) for four missions currently under study at NASA: EXO-S and WFIRST-S, with external star shades for starlight suppression, and EXO-C and WFIRST-C, with internal coronagraphs. These DRMs reveal extreme scheduling constraints due to the combination of solar and anti-solar pointing restrictions, photometric and obscurational completeness, image blurring due to orbital motion, and the “nodal effect,” which is the independence of apparent separation and inclination when the planet crosses the plane of the sky through the host star. Next, we address the issue of nonzero uncertainties in RV orbital parameters by investigating their impact on the observations of 21 single-planet systems. Except for two—GJ 676 A b and 16 Cyg B b, which are observable only by the star-shade missions—we find that current uncertainties in orbital parameters generally prevent accurate, unbiased estimation of true planetary mass. For the coronagraphs, WFIRST-C and EXO-C, the most likely number of good estimators of true mass is currently zero. For the star shades, EXO-S and WFIRST-S, the most likely numbers of good estimators are three and four, respectively, including GJ 676 A b and 16 Cyg B b. We expect that uncertain orbital elements currently undermine all potential programs of direct imaging and spectroscopy of RV exoplanets.

  12. Computing the electric field from extensive air showers using a realistic description of the atmosphere

    NASA Astrophysics Data System (ADS)

    Gaté, F.; Revenu, B.; García-Fernández, D.; Marin, V.; Dallier, R.; Escudié, A.; Martin, L.

    2018-03-01

    The composition of ultra-high energy cosmic rays is still poorly known and constitutes a very important topic in the field of high-energy astrophysics. Detection of ultra-high energy cosmic rays is carried out via the extensive air showers they create after interacting with the atmosphere's constituents. The secondary electrons and positrons within the showers emit a detectable electric field in the kHz-GHz range. It is possible to use this radio signal for the estimation of the atmospheric depth of maximal development of the showers, Xmax, with good accuracy and a duty cycle close to 100%. This value of Xmax is strongly correlated with the nature of the primary cosmic ray that initiated the shower. We show in this paper the importance of using a realistic atmospheric model in order to correct for systematic errors that can prevent a correct and unbiased estimation of Xmax.

  13. Review and statistical analysis of the use of ultrasonic velocity for estimating the porosity fraction in polycrystalline materials

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Swickard, S. M.; Stang, D. B.; Deguire, M. R.

    1991-01-01

    A review and statistical analysis of the ultrasonic velocity method for estimating the porosity fraction in polycrystalline materials is presented. Initially, a semiempirical model is developed showing the origin of the linear relationship between ultrasonic velocity and porosity fraction. Then, from a compilation of data produced by many researchers, scatter plots of velocity versus percent porosity data are shown for Al2O3, MgO, porcelain-based ceramics, PZT, SiC, Si3N4, steel, tungsten, UO2,(U0.30Pu0.70)C, and YBa2Cu3O(7-x). Linear regression analysis produces predicted slope, intercept, correlation coefficient, level of significance, and confidence interval statistics for the data. Velocity values predicted from regression analysis of fully-dense materials are in good agreement with those calculated from elastic properties.
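
    A minimal sketch of the regression at the heart of the method: velocity declines roughly linearly with porosity fraction, and the intercept predicts the fully dense velocity. The data points below are hypothetical, not drawn from the paper's compilation.

        import numpy as np
        from scipy import stats

        porosity = np.array([0.02, 0.05, 0.10, 0.15, 0.20, 0.30])   # volume fraction
        velocity = np.array([10.4, 10.1, 9.5, 9.0, 8.4, 7.3])       # km/s (invented)

        fit = stats.linregress(porosity, velocity)
        print(f"slope = {fit.slope:.2f} km/s per unit porosity")
        print(f"fully dense velocity (intercept) = {fit.intercept:.2f} km/s")
        print(f"r = {fit.rvalue:.3f}, p = {fit.pvalue:.2e}")        # significance stats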

  14. Job Stress in the United Kingdom: Are Small and Medium-Sized Enterprises and Large Enterprises Different?

    PubMed

    Lai, Yanqing; Saridakis, George; Blackburn, Robert

    2015-08-01

    This paper examines the relationships between firm size and employees' experience of work stress. We used a matched employer-employee dataset (Workplace Employment Relations Survey 2011) that comprises 7182 employees from 1210 private organizations in the United Kingdom. Initially, we find that employees in small and medium-sized enterprises experience a lower level of overall job stress than those in large enterprises, although the effect disappears when we control for individual and organizational characteristics in the model. We also find that quantitative work overload, job insecurity and poor promotion opportunities, good work relationships and poor communication are strongly associated with job stress in small and medium-sized enterprises, whereas qualitative work overload, poor job autonomy and employee engagement are more strongly associated with job stress in larger enterprises. Hence, our estimates show that the association and magnitude of estimated effects differ significantly by enterprise size. Copyright © 2013 John Wiley & Sons, Ltd.

  15. Review and statistical analysis of the ultrasonic velocity method for estimating the porosity fraction in polycrystalline materials

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Swickard, S. M.; Stang, D. B.; Deguire, M. R.

    1990-01-01

    A review and statistical analysis of the ultrasonic velocity method for estimating the porosity fraction in polycrystalline materials is presented. Initially, a semi-empirical model is developed showing the origin of the linear relationship between ultrasonic velocity and porosity fraction. Then, from a compilation of data produced by many researchers, scatter plots of velocity versus percent porosity data are shown for Al2O3, MgO, porcelain-based ceramics, PZT, SiC, Si3N4, steel, tungsten, UO2,(U0.30Pu0.70)C, and YBa2Cu3O(7-x). Linear regression analysis produced predicted slope, intercept, correlation coefficient, level of significance, and confidence interval statistics for the data. Velocity values predicted from regression analysis for fully-dense materials are in good agreement with those calculated from elastic properties.

  16. Learning about new products: an empirical study of physicians' behavior.

    PubMed

    Ferreyra, Maria Marta; Kosenok, Grigory

    2011-01-01

    We develop and estimate a model of market demand for a new pharmaceutical, whose quality is learned through prescriptions by forward-looking physicians. We use a panel of antiulcer prescriptions from Italian physicians between 1990 and 1992 and focus on a new molecule available since 1990. We solve the model by calculating physicians' optimal decision rules as functions of their beliefs about the new pharmaceutical. According to our counterfactuals, physicians' initial pessimism and uncertainty can have large, negative effects on their propensity to prescribe the new drug and on expected health outcomes. In contrast, subsidizing the new good can mitigate informational losses.

  17. Very High Cycle Fatigue Behavior of a Directionally Solidified Ni-Base Superalloy DZ4

    PubMed Central

    Nie, Baohua; Zhao, Zihua; Liu, Shu; Chen, Dongchu; Ouyang, Yongzhong; Hu, Zhudong; Fan, Touwen; Sun, Haibo

    2018-01-01

    The effect of casting pores on the very high cycle fatigue (VHCF) behavior of a directionally solidified (DS) Ni-base superalloy DZ4 is investigated. Casting and hot isostatic pressing (HIP) specimens were subjected to very high cycle fatigue loading in an ambient atmosphere. The results demonstrated that continuously descending S-N curves were exhibited by both the casting and HIP specimens. Due to the elimination of the casting pores, the HIP samples had better fatigue properties than the casting samples. The subsurface crack initiated from the casting pore in the casting specimens at low stress amplitudes, whereas fatigue cracks initiated from crystallographic facet decohesion in the HIP specimens. When considering the casting pores as initial cracks, there exists a critical stress intensity threshold ranging from 1.1 to 1.3 MPa·√m, below which fatigue cracks may not initiate from the casting pores. Furthermore, the effect of the casting pores on the fatigue limit is estimated based on a modified El Haddad model, which is in good agreement with the experimental results. Fatigue life for both the casting and HIP specimens is well predicted using the Fatigue Indicator Parameter (FIP) model. PMID:29320429
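
    For orientation, the sketch below implements the classical (unmodified) El Haddad correction, which folds an initial defect of size a into the fatigue limit; the paper applies a modified version, and every number except the quoted 1.1-1.3 MPa·√m threshold range is a placeholder.

        import math

        def el_haddad_limit(sigma_w0_mpa, a_m, dk_th, y=0.65):
            """Fatigue limit (MPa) reduced by an initial crack/pore of size a (m)."""
            a0 = (dk_th / (y * sigma_w0_mpa)) ** 2 / math.pi   # intrinsic crack size
            return sigma_w0_mpa * math.sqrt(a0 / (a_m + a0))

        for dk_th in (1.1, 1.3):                               # MPa*sqrt(m), from the abstract
            print(dk_th, el_haddad_limit(sigma_w0_mpa=400.0, a_m=50e-6, dk_th=dk_th))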

  18. Stable forming conditions and geometrical expansion of L-shape rings in ring rolling process

    NASA Astrophysics Data System (ADS)

    Quagliato, Luca; Berti, Guido A.; Kim, Dongwook; Kim, Naksoo

    2018-05-01

    Based on previous research results concerning the radial-axial ring rolling process of flat rings, this paper details an innovative approach for the determination of the stable forming conditions to successfully simulate the radial ring rolling process of L-shape profiled rings. In addition, an analytical model for the estimation of the geometrical expansion of L-shape rings from an initial flat ring preform is proposed and validated by comparing its results with those of numerical simulations. By utilizing the proposed approach, steady forming conditions could be achieved, granting a uniform expansion of the ring throughout the process for all six tested cases of rings, with final outer flange diameters ranging from 545 mm to 1440 mm. The validation of the proposed approach allowed concluding that the geometrical expansion of the ring, as estimated by the proposed analytical model, is in good agreement with the results of the numerical simulation, with maximum errors of 2.18% for the ring wall diameter, 1.42% for the ring flange diameter, and 1.87% for the inner diameter of the ring.

  19. On-board adaptive model for state of charge estimation of lithium-ion batteries based on Kalman filter with proportional integral-based error adjustment

    NASA Astrophysics Data System (ADS)

    Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai

    2017-10-01

    With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent circuit battery model is employed to describe battery dynamic characteristics. Secondly, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of model parameters to ensure filter stability and reduce the convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate battery SOC and model parameters on-line. Because the EKF is essentially a first-order Taylor approximation of the battery model and therefore contains inevitable model errors, a proportional integral-based error adjustment technique is employed to improve the performance of the EKF method and correct the model parameters. Finally, the experimental results on lithium-ion batteries indicate that the proposed EKF with proportional integral-based error adjustment can provide a robust and accurate battery model and on-line parameter estimation.
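
    A minimal sketch of one predict/update cycle of such a filter, assuming a first-order RC equivalent circuit (R0 in series with an R1||C1 branch) and a linearized open-circuit-voltage curve; all parameter values, the OCV curve and the noise covariances are illustrative, and the paper's proportional integral correction is omitted.

        import numpy as np

        R0, R1, C1, Q = 0.01, 0.015, 2400.0, 2.0 * 3600.0   # ohm, ohm, F, coulomb
        dt = 1.0
        a1 = np.exp(-dt / (R1 * C1))

        def ocv(soc):            # hypothetical linearized open-circuit voltage
            return 3.4 + 0.8 * soc

        def d_ocv(soc):          # its slope, used in the measurement Jacobian
            return 0.8

        Qn, Rn = np.diag([1e-7, 1e-6]), 1e-3                # process/measurement noise

        def ekf_step(x, P, current, v_meas):
            # predict: coulomb counting for SOC, relaxation for the RC voltage
            x = np.array([x[0] - current * dt / Q,
                          a1 * x[1] + R1 * (1.0 - a1) * current])
            F = np.diag([1.0, a1])
            P = F @ P @ F.T + Qn
            # update against the measured terminal voltage
            H = np.array([d_ocv(x[0]), -1.0])
            v_pred = ocv(x[0]) - x[1] - R0 * current
            K = P @ H / (H @ P @ H + Rn)
            x = x + K * (v_meas - v_pred)
            P = (np.eye(2) - np.outer(K, H)) @ P
            return x, P

        x, P = np.array([0.5, 0.0]), np.diag([0.1, 0.01])   # state: [SOC, V_RC]
        x, P = ekf_step(x, P, current=2.0, v_meas=3.76)
        print(x)                                            # updated SOC estimate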

  20. Edge-oriented dual-dictionary guided enrichment (EDGE) for MRI-CT image reconstruction.

    PubMed

    Li, Liang; Wang, Bigong; Wang, Ge

    2016-01-01

    In this paper, we formulate the joint/simultaneous X-ray CT and MRI image reconstruction problem. In particular, a novel algorithm is proposed for MRI image reconstruction from highly under-sampled MRI data and CT images. It consists of two steps. First, a training dataset is generated from a series of well-registered MRI and CT images of the same patients. Then, an initial MRI image of a patient can be reconstructed via edge-oriented dual-dictionary guided enrichment (EDGE) based on the training dataset and a CT image of the patient. Second, an MRI image is reconstructed using the dictionary learning (DL) algorithm from highly under-sampled k-space data and the initial MRI image. Our algorithm can establish a one-to-one correspondence between the two imaging modalities and obtain a good initial MRI estimate. Both noise-free and noisy simulation studies were performed to evaluate and validate the proposed algorithm. The results with different under-sampling factors show that the proposed algorithm performed significantly better than reconstructions using the DL algorithm from MRI data alone.

  1. Networked localization of sniper shots using acoustics

    NASA Astrophysics Data System (ADS)

    Hengy, S.; Hamery, P.; De Mezzo, S.; Duffner, P.

    2011-06-01

    The presence of snipers in modern conflicts leads to high insecurity for the soldiers. In order to improve the soldiers' protection against this threat, the French German Research Institute of Saint-Louis (ISL) initiated studies in the domain of acoustic localization of shots. Mobile antennas mounted on the soldier's helmet were initially used for real-time detection, classification and localization of sniper shots. This approach showed good performance in land scenarios, and also in urban scenarios if the array was in the shot corridor, meaning that the microphones first detect the direct wave and then the reflections of the Mach and muzzle waves. As soon as the acoustic arrays were not near the shot corridor (only reflections are detected), this solution lost its efficiency and erroneous position estimates were given. In order to estimate the position of the shooter in every kind of urban scenario, ISL started studying time reversal techniques. Knowing the position of every reflective object in the environment (buildings, walls, ...), it should be possible to estimate the position of the shooter. First, a synthetic propagation algorithm was developed and validated for real-scale applications. It was then validated on small-scale models, allowing us to test our time-reversal-based algorithms in our laboratory. In this paper we discuss the challenges induced by applying time reversal techniques to sniper detection, examine the main difficulties that can be encountered, and propose solutions to optimize the use of this technique.

  2. Effect of Arrangement of Stick Figures on Estimates of Proportion in Risk Graphics

    PubMed Central

    Ancker, Jessica S.; Weber, Elke U.; Kukafka, Rita

    2017-01-01

    Background Health risks are sometimes illustrated with stick figures, with a certain proportion colored to indicate they are affected by the disease. Perception of these graphics may be affected by whether the affected stick figures are scattered randomly throughout the group or arranged in a block. Objective To assess the effects of stick-figure arrangement on first impressions of estimates of proportion, under a 10-s deadline. Design Questionnaire. Participants and Setting Respondents recruited online (n = 100) or in waiting rooms at an urban hospital (n = 65). Intervention Participants were asked to estimate the proportion represented in 6 unlabeled graphics, half randomly arranged and half sequentially arranged. Measurements Estimated proportions. Results Although average estimates were fairly good, the variability of estimates was high. Overestimates of random graphics were larger than overestimates of sequential ones, except when the proportion was near 50%; variability was also higher with random graphics. Although the average inaccuracy was modest, it was large enough that more than one quarter of respondents confused 2 graphics depicting proportions that differed by 11 percentage points. Low numeracy and educational level were associated with inaccuracy. Limitations Participants estimated proportions but did not report perceived risk. Conclusions Randomly arranged arrays of stick figures should be used with care because viewers’ ability to estimate the proportion in these graphics is so poor that moderate differences between risks may not be visible. In addition, random arrangements may create an initial impression that proportions, especially large ones, are larger than they are. PMID:20671209

  3. A spatial disorientation predictor device to enhance pilot situational awareness regarding aircraft attitude

    NASA Technical Reports Server (NTRS)

    Chelette, T. L.; Repperger, Daniel W.; Albery, W. B.

    1991-01-01

    An effort was initiated at the Armstrong Aerospace Medical Research Laboratory (AAMRL) to investigate the improvement of the situational awareness of a pilot with respect to his aircraft's spatial orientation. The end product of this study is a device to alert a pilot to potentially disorienting situations. Much like a ground collision avoidance system (GCAS) is used in fighter aircraft to alert the pilot to 'pull up' when dangerous flight paths are predicted, this device warns the pilot to put a higher priority on attention to the orientation instrument. A Kalman filter was developed which estimates the pilot's perceived position and orientation. The input to the Kalman filter consists of two classes of data. The first class of data consists of noise parameters (indicating parameter uncertainty), conflict signals (e.g. vestibular and kinesthetic signal disagreement), and some nonlinear effects. The Kalman filter's perceived estimates are now the sum of both Class 1 data (good information) and Class 2 data (distorted information). When the estimated perceived position or orientation is significantly different from the actual position or orientation, the pilot is alerted.

  4. Fitting ordinary differential equations to short time course data.

    PubMed

    Brewer, Daniel; Barenco, Martino; Callard, Robin; Hubank, Michael; Stark, Jaroslav

    2008-02-28

    Ordinary differential equations (ODEs) are widely used to model many systems in physics, chemistry, engineering and biology. Often one wants to compare such equations with observed time course data, and use this to estimate parameters. Surprisingly, practical algorithms for doing this are relatively poorly developed, particularly in comparison with the sophistication of numerical methods for solving both initial and boundary value problems for differential equations, and for locating and analysing bifurcations. A lack of good numerical fitting methods is particularly problematic in the context of systems biology where only a handful of time points may be available. In this paper, we present a survey of existing algorithms and describe the main approaches. We also introduce and evaluate a new efficient technique for estimating ODEs linear in parameters particularly suited to situations where noise levels are high and the number of data points is low. It employs a spline-based collocation scheme and alternates linear least squares minimization steps with repeated estimates of the noise-free values of the variables. This is reminiscent of expectation-maximization methods widely used for problems with nuisance parameters or missing data.
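
    A hedged sketch of the spline-based collocation idea for an ODE linear in its parameters: smooth the short, noisy time course with a spline, evaluate the spline and its derivative at collocation points, and solve a linear least squares problem for the parameters. The logistic example (dx/dt = a·x + b·x², linear in a and b) is illustrative, not from the paper.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 5.0, 12)                  # only a handful of time points
        x_true = 1.0 / (1.0 + 9.0 * np.exp(-t))       # logistic: a = 1, b = -1
        x_obs = x_true + 0.02 * rng.standard_normal(t.size)

        spline = UnivariateSpline(t, x_obs, k=3, s=t.size * 0.02 ** 2)
        tc = np.linspace(0.0, 5.0, 50)                 # collocation points
        xs, dxs = spline(tc), spline.derivative()(tc)

        A = np.column_stack([xs, xs ** 2])             # dx/dt = a*x + b*x^2
        (a_hat, b_hat), *_ = np.linalg.lstsq(A, dxs, rcond=None)
        print(a_hat, b_hat)                            # should approach 1 and -1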

  5. First Extended Catalogue of Galactic bubble infrared fluxes from WISE and Herschel surveys

    NASA Astrophysics Data System (ADS)

    Bufano, F.; Leto, P.; Carey, D.; Umana, G.; Buemi, C.; Ingallinera, A.; Bulpitt, A.; Cavallaro, F.; Riggi, S.; Trigilio, C.; Molinari, S.

    2018-01-01

    In this paper, we present the first extended catalogue of far-infrared fluxes of Galactic bubbles. Fluxes were estimated for 1814 bubbles, defined here as the 'golden sample', and were selected from the Milky Way Project First Data Release (Simpson et al.) The golden sample was comprised of bubbles identified within the Wide-field Infrared Survey Explorer (WISE) dataset (using 12- and 22-μm images) and Herschel data (using 70-, 160-, 250-, 350- and 500-μm wavelength images). Flux estimation was achieved initially via classical aperture photometry and then by an alternative image analysis algorithm that used active contours. The accuracy of the two methods was tested by comparing the estimated fluxes for a sample of bubbles, made up of 126 H II regions and 43 planetary nebulae, which were identified by Anderson et al. The results of this paper demonstrate that a good agreement between the two was found. This is by far the largest and most homogeneous catalogue of infrared fluxes measured for Galactic bubbles and it is a step towards the fully automated analysis of astronomical datasets.

  6. LAI inversion from optical reflectance using a neural network trained with a multiple scattering model

    NASA Technical Reports Server (NTRS)

    Smith, James A.

    1992-01-01

    The inversion of the leaf area index (LAI) canopy parameter from optical spectral reflectance measurements is obtained using a backpropagation artificial neural network trained using input-output pairs generated by a multiple scattering reflectance model. The problem of LAI estimation over sparse canopies (LAI < 1.0) with varying soil reflectance backgrounds is particularly difficult. Standard multiple regression methods applied to canopies within a single homogeneous soil type yield good results but perform unacceptably when applied across soil boundaries, resulting in absolute percentage errors of >1000 percent for low LAI. Minimization methods applied to merit functions constructed from differences between measured reflectances and predicted reflectances using multiple-scattering models are unacceptably sensitive to a good initial guess for the desired parameter. In contrast, the neural network reported generally yields absolute percentage errors of <30 percent when weighting coefficients trained on one soil type were applied to predicted canopy reflectance at a different soil background.
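
    A hedged sketch of the inversion-by-network idea: generate (reflectance, LAI) pairs from a forward canopy model, train a small backpropagation network on them, then invert measured reflectances. The toy two-band forward model stands in for the multiple-scattering model, and its coefficients are invented.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(2)
        lai = rng.uniform(0.0, 1.0, 2000)              # sparse canopies, LAI < 1
        soil = rng.uniform(0.05, 0.35, 2000)           # varying soil background

        def forward_model(lai, soil):
            """Toy reflectance: soil signal attenuated as canopy signal saturates."""
            red = soil * np.exp(-0.7 * lai) + 0.04 * (1.0 - np.exp(-0.7 * lai))
            nir = soil * np.exp(-0.9 * lai) + 0.45 * (1.0 - np.exp(-0.9 * lai))
            return np.column_stack([red, nir])

        net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
        net.fit(forward_model(lai, soil), lai)

        # Invert a "measured" reflectance over an unseen soil brightness
        print(net.predict(forward_model(np.array([0.4]), np.array([0.2]))))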

  7. Estimating the time evolution of NMR systems via a quantum-speed-limit-like expression

    NASA Astrophysics Data System (ADS)

    Villamizar, D. V.; Duzzioni, E. I.; Leal, A. C. S.; Auccaise, R.

    2018-05-01

    Finding the solutions of the equations that describe the dynamics of a given physical system is crucial in order to obtain important information about its evolution. However, by using estimation theory, it is possible to obtain, under certain limitations, some information on its dynamics. The quantum-speed-limit (QSL) theory was originally used to estimate the shortest time in which a Hamiltonian drives an initial state to a final one for a given fidelity. Using the QSL theory in a slightly different way, we are able to estimate the running time of a given quantum process. For that purpose, we impose the saturation of the Anandan-Aharonov bound in a rotating frame of reference where the state of the system travels slower than in the original frame (laboratory frame). Through this procedure it is possible to estimate the actual evolution time in the laboratory frame of reference with good accuracy when compared to previous methods. Our method is tested successfully to predict the time spent in the evolution of nuclear spins 1/2 and 3/2 in NMR systems. We find that the estimated time according to our method is better than previous approaches by up to four orders of magnitude. One disadvantage of our method is that we need to solve a number of transcendental equations, which increases with the system dimension and parameter discretization used to solve such equations numerically.
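
    A minimal statement of the bound referenced above, in its commonly quoted Anandan-Aharonov form (the paper may use an equivalent geometric formulation), where the bar denotes the time-averaged energy uncertainty:

        \[
          \tau \;\ge\; \frac{\hbar\,\arccos\lvert\langle\psi_0\mid\psi_\tau\rangle\rvert}{\overline{\Delta E}},
          \qquad
          \overline{\Delta E} \;=\; \frac{1}{\tau}\int_0^{\tau}\sqrt{\langle H^2\rangle-\langle H\rangle^2}\;\mathrm{d}t .
        \]

    Imposing equality in a rotating frame, where the state traverses a shorter path, then yields the estimate of the laboratory-frame evolution time described in the abstract.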

  8. Crowdsourcing urban air temperatures through smartphone battery temperatures in São Paulo, Brazil

    NASA Astrophysics Data System (ADS)

    Droste, Arjan; Pape, Jan-Jaap; Overeem, Aart; Leijnse, Hidde; Steeneveld, Gert-Jan; Van Delden, Aarnout; Uijlenhoet, Remko

    2017-04-01

    Crowdsourcing as a method to obtain and apply vast datasets is rapidly becoming prominent in meteorology, especially for urban areas where traditional measurements are scarce. Earlier studies showed that smartphone battery temperature readings allow for estimating the daily and city-wide air temperature via a straightforward heat transfer model. This study advances these model estimations by studying spatially and temporally smaller scales. The accuracy of temperature retrievals as a function of the number of battery readings is also studied. An extensive dataset of over 10 million battery temperature readings is available for São Paulo (Brazil), for estimating hourly and daily air temperatures. The air temperature estimates are validated with air temperature measurements from a WMO station, an Urban Fluxnet site, and crowdsourced data from 7 hobby meteorologists' private weather stations. On a daily basis temperature estimates are good, and we show they improve by optimizing model parameters for neighbourhood scales as categorized in Local Climate Zones. Temperature differences between Local Climate Zones can be distinguished from smartphone battery temperatures. When validating the model for hourly temperature estimates, initial results are poor, but are vastly improved by using a diurnally varying parameter function in the heat transfer model rather than one fixed value for the entire day. The obtained results show the potential of large crowdsourced datasets in meteorological studies, and the value of smartphones as a measuring platform when routine observations are lacking.
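
    A hedged sketch of the inversion at the core of the approach: earlier work related the equilibrium battery reading linearly to air temperature, so a calibrated relation can be inverted over many phones to give a city- or neighbourhood-scale estimate. The calibration constants below are hypothetical, not the values fitted for São Paulo.

        import numpy as np

        m, b = 0.68, 11.5          # hypothetical calibration: T_bat = m * T_air + b

        def air_temperature(battery_temps_c):
            """Invert the linear relation on the mean of crowdsourced readings."""
            return (np.asarray(battery_temps_c).mean() - b) / m

        readings = [27.8, 29.1, 28.5, 30.2, 27.9]      # battery temperatures (°C)
        print(f"estimated air temperature: {air_temperature(readings):.1f} °C")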

  9. Full-field and anomaly initialization using a low-order climate model: a comparison and proposals for advanced formulations

    NASA Astrophysics Data System (ADS)

    Carrassi, A.; Weber, R. J. T.; Guemas, V.; Doblas-Reyes, F. J.; Asif, M.; Volpi, D.

    2014-04-01

    Initialization techniques for seasonal-to-decadal climate predictions fall into two main categories, namely full-field initialization (FFI) and anomaly initialization (AI). In the FFI case the initial model state is replaced by the best possible available estimate of the real state. By doing so the initial error is efficiently reduced but, due to the unavoidable presence of model deficiencies, once the model is left free to run a prediction, its trajectory drifts away from the observations no matter how small the initial error is. This problem is partly overcome with AI, where the aim is to forecast future anomalies by assimilating observed anomalies on an estimate of the model climate. The large variety of experimental setups, models and observational networks adopted worldwide makes it difficult to draw firm conclusions on the respective advantages and drawbacks of FFI and AI, or to identify distinctive lines for improvement. The lack of a unified mathematical framework adds an additional difficulty toward the design of adequate initialization strategies that fit the desired forecast horizon, observational network and model at hand. Here we compare FFI and AI using a low-order climate model of nine ordinary differential equations and use the notation and concepts of data assimilation theory to highlight their error scaling properties. This analysis suggests better performance using FFI when a good observational network is available and reveals the direct relation of its skill with the observational accuracy. The skill of AI appears, however, mostly related to the model quality, and clear increases of skill can only be expected in coincidence with model upgrades. We have compared FFI and AI in experiments in which either the full system or the atmosphere and ocean were independently initialized. In the former case FFI shows better and longer-lasting improvements, with skillful predictions until month 30. In the initialization of single compartments, the best performance is obtained when the more stable component of the model (the ocean) is initialized, but with FFI it is possible to have some predictive skill even when the most unstable compartment (the extratropical atmosphere) is observed. Two advanced formulations, least-square initialization (LSI) and exploring parameter uncertainty (EPU), are introduced. Using LSI the initialization makes use of model statistics to propagate information from observation locations to the entire model domain. Numerical results show that LSI improves the performance of FFI in all situations where only a portion of the system's state is observed. EPU is an online drift correction method in which the drift caused by the parametric error is estimated using a short-time evolution law and is then removed during the forecast run. Its implementation in conjunction with FFI allows us to improve the prediction skill within the first forecast year. Finally, the application of these results in the context of realistic climate models is discussed.
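
    A toy illustration of the two strategies, using Lorenz-63 as a stand-in for the nine-equation model (a sketch, not the paper's setup): FFI replaces the model state with the observation, while AI adds the observed anomaly onto the model's own climatology.

        import numpy as np

        def step(x, dt=0.005, s=10.0, r=28.0, b=8.0 / 3.0):
            dx = np.array([s * (x[1] - x[0]),
                           x[0] * (r - x[2]) - x[1],
                           x[0] * x[1] - b * x[2]])
            return x + dt * dx

        def climate_mean(x0, n=40000):
            """Long free run as a crude climatology estimate."""
            x, acc = np.array(x0, dtype=float), np.zeros(3)
            for _ in range(n):
                x = step(x)
                acc += x
            return acc / n

        model_clim = climate_mean([1.0, 1.0, 20.0])
        obs_clim = model_clim + np.array([0.5, -0.3, 1.0])   # hypothetical obs. climate
        observation = np.array([2.0, 3.0, 22.0])

        x_ffi = observation.copy()                       # FFI: take the observed state
        x_ai = model_clim + (observation - obs_clim)     # AI: anomaly on model climate
        print("FFI start:", x_ffi, "AI start:", x_ai)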

  10. Coupling of rainfall-induced landslide triggering model with predictions of debris flow runout distances

    NASA Astrophysics Data System (ADS)

    Lehmann, Peter; von Ruette, Jonas; Fan, Linfeng; Or, Dani

    2014-05-01

    Rapid debris flows initiated by rainfall-induced shallow landslides present a highly destructive natural hazard in steep terrain. The impact and runout paths of debris flows depend on the volume, composition and initiation zone of the released material, which are required inputs for accurate debris flow predictions and hazard maps. For that purpose we couple the mechanistic 'Catchment-scale Hydro-mechanical Landslide Triggering (CHLT)' model, which computes the timing, location, and volume of landslides, with simple approaches to estimate debris flow runout distances. The runout models were tested using two landslide inventories obtained in the Swiss Alps following prolonged rainfall events. The predicted runout distances were in good agreement with observations, confirming the utility of such simple models for landscape-scale estimates. In a next step, debris flow paths were computed for landslides predicted with the CHLT model over a range of soil properties to explore their effect on runout distances. This combined approach offers a more complete spatial picture of shallow landslide and subsequent debris flow hazards. The additional information provided by the CHLT model concerning location, shape, soil type and water content of the released mass may also be incorporated into more advanced runout models to improve the prediction of the runout and impact of such abruptly released masses.

  11. An efficient fully unsupervised video object segmentation scheme using an adaptive neural-network classifier architecture.

    PubMed

    Doulamis, A; Doulamis, N; Ntalianis, K; Kollias, S

    2003-01-01

    In this paper, an unsupervised video object (VO) segmentation and tracking algorithm is proposed based on an adaptable neural-network architecture. The proposed scheme comprises: 1) a VO tracking module and 2) an initial VO estimation module. Object tracking is handled as a classification problem and implemented through an adaptive network classifier, which provides better results compared to conventional motion-based tracking algorithms. Network adaptation is accomplished through an efficient and cost-effective weight updating algorithm, providing minimal degradation of the previous network knowledge and taking into account the current content conditions. A retraining set is constructed and used for this purpose based on initial VO estimation results. Two different scenarios are investigated. The first concerns extraction of human entities in video conferencing applications, while the second exploits depth information to identify generic VOs in stereoscopic video sequences. Human face/body detection based on Gaussian distributions is accomplished in the first scenario, while segmentation fusion is obtained using color and depth information in the second scenario. A decision mechanism is also incorporated to detect time instances for weight updating. Experimental results and comparisons indicate the good performance of the proposed scheme even in sequences with complicated content (object bending, occlusion).

  12. Analysing Twitter and web queries for flu trend prediction.

    PubMed

    Santos, José Carlos; Matos, Sérgio

    2014-05-07

    Social media platforms encourage people to share diverse aspects of their daily life. Among these, shared health-related information might be used to infer health status and incidence rates for specific conditions or symptoms. In this work, we present an infodemiology study that evaluates the use of Twitter messages and search engine query logs to estimate and predict the incidence rate of influenza-like illness in Portugal. Based on a manually classified dataset of 2704 tweets from Portugal, we selected a set of 650 textual features to train a Naïve Bayes classifier to identify tweets mentioning flu or flu-like illness or symptoms. We obtained a precision of 0.78 and an F-measure of 0.83, based on cross validation over the complete annotated set. Furthermore, we trained a multiple linear regression model to estimate the health-monitoring data from the Influenzanet project, using as predictors the relative frequencies obtained from the tweet classification results and from query logs, and achieved a correlation ratio of 0.89 (p<0.001). These classification and regression models were also applied to estimate the flu incidence in the following flu season, achieving a correlation of 0.72. Previous studies addressing the estimation of disease incidence based on user-generated content have mostly focused on the English language. Our results further validate those studies and show that by changing the initial steps of data preprocessing and feature extraction and selection, the proposed approaches can be adapted to other languages. Additionally, we investigated whether the predictive model created can be applied to data from the subsequent flu season. In this case, although the prediction result was good, an initial phase to adapt the regression model could be necessary to achieve more robust results.
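
    A hedged sketch of the two-stage pipeline: a Naïve Bayes text classifier flags flu-related tweets, and the weekly relative frequency of flagged tweets feeds a linear regression against surveillance incidence. The toy tweets and numbers below are placeholders for the 2704 annotated Portuguese tweets and the Influenzanet target.

        import numpy as np
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.linear_model import LinearRegression

        tweets = ["estou com febre e gripe", "lindo dia de sol",
                  "tosse e febre alta", "jogo de futebol hoje"]
        labels = [1, 0, 1, 0]                    # 1 = mentions flu-like symptoms

        vec = CountVectorizer()
        clf = MultinomialNB().fit(vec.fit_transform(tweets), labels)

        # Weekly fraction of flu-flagged tweets (placeholder values) vs incidence
        weekly_tweet_freq = np.array([[0.02], [0.05], [0.09], [0.04]])
        ili_incidence = np.array([12.0, 30.0, 55.0, 25.0])   # cases per 100k
        reg = LinearRegression().fit(weekly_tweet_freq, ili_incidence)
        print(reg.predict([[0.07]]))             # estimate for a new week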

  13. Variability of breathing during wakefulness while using CPAP predicts adherence.

    PubMed

    Fujita, Yukio; Yamauchi, Motoo; Uyama, Hiroki; Kumamoto, Makiko; Koyama, Noriko; Yoshikawa, Masanori; Strohl, Kingman P; Kimura, Hiroshi

    2017-02-01

    The standard therapy for obstructive sleep apnoea (OSA) is continuous positive airway pressure (CPAP) therapy. However, long-term adherence remains at ~50% despite improvements in behavioural and educational interventions. Based on prior work, we explored whether regularity of breathing during wakefulness might be a physiologic predictor of CPAP adherence. Of the 117 consecutive patients who were diagnosed with OSA and prescribed CPAP, 79 CPAP-naïve patients were enrolled in this prospective study. During CPAP initiation, respiratory signals were collected using respiratory inductance plethysmography while wearing CPAP during wakefulness in a seated position. Breathing regularity was assessed by the coefficient of variation (CV) for breath-by-breath estimated tidal volume (V_T) and total duration of the respiratory cycle (T_tot). In a derivation group (n = 36), we determined the cut-off CV value which predicted poor CPAP adherence at the first month of therapy, and verified the validity of this predetermined cut-off value in the remaining participants (validation group; n = 43). In the derivation group, the CV for estimated V_T was significantly higher in patients with poor adherence than with good adherence (median (interquartile range): 44.2 (33.4-57.4) vs 26.0 (20.4-33.2), P < 0.001). The CV cut-off value for estimated V_T for poor CPAP adherence was 34.0, according to a receiver-operating characteristic (ROC) curve. In the validation group, a CV value for estimated V_T >34.0 was confirmed to predict poor CPAP adherence (sensitivity, 0.78; specificity, 0.83). At the initiation of therapy, breathing regularity during wakefulness while wearing CPAP is an objective predictor of short-term CPAP adherence. © 2016 Asian Pacific Society of Respirology.
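
    The predictor itself is a one-liner; the sketch below computes the coefficient of variation of breath-by-breath tidal volume and compares it with the 34.0 cut-off reported above. The breath series is hypothetical.

        import numpy as np

        def cv_percent(tidal_volumes_ml):
            """Coefficient of variation (%) of estimated tidal volume."""
            v = np.asarray(tidal_volumes_ml, dtype=float)
            return 100.0 * v.std(ddof=1) / v.mean()

        breaths = [420, 510, 380, 620, 350, 560, 300, 640]   # hypothetical V_T (mL)
        cv = cv_percent(breaths)
        print(f"CV = {cv:.1f} -> predicted {'poor' if cv > 34.0 else 'good'} adherence")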

  14. Top 10 metrics for life science software good practices.

    PubMed

    Artaza, Haydee; Chue Hong, Neil; Corpas, Manuel; Corpuz, Angel; Hooft, Rob; Jimenez, Rafael C; Leskošek, Brane; Olivier, Brett G; Stourac, Jan; Svobodová Vařeková, Radka; Van Parys, Thomas; Vaughan, Daniel

    2016-01-01

    Metrics for assessing adoption of good development practices are a useful way to ensure that software is sustainable, reusable and functional. Sustainability means that the software used today will be available - and continue to be improved and supported - in the future. We report here an initial set of metrics that measure good practices in software development. This initiative differs from previously developed efforts in being a community-driven grassroots approach where experts from different organisations propose good software practices that have reasonable potential to be adopted by the communities they represent. We not only focus our efforts on understanding and prioritising good practices, we assess their feasibility for implementation and publish them here.

  15. Top 10 metrics for life science software good practices

    PubMed Central

    2016-01-01

    Metrics for assessing adoption of good development practices are a useful way to ensure that software is sustainable, reusable and functional. Sustainability means that the software used today will be available - and continue to be improved and supported - in the future. We report here an initial set of metrics that measure good practices in software development. This initiative differs from previously developed efforts in being a community-driven grassroots approach where experts from different organisations propose good software practices that have reasonable potential to be adopted by the communities they represent. We not only focus our efforts on understanding and prioritising good practices, we assess their feasibility for implementation and publish them here. PMID:27635232

  16. Estimating recharge rates with analytic element models and parameter estimation

    USGS Publications Warehouse

    Dripps, W.R.; Hunt, R.J.; Anderson, M.P.

    2006-01-01

    Quantifying the spatial and temporal distribution of recharge is usually a prerequisite for effective ground water flow modeling. In this study, an analytic element (AE) code (GFLOW) was used with a nonlinear parameter estimation code (UCODE) to quantify the spatial and temporal distribution of recharge using measured base flows as calibration targets. The ease and flexibility of AE model construction and evaluation make this approach well suited for recharge estimation. An AE flow model of an undeveloped watershed in northern Wisconsin was optimized to match median annual base flows at four stream gages for 1996 to 2000 to demonstrate the approach. Initial optimizations that assumed a constant distributed recharge rate provided good matches (within 5%) to most of the annual base flow estimates, but discrepancies of >12% at certain gages suggested that a single value of recharge for the entire watershed is inappropriate. Subsequent optimizations that allowed for spatially distributed recharge zones based on the distribution of vegetation types improved the fit and confirmed that vegetation can influence spatial recharge variability in this watershed. Temporally, the annual recharge values varied >2.5-fold between 1996 and 2000, during which there was an observed 1.7-fold difference in annual precipitation, underscoring the influence of nonclimatic factors on interannual recharge variability for regional flow modeling. The final recharge values compared favorably with more labor-intensive field measurements of recharge and results from other studies, supporting the utility of using linked AE-parameter estimation codes for recharge estimation. Copyright © 2005 The Author(s).

  17. An eight-legged tactile sensor to estimate coefficient of static friction.

    PubMed

    Wei Chen; Rodpongpun, Sura; Luo, William; Isaacson, Nathan; Kark, Lauren; Khamis, Heba; Redmond, Stephen J

    2015-08-01

    It is well known that a tangential force larger than the maximum static friction force is required to initiate the sliding motion between two objects, which is governed by a material constant called the coefficient of static friction. Therefore, knowing the coefficient of static friction is of great importance for robot grippers which wish to maintain a stable and precise grip on an object during various manipulation tasks. Importantly, it is most useful if grippers can estimate the coefficient of static friction without having to explicitly explore the object first, such as lifting the object and reducing the grip force until it slips. A novel eight-legged sensor, based on simplified theoretical principles of friction, is presented here to estimate the coefficient of static friction between a planar surface and the prototype sensor. Each of the sensor's eight legs is straight and rigid and oriented at a specified angle with respect to the vertical, allowing the sensor to estimate which of five ranges (5 = 8/2 + 1) the coefficient of static friction occupies. The coefficient of friction can be estimated by determining whether the legs have slipped or not when pressed against a surface. The coefficients of static friction between the sensor and five different materials were estimated and compared to measurements from traditional methods. A least-squares linear fit of the sensor-estimated coefficient showed good correlation with the reference coefficient, with a gradient close to one and an r² value greater than 0.9.
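
    The sensing principle lends itself to a short sketch: a rigid leg inclined at angle θ from the vertical slips when tan θ exceeds the coefficient of static friction, so the pattern of slipped legs brackets μ into one of the five ranges. The four leg angles below are hypothetical.

        import math

        LEG_ANGLES_DEG = [15, 25, 35, 45]      # hypothetical, one leg pair per angle

        def mu_range(slipped):
            """Bracket mu from a per-angle slip pattern (list of four booleans)."""
            lower, upper = 0.0, math.inf
            for angle, did_slip in zip(LEG_ANGLES_DEG, slipped):
                t = math.tan(math.radians(angle))
                if did_slip:
                    upper = min(upper, t)      # slipping => mu < tan(theta)
                else:
                    lower = max(lower, t)      # holding  => mu >= tan(theta)
            return lower, upper

        print(mu_range([False, False, True, True]))   # mu in [tan 25deg, tan 35deg)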

  18. 76 FR 82311 - Food and Drug Administration Transparency Initiative: Food and Drug Administration Report on Good...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-30

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2009-N-0247] Food and Drug Administration Transparency Initiative: Food and Drug Administration Report on Good Guidance Practices: Improving Efficiency and Transparency; Availability AGENCY: Food and Drug...

  19. Simplified, inverse, ejector design tool

    NASA Technical Reports Server (NTRS)

    Dechant, Lawrence J.

    1993-01-01

    A simple lumped-parameter-based inverse design tool has been developed which provides flow path geometry and entrainment estimates subject to operational, acoustic, and design constraints. These constraints are manifested through specification of primary mass flow rate or ejector thrust, fully-mixed exit velocity, and static pressure matching. Fundamentally, integral forms of the conservation equations coupled with the specified design constraints are combined to yield an easily invertible linear system in terms of the flow path cross-sectional areas. Entrainment is computed by back substitution. Initial comparisons with experimental data and analogous one-dimensional methods show good agreement. Thus, this simple inverse design code provides an analytically based, preliminary design tool with direct application to High Speed Civil Transport (HSCT) design studies.

  20. Inlet-engine matching for SCAR including application of a bicone variable geometry inlet

    NASA Technical Reports Server (NTRS)

    Wasserbauer, J. F.; Gerstenmaier, W. H.

    1978-01-01

    Variable cycle engines (VCEs) designed for Mach 2.32 can have transonic airflow requirements as high as 1.6 times the cruise airflow. This is a formidable requirement for conventional, high-performance, axisymmetric, translating-centerbody mixed compression inlets. An alternate inlet is defined in which the second cone of a two-cone centerbody collapses to the initial cone angle to provide a large off-design airflow capability, and which incorporates modest centerbody translation to minimize spillage drag. Estimates of transonic spillage drag are competitive with those of conventional translating-centerbody inlets. The inlet's cruise performance exhibits very low bleed requirements with good recovery and high angle-of-attack capability.

  1. Tackling the child malnutrition problem: from what and why to how much and how.

    PubMed

    McLachlan, Milla

    2006-12-01

    There is strong economic evidence for investing in improving the nutritional status of young children, yet programs remain under-resourced. Returns on investment in child nutrition in terms of improved health, better education outcomes and increased productivity are substantial, and cost estimates for effective programs are in the range of $2.8 to $5.3 billion. These amounts are modest when compared with total international development assistance or current spending on luxury goods in wealthy nations. New initiatives to redefine nutrition science and to apply innovative problem-solving technologies to the global nutrition problem suggest that steps are being taken to accelerate progress toward a malnutrition-free world.

  2. Selecting good regions to deblur via relative total variation

    NASA Astrophysics Data System (ADS)

    Li, Lerenhan; Yan, Hao; Fan, Zhihua; Zheng, Hanqing; Gao, Changxin; Sang, Nong

    2018-03-01

    Image deblurring aims to estimate the blur kernel and restore the latent image. It is usually divided into two stages: kernel estimation and image restoration. In kernel estimation, selecting a good region that contains structure information is helpful to the accuracy of the estimated kernel. Good regions to deblur are usually chosen by experts or found by trial and error. In this paper, we apply a metric named relative total variation (RTV) to discriminate structure regions from smooth and textured ones. Given a blurry image, we first calculate the RTV of each pixel to determine whether it is a pixel in a structure region, after which we sample the image with overlapping windows. Finally, the sampled region that contains the most structure pixels is selected as the best region to deblur. Both qualitative and quantitative experiments show that our proposed method can help to estimate the kernel accurately.
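
    A hedged sketch of the selection step using a simplified RTV (windowed total variation over windowed inherent variation), not the paper's exact formulation: score pixels, mark structure where RTV is low, and keep the overlapping window containing the most structure pixels.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def rtv_map(img, win=9, eps=1e-3):
            gy, gx = np.gradient(img.astype(float))
            d = uniform_filter(np.abs(gx), win) + uniform_filter(np.abs(gy), win)
            l = np.abs(uniform_filter(gx, win)) + np.abs(uniform_filter(gy, win))
            return d / (l + eps)     # low RTV -> structure, high RTV -> texture

        def best_region(img, patch=64, stride=32, thresh=2.0):
            structure = rtv_map(img) < thresh
            best, score = (0, 0), -1
            for r in range(0, img.shape[0] - patch + 1, stride):
                for c in range(0, img.shape[1] - patch + 1, stride):
                    s = structure[r:r + patch, c:c + patch].sum()
                    if s > score:
                        best, score = (r, c), s
            return best              # top-left corner of the region to deblur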

  3. A Tracker for Broken and Closely-Spaced Lines

    DTIC Science & Technology

    1997-10-01

    to combine the current level flow estimate and the previous level flow estimate. However, the result is still not good enough for some reasons. First...geometric attributes are not good enough to discriminate line segments, when they are crowded, parallel and closely-spaced to each other. On the other...level information [10]. Still, it is not good at dealing with closely-spaced line segments. Because it requires a proper size of square neighborhood to

  4. Integrating economic and biophysical data in assessing cost-effectiveness of buffer strip placement.

    PubMed

    Balana, Bedru Babulo; Lago, Manuel; Baggaley, Nikki; Castellazzi, Marie; Sample, James; Stutter, Marc; Slee, Bill; Vinten, Andy

    2012-01-01

    The European Union Water Framework Directive (WFD) requires Member States to set water quality objectives and identify cost-effective mitigation measures to achieve "good status" in all waters. However, the costs and effectiveness of measures vary both within and between catchments, depending on factors such as land use and topography. The aim of this study was to develop a cost-effectiveness analysis framework integrating estimates of phosphorus (P) losses from land-based sources, the potential abatement from riparian buffers, and the economic implications of buffers. Estimates of field-by-field P exports and routing were based on crop risk and field slope classes. Buffer P trapping efficiencies were based on a literature metadata analysis. Costs of placing buffers were based on foregone farm gross margins. An integrated cost-minimization optimization model was developed and solved for different P reduction targets for the Rescobie Loch catchment in eastern Scotland. A target mean annual P load reduction of 376 kg to the loch was identified as necessary to achieve good status. Assuming all the riparian fields initially have the 2-m buffer strip required by the General Binding Rules (part of the WFD in Scotland), the model gave good predictions of P loads (345-481 kg P). The modeling results show that riparian buffers alone cannot achieve the required P load reduction (at most 54% of the P can be removed). In the medium P input scenario, average costs vary from £38 to £176 per kg P at 10% and 54% P reduction, respectively. The framework provides a useful tool for exploring cost-effective targeting of environmental measures. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
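
    The shape of such a cost-minimization model can be illustrated as a small linear program: choose extra buffer width per riparian field to meet a P-reduction target at least cost. The field data and the linear-trapping assumption below are hypothetical stand-ins, not the paper's calibrated values:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    p_export = np.array([40.0, 25.0, 60.0, 30.0])     # kg P/yr reaching the stream, per field (hypothetical)
    trap_eff = np.array([0.015, 0.020, 0.010, 0.012]) # fraction of P trapped per metre of extra buffer
    margin = np.array([900.0, 750.0, 1100.0, 800.0])  # foregone gross margin, GBP per metre of buffer
    target = 30.0                                     # required P load reduction, kg/yr

    # Decision variable: extra buffer width w (m) per field, capped at 20 m.
    # minimize margin . w   subject to   sum(p_export * trap_eff * w) >= target
    res = linprog(c=margin,
                  A_ub=[-(p_export * trap_eff)], b_ub=[-target],
                  bounds=[(0.0, 20.0)] * 4, method="highs")
    print("buffer widths (m):", np.round(res.x, 1), " cost (GBP):", round(res.fun))
    ```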

  5. Peaks Over Threshold (POT): A methodology for automatic threshold estimation using goodness of fit p-value

    NASA Astrophysics Data System (ADS)

    Solari, Sebastián; Egüen, Marta; Polo, María José; Losada, Miguel A.

    2017-04-01

    Threshold estimation in the Peaks Over Threshold (POT) method and the impact of the estimation method on the calculation of high return period quantiles and their uncertainty (or confidence intervals) are issues that are still unresolved. In the past, methods based on goodness of fit tests and EDF-statistics have yielded satisfactory results, but their use has not yet been systematized. This paper proposes a methodology for automatic threshold estimation, based on the Anderson-Darling EDF-statistic and goodness of fit test. When combined with bootstrapping techniques, this methodology can be used to quantify both the uncertainty of threshold estimation and its impact on the uncertainty of high return period quantiles. This methodology was applied to several simulated series and to four precipitation/river flow data series. The results obtained confirmed its robustness. For the measured series, the estimated thresholds corresponded to those obtained by nonautomatic methods. Moreover, even though the uncertainty of the threshold estimation was high, this did not have a significant effect on the width of the confidence intervals of high return period quantiles.
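
    A sketch of the automatic threshold scan, assuming the usual POT recipe the paper builds on: fit a generalized Pareto distribution (GPD) to the exceedances of each candidate threshold, compute the Anderson-Darling statistic, bootstrap its p-value, and keep the lowest threshold whose fit is not rejected. The quantile grid and sample-size floor are illustrative:

    ```python
    import numpy as np
    from scipy.stats import genpareto

    def ad_statistic(exc, c, scale):
        """Anderson-Darling A^2 for a GPD fitted with location fixed at 0."""
        z = np.sort(genpareto.cdf(exc, c, loc=0, scale=scale))
        z = np.clip(z, 1e-12, 1 - 1e-12)
        n = len(z)
        i = np.arange(1, n + 1)
        return -n - np.mean((2 * i - 1) * (np.log(z) + np.log(1 - z[::-1])))

    def ad_pvalue(exc, n_boot=500, rng=np.random.default_rng(0)):
        """Parametric-bootstrap p-value of the Anderson-Darling test."""
        c, _, scale = genpareto.fit(exc, floc=0)
        a2 = ad_statistic(exc, c, scale)
        boot = []
        for _ in range(n_boot):
            sim = genpareto.rvs(c, loc=0, scale=scale, size=len(exc), random_state=rng)
            cb, _, sb = genpareto.fit(sim, floc=0)
            boot.append(ad_statistic(sim, cb, sb))
        return np.mean(np.array(boot) >= a2)

    def select_threshold(x, quantiles=np.arange(0.80, 0.99, 0.01), alpha=0.05):
        for q in quantiles:
            u = np.quantile(x, q)
            exc = x[x > u] - u
            if len(exc) >= 30 and ad_pvalue(exc) > alpha:
                return u  # lowest threshold with an acceptable GPD fit
        return np.quantile(x, quantiles[-1])
    ```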

  6. Analysis of pumping tests: Significance of well diameter, partial penetration, and noise

    USGS Publications Warehouse

    Heidari, M.; Ghiassi, K.; Mehnert, E.

    1999-01-01

    The nonlinear least squares (NLS) method was applied to pumping and recovery aquifer test data in confined and unconfined aquifers with finite-diameter and partially penetrating pumping wells, and with partially penetrating piezometers or observation wells. It was demonstrated that noiseless and moderately noisy drawdown data from observation points located less than two saturated thicknesses of the aquifer from the pumping well produced an exact or acceptable set of parameters when the diameter of the pumping well was included in the analysis. The accuracy of the estimated parameters, particularly that of specific storage, decreased with increases in the noise level in the observed drawdown data. With consideration of the well radii, the noiseless drawdown data from the pumping well in an unconfined aquifer produced good estimates of horizontal and vertical hydraulic conductivities and specific yield, but the estimated specific storage was unacceptable. When noisy data from the pumping well were used, an acceptable set of parameters was not obtained. Further experiments with noisy drawdown data in an unconfined aquifer revealed that when the well diameter was included in the analysis, hydraulic conductivity, specific yield and vertical hydraulic conductivity may be estimated rather effectively from piezometers located over a range of distances from the pumping well. Estimation of specific storage became less reliable for piezometers located at distances greater than the initial saturated thickness of the aquifer. Application of the NLS method to field pumping and recovery data from a confined aquifer showed that the estimated parameters from the two tests were in good agreement only when the well diameter was included in the analysis. Without consideration of well radii, the estimated values of hydraulic conductivity from the pumping and recovery tests were off by a factor of four.
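
    A toy illustration of NLS parameter estimation for a pumping test, using the classical Theis solution rather than the finite-diameter, partially penetrating models of the paper; the pumping rate, distance, and data are synthetic:

    ```python
    import numpy as np
    from scipy.special import exp1
    from scipy.optimize import curve_fit

    Q, r = 0.01, 30.0  # pumping rate (m^3/s) and observation distance (m)

    def theis(t, T, S):
        """Drawdown s(t) = Q/(4*pi*T) * W(u), with u = r^2*S/(4*T*t) and W = exp1."""
        u = r**2 * S / (4.0 * T * t)
        return Q / (4.0 * np.pi * T) * exp1(u)

    t = np.logspace(1, 5, 40)  # seconds
    s_obs = theis(t, 1e-3, 2e-4) + np.random.default_rng(1).normal(0, 2e-3, t.size)

    # Initial estimates matter: NLS can stall or diverge from a poor starting point.
    (T_hat, S_hat), cov = curve_fit(theis, t, s_obs, p0=[5e-3, 1e-3],
                                    bounds=([1e-6, 1e-7], [1.0, 1e-1]))
    print(f"T = {T_hat:.2e} m^2/s, S = {S_hat:.2e}")
    ```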

  7. Northeast Artificial Intelligence Consortium (NAIC). Volume 3. The Versatile Maintenance Expert System (VMES)

    DTIC Science & Technology

    1990-12-01

    expected values. However, because the same good/bad output pattern of a device always gives rise to the same initial ordering, the method has its limitation... For any device and good/bad output pattern, it is easy to come up with an example on which the method does poorly in the sense that the actual... submodule is less likely to be faulty if it is connected to more good primary outputs. Initially, candidates are ordered according to their relationships with

  8. 78 FR 45502 - Certain Oil Country Tubular Goods From India and Turkey: Initiation of Countervailing Duty...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-29

    ... Country Tubular Goods From India and Turkey: Initiation of Countervailing Duty Investigations AGENCY...: July 29, 2013. FOR FURTHER INFORMATION CONTACT: Sean Carey at (202) 482-3964 (India); Shane Subler at... (``OCTG'') from India and the Republic of Turkey (``Turkey''), filed in proper form on behalf of United...

  9. When the Majority Rules: Ballot Initiatives, Race-Conscious Education Policy, and the Public Good

    ERIC Educational Resources Information Center

    Moses, Michele S.; Saenz, Lauren P.

    2012-01-01

    This chapter examines the following central question: How do direct democratic ballot initiatives affect the public good? A second, related question is this: When voters collectively make policy decisions, what responsibilities do researchers have to contribute to informing public deliberation about the relevant issues? In an attempt to answer…

  10. Cost effectiveness of the Oregon quitline "free patch initiative".

    PubMed

    Fellows, Jeffrey L; Bush, Terry; McAfee, Tim; Dickerson, John

    2007-12-01

    We estimated the cost effectiveness of the Oregon tobacco quitline's "free patch initiative" compared to the pre-initiative programme. Using quitline utilisation and cost data from the state, intervention providers and patients, we estimated annual programme use and costs for media promotions and intervention services. We also estimated annual quitline registration calls and the number of quitters and life years saved for the pre-initiative and free patch initiative programmes. Service utilisation and 30-day abstinence at six months were obtained from 959 quitline callers. We compared the cost effectiveness of the free patch initiative (media and intervention costs) to the pre-initiative service offered to insured and uninsured callers. We conducted sensitivity analyses on key programme costs and outcomes by estimating a best case and worst case scenario for each intervention strategy. Compared to the pre-initiative programme, the free patch initiative doubled registered calls, increased quitting fourfold and reduced total costs per quit by $2688. We estimated annual paid media costs were $215 per registered tobacco user for the pre-initiative programme and less than $4 per caller during the free patch initiative. Compared to the pre-initiative programme, incremental quitline promotion and intervention costs for the free patch initiative were $86 (range $22-$353) per life year saved. Compared to the pre-initiative programme, the free patch initiative was a highly cost effective strategy for increasing quitting in the population.

  11. Overall Economy

    ERIC Educational Resources Information Center

    Occupational Outlook Quarterly, 2012

    2012-01-01

    The economy's need for workers originates in the demand for the goods and services that these workers provide. So, to project employment, BLS starts by estimating the components of gross domestic product (GDP) for 2020. GDP is the value of the final goods produced and services provided in the United States. Then, BLS estimates the size--in…

  12. Automated nodule location and size estimation using a multi-scale Laplacian of Gaussian filtering approach.

    PubMed

    Jirapatnakul, Artit C; Fotin, Sergei V; Reeves, Anthony P; Biancardi, Alberto M; Yankelevitz, David F; Henschke, Claudia I

    2009-01-01

    Estimation of nodule location and size is an important pre-processing step in some nodule segmentation algorithms, used to determine the size and location of the region of interest. Ideally, such estimation methods will consistently find the same nodule location regardless of where the seed point (provided either manually or by a nodule detection algorithm) is placed relative to the "true" center of the nodule, and the size should be a reasonable estimate of the true nodule size. We developed a method that estimates nodule location and size using multi-scale Laplacian of Gaussian (LoG) filtering. Nodule candidates near a given seed point are found by searching for blob-like regions with high filter response. The candidates are then pruned according to filter response and location, and the remaining candidates are sorted by size, with the largest candidate selected. This method was compared to a previously published template-based method. The methods were evaluated on the basis of the stability of the estimated nodule location to changes in the initial seed point and of how well the size estimates agreed with volumes determined by a semi-automated nodule segmentation method. The LoG method exhibited better stability to changes in the seed point, with 93% of nodules having the same estimated location even when the seed point was altered, compared to only 52% of nodules for the template-based method. Both methods also showed good agreement with sizes determined by a nodule segmentation method, with average relative size differences of 5% and -5% for the LoG and template-based methods, respectively.
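
    A rough sketch of the multi-scale LoG search, assuming only the general scheme in the abstract (scale-normalized filtering, blob maxima near the seed, pruning by response, largest candidate wins); the scale range, pruning rule, and function name are illustrative:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_laplace, maximum_filter

    def estimate_nodule(vol, seed, sigmas=np.linspace(2.0, 12.0, 11), radius=20):
        """Return (z, y, x, diameter_voxels) of the largest strong LoG blob near
        the seed. Assumes the seed lies at least `radius` voxels inside `vol`."""
        z0, y0, x0 = seed
        sub = vol[z0 - radius:z0 + radius,
                  y0 - radius:y0 + radius,
                  x0 - radius:x0 + radius].astype(float)
        cands = []
        for s in sigmas:
            resp = -s**2 * gaussian_laplace(sub, sigma=s)  # scale-normalized; bright blobs positive
            peaks = (resp == maximum_filter(resp, size=5)) & (resp > 0)
            for z, y, x in zip(*np.nonzero(peaks)):
                cands.append((resp[z, y, x], s, z, y, x))
        top = max(r for r, *_ in cands)
        strong = [c for c in cands if c[0] > 0.5 * top]    # prune weak responses
        _, s, z, y, x = max(strong, key=lambda c: c[1])    # largest surviving candidate
        # A 3-D Gaussian blob gives its LoG extremum at r = sigma * sqrt(3).
        return (z0 - radius + z, y0 - radius + y, x0 - radius + x, 2.0 * s * np.sqrt(3.0))
    ```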

  13. Development of a Tool to Measure Youths' Food Allergy Management Facilitators and Barriers.

    PubMed

    Herbert, Linda Jones; Lin, Adora; Matsui, Elizabeth; Wood, Robert A; Sharma, Hemant

    2016-04-01

    This study's aims are to identify factors related to allergen avoidance and epinephrine carriage among youth with food allergy, develop a tool to measure food allergy management facilitators and barriers, and investigate its initial reliability and validity.  The Food Allergy Management Perceptions Questionnaire (FAMPQ) was developed based on focus groups with 19 adolescents and young adults with food allergy. Additional youth with food allergy (N = 92; ages: 13-21 years) completed food allergy clinical history and management questionnaires and the FAMPQ.  Internal reliability estimates for the FAMPQ Facilitators and Barriers subscales were acceptable to good. Youth who were adherent to allergen avoidance and epinephrine carriage had higher Facilitator scores. Poor adherence was more likely among youth with higher Barrier scores.  Initial FAMPQ reliability and validity is promising. Additional research is needed to develop FAMPQ clinical guidelines. © The Author 2015. Published by Oxford University Press on behalf of the Society of Pediatric Psychology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  14. Shapes of Nonbuoyant Round Luminous Laminar-Jet Diffusion Flames in Coflowing Air. Appendix F

    NASA Technical Reports Server (NTRS)

    Lin, K.-C.; Faeth, G. M.; Urban, David L. (Technical Monitor)

    2000-01-01

    The shapes (luminous flame boundaries) of steady nonbuoyant round luminous hydrocarbon-fueled laminar-jet diffusion flames in coflowing air were studied both experimentally and theoretically. Flame shapes were measured from photographs of flames burning at low pressures in order to minimize the effects of buoyancy. Test conditions involved acetylene-, propylene-, and 1,3-butadiene-fueled flames having initial reactant temperatures of 300 K, ambient pressures of 19-50 kPa, jet-exit Reynolds numbers of 18-121, and initial air/fuel velocity ratios of 0.22-32.45, yielding luminous flame lengths of 21-198 mm. The present flames were close to the laminar smoke point but were not soot emitting. Simple expressions to estimate the shapes of nonbuoyant laminar-jet diffusion flames in coflow were found by extending an earlier analysis of Mahalingam et al. These formulas provided a good correlation of the present measurements except near the burner exit, where the self-similar approximations used in the simplified analysis are no longer appropriate.

  15. Development and Validation of Personality Disorder Spectra Scales for the MMPI-2-RF.

    PubMed

    Sellbom, Martin; Waugh, Mark H; Hopwood, Christopher J

    2018-01-01

    The purpose of this study was to develop and validate a set of MMPI-2-RF (Ben-Porath & Tellegen, 2008/2011) personality disorder (PD) spectra scales. These scales could serve the purpose of assisting with DSM-5 PD diagnosis and help link categorical and dimensional conceptions of personality pathology within the MMPI-2-RF. We developed and provided initial validity results for scales corresponding to the 10 PD constructs listed in the DSM-5 using data from student, community, clinical, and correctional samples. Initial validation efforts indicated good support for criterion validity with an external PD measure as well as with dimensional personality traits included in the DSM-5 alternative model for PDs. Construct validity results using psychosocial history and therapists' ratings in a large clinical sample were generally supportive as well. Overall, these brief scales provide clinicians using MMPI-2-RF data with estimates of DSM-5 PD constructs that can support cross-model connections between categorical and dimensional assessment approaches.

  16. Women's work. Maintaining a healthy body weight.

    PubMed

    Welch, Nicky; Hunter, Wendy; Butera, Karina; Willis, Karen; Cleland, Verity; Crawford, David; Ball, Kylie

    2009-08-01

    This study describes women's perceptions of the supports and barriers to maintaining a healthy weight among currently healthy weight women from urban and rural socio-economically disadvantaged areas. Using focus groups and interviews, we asked women about their experiences of maintaining a healthy weight. Overwhelmingly, women described their healthy weight practices in terms of concepts related to work and management. The theme of 'managing health' comprised issues of managing multiple responsibilities, time, and emotions associated with healthy practices. Rural women faced particular difficulties in accessing supports at a practical level (for example, lack of childcare) and due to the gendered roles they enacted in caring for others. Family background (in particular, mothers' attitudes to food and weight) also appeared to influence perceptions about healthy weight maintenance. In the context of global increases in the prevalence of obesity, the value of initiatives aimed at supporting healthy weight women to maintain their weight should not be under-estimated. Such initiatives need to work within the social and personal constraints that women face in maintaining good health.

  17. Hydrological Response of Semi-arid Degraded Catchments in Tigray, Northern Ethiopia

    NASA Astrophysics Data System (ADS)

    Teka, Daniel; Van Wesemael, Bas; Vanacker, Veerle; Hallet, Vincent

    2013-04-01

    To address water scarcity in the arid and semi-arid parts of developing countries, accurate estimation of surface runoff is an essential task. In semi-arid catchments runoff data are scarce, so runoff estimation using hydrological models becomes an alternative. This research was initiated to characterize the runoff response of semi-arid catchments in Tigray, northern Ethiopia, and to evaluate the SCS-CN method for various catchments. Ten sub-catchments were selected in different river basins, and rainfall and runoff were measured with automatic hydro-monitoring equipment for 2-3 years. The Curve Number was estimated for each Hydrological Response Unit (HRU) in the sub-catchments and runoff was modeled using the SCS-CN method at λ = 0.05 and λ = 0.20. The results showed a significant difference between the two abstraction ratios (P = 0.05, df = 1, n = 132), and reasonably good results were obtained for predicted runoff at λ = 0.05 (NSE = -0.69; PBIAS = 18.1%). When the CN values from the literature were used, runoff was overestimated compared to the measured values (e = -11.53). This research shows the importance of using measured runoff data to characterize semi-arid catchments and accurately estimate the scarce water resource. Key words: Hydrological response, rainfall-runoff, degraded environments, semi-arid, Ethiopia, Tigray
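
    For reference, this is the standard SCS-CN event runoff relation the study evaluates, comparing the two abstraction ratios above; the conventional re-scaling of CN that accompanies a change of λ away from 0.20 is omitted for brevity:

    ```python
    def scs_cn_runoff(P, CN, lam=0.20):
        """Event runoff Q (mm) from rainfall P (mm) for a given Curve Number."""
        S = 25400.0 / CN - 254.0   # potential maximum retention (mm)
        Ia = lam * S               # initial abstraction (mm)
        if P <= Ia:
            return 0.0
        return (P - Ia) ** 2 / (P - Ia + S)

    for lam in (0.20, 0.05):
        print(lam, round(scs_cn_runoff(P=40.0, CN=75, lam=lam), 2))
    ```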

  18. Transient high frequency signal estimation: A model-based processing approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, F.L.

    1985-03-22

    By utilizing the superposition property of linear systems a method of estimating the incident signal from reflective nondispersive data is developed. One of the basic merits of this approach is that, the reflections were removed by direct application of a Weiner type estimation algorithm, after the appropriate input was synthesized. The structure of the nondispersive signal model is well documented, and thus its' credence is established. The model is stated and more effort is devoted to practical methods of estimating the model parameters. Though a general approach was developed for obtaining the reflection weights, a simpler approach was employed here,more » since a fairly good reflection model is available. The technique essentially consists of calculating ratios of the autocorrelation function at lag zero and that lag where the incident and first reflection coincide. We initially performed our processing procedure on a measurement of a single signal. Multiple application of the processing procedure was required when we applied the reflection removal technique on a measurement containing information from the interaction of two physical phenomena. All processing was performed using SIG, an interactive signal processing package. One of the many consequences of using SIG was that repetitive operations were, for the most part, automated. A custom menu was designed to perform the deconvolution process.« less

  19. Optimal time points sampling in pathway modelling.

    PubMed

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling and the related parameter estimation, but few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time-consuming and expensive. Therefore, approximating parameters for models from only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way so as to minimize the variance of the parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the need to select good initial values or from getting stuck in a local optimum, problems that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
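
    The design criterion behind such methods can be shown on a toy model: choose sampling times that maximize det(JᵀJ) of the parameter-sensitivity matrix (D-optimality), which shrinks the volume of the parameter confidence region. Exhaustive search over a coarse grid stands in here for the paper's quantum-inspired evolutionary algorithm, and the decay model is illustrative:

    ```python
    import numpy as np
    from itertools import combinations

    def sensitivities(t, a=1.0, k=0.5):
        """Jacobian of y = a*exp(-k*t) w.r.t. (a, k), at nominal parameter values."""
        return np.column_stack([np.exp(-k * t), -a * t * np.exp(-k * t)])

    grid = np.linspace(0.2, 10.0, 25)
    best = max(combinations(grid, 4),
               key=lambda ts: np.linalg.det(sensitivities(np.array(ts)).T
                                            @ sensitivities(np.array(ts))))
    print("D-optimal 4-point design:", np.round(best, 2))
    ```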

  20. Quality of Life Among HIV-Infected Patients in Brazil after Initiation of Treatment

    PubMed Central

    Campos, Lorenza Nogueira; César, Cibele Comini; Guimarães, Mark Drew Crosland

    2009-01-01

    INTRODUCTION Despite improvement in clinical treatment for HIV-infected patients, the impact of antiretroviral therapy on the overall quality of life has become a major concern. OBJECTIVE To identify factors associated with increased levels of self-reported quality of life among HIV-infected patients after four months of antiretroviral therapy. METHODS Patients were recruited at two public health referral centers for AIDS, Belo Horizonte, Brazil, for a prospective adherence study. Patients were interviewed before initiating treatment (baseline) and after one and four months. Quality of life was assessed using a psychometric instrument, and factors associated with good/very good quality of life four months after the initiation of antiretroviral therapy were assessed using a cross-sectional approach. Logistic regression was used for analysis. RESULTS Overall quality of life was classified as ‘very good/good’ by 66.4% of the participants four months after initiating treatment, while 33.6% classified it as ‘neither poor nor good/poor/very poor’. Logistic regression indicated that >8 years of education, none/mild symptoms of anxiety and depression, no antiretroviral switch, lower number of adverse reactions and better quality of life at baseline were independently associated with good/very good quality of life over four months of treatment. CONCLUSIONS Our results highlight the importance of modifiable factors such as psychiatric symptoms and treatment-related variables that may contribute to a better quality of life among patients initiating treatment. Considering that poor quality of life is related to non-adherence to antiretroviral therapy, careful clinical monitoring of these factors may contribute to ensuring the long-term effectiveness of antiretroviral regimens. PMID:19759880

  1. A modified NARMAX model-based self-tuner with fault tolerance for unknown nonlinear stochastic hybrid systems with an input-output direct feed-through term.

    PubMed

    Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W

    2014-01-01

    A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for the unknown nonlinear stochastic hybrid system with a direct transmission matrix from input to output. Through the off-line observer/Kalman filter identification method, one has a good initial guess of the modified NARMAX model to reduce the on-line system identification process time. Then, based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system, with an input-output direct transmission term, which also has measurement and system noises and inaccessible system states. Besides, an effective state-space self-tuner with a fault-tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion based on the innovation process error estimated by the Kalman filter estimation algorithm is suggested, so that a weighting-matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimates obtained by the Kalman filter estimation algorithm, can be utilized to achieve parameter estimation for faulty system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures via fault detection. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Costing the Australian National Hand Hygiene Initiative.

    PubMed

    Page, K; Barnett, A G; Campbell, M; Brain, D; Martin, E; Fulop, N; Graves, N

    2014-11-01

    The Australian National Hand Hygiene Initiative (NHHI) is a major patient safety programme co-ordinated by Hand Hygiene Australia (HHA) and funded by the Australian Commission for Safety and Quality in Health Care. The annual costs of running this programme need to be understood to know the cost-effectiveness of a decision to sustain it as part of health services. The aim was to estimate the annual health services cost of running the NHHI; set-up costs are excluded. A health services perspective was adopted for the costing, and data were collected from the 50 largest public hospitals in Australia that implemented the initiative, covering all states and territories. The costs of HHA, the costs to the state-level infection-prevention groups, the costs incurred by each acute hospital, and the costs for additional alcohol-based hand rub are all included. The programme cost AU$5.56 million each year (US$5.76 million, £3.63 million). Most of the cost was incurred at the hospital level (65%) and arose from the extra time taken for auditing hand hygiene compliance and for education and training. On average, each infection control practitioner spent 5 h per week on the NHHI, and the running cost per annum to their hospital was approximately AU$120,000 in 2012 (US$124,000, £78,000). Good estimates of the total costs of this programme are fundamental to understanding the cost-effectiveness of implementing the NHHI. This paper reports transparent costing methods, and the results include their uncertainty. Copyright © 2014 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.

  3. Hyperkalemia After Initiating Renin-Angiotensin System Blockade: The Stockholm Creatinine Measurements (SCREAM) Project.

    PubMed

    Bandak, Ghassan; Sang, Yingying; Gasparini, Alessandro; Chang, Alex R; Ballew, Shoshana H; Evans, Marie; Arnlov, Johan; Lund, Lars H; Inker, Lesley A; Coresh, Josef; Carrero, Juan-Jesus; Grams, Morgan E

    2017-07-19

    Concerns about hyperkalemia limit the use of angiotensin-converting enzyme inhibitors (ACE-I) and angiotensin receptor blockers (ARBs), but guidelines conflict regarding potassium-monitoring protocols. We quantified hyperkalemia monitoring and risks after ACE-I/ARB initiation and developed and validated a hyperkalemia susceptibility score. We evaluated 69 426 new users of ACE-I/ARB therapy in the Stockholm Creatinine Measurements (SCREAM) project with medication initiation from January 1, 2007 to December 31, 2010, and follow-up for 1 year thereafter. Three fourths (76%) of SCREAM patients had potassium checked within the first year. Potassium >5 and >5.5 mmol/L occurred in 5.6% and 1.7%, respectively. As a comparison, we propensity-matched new ACE-I/ARB users to 20 186 new β-blocker users in SCREAM: 64% had potassium checked. The occurrence of elevated potassium levels was similar between new β-blocker and ACE-I/ARB users without kidney disease; only at estimated glomerular filtration rate <60 mL/min per 1.73 m² were risks higher among ACE-I/ARB users. We developed a hyperkalemia susceptibility score that incorporated estimated glomerular filtration rate, baseline potassium level, sex, diabetes mellitus, heart failure, and the concomitant use of potassium-sparing diuretics in new ACE-I/ARB users; this score accurately predicted 1-year hyperkalemia risk in the SCREAM cohort (area under the curve, 0.845, 95% CI: 0.840-0.869) and in a validation cohort from the US-based Geisinger Health System (N=19 524; area under the curve, 0.818, 95% CI: 0.794-0.841), with good calibration. Hyperkalemia within the first year of ACE-I/ARB therapy was relatively uncommon among people with estimated glomerular filtration rate >60 mL/min per 1.73 m², but rates were much higher with lower estimated glomerular filtration rate. Use of the hyperkalemia susceptibility score may help guide laboratory monitoring and prescribing strategies. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.

  4. Population Estimates for Chum Salmon Spawning in the Mainstem Columbia River, 2002 Technical Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rawding, Dan; Hillson, Todd D.

    2003-11-15

    Accurate and precise population estimates of chum salmon (Oncorhynchus keta) spawning in the mainstem Columbia River are needed to provide a basis for informed water allocation decisions, to determine the status of chum salmon listed under the Endangered Species Act, and to evaluate the contribution of the Duncan Creek re-introduction program to mainstem spawners. Currently, mark-recapture experiments using the Jolly-Seber model provide the only framework for this type of estimation. In 2002, a study was initiated to estimate mainstem Columbia River chum salmon populations using seining data collected while capturing broodstock as part of the Duncan Creek re-introduction. The five assumptions of the Jolly-Seber model were examined using hypothesis testing within a statistical framework, including goodness of fit tests and secondary experiments. We used POPAN 6, an integrated computer system for the analysis of capture-recapture data, to obtain maximum likelihood estimates of standard model parameters, derived estimates, and their precision. A more parsimonious final model was selected using Akaike Information Criteria. Final chum salmon escapement estimates and (standard error) from seining data for the Ives Island, Multnomah, and I-205 sites are 3,179 (150), 1,269 (216), and 3,468 (180), respectively. The Ives Island estimate is likely lower than the total escapement because only the largest two of four spawning sites were sampled. The accuracy and precision of these estimates would improve if seining was conducted twice per week instead of weekly, and by incorporating carcass recoveries into the analysis. Population estimates derived from seining mark-recapture data were compared to those obtained using the current mainstem Columbia River salmon escapement methodologies. The Jolly-Seber population estimate from carcass tagging in the Ives Island area was 4,232 adults with a standard error of 79. This population estimate appears reasonable and precise but batch marks and lack of secondary studies made it difficult to test Jolly-Seber assumptions, necessary for unbiased estimates. We recommend that individual tags be applied to carcasses to provide a statistical basis for goodness of fit tests and ultimately model selection. Secondary or double marks should be applied to assess tag loss and male and female chum salmon carcasses should be enumerated separately. Carcass tagging population estimates at the two other sites were biased low due to limited sampling. The Area-Under-the-Curve escapement estimates at all three sites were 36% to 76% of Jolly-Seber estimates. Area-Under-the-Curve estimates are likely biased low because previous assumptions that observer efficiency is 100% and residence time is 10 days proved incorrect. If managers continue to rely on Area-Under-the-Curve to estimate mainstem Columbia River spawners, a methodology is provided to develop annual estimates of observer efficiency and residence time, and to incorporate uncertainty into the Area-Under-the-Curve escapement estimate.

  5. Tomographic phase analysis to detect the site of accessory conduction pathway in Wolff-Parkinson-White syndrome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakajima, K.; Bunko, H.; Tada, A.

    1984-01-01

    Phase analysis has been applied to Wolff-Parkinson-White syndrome (WPW) to detect the site of the accessory conduction pathway (ACP); however, planar phase analysis is limited in estimating the precise location of the ACP. In this study, the authors applied phase analysis to gated blood pool tomography. Twelve patients with WPW who underwent epicardial mapping and surgical division of the ACP were studied by both gated emission computed tomography (GECT) and a routine gated blood pool study (GBPS). The GBPS was performed with Tc-99m red blood cells in multiple projections: modified left anterior oblique, right anterior oblique and/or left lateral views. In GECT, short axial, horizontal and vertical long axial blood pool images were reconstructed. Phase analysis was performed using the fundamental frequency of the Fourier transform in both GECT and GBPS images, and abnormal initial contractions on both the planar and tomographic phase analyses were compared with the location of the surgically confirmed ACPs. In planar phase analysis, an abnormal initial phase was identified in 7 out of 12 (58%) patients, while in tomographic phase analysis, the localization of the ACP was predicted in 11 out of 12 (92%) patients. Tomographic phase analysis was superior to planar phase imaging in 8 out of 12 patients for estimating the location of the ACP. Phase analysis by GECT avoids the overlap of blood pools in the cardiac chambers and has the advantage of identifying the propagation of phase three-dimensionally. Tomographic phase analysis is a good adjunctive method for estimating the site of the ACP in patients with WPW.

  6. NLINEAR - NONLINEAR CURVE FITTING PROGRAM

    NASA Technical Reports Server (NTRS)

    Everhart, J. L.

    1994-01-01

    A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of the distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived and solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60-bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
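
    A compact sketch of the kind of iteration NLINEAR describes, assuming a Gauss-Newton step derived from the truncated chi-square expansion; the model, data, and weights are illustrative:

    ```python
    import numpy as np

    def gauss_newton(f, jac, p0, x, y, sigma, n_iter=20):
        """Minimize chi^2 = sum(((y - f(x,p))/sigma)^2) from initial estimates p0."""
        p = np.asarray(p0, float)
        for _ in range(n_iter):
            r = (y - f(x, p)) / sigma              # weighted residuals
            J = jac(x, p) / sigma[:, None]         # weighted Jacobian
            # Normal equations (J^T J) dp = J^T r: the simultaneous linear
            # system obtained from the first-order term of the expansion.
            dp = np.linalg.solve(J.T @ J, J.T @ r)
            p = p + dp
        chi2 = float(np.sum(((y - f(x, p)) / sigma) ** 2))
        return p, chi2

    f = lambda x, p: p[0] * np.exp(-p[1] * x)
    jac = lambda x, p: np.column_stack([np.exp(-p[1] * x), -p[0] * x * np.exp(-p[1] * x)])
    x = np.linspace(0.0, 5.0, 30)
    sigma = np.full(30, 0.05)
    y = f(x, [2.0, 0.8]) + np.random.default_rng(2).normal(0.0, 0.05, 30)
    print(gauss_newton(f, jac, [1.0, 0.3], x, y, sigma))  # a poor p0 (e.g. [10, 5]) can diverge
    ```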

  7. Parameter estimation in plasmonic QED

    NASA Astrophysics Data System (ADS)

    Jahromi, H. Rangani

    2018-03-01

    We address the problem of parameter estimation in the presence of plasmonic modes manipulating emitted light via the localized surface plasmons in a plasmonic waveguide at the nanoscale. The emitter that we discuss is the nitrogen vacancy centre (NVC) in diamond modelled as a qubit. Our goal is to estimate the β factor measuring the fraction of emitted energy captured by waveguide surface plasmons. The best strategy to obtain the most accurate estimation of the parameter, in terms of the initial state of the probes and different control parameters, is investigated. In particular, for two-qubit estimation, it is found although we may achieve the best estimation at initial instants by using the maximally entangled initial states, at long times, the optimal estimation occurs when the initial state of the probes is a product one. We also find that decreasing the interqubit distance or increasing the propagation length of the plasmons improve the precision of the estimation. Moreover, decrease of spontaneous emission rate of the NVCs retards the quantum Fisher information (QFI) reduction and therefore the vanishing of the QFI, measuring the precision of the estimation, is delayed. In addition, if the phase parameter of the initial state of the two NVCs is equal to πrad, the best estimation with the two-qubit system is achieved when initially the NVCs are maximally entangled. Besides, the one-qubit estimation has been also analysed in detail. Especially, we show that, using a two-qubit probe, at any arbitrary time, enhances considerably the precision of estimation in comparison with one-qubit estimation.

  8. [Regional integration, population health needs, and human resources for health systems and services: an approach to the concept of health care gap].

    PubMed

    Schweiger, Arturo Luis Francisco; Alvarez, Daniela Teresita

    2007-01-01

    The existence of gaps between the population's health needs and the human resources available for meeting them, as well as limitations in the methods used to estimate such needs, constitute key factors to be tackled in the development and integration of health systems in Latin America. The aim of this study was to conduct an initial literature review of the tools and procedures used to estimate and plan the allocation of human resources in health, and to use this review as the basis for identifying the advantages, limitations, and complementary characteristics of these tools, subsequently proposing the need for more in-depth studies on their applicability for designing regional health policies. The article then presents the concept of global public health goods, the generation and use of which offers a strategic alternative for improving both health systems integration in the region and quality of life for the population covered by such services.

  9. Evaluation and optimization of a commercial blocking ELISA for detecting antibodies to influenza A virus for research and surveillance of mallards.

    PubMed

    Shriner, Susan A; VanDalen, Kaci K; Root, J Jeffrey; Sullivan, Heather J

    2016-02-01

    The availability of a validated commercial assay is an asset for any wildlife investigation. However, commercial products are often developed for use in livestock and are not optimized for wildlife. Consequently, it is incumbent upon researchers and managers to apply commercial products appropriately to optimize program outcomes. We tested more than 800 serum samples from mallards for antibodies to influenza A virus with the IDEXX AI MultiS-Screen Ab test to evaluate assay performance. Applying the test per manufacturer's recommendations resulted in good performance with 84% sensitivity and 100% specificity. However, performance was improved to 98% sensitivity and 98% specificity by increasing the recommended cut-off. Using this alternative threshold for identifying positive and negative samples would greatly improve sample classification, especially for field samples collected months after infection when antibody titers have waned from the initial primary immune response. Furthermore, a threshold that balances sensitivity and specificity reduces estimation bias in seroprevalence estimates. Published by Elsevier B.V.
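
    One standard way to re-derive such a cut-off is to scan candidate thresholds on panels of known-positive and known-negative sera and maximize Youden's J; the S/N values below are synthetic, not the study's data:

    ```python
    import numpy as np

    def balanced_cutoff(pos, neg):
        """Scan candidate cut-offs; return the one maximizing J = Se + Sp - 1.
        For a blocking ELISA, lower sample-to-negative (S/N) ratios indicate
        more antibody, so a sample is called positive when S/N <= cutoff."""
        cuts = np.unique(np.concatenate([pos, neg]))
        best, best_j = None, -1.0
        for c in cuts:
            se = np.mean(pos <= c)   # sensitivity on known positives
            sp = np.mean(neg > c)    # specificity on known negatives
            if se + sp - 1.0 > best_j:
                best, best_j = c, se + sp - 1.0
        return best

    rng = np.random.default_rng(3)
    pos = rng.normal(0.45, 0.15, 200).clip(0, 1.2)  # sera from infected birds (synthetic)
    neg = rng.normal(0.85, 0.10, 600).clip(0, 1.2)  # naive sera (synthetic)
    print("balanced S/N cut-off:", round(balanced_cutoff(pos, neg), 3))
    ```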

  10. Fatigue Analysis of Rotating Parts. A Case Study for a Belt Driven Pulley

    NASA Astrophysics Data System (ADS)

    Sandu, Ionela; Tabacu, Stefan; Ducu, Catalin

    2017-10-01

    The present study is focused on the life estimation of a rotating part as a component of an engine assembly namely the pulley of the coolant pump. The goal of the paper is to develop a model, supported by numerical analysis, capable to predict the lifetime of the part. Starting from functional drawing, CAD Model and technical specifications of the part a numerical model was developed. MATLAB code was used to develop a tool to apply the load over the selected area. The numerical analysis was performed in two steps. The first simulation concerned the inertia relief due to rotational motion about the shaft (of the pump). Results from this simulation were saved and the stress - strain state used as initial conditions for the analysis with the load applied. The lifetime of a good part was estimated. A defect was created in order to investigate the influence over the working requirements. It was found that there is little influence with respect to the prescribed lifetime.

  11. Bolus intrathecal injection of ziconotide (Prialt®) to evaluate the option of continuous administration via an implanted intrathecal drug delivery (ITDD) system: a pilot study.

    PubMed

    Mohammed, Salma I; Eldabe, Sam; Simpson, Karen H; Brookes, Morag; Madzinga, Grace; Gulve, Ashish; Baranidharan, Ganesan; Radford, Helen; Crowther, Tracey; Buchser, Eric; Perruchoud, Christophe; Batterham, Alan Mark

    2013-01-01

    This study evaluated efficacy and safety of bolus doses of ziconotide (Prialt®, Eisai Limited, Hertfordshire, UK) to assess the option of continuous administration of this drug via an implanted intrathecal drug delivery system. Twenty adults with severe chronic pain who were under consideration for intrathecal (IT) therapy were enrolled in this open label, nonrandomized, pilot study. Informed consent was obtained. Demographics, medical/pain history, pain scores, and concomitant medications were recorded. A physical examination was performed. Creatine kinase was measured. Initial visual analog scale (VAS), blood pressure, heart rate, and respiratory rate were recorded. All patients received an initial bolus dose of 2.5 mcg ziconotide; the dose in the subsequent visits was modified according to response. Subsequent doses were 2.5 mcg, 1.2 mcg, or 3.75 mcg as per protocol. A good response (≥30% reduction in baseline pain VAS) with no side-effects on two occasions was considered a successful trial. Data were analyzed using a generalized estimating equations model, with pain VAS as the outcome and time (seven time points; preinjection and one to six hours postinjection) as the predictor. Generalized estimating equations analysis of summary measures showed a mean reduction of pain VAS of approximately 25% at the group level; of 11 responders, seven underwent pump implantation procedure, two withdrew because of adverse effects, one refused an implant, and one could not have an implant (lack of funding from the Primary Care Trust). Our data demonstrated that mean VAS was reduced by approximately 25% at the group level after IT ziconotide bolus. Treatment efficacy did not vary with sex, center, age, or pain etiology. Ziconotide bolus was generally well tolerated. Larger studies are needed to determine if bolus dosing with ziconotide is a good predictor of response to continuous IT ziconotide via an intrathecal drug delivery system. © 2012 International Neuromodulation Society.

  12. Stochastic approach to data analysis in fluorescence correlation spectroscopy.

    PubMed

    Rao, Ramachandra; Langoju, Rajesh; Gösch, Michael; Rigler, Per; Serov, Alexandre; Lasser, Theo

    2006-09-21

    Fluorescence correlation spectroscopy (FCS) has emerged as a powerful technique for measuring low concentrations of fluorescent molecules and their diffusion constants. In FCS, the experimental data are conventionally fitted using standard local search techniques, for example the Marquardt-Levenberg (ML) algorithm. A prerequisite for this category of algorithms is sound knowledge of the behavior of the fit parameters and, in most cases, good initial guesses for accurate fitting; otherwise they lead to fitting artifacts. For known fit models, and with user experience of the behavior of the fit parameters, these local search algorithms work extremely well. However, for heterogeneous systems, or where automated data analysis is a prerequisite, a procedure is needed that treats FCS data fitting as a black box and generates reliable fit parameters with accuracy for the chosen model. We present a computational approach to analyzing FCS data by means of a stochastic algorithm for global search called PGSL, an acronym for Probabilistic Global Search Lausanne. This algorithm does not require any initial guesses and performs the fitting by searching for solutions through global sampling. It is flexible and, at the same time, computationally fast for multiparameter evaluations. We present a performance study of PGSL for two-component fits with triplet. The statistical study and the goodness-of-fit criterion for PGSL are also presented. The robustness of PGSL in parameter estimation on noisy experimental data is also verified. We further extend the scope of PGSL by a hybrid analysis wherein the output of PGSL is fed as the initial guess to ML. Reliability studies show that PGSL, and the hybrid combination of both, perform better than ML for various thresholds of the mean-squared error (MSE).
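
    A sketch of the global-then-local hybrid idea on a one-component FCS model with triplet. PGSL itself is not publicly packaged, so scipy's differential evolution stands in here as the guess-free global sampler, followed by a local least-squares polish; all parameter values and bounds are illustrative:

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution, least_squares

    def g_fcs(tau, N, tauD, T, tauT, kappa=5.0):
        """FCS autocorrelation: triplet blinking multiplied by 3-D diffusion."""
        trip = 1.0 + T / (1.0 - T) * np.exp(-tau / tauT)
        diff = (1.0 + tau / tauD) ** -1.0 * (1.0 + tau / (kappa**2 * tauD)) ** -0.5
        return trip * diff / N

    tau = np.logspace(-6, 0, 120)
    rng = np.random.default_rng(4)
    g_obs = g_fcs(tau, 2.0, 3e-4, 0.15, 3e-6) * (1.0 + rng.normal(0, 0.01, tau.size))

    bounds = [(0.1, 50.0), (1e-5, 1e-2), (0.0, 0.5), (1e-7, 1e-5)]  # N, tauD, T, tauT
    cost = lambda p: np.sum((g_fcs(tau, *p) - g_obs) ** 2)
    glob = differential_evolution(cost, bounds, seed=0)   # global stage: no initial guess
    lb, ub = np.transpose(bounds)
    fit = least_squares(lambda p: g_fcs(tau, *p) - g_obs, glob.x, bounds=(lb, ub))
    print("N, tauD, T, tauT =", fit.x)
    ```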

  13. Efficacy of prophylactic cranial irradiation in patients with limited-disease small-cell lung cancer who were confirmed to have no brain metastasis via magnetic resonance imaging after initial chemoradiotherapy.

    PubMed

    Mamesaya, Nobuaki; Wakuda, Kazushige; Omae, Katsuhiro; Miyawaki, Eriko; Kotake, Mie; Fujiwara, Takumi; Kawamura, Takahisa; Kobayashi, Haruki; Nakashima, Kazuhisa; Omori, Shota; Ono, Akira; Kenmotsu, Hirotsugu; Naito, Tateaki; Murakami, Haruyasu; Mori, Keita; Harada, Hideyuki; Endo, Masahiro; Nakajima, Takashi; Takahashi, Toshiaki

    2018-04-03

    Prophylactic cranial irradiation (PCI) is recommended for patients with limited-disease small-cell lung cancer (LD-SCLC) who achieved good response to definitive chemoradiotherapy. However, most clinical studies lacked brain imaging scans before PCI. Our study aimed to investigate whether PCI has a survival benefit in patients who have no brain metastases (BM) confirmed via magnetic resonance imaging (MRI) before PCI. Eighty patients were included in this study. Sixty patients received PCI (PCI group) and 20 patients did not (non-PCI group). OS was not significantly different between the two groups. The median OS time was 4.3 years (95% CI: 2.6 years-8.6 years) in the PCI group and was not reached (NR) (95% CI: 1.9 years-NR) in the non-PCI group ( p = 0.542). Moreover, no differences were observed in the 3-year rates of PFS (46.2% and 44.4%, p = 0.720) and cumulative incidence of BM (24.0% vs. 27%, p = 0.404). Our result suggests that PCI may not have a survival benefit in patients with LD-SCLC confirmed to have no BM after initial therapy, even if patients achieve a good response to definitive chemoradiotherapy. We retrospectively evaluated patients with LD-SCLC who were confirmed to have no BM via MRI after initial chemoradiotherapy at the Shizuoka Cancer Center between September 2002 and August 2015. The overall survival (OS), progression-free survival (PFS), and cumulative incidence of BM were estimated using the Kaplan-Meier method between patients who received PCI and those who did not. Propensity score matching was used to balance baseline characteristics.

  14. Efficacy of prophylactic cranial irradiation in patients with limited-disease small-cell lung cancer who were confirmed to have no brain metastasis via magnetic resonance imaging after initial chemoradiotherapy

    PubMed Central

    Mamesaya, Nobuaki; Wakuda, Kazushige; Omae, Katsuhiro; Miyawaki, Eriko; Kotake, Mie; Fujiwara, Takumi; Kawamura, Takahisa; Kobayashi, Haruki; Nakashima, Kazuhisa; Omori, Shota; Ono, Akira; Kenmotsu, Hirotsugu; Naito, Tateaki; Murakami, Haruyasu; Mori, Keita; Harada, Hideyuki; Endo, Masahiro; Nakajima, Takashi; Takahashi, Toshiaki

    2018-01-01

    Background Prophylactic cranial irradiation (PCI) is recommended for patients with limited-disease small-cell lung cancer (LD-SCLC) who achieved good response to definitive chemoradiotherapy. However, most clinical studies lacked brain imaging scans before PCI. Our study aimed to investigate whether PCI has a survival benefit in patients who have no brain metastases (BM) confirmed via magnetic resonance imaging (MRI) before PCI. Results Eighty patients were included in this study. Sixty patients received PCI (PCI group) and 20 patients did not (non-PCI group). OS was not significantly different between the two groups. The median OS time was 4.3 years (95% CI: 2.6 years–8.6 years) in the PCI group and was not reached (NR) (95% CI: 1.9 years–NR) in the non-PCI group (p = 0.542). Moreover, no differences were observed in the 3-year rates of PFS (46.2% and 44.4%, p = 0.720) and cumulative incidence of BM (24.0% vs. 27%, p = 0.404). Conclusions Our result suggests that PCI may not have a survival benefit in patients with LD-SCLC confirmed to have no BM after initial therapy, even if patients achieve a good response to definitive chemoradiotherapy. Patients and Methods We retrospectively evaluated patients with LD-SCLC who were confirmed to have no BM via MRI after initial chemoradiotherapy at the Shizuoka Cancer Center between September 2002 and August 2015. The overall survival (OS), progression-free survival (PFS), and cumulative incidence of BM were estimated using the Kaplan–Meier method between patients who received PCI and those who did not. Propensity score matching was used to balance baseline characteristics. PMID:29707139

  15. Optimization of Maillard Reaction in Model System of Glucosamine and Cysteine Using Response Surface Methodology

    PubMed Central

    Arachchi, Shanika Jeewantha Thewarapperuma; Kim, Ye-Joo; Kim, Dae-Wook; Oh, Sang-Chul; Lee, Yang-Bong

    2017-01-01

    Sulfur-containing amino acids play important roles in good flavor generation in Maillard reaction of non-enzymatic browning, so aqueous model systems of glucosamine and cysteine were studied to investigate the effects of reaction temperature, initial pH, reaction time, and concentration ratio of glucosamine and cysteine. Response surface methodology was applied to optimize the independent reaction parameters of cysteine and glucosamine in Maillard reaction. Box-Behnken factorial design was used with 30 runs of 16 factorial levels, 8 axial levels and 6 central levels. The degree of Maillard reaction was determined by reading absorption at 425 nm in a spectrophotometer and Hunter’s L, a, and b values. ΔE was consequently set as the fifth response factor. In the statistical analyses, determination coefficients (R2) for their absorbance, Hunter’s L, a, b values, and ΔE were 0.94, 0.79, 0.73, 0.96, and 0.79, respectively, showing that the absorbance and Hunter’s b value were good dependent variables for this model system. The optimum processing parameters were determined to yield glucosamine-cysteine Maillard reaction product with higher absorbance and higher colour change. The optimum estimated absorbance was achieved at the condition of initial pH 8.0, 111°C reaction temperature, 2.47 h reaction time, and 1.30 concentration ratio. The optimum condition for colour change measured by Hunter’s b value was 2.41 h reaction time, 114°C reaction temperature, initial pH 8.3, and 1.26 concentration ratio. These results can provide the basic information for Maillard reaction of aqueous model system between glucosamine and cysteine. PMID:28401086

  16. Optimization of Maillard Reaction in Model System of Glucosamine and Cysteine Using Response Surface Methodology.

    PubMed

    Arachchi, Shanika Jeewantha Thewarapperuma; Kim, Ye-Joo; Kim, Dae-Wook; Oh, Sang-Chul; Lee, Yang-Bong

    2017-03-01

    Sulfur-containing amino acids play important roles in good flavor generation in Maillard reaction of non-enzymatic browning, so aqueous model systems of glucosamine and cysteine were studied to investigate the effects of reaction temperature, initial pH, reaction time, and concentration ratio of glucosamine and cysteine. Response surface methodology was applied to optimize the independent reaction parameters of cysteine and glucosamine in Maillard reaction. Box-Behnken factorial design was used with 30 runs of 16 factorial levels, 8 axial levels and 6 central levels. The degree of Maillard reaction was determined by reading absorption at 425 nm in a spectrophotometer and Hunter's L, a, and b values. ΔE was consequently set as the fifth response factor. In the statistical analyses, determination coefficients (R²) for their absorbance, Hunter's L, a, b values, and ΔE were 0.94, 0.79, 0.73, 0.96, and 0.79, respectively, showing that the absorbance and Hunter's b value were good dependent variables for this model system. The optimum processing parameters were determined to yield glucosamine-cysteine Maillard reaction product with higher absorbance and higher colour change. The optimum estimated absorbance was achieved at the condition of initial pH 8.0, 111°C reaction temperature, 2.47 h reaction time, and 1.30 concentration ratio. The optimum condition for colour change measured by Hunter's b value was 2.41 h reaction time, 114°C reaction temperature, initial pH 8.3, and 1.26 concentration ratio. These results can provide the basic information for Maillard reaction of aqueous model system between glucosamine and cysteine.

  17. Joint reconstruction of the initial pressure and speed of sound distributions from combined photoacoustic and ultrasound tomography measurements

    NASA Astrophysics Data System (ADS)

    Matthews, Thomas P.; Anastasio, Mark A.

    2017-12-01

    The initial pressure and speed of sound (SOS) distributions cannot both be stably recovered from photoacoustic computed tomography (PACT) measurements alone. Adjunct ultrasound computed tomography (USCT) measurements can be employed to estimate the SOS distribution. Under the conventional image reconstruction approach for combined PACT/USCT systems, the SOS is estimated from the USCT measurements alone and the initial pressure is estimated from the PACT measurements by use of the previously estimated SOS. This approach ignores the acoustic information in the PACT measurements and may require many USCT measurements to accurately reconstruct the SOS. In this work, a joint reconstruction method where the SOS and initial pressure distributions are simultaneously estimated from combined PACT/USCT measurements is proposed. This approach allows accurate estimation of both the initial pressure distribution and the SOS distribution while requiring few USCT measurements.

  18. Method of validating measurement data of a process parameter from a plurality of individual sensor inputs

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1998-01-01

    A method for generating a validated measurement of a process parameter at a point in time by using a plurality of individual sensor inputs from a scan of said sensors at said point in time. The sensor inputs from said scan are stored and a first validation pass is initiated by computing an initial average of all stored sensor inputs. Each sensor input is deviation checked by comparing each input, including a preset tolerance, against the initial average input. If the first deviation check is unsatisfactory, the sensor which produced the unsatisfactory input is flagged as suspect. It is then determined whether at least two of the inputs have not been flagged as suspect and are therefore considered good inputs. If two or more inputs are good, a second validation pass is initiated by computing a second average of all the good sensor inputs, and deviation checking the good inputs by comparing each good input, including a preset tolerance, against the second average. If the second deviation check is satisfactory, the second average is displayed as the validated measurement and the suspect sensor is flagged as bad. A validation fault occurs if at least two inputs are not considered good, or if the second deviation check is not satisfactory. In the latter situation the inputs from each of all the sensors are compared against the last validated measurement and the value from the sensor input that deviates the least from the last valid measurement is displayed.
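
    A minimal sketch of the two-pass validation logic described above; the tolerance handling and the fault fallback are simplified from the patent text:

    ```python
    def validate_scan(inputs, tol, last_valid):
        """Return (validated_value, bad_sensors), or fall back to the input
        nearest the last validated measurement on a validation fault."""
        first_avg = sum(inputs) / len(inputs)
        # Pass 1: flag any sensor deviating from the initial average beyond tolerance.
        good = [v for v in inputs if abs(v - first_avg) <= tol]
        suspect = [v for v in inputs if abs(v - first_avg) > tol]
        if len(good) >= 2:
            # Pass 2: re-average the good inputs and deviation-check them again.
            second_avg = sum(good) / len(good)
            if all(abs(v - second_avg) <= tol for v in good):
                return second_avg, suspect  # suspects are now confirmed bad
        # Validation fault: use the input closest to the last validated measurement.
        return min(inputs, key=lambda v: abs(v - last_valid)), []

    print(validate_scan([10.1, 10.2, 9.9, 14.7], tol=0.5, last_valid=10.0))
    ```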

  19. 19 CFR 181.75 - Issuance of origin determination.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... origin verification initiated under § 181.72(a) of this part in regard to a good imported into the United... the origin verification, Customs shall provide the exporter or producer whose good is the subject of the verification with a written determination of whether the good qualifies as an originating good...

  20. A new hydrological model for estimating extreme floods in the Alps

    NASA Astrophysics Data System (ADS)

    Receanu, R. G.; Hertig, J.-A.; Fallot, J.-M.

    2012-04-01

    Protection against flooding is very important for a country like Switzerland, with its varied topography and many rivers and lakes. Because of the potential danger caused by extreme precipitation, the structural and functional safety of large dams must be guaranteed to withstand the passage of an extreme flood. We introduce a new distributed hydrological model to calculate the PMF from a PMP which is spatially and temporally distributed using clouds. This model permits the estimation of extreme floods based on the distributed PMP while taking into account the specifics of alpine catchments, in particular the small size of the basins, the complex topography, the large lakes, snowmelt and glaciers. This is an important advance over other models described in the literature, as they mainly use a uniform distribution of extreme precipitation over the whole watershed. This paper presents the results of calculations with the developed rainfall-runoff model, taking measured rainfall into account and comparing results to observed flood events. The model includes three parts: surface runoff, underground flow and snowmelt. Two Swiss watersheds are studied, for which rainfall data and flow rates are available for a considerably long period, including several episodes of heavy rainfall with high flow events. From these events, several simulations are performed to estimate the input model parameters, such as soil roughness and the average width of rivers, in the case of surface runoff. Following the same procedure, the parameters used in the underground flow simulation are also estimated indirectly, since direct underground flow and exfiltration measurements are difficult to obtain. A sensitivity analysis of the parameters is performed as a first step to define the boundary and initial conditions more precisely. The results for the two alpine basins, validated with the Nash-Sutcliffe criterion, show a good correlation between the simulated and observed flows. This good correlation shows that the model is valid and gives us confidence that the results can be extrapolated to extreme rainfall phenomena of PMP type.
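
    The validation measure referred to here, the Nash-Sutcliffe model efficiency, is straightforward to compute; a minimal sketch with hypothetical observed and simulated discharge series:

      import numpy as np

      def nash_sutcliffe(obs, sim):
          """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is a perfect fit."""
          obs, sim = np.asarray(obs, float), np.asarray(sim, float)
          return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

      q_obs = [12.0, 35.0, 80.0, 64.0, 30.0, 18.0]   # observed discharge (m3/s)
      q_sim = [10.0, 35.0, 74.0, 69.0, 33.0, 16.0]   # simulated discharge
      print(nash_sutcliffe(q_obs, q_sim))            # close to 1 -> good agreement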

  1. Standardized shrinking LORETA-FOCUSS (SSLOFO): a new algorithm for spatio-temporal EEG source reconstruction.

    PubMed

    Liu, Hesheng; Schimpf, Paul H; Dong, Guoya; Gao, Xiaorong; Yang, Fusheng; Gao, Shangkai

    2005-10-01

    This paper presents a new algorithm called Standardized Shrinking LORETA-FOCUSS (SSLOFO) for solving the electroencephalogram (EEG) inverse problem. Multiple techniques are combined in a single procedure to robustly reconstruct the underlying source distribution with high spatial resolution. This algorithm uses a recursive process which takes the smooth estimate of sLORETA as initialization and then employs the re-weighted minimum norm introduced by FOCUSS. An important technique called standardization is involved in the recursive process to enhance the localization ability. The algorithm is further improved by automatically adjusting the source space according to the estimate of the previous step, and by the inclusion of temporal information. Simulation studies are carried out on both spherical and realistic head models. The algorithm achieves very good localization ability on noise-free data. It is capable of recovering complex source configurations with arbitrary shapes and can produce high quality images of extended source distributions. We also characterized the performance with noisy data in a realistic head model. An important feature of this algorithm is that the temporal waveforms are clearly reconstructed, even for closely spaced sources. This provides a convenient way to estimate neural dynamics directly from the cortical sources.
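
    A minimal sketch of the FOCUSS-style re-weighted minimum-norm recursion at the core of the procedure; the standardization step and the source-space shrinking are omitted, and the lead field L, measurements b, and smooth initializer are hypothetical stand-ins for the sLORETA quantities.

      import numpy as np

      def focuss(L, b, s0, n_iter=10, lam=1e-6):
          """Each iterate re-weights the columns of L by the previous source
          estimate, which progressively focalizes the minimum-norm solution."""
          s = s0.copy()
          for _ in range(n_iter):
              W = np.diag(np.abs(s))                     # weights from last estimate
              A = L @ W
              gram = A @ A.T + lam * np.eye(len(b))      # regularized Gram matrix
              s = W @ (A.T @ np.linalg.solve(gram, b))   # weighted minimum norm
          return s

      rng = np.random.default_rng(0)
      L = rng.normal(size=(32, 200))                     # toy lead field
      s_true = np.zeros(200); s_true[[40, 120]] = 1.0    # two focal sources
      b = L @ s_true
      s0 = L.T @ b / np.linalg.norm(L.T @ b)             # smooth initial estimate
      print(np.argsort(np.abs(focuss(L, b, s0)))[-2:])   # largest-magnitude sources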

  2. Nonlinear observation of internal states of fuel cell cathode utilizing a high-order sliding-mode algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Liangfei; Hu, Junming; Cheng, Siliang; Fang, Chuan; Li, Jianqiu; Ouyang, Minggao; Lehnert, Werner

    2017-07-01

    A scheme for designing a second-order sliding-mode (SOSM) observer that estimates critical internal states on the cathode side of a polymer electrolyte membrane (PEM) fuel cell system is presented. A nonlinear, isothermal dynamic model for the cathode side and a membrane electrolyte assembly is first described. A nonlinear observer topology based on an SOSM algorithm is then introduced, and the equations for the SOSM observer are deduced. Online calculation of the inverse matrix produces numerical errors, so a modified matrix is introduced to eliminate their negative effects on the observer. The simulation results indicate that the SOSM observer performs well for the gas partial pressures and air stoichiometry. The estimation results follow the simulated values in the model with relative errors within ±2% in steady state. Large errors occur during fast dynamic processes (<1 s). Moreover, the nonlinear observer shows good robustness against variations in the initial values of the internal states, but less robustness against variations in system parameters. The partial pressures are more sensitive than the air stoichiometry to the system parameters. Finally, the ranking of the effects of parameter uncertainties on the estimation results is outlined and analyzed.
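
    The fuel-cell observer itself is model-specific, but the flavor of a second-order sliding-mode algorithm can be shown with the super-twisting construction, a standard SOSM building block; here it is used as a robust differentiator on a toy signal, and the gains are hypothetical rather than taken from the paper.

      import numpy as np

      def super_twisting(y, dt, k1=6.0, k2=8.0):
          """Super-twisting SOSM estimator of the derivative of a sampled signal y."""
          z0, z1 = y[0], 0.0
          est = []
          for yk in y:
              e = z0 - yk                                   # sliding variable
              z0 += dt * (z1 - k1 * np.sqrt(abs(e)) * np.sign(e))
              z1 += dt * (-k2 * np.sign(e))
              est.append(z1)
          return np.array(est)

      t = np.arange(0.0, 5.0, 1e-3)
      dy = super_twisting(np.sin(t), 1e-3)     # converges toward cos(t)
      print(round(dy[-1], 2), round(np.cos(t[-1]), 2))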

  3. The Beta-Geometric Model Applied to Fecundability in a Sample of Married Women

    NASA Astrophysics Data System (ADS)

    Adekanmbi, D. B.; Bamiduro, T. A.

    2006-10-01

    The time required to achieve pregnancy among married couples, termed fecundability, has been proposed to follow a beta-geometric distribution. The accuracy of the method used in estimating the parameters of the model affects the goodness of fit of the model. In this study, the parameters of the model are estimated using the method of moments and the Newton-Raphson estimation procedure. The goodness of fit of the model was assessed using estimates from the two methods of estimation, as well as the asymptotic relative efficiency of the estimates. A noticeable improvement in the fit of the model to the data on time to conception was observed when the parameters were estimated by the Newton-Raphson procedure, thereby yielding reasonable expectations of fecundability for the married female population in the country.
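
    A minimal sketch of likelihood-based fitting for the beta-geometric model, in which the per-cycle conception probability is Beta(a, b) distributed and P(X = x) = B(a+1, b+x-1)/B(a, b) for x = 1, 2, ...; the cycle counts are hypothetical, and scipy's BFGS stands in for a hand-coded Newton-Raphson.

      import numpy as np
      from scipy.special import betaln
      from scipy.optimize import minimize

      def neg_loglik(theta, x):
          a, b = np.exp(theta)        # log-parametrization keeps both positive
          return -np.sum(betaln(a + 1.0, b + x - 1.0) - betaln(a, b))

      # Hypothetical cycles-to-conception data
      x = np.array([1]*30 + [2]*18 + [3]*12 + [4]*8 + [6]*5 + [9]*3)
      res = minimize(neg_loglik, x0=np.log([2.0, 3.0]), args=(x,), method="BFGS")
      a_hat, b_hat = np.exp(res.x)
      print("mean fecundability:", a_hat / (a_hat + b_hat))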

  4. Effects of a supplemented hypoproteic diet in chronic kidney disease.

    PubMed

    Mircescu, Gabriel; Gârneaţă, Liliana; Stancu, Simona Hildegard; Căpuşă, Cristina

    2007-05-01

    We assessed the effect of a severe hypoproteic diet supplemented with ketoanalogues (SVLPD) for 48 weeks on certain metabolic disorders of chronic kidney disease (CKD). We performed a prospective, open-label, parallel, randomized, controlled trial. The study took place in the Nephrology Department at the Dr Carol Davila Teaching Hospital of Nephrology, Bucharest, Romania. A total of 53 nondiabetic patients with CKD with an estimated glomerular filtration rate less than 30 mL/min/1.73 m² (Modification of Diet in Renal Disease formula), proteinuria less than 1 g/g urinary creatinine, good nutritional status, and anticipated good compliance with the diet were randomly assigned to two groups. Group I (n = 27) received the SVLPD (0.3 g/kg/d of vegetable proteins and ketoanalogues, 1 capsule for every 5 kg of ideal body weight per day). Group II (n = 26) continued a conventional low mixed protein diet (0.6 g/kg/d). Retention of nitrogen waste products and calcium-phosphorus and acid-base disturbances were primary efficacy parameters, and "death" of the kidney or the patient and the estimated glomerular filtration rate were secondary efficacy parameters. The nutritional status and compliance with the diet were predefined as safety variables. There were no differences between groups in any parameter at baseline. In the SVLPD group, serum urea significantly decreased (56 ± 7.9 mmol/L vs. 43.2 ± 10 mmol/L), and significant improvements in serum bicarbonate (23.4 ± 2.1 mmol/L vs. 18.1 ± 1.5 mmol/L), serum calcium (1.10 ± 0.17 mmol/L vs. 1.00 ± 0.15 mmol/L at baseline), serum phosphates (1.45 ± 0.66 mmol/L vs. 1.91 ± 0.68 mmol/L), and calcium-phosphorus product (1.59 ± 0.11 mmol²/L² vs. 1.91 ± 0.10 mmol²/L²) were noted after 48 weeks. No death was registered in either group. A significantly lower percentage of patients in group I required initiation of renal replacement therapy (4% vs. 27%). After 48 weeks, the estimated glomerular filtration rate did not change significantly in patients receiving SVLPD (0.26 ± 0.08 mL/s vs. 0.31 ± 0.08 mL/s at baseline), but significantly decreased in controls (0.22 ± 0.09 mL/s vs. 0.30 ± 0.07 mL/s). Compliance with the keto-diet was good in enrolled patients. No significant changes in any of the parameters of nutritional status and no adverse reactions were noted. SVLPD seems to ameliorate the retention of nitrogen waste products and the disturbances of acid-base and calcium-phosphorus metabolism, and to postpone the initiation of renal replacement therapy, preserving the nutritional status in patients with CKD.

  5. Hybrid diversity method utilizing adaptive diversity function for recovering unknown aberrations in an optical system

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H. (Inventor)

    2009-01-01

    A method of recovering unknown aberrations in an optical system includes collecting intensity data produced by the optical system, generating an initial estimate of a phase of the optical system, iteratively performing a phase retrieval on the intensity data to generate a phase estimate using an initial diversity function corresponding to the intensity data, generating a phase map from the phase retrieval phase estimate, decomposing the phase map to generate a decomposition vector, generating an updated diversity function by combining the initial diversity function with the decomposition vector, generating an updated estimate of the phase of the optical system by removing the initial diversity function from the phase map. The method may further include repeating the process beginning with iteratively performing a phase retrieval on the intensity data using the updated estimate of the phase of the optical system in place of the initial estimate of the phase of the optical system, and using the updated diversity function in place of the initial diversity function, until a predetermined convergence is achieved.
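
    The patent's hybrid diversity updates are specific to the invention, but the inner step, iteratively enforcing measured intensities to retrieve a phase estimate, can be sketched with a classic error-reduction (Gerchberg-Saxton) loop; the array sizes and test field below are hypothetical.

      import numpy as np

      def phase_retrieval(a_in, a_out, n_iter=200, seed=0):
          """Alternate between planes, imposing the measured amplitudes in each
          and keeping the current phase estimate (error-reduction iteration)."""
          rng = np.random.default_rng(seed)
          phase = rng.uniform(-np.pi, np.pi, a_in.shape)   # initial phase estimate
          for _ in range(n_iter):
              far = np.fft.fft2(a_in * np.exp(1j * phase))
              far = a_out * np.exp(1j * np.angle(far))     # impose output amplitude
              phase = np.angle(np.fft.ifft2(far))          # keep phase, impose input
          return phase

      rng = np.random.default_rng(1)
      true_field = np.exp(1j * 2 * np.pi * rng.random((32, 32)))
      a_in, a_out = np.abs(true_field), np.abs(np.fft.fft2(true_field))
      est = phase_retrieval(a_in, a_out)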

  6. Reparametrization-based estimation of genetic parameters in multi-trait animal model using Integrated Nested Laplace Approximation.

    PubMed

    Mathew, Boby; Holand, Anna Marie; Koistinen, Petri; Léon, Jens; Sillanpää, Mikko J

    2016-02-01

    A novel reparametrization-based INLA approach is presented as a fast alternative to MCMC for the Bayesian estimation of genetic parameters in a multivariate animal model. Multi-trait genetic parameter estimation is a relevant topic in animal and plant breeding programs because multi-trait analysis can take into account the genetic correlation between different traits, which significantly improves the accuracy of the genetic parameter estimates. Generally, multi-trait analysis is computationally demanding and requires initial estimates of the genetic and residual correlations among the traits, which are difficult to obtain. In this study, we illustrate how to reparametrize the covariance matrices of a multivariate animal model using modified Cholesky decompositions. This reparametrization-based approach is used in the Integrated Nested Laplace Approximation (INLA) methodology to estimate the genetic parameters of a multivariate animal model. The immediate benefits are: (1) it avoids the difficulty of finding good starting values for the analysis, which can be a problem, for example, in Restricted Maximum Likelihood (REML); (2) Bayesian estimation of (co)variance components using INLA is faster to execute than Markov Chain Monte Carlo (MCMC), especially when realized relationship matrices are dense. The slight drawback is that priors for the covariance matrices are assigned to elements of the Cholesky factor but not directly to the covariance matrix elements, as in MCMC. Additionally, we illustrate the concordance of the INLA results with traditional methods such as MCMC and REML. We also present results obtained from simulated data sets with replicates and from field data in rice.
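
    The key trick, mapping an unconstrained parameter vector to a valid covariance matrix through a Cholesky factor, can be sketched as follows for a hypothetical three-trait case; the log transform on the diagonal keeps the factor positive, so any input vector yields a symmetric positive-definite matrix.

      import numpy as np

      def theta_to_cov(theta, n=3):
          """Unconstrained vector (length n(n+1)/2) -> SPD covariance matrix."""
          L = np.zeros((n, n))
          L[np.tril_indices(n)] = theta          # fill the lower triangle
          d = np.diag_indices(n)
          L[d] = np.exp(L[d])                    # exponentiate the diagonal
          return L @ L.T

      theta = np.array([0.1, 0.3, -0.2, 0.2, 0.5, 0.0])   # 6 free parameters
      G = theta_to_cov(theta)                    # e.g., a genetic covariance matrix
      print(np.linalg.eigvalsh(G) > 0)           # always positive definite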

  7. Numerical study of viscous dissipation during single drop impact on wetted surfaces

    NASA Astrophysics Data System (ADS)

    An, Yi; Yang, Shihao; Liu, Qingquan

    2017-11-01

    The splashing crown caused by the impact of a drop on a liquid film has been studied extensively since Yarin and Weiss (JFM 1995). The motion of the crown base is believed to be kinematic, which results in the equation R = (2/(3H))^(1/4) (T - T0)^(1/2). This equation is believed to overestimate the crown size by about 15%, while Trujillo and Lee (PoF 2001) found the influence of the Reynolds number not to be notable. Considering the dissipation in the initial stage of the impact, Gao and Li (PRE, 2015) obtained a well-validated equation. However, how to estimate the dissipation is still worth some detailed discussion. We carried out a series of VOF simulations with a special focus on the influence of viscosity. The simulation is based on the Basilisk code to utilize adaptive mesh refinement. We found that the role of dissipation could be divided into three stages. When T > 1, the commonly used shallow-water equation provides a good approximation, provided the initial condition is treated properly. Between these two stages, the viscous dissipation is the governing factor and thus causes inaccurate estimation of the crown base motion in the third stage. This work was financially supported by the National Natural Science Foundation of China (No. 11672310, No. 11372326).

  8. Construction of a memory battery for computerized administration, using item response theory.

    PubMed

    Ferreira, Aristides I; Almeida, Leandro S; Prieto, Gerardo

    2012-10-01

    In accordance with Item Response Theory, a computerized memory battery with six tests was constructed for use in the Portuguese adult population. A factor analysis was conducted to assess the internal structure of the tests (N = 547 undergraduate students). Following the literature, several confirmatory factor models were evaluated. Results showed a better fit for a model with two independent latent variables corresponding to verbal and non-verbal factors, reproducing the initial battery organization. Internal consistency reliabilities for the six tests ranged from alpha = .72 to .89. IRT analyses (Rasch and partial credit models) yielded good Infit and Outfit measures and high precision for parameter estimation. The potential utility of these memory tasks for psychological research and practice will be discussed.

  9. Multispectral processing based on groups of resolution elements

    NASA Technical Reports Server (NTRS)

    Richardson, W.; Gleason, J. M.

    1975-01-01

    Several nine-point rules are defined and compared with previously studied rules. One of the rules performed well in boundary areas, but with reduced efficiency in field interiors; another combined best performance on field interiors with good sensitivity to boundary detail. The basic threshold gradient and some modifications were investigated as a means of boundary point detection. The hypothesis testing methods of closed-boundary formation were also tested and evaluated. An analysis of the boundary detection problem was initiated, employing statistical signal detection and parameter estimation techniques to analyze various formulations of the problem. These formulations permit the atmospheric and sensor system effects on the data to be thoroughly analyzed. Various boundary features and necessary assumptions can also be investigated in this manner.

  10. Parameter estimation techniques based on optimizing goodness-of-fit statistics for structural reliability

    NASA Technical Reports Server (NTRS)

    Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.

    1993-01-01

    New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum parameter values. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
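
    A minimal sketch of the approach, under stated assumptions: synthetic failure data, the Kolmogorov-Smirnov statistic as the objective, and scipy's Powell method as the derivative-free minimizer over the three Weibull parameters (shape, location, scale).

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import weibull_min

      def ks_stat(params, data):
          shape, loc, scale = params
          if shape <= 0 or scale <= 0 or loc >= data.min():
              return 1.0                          # outside the valid parameter region
          cdf = weibull_min.cdf(np.sort(data), shape, loc=loc, scale=scale)
          n = len(data)
          return max(np.max(np.arange(1, n + 1) / n - cdf),   # D+ part of the
                     np.max(cdf - np.arange(0, n) / n))       # K-S discrepancy, D-

      data = weibull_min.rvs(1.8, loc=5.0, scale=100.0, size=60, random_state=3)
      res = minimize(ks_stat, x0=[1.0, 0.0, float(np.mean(data))],
                     args=(data,), method="Powell")
      print("shape, location, scale:", res.x)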

  11. Noninvasive reconstruction of the three-dimensional ventricular activation sequence during pacing and ventricular tachycardia in the canine heart.

    PubMed

    Han, Chengzong; Pogwizd, Steven M; Killingsworth, Cheryl R; He, Bin

    2012-01-01

    Single-beat imaging of myocardial activation promises to aid in both cardiovascular research and clinical medicine. In the present study we validate a three-dimensional (3D) cardiac electrical imaging (3DCEI) technique with the aid of simultaneous 3D intracardiac mapping to assess its capability to localize endocardial and epicardial initiation sites and image global activation sequences during pacing and ventricular tachycardia (VT) in the canine heart. Body surface potentials were measured simultaneously with bipolar electrical recordings in a closed-chest condition in healthy canines. Computed tomography images were obtained after the mapping study to construct realistic geometry models. Data analysis was performed on paced rhythms and VTs induced by norepinephrine (NE). The noninvasively reconstructed activation sequence was in good agreement with the simultaneous measurements from 3D cardiac mapping with a correlation coefficient of 0.74 ± 0.06, a relative error of 0.29 ± 0.05, and a root mean square error of 9 ± 3 ms averaged over 460 paced beats and 96 ectopic beats including premature ventricular complexes, couplets, and nonsustained monomorphic VTs and polymorphic VTs. Endocardial and epicardial origins of paced beats were successfully predicted in 72% and 86% of cases, respectively, during left ventricular pacing. The NE-induced ectopic beats initiated in the subendocardium by a focal mechanism. Sites of initial activation were estimated to be ∼7 mm from the measured initiation sites for both the paced beats and ectopic beats. For the polymorphic VTs, beat-to-beat dynamic shifts of initiation site and activation pattern were characterized by the reconstruction. The present results suggest that 3DCEI can noninvasively image the 3D activation sequence and localize the origin of activation of paced beats and NE-induced VTs in the canine heart with good accuracy. This 3DCEI technique offers the potential to aid interventional therapeutic procedures for treating ventricular arrhythmias arising from epicardial or endocardial sites and to noninvasively assess the mechanisms of these arrhythmias.

  13. Designing Stories for Educational Video Games: Analysis and Evaluation

    ERIC Educational Resources Information Center

    López-Arcos, J. R.; Padilla-Zea, N.; Paderewski, P.; Gutiérrez, F. L.

    2017-01-01

    The use of video games as an educational tool initially produces a higher degree of motivation in students. However, the inclusion of educational activities throughout the game can cause this initial interest to be lost. A good way to maintain motivation is to use a good story as a guiding thread with which to contextualize the other…

  14. Co1 DNA supports conspecificity of Geomyphilus pierai and G. barrerai (Coleoptera, Scarabaeidae, Aphodiinae) and is a good marker for their phylogeographic investigation in Mexican mountains

    PubMed Central

    Arriaga-Jiménez, Alfonsina; Roy, Lise

    2015-01-01

    Members of Geomyphilus are associated with rodent burrows, such as those of pocket gophers and prairie dogs. In Mexico, they are found in the mountains of the Mexican Volcanic Belt and in the Sierra Madre Oriental. Our study aims to initiate the exploration of the dispersal modes of Geomyphilus pierai and Geomyphilus barrerai from burrows of pocket gophers. In order to estimate the dispersal scale of the beetles, the utility of mitochondrial and nuclear molecular markers for studying the phylogeographic structure of this complex of species (Geomyphilus pierai and Geomyphilus barrerai) was tested on 49 beetle individuals. High intraspecific and intra-mountain nucleotide diversity was captured from this sample using Co1 mitochondrial sequences, whilst the ITS2 nuclear ribosomal sequence did not show informative variation. Mitochondrial phylogenetic analysis revealed that the specific delineation between the two species under study was doubtful. In this preliminary study, Co1 was shown to be a good marker for elucidating the dispersal routes of these burrowing rodent-associated beetles. PMID:26257561

  15. Comparative assessment of techniques for initial pose estimation using monocular vision

    NASA Astrophysics Data System (ADS)

    Sharma, Sumant; D'Amico, Simone

    2016-06-01

    This work addresses the comparative assessment of initial pose estimation techniques for monocular navigation to enable formation-flying and on-orbit servicing missions. Monocular navigation relies on finding an initial pose, i.e., a coarse estimate of the attitude and position of the space resident object with respect to the camera, based on a minimum number of features from a three-dimensional computer model and a single two-dimensional image. The initial pose is estimated without the use of fiducial markers, without any range measurements, and without any a priori relative motion information. Prior work has compared different pose estimators for terrestrial applications, but there is a lack of functional and performance characterization of such algorithms in the context of missions involving rendezvous operations in the space environment. Use of state-of-the-art pose estimation algorithms designed for terrestrial applications is challenging in space due to factors such as limited on-board processing power, low carrier-to-noise ratio, and high image contrasts. This paper focuses on the performance characterization of three initial pose estimation algorithms in the context of such missions and suggests improvements.
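
    As an illustration of the problem being benchmarked, a coarse pose from a handful of 2D-3D feature correspondences, here is a minimal sketch using OpenCV's EPnP solver; the model points, detections, and camera intrinsics are hypothetical, and this is not necessarily one of the three algorithms assessed in the paper.

      import numpy as np
      import cv2

      # Hypothetical 3D model features (meters) and their 2D detections (pixels)
      model_pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                            [0, 0, 1], [1, 1, 0], [1, 0, 1]], dtype=float)
      image_pts = np.array([[320, 240], [410, 238], [322, 150],
                            [318, 260], [412, 148], [408, 258]], dtype=float)
      K = np.array([[800.0, 0.0, 320.0],      # assumed pinhole intrinsics
                    [0.0, 800.0, 240.0],
                    [0.0, 0.0, 1.0]])

      ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, None,
                                    flags=cv2.SOLVEPNP_EPNP)
      print(ok, rvec.ravel(), tvec.ravel())   # coarse attitude and position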

  16. Evaluation of measles and rubella integrated surveillance system in Apulia region, Italy, 3 years after its introduction.

    PubMed

    Turiac, I A; Fortunato, F; Cappelli, M G; Morea, A; Chironna, M; Prato, Rosa; Martinelli, D

    2018-04-01

    This study aimed at evaluating the integrated measles and rubella surveillance system (IMRSS) in the Apulia region, Italy, from its introduction in 2013 to 30 June 2016. Measles and rubella case reports were extracted from IMRSS. We estimated system sensitivity at the level of case reporting, using the capture-recapture method for three data sources. Data quality was described as the completeness of variables, and timeliness of notification as the median time interval from symptom onset to initial alert. The proportion of suspected cases with laboratory investigation, the rate of discarded cases, and the origin of infection were also computed. A total of 127 measles and four rubella suspected cases were reported to IMRSS, and 82 were laboratory confirmed. Focusing our analysis on measles, IMRSS sensitivity was 82% (95% CI: 75-87). Completeness was >98% for mandatory variables and 57% for 'genotyping'. The median time interval from symptom onset to initial alert was 4.5 days, with a timeliness of notification of 33% (41 cases reported ⩽48 h). The proportion of cases with laboratory investigation was 87%. The rate of discarded cases was 0.1 per 100,000 inhabitants per year. The origin of infection was identified for 85% of cases. It is concluded that IMRSS provides good-quality data and has good sensitivity; still, efforts should be made to improve the completeness of laboratory-related variables and timeliness, and to increase the rate of discarded cases.

  17. Technical Excellence: A Requirement for Good Engineering

    NASA Technical Reports Server (NTRS)

    Gill, Paul S.; Vaughan, William W.

    2008-01-01

    Technical excellence is a requirement for good engineering, and it has many different ways of expressing itself within engineering. NASA has initiatives that address the enhancement of the Agency's technical excellence and that aim to maintain the associated high level of performance on current programs/projects as the Agency moves into the Constellation Program and the return to the Moon, with plans to visit Mars. This paper addresses some of the key initiatives associated with NASA's technical excellence thrust. Examples are provided to illustrate some results being achieved and plans to enhance these initiatives.

  18. Early adolescent adversity inflates threat estimation in females and promotes alcohol use initiation in both sexes.

    PubMed

    Walker, Rachel A; Andreansky, Christopher; Ray, Madelyn H; McDannald, Michael A

    2018-06-01

    Childhood adversity is associated with exaggerated threat processing and earlier alcohol use initiation. Conclusive links remain elusive, as childhood adversity typically co-occurs with detrimental socioeconomic factors, and its impact is likely moderated by biological sex. To unravel the complex relationships among childhood adversity, sex, threat estimation, and alcohol use initiation, we exposed female and male Long-Evans rats to early adolescent adversity (EAA). In adulthood, >50 days following the last adverse experience, threat estimation was assessed using a novel fear discrimination procedure in which cues predict a unique probability of footshock: danger (p = 1.00), uncertainty (p = .25), and safety (p = .00). Alcohol use initiation was assessed using voluntary access to 20% ethanol, >90 days following the last adverse experience. During development, EAA slowed body weight gain in both females and males. In adulthood, EAA selectively inflated female threat estimation, exaggerating fear to uncertainty and safety, but promoted alcohol use initiation across sexes. Meaningful relationships between threat estimation and alcohol use initiation were not observed, underscoring the independent effects of EAA. Results isolate the contribution of EAA to adult threat estimation, alcohol use initiation, and reveal moderation by biological sex. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  19. Which response format reveals the truth about donations to a public good?

    Treesearch

    Thomas C. Brown; Patricia A. Champ; Richard C. Bishop; Daniel W. McCollum

    1996-01-01

    Several contingent valuation studies have found that the open-ended format yields lower estimates of willingness to pay (WTP) than does the closed-ended, or dichotomous choice, format. In this study, WTP for a public environmental good was estimated under four conditions: actual payment in response to open-ended and closed-ended requests, and hypothetical payment in...

  20. Deuterium fractionation and H2D+ evolution in turbulent and magnetized cloud cores

    NASA Astrophysics Data System (ADS)

    Körtgen, Bastian; Bovino, Stefano; Schleicher, Dominik R. G.; Giannetti, Andrea; Banerjee, Robi

    2017-08-01

    High-mass stars are expected to form from dense prestellar cores. Their precise formation conditions are widely discussed, including their virial condition, which results in slow collapse for supervirial cores with strong support by turbulence or magnetic fields, or fast collapse for subvirial sources. To disentangle their formation processes, measurements of the deuterium fractions are frequently employed to approximately estimate the ages of these cores and to obtain constraints on their dynamical evolution. We here present 3D magnetohydrodynamical simulations including for the first time an accurate non-equilibrium chemical network with 21 gas-phase species plus dust grains and 213 reactions. With this network we model the deuteration process in fully depleted prestellar cores in great detail and determine its response to variations in the initial conditions. We explore the dependence on the initial gas column density, the turbulent Mach number, the mass-to-magnetic flux ratio and the distribution of the magnetic field, as well as the initial ortho-to-para ratio (OPR) of H2. We find qualitatively good agreement with recent observations of deuterium fractions in quiescent sources. Our results show that deuteration is rather efficient, even when assuming a conservative OPR of 3 and highly subvirial initial conditions, leading to large deuterium fractions already within roughly a free-fall time. We discuss the implications of our results and give an outlook to relevant future investigations.

  1. Use of a macroinvertebrate based biotic index to estimate critical metal concentrations for good ecological water quality.

    PubMed

    Van Ael, Evy; De Cooman, Ward; Blust, Ronny; Bervoets, Lieven

    2015-01-01

    Large datasets of total and dissolved metal concentrations in Flemish (Belgium) fresh water systems and the associated macroinvertebrate-based biotic index MMIF (Multimetric Macroinvertebrate Index Flanders) were used to estimate critical metal concentrations for good ecological water quality, as required by the European Water Framework Directive (2000). The contributions of different stressors (metals and water characteristics) to the MMIF were studied by constructing generalized linear mixed-effect models. Comparison between the estimated critical concentrations and the European and Flemish EQS shows that the EQS for As, Cd, Cu and Zn seem to be sufficient to reach a good ecological quality status as expressed by the invertebrate-based biotic index. In contrast, the EQS for Cr, Hg and Pb are higher than the estimated critical concentrations, which suggests that when environmental concentrations are at the same level as the EQS, a good quality status might not be reached. The construction of mixed models that included metal concentrations in their structure did not lead to a significant outcome. However, the mixed models showed the primary importance of water characteristics (oxygen level, temperature, ammonium concentration and conductivity) for the MMIF. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. 77 FR 98 - Commission Information Collection Activities, Proposed Collection (FERC-716); Comment Request...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-03

    ... requirements of FERC-716 (``Good Faith Request for Transmission Service and Response by Transmitting Utility..., provide standards by which the Commission determines if and when a valid good faith request for... 12 components of a good faith estimate and 5 components of a reply to a good faith request. Action...

  3. Rediscovery of Good-Turing estimators via Bayesian nonparametrics.

    PubMed

    Favaro, Stefano; Nipoti, Bernardo; Teh, Yee Whye

    2016-03-01

    The problem of estimating discovery probabilities originated in the context of statistical ecology, and in recent years it has become popular due to its frequent appearance in challenging applications arising in genetics, bioinformatics, linguistics, designs of experiments, machine learning, etc. A full range of statistical approaches, parametric and nonparametric as well as frequentist and Bayesian, has been proposed for estimating discovery probabilities. In this article, we investigate the relationships between the celebrated Good-Turing approach, which is a frequentist nonparametric approach developed in the 1940s, and a Bayesian nonparametric approach recently introduced in the literature. Specifically, under the assumption of a two parameter Poisson-Dirichlet prior, we show that Bayesian nonparametric estimators of discovery probabilities are asymptotically equivalent, for a large sample size, to suitably smoothed Good-Turing estimators. As a by-product of this result, we introduce and investigate a methodology for deriving exact and asymptotic credible intervals to be associated with the Bayesian nonparametric estimators of discovery probabilities. The proposed methodology is illustrated through a comprehensive simulation study and the analysis of Expressed Sequence Tags data generated by sequencing a benchmark complementary DNA library. © 2015, The International Biometric Society.
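
    The basic Good-Turing quantity behind all of this, the probability that the next draw is a species not yet seen, has a one-line estimator; a sketch without the smoothing that the article studies:

      from collections import Counter

      def gt_missing_mass(sample):
          """Good-Turing estimate of the unseen-species probability: P0 = n1/n,
          where n1 is the number of species observed exactly once."""
          counts = Counter(sample)
          n1 = sum(1 for c in counts.values() if c == 1)
          return n1 / len(sample)

      # 'c' and 'd' are singletons among 11 observations -> P0 = 2/11
      print(gt_missing_mass(list("abracadabra")))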

  4. Deciphering the enigma of undetected species, phylogenetic, and functional diversity based on Good-Turing theory.

    PubMed

    Chao, Anne; Chiu, Chun-Huo; Colwell, Robert K; Magnago, Luiz Fernando S; Chazdon, Robin L; Gotelli, Nicholas J

    2017-11-01

    Estimating the species, phylogenetic, and functional diversity of a community is challenging because rare species are often undetected, even with intensive sampling. The Good-Turing frequency formula, originally developed for cryptography, estimates in an ecological context the true frequencies of rare species in a single assemblage based on an incomplete sample of individuals. Until now, this formula has never been used to estimate undetected species, phylogenetic, and functional diversity. Here, we first generalize the Good-Turing formula to incomplete sampling of two assemblages. The original formula and its two-assemblage generalization provide a novel and unified approach to notation, terminology, and estimation of undetected biological diversity. For species richness, the Good-Turing framework offers an intuitive way to derive the non-parametric estimators of the undetected species richness in a single assemblage, and of the undetected species shared between two assemblages. For phylogenetic diversity, the unified approach leads to an estimator of the undetected Faith's phylogenetic diversity (PD, the total length of undetected branches of a phylogenetic tree connecting all species), as well as a new estimator of undetected PD shared between two phylogenetic trees. For functional diversity based on species traits, the unified approach yields a new estimator of undetected Walker et al.'s functional attribute diversity (FAD, the total species-pairwise functional distance) in a single assemblage, as well as a new estimator of undetected FAD shared between two assemblages. Although some of the resulting estimators have been previously published (but derived with traditional mathematical inequalities), all taxonomic, phylogenetic, and functional diversity estimators are now derived under the same framework. All the derived estimators are theoretically lower bounds of the corresponding undetected diversities; our approach reveals the sufficient conditions under which the estimators are nearly unbiased, thus offering new insights. Simulation results are reported to numerically verify the performance of the derived estimators. We illustrate all estimators and assess their sampling uncertainty with an empirical dataset for Brazilian rain forest trees. These estimators should be widely applicable to many current problems in ecology, such as the effects of climate change on spatial and temporal beta diversity and the contribution of trait diversity to ecosystem multi-functionality. © 2017 by the Ecological Society of America.
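
    For the species-richness case, the non-parametric lower bound that falls out of this framework is the familiar Chao1 estimator, driven by the singleton and doubleton counts; a sketch with hypothetical abundances:

      def chao1(abundances):
          """Chao1 lower bound on true richness: S_obs + f1^2 / (2 f2), with the
          bias-corrected form f1(f1 - 1)/2 when no doubletons are observed."""
          s_obs = sum(1 for a in abundances if a > 0)
          f1 = sum(1 for a in abundances if a == 1)   # singletons
          f2 = sum(1 for a in abundances if a == 2)   # doubletons
          extra = f1 * (f1 - 1) / 2.0 if f2 == 0 else f1**2 / (2.0 * f2)
          return s_obs + extra

      print(chao1([5, 3, 2, 2, 1, 1, 1, 1]))   # 8 species observed -> estimate 12.0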

  5. Age model for a continuous, ca 250-ka Quaternary lacustrine record from Bear Lake, Utah-Idaho

    USGS Publications Warehouse

    Colman, Steven M.; Kaufman, D.S.; Bright, Jordon; Heil, C.; King, J.W.; Dean, W.E.; Rosenbaum, J.G.; Forester, R.M.; Bischoff, J.L.; Perkins, Marie; McGeehin, J.P.

    2006-01-01

    The Quaternary sediments sampled by continuous 120-m-long drill cores from Bear Lake (Utah-Idaho) comprise one of the longest lacustrine sequences recovered from an extant lake. The cores serve as a good case study for the construction of an age model for sequences that extend beyond the range of radiocarbon dating. From a variety of potential age indicators, we selected a combination of radiocarbon ages, one magnetic excursion (correlated to a standard sequence), and a single Uranium-series age to develop an initial data set. The reliability of the excursion and U-series data require consideration of their position with respect to sediments of inferred interglacial character, but not direct correlation with other paleoclimate records. Data omitted from the age model include amino acid age estimates, which have a large amount of scatter, and tephrochronology correlations, which have relatively large uncertainties. Because the initial data set was restricted to the upper half of the BL00-1 core, we inferred additional ages by direct correlation to the independently dated paleoclimate record from Devils Hole. We developed an age model for the entire core using statistical methods that consider both the uncertainties of the original data and that of the curve-fitting process, with a combination of our initial data set and the climate correlations as control points. This age model represents our best estimate of the chronology of deposition in Bear Lake. Because the age model contains assumptions about the correlation of Bear Lake to other climate records, the model cannot be used to address some paleoclimate questions, such as phase relationships with other areas.

  6. An early, novel illness severity score to predict outcome after cardiac arrest.

    PubMed

    Rittenberger, Jon C; Tisherman, Samuel A; Holm, Margo B; Guyette, Francis X; Callaway, Clifton W

    2011-11-01

    Illness severity scores are commonly employed in critically ill patients to predict outcome. To date, prior scores for post-cardiac arrest patients rely on some event-related data. We developed an early, novel post-arrest illness severity score to predict survival, good outcome, and development of multiple organ failure (MOF) after cardiac arrest. We performed a retrospective review of data from adults treated after in-hospital or out-of-hospital cardiac arrest in a single tertiary care facility between 1/1/2005 and 12/31/2009. In addition to clinical data, initial illness severity was measured using Sequential Organ Failure Assessment (SOFA) scores and Full Outline of UnResponsiveness (FOUR) scores at hospital or intensive care unit arrival. Outcomes were hospital mortality, good outcome (discharge to home or rehabilitation), and development of multiple organ failure (MOF). Single-variable logistic regression followed by Chi-squared automatic interaction detection (CHAID) was used to determine predictors of outcome. Stepwise multivariate logistic regression was used to determine the independent association between predictors and each outcome. The Hosmer-Lemeshow test was used to evaluate goodness of fit. The n-fold method was used to cross-validate each CHAID analysis, and the difference between the misclassification risk estimates was used to determine model fit. Complete data from 457/495 (92%) subjects identified distinct categories of illness severity using the combined FOUR motor and brainstem subscales and the combined SOFA cardiovascular and respiratory subscales: I. Awake; II. Moderate coma without cardiorespiratory failure; III. Moderate coma with cardiorespiratory failure; and IV. Severe coma. Survival was independently associated with category (I: OR 58.65; 95% CI 27.78, 123.82; II: OR 14.60; 95% CI 7.34, 29.02; III: OR 10.58; 95% CI 4.86, 23.00). Category was also similarly associated with good outcome and development of MOF. The proportion of subjects in each category changed over time. Initial illness severity explains much of the variation in cardiac arrest outcome. This model provides prognostic information at hospital arrival and may be used to stratify patients in future studies. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  7. Comparative analysis on the probability of being a good payer

    NASA Astrophysics Data System (ADS)

    Mihova, V.; Pavlov, V.

    2017-10-01

    Credit risk assessment is crucial for the banking industry. Current practice uses various approaches for the calculation of credit risk. At the core of these approaches is the use of multiple regression models, applied in order to assess the risk associated with the approval of people applying for certain products (loans, credit cards, etc.). Based on data from the past, these models try to predict what will happen in the future. Different data require different types of models. This work studies the causal link between the conduct of an applicant during repayment of the loan and the data that he completed at the time of application. A database of 100 borrowers from a commercial bank is used for the purposes of the study. The available data include information from the time of application and the credit history while paying off the loan. Customers are divided into two groups based on the credit history: good and bad payers. Linear and logistic regression are applied in parallel to the data in order to estimate the probability of being good for new borrowers. A variable that takes the value 1 for good borrowers and 0 for bad ones is modeled as the dependent variable. To decide which of the variables listed in the database should be used in the modelling process (as independent variables), a correlation analysis is performed. Based on its results, several combinations of independent variables are tested as initial models, both with linear and logistic regression. The best linear and logistic models are obtained after an initial transformation of the data and by following a set of standard and robust statistical criteria. A comparative analysis between the two final models is made, and scorecards are obtained from both models to assess new customers at the time of application. A cut-off level of points, below which to reject applications and above which to accept them, has been suggested for both models, applying the strategy of keeping the same accept rate as in the current data.
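
    A minimal sketch of the logistic branch of such a model, with hypothetical applicant features; the fitted probabilities play the role of the score, and the cut-off is chosen here to reproduce a target accept rate.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      # Hypothetical application data: income (thousands), age, number of loans
      X = rng.normal([30.0, 40.0, 1.0], [10.0, 12.0, 1.0], size=(100, 3))
      latent = 0.05 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(0.0, 1.0, 100)
      y = (latent > latent.mean()).astype(int)          # 1 = good payer, 0 = bad

      model = LogisticRegression().fit(X, y)
      p_good = model.predict_proba(X)[:, 1]             # probability of being good
      cutoff = np.quantile(p_good, 0.3)                 # keep a 70% accept rate
      accept = p_good >= cutoff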

  8. Approximate, computationally efficient online learning in Bayesian spiking neurons.

    PubMed

    Kuhlmann, Levin; Hauser-Raspe, Michael; Manton, Jonathan H; Grayden, David B; Tapson, Jonathan; van Schaik, André

    2014-03-01

    Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM) which is computationally slow and limits the potential of studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, the computational cost of ML-EM means that ML-EM takes longer to simulate to convergence than FL. FL also provides reasonable convergence performance that is robust to initialization of parameter estimates that are far from the true parameter values. However, parameter estimation depends on the range of true parameter values. Nevertheless, for a physiologically meaningful range of parameter values, FL gives very good average estimation accuracy, despite its approximate nature. The FL algorithm therefore provides an efficient tool, complementary to ML-EM, for exploring BSN networks in more detail in order to better understand their biological relevance. Moreover, the simplicity of the FL algorithm means it can be easily implemented in neuromorphic VLSI such that one can take advantage of the energy-efficient spike coding of BSNs.

  9. Estimating unbiased economies of scale of HIV prevention projects: a case study of Avahan.

    PubMed

    Lépine, Aurélia; Vassall, Anna; Chandrashekar, Sudha; Blanc, Elodie; Le Nestour, Alexis

    2015-04-01

    Governments and donors are investing considerable resources on HIV prevention in order to scale up these services rapidly. Given the current economic climate, providers of HIV prevention services increasingly need to demonstrate that these investments offer good 'value for money'. One of the primary routes to achieve efficiency is to take advantage of economies of scale (a reduction in the average cost of a health service as provision scales-up), yet empirical evidence on economies of scale is scarce. Methodologically, the estimation of economies of scale is hampered by several statistical issues preventing causal inference and thus making the estimation of economies of scale complex. In order to estimate unbiased economies of scale when scaling up HIV prevention services, we apply our analysis to one of the few HIV prevention programmes globally delivered at a large scale: the Indian Avahan initiative. We costed the project by collecting data from the 138 Avahan NGOs and the supporting partners in the first four years of its scale-up, between 2004 and 2007. We develop a parsimonious empirical model and apply a system Generalized Method of Moments (GMM) and fixed-effects Instrumental Variable (IV) estimators to estimate unbiased economies of scale. At the programme level, we find that, after controlling for the endogeneity of scale, the scale-up of Avahan has generated high economies of scale. Our findings suggest that average cost reductions per person reached are achievable when scaling-up HIV prevention in low and middle income countries. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Comparison of alternative spatial resolutions in the application of a spatially distributed biogeochemical model over complex terrain

    USGS Publications Warehouse

    Turner, D.P.; Dodson, R.; Marks, D.

    1996-01-01

    Spatially distributed biogeochemical models may be applied over grids at a range of spatial resolutions; however, evaluation of potential errors and loss of information at relatively coarse resolutions is rare. In this study, a georeferenced database at the 1-km spatial resolution was developed to initialize and drive a process-based model (Forest-BGC) of water and carbon balance over a gridded 54,976 km2 area covering two river basins in mountainous western Oregon. Corresponding data sets were also prepared at 10-km and 50-km spatial resolutions using commonly employed aggregation schemes. Estimates were made at each grid cell for climate variables including daily solar radiation, air temperature, humidity, and precipitation. The topographic structure, water holding capacity, vegetation type and leaf area index were likewise estimated for initial conditions. The daily time series for the climatic drivers was developed from interpolations of meteorological station data for the water year 1990 (1 October 1989-30 September 1990). Model outputs at the 1-km resolution showed good agreement with observed patterns in runoff and productivity. The ranges of model inputs at the 10-km and 50-km resolutions tended to contract because of the smoothed topography. Estimates for mean evapotranspiration and runoff were relatively insensitive to changing the spatial resolution of the grid, whereas estimates of mean annual net primary production varied by 11%. The designation of a vegetation type and leaf area at the 50-km resolution often subsumed significant heterogeneity in vegetation, and this factor accounted for much of the difference in the mean values for the carbon flux variables. Although area-wide means for model outputs were generally similar across resolutions, difference maps often revealed large areas of disagreement. Relatively high spatial resolution analyses of biogeochemical cycling are desirable from several perspectives and may be particularly important in the study of the potential impacts of climate change.

  11. The effects of a flexible visual acuity-driven ranibizumab treatment regimen in age-related macular degeneration: outcomes of a drug and disease model.

    PubMed

    Holz, Frank G; Korobelnik, Jean-François; Lanzetta, Paolo; Mitchell, Paul; Schmidt-Erfurth, Ursula; Wolf, Sebastian; Markabi, Sabri; Schmidli, Heinz; Weichselberger, Andreas

    2010-01-01

    Differences in treatment responses to ranibizumab injections observed within trials involving monthly (MARINA and ANCHOR studies) and quarterly (PIER study) treatment suggest that an individualized treatment regimen may be effective in neovascular age-related macular degeneration. In the present study, a drug and disease model was used to evaluate the impact of an individualized, flexible treatment regimen on disease progression. For visual acuity (VA), a model was developed on the 12-month data from ANCHOR, MARINA, and PIER. Data from untreated patients were used to model patient-specific disease progression in terms of VA loss. Data from treated patients from the period after the three initial injections were used to model the effect of predicted ranibizumab vitreous concentration on VA loss. The model was checked by comparing simulations of VA outcomes after monthly and quarterly injections during this period with trial data. A flexible VA-guided regimen (after the three initial injections) in which treatment is initiated by loss of >5 letters from best previously observed VA scores was simulated. Simulated monthly and quarterly VA-guided regimens showed good agreement with trial data. Simulation of VA-driven individualized treatment suggests that this regimen, on average, sustains the initial gains in VA seen in clinical trials at month 3. The model predicted that, on average, to maintain initial VA gains, an estimated 5.1 ranibizumab injections are needed during the 9 months after the three initial monthly injections, which amounts to a total of 8.1 injections during the first year. A flexible, individualized VA-guided regimen after the three initial injections may sustain vision improvement with ranibizumab and could improve cost-effectiveness and convenience and reduce drug administration-associated risks.

  12. Online Updating of Statistical Inference in the Big Data Setting.

    PubMed

    Schifano, Elizabeth D; Wu, Jing; Wang, Chun; Yan, Jun; Chen, Ming-Hui

    2016-01-01

    We present statistical methods for big data arising from online analytical processing, where large amounts of data arrive in streams and require fast analysis without storage/access to the historical data. In particular, we develop iterative estimating algorithms and statistical inferences for linear models and estimating equations that update as new data arrive. These algorithms are computationally efficient, minimally storage-intensive, and allow for possible rank deficiencies in the subset design matrices due to rare-event covariates. Within the linear model setting, the proposed online-updating framework leads to predictive residual tests that can be used to assess the goodness-of-fit of the hypothesized model. We also propose a new online-updating estimator under the estimating equation setting. Theoretical properties of the goodness-of-fit tests and proposed estimators are examined in detail. In simulation studies and real data applications, our estimator compares favorably with competing approaches under the estimating equation setting.
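
    The online-updating idea for linear models can be sketched with recursive least squares, which folds each new observation into the current estimate via the Sherman-Morrison identity instead of re-fitting on stored history; this is a generic sketch, not the authors' exact estimator.

      import numpy as np

      def rls_update(beta, P, x, y):
          """Update the coefficient estimate beta and matrix P = (X'X)^-1
          with one new observation (x, y), without touching past data."""
          Px = P @ x
          k = Px / (1.0 + x @ Px)            # gain vector
          beta = beta + k * (y - x @ beta)   # correct by the prediction error
          P = P - np.outer(k, Px)            # rank-one downdate of P
          return beta, P

      p = 3
      beta, P = np.zeros(p), 1e6 * np.eye(p)      # diffuse initialization
      rng = np.random.default_rng(0)
      true_beta = np.array([1.0, -2.0, 0.5])
      for _ in range(1000):                        # data arriving in a stream
          x = rng.normal(size=p)
          y = x @ true_beta + rng.normal(scale=0.1)
          beta, P = rls_update(beta, P, x, y)
      print(beta)                                  # close to the batch OLS fit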

  13. WTA estimates using the method of paired comparison: tests of robustness

    Treesearch

    Patricia A. Champ; John B. Loomis

    1998-01-01

    The method of paired comparison is modified to allow choices between two alternative gains so as to estimate willingness to accept (WTA) without loss aversion. The robustness of WTA values for two public goods is tested with respect to the sensitivity of the WTA measure to the context of the bundle of goods used in the paired comparison exercise and to the scope (scale) of...

  14. Statistical alignment: computational properties, homology testing and goodness-of-fit.

    PubMed

    Hein, J; Wiuf, C; Knudsen, B; Møller, M B; Wibling, G

    2000-09-08

    The model of insertions and deletions in biological sequences, first formulated by Thorne, Kishino, and Felsenstein in 1991 (the TKF91 model), provides a basis for performing alignment within a statistical framework. Here we investigate this model. Firstly, we show how to accelerate the statistical alignment algorithms by several orders of magnitude. The main innovations are to confine likelihood calculations to a band close to the similarity-based alignment, to get good initial guesses of the evolutionary parameters, and to apply an efficient numerical optimisation algorithm for finding the maximum likelihood estimate. In addition, the recursions originally presented by Thorne, Kishino and Felsenstein can be simplified. Two proteins, about 1500 amino acids long, can be analysed with this method in less than five seconds on a fast desktop computer, which makes this method practical for actual data analysis. Secondly, we propose a new homology test based on this model, where homology means that an ancestor to a sequence pair can be found finitely far back in time. This test has statistical advantages relative to the traditional shuffle test for proteins. Finally, we describe a goodness-of-fit test that allows testing the proposed insertion-deletion (indel) process inherent to this model, and find that real sequences (here globins) probably experience indels longer than one, contrary to what is assumed by the model. Copyright 2000 Academic Press.
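
    The banding idea, namely filling only the dynamic-programming cells within a half-width w of the diagonal suggested by a quick similarity-based alignment, is easiest to see in a minimal edit-distance form; the TKF91 likelihood recursions are replaced here by unit costs.

      def banded_edit_distance(a, b, w):
          """Edit distance restricted to a band of half-width w around the diagonal;
          cells outside the band are treated as unreachable (infinite cost)."""
          INF = float("inf")
          prev = [j if j <= w else INF for j in range(len(b) + 1)]
          for i in range(1, len(a) + 1):
              cur = [INF] * (len(b) + 1)
              if i <= w:
                  cur[0] = i
              for j in range(max(1, i - w), min(len(b), i + w) + 1):
                  cur[j] = min(prev[j] + 1,                        # deletion
                               cur[j - 1] + 1,                     # insertion
                               prev[j - 1] + (a[i-1] != b[j-1]))   # (mis)match
              prev = cur
          return prev[len(b)]

      print(banded_edit_distance("GATTACA", "GACTATA", w=2))   # 2: two substitutions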

  15. Fuel Burn Estimation Model

    NASA Technical Reports Server (NTRS)

    Chatterji, Gano

    2011-01-01

    Conclusions: The fuel estimation procedure was validated using flight test data. A good fuel model can be created if weight and fuel data are available. Error in the assumed takeoff weight results in a similar amount of error in the fuel estimate. Fuel estimation error bounds can be determined.

  16. Evaluation of NU-WRF Rainfall Forecasts for IFloodS

    NASA Technical Reports Server (NTRS)

    Wu, Di; Peters-Lidard, Christa; Tao, Wei-Kuo; Petersen, Walter

    2016-01-01

    The Iowa Flood Studies (IFloodS) campaign was conducted in eastern Iowa as a pre-GPM-launch campaign from 1 May to 15 June 2013. During the campaign period, real-time forecasts were conducted using the NASA-Unified Weather Research and Forecasting (NU-WRF) model to support the daily weather briefing. In this study, two sets of NU-WRF rainfall forecasts are evaluated against Stage IV and Multi-Radar Multi-Sensor (MRMS) Quantitative Precipitation Estimation (QPE), with the objective of understanding the impact of land surface initialization on the predicted precipitation. NU-WRF is also compared with the North American Mesoscale Forecast System (NAM) 12-kilometer forecast. In general, NU-WRF captured individual precipitation events well and reproduced the spatial distribution of rainfall better than NAM. Further sensitivity tests show that the high resolution has a positive impact on the rainfall forecast. The two sets of NU-WRF simulations produce very similar rainfall characteristics: the land surface initialization does not show a significant impact on the short-term rainfall forecast, largely because of the soil conditions during the field campaign period.

  17. Solving regularly and singularly perturbed reaction-diffusion equations in three space dimensions

    NASA Astrophysics Data System (ADS)

    Moore, Peter K.

    2007-06-01

    In [P.K. Moore, Effects of basis selection and h-refinement on error estimator reliability and solution efficiency for higher-order methods in three space dimensions, Int. J. Numer. Anal. Mod. 3 (2006) 21-51] a fixed, high-order h-refinement finite element algorithm, Href, was introduced for solving reaction-diffusion equations in three space dimensions. In this paper Href is coupled with continuation, creating an automatic method for solving regularly and singularly perturbed reaction-diffusion equations. The simple quasilinear Newton solver of Moore (2006) is replaced by the nonlinear solver NITSOL [M. Pernice, H.F. Walker, NITSOL: a Newton iterative solver for nonlinear systems, SIAM J. Sci. Comput. 19 (1998) 302-318]. Good initial guesses for the nonlinear solver are obtained using continuation in the small parameter ɛ. Two strategies allow adaptive selection of ɛ. The first depends on the rate of convergence of the nonlinear solver and the second implements backtracking in ɛ. Finally, a simple method is used to select the initial ɛ. Several examples illustrate the effectiveness of the algorithm.
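    A minimal sketch of continuation in ɛ, assuming a 1-D model problem and SciPy's fsolve in place of NITSOL: each converged solution seeds the Newton solve at a smaller ɛ, and the step in ɛ backtracks when the solver fails. The test equation ɛu'' + u(1 - u²) = 0 with u(0) = -1, u(1) = 1 is illustrative only (the paper works in three space dimensions).

    ```python
    import numpy as np
    from scipy.optimize import fsolve

    def residual(u, eps, h):
        r = np.empty_like(u)
        r[0] = u[0] + 1.0                      # boundary condition u(0) = -1
        r[-1] = u[-1] - 1.0                    # boundary condition u(1) = +1
        r[1:-1] = (eps * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
                   + u[1:-1] * (1.0 - u[1:-1] ** 2))
        return r

    n = 201
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    u = 2.0 * x - 1.0                          # crude initial guess at large eps
    eps, eps_target, steps = 1.0, 1e-3, 0
    eps_solved = eps
    while eps > eps_target and steps < 50:
        u_new, _, ok, _ = fsolve(residual, u, args=(eps, h), full_output=True)
        if ok == 1:
            u, eps_solved = u_new, eps         # accept; march eps downward
            eps *= 0.3
        else:
            eps /= 0.3 ** 0.5                  # backtrack to a milder eps
        steps += 1
    print(eps_solved, u[n // 2])               # last eps solved, midpoint value
    ```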

  18. [The problem of hemoperfusion in poisonings: ineffectiveness in maprotiline poisoning].

    PubMed

    Hofmann, V; Riess, W; Descoeudres, C; Studer, H

    1980-02-23

    A case of self-poisoning with maprotiline presenting with stage III coma was treated by resin hemoperfusion for 9 hours using an XAD-4 resin cartridge. Plasma levels of about 800 ng/ml maprotiline were initially found. After 5 hours of hemoperfusion, progressive clinical improvement was noticed without a decreasing tendency of the blood drug levels. The theoretical extraction efficiency calculated from the maprotiline blood levels and the perfusion rate yielded 50 mg for maprotiline and 16 mg for desmethylmaprotiline, in good agreement with the 60.5 mg of maprotiline and 17.3 mg of desmethylmaprotiline recovered from the resin cartridge at the end of the hemoperfusion. The in vitro binding capacity for maprotiline was estimated to be 230 mg per g of resin. These results demonstrate that XAD-4 resin efficiently binds maprotiline. However, because of the very low blood concentrations due to the large volume of distribution, whole-body concentrations are minimally affected by resin hemoperfusion. The main complications consisted of thrombocytopenia extending over 24 hours after stopping hemoperfusion, anemia, a short initial decrease of blood pressure, and an episode of premature ventricular beats.

  19. Segment-based acoustic models for continuous speech recognition

    NASA Astrophysics Data System (ADS)

    Ostendorf, Mari; Rohlicek, J. R.

    1993-07-01

    This research aims to develop new and more accurate stochastic models for speaker-independent continuous speech recognition, by extending previous work in segment-based modeling and by introducing a new hierarchical approach to representing intra-utterance statistical dependencies. These techniques, which are more costly than traditional approaches because of the large search space associated with higher order models, are made feasible through rescoring a set of HMM-generated N-best sentence hypotheses. We expect these different modeling techniques to result in improved recognition performance over that achieved by current systems, which handle only frame-based observations and assume that these observations are independent given an underlying state sequence. In the fourth quarter of the project, we have completed the following: (1) ported our recognition system to the Wall Street Journal task, a standard task in the ARPA community; (2) developed an initial dependency-tree model of intra-utterance observation correlation; and (3) implemented baseline language model estimation software. Our initial results on the Wall Street Journal task are quite good and represent significantly improved performance over most HMM systems reporting on the Nov. 1992 5k vocabulary test set.

  20. Ignition in an Atomistic Model of Hydrogen Oxidation.

    PubMed

    Alaghemandi, Mohammad; Newcomb, Lucas B; Green, Jason R

    2017-03-02

    Hydrogen is a potential substitute for fossil fuels that would reduce the combustive emission of carbon dioxide. However, the low ignition energy needed to initiate oxidation imposes constraints on the efficiency and safety of hydrogen-based technologies. Microscopic details of the combustion processes, ephemeral transient species, and complex reaction networks are necessary to control and optimize the use of hydrogen as a commercial fuel. Here, we report estimates of the ignition time of hydrogen-oxygen mixtures over a wide range of equivalence ratios from extensive reactive molecular dynamics simulations. These data show that the shortest ignition time corresponds to a fuel-lean mixture with an equivalence ratio of 0.5, where the numbers of hydrogen and oxygen molecules in the initial mixture are equal, in good agreement with a recent chemical kinetic model. We find two signatures in the simulation data that precede ignition at pressures above 200 MPa. First, there is a peak in hydrogen peroxide that signals ignition is imminent in about 100 ps. Second, we find a strong anticorrelation between the ignition time and the rate of energy dissipation, suggesting the role of thermal feedback in stimulating ignition.

  1. Potassium topping cycles for stationary power. [conceptual analysis

    NASA Technical Reports Server (NTRS)

    Rossbach, R. J.

    1975-01-01

    A design study was made of the potassium topping cycle powerplant for central station use. Initially, powerplant performance and economics were studied parametrically by using an existing steam plant as the bottom part of the cycle. Two distinct powerplants were identified which had good thermodynamic and economic performance. Conceptual designs were made of these two powerplants in the 1200 MWe size, and capital and operating costs were estimated for these powerplants. A technical evaluation of these plants was made including conservation of fuel resources, environmental impact, technology status, and degree of development risk. It is concluded that the potassium topping cycle could have a significant impact on national goals such as air and water pollution control and conservation of natural resources because of its higher energy conversion efficiency.

  2. Assembly and analysis of fragmentation data for liquid propellant vessels

    NASA Technical Reports Server (NTRS)

    Baker, W. E.; Parr, V. B.; Bessey, R. L.; Cox, P. A.

    1974-01-01

    Fragmentation data were assembled and analyzed for exploding liquid propellant vessels. These data were to be retrieved from reports of tests and accidents, including measurements or estimates of blast yield, etc. A significant amount of data was retrieved from a series of tests conducted for measurement of blast and fireball effects of liquid propellant explosions (Project PYRO), a few well-documented accident reports, and a series of tests to determine auto-ignition properties of mixing liquid propellants. The data were reduced and fitted to various statistical functions. Comparisons were made with methods of prediction for blast yield, initial fragment velocities, and fragment range. Reasonably good correlation was achieved. Methods presented in the report allow prediction of fragment patterns, given type and quantity of propellant, type of accident, and time of propellant mixing.

  3. [Precordial mapping and enzymatic analysis for estimating infarct size in man. A comparative study (author's transl)].

    PubMed

    Tommasini, G; Cobelli, F; Birolli, M; Oddone, A; Orlandi, M; Malusardi, R

    1976-01-01

    To investigate the relationships between electrocardiographic and enzymatic indexes of infarct size (I.S.), a group of 19 patients with anterior infarction was studied by serial precordial mapping and analysis of CPK curves. The time course of ST and QRS changes was examined, and a sharp, spontaneous fall of ΣST was shown to occur within 10-12 hours after onset of symptoms, followed by a gradual rise. ΣST on admission appears to be a poor predictor of subsequent enzymatic I.S. (r=0.49). Good correlations with I.S. were observed for ΣST at 48-96 hours (r=0.82) and, especially, for the percent decrease of ΣR with respect to the initial values (ΔR%) (r=0.94).

  4. Detection of tunnel excavation using fiber optic reflectometry: experimental validation

    NASA Astrophysics Data System (ADS)

    Linker, Raphael; Klar, Assaf

    2013-06-01

    Cross-border smuggling tunnels enable unmonitored movement of people and goods, and pose a severe threat to homeland security. In recent years, we have been working on the development of a system based on fiber-optic Brillouin time domain reflectometry (BOTDR) for detecting tunnel excavation. In two previous SPIE publications we have reported the initial development of the system as well as its validation using small-scale experiments. This paper reports, for the first time, results of full-scale experiments and discusses the system performance. The results confirm that distributed measurement of strain profiles in fiber cables buried at shallow depth enables detection of tunnel excavation and, with proper data processing, enables precise localization of the tunnel as well as reasonable estimation of its depth.

  5. 75 FR 12433 - National Export Initiative

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-16

    ... Order 13534 of March 11, 2010 National Export Initiative By the authority vested in me as President by... performance will, in turn, create good high-paying jobs. The National Export Initiative (NEI) shall be an Administration initiative to improve conditions that directly affect the private sector's ability to export. The...

  6. 19 CFR 10.532 - Integrated Sourcing Initiative.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 19 Customs Duties 1 2010-04-01 2010-04-01 false Integrated Sourcing Initiative. 10.532 Section 10... Trade Agreement Rules of Origin § 10.532 Integrated Sourcing Initiative. (a) For purposes of General... Sourcing Initiative if: (1) The good, in its condition as imported, is both classified in a tariff...

  7. I Am Mentor, I Am Coach

    ERIC Educational Resources Information Center

    Augustine-Shaw, Donna; Reilly, Marceta

    2017-01-01

    Preparing good leaders depends not only on providing good initial professional learning, but also on creating a strong support structure during the early years of practice. However, what good mentoring looks and sounds like varies widely in practice. Many mentoring programs for education leaders consist of buddy-like relationships that provide…

  8. SigrafW: An easy-to-use program for fitting enzyme kinetic data.

    PubMed

    Leone, Francisco Assis; Baranauskas, José Augusto; Furriel, Rosa Prazeres Melo; Borin, Ivana Aparecida

    2005-11-01

    SigrafW is Windows-compatible software, developed using Microsoft® Visual Basic, that uses the simplified Hill equation for fitting kinetic data from allosteric and Michaelian enzymes. SigrafW uses a modified Fibonacci search to calculate maximal velocity (V), the Hill coefficient (n), and the enzyme-substrate apparent dissociation constant (K). The estimation of V, K, and the sum of the squares of residuals is performed using a Wilkinson nonlinear regression at any Hill coefficient (n). In contrast to many currently available kinetic analysis programs, SigrafW offers several advantages for determining the kinetic parameters of both hyperbolic and nonhyperbolic saturation curves: no initial estimates of the kinetic parameters are required, a measure of the goodness of fit is provided for each calculation performed, the nonlinear regression used for the calculations eliminates the statistical bias inherent in linear transformations, and the software can be used for enzyme kinetic simulations for either educational or research purposes. Persons interested in receiving a free copy of the software should contact Dr. F. A. Leone. Copyright © 2005 International Union of Biochemistry and Molecular Biology, Inc.
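    For illustration, a Hill-equation fit of the kind SigrafW performs can be sketched with a generic nonlinear least-squares routine. Note that this uses SciPy's solver rather than the modified Fibonacci search and Wilkinson regression the program itself implements, and the data points below are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hill equation: v = V * S^n / (K^n + S^n)
    def hill(S, V, K, n):
        return V * S**n / (K**n + S**n)

    # Synthetic substrate concentrations and reaction velocities
    S = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0])
    v = np.array([0.08, 0.22, 0.55, 0.95, 1.35, 1.63, 1.72])

    p0 = [v.max(), np.median(S), 1.0]      # rough but automatic starting values
    (V, K, n), cov = curve_fit(hill, S, v, p0=p0)
    perr = np.sqrt(np.diag(cov))           # rough standard errors of V, K, n
    print(f"V={V:.3f}  K={K:.3f}  n={n:.2f}  +/- {perr.round(3)}")
    ```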

  9. On the optimization of electromagnetic geophysical data: Application of the PSO algorithm

    NASA Astrophysics Data System (ADS)

    Godio, A.; Santilano, A.

    2018-01-01

    The Particle Swarm Optimization (PSO) algorithm solves constrained multi-parameter problems and is suitable for simultaneous optimization of linear and nonlinear problems, under the assumption that the forward modeling rests on a good understanding of the ill-posed geophysical inverse problem. We apply PSO to solve the geophysical inverse problem of inferring an Earth model, i.e. the electrical resistivity at depth, consistent with the observed geophysical data. The method does not require an initial model and can easily be constrained according to external information for each sounding. In estimating the model parameters from the electromagnetic soundings, the optimization process focuses on the objective function to be minimized, and we discuss the possibility of introducing vertical and lateral constraints into the objective function, with an Occam-like regularization. A sensitivity analysis allowed us to check the performance of the algorithm. The reliability of the approach is tested on synthetic data and on real Audio-Magnetotelluric (AMT) and long-period MT data. The method appears able to solve complex problems and allows us to estimate the a posteriori distribution of the model parameters.
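    A minimal PSO sketch on a toy misfit function. It shows the mechanics the abstract relies on (random initialization without an initial model, simple box constraints); the three-parameter "resistivity" objective is a stand-in, not an electromagnetic forward model with Occam-like regularization.

    ```python
    import numpy as np

    def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
        lo, hi = bounds[:, 0], bounds[:, 1]
        rng = np.random.default_rng(1)
        x = rng.uniform(lo, hi, size=(n_particles, len(lo)))   # positions
        v = np.zeros_like(x)                                   # velocities
        pbest = x.copy()                                       # personal bests
        pbest_f = np.apply_along_axis(objective, 1, x)
        g = pbest[np.argmin(pbest_f)]                          # global best
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)                         # enforce bounds
            f = np.apply_along_axis(objective, 1, x)
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            g = pbest[np.argmin(pbest_f)]
        return g, pbest_f.min()

    # Toy misfit: distance to a "true" 3-layer resistivity model (log10 ohm-m)
    true_model = np.array([1.5, 2.7, 2.0])
    misfit = lambda m: np.sum((m - true_model) ** 2)
    bounds = np.array([[0.0, 4.0]] * 3)
    print(pso(misfit, bounds))
    ```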

  10. Fitting a three-parameter lognormal distribution with applications to hydrogeochemical data from the National Uranium Resource Evaluation Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kane, V.E.

    1979-10-01

    The standard maximum likelihood and moment estimation procedures are shown to have some undesirable characteristics for estimating the parameters in a three-parameter lognormal distribution. A class of goodness-of-fit estimators is found which provides a useful alternative to the standard methods. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Shapiro-Francia tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted-order-statistic estimators are compared to the standard procedures in Monte Carlo simulations. Bias and robustness of the procedures are examined and example data sets analyzed, including geochemical data from the National Uranium Resource Evaluation Program.
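    A minimal sketch of the goodness-of-fit estimation idea, assuming a Shapiro-Wilk criterion: the threshold is chosen to maximize the W statistic of the shifted log data, with a coarse grid search standing in for a proper maximization and simulated data standing in for NURE geochemical values.

    ```python
    import numpy as np
    from scipy import stats

    # Three-parameter lognormal: x = tau + lognormal(mu, sigma).
    # Pick tau so that log(x - tau) looks most Gaussian by Shapiro-Wilk W.

    rng = np.random.default_rng(42)
    tau_true = 5.0
    x = tau_true + rng.lognormal(mean=1.0, sigma=0.6, size=200)

    taus = np.linspace(0.0, x.min() - 1e-6, 400)   # tau must stay below min(x)
    w_stats = [stats.shapiro(np.log(x - t)).statistic for t in taus]
    tau_hat = taus[int(np.argmax(w_stats))]

    logs = np.log(x - tau_hat)                     # mu, sigma from shifted logs
    print(f"tau={tau_hat:.2f}  mu={logs.mean():.2f}  sigma={logs.std(ddof=1):.2f}")
    ```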

  11. Radar modulation classification using time-frequency representation and nonlinear regression

    NASA Astrophysics Data System (ADS)

    De Luigi, Christophe; Arques, Pierre-Yves; Lopez, Jean-Marc; Moreau, Eric

    1999-09-01

    In a naval electronic environment, pulses emitted by radars are collected by ESM receivers. For most of them the intrapulse signal is modulated by a particular law. To assist the classical identification process, classification and estimation of this modulation law are applied to the intrapulse signal measurements. To estimate with good accuracy the time-varying frequency of a signal corrupted by additive noise, one method has been chosen. This method consists of calculating the Wigner distribution; the instantaneous frequency is then estimated from the peak location of the distribution. The bias and variance of the estimator are evaluated by computer simulations. In an estimated sequence of frequencies, we assume the presence of both falsely and correctly estimated values, and the hypothesis of a Gaussian distribution is made on the errors. A robust nonlinear regression method, based on the Levenberg-Marquardt algorithm, is then applied to these estimated frequencies using a maximum likelihood estimator. The performance of the method is tested using various modulation laws and different signal-to-noise ratios.
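    A sketch of the two-stage idea: estimate a time-frequency ridge by peak picking, then fit a modulation law by robust nonlinear regression so that outlier frequency estimates are down-weighted. Here an STFT peak stands in for the Wigner distribution, and SciPy's trust-region solver with a soft-L1 loss stands in for the Levenberg-Marquardt/maximum-likelihood scheme of the paper; the signal is a synthetic linear FM pulse.

    ```python
    import numpy as np
    from scipy.signal import stft
    from scipy.optimize import least_squares

    fs = 1000.0
    t = np.arange(0, 1.0, 1 / fs)
    f0, rate = 100.0, 150.0                   # linear FM: f(t) = f0 + rate*t
    x = np.cos(2 * np.pi * (f0 * t + 0.5 * rate * t**2))
    x += 0.5 * np.random.default_rng(3).normal(size=t.size)   # additive noise

    f, tt, Z = stft(x, fs=fs, nperseg=128)
    ridge = f[np.argmax(np.abs(Z), axis=0)]   # peak frequency per time slice

    model = lambda p, tau: p[0] + p[1] * tau  # assumed linear modulation law
    resid = lambda p: model(p, tt) - ridge
    fit = least_squares(resid, x0=[50.0, 0.0], loss="soft_l1")  # robust fit
    print(fit.x)                              # close to (f0, rate)
    ```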

  12. Experimental parameter identification of a multi-scale musculoskeletal model controlled by electrical stimulation: application to patients with spinal cord injury.

    PubMed

    Benoussaad, Mourad; Poignet, Philippe; Hayashibe, Mitsuhiro; Azevedo-Coste, Christine; Fattal, Charles; Guiraud, David

    2013-06-01

    We investigated the parameter identification of a multi-scale physiological model of skeletal muscle, based on Huxley's formulation. We focused particularly on the knee joint controlled by quadriceps muscles under electrical stimulation (ES) in subjects with a complete spinal cord injury. A noninvasive and in vivo identification protocol was thus applied through surface stimulation in nine subjects and through neural stimulation in one ES-implanted subject. The identification protocol included initial identification steps, which are adaptations of existing identification techniques to estimate most of the parameters of our model. Then we applied an original and safer identification protocol in dynamic conditions, which required resolution of a nonlinear programming (NLP) problem to identify the serial element stiffness of quadriceps. Each identification step and cross validation of the estimated model in dynamic condition were evaluated through a quadratic error criterion. The results highlighted good accuracy, the efficiency of the identification protocol and the ability of the estimated model to predict the subject-specific behavior of the musculoskeletal system. From the comparison of parameter values between subjects, we discussed and explored the inter-subject variability of parameters in order to select parameters that have to be identified in each patient.

  13. Estimating 3D topographic map of optic nerve head from a single fundus image

    NASA Astrophysics Data System (ADS)

    Wang, Peipei; Sun, Jiuai

    2018-04-01

    The optic nerve head, also called the optic disc, is the distal portion of the optic nerve, located on and clinically visible at the retinal surface. It is a three-dimensional, elliptically shaped structure with a central depression called the optic cup. The shape of the ONH and the size of the depression can vary with different retinopathies or angiopathies; therefore, estimating the topography of the optic nerve head is significant for assisting the diagnosis of retina-related complications. This work describes a computer vision based method, shape from shading (SFS), to recover and visualize the 3D topographic map of the optic nerve head from a normal fundus image. The work is expected to be helpful for assessing complications associated with deformation of the optic nerve head, such as glaucoma and diabetes. The illumination is modelled as uniform over the area around the optic nerve head, and its direction is estimated from the available image. The Tsai discrete method is employed to recover the 3D topographic map of the optic nerve head. Initial experimental results demonstrate that our approach works on most fundus images and provides a cheap but good alternative for rendering and visualizing the topographic information of the optic nerve head for potential clinical use.

  14. Estimating ocean-air heat fluxes during cold air outbreaks by satellite

    NASA Technical Reports Server (NTRS)

    Chou, S. H.; Atlas, D.

    1981-01-01

    Nomograms of mean column heating due to surface sensible and latent heat fluxes were developed. Mean sensible heating of the cloud-free region is related to the cloud-free path (CFP, the distance from the shore to the first cloud formation) and the difference between land air and sea surface temperatures, θ_1 and θ_0, respectively. Mean latent heating is related to the CFP and the difference between land air and sea surface humidities, q_1 and q_0, respectively. Results are also applicable to any path within the cloud-free region. Corresponding heat fluxes may be obtained by multiplying the mean heating by the mean wind speed in the boundary layer. The sensible heating estimated by the present method is found to be in good agreement with that computed from the bulk transfer formula. The sensitivity of the solutions to variations in the initial coastal soundings and large-scale subsidence is also investigated. The results are not sensitive to divergence but are affected by the initial lapse rate of potential temperature; the greater the stability, the smaller the heating, other things being equal. Unless one knows the lapse rate at the shore, this requires another independent measurement. For this purpose the downwind slope of the square of the boundary-layer height is used, the mean value of which is also directly proportional to the mean sensible heating. The height of the boundary layer should be measurable by future spaceborne lidar systems.

  15. Uncertainties and Systematic Effects on the estimate of stellar masses in high z galaxies

    NASA Astrophysics Data System (ADS)

    Salimbeni, S.; Fontana, A.; Giallongo, E.; Grazian, A.; Menci, N.; Pentericci, L.; Santini, P.

    2009-05-01

    We discuss the uncertainties and the systematic effects that exist in the estimates of the stellar masses of high-redshift galaxies from broad band photometry, and how they affect the deduced galaxy stellar mass function. For this purpose we use the latest version of the GOODS-MUSIC catalog. In particular, we discuss the impact of different synthetic models, of the assumed initial mass function, and of the selection band. Using the Charlot & Bruzual 2007 and Maraston 2005 models, we find masses lower than those obtained from the Bruzual & Charlot 2003 models. In addition, we find a slight trend as a function of the mass itself when comparing these two mass determinations with that from the Bruzual & Charlot 2003 models. As a consequence, the derived galaxy stellar mass functions show diverse shapes, and their slope depends on the assumed models. Despite these differences, the same overall results and scenario are observed in all these cases. The masses obtained with the assumption of the Chabrier initial mass function are on average 0.24 dex lower than those from the Salpeter assumption, at all redshifts, shifting the galaxy stellar mass function by the same amount. Finally, using a 4.5 μm-selected sample instead of a Ks-selected one, we add a new population of highly absorbed, dusty galaxies at z ≈ 2-3 of relatively low masses, yielding stronger constraints on the slope of the galaxy stellar mass function at lower masses.

  16. How Accurate Are Infrared Luminosities from Monochromatic Photometric Extrapolation?

    NASA Astrophysics Data System (ADS)

    Lin, Zesen; Fang, Guanwen; Kong, Xu

    2016-12-01

    Template-based extrapolations from only one photometric band can be a cost-effective method to estimate the total infrared (IR) luminosities (L_IR) of galaxies. By utilizing multi-wavelength data covering 0.35-500 μm in the GOODS-North and GOODS-South fields, we investigate the accuracy of this monochromatic extrapolated L_IR based on three IR spectral energy distribution (SED) templates out to z ~ 3.5. We find that the Chary & Elbaz template provides the best estimate of L_IR in Herschel/Photodetector Array Camera and Spectrometer (PACS) bands, while the Dale & Helou template performs best in Herschel/Spectral and Photometric Imaging Receiver (SPIRE) bands. To estimate L_IR, we suggest that extrapolation from the longest available PACS band based on the Chary & Elbaz template can be a good estimator. Moreover, if the PACS measurement is unavailable, extrapolation from SPIRE observations based on the Dale & Helou template can also provide a statistically unbiased estimate for galaxies at z ≲ 2. The emission within the rest-frame 10-100 μm range of the IR SED can be well described by all three templates, but only the Dale & Helou template gives a nearly unbiased estimate of the emission in the rest-frame submillimeter part.

  17. Cost-benefit analysis involving addictive goods: contingent valuation to estimate willingness-to-pay for smoking cessation.

    PubMed

    Weimer, David L; Vining, Aidan R; Thomas, Randall K

    2009-02-01

    The valuation of changes in consumption of addictive goods resulting from policy interventions presents a challenge for cost-benefit analysts. Consumer surplus losses from reduced consumption of addictive goods that are measured relative to market demand schedules overestimate the social cost of cessation interventions. This article seeks to show that consumer surplus losses measured using a non-addicted demand schedule provide a better assessment of social cost. Specifically, (1) it develops an addiction model that permits an estimate of the smoker's compensating variation for the elimination of addiction; (2) it employs a contingent valuation survey of current smokers to estimate their willingness-to-pay (WTP) for a treatment that would eliminate addiction; (3) it uses the estimate of WTP from the survey to calculate the fraction of consumer surplus that should be viewed as consumer value; and (4) it provides an estimate of this fraction. The exercise suggests that, as a tentative first and rough rule-of-thumb, only about 75% of the loss of the conventionally measured consumer surplus should be counted as social cost for policies that reduce the consumption of cigarettes. Additional research to estimate this important rule-of-thumb is desirable to address the various caveats relevant to this study. Copyright (c) 2008 John Wiley & Sons, Ltd.

  18. Estimating the R-curve from residual strength data

    NASA Technical Reports Server (NTRS)

    Orange, T. W.

    1985-01-01

    A method is presented for estimating the crack-extension resistance curve (R-curve) from residual-strength (maximum load against original crack length) data for precracked fracture specimens. The method allows additional information to be inferred from simple test results, and that information can be used to estimate the failure loads of more complicated structures of the same material and thickness. The fundamentals of the R-curve concept are reviewed first. Then the analytical basis for the estimation method is presented. The estimation method has been verified in two ways. Data from the literature (involving several materials and different types of specimens) are used to show that the estimated R-curve is in good agreement with the measured R-curve. A recent predictive blind round-robin program offers a more crucial test. When the actual failure loads are disclosed, the predictions are found to be in good agreement.

  19. CH-47F Improved Cargo Helicopter (CH-47F)

    DTIC Science & Technology

    2015-12-01

    [Flattened residue of SAR cost tables. Recoverable information: the confidence level of the cost estimate for the current APB is 50%, approved on April... (date truncated in the source); the record tabulates changes from the initial PAUC and APUC development estimates to the current production estimates, in then-year $M, broken out by Econ, Qty, Sch, Eng, Est, Oth, and Spt categories.]

  20. Estimation of the Arrival Time and Duration of a Radio Signal with Unknown Amplitude and Initial Phase

    NASA Astrophysics Data System (ADS)

    Trifonov, A. P.; Korchagin, Yu. E.; Korol'kov, S. V.

    2018-05-01

    We synthesize the quasi-likelihood, maximum-likelihood, and quasioptimal algorithms for estimating the arrival time and duration of a radio signal with unknown amplitude and initial phase. The discrepancies between the hardware and software realizations of the estimation algorithm are shown. The characteristics of the synthesized-algorithm operation efficiency are obtained. Asymptotic expressions for the biases, variances, and the correlation coefficient of the arrival-time and duration estimates, which hold true for large signal-to-noise ratios, are derived. The accuracy losses of the estimates of the radio-signal arrival time and duration because of the a priori ignorance of the amplitude and initial phase are determined.

  1. Self-perceived health in older Europeans: Does the choice of survey matter?

    PubMed Central

    Croezen, Simone; Burdorf, Alex

    2016-01-01

    Abstract Background: Cross-national comparisons of health in European countries provide crucial information to monitor health and disease within and between countries and to inform policy and research priorities. However, variations in estimates might occur when information from cross-national European surveys with different characteristics are used. We compared the prevalence of very good or good self-perceived health across 10 European countries according to three European surveys and investigated which survey characteristics contributed to differences in prevalence estimates. Methods: We used aggregate data from 2004 to 2005 of respondents aged 55–64 years from the European Union Statistics on Income and Living Conditions (EU-SILC), the Survey of Health, Ageing and Retirement in Europe (SHARE) and the European Social Survey (ESS). Across the surveys, self-perceived health was assessed by the same question with response options ranging from very good to very bad. Results: Despite a good correlation between the surveys (intraclass correlation coefficient: 0.77), significant differences were found in prevalence estimates of very good or good self-perceived health. The survey response, sample size and survey mode contributed statistically significantly to the differences between the surveys. Multilevel linear regression analyses, adjusted for survey characteristics, showed a higher prevalence for SHARE (+6.96, 95% CIs: 3.14 to 10.8) and a lower prevalence (−3.12; 95% CIs: −7.11 to 0.86) for ESS, with EU-SILC as the reference survey. Conclusion: Three important health surveys in Europe showed substantial differences for presence of very good or good self-perceived health. These differences limit the usefulness for direct comparisons across studies in health policies for Europe. PMID:26989125

  2. 78 FR 77420 - Certain Oil Country Tubular Goods From the Republic of Turkey: Preliminary Negative...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-23

    ... Goods From the Republic of Turkey: Preliminary Negative Countervailing Duty Determination and Alignment... provided to producers and exporters of certain oil tubular goods (OCTG) from the Republic of Turkey (Turkey... Department also initiated AD investigations of OCTG from Turkey and several other countries.\\1\\ The CVD...

  3. Parameterization of spectral baseline directly from short echo time full spectra in 1 H-MRS.

    PubMed

    Lee, Hyeong Hun; Kim, Hyeonjin

    2017-09-01

    To investigate the feasibility of parameterizing macromolecule (MM) resonances directly from short echo time (TE) spectra, rather than from pre-acquired, T1-weighted, metabolite-nulled spectra, in 1H-MRS. Initial line parameters for metabolites and MMs were set for rat brain spectra acquired at 9.4 Tesla based on a priori knowledge. MM line parameters were then optimized over several steps with fixed metabolite line parameters. The proposed method was tested by estimating metabolite T1 values, and the results were compared with those obtained with two existing methods. Furthermore, subject-specific, spin-density-weighted MM model spectra were generated according to the MM line parameters from the proposed method for metabolite quantification. The results were compared with those obtained with subject-specific, T1-weighted, metabolite-nulled spectra. The metabolite T1 values were largely in close agreement among the three methods. The spin-density-weighted MM resonances from the proposed method were in good agreement with the T1-weighted, metabolite-nulled spectra except for the MM resonance at ∼3.2 ppm. The metabolite concentrations estimated by incorporating these two different spectral baselines were also in good agreement except for several metabolites with resonances at ∼3.2 ppm. MM parameterization directly from short-TE spectra is feasible, and further development of the method may allow for better representation of the spectral baseline with negligible T1-weighting. Magn Reson Med 78:836-847, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  4. Deriving realistic source boundary conditions for a CFD simulation of concentrations in workroom air.

    PubMed

    Feigley, Charles E; Do, Thanh H; Khan, Jamil; Lee, Emily; Schnaufer, Nicholas D; Salzberg, Deborah C

    2011-05-01

    Computational fluid dynamics (CFD) is used increasingly to simulate the distribution of airborne contaminants in enclosed spaces for exposure assessment and control, but the importance of realistic boundary conditions is often not fully appreciated. In a workroom for manufacturing capacitors, full-shift samples for isoamyl acetate (IAA) were collected for 3 days at 16 locations, and velocities were measured at supply grills and at various points near the source. Then, velocity and concentration fields were simulated by three-dimensional steady-state CFD using 295,000 tetrahedral cells, the k-ε turbulence model, the standard wall function, and convergence criteria of 10^-6 for all scalars. Here, we demonstrate the need to represent boundary conditions accurately, especially emission characteristics at the contaminant source, in order to obtain good agreement between observations and CFD results. Emission rates for each day were determined from six concentrations measured in the near field and one upwind using an IAA mass balance. The emission was initially represented as undiluted IAA vapor, but the concentrations estimated using CFD differed greatly from the measured concentrations. A second set of simulations was performed using the same IAA emission rates but a more realistic representation of the source. This yielded good agreement with measured values. Paying particular attention to the region with the highest worker exposure potential, within 1.3 m of the source center, the air speed and IAA concentrations estimated by CFD were not significantly different from the measured values (P = 0.92 and P = 0.67, respectively). Thus, careful consideration of source boundary conditions greatly improved agreement with the measured values.

  5. Induced Seismicity Related to Hydrothermal Operation of Geothermal Projects in Southern Germany - Observations and Future Directions

    NASA Astrophysics Data System (ADS)

    Megies, T.; Kraft, T.; Wassermann, J. M.

    2015-12-01

    Geothermal power plants in Southern Germany are operated hydrothermally and at low injection pressures, in a seismically inactive region considered to pose very low seismic hazard. For that reason, permit authorities initially enforced no monitoring requirements on the operating companies. After a series of events perceived by local residents, a scientific monitoring survey was conducted over several years, revealing several hundred induced earthquakes at one project site. We summarize results from monitoring at this site, including absolute locations in a local 3D velocity model, relocations using double-difference and master-event methods, and focal mechanism determinations that show a clear association with fault structures in the reservoir which extend down into the underlying crystalline basement. To better constrain the shear wave velocity models, which have a strong influence on hypocentral depth estimates, several different approaches to estimating layered vp/vs models are employed. Results from these studies have prompted permit authorities to start imposing minimal monitoring requirements. Since in some cases these geothermal projects are only separated by a few kilometers, we investigate the capabilities of an optimized network combining the monitoring resources of six neighboring well doublets in a joint network. The optimization takes into account the highly heterogeneous background noise conditions of this local-scale urban environment and the feasibility of potential monitoring sites, removing non-viable sites before the optimization procedure. First results from the actual network realization show good detection capabilities for small microearthquakes despite the minimal instrumentation effort, demonstrating the benefits of good coordination of monitoring efforts.

  6. Monolayer-crystal streptavidin support films provide an internal standard of cryo-EM image quality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Bong-Gyoon; Watson, Zoe; Cate, Jamie H. D.

    Analysis of images of biotinylated Escherichia coli 70S ribosome particles, bound to streptavidin affinity grids, demonstrates that the image quality of particles can be predicted by the image quality of the monolayer crystalline support film. The quality of the Thon rings is also a good predictor of the image quality of particles, but only when images of the streptavidin crystals extend to relatively high resolution. When the estimated resolution of streptavidin was 5 Å or worse, for example, the ribosomal density map obtained from 22,697 particles went to only 9.5 Å, while the resolution of the map reached 4.0 Å for the same number of particles when the estimated resolution of the streptavidin crystal was 4 Å or better. It thus is easy to tell which images in a data set ought to be retained for further work, based on the highest resolution seen for Bragg peaks in the computed Fourier transforms of the streptavidin component. The refined density map obtained from 57,826 particles obtained in this way extended to 3.6 Å, a marked improvement over the value of 3.9 Å obtained previously from a subset of 52,433 particles obtained from the same initial data set of 101,213 particles after 3-D classification. These results are consistent with the hypothesis that interaction with the air-water interface can damage particles when the sample becomes too thin. Finally, streptavidin monolayer crystals appear to provide a good indication of when that is the case.

  7. Consistency of Rasch Model Parameter Estimation: A Simulation Study.

    ERIC Educational Resources Information Center

    van den Wollenberg, Arnold L.; And Others

    1988-01-01

    The unconditional (simultaneous) maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum chi-square estimation procedure produces unbiased…

  8. Modeling study of air pollution due to the manufacture of export goods in China's Pearl River Delta.

    PubMed

    Streets, David G; Yu, Carolyne; Bergin, Michael H; Wang, Xuemei; Carmichael, Gregory R

    2006-04-01

    The Pearl River Delta is a major manufacturing region on the south coast of China that produces more than $100 billion of goods annually for export to North America, Europe, and other parts of Asia. Considerable air pollution is caused by the manufacturing industries themselves and by the power plants, trucks, and ships that support them. We estimate that 10-40% of emissions of primary SO2, NOx, RSP, and VOC in the region are caused by export-related activities. Using the STEM-2K1 atmospheric transport model, we estimate that these emissions contribute 5-30% of the ambient concentrations of SO2, NOx, NOz, and VOC in the region. One reason that the exported goods are cheap and therefore attractive to consumers in developed countries is that emission controls are lacking or of low performance. We estimate that state-of-the-art controls could be installed at an annualized cost of $0.3-3 billion, representing 0.3-3% of the value of the goods produced. We conclude that mitigation measures could be adopted without seriously affecting the prices of exported goods and would achieve considerable human health and other benefits in the form of reduced air pollutant concentrations in densely populated urban areas.

  9. A Spatially-Explicit Technique for Evaluation of Alternative ...

    EPA Pesticide Factsheets

    Ecosystems contribute to maintaining human well-being directly through provision of goods and indirectly through provision of services that support clean water, clean air, flood protection and atmospheric stability. Transparently accounting for biophysical attributes from which humans derive benefit is essential to support dialog among the public, resource managers, decision makers, and scientists. We analyzed the potential ecosystem goods and services production from alternative future land use scenarios in the US Tampa Bay region. Ecosystem goods and service metrics included carbon sequestration, nitrogen removal, air pollutant removal, and stormwater retention. Each scenario was compared to a 2006 baseline land use. Estimated production of denitrification services changed by 28% and carbon sequestration by 20% between 2006 and the “business as usual” scenario. An alternative scenario focused on “natural resource protection” resulted in an estimated 9% loss in air pollution removal. Stormwater retention was estimated to change 18% from 2006 to 2060 projections. Cost effective areas for conservation, almost 1588 ha, beyond current conservation lands, were identified by comparing ecosystem goods and services production to assessed land values. Our ecosystem goods and services approach provides a simple and quantitative way to examine a more complete set of potential outcomes from land use decisions. This study demonstrates an approach for spatially expli

  10. Procedural instructions, principles, and examples: how to structure instructions for procedural tasks to enhance performance, learning, and transfer.

    PubMed

    Eiriksdottir, Elsa; Catrambone, Richard

    2011-12-01

    The goal of this article is to investigate how instructions can be constructed to enhance performance and learning of procedural tasks. Important determinants of the effectiveness of instructions are type of instructions (procedural information, principles, and examples) and pedagogical goal (initial performance, learning, and transfer). Procedural instructions describe how to complete tasks in a stepwise manner, principles describe rules governing the tasks, and examples demonstrate how instances of the task are carried out. The authors review the research literature associated with each type of instruction to identify factors determining effectiveness for different pedagogical goals. The results suggest a trade-off between usability and learnability. Specific instructions help initial performance, whereas more general instructions, requiring problem solving, help learning and transfer. Learning from instructions takes cognitive effort, and research suggests that learners typically opt for low effort. However, it is possible to meet both goals of good initial performance and learning with methods such as fading and by combining different types of instructions. How instructions are constructed influences their effectiveness for the goals of good initial performance, learning, and transfer, and it is therefore important for researchers and practitioners alike to define the pedagogical goal of instructions. If the goal is good initial performance, then instructions should highly resemble the task at hand (e.g., in the form of detailed procedural instructions and examples), but if the goal is good learning and transfer, then instructions should be more abstract, inducing learners to expend the necessary cognitive effort for learning.

  11. Modeling Speed-Accuracy Tradeoff in Adaptive System for Practicing Estimation

    ERIC Educational Resources Information Center

    Nižnan, Juraj

    2015-01-01

    Estimation is useful in situations where an exact answer is not as important as a quick answer that is good enough. A web-based adaptive system for practicing estimates is currently being developed. We propose a simple model for estimating student's latent skill of estimation. This model combines a continuous measure of correctness and response…

  12. Topical photodynamic therapy with 5-aminolevulinic acid in the treatment of actinic keratoses: a first clinical study

    NASA Astrophysics Data System (ADS)

    Karrer, Sigrid; Szeimies, Rolf-Markus; Sauerwald, Angela; Landthaler, Michael

    1996-01-01

    In this first clinical study performed according to GCP (good clinical practice) guidelines, the efficacy and tolerability of topical photodynamic therapy (PDT) using 5-aminolevulinic acid (ALA) were tested in the treatment of actinic keratoses. Ten patients (6 f, 4 m) with 36 lesions (19 located on hands and arms, 17 on the head) received ALA-PDT once. Five to six hours after occlusive application of ALA (a water-in-oil emulsion containing 10% ALA), irradiation was performed with an incoherent light source. Patients were monitored for up to 3 months after treatment. A score evaluating infiltration and keratosis of treated actinic keratoses allowed us to estimate therapeutic efficacy. Compared to the initial score (100%), significantly lower score sums were observed at the 28-day follow-up at both localizations (head: 15%; hand: 67%). Complete remission (score sum 0) resulted in 71% of actinic keratoses localized on the head. Except for slight pain and burning sensations during and after irradiation, there were no notable side effects. This study demonstrated good efficacy and tolerability of topical PDT in the treatment of actinic keratoses. Whether PDT can compete with established treatment modalities remains to be shown in further studies.

  13. Development, validity, and reliability of a ballet-specific aerobic fitness test.

    PubMed

    Twitchett, Emily; Nevill, Alan; Angioi, Manuela; Koutedakis, Yiannis; Wyon, Matthew

    2011-09-01

    The aim of this study was to develop and assess the reliability and validity of a multi-stage, ballet-specific aerobic fitness test to be used in a dance studio setting. The test consists of five stages, each four minutes long, that increase in intensity. It uses classical ballet movement of an intermediate level of difficulty, thus emphasizing physiological demand rather than skill. The demand of each stage was determined by calculating the mean oxygen uptake during its final minute using a portable gas analyser. After an initial familiarization period, eight female subjects performed the test twice within seven days. The results showed significant differences in oxygen consumption between stages (p < 0.001), but not between trials. Pearson correlation coefficients showed a very strong linear relationship between trials (r = 0.998, p < 0.001). Bland-Altman reliability analysis revealed the 95% limits of agreement to be ± 6.2 ml·kg⁻¹·min⁻¹, showing good agreement between trials. The oxygen uptake of our subjects corresponded well with previous estimates for class and performance, confirming validity. It was concluded that the test is suitable for use among classical ballet dancers, with many possible applications.

  14. Driving range estimation for electric vehicles based on driving condition identification and forecast

    NASA Astrophysics Data System (ADS)

    Pan, Chaofeng; Dai, Wei; Chen, Liao; Chen, Long; Wang, Limei

    2017-10-01

    With serious environmental pollution in our cities combined with the ongoing depletion of oil resources, electric vehicles are becoming highly favored as a means of transport, not only for their low noise but also for their high energy efficiency and zero pollution. The power battery is used as the energy source of electric vehicles. However, it still has a few shortcomings, notably low energy density, high cost, and short cycle life, which result in limited mileage compared with conventional passenger vehicles. Vehicle energy consumption rates differ greatly under different environmental and driving conditions. The estimation error of current driving-range methods is relatively large because the effects of environmental temperature and driving conditions are not considered. The development of a driving range estimation method will therefore have a great impact on electric vehicles. A new driving range estimation model based on the combination of driving cycle identification and prediction is proposed and investigated. This model can effectively eliminate mileage errors and has good convergence with added robustness. First, the identification of the driving cycle is based on kernel principal component feature parameters and a fuzzy C-means clustering algorithm. Second, a fuzzy rule between the characteristic parameters and energy consumption is established in the MATLAB/Simulink environment. Furthermore, a Markov algorithm and a BP (back propagation) neural network method are used to predict future driving conditions to improve the accuracy of the remaining-range estimation. Finally, the driving range estimation method is evaluated under the ECE 15 condition using a rotary drum test bench, and the experimental results are compared with the estimation results. The results show that the proposed driving range estimation method can not only estimate the remaining mileage, but also eliminate the fluctuation of the residual range under different driving conditions.

  15. Disaster debris estimation using high-resolution polarimetric stereo-SAR

    NASA Astrophysics Data System (ADS)

    Koyama, Christian N.; Gokon, Hideomi; Jimbo, Masaru; Koshimura, Shunichi; Sato, Motoyuki

    2016-10-01

    This paper addresses the problem of debris estimation which is one of the most important initial challenges in the wake of a disaster like the Great East Japan Earthquake and Tsunami. Reasonable estimates of the debris have to be made available to decision makers as quickly as possible. Current approaches to obtain this information are far from being optimal as they usually rely on manual interpretation of optical imagery. We have developed a novel approach for the estimation of tsunami debris pile heights and volumes for improved emergency response. The method is based on a stereo-synthetic aperture radar (stereo-SAR) approach for very high-resolution polarimetric SAR. An advanced gradient-based optical-flow estimation technique is applied for optimal image coregistration of the low-coherence non-interferometric data resulting from the illumination from opposite directions and in different polarizations. By applying model based decomposition of the coherency matrix, only the odd bounce scattering contributions are used to optimize echo time computation. The method exclusively considers the relative height differences from the top of the piles to their base to achieve a very fine resolution in height estimation. To define the base, a reference point on non-debris-covered ground surface is located adjacent to the debris pile targets by exploiting the polarimetric scattering information. The proposed technique is validated using in situ data of real tsunami debris taken on a temporary debris management site in the tsunami affected area near Sendai city, Japan. The estimated height error is smaller than 0.6 m RMSE. The good quality of derived pile heights allows for a voxel-based estimation of debris volumes with a RMSE of 1099 m3. Advantages of the proposed method are fast computation time, and robust height and volume estimation of debris piles without the need for pre-event data or auxiliary information like DEM, topographic maps or GCPs.

  16. Automatic 3D motion estimation of left ventricle from C-arm rotational angiocardiography using a prior motion model and learning based boundary detector.

    PubMed

    Chen, Mingqing; Zheng, Yefeng; Wang, Yang; Mueller, Kerstin; Lauritsch, Guenter

    2013-01-01

    Compared to pre-operative imaging modalities, it is more convenient to estimate the current cardiac physiological status from C-arm angiocardiography since C-arm is a widely used intra-operative imaging modality to guide many cardiac interventions. The 3D shape and motion of the left ventricle (LV) estimated from rotational angiocardiography provide important cardiac function measurements, e.g., ejection fraction and myocardium motion dyssynchrony. However, automatic estimation of the 3D LV motion is difficult since all anatomical structures overlap on the 2D X-ray projections and the nearby confounding strong image boundaries (e.g., pericardium) often cause ambiguities to LV endocardium boundary detection. In this paper, a new framework is proposed to overcome the aforementioned difficulties: (1) A new learning-based boundary detector is developed by training a boosting boundary classifier combined with the principal component analysis of a local image patch; (2) The prior LV motion model is learned from a set of dynamic cardiac computed tomography (CT) sequences to provide a good initial estimate of the 3D LV shape of different cardiac phases; (3) The 3D motion trajectory is learned for each mesh point; (4) All these components are integrated into a multi-surface graph optimization method to extract the globally coherent motion. The method is tested on seven patient scans, showing significant improvement on the ambiguous boundary cases with a detection accuracy of 2.87 +/- 1.00 mm on LV endocardium boundary delineation in the 2D projections.

  17. Specification and Prediction of the Radiation Environment Using Data Assimilative VERB code

    NASA Astrophysics Data System (ADS)

    Shprits, Yuri; Kellerman, Adam

    2016-07-01

    We discuss how data assimilation can be used for the reconstruction of long-term evolution, benchmarking of physics-based codes, and improving the nowcasting and forecasting of the radiation belts and ring current. We also discuss advanced data assimilation methods such as parameter estimation and smoothing. We present a number of data assimilation applications using the VERB 3D code. The 3D data-assimilative VERB allows us to blend together data from GOES, RBSP A, and RBSP B. 1) The model with data assimilation allows us to propagate data to different pitch angles, energies, and L-shells and blends them together with the physics-based VERB code in an optimal way. We illustrate how to use this capability for the analysis of previous events and for obtaining a global and statistical view of the system. 2) Model predictions strongly depend on the initial conditions that are set up for the model; therefore, the model is only as good as the initial conditions it uses. To produce the best possible initial conditions, data from different sources (GOES, RBSP A and B, and our empirical model predictions based on ACE) are blended together in an optimal way by means of data assimilation, as described above. The resulting initial conditions have no gaps, which allows us to make more accurate predictions. A real-time prediction framework operating on our website, based on GOES, RBSP A and B, and ACE data and the 3D VERB code, is presented and discussed.
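    The blending step at the core of such data assimilation can be illustrated with a scalar Kalman update, in which a model forecast and an observation are weighted by their error variances. This is a toy sketch only; the VERB framework assimilates over a full 3-D phase-space grid with multiple satellites, and all numbers below are illustrative.

    ```python
    # Scalar Kalman-filter blending of a model forecast with an observation.

    def assimilate(x_forecast, p_forecast, y_obs, r_obs):
        k = p_forecast / (p_forecast + r_obs)      # Kalman gain
        x_analysis = x_forecast + k * (y_obs - x_forecast)
        p_analysis = (1.0 - k) * p_forecast
        return x_analysis, p_analysis

    x, p = 4.0, 1.0                                # state (e.g. log10 flux), variance
    for y in [4.6, 4.4, 4.5]:                      # successive observations
        x, p = assimilate(x, p, y, r_obs=0.25)     # analysis step
        p += 0.1                                   # model step inflates variance
    print(x, p)
    ```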

  18. Calibration of infiltration parameters on hydrological tank model using runoff coefficient of rational method

    NASA Astrophysics Data System (ADS)

    Suryoputro, Nugroho; Suhardjono, Soetopo, Widandi; Suhartanto, Ery

    2017-09-01

    In calibrating hydrological models, there are generally two stages of activity: 1) determining realistic initial model parameters that represent the physical processes of the natural components, and 2) entering initial parameter values which are then refined by trial and error, or automatically, to obtain optimal values. Determining realistic initial values takes experience and user knowledge of the model, which is a problem for novice model users. This paper presents another approach to estimate the infiltration parameters in the tank model. The parameters are approximated using the runoff coefficient of the rational method. The infiltration parameter value is simply described as the difference between the percentage of total rainfall and the percentage of runoff. It is expected that the results of this research will accelerate the calibration process of tank model parameters. The research was conducted on the Kali Bango sub-watershed in Malang Regency, with an area of 239.71 km2. Infiltration measurements were carried out from January 2017 to March 2017. Soil samples were analysed at the Soil Physics Laboratory, Department of Soil Science, Faculty of Agriculture, Universitas Brawijaya. Rainfall and discharge data were obtained from UPT PSAWS Bango Gedangan in Malang. Temperature, evaporation, relative humidity and wind speed data were obtained from the BMKG station of Karang Ploso, Malang. The results showed that the initial value of the infiltration coefficient at the top tank outlet can be determined using the runoff coefficient of the rational method, with good results.
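
    Under the reading given in the abstract, the initial infiltration coefficient is simply the complement of the rational-method runoff coefficient. A one-function sketch, with a hypothetical C value:

        def initial_infiltration_coefficient(runoff_coefficient):
            """Initial guess for the top-tank infiltration parameter: the
            share of rainfall that does not leave as direct runoff."""
            if not 0.0 <= runoff_coefficient <= 1.0:
                raise ValueError("runoff coefficient must lie in [0, 1]")
            return 1.0 - runoff_coefficient

        # e.g. a rational-method C of 0.45 leaves 55% of rainfall to infiltrate
        print(initial_infiltration_coefficient(0.45))   # 0.55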

  19. On the equilibrium contact angle of sessile liquid drops from molecular dynamics simulations.

    PubMed

    Ravipati, Srikanth; Aymard, Benjamin; Kalliadasis, Serafim; Galindo, Amparo

    2018-04-28

    We present a new methodology to estimate the contact angles of sessile drops from molecular simulations by using the Gaussian convolution method of Willard and Chandler [J. Phys. Chem. B 114, 1954-1958 (2010)] to calculate the coarse-grained density from atomic coordinates. The iso-density contour with average coarse-grained density value equal to half of the bulk liquid density is identified as the average liquid-vapor (LV) interface. Angles between the unit normal vectors to the average LV interface and unit normal vector to the solid surface, as a function of the distance normal to the solid surface, are calculated. The cosines of these angles are extrapolated to the three-phase contact line to estimate the sessile drop contact angle. The proposed methodology, which is relatively easy to implement, is systematically applied to three systems: (i) a Lennard-Jones (LJ) drop on a featureless LJ 9-3 surface; (ii) an SPC/E water drop on a featureless LJ 9-3 surface; and (iii) an SPC/E water drop on a graphite surface. The sessile drop contact angles estimated with our methodology for the first two systems are shown to be in good agreement with the angles predicted from Young's equation. The interfacial tensions required for this equation are computed by employing the test-area perturbation method for the corresponding planar interfaces. Our findings suggest that the widely adopted spherical-cap approximation should be used with caution, as it could take a long time for a sessile drop to relax to a spherical shape, of the order of 100 ns, especially for water molecules initiated in a lattice configuration on a solid surface. But even though a water drop can take a long time to reach the spherical shape, we find that the contact angle is well established much faster and the drop evolves toward the spherical shape following a constant-contact-angle relaxation dynamics. Making use of this observation, our methodology allows a good estimation of the sessile drop contact angle values even for moderate system sizes (with, e.g., 4000 molecules), without the need for long simulation times to reach the spherical shape.
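
    The final geometric step, extrapolating the cosines of the interface angles down to the three-phase contact line, can be sketched as a linear fit of cos θ against height above the solid, evaluated at zero height. The profile values below are invented for illustration and are not taken from the paper.

        import numpy as np

        # Heights above the solid surface (nm) and cos(theta) measured from
        # the average liquid-vapor iso-density contour (hypothetical data)
        z = np.array([0.8, 1.0, 1.2, 1.4, 1.6])
        cos_theta = np.array([0.42, 0.45, 0.47, 0.50, 0.52])

        # Linear extrapolation of cos(theta) to the contact line at z = 0
        slope, intercept = np.polyfit(z, cos_theta, 1)
        contact_angle_deg = np.degrees(np.arccos(intercept))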

  20. On the equilibrium contact angle of sessile liquid drops from molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Ravipati, Srikanth; Aymard, Benjamin; Kalliadasis, Serafim; Galindo, Amparo

    2018-04-01

    We present a new methodology to estimate the contact angles of sessile drops from molecular simulations by using the Gaussian convolution method of Willard and Chandler [J. Phys. Chem. B 114, 1954-1958 (2010)] to calculate the coarse-grained density from atomic coordinates. The iso-density contour with average coarse-grained density value equal to half of the bulk liquid density is identified as the average liquid-vapor (LV) interface. Angles between the unit normal vectors to the average LV interface and unit normal vector to the solid surface, as a function of the distance normal to the solid surface, are calculated. The cosines of these angles are extrapolated to the three-phase contact line to estimate the sessile drop contact angle. The proposed methodology, which is relatively easy to implement, is systematically applied to three systems: (i) a Lennard-Jones (LJ) drop on a featureless LJ 9-3 surface; (ii) an SPC/E water drop on a featureless LJ 9-3 surface; and (iii) an SPC/E water drop on a graphite surface. The sessile drop contact angles estimated with our methodology for the first two systems are shown to be in good agreement with the angles predicted from Young's equation. The interfacial tensions required for this equation are computed by employing the test-area perturbation method for the corresponding planar interfaces. Our findings suggest that the widely adopted spherical-cap approximation should be used with caution, as it could take a long time for a sessile drop to relax to a spherical shape, of the order of 100 ns, especially for water molecules initiated in a lattice configuration on a solid surface. But even though a water drop can take a long time to reach the spherical shape, we find that the contact angle is well established much faster and the drop evolves toward the spherical shape following a constant-contact-angle relaxation dynamics. Making use of this observation, our methodology allows a good estimation of the sessile drop contact angle values even for moderate system sizes (with, e.g., 4000 molecules), without the need for long simulation times to reach the spherical shape.

  1. Estimating the Impact of Earlier ART Initiation and Increased Testing Coverage on HIV Transmission among Men Who Have Sex with Men in Mexico using a Mathematical Model.

    PubMed

    Caro-Vega, Yanink; del Rio, Carlos; Lima, Viviane Dias; Lopez-Cervantes, Malaquias; Crabtree-Ramirez, Brenda; Bautista-Arredondo, Sergio; Colchero, M Arantxa; Sierra-Madero, Juan

    2015-01-01

    To estimate the impact of late ART initiation on HIV transmission among men who have sex with men (MSM) in Mexico. An HIV transmission model was built to estimate the number of infections transmitted by HIV-infected MSM (MSM-HIV+) in the short and long term. Sexual risk behavior data were estimated from a nationwide study of MSM. CD4+ counts at ART initiation from a representative national cohort were used to estimate time since infection. The numbers of MSM-HIV+ on treatment and suppressed were estimated from surveillance and government reports. A status quo scenario (SQ) and scenarios of early ART initiation and increased HIV testing were modeled. We estimated 14239 new HIV infections per year from MSM-HIV+ in Mexico. In SQ, MSM take an average of 7.4 years since infection to initiate treatment, with a median CD4+ count of 148 cells/mm3 (25th-75th percentiles 52-266). In SQ, 68% of MSM-HIV+ are not aware of their HIV status and transmit 78% of new infections. Increasing the CD4+ count at ART initiation to 350 cells/mm3 shortened the time since infection to 2.8 years. Increasing HIV testing to cover 80% of undiagnosed MSM resulted in a reduction of 70% in new infections over 20 years. With ART initiation at 500 cells/mm3 and increased HIV testing, the reduction would be 75% over 20 years. A substantial number of new HIV infections in Mexico are transmitted by undiagnosed and untreated MSM-HIV+. An aggressive increase in HIV testing coverage and initiating ART at a CD4 count of 500 cells/mm3 in this population would significantly benefit individuals and decrease the number of new HIV infections in Mexico.

  2. Critical Parameters of the Initiation Zone for Spontaneous Dynamic Rupture Propagation

    NASA Astrophysics Data System (ADS)

    Galis, M.; Pelties, C.; Kristek, J.; Moczo, P.; Ampuero, J. P.; Mai, P. M.

    2014-12-01

    Numerical simulations of rupture propagation are used to study both earthquake source physics and earthquake ground motion. Under linear slip-weakening friction, artificial procedures are needed to initiate a self-sustained rupture. The concept of an overstressed asperity is often applied, in which the asperity is characterized by its size, shape and overstress. The physical properties of the initiation zone may have significant impact on the resulting dynamic rupture propagation. A trial-and-error approach is often necessary for successful initiation because 2D and 3D theoretical criteria for estimating the critical size of the initiation zone do not provide general rules for designing 3D numerical simulations. Therefore, it is desirable to define guidelines for efficient initiation with minimal artificial effects on rupture propagation. We perform an extensive parameter study using numerical simulations of 3D dynamic rupture propagation assuming a planar fault to examine the critical size of square, circular and elliptical initiation zones as a function of asperity overstress and background stress. For a fixed overstress, we discover that the area of the initiation zone is more important for the nucleation process than its shape. Comparing our numerical results with published theoretical estimates, we find that the estimates by Uenishi & Rice (2004) are applicable to configurations with low background stress and small overstress. None of the published estimates are consistent with numerical results for configurations with high background stress. We therefore derive new equations to estimate the initiation zone size in environments with high background stress. Our results provide guidelines for defining the size of the initiation zone and overstress with minimal effects on the subsequent spontaneous rupture propagation.

  3. Evaluation of estimation methods and power of tests of discrete covariates in repeated time-to-event parametric models: application to Gaucher patients treated by imiglucerase.

    PubMed

    Vigan, Marie; Stirnemann, Jérôme; Mentré, France

    2014-05-01

    Analysis of repeated time-to-event data is increasingly performed in pharmacometrics using parametric frailty models. The aims of this simulation study were (1) to assess the estimation performance of the Stochastic Approximation Expectation Maximization (SAEM) algorithm in MONOLIX and the Adaptive Gaussian Quadrature (AGQ) and Laplace algorithms in PROC NLMIXED of SAS, and (2) to evaluate the properties of tests of a dichotomous covariate on the occurrence of events. The simulation setting is inspired by an analysis of the occurrence of bone events after the initiation of treatment by imiglucerase in patients with Gaucher Disease (GD). We simulated repeated events with an exponential model and various dropout rates: none, low, or high. Several values of the baseline hazard model, variability, number of subjects, and effect of covariate were studied. For each scenario, 100 datasets were simulated to assess estimation performance and 500 for test performance. We evaluated estimation performance through relative bias and relative root mean square error (RRMSE). We studied the properties of the Wald test and the likelihood ratio test (LRT). We used these methods to analyze the occurrence of bone events in patients with GD after starting an enzyme replacement therapy. SAEM with three chains and AGQ provided good estimates of the parameters, much better than SAEM with one chain and Laplace, which often provided poor estimates. Despite a small number of repeated events, SAEM with three chains and AGQ gave small biases and RRMSE. Type I errors were close to 5%, and power varied as expected for SAEM with three chains and AGQ. The probability of having at least one event under treatment was 19.1%.
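
    For reference, the two performance metrics used in the study, relative bias and relative root mean square error (RRMSE), are straightforward to compute across simulated datasets; the estimates below are invented.

        import numpy as np

        def relative_bias(estimates, true_value):
            return (np.mean(estimates) - true_value) / true_value

        def rrmse(estimates, true_value):
            return np.sqrt(np.mean((estimates - true_value) ** 2)) / true_value

        theta_hat = np.array([0.95, 1.10, 1.02, 0.97])  # estimates, 4 datasets
        print(relative_bias(theta_hat, 1.0), rrmse(theta_hat, 1.0))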

  4. Administrative Preparedness Strategies: Expediting Procurement and Contracting Cycle Times During an Emergency.

    PubMed

    Hurst, David; Sharpe, Sharon; Yeager, Valerie A

    We assessed whether administrative preparedness processes that were intended to expedite the acquisition of goods and services during a public health emergency affect estimated procurement and contracting cycle times. We obtained data from 2014-2015 applications to the Hospital Preparedness Program and Public Health Emergency Preparedness (HPP-PHEP) cooperative agreements. We compared the estimated procurement and contracting cycle times of 61 HPP-PHEP awardees that did and did not have certain administrative processes in place. Certain processes, such as statutes allowing for procuring and contracting on the open market, had an effect on reducing the estimated cycle times for obtaining goods and services. Other processes, such as cooperative purchasing agreements, also had an effect on estimated procurement time. For example, awardees with statutes that permitted them to obtain goods and services in the open market had an average procurement cycle time of 6 days; those without such statutes had a cycle time of 17 days ( P = .04). PHEP awardees should consider adopting these or similar processes in an effort to reduce cycle times.

  5. Reef fish communities are spooked by scuba surveys and may take hours to recover

    PubMed Central

    Cheal, Alistair J.; Miller, Ian R.

    2018-01-01

    Ecological monitoring programs typically aim to detect changes in the abundance of species of conservation concern or which reflect system status. Coral reef fish assemblages are functionally important for reef health and these are most commonly monitored using underwater visual surveys (UVS) by divers. In addition to estimating numbers, most programs also collect estimates of fish lengths to allow calculation of biomass, an important determinant of a fish’s functional impact. However, diver surveys may be biased because fishes may either avoid or are attracted to divers and the process of estimating fish length could result in fish counts that differ from those made without length estimations. Here we investigated whether (1) general diver disturbance and (2) the additional task of estimating fish lengths affected estimates of reef fish abundance and species richness during UVS, and for how long. Initial estimates of abundance and species richness were significantly higher than those made on the same section of reef after diver disturbance. However, there was no evidence that estimating fish lengths at the same time as abundance resulted in counts different from those made when estimating abundance alone. Similarly, there was little consistent bias among observers. Estimates of the time for fish taxa that avoided divers after initial contact to return to initial levels of abundance varied from three to 17 h, with one group of exploited fishes showing initial attraction to divers that declined over the study period. Our finding that many reef fishes may disperse for such long periods after initial contact with divers suggests that monitoring programs should take great care to minimise diver disturbance prior to surveys. PMID:29844998

  6. Improving Initial Assessment: Guide to Good Practice

    ERIC Educational Resources Information Center

    Knasel, Eddy; Meed, John; Rossetti, Anna; Read, Hilary

    2006-01-01

    This guide is aimed at anyone in work-based training who is responsible for learners during their first few weeks. Readers will (1) understand the value and purpose of initial assessment in key skills and Skills for Life; (2) become familiar with a range of techniques for the initial assessment; (3) plan an initial assessment system that is…

  7. Method to monitor HC-SCR catalyst NOx reduction performance for lean exhaust applications

    DOEpatents

    Viola, Michael B [Macomb Township, MI; Schmieg, Steven J [Troy, MI; Sloane, Thompson M [Oxford, MI; Hilden, David L [Shelby Township, MI; Mulawa, Patricia A [Clinton Township, MI; Lee, Jong H [Rochester Hills, MI; Cheng, Shi-Wai S [Troy, MI

    2012-05-29

    A method for initiating a regeneration mode in selective catalytic reduction device utilizing hydrocarbons as a reductant includes monitoring a temperature within the aftertreatment system, monitoring a fuel dosing rate to the selective catalytic reduction device, monitoring an initial conversion efficiency, selecting a determined equation to estimate changes in a conversion efficiency of the selective catalytic reduction device based upon the monitored temperature and the monitored fuel dosing rate, estimating changes in the conversion efficiency based upon the determined equation and the initial conversion efficiency, and initiating a regeneration mode for the selective catalytic reduction device based upon the estimated changes in conversion efficiency.
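
    The claim reduces to a monitoring loop: choose an efficiency-decay model from the monitored temperature and dosing rate, project the conversion efficiency forward from its initial value, and trigger regeneration when the projection crosses a threshold. The sketch below is schematic, with made-up decay constants and threshold; the determined equations themselves are not given in the abstract.

        def estimated_efficiency(eta_initial, hours, temp_c, dosing_rate):
            """Project NOx conversion efficiency with a decay model chosen
            from temperature and fuel dosing rate (hypothetical constants)."""
            decay = 0.01 if temp_c < 350.0 else 0.03   # per-hour decay rate
            decay *= 1.0 + 0.1 * dosing_rate           # richer dosing ages faster
            return eta_initial * (1.0 - decay) ** hours

        def needs_regeneration(eta_initial, hours, temp_c, dosing_rate,
                               threshold=0.6):
            return estimated_efficiency(eta_initial, hours, temp_c,
                                        dosing_rate) < threshold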

  8. Improved Estimates of Temporally Coherent Internal Tides and Energy Fluxes from Satellite Altimetry

    NASA Technical Reports Server (NTRS)

    Ray, Richard D.; Chao, Benjamin F. (Technical Monitor)

    2002-01-01

    Satellite altimetry has opened a surprising new avenue to observing internal tides in the open ocean. The tidal surface signatures are very small, a few cm at most, but in many areas they are robust, owing to averaging over many years. By employing a simplified two dimensional wave fitting to the surface elevations in combination with climatological hydrography to define the relation between the surface height and the current and pressure at depth, we may obtain rough estimates of internal tide energy fluxes. Initial results near Hawaii with Topex/Poseidon (T/P) data show good agreement with detailed 3D (three dimensional) numerical models, but the altimeter picture is somewhat blurred owing to the widely spaced T/P tracks. The resolution may be enhanced somewhat by using data from the ERS-1 (ESA (European Space Agency) Remote Sensing) and ERS-2 satellite altimeters. The ERS satellite tracks are much more closely spaced (0.72 deg longitude vs. 2.83 deg for T/P), but the tidal estimates are less accurate than those for T/P. All altimeter estimates are also severely affected by noise in regions of high mesoscale variability, and we have obtained some success in reducing this contamination by employing a prior correction for mesoscale variability based on ten day detailed sea surface height maps developed by Le Traon and colleagues. These improvements allow us to more clearly define the internal tide surface field and the corresponding energy fluxes. Results from throughout the global ocean will be presented.

  9. Neural and Neural Gray-Box Modeling for Entry Temperature Prediction in a Hot Strip Mill

    NASA Astrophysics Data System (ADS)

    Barrios, José Angel; Torres-Alvarado, Miguel; Cavazos, Alberto; Leduc, Luis

    2011-10-01

    In hot strip mills, initial controller set points have to be calculated before the steel bar enters the mill. Calculations rely on the good knowledge of rolling variables. Measurements are available only after the bar has entered the mill, and therefore they have to be estimated. Estimation of process variables, particularly that of temperature, is of crucial importance for the bar front section to fulfill quality requirements, and the same must be performed in the shortest possible time to preserve heat. Currently, temperature estimation is performed by physical modeling; however, it is highly affected by measurement uncertainties, variations in the incoming bar conditions, and final product changes. In order to overcome these problems, artificial intelligence techniques such as artificial neural networks and fuzzy logic have been proposed. In this article, neural network-based systems, including neural-based Gray-Box models, are applied to estimate scale breaker entry temperature, given its importance, and their performance is compared to that of the physical model used in plant. Several neural systems and several neural-based Gray-Box models are designed and tested with real data. Taking advantage of the flexibility of neural networks for input incorporation, several factors which are believed to have influence on the process are also tested. The systems proposed in this study were proven to have better performance indexes and hence better prediction capabilities than the physical models currently used in plant.
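
    A gray-box model of this kind adds a learned correction to the physical model's output, so the data-driven part only has to capture the physical model's residual error. A minimal sketch, with a linear least-squares correction standing in for the neural network and invented training data:

        import numpy as np

        # Hypothetical inputs (e.g. bar speed, thickness, grade code) plus an
        # intercept column; residuals = measured minus physical estimate (degC)
        X = np.column_stack([np.random.rand(100, 3), np.ones(100)])
        residuals = 5.0 * np.random.randn(100)

        coef, *_ = np.linalg.lstsq(X, residuals, rcond=None)

        def grey_box_predict(physical_estimate_degc, features):
            """Gray-box output: physical model plus learned residual term."""
            return physical_estimate_degc + np.append(features, 1.0) @ coef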

  10. Educating for Good Work: From Research to Practice

    ERIC Educational Resources Information Center

    Mucinskas, Daniel; Gardner, Howard

    2013-01-01

    Launched in 1995, the GoodWork Project is a long-term, multi-site effort to understand the nature of good work across the professional landscape and to promote its achievement by relevant groups of students and professionals. In this essay, the authors review the goals and methods of the initial research project and its most salient findings. They…

  11. Influence of Weld Porosity on the Integrity of Marine Structures

    DTIC Science & Technology

    1989-02-01

    LEFM provides good estimates of long crack growth; methods developed by Leis [41] could be used to improve the accuracy of fatigue crack propagation... resulted in good estimates for fatigue life and, when viewed in terms of stress, even better estimates. The absolute magnitude of the predictions are...

  12. Direct observations of a flare related coronal and solar wind disturbance

    NASA Technical Reports Server (NTRS)

    Gosling, J. T.; Hildner, E.; Macqueen, R. M.; Munro, R. H.; Poland, A. I.; Ross, C. L.

    1975-01-01

    Numerous mass ejections from the sun have been detected with orbiting coronagraphs. Here for the first time we document and discuss the direct association of a coronagraph observed mass ejection, which followed a 2B flare, with a large interplanetary shock wave disturbance observed at 1 AU. Estimates of the mass and energy content of the coronal disturbance are in reasonably good agreement with estimates of the mass and energy content of the solar wind disturbance at 1 AU. The energy estimates as well as the transit time of the disturbance are also in good agreement with numerical models of shock wave propagation in the solar wind.

  13. A Bayesian Assessment of Seismic Semi-Periodicity Forecasts

    NASA Astrophysics Data System (ADS)

    Nava, F.; Quinteros, C.; Glowacka, E.; Frez, J.

    2016-01-01

    Among the schemes for earthquake forecasting, the search for semi-periodicity in the occurrence of large earthquakes in a given seismogenic region plays an important role. When considering earthquake forecasts based on semi-periodic sequence identification, the Bayesian formalism is a useful tool for: (1) assessing how well a given earthquake satisfies a previously made forecast; (2) re-evaluating the semi-periodic sequence probability; and (3) testing other prior estimations of the sequence probability. A comparison of Bayesian estimates with updated estimates of semi-periodic sequences that incorporate new data not used in the original estimates shows extremely good agreement, indicating that: (1) the probability that a semi-periodic sequence is not due to chance is an appropriate estimate for the prior sequence probability; and (2) the Bayesian formalism does a very good job of estimating corrected semi-periodicity probabilities, using slightly less data than that used for updated estimates. The Bayesian approach is exemplified explicitly by its application to the Parkfield semi-periodic forecast, and results are given for its application to other forecasts in Japan and Venezuela.
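
    Step (2), re-evaluating the sequence probability after a new earthquake, is an ordinary Bayes update. A sketch with invented numbers: the prior is the probability that the semi-periodic sequence is real, and the two likelihoods describe how probable the observed event timing is under the "real sequence" and "chance" hypotheses.

        def update_sequence_probability(prior, like_seq, like_chance):
            """Posterior probability that the semi-periodic sequence is real,
            given how well the new event fits the forecast window."""
            evidence = prior * like_seq + (1.0 - prior) * like_chance
            return prior * like_seq / evidence

        # An event landing inside the forecast window (likelihoods invented)
        print(update_sequence_probability(prior=0.7, like_seq=0.8,
                                          like_chance=0.2))   # ~0.903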

  14. Feedback in Software and a Desktop Manufacturing Context for Learning Estimation Strategies in Middle School

    ERIC Educational Resources Information Center

    Malcolm, Peter

    2013-01-01

    The ability to make good estimates is essential, as is the ability to assess the reasonableness of estimates. These abilities are becoming increasingly important as digital technologies transform the ways in which people work. To estimate is to provide an approximation to a problem that is mathematical in nature, and the ability to estimate is…

  15. Processing electronic photos of Mercury produced by ground based observation

    NASA Astrophysics Data System (ADS)

    Ksanfomality, Leonid

    New images of Mercury have been obtained by processing ground-based observations carried out using the short-exposure technique. The disk of the planet usually extends from 6 to 7 arc seconds, with the linear size of the image in the focal plane of the telescope about 0.3-0.5 mm on average. Processing the initial millisecond electronic photos of the planet is very labour-intensive. Some features of the processing of initial millisecond electronic photos by methods of correlation stacking were considered in (Ksanfomality et al., 2005; Ksanfomality and Sprague, 2007). The method uses manual selection of good photos, including a so-called pilot-file, the search for which usually must be done manually. The pilot-file is the most successful photo, in the opinion of the operator, and it defines the future result of the stacking. Changing pilot-files multiplies the labour of processing. The processing programs analyze the contents of a sample, find in it any details, and search for the recurrence of these almost imperceptible details in thousands of other stacked electronic pictures. If, proceeding from experience, the form and position of a pilot-file can still be estimated, judging the reality of barely distinct details in it lies somewhere between imaging and imagination. In 2006-07 some programs for automatic processing were created. Unfortunately, the efficiency of all automatic programs is not as good as manual selection. Together with the selection, some other known methods are used. The point spread function (PSF) is described by a known mathematical function which in its central part decreases smoothly from the center. Usually the width of this function is taken at a level of 0.7 or 0.5 of the maximum. If many thousands of initial electronic pictures are acquired, it is possible during their processing to take advantage of the known statistics of random variables and to choose the width of the function at a level of, say, 0.9 of the maximum. Then the resolution of the image improves appreciably. An essential element of the processing is the mathematical model of the unsharp mask. But this is a two-edged instrument: the result depends on the choice of the size of the mask. If the size is too small, all low spatial frequencies are lost and the image becomes uniformly grey; on the contrary, if the size of the unsharp mask is too large, all fine details disappear. In some cases the compromise in selecting the parameters of the unsharp mask becomes critical.
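
    The manual correlation-stacking workflow described above, scoring candidate frames against a pilot frame and averaging the best, can be caricatured in a few lines. The normalized-correlation score and the keep fraction are assumptions, not the author's actual procedure.

        import numpy as np

        def stack_best_frames(frames, pilot, keep_fraction=0.1):
            """Correlation stacking: keep the frames most similar to the
            pilot frame and average them."""
            p = (pilot - pilot.mean()) / pilot.std()
            scores = [np.mean((f - f.mean()) / f.std() * p) for f in frames]
            order = np.argsort(scores)[::-1]           # best scores first
            n_keep = max(1, int(len(frames) * keep_fraction))
            return np.mean([frames[i] for i in order[:n_keep]], axis=0)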

  16. On the recovery of electric currents in the liquid core of the Earth

    NASA Astrophysics Data System (ADS)

    Kuslits, Lukács; Prácser, Ernő; Lemperger, István

    2017-04-01

    Inverse geodynamo modelling has become a standard method for obtaining a more accurate image of the processes within the outer core. In this poster, excerpts from the preliminary results of another approach are presented, which concerns the possibility of recovering the currents within the liquid core directly, using Main Magnetic Field data. The flow of charge can be approximated by systems with various geometries. Based on previous geodynamo simulations, current coils can furnish a good initial geometry for such an estimation. The presentation introduces our preliminary test results and a study of the reliability of the applied inversion algorithm for different numbers of coils, distributed in a grid symbolising the domain between the inner-core and core-mantle boundaries. We shall also present inverted current structures using Main Field model data.

  17. A good performance watermarking LDPC code used in high-speed optical fiber communication system

    NASA Astrophysics Data System (ADS)

    Zhang, Wenbo; Li, Chao; Zhang, Xiaoguang; Xi, Lixia; Tang, Xianfeng; He, Wenxue

    2015-07-01

    A watermarking LDPC code, a strategy designed to improve the performance of the traditional LDPC code, is introduced. By inserting pre-defined watermarking bits into the original LDPC code, we can obtain a more accurate estimate of the noise level in the fiber channel. We then use this estimate to modify the probability distribution function (PDF) used in the initialization of the belief propagation (BP) decoding algorithm. The algorithm was tested in a 128 Gb/s PDM-DQPSK optical communication system, and the results showed that the watermarking LDPC code had a better tolerance to polarization mode dispersion (PMD) and nonlinearity than the traditional LDPC code. Also, at the cost of about 2.4% of redundancy for the watermarking bits, the decoding efficiency of the watermarking LDPC code is about twice that of the traditional one.
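
    The watermark's role can be sketched as follows: because the inserted bits are known at the receiver, the spread of their received soft values yields a noise-variance estimate, from which the initial likelihoods (here, LLRs) for belief-propagation decoding are formed. The sketch assumes BPSK over an AWGN channel, a simplification of the PDM-DQPSK system in the paper.

        import numpy as np

        def llrs_from_watermark(received, wm_positions, wm_bits):
            """Estimate the noise variance from known watermark bits, then
            form initial channel LLRs for BP decoding (BPSK mapping)."""
            expected = 1.0 - 2.0 * wm_bits          # bit 0 -> +1, bit 1 -> -1
            noise = received[wm_positions] - expected
            sigma2 = np.mean(noise ** 2)            # noise-variance estimate
            return 2.0 * received / sigma2          # LLRs for the whole word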

  18. A new fracture mechanics model for multiple matrix cracks of SiC fiber reinforced brittle-matrix composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okabe, T.; Takeda, N.; Komotori, J.

    1999-11-26

    A new model is proposed for multiple matrix cracking in order to take into account the role of matrix-rich regions in the cross section in initiating crack growth. The model is used to predict the matrix cracking stress and the total number of matrix cracks. The model converts the matrix-rich regions into equivalent penny shape crack sizes and predicts the matrix cracking stress with a fracture mechanics crack-bridging model. The estimated distribution of matrix cracking stresses is used as statistical input to predict the number of matrix cracks. The results show good agreement with the experimental results by replica observations. Therefore, it is found that the matrix cracking behavior mainly depends on the distribution of matrix-rich regions in the composite.

  19. Expansion Rate Scaling and Energy Evolution in the Electron Diffusion Gauge Experiment.

    NASA Astrophysics Data System (ADS)

    Morrison, Kyle; Davidson, Ronald; Paul, Stephen; Jenkins, Thomas

    2001-10-01

    The expansion of the Electron Diffusion Gauge (EDG) pure electron plasma resulting from collisions with background neutral gas atoms is characterized by the pressure and magnetic field scalings of the profile expansion rate $(d/dt)\langle r^2 \rangle$. The measured expansion rate in the higher pressure regime is found to be in good agreement with the classical estimate $$\frac{d}{dt}\langle r^2 \rangle = \frac{2 N_L e^2 \nu_{en}}{m \omega_c^2}\left(1 + \frac{2T}{N_L e^2}\right).$$ Expansion rate data is obtained for smaller initial plasmas (with outer diameter 1/4 of the trap wall diameter) generated with an improved filament installed in the EDG device, and the data is compared with previous results for larger-filament plasmas. The dynamic energy evolution of the plasma, including electrostatic energy and inferred temperature evolution for several of the measurements, is discussed.

  20. Economic Analysis of Veterans Affairs Initiative to Prevent Methicillin-Resistant Staphylococcus aureus Infections.

    PubMed

    Nelson, Richard E; Stevens, Vanessa W; Khader, Karim; Jones, Makoto; Samore, Matthew H; Evans, Martin E; Douglas Scott, R; Slayton, Rachel B; Schweizer, Marin L; Perencevich, Eli L; Rubin, Michael A

    2016-05-01

    In an effort to reduce methicillin-resistant Staphylococcus aureus (MRSA) transmission through universal screening and isolation, the Department of Veterans Affairs (VA) launched the National MRSA Prevention Initiative in October 2007. The objective of this analysis was to quantify the budget impact and cost effectiveness of this initiative. An economic model was developed using published data on MRSA hospital-acquired infection (HAI) rates in the VA from October 2007 to September 2010; estimates of the costs of MRSA HAIs in the VA; and estimates of the intervention costs, including salaries of staff members hired to support the initiative at each VA facility. To estimate the rate of MRSA HAIs that would have occurred if the initiative had not been implemented, two different assumptions were made: no change and a downward temporal trend. Effectiveness was measured in life-years gained. The initiative resulted in an estimated 1,466-2,176 fewer MRSA HAIs. The initiative itself was estimated to cost $207 million during this 3-year period, while the cost savings from prevented MRSA HAIs ranged from $27 million to $75 million. The incremental cost-effectiveness ratios ranged from $28,048 to $56,944 per life-year. The overall impact on the VA's budget was $131-$179 million. Wide-scale implementation of a national MRSA surveillance and prevention strategy in VA inpatient settings may have prevented a substantial number of MRSA HAIs. Although the savings associated with prevented infections helped offset some but not all of the cost of the initiative, this model indicated that the initiative would be considered cost effective. Copyright © 2016 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  1. Alternative Fuels Data Center: Massachusetts Transportation Data for

    Science.gov Websites

    Cod National Seashore Initiative for Resiliency in Energy through Vehicles (iREV) Maryland Hybrid Truck Goods Movement Initiative No One Silver Bullet, But a Lot of Silver Beebees Northeast Electric Vehicle Initiative Plug In America Removing Barriers, Implementing Policies and Advancing Alternative

  2. The Electronic Freight Management Initiative

    DOT National Transportation Integrated Search

    2006-04-01

    This report discusses the Electronic Freight Management initiative, a U.S. Department of Transportation (DOT) sponsored research effort that partners with industry to improve the operating efficiencies, safety, and security of goods movement. The EFM...

  3. Validity of a Commercial Linear Encoder to Estimate Bench Press 1 RM from the Force-Velocity Relationship.

    PubMed

    Bosquet, Laurent; Porta-Benache, Jeremy; Blais, Jérôme

    2010-01-01

    The aim of this study was to assess the validity and accuracy of a commercial linear encoder (Musclelab, Ergotest, Norway) to estimate Bench press 1 repetition maximum (1RM) from the force-velocity relationship. Twenty seven physical education students and teachers (5 women and 22 men) with a heterogeneous history of strength training participated in this study. They performed a 1 RM test and a force-velocity test using a Bench press lifting task in a random order. Mean 1 RM was 61.8 ± 15.3 kg (range: 34 to 100 kg), while 1 RM estimated by the Musclelab software from the force-velocity relationship was 56.4 ± 14.0 kg (range: 33 to 91 kg). Actual and estimated 1 RM were very highly correlated (r = 0.93, p<0.001) but largely different (Bias: 5.4 ± 5.7 kg, p < 0.001, ES = 1.37). The 95% limits of agreement were ±11.2 kg, which represented ±18% of actual 1 RM. It was concluded that 1 RM estimated from the force-velocity relationship was a good measure for monitoring training-induced adaptations, but also that it was not accurate enough to prescribe training intensities. Additional studies are required to determine whether accuracy is affected by age, sex or initial level. Key points: Some commercial devices allow one to estimate 1 RM from the force-velocity relationship. These estimations are valid; however, their accuracy is not high enough to be of practical help for training intensity prescription. Day-to-day reliability of force and velocity measured by the linear encoder has been shown to be very high, but the specific reliability of 1 RM estimated from the force-velocity relationship has to be determined before concluding on the usefulness of this approach in the monitoring of training-induced adaptations.
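
    The agreement statistics quoted here (bias 5.4 ± 5.7 kg, 95% limits of agreement ±11.2 kg) are the standard Bland-Altman quantities, which can be recomputed from paired measurements as below; the four data pairs are invented.

        import numpy as np

        def bland_altman(actual, estimated):
            """Bias and 95% limits of agreement between two methods."""
            diff = actual - estimated
            bias = diff.mean()
            half_width = 1.96 * diff.std(ddof=1)
            return bias, (bias - half_width, bias + half_width)

        actual_1rm = np.array([60.0, 80.0, 45.0, 100.0])      # kg, invented
        estimated_1rm = np.array([55.0, 73.0, 42.0, 93.0])
        print(bland_altman(actual_1rm, estimated_1rm))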

  4. Validity of a Commercial Linear Encoder to Estimate Bench Press 1 RM from the Force-Velocity Relationship

    PubMed Central

    Bosquet, Laurent; Porta-Benache, Jeremy; Blais, Jérôme

    2010-01-01

    The aim of this study was to assess the validity and accuracy of a commercial linear encoder (Musclelab, Ergotest, Norway) to estimate Bench press 1 repetition maximum (1RM) from the force-velocity relationship. Twenty seven physical education students and teachers (5 women and 22 men) with a heterogeneous history of strength training participated in this study. They performed a 1 RM test and a force-velocity test using a Bench press lifting task in a random order. Mean 1 RM was 61.8 ± 15.3 kg (range: 34 to 100 kg), while 1 RM estimated by the Musclelab software from the force-velocity relationship was 56.4 ± 14.0 kg (range: 33 to 91 kg). Actual and estimated 1 RM were very highly correlated (r = 0.93, p<0.001) but largely different (Bias: 5.4 ± 5.7 kg, p < 0.001, ES = 1.37). The 95% limits of agreement were ±11.2 kg, which represented ±18% of actual 1 RM. It was concluded that 1 RM estimated from the force-velocity relationship was a good measure for monitoring training-induced adaptations, but also that it was not accurate enough to prescribe training intensities. Additional studies are required to determine whether accuracy is affected by age, sex or initial level. Key points: Some commercial devices allow one to estimate 1 RM from the force-velocity relationship. These estimations are valid; however, their accuracy is not high enough to be of practical help for training intensity prescription. Day-to-day reliability of force and velocity measured by the linear encoder has been shown to be very high, but the specific reliability of 1 RM estimated from the force-velocity relationship has to be determined before concluding on the usefulness of this approach in the monitoring of training-induced adaptations. PMID:24149641

  5. Impacts and responses : goods movement after the Northridge Earthquake

    DOT National Transportation Integrated Search

    1998-05-01

    The 1994 Northridge earthquake disrupted goods movement on four major highway routes in : Southern California. This paper examines the impacts of the earthquake on Los Angeles County : trucking firms, and finds that the impact was initially widesprea...

  6. Noninvasive estimation of assist pressure for direct mechanical ventricular actuation

    NASA Astrophysics Data System (ADS)

    An, Dawei; Yang, Ming; Gu, Xiaotong; Meng, Fan; Yang, Tianyue; Lin, Shujing

    2018-02-01

    Direct mechanical ventricular actuation is effective in reestablishing ventricular function with no blood contact. Due to the energy loss within the driveline of the direct cardiac compression device, it is necessary to acquire an accurate value of the assist pressure acting on the heart surface. To avoid the myocardial trauma induced by invasive sensors, a noninvasive estimation method is developed, and an experimental device is designed to measure the sample data for fitting the estimation models. Examining the goodness of fit numerically and graphically shows that the polynomial model behaves best among the four alternative models. Meanwhile, to verify the effect of the noninvasive estimation, a simplified lumped parameter model is utilized to calculate the pre-support and post-support left ventricular pressure. Furthermore, by adjusting the driving pressure beyond the range of the sample data, the assist pressure is estimated with a similar waveform, and the post-support left ventricular pressure approaches the value of an adult healthy heart, indicating the good generalization ability of the noninvasive estimation method.
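
    The model-selection step can be sketched as fitting a candidate polynomial to (driving pressure, assist pressure) samples and scoring the goodness of fit with R²; the degree, units and data below are assumptions, not the paper's values.

        import numpy as np

        driving = np.array([10.0, 20.0, 30.0, 40.0, 50.0])   # kPa, invented
        assist = np.array([6.2, 13.9, 22.1, 30.8, 40.0])     # measured samples

        coeffs = np.polyfit(driving, assist, deg=2)          # candidate model
        predicted = np.polyval(coeffs, driving)

        ss_res = np.sum((assist - predicted) ** 2)
        r_squared = 1.0 - ss_res / np.sum((assist - assist.mean()) ** 2)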

  7. Comparison of team-focused CPR vs standard CPR in resuscitation from out-of-hospital cardiac arrest: Results from a statewide quality improvement initiative.

    PubMed

    Pearson, David A; Darrell Nelson, R; Monk, Lisa; Tyson, Clark; Jollis, James G; Granger, Christopher B; Corbett, Claire; Garvey, Lee; Runyon, Michael S

    2016-08-01

    Team-focused CPR (TFCPR) is a choreographed approach to cardiopulmonary resuscitation (CPR) that emphasizes minimally interrupted high-quality chest compressions and early defibrillation, discourages endotracheal intubation, and encourages use of the bag-valve-mask (BVM) and/or blind-insertion airway device (BIAD) with a ventilation rate of 8-10 breaths/min to minimize hyperventilation. Widespread incorporation of TFCPR in North Carolina (NC) EMS agencies began in 2011, yet its impact on outcomes is unknown. To determine whether TFCPR improves survival with good neurological outcome in out-of-hospital cardiac arrest (OHCA) patients compared to standard CPR. This retrospective cohort analysis of NC EMS agencies reporting data to the Cardiac Arrest Registry for Enhanced Survival (CARES) database from January 2010 to June 2014 included adult, non-traumatic OHCA with presumed cardiac etiology where EMS performed CPR or the patient received defibrillation. Exclusions were arrests terminated per EMS policy or DNR. EMS agencies self-reported the TFCPR implementation dates. Patients were categorized as receiving either TFCPR or standard CPR. The primary outcome was good neurologic outcome at the time of hospital discharge, defined as Pittsburgh Cerebral Performance Category (CPC) 1-2. Of 14,994 OHCAs, 14,129 patients were included for analysis, with a mean age of 65 (IQR 50-81) years, 61% male, 7.3% with good neurologic outcome, 24.3% with a shockable initial rhythm, and 71.5% receiving TFCPR. Of the 1029 patients with a good neurologic outcome, 739 (71.9%) had an initial shockable rhythm. Good neurologic outcome was more frequent with TFCPR [836 (8.3%, 95%CI 7.7-8.8%)] vs. standard CPR [193 (4.8%, 95%CI 4.2-5.5%)]. Logistic regression controlling for demographic and arrest characteristics revealed that TFCPR (OR 1.5), witnessed arrest (OR 4.3), initial shockable rhythm (OR 7.1), and in-hospital hypothermia (OR 3.3) were associated with good neurologic outcome. Mechanical CPR device (OR 0.68), CPR feedback device (OR 0.47), and endotracheal intubation (OR 0.44) were associated with a lower likelihood of a good neurologic outcome. In our statewide OHCA cohort, TFCPR was associated with improved survival with good neurological outcome. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. 77 FR 16215 - Commission Information Collection Activities (FERC-716); Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-20

    ...: Title: FERC-716, Good Faith Request for Transmission Service and Response by Transmitting Utility Under... valid good faith request for transmission has been made under section 211 of the FPA. By developing the.... As a result, 18 CFR 2.20 identifies 12 components of a good faith estimate and 5 components of a...

  9. A Reevaluation of Impact Melt Production

    NASA Astrophysics Data System (ADS)

    Pierazzo, E.; Vickery, A. M.; Melosh, H. J.

    1997-06-01

    The production of melt and vapor is an important process in impact cratering events. Because significant melting and vaporization do not occur in impacts at velocities currently achievable in the laboratory, a detailed study of the production of melt and vapor in planetary impact events is carried out with hydrocode simulations. Sandia's two-dimensional axisymmetric hydrocode CSQ was used to estimate the amount of melt and vapor produced for widely varying initial conditions: 10 to 80 km/sec for impact velocity, 0.2 to 10 km for the projectile radius. Runs with different materials demonstrate the material dependency of the final result. These results should apply to any size projectile (for given impact velocity and material), since the results can be dynamically scaled so long as gravity is unimportant in affecting the early-time flow. In contrast with the assumptions of previous analytical models, a clear difference in shape, impact-size dependence, and depth of burial has been found between the melt regions and the isobaric core. In particular, the depth of the isobaric core is not a good representation of the depth of the melt regions, which form deeper in the target. While near-surface effects cause the computed melt region shapes to look like “squashed spheres”, the spherical shape is still a good analytical analog. One of the goals of melt production studies is to find proper scaling laws to infer melt production for any impact event of interest. We tested the point source limit scaling law for melt volumes (μ = 0.55-0.6) proposed by M. D. Bjorkman and K. A. Holsapple (1987, Int. J. Impact Eng. 5, 155-163). Our results indicate that the point source limit concept does not apply to melt and vapor production. Rather, melt and vapor production follows an energy scaling law (μ = 0.67), in good agreement with previous results of T. J. Ahrens and J. D. O'Keefe [1977, in Impact and Explosion Cratering (D. J. Roddy, R. O. Pepin, and R. B. Merrill, Eds.), pp. 639-656, Pergamon Press, Elmsford, NY]. Finally we tested the accuracy of our melt production calculation against a terrestrial dataset compiled by R. A. F. Grieve and M. J. Cintala (1992, Meteoritics 27, 526-538). The hydrocode melt volumes are in good agreement with the estimated volumes of that set of terrestrial craters on crystalline basements. At present there is no good model for melt production from impact craters on sedimentary targets.

  10. 78 FR 66079 - In the Matter of All Licensees Authorized To Manufacture or Initially Transfer Items Containing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-04

    ... above conditions upon demonstration by the Licensee of good cause. IV In accordance with 10 CFR 2.202... the Order. Where good cause is shown, consideration will be given to extending the time to request a..., DC 20555-0001, and include a statement of good cause for the extension. The answer may consent to...

  11. A Review of Crashworthiness of Composite Aircraft Structures

    DTIC Science & Technology

    1990-02-01

    proprietary, or other reasons. Details on the availability of these publications may be obtained from: Graphics Section, National Research Council Canada...bottoming out, good energy-absorbing and load-limiting ability, good post-crushing structural integrity and no significant load rate sensitivity. In a... good energy absorption capability under compressive loadings. However, under tensile or bending conditions, structural integrity may be lost at initial

  12. PMP Estimations at Sparsely Controlled Andinian Basins and Climate Change Projections

    NASA Astrophysics Data System (ADS)

    Lagos Zúñiga, M. A.; Vargas, X.

    2012-12-01

    Probable Maximum Precipitation (PMP) estimation implies an extensive review of hydrometeorological data and an understanding of precipitation formation processes. Different methodological approaches apply to its estimation, and all of them require a good spatial and temporal representation of storms. The estimation of hydrometeorological PMP in sparsely controlled basins is a difficult task, especially if the studied area has an important orographic effect due to mountains and mixed precipitation occurrence during the most severe storms. The main task of this study is to propose and estimate PMP in a sparsely controlled, mixed-hydrology basin affected by abrupt topography, also analyzing the statistical uncertainties of the estimations and possible climate change effects. In this study, PMP estimation under statistical and hydrometeorological approaches (watershed-based and traditional depth-area-duration analysis) was done in a semi-arid zone at Puclaro dam in northern Chile. Due to the lack of good spatial meteorological representation in the study zone, we propose a methodology to consider the orographic effects of the Andes based on orographic effect patterns from the RCM PRECIS-DGF and annual isohyetal maps. Estimations were validated against precipitation patterns for given winters, considering the snow route and rain gauges along the preferential wind direction, finding good results. The estimations are also compared with the largest areal storms in the USA, Australia, India and China, and with frequency analyses at local rain gauge stations, in order to decide on the most adequate approach for the study zone. Climate change projections were evaluated with the ECHAM5 GCM, due to its good representation of the seasonality and magnitude of meteorological variables. Temperature projections for the 2040-2065 period show that there would be a rise in the catchment contributing area that would lead to an increase of the average liquid precipitation over the basin. Temperature projections would also affect the maximization factors in the calculation of the PMP, increasing it by up to 126.6% and 62.5% in scenarios A2 and B1, respectively. These projections are important to study due to the implications of PMP for the hydrologic design of great hydraulic works via the Probable Maximum Flood (PMF). We propose that the methodology presented in this study could also be used in other basins of similar characteristics.

  13. Is My Facility a Good Candidate for CHP?

    EPA Pesticide Factsheets

    Learn if a facility is a good candidate for CHP by answering a list of questions, and access the CHP Spark Spread Estimator, a tool that helps evaluate a prospective CHP system for its potential economic feasibility.

  14. Paired comparison estimates of willingness to accept versus contingent valuation estimates of willingness to pay

    Treesearch

    John B. Loomis; George Peterson; Patricia A. Champ; Thomas C. Brown; Beatrice Lucero

    1998-01-01

    Estimating empirical measures of an individual's willingness to accept that are consistent with conventional economic theory, has proven difficult. The method of paired comparison offers a promising approach to estimate willingness to accept. This method involves having individuals make binary choices between receiving a particular good or a sum of money....

  15. Adaptive rood pattern search for fast block-matching motion estimation.

    PubMed

    Nie, Yao; Ma, Kai-Kuang

    2002-01-01

    In this paper, we propose a novel and simple fast block-matching algorithm (BMA), called adaptive rood pattern search (ARPS), which consists of two sequential search stages: 1) initial search and 2) refined local search. For each macroblock (MB), the initial search is performed only once at the beginning in order to find a good starting point for the follow-up refined local search. By doing so, unnecessary intermediate search and the risk of being trapped into local minimum matching error points could be greatly reduced in long search case. For the initial search stage, an adaptive rood pattern (ARP) is proposed, and the ARP's size is dynamically determined for each MB, based on the available motion vectors (MVs) of the neighboring MBs. In the refined local search stage, a unit-size rood pattern (URP) is exploited repeatedly, and unrestrictedly, until the final MV is found. To further speed up the search, zero-motion prejudgment (ZMP) is incorporated in our method, which is particularly beneficial to those video sequences containing small motion contents. Extensive experiments conducted based on the MPEG-4 Verification Model (VM) encoding platform show that the search speed of our proposed ARPS-ZMP is about two to three times faster than that of the diamond search (DS), and our method even achieves higher peak signal-to-noise ratio (PSNR) particularly for those video sequences containing large and/or complex motion contents.
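
    A compact sketch of the two-stage search under stated assumptions (16×16 macroblocks, SAD matching cost, predicted MV taken from a neighboring block); the zero-motion prejudgment shortcut is omitted for brevity.

        import numpy as np

        def sad(cur, ref, bx, by, dx, dy, n=16):
            """Sum of absolute differences for one candidate displacement."""
            h, w = ref.shape
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or x + n > w or y + n > h:
                return np.inf                      # candidate outside the frame
            return np.abs(cur[by:by+n, bx:bx+n].astype(int)
                          - ref[y:y+n, x:x+n].astype(int)).sum()

        def arps(cur, ref, bx, by, pred_mv=(0, 0), n=16):
            """Adaptive rood pattern search: one adaptive rood step sized by
            the predicted MV, then unit-size rood refinement."""
            best, best_cost = (0, 0), sad(cur, ref, bx, by, 0, 0, n)
            arm = max(abs(pred_mv[0]), abs(pred_mv[1])) or 2
            for dx, dy in [(arm, 0), (-arm, 0), (0, arm), (0, -arm), pred_mv]:
                cost = sad(cur, ref, bx, by, dx, dy, n)
                if cost < best_cost:
                    best_cost, best = cost, (dx, dy)
            while True:                            # repeated unit-rood search
                cx, cy = best
                moved = False
                for dx, dy in [(cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)]:
                    cost = sad(cur, ref, bx, by, dx, dy, n)
                    if cost < best_cost:
                        best_cost, best, moved = cost, (dx, dy), True
                if not moved:
                    return best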

  16. Marijuana and Alcohol Use as Predictors of Academic Achievement: A Longitudinal Analysis Among Youth in the COMPASS Study.

    PubMed

    Patte, Karen A; Qian, Wei; Leatherdale, Scott T

    2017-05-01

    We tested the effect of initiating marijuana and alcohol use at varying frequencies on academic indices. In a sample of 26,475 grade 9-12 students with at least 2 years of linked longitudinal data from year 1 (Y1: 2012-2013), year 2 (Y2: 2013-2014), and year 3 (Y3: 2014-2015) of the COMPASS study, separate multinomial generalized estimating equations models tested the likelihood of responses to measures of academic goals, engagement, preparedness, and performance when shifting from never using alcohol or marijuana at baseline to using them at varying frequencies at follow-up. Students who began using alcohol or marijuana were less likely to attend class regularly, complete their homework, achieve high marks, and value good grades, relative to their abstaining peers. Changing from abstaining to rare/sporadic-to-weekly drinking or rare/sporadic marijuana use predicted aspirations to continue to all levels of higher education, and initiating weekly marijuana use increased the likelihood of college ambitions, while more regular marijuana use reduced the likelihood of wanting to pursue graduate/professional degrees over high school. The importance of delaying or preventing substance use is evident in associations with student performance and engagement. The influence on academic goals varied by substance and frequency of initiated use. © 2017, American School Health Association.

  17. Structure-preserving and rank-revealing QR-factorizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bischof, C.H.; Hansen, P.C.

    1991-11-01

    The rank-revealing QR-factorization (RRQR-factorization) is a special QR-factorization that is guaranteed to reveal the numerical rank of the matrix under consideration. This makes the RRQR-factorization a useful tool in the numerical treatment of many rank-deficient problems in numerical linear algebra. In this paper, a framework is presented for the efficient implementation of RRQR algorithms, in particular, for sparse matrices. A sparse RRQR-algorithm should seek to preserve the structure and sparsity of the matrix as much as possible while retaining the ability to capture safely the numerical rank. To this end, the paper proposes to compute an initial QR-factorization using a restricted pivoting strategy guarded by incremental condition estimation (ICE), and then applies the algorithm suggested by Chan and Foster to this QR-factorization. The column exchange strategy used in the initial QR factorization will exploit the fact that certain column exchanges do not change the sparsity structure, and compute a sparse QR-factorization that is a good approximation of the sought-after RRQR-factorization. Due to quantities produced by ICE, the Chan/Foster RRQR algorithm can be implemented very cheaply, thus verifying that the sought-after RRQR-factorization has indeed been computed. Experimental results on a model problem show that the initial QR-factorization is indeed very likely to produce RRQR-factorization.

  18. Intakes of culinary herbs and spices from a food frequency questionnaire evaluated against 28-days estimated records

    PubMed Central

    2011-01-01

    Background: Worldwide, herbs and spices are much used food flavourings. However, little data exist regarding actual dietary intake of culinary herbs and spices. We developed a food frequency questionnaire (FFQ) for the assessment of habitual diet the preceding year, with focus on phytochemical rich food, including herbs and spices. The aim of the present study was to evaluate the intakes of herbs and spices from the FFQ with estimates of intake from another dietary assessment method. Thus we compared the intake estimates from the FFQ with 28 days of estimated records of herb and spice consumption as a reference method. Methods: The evaluation study was conducted among 146 free living adults, who filled in the FFQ and 2-4 weeks later carried out 28 days recording of herb and spice consumption. The FFQ included a section with questions about 27 individual culinary herbs and spices, while the records were open ended records for recording of herbs and spice consumption exclusively. Results: Our study showed that the FFQ obtained slightly higher estimates of total intake of herbs and spices than the total intake assessed by the Herbs and Spice Records (HSR). The correlation between the two assessment methods with regard to total intake was good (r = 0.5), and the cross-classification suggests that the FFQ may be used to classify subjects according to total herb and spice intake. For the 8 most frequently consumed individual herbs and spices, the FFQ obtained good estimates of median frequency of intake for 2 herbs/spices, while good estimates of portion sizes were obtained for 4 out of 8 herbs/spices. Conclusions: Our results suggested that the FFQ was able to give good estimates of frequency of intake and portion sizes on group level for several of the most frequently used herbs and spices. The FFQ was only able to fairly rank subjects according to frequency of intake of the 8 most frequently consumed herbs and spices. Other studies are warranted to further explore the intakes of culinary spices and herbs. PMID:21575177

  19. Intakes of culinary herbs and spices from a food frequency questionnaire evaluated against 28-days estimated records.

    PubMed

    Carlsen, Monica H; Blomhoff, Rune; Andersen, Lene F

    2011-05-16

    Worldwide, herbs and spices are much used food flavourings. However, little data exist regarding actual dietary intake of culinary herbs and spices. We developed a food frequency questionnaire (FFQ) for the assessment of habitual diet the preceding year, with a focus on phytochemical-rich foods, including herbs and spices. The aim of the present study was to evaluate the intakes of herbs and spices from the FFQ against estimates of intake from another dietary assessment method. Thus we compared the intake estimates from the FFQ with 28 days of estimated records of herb and spice consumption as a reference method. The evaluation study was conducted among 146 free-living adults, who filled in the FFQ and 2-4 weeks later carried out 28 days of recording of herb and spice consumption. The FFQ included a section with questions about 27 individual culinary herbs and spices, while the records were open-ended records dedicated exclusively to herb and spice consumption. Our study showed that the FFQ obtained slightly higher estimates of total intake of herbs and spices than the total intake assessed by the Herbs and Spice Records (HSR). The correlation between the two assessment methods with regard to total intake was good (r = 0.5), and the cross-classification suggests that the FFQ may be used to classify subjects according to total herb and spice intake. For the 8 most frequently consumed individual herbs and spices, the FFQ obtained good estimates of median frequency of intake for 2 herbs/spices, while good estimates of portion sizes were obtained for 4 out of 8 herbs/spices. Our results suggested that the FFQ was able to give good estimates of frequency of intake and portion sizes at the group level for several of the most frequently used herbs and spices. The FFQ was only able to fairly rank subjects according to frequency of intake of the 8 most frequently consumed herbs and spices. Other studies are warranted to further explore the intakes of culinary spices and herbs.

  20. Infliximab use in Crohn's disease: impact on health care resources in the UK.

    PubMed

    Jewell, Derek P; Satsangi, Jack; Lobo, Alan; Probert, Christopher; Forbes, Alastair; Ghosh, Subrata; Shaffer, Jon; Frenz, Markus; Drummond, Hazel; Troy, Gill; Turner, Sue; Younge, Lisa; Evans, Lyn; Moosa, Mark; Rodgers-Gray, Barry; Buchan, Scot

    2005-10-01

    To quantify the impact of infliximab therapy on health care resource utilization in the UK. A retrospective audit was undertaken at seven centres in the UK, which reviewed patient notes for a period of 6 months before and 6 months after an initial infliximab infusion. Details of hospital admissions, outpatient visits, operations, diagnostic procedures, drug usage, and overall efficacy were collected. Results were compared for the two 6 month study periods. A total of 205 patients (62% female, median age 33 years) with moderate/severe Crohn's disease were audited. The majority of patients had chronic active disease (62%) and most received one infusion initially (72%). Clinicians rated 74% of responses as good to excellent and patients 72%. Most patients had concomitant immunosuppression (pre: 75%, post: 75%). Approximately half of the patients (45%) stopped taking steroids, with a further 34% having a dosage reduction. A fall of 1093 inpatient days was seen (1435 vs. 342) in the 6 months following infliximab administration. There were seven fewer operations, 33 fewer examinations under anaesthetic, and 99 fewer diagnostic procedures. Outpatient visits were similar pre- versus post- (555 vs. 534). The total reduction in direct costs amounted to an estimated £591,006. Three hundred and fifty-three infliximab infusions were administered at an estimated cost of £562,719. Thus, there was a net reduction of £28,287, or £137.98 per patient. Infliximab appears to be a potentially cost effective treatment for selected patients based on the reduced number of inpatient stays, examinations under anaesthetic, and diagnostic procedures over a 6 month period.

  1. Adaptation and psychometric assessment of the Hebrew version of the Recovery Promoting Relationships Scale (RPRS).

    PubMed

    Moran, Galia S; Zisman-Ilani, Yaara; Garber-Epstein, Paula; Roe, David

    2014-03-01

    Recovery is supported by relationships that are characterized by human centeredness, empowerment and a hopeful approach. The Recovery Promoting Relationships Scale (RPRS; Russinova, Rogers, & Ellison, 2006) assesses consumer-provider relationships from the consumer perspective. Here we present the adaptation and psychometric assessment of a Hebrew version of the RPRS. The RPRS was translated to Hebrew (RPRS-Heb) using multiple strategies to assure conceptual soundness. Then 216 mental health consumers were administered the RPRS-Heb as part of a larger project initiative implementing illness management and recovery intervention (IMR) in community settings. Psychometric testing included assessment of the factor structure, reliability, and validity using the Hope Scale, the Working Alliance Inventory, and the Recovery Assessment Scale. The RPRS-Heb factor structure replicated the two-factor structure found in the original scale with minor exceptions. Reliability estimates were good: Cronbach's alpha was 0.94 for the total scale, 0.93 for the Recovery-Promoting Strategies factor, and 0.86 for the Core Relationship factor. Concurrent validity was confirmed using the Working Alliance Scale (rp = .51, p < .001) and the Hope Scale (rp = .43, p < .001). Criterion validity was examined using the Recovery Assessment Scale (rp = .355, p < .05). The study yielded a 23-item RPRS-Heb version with a psychometrically sound factor structure, satisfactory reliability, and concurrent validity tested against the Hope, Alliance, and Recovery Assessment scales. Outcomes are discussed in the context of the original scale properties and a similar Dutch initiative. The RPRS-Heb can serve as a valuable tool for studying recovery promoting relationships with a Hebrew speaking population.
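    For readers unfamiliar with the reliability statistic quoted above, the Python sketch below computes Cronbach's alpha from a small respondents-by-items matrix; the data are invented for illustration, not the RPRS-Heb responses.

      import numpy as np

      # Invented 5 respondents x 5 items, Likert-style scores.
      X = np.array([[4, 5, 4, 3, 4],
                    [3, 3, 4, 3, 3],
                    [5, 5, 5, 4, 5],
                    [2, 3, 2, 2, 3],
                    [4, 4, 5, 4, 4]], float)

      k = X.shape[1]
      item_var = X.var(axis=0, ddof=1).sum()     # sum of item variances
      total_var = X.sum(axis=1).var(ddof=1)      # variance of total scores
      alpha = k / (k - 1) * (1 - item_var / total_var)
      print(round(alpha, 3))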

  2. The estimation of the rates of lead exchange between body compartments of smelter employees.

    PubMed

    Behinaein, Sepideh; Chettle, David R; Egden, Lesley M; McNeill, Fiona E; Norman, Geoff; Richard, Norbert; Stever, Susan

    2014-07-01

    The overwhelming proportion of the mass of lead (Pb) is stored in bone, and the residence time of Pb in bone is much longer than that in other tissues. Hence, in a metabolic model that we used to solve the differential equations governing the transfer of lead between body compartments, three main compartments are involved: blood (as a transfer compartment), cortical bone (tibia), and trabecular bone (calcaneus). There is a bidirectional connection between blood and the other two compartments. A grid-search chi-squared minimization method was used to estimate initial values of the lead transfer rates from tibia (λTB) and calcaneus (λCB) to blood for 209 smelter employees, whose bone lead measurements are available from 1994, 1999, and 2008 and whose blood lead levels are available from 1967 onwards (from once per month to once per year, depending on exposure history). These initial values of the kinetic parameters were then used to develop multivariate models expressing λTB and λCB as functions of employment time, age, body lead contents, and their interactions. We observed a significant decrease in the transfer rate of lead from bone to blood with increasing body lead contents. The model was tested by calculating the bone lead concentration in 1999 and 2008, and by comparing those values with the measured ones. A good agreement was found between the calculated and measured tibia/calcaneus lead values. Also, we found that the transfer rate of lead from tibia to blood can be expressed solely as a function of the cumulative blood lead index.
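    A minimal Python sketch of the three-compartment exchange described above, assuming SciPy; the transfer rates, intake, and elimination constant are invented placeholders, and the study's grid-search chi-squared fit of λTB and λCB to serial measurements is not reproduced.

      import numpy as np
      from scipy.integrate import solve_ivp

      lam_BT, lam_TB = 0.02, 0.0005   # blood->tibia, tibia->blood (1/day), assumed
      lam_BC, lam_CB = 0.03, 0.0010   # blood->calcaneus, calcaneus->blood, assumed
      elim = 0.025                    # elimination from blood (1/day), assumed

      def rhs(t, y, intake=1.0):
          B, Tb, Ca = y               # lead burdens in blood, tibia, calcaneus
          dB  = intake - (lam_BT + lam_BC + elim) * B + lam_TB * Tb + lam_CB * Ca
          dTb = lam_BT * B - lam_TB * Tb
          dCa = lam_BC * B - lam_CB * Ca
          return [dB, dTb, dCa]

      sol = solve_ivp(rhs, (0, 365 * 20), [0.0, 0.0, 0.0])
      print(sol.y[:, -1])             # burdens after 20 years of constant intake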

  3. Volunteering leads to rock-paper-scissors dynamics in a public goods game

    NASA Astrophysics Data System (ADS)

    Semmann, Dirk; Krambeck, Hans-Jürgen; Milinski, Manfred

    2003-09-01

    Collective efforts are a trademark of both insect and human societies. They are achieved through relatedness in the former and unknown mechanisms in the latter. The problem of achieving cooperation among non-kin has been described as the `tragedy of the commons', prophesying the inescapable collapse of many human enterprises. In public goods experiments, initial cooperation usually drops quickly to almost zero. It can be maintained by the opportunity to punish defectors or the need to maintain good reputation. Both schemes require that defectors are identified. Theorists propose that a simple but effective mechanism operates under full anonymity. With optional participation in the public goods game, `loners' (players who do not join the group), defectors and cooperators will coexist through rock-paper-scissors dynamics. Here we show experimentally that volunteering generates these dynamics in public goods games and that manipulating initial conditions can produce each predicted direction. If, by manipulating displayed decisions, it is pretended that defectors have the highest frequency, loners soon become most frequent, as do cooperators after loners and defectors after cooperators. On average, cooperation is perpetuated at a substantial level.

  4. 43 CFR Appendix I to Part 11 - Methods for Estimating the Areas of Ground Water and Surface Water Exposure During the...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the initial mixing distance, is estimated by: Cp = 25(Wi)/(T^0.7 Q), where Cp is the peak concentration... equation: Tp = 9.25×10^6 Wi/(QCp), where Tp is the time estimate, in hours, and Wi, Cp, and Q are defined above... downstream location, past the initial mixing distance, is estimated by: Cp = C(q)/(Q+ where Cp and Q are...
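    Plugging illustrative numbers into the two formulas excerpted above gives a feel for the screening calculation; the symbol meanings follow the regulation text, but the values below are placeholders, not a real release scenario (the records that follow are the same excerpt from later CFR editions).

      # Appendix I screening formulas, as excerpted above.
      Wi = 500.0    # quantity released, in the regulation's units (assumed value)
      Q  = 1200.0   # stream discharge (assumed value)
      T  = 24.0     # travel time to the downstream point, hours (assumed value)

      Cp = 25.0 * Wi / (T**0.7 * Q)   # peak concentration past initial mixing
      Tp = 9.25e6 * Wi / (Q * Cp)     # time estimate, in hours
      print(Cp, Tp)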

  5. 43 CFR Appendix I to Part 11 - Methods for Estimating the Areas of Ground Water and Surface Water Exposure During the...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... the initial mixing distance, is estimated by: Cp = 25(Wi)/(T^0.7 Q), where Cp is the peak concentration... equation: Tp = 9.25×10^6 Wi/(QCp), where Tp is the time estimate, in hours, and Wi, Cp, and Q are defined above... downstream location, past the initial mixing distance, is estimated by: Cp = C(q)/(Q+ where Cp and Q are...

  6. 43 CFR Appendix I to Part 11 - Methods for Estimating the Areas of Ground Water and Surface Water Exposure During the...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... the initial mixing distance, is estimated by: Cp = 25(Wi)/(T^0.7 Q), where Cp is the peak concentration... equation: Tp = 9.25×10^6 Wi/(QCp), where Tp is the time estimate, in hours, and Wi, Cp, and Q are defined above... downstream location, past the initial mixing distance, is estimated by: Cp = C(q)/(Q+ where Cp and Q are...

  7. 43 CFR Appendix I to Part 11 - Methods for Estimating the Areas of Ground Water and Surface Water Exposure During the...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... the initial mixing distance, is estimated by: Cp = 25(Wi)/(T^0.7 Q), where Cp is the peak concentration... equation: Tp = 9.25×10^6 Wi/(QCp), where Tp is the time estimate, in hours, and Wi, Cp, and Q are defined above... downstream location, past the initial mixing distance, is estimated by: Cp = C(q)/(Q+ where Cp and Q are...

  8. 43 CFR Appendix I to Part 11 - Methods for Estimating the Areas of Ground Water and Surface Water Exposure During the...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... the initial mixing distance, is estimated by: Cp = 25(Wi)/(T^0.7 Q), where Cp is the peak concentration... equation: Tp = 9.25×10^6 Wi/(QCp), where Tp is the time estimate, in hours, and Wi, Cp, and Q are defined above... downstream location, past the initial mixing distance, is estimated by: Cp = C(q)/(Q+ where Cp and Q are...

  9. The Robustness of Designs for Trials with Nested Data against Incorrect Initial Intracluster Correlation Coefficient Estimates

    ERIC Educational Resources Information Center

    Korendijk, Elly J. H.; Moerbeek, Mirjam; Maas, Cora J. M.

    2010-01-01

    In the case of trials with nested data, the optimal allocation of units depends on the budget, the costs, and the intracluster correlation coefficient. In general, the intracluster correlation coefficient is unknown in advance and an initial guess has to be made based on published values or subject matter knowledge. This initial estimate is likely…

  10. RefCNV: Identification of Gene-Based Copy Number Variants Using Whole Exome Sequencing.

    PubMed

    Chang, Lun-Ching; Das, Biswajit; Lih, Chih-Jian; Si, Han; Camalier, Corinne E; McGregor, Paul M; Polley, Eric

    2016-01-01

    With rapid advances in DNA sequencing technologies, whole exome sequencing (WES) has become a popular approach for detecting somatic mutations in oncology studies. The initial intent of WES was to characterize single nucleotide variants, but it was observed that the number of sequencing reads that mapped to a genomic region correlated with DNA copy number variants (CNVs). We propose a method, RefCNV, that uses a reference set to estimate the distribution of the coverage for each exon. The construction of the reference set includes an evaluation of the sources of variability in the coverage distribution. We observed that the processing steps had an impact on the coverage distribution. For each exon, we compared the observed coverage with the expected normal coverage. Thresholds for determining CNVs were selected to control the false-positive error rate. RefCNV prediction correlated significantly (r = 0.96-0.86) with CNV measured by digital polymerase chain reaction for MET (7q31), EGFR (7p12), or ERBB2 (17q12) in 13 tumor cell lines. The genome-wide CNV analysis showed a good overall correlation (Spearman's coefficient = 0.82) between RefCNV estimation and publicly available CNV data in the Cancer Cell Line Encyclopedia. RefCNV also showed better performance than three other CNV estimation methods in genome-wide CNV analysis.
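    The reference-set idea is simple to caricature: model each exon's coverage from normal references, then flag exons whose observed coverage is extreme. The Python sketch below uses invented Gaussian coverage and a z-score threshold; RefCNV's actual distribution modeling and threshold selection are more involved.

      import numpy as np

      rng = np.random.default_rng(1)
      ref = rng.normal(100, 10, size=(20, 500))   # 20 reference samples x 500 exons
      mu, sd = ref.mean(axis=0), ref.std(axis=0, ddof=1)

      obs = rng.normal(100, 10, size=500)         # test sample coverage
      obs[42] = 210.0                             # simulated amplification

      z = (obs - mu) / sd
      calls = np.where(np.abs(z) > 5)[0]          # threshold controls false positives
      print(calls)                                # expected to contain exon 42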

  11. Using frequency response functions to manage image degradation from equipment vibration in the Daniel K. Inouye Solar Telescope

    NASA Astrophysics Data System (ADS)

    McBride, William R.; McBride, Daniel R.

    2016-08-01

    The Daniel K. Inouye Solar Telescope (DKIST) will be the largest solar telescope in the world, providing a significant increase in the resolution of solar data available to the scientific community. Vibration mitigation is critical in long focal-length telescopes such as the Inouye Solar Telescope, especially when adaptive optics are employed to correct for atmospheric seeing. For this reason, a vibration error budget has been implemented. Initially, the FRFs for the various mounting points of ancillary equipment were estimated using finite element analysis (FEA) of the telescope structures. FEA is well documented and understood; the focus of this paper is on the methods involved in estimating a set of experimental (measured) transfer functions of the as-built telescope structure for the purpose of vibration management. Techniques to measure low-frequency single-input-single-output (SISO) frequency response functions (FRFs) between vibration source locations and image motion on the focal plane are described. The measurement equipment includes an instrumented inertial-mass shaker capable of operation down to 4 Hz along with seismic accelerometers. The measurement of vibration at frequencies below 10 Hz with good signal-to-noise ratio (SNR) requires several noise reduction techniques, including high-performance windows, noise averaging, tracking filters, and spectral estimation. These signal-processing techniques are described in detail.
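    One standard way to obtain such a SISO FRF with noise averaging is the H1 estimator, FRF = Sxy/Sxx, computed from Welch-averaged spectra. The Python sketch below, assuming SciPy, drives a stand-in low-pass "structure" with random force and recovers its response; the DKIST hardware chain, windows, and tracking filters are not modeled.

      import numpy as np
      from scipy import signal

      fs = 200.0
      t = np.arange(0, 60, 1 / fs)
      rng = np.random.default_rng(2)
      x = rng.standard_normal(t.size)                  # shaker force input
      b, a = signal.butter(2, 10, 'low', fs=fs)        # stand-in structural response
      y = signal.lfilter(b, a, x) + 0.1 * rng.standard_normal(t.size)

      f, Sxy = signal.csd(x, y, fs=fs, nperseg=2048)   # averaged cross-spectrum
      _, Sxx = signal.welch(x, fs=fs, nperseg=2048)    # input auto-spectrum
      H1 = Sxy / Sxx                                   # H1 FRF estimate
      print(abs(H1[np.argmin(abs(f - 5.0))]))          # |FRF| near 5 Hz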

  12. Transient Inverse Calibration of Site-Wide Groundwater Model to Hanford Operational Impacts from 1943 to 1996--Alternative Conceptual Model Considering Interaction with Uppermost Basalt Confined Aquifer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vermeul, Vincent R.; Cole, Charles R.; Bergeron, Marcel P.

    2001-08-29

    The baseline three-dimensional transient inverse model for the estimation of site-wide scale flow parameters, including their uncertainties, using data on the transient behavior of the unconfined aquifer system over the entire historical period of Hanford operations, has been modified to account for the effects of basalt intercommunication between the Hanford unconfined aquifer and the underlying upper basalt confined aquifer. Both the baseline and alternative conceptual models (ACM-1) considered only the groundwater flow component and corresponding observational data in the 3-D transient inverse calibration efforts. Subsequent efforts will examine both groundwater flow and transport. Comparisons of goodness-of-fit measures and parameter estimation results for the ACM-1 transient inverse calibrated model with those from previous site-wide groundwater modeling efforts illustrate that the new 3-D transient inverse model approach will strengthen the technical defensibility of the final model(s) and provide the ability to incorporate uncertainty in predictions related to both conceptual model and parameter uncertainty. These results, however, indicate that additional improvements are required to the conceptual model framework. An investigation was initiated at the end of this basalt inverse modeling effort to determine whether facies-based zonation would improve specific yield parameter estimation results (ACM-2). A description of the justification and methodology to develop this zonation is discussed.

  13. An Algorithm and R Program for Fitting and Simulation of Pharmacokinetic and Pharmacodynamic Data.

    PubMed

    Li, Jijie; Yan, Kewei; Hou, Lisha; Du, Xudong; Zhu, Ping; Zheng, Li; Zhu, Cairong

    2017-06-01

    Pharmacokinetic/pharmacodynamic link models are widely used in dose-finding studies. By applying such models, the results of initial pharmacokinetic/pharmacodynamic studies can be used to predict the potential therapeutic dose range. This knowledge can improve the design of later comparative large-scale clinical trials by reducing the number of participants and saving time and resources. However, the modeling process can be challenging, time consuming, and costly, even when using cutting-edge, powerful pharmacological software. Here, we provide a freely available R program for expediently analyzing pharmacokinetic/pharmacodynamic data, including data importation, parameter estimation, simulation, and model diagnostics. First, we explain the theory related to the establishment of the pharmacokinetic/pharmacodynamic link model. Subsequently, we present the algorithms used for parameter estimation and potential therapeutic dose computation. The implementation of the R program is illustrated by a clinical example. The software package is then validated by comparing the model parameters and the goodness-of-fit statistics generated by our R package with those generated by the widely used pharmacological software WinNonlin. The pharmacokinetic and pharmacodynamic parameters as well as the potential recommended therapeutic dose can be acquired with the R package. The validation process shows that the parameters estimated using our package are satisfactory. The R program developed and presented here provides pharmacokinetic researchers with a simple and easy-to-access tool for pharmacokinetic/pharmacodynamic analysis on personal computers.
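    The R package itself is not reproduced here; as a flavor of the kind of fit it automates, the Python sketch below fits a one-compartment, first-order-absorption PK model to invented concentration-time data by least squares.

      import numpy as np
      from scipy.optimize import curve_fit

      def conc(t, ka, ke, V_F, dose=100.0):
          # one-compartment model with first-order absorption (assumes ka != ke)
          return dose / V_F * ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))

      t = np.array([0.5, 1, 2, 4, 6, 8, 12, 24], float)   # hours post-dose
      c = np.array([1.8, 2.9, 3.6, 3.1, 2.4, 1.8, 1.0, 0.25])  # invented data

      (ka, ke, V_F), _ = curve_fit(conc, t, c, p0=[1.0, 0.2, 20.0])
      print(ka, ke, V_F)                                  # fitted parameters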

  14. Estimating the center of mass of a free-floating body in microgravity.

    PubMed

    Lejeune, L; Casellato, C; Pattyn, N; Neyt, X; Migeotte, P-F

    2013-01-01

    This paper addresses the issue of estimating the position of the center of mass (CoM) of a free-floating object of unknown mass distribution in microgravity using a stereoscopic imaging system. The method presented here is applied to an object of known mass distribution for validation purposes. In the context of a study of 3-dimensional ballistocardiography in microgravity, and the elaboration of a physical model of the cardiovascular adaptation to weightlessness, the hypothesis that the fluid shift towards the head of astronauts induces a significant shift of their CoM needs to be tested. The experiments were conducted during the 57th parabolic flight campaign of the European Space Agency (ESA). At the beginning of the microgravity phase, the object was given an initial translational and rotational velocity. A 3D point cloud corresponding to the object was then generated, to which a motion-based method inspired by rigid body physics was applied. Through simulations, the effects of the centroid-to-CoM distance and the number of frames of the sequence are investigated. In experimental conditions, considering the important residual accelerations of the airplane during the microgravity phases, CoM estimation errors (16 to 76 mm) were consistent with simulations. Overall, our results suggest that the method has a good potential for its later generalization to a free-floating human body in a weightless environment.

  15. Ozone Production and Control Strategies for Southern Taiwan

    NASA Astrophysics Data System (ADS)

    Shiu, C.; Liu, S.; Chang, C.; Chen, J.; Chou, C. C.; Lin, C.

    2006-12-01

    An observation-based modeling (OBM) approach is used to estimate the ozone production efficiency and production rate of O3 (P(O3)) in southern Taiwan. The approach can also provide an indirect estimate of the concentration of OH. Measured concentrations of two aromatic hydrocarbons, i.e. ethylbenzene and m,p-xylene, are used to estimate the degree of photochemical processing and the amounts of photochemically consumed NOx and NMHCs. In addition, a one-dimensional (1d) photochemical model is used to compare with the OBM results. The average ozone production efficiency during the field campaign in the Kaohsiung-Pingtung area in Fall 2003 is found to be about 5, comparable to previous works. The relationship of P(O3) with NOx is examined in detail and compared to previous studies. The OH concentrations derived from this approach are in fair agreement with values calculated from the 1d photochemical model. The relationship of total oxidants (e.g. O3+NO2) versus initial NOx and NMHCs suggests that reducing NMHCs is more effective in controlling total oxidants than reducing NOx. For O3 control, reducing NMHCs is even more effective than reducing NOx due to the NO titration effect. This observation-based approach provides a good alternative for understanding the production of ozone and formulating ozone control strategies in urban and suburban environments without measurements of peroxy radicals.

  16. Measuring the willingness to pay user fees for interpretive services at a national forest

    NASA Astrophysics Data System (ADS)

    Goldhor-Wilcock, Barbara Ashley

    An understanding of willingness to pay (WTP) for nonmarket environmental goods is useful for planning and policy, but difficult to determine. WTP for interpretive services was investigated using interviews with 361 participants in guided nature tours. Immediately after the tour, participants were asked to state their WTP for the tour. Responses were predominantly $5 (42%), $2 (14%), and $10 (13%). A predetermined amount was added to the open-ended (OE) WTP offer and respondents were asked if they were willing to pay the larger amount. Acceptance of the larger amount depended strongly on the relative increase over the initial WTP. If the increase was smaller than the initial offer, most respondents agreed, whereas if the increment was larger, most did not agree, suggesting that the initial offer was approximately half of the true WTP. The two WTP questions were used to define lower and upper bounds for each respondent's true WTP. A censored interval regression was used to estimate a WTP distribution with mean $11.30 and median $10.00. The median is twice that of the OE WTP, further suggesting that the OE response understated value by 50 percent. The estimated true WTP distribution and the OE WTP distribution have a weak, but statistically significant, dependence on some demographic, travel, and benefit variables, although these relations have negligible practical significance over the observed range of the variables. To evaluate whether the WTP amounts were based on a true economic tradeoff, respondents were asked to explain their WTP responses. For the initial OE question, 38% gave explanations that could be interpreted as an economic tradeoff, whereas 33% gave reasons that were clearly irrelevant. For the second, dichotomous choice (DC), question, 59% gave reasons suggesting a relevant economic judgement. A DC question may provoke apparently relevant answers, regardless of the underlying reasoning (a majority simply said "it was (not) worth it"). The DC reasoning may also be influenced by the preceding OE question, which provides a comparative base. Combining OE and DC questions in a single survey may encourage relevant reasoning, while also helping to identify the true WTP and consumer surplus.
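    The interval logic above translates directly into a likelihood: each respondent's true WTP lies between a lower and an upper bound, and a parametric distribution is fit by maximizing the probability mass assigned to those intervals. A minimal intercept-only Python sketch, with invented bounds and a normal WTP distribution standing in for the study's specification:

      import numpy as np
      from scipy.stats import norm
      from scipy.optimize import minimize

      lo = np.array([5, 5, 2, 10, 5, 10, 2, 5], float)      # OE offers (invented)
      hi = np.array([10, 15, 4, 20, 10, 15, 6, 10], float)  # follow-up bounds (invented)

      def nll(p):
          mu, sig = p
          pr = norm.cdf(hi, mu, sig) - norm.cdf(lo, mu, sig)
          return -np.sum(np.log(np.clip(pr, 1e-12, None)))

      res = minimize(nll, x0=[8.0, 3.0], method='Nelder-Mead')
      print(res.x)   # estimated mean and sd of the WTP distribution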

  17. Design and results of the ice sheet model initialisation experiments initMIP-Greenland: an ISMIP6 intercomparison

    NASA Astrophysics Data System (ADS)

    Goelzer, Heiko; Nowicki, Sophie; Edwards, Tamsin; Beckley, Matthew; Abe-Ouchi, Ayako; Aschwanden, Andy; Calov, Reinhard; Gagliardini, Olivier; Gillet-Chaulet, Fabien; Golledge, Nicholas R.; Gregory, Jonathan; Greve, Ralf; Humbert, Angelika; Huybrechts, Philippe; Kennedy, Joseph H.; Larour, Eric; Lipscomb, William H.; Le clec'h, Sébastien; Lee, Victoria; Morlighem, Mathieu; Pattyn, Frank; Payne, Antony J.; Rodehacke, Christian; Rückamp, Martin; Saito, Fuyuki; Schlegel, Nicole; Seroussi, Helene; Shepherd, Andrew; Sun, Sainan; van de Wal, Roderik; Ziemen, Florian A.

    2018-04-01

    Earlier large-scale Greenland ice sheet sea-level projections (e.g. those run during the ice2sea and SeaRISE initiatives) have shown that ice sheet initial conditions have a large effect on the projections and give rise to important uncertainties. The goal of this initMIP-Greenland intercomparison exercise is to compare, evaluate, and improve the initialisation techniques used in the ice sheet modelling community and to estimate the associated uncertainties in modelled mass changes. initMIP-Greenland is the first in a series of ice sheet model intercomparison activities within ISMIP6 (the Ice Sheet Model Intercomparison Project for CMIP6), which is the primary activity within the Coupled Model Intercomparison Project Phase 6 (CMIP6) focusing on the ice sheets. Two experiments for the large-scale Greenland ice sheet have been designed to allow intercomparison between participating models of (1) the initial present-day state of the ice sheet and (2) the response in two idealised forward experiments. The forward experiments serve to evaluate the initialisation in terms of model drift (forward run without additional forcing) and in response to a large perturbation (prescribed surface mass balance anomaly); they should not be interpreted as sea-level projections. We present and discuss results that highlight the diversity of data sets, boundary conditions, and initialisation techniques used in the community to generate initial states of the Greenland ice sheet. We find good agreement across the ensemble for the dynamic response to surface mass balance changes in areas where the simulated ice sheets overlap, but differences arise from the initial size of the ice sheet. The model drift in the control experiment is reduced for models that participated in earlier intercomparison exercises.

  18. Design and results of the ice sheet model initialisation experiments initMIP-Greenland: an ISMIP6 intercomparison

    DOE PAGES

    Goelzer, Heiko; Nowicki, Sophie; Edwards, Tamsin; ...

    2018-04-19

    Earlier large-scale Greenland ice sheet sea-level projections (e.g. those run during the ice2sea and SeaRISE initiatives) have shown that ice sheet initial conditions have a large effect on the projections and give rise to important uncertainties. Here, the goal of this initMIP-Greenland intercomparison exercise is to compare, evaluate, and improve the initialisation techniques used in the ice sheet modelling community and to estimate the associated uncertainties in modelled mass changes. initMIP-Greenland is the first in a series of ice sheet model intercomparison activities within ISMIP6 (the Ice Sheet Model Intercomparison Project for CMIP6), which is the primary activity within the Coupled Model Intercomparison Project Phase 6 (CMIP6) focusing on the ice sheets. Two experiments for the large-scale Greenland ice sheet have been designed to allow intercomparison between participating models of (1) the initial present-day state of the ice sheet and (2) the response in two idealised forward experiments. The forward experiments serve to evaluate the initialisation in terms of model drift (forward run without additional forcing) and in response to a large perturbation (prescribed surface mass balance anomaly); they should not be interpreted as sea-level projections. We present and discuss results that highlight the diversity of data sets, boundary conditions, and initialisation techniques used in the community to generate initial states of the Greenland ice sheet. We find good agreement across the ensemble for the dynamic response to surface mass balance changes in areas where the simulated ice sheets overlap, but differences arise from the initial size of the ice sheet. The model drift in the control experiment is reduced for models that participated in earlier intercomparison exercises.

  19. Design and results of the ice sheet model initialisation experiments initMIP-Greenland: an ISMIP6 intercomparison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goelzer, Heiko; Nowicki, Sophie; Edwards, Tamsin

    Earlier large-scale Greenland ice sheet sea-level projections (e.g. those run during the ice2sea and SeaRISE initiatives) have shown that ice sheet initial conditions have a large effect on the projections and give rise to important uncertainties. Here, the goal of this initMIP-Greenland intercomparison exercise is to compare, evaluate, and improve the initialisation techniques used in the ice sheet modelling community and to estimate the associated uncertainties in modelled mass changes. initMIP-Greenland is the first in a series of ice sheet model intercomparison activities within ISMIP6 (the Ice Sheet Model Intercomparison Project for CMIP6), which is the primary activity within the Coupled Model Intercomparison Project Phase 6 (CMIP6) focusing on the ice sheets. Two experiments for the large-scale Greenland ice sheet have been designed to allow intercomparison between participating models of (1) the initial present-day state of the ice sheet and (2) the response in two idealised forward experiments. The forward experiments serve to evaluate the initialisation in terms of model drift (forward run without additional forcing) and in response to a large perturbation (prescribed surface mass balance anomaly); they should not be interpreted as sea-level projections. We present and discuss results that highlight the diversity of data sets, boundary conditions, and initialisation techniques used in the community to generate initial states of the Greenland ice sheet. We find good agreement across the ensemble for the dynamic response to surface mass balance changes in areas where the simulated ice sheets overlap, but differences arise from the initial size of the ice sheet. The model drift in the control experiment is reduced for models that participated in earlier intercomparison exercises.

  20. Initial Serum Ferritin Predicts Number of Therapeutic Phlebotomies to Iron Depletion in Secondary Iron Overload

    PubMed Central

    Panch, Sandhya R.; Yau, Yu Ying; West, Kamille; Diggs, Karen; Sweigart, Tamsen; Leitman, Susan F.

    2014-01-01

    Background Therapeutic phlebotomy is increasingly used in patients with transfusional siderosis to mitigate organ injury associated with iron overload (IO). Laboratory response parameters and therapy duration are not well characterized in such patients. Methods We retrospectively evaluated 99 consecutive patients undergoing therapeutic phlebotomy for either transfusional IO (TIO, n=88; 76% had undergone hematopoietic transplantation) or non-transfusional indications (hyperferritinemia or erythrocytosis) (n=11). CBC, serum ferritin (SF), transferrin saturation, and transaminases were measured serially. The phlebotomy goal was an SF < 300 mcg/L. Results Mean SF prior to phlebotomy among TIO and nontransfusional subjects was 3,093 and 396 mcg/L, respectively. Transfusion burden in the TIO group was 94 ± 108 (mean ± SD) RBC units; about half completed therapy with 24 ± 23 phlebotomies (range 1–103). One-third were lost to follow-up. Overall, 15% had mild adverse effects, including headache, nausea, and dizziness, mainly during the first phlebotomy. Prior transfusion burden correlated poorly with initial ferritin and total number of phlebotomies to target (NPT) in the TIO group. However, NPT was strongly correlated with initial SF (R^2 = 0.8; p < 0.0001) in both the TIO and nontransfusional groups. ALT decreased significantly with serial phlebotomy in all groups (mean initial and final values, 61 and 39 U/L; p = 0.03). Conclusions Initial SF but not transfusion burden predicted the number of phlebotomies to target in patients with TIO. Despite good treatment tolerance, significant losses to follow-up were noted. Providing patients with an estimated phlebotomy number and follow-up duration, and thus a finite endpoint, may improve compliance. Hepatic function improved with iron off-loading. PMID:25209879
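    The reported strong relation between initial SF and NPT (R^2 = 0.8) is exactly the kind of relationship a simple regression-based planning tool could exploit. The Python sketch below fits and inverts such a line on invented data; the coefficients are illustrative, not the study's.

      import numpy as np

      sf  = np.array([800, 1500, 2500, 3500, 5000, 7000], float)  # initial SF, mcg/L
      npt = np.array([6, 12, 20, 28, 40, 55], float)              # phlebotomies to target

      slope, intercept = np.polyfit(sf, npt, 1)
      predict = lambda s: slope * s + intercept
      print(round(predict(3000)))   # estimated phlebotomies for SF = 3000 mcg/L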

  1. Fracture in Phenolic Impregnated Carbon Ablator

    NASA Technical Reports Server (NTRS)

    Agrawal, Parul; Chavez-Garcia, Jose; Pham, John

    2013-01-01

    This paper describes the development of a novel technique to understand the failure mechanisms inside thermal protection materials. The focus of this research is on the class of materials known as phenolic impregnated carbon ablators, which flew successfully on the Stardust spacecraft and were chosen as the thermal protection system material for the Mars Science Laboratory and SpaceX Dragon spacecraft. Although these materials have good thermal properties, structurally they are weak. To understand failure mechanisms in carbon ablators, fracture tests were performed on FiberForm(Registered TradeMark) (precursor), virgin, and charred ablator materials. Several samples of these materials were tested to investigate failure mechanisms at a microstructural scale. Stress-strain data were obtained simultaneously to estimate the tensile strength and toughness. It was observed that cracks initiated and grew in the FiberForm when a critical stress limit was reached such that the carbon fibers separated from the binder. However, for both virgin and charred carbon ablators, crack initiation and growth occurred in the matrix (phenolic) phase. Both virgin and charred carbon ablators showed greater strength values compared with FiberForm samples, confirming that the presence of the porous matrix helps in absorbing the fracture energy.

  2. Manned Spacecraft Requirements for Materials and Processes

    NASA Technical Reports Server (NTRS)

    Vaughn, Timothy P.

    2006-01-01

    A major cause of project failure can be attributed to an emphasized focus on end products and inadequate attention to resolving development risks during the initial phases of a project. The initial phases of a project, which we will call the "study period", are critical to determining project scope and costs, and can make or break most projects. If the requirements are not defined adequately, how can the scope be determined, how can the costs of the entire project be effectively estimated, and how can the risk to project success be accurately assessed? Using the proper material specifications and standards and incorporating them in the design process should be considered crucial to the technical success of a project and, just as importantly, to its cost and schedule success. This paper will intertwine several important aspects or considerations for project success: 1) characteristics of a "good material requirement"; 2) linking material requirements to the implementation of "Design for Manufacturing" techniques; and 3) the importance of decomposing materials requirements during the study/development phase to mitigate project risk for the maturation of technologies before the building of hardware.

  3. Hemodynamics-Driven Deposition of Intraluminal Thrombus in Abdominal Aortic Aneurysms

    PubMed Central

    Di Achille, P.; Tellides, G.; Humphrey, J.D.

    2016-01-01

    Accumulating evidence suggests that intraluminal thrombus plays many roles in the natural history of abdominal aortic aneurysms. There is, therefore, a pressing need for computational models that can describe and predict the initiation and progression of thrombus in aneurysms. In this paper, we introduce a phenomenological metric for thrombus deposition potential and use hemodynamic simulations based on medical images from six patients to identify best-fit values of the two key model parameters. We then introduce a shape optimization method to predict the associated radial growth of the thrombus into the lumen based on the expectation that thrombus initiation will create a thrombogenic surface, which in turn will promote growth until increasing hemodynamically induced frictional forces prevent any further cell or protein deposition. Comparisons between predicted and actual intraluminal thrombus in the six patient-specific aneurysms suggest that this phenomenological description provides a good first estimate of thrombus deposition. We submit further that, because the biologically active region of the thrombus appears to be confined to a thin luminal layer, predictions of morphology alone may be sufficient to inform fluid-solid-growth models of aneurysmal growth and remodeling. PMID:27569676

  4. Gender, health, and initiation of breastfeeding.

    PubMed

    Colodro-Conde, Lucía; Limiñana-Gras, Rosa M; Sánchez-López, M Pilar; Ordoñana, Juan R

    2015-01-01

    The aim of this study was to explore the associations of health, gender, and motherhood with decisions about breastfeeding. The sample consisted of 265 pregnant women (mean age: 32.34, SD: 4.01 years) who were recruited in healthcare centers and hospitals in southeast Spain between 2010 and 2011. Mental health was measured by the 12-Item General Health Questionnaire and gender by the Conformity to Feminine Norms Inventory. Women in our sample showed a higher conformity to gender norms than women surveyed in the adaptation of the inventory to the Spanish population (t = 11.25, p < 0.001, effect estimate (Cohen's d) = 0.59). After adjustment for covariates, women who exclusively breastfed did not differ significantly in their conformity to gender norms from those who used partial breastfeeding or bottle feeding. Although good overall, the mental health of the expectant mothers in our sample was worse than that of women aged 15-44 years in the Spanish National Health Survey (t = 2.96, p < 0.001, d = 0.26). Those who partially breastfed had significantly better mental health values. Gender norms were modulators in a model of factors related to initiation of breastfeeding. This study provides information about health and the social construction of gender norms.

  5. Model for economic evaluation of high energy gas fracturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engi, D.

    1984-05-01

    The HEGF/NPV model has been developed and adapted for interactive microcomputer calculations of the economic consequences of reservoir stimulation by high energy gas fracturing (HEGF) in naturally fractured formations. This model makes use of three individual models: a model of the stimulated reservoir, a model of the gas flow in this reservoir, and a model of the discounted expected net cash flow (net present value, or NPV) associated with the enhanced gas production. Nominal values of the input parameters, based on observed data and reasonable estimates, are used to calculate the initial expected increase in the average daily rate of production resulting from the Meigs County HEGF stimulation experiment. Agreement with the observed initial increase in rate is good. On the basis of this calculation, production from the Meigs County Well is not expected to be profitable, but the HEGF/NPV model probably provides conservative results. Furthermore, analyses of the sensitivity of the expected NPV to variations in the values of certain reservoir parameters suggest that the use of HEGF stimulation in somewhat more favorable formations is potentially profitable. 6 references, 4 figures, 3 tables.
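    The NPV piece of the model is ordinary discounted cash flow. The Python sketch below replaces the reservoir and gas-flow sub-models with an assumed exponentially declining production increment; the discount rate, gas price, and stimulation cost are placeholders, not the report's inputs.

      rate = 0.10                    # annual discount rate, assumed
      price = 3.0                    # gas price per Mcf, assumed
      cost0 = 150_000.0              # stimulation cost, assumed

      def npv(q0, decline, years):
          # q0: first-year incremental production (Mcf), exponential decline
          cash = [q0 * (1 - decline) ** t * price for t in range(years)]
          return sum(c / (1 + rate) ** (t + 1) for t, c in enumerate(cash)) - cost0

      print(round(npv(q0=40_000, decline=0.15, years=10)))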

  6. Localization and Quantification of Trace-gas Fugitive Emissions Using a Portable Optical Spectrometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Eric; Teng, Chu; van Kessel, Theodore

    We present a portable optical spectrometer for fugitive emissions monitoring of methane (CH4). The sensor operation is based on tunable diode laser absorption spectroscopy (TDLAS), using a 5 cm open path design, and targets the 2ν3 R(4) CH4 transition at 6057.1 cm^-1 (1651 nm) to avoid cross-talk with common interfering atmospheric constituents. Sensitivity analysis indicates a normalized precision of 2.0 ppmv·Hz^-1/2, corresponding to a noise-equivalent absorption (NEA) of 4.4×10^-6 Hz^-1/2 and a minimum detectable absorption (MDA) coefficient of α_min = 8.8×10^-7 cm^-1·Hz^-1/2. Our TDLAS sensor is deployed at the Methane Emissions Technology Evaluation Center (METEC) at Colorado State University (CSU) for initial demonstration of single-sensor based source localization and quantification of CH4 fugitive emissions. The TDLAS sensor is concurrently deployed with a customized chemi-resistive metal-oxide (MOX) sensor for accuracy benchmarking, demonstrating good visual correlation of the concentration time-series. Initial angle-of-arrival (AOA) results will be shown, and development towards source magnitude estimation will be described.
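    The quoted sensitivity figures are internally consistent, as a one-line check shows: the minimum detectable absorption coefficient times the 5 cm path length reproduces the noise-equivalent absorption.

      L = 5.0              # open path length, cm
      alpha_min = 8.8e-7   # MDA coefficient, cm^-1 Hz^-1/2 (from the abstract)
      print(alpha_min * L) # 4.4e-6 Hz^-1/2, matching the quoted NEA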

  7. Toughened epoxy resin system and a method thereof

    DOEpatents

    Janke, C.J.; Dorsey, G.F.; Havens, S.J.; Lopata, V.J.

    1998-03-10

    Mixtures of epoxy resins with cationic initiators are curable under high energy ionizing radiation such as electron beam radiation, X-ray radiation, and gamma radiation. The composition of this process consists of an epoxy resin, a cationic initiator such as a diaryliodonium or triarylsulfonium salt of specific anions, and a toughening agent such as a thermoplastic, hydroxy-containing thermoplastic oligomer, epoxy-containing thermoplastic oligomer, reactive flexibilizer, rubber, elastomer, or mixture thereof. Cured compositions have high glass transition temperatures, good mechanical properties, and good toughness. These properties are comparable to those of similar thermally cured epoxies.

  8. Toughened epoxy resin system and a method thereof

    DOEpatents

    Janke, Christopher J.; Dorsey, George F.; Havens, Stephen J.; Lopata, Vincent J.

    1998-01-01

    Mixtures of epoxy resins with cationic initiators are curable under high energy ionizing radiation such as electron beam radiation, X-ray radiation, and gamma radiation. The composition of this process consists of an epoxy resin, a cationic initiator such as a diaryliodonium or triarylsulfonium salt of specific anions, and a toughening agent such as a thermoplastic, hydroxy-containing thermoplastic oligomer, epoxy-containing thermoplastic oligomer, reactive flexibilizer, rubber, elastomer, or mixture thereof. Cured compositions have high glass transition temperatures, good mechanical properties, and good toughness. These properties are comparable to those of similar thermally cured epoxies.

  9. Clean Energy Technology Incubator Initiative Launched in Texas

    Science.gov Websites

    Areas of interest include fuel cells, energy conservation, clean energy-related information technology, and end-use consumer products. For more information contact: Kerry Masson, 303... If a candidate's information looks like a good fit for the clean energy initiative, ATI will help the candidate refine its...

  10. Survival and Neurologic Outcome After Out-of-hospital Cardiac Arrest. Results of the Andalusian Out-of-hospital Cardiopulmonary Arrest Registry.

    PubMed

    Rosell Ortiz, Fernando; Mellado Vergel, Francisco; López Messa, Juan Bautista; Fernández Valle, Patricia; Ruiz Montero, María M; Martínez Lara, Manuela; Vergara Pérez, Santiago; Vivar Díaz, Itziar; Caballero García, Auxiliadora; García Alcántara, Ángel; García Del Águila, Javier

    2016-05-01

    There is a paucity of data on prehospital cardiac arrest in Spain. Our aim was to describe the incidence, patient characteristics, and outcomes of out-of-hospital emergency care for this event. We conducted a retrospective analysis of a prospective registry of cardiopulmonary arrest handled by an out-of-hospital emergency service between January 2008 and December 2012. The registry included all patients considered to have a cardiac etiology as the cause of arrest, with a descriptive analysis performed of general patient characteristics and factors associated with good neurologic outcome at hospital discharge. A total of 4072 patients were included, with an estimated incidence of 14.6 events per 100000 inhabitants and year; 72.6% were men. The mean age was 62.0 ± 15.8 years, 58.6% of cases occurred in the home, 25% of patients had initial defibrillable rhythm, 28.8% of patients arrived with a pulse at the hospital (58.3% of the group with defibrillable rhythm), and 10.2% were discharged with good neurologic outcome. The variables associated with this recovery were: witnessed arrest (P=.04), arrest witnessed by emergency team (P=.005), previous life support (P=.04), initial defibrillable rhythm (P=.0001), and performance of a coronary interventional procedure (P=.0001). More than half the cases of sudden cardiac arrest occur at home, and the population was found to be relatively young. Although recovery was satisfactory in 1 out of every 10 patients, there is a need for improvement in the phase prior to emergency team arrival. Coronary interventional procedures had an impact on patient prognosis. Copyright © 2015 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  11. Estimating potential Engelmann spruce seed production on the Fraser Experimental Forest, Colorado

    Treesearch

    Robert R. Alexander; Carleton B. Edminster; Ross K. Watkins

    1986-01-01

    Two good, three heavy, and two bumper spruce seed crops were produced during a 15-year period. There was considerable variability in seed crops, however. Not all locations produced good to bumper seed crops when overall yearly ratings averaged good or better; conversely, some locations produced bumper seed crops in 3 or more years. Mathematical relationships,...

  12. Impacts of Good Practices on Cognitive Development, Learning Orientations, and Graduate Degree Plans during the First Year of College

    ERIC Educational Resources Information Center

    Cruce, Ty M.; Wolniak, Gregory C.; Seifert, Tricia A.; Pascarella, Ernest T.

    2006-01-01

    This study estimated separately the unique effects of three dimensions of good practice and the global effects of a composite measure of good practices on the cognitive development, orientations to learning, and educational aspirations of students during their first year of college. Analyses of longitudinal data from a representative sample of…

  13. Answering the Call for Model-Relevant Observations of Aerosols and Clouds

    NASA Technical Reports Server (NTRS)

    Redemann, J.; Shinozuka, Y.; Kacenelenbogen, M.; Segal-Rozenhaimer, M.; LeBlanc, S.; Vaughan, M.; Stier, P.; Schutgens, N.

    2017-01-01

    We describe a technique for combining multiple A-Train aerosol data sets, namely MODIS spectral AOD (aerosol optical depth), OMI AAOD (absorption aerosol optical depth), and CALIOP aerosol backscatter retrievals (hereafter referred to as MOC retrievals), to estimate full spectral sets of aerosol radiative properties, and ultimately to calculate the 3-D distribution of direct aerosol radiative effects (DARE). We present MOC results using almost two years of data collected in 2007 and 2008, and show comparisons of the aerosol radiative property estimates to collocated AERONET retrievals. We compare the spatio-temporal distribution of the MOC retrievals and MOC-based calculations of seasonal clear-sky DARE to values derived from four models that participated in the Phase II AeroCom model intercomparison initiative. Comparisons of seasonal aerosol properties to AeroCom Phase II results show generally good agreement; the best agreement with forcing results at TOA is found with GMI-MerraV3. We discuss the challenges in making observations that really address deficiencies in models, with some of the more relevant aspects being representativeness of the observations for climatological states, and whether a given model-measurement difference reflects a sampling or a model error.

  14. T Cell Receptor Excision Circle (TREC) Monitoring after Allogeneic Stem Cell Transplantation; a Predictive Marker for Complications and Clinical Outcome

    PubMed Central

    Gaballa, Ahmed; Sundin, Mikael; Stikvoort, Arwen; Abumaree, Muhamed; Uzunel, Mehmet; Sairafi, Darius; Uhlin, Michael

    2016-01-01

    Allogeneic hematopoietic stem cell transplantation (HSCT) is a well-established treatment modality for a variety of malignant diseases as well as for inborn errors of the metabolism or immune system. Regardless of disease origin, good clinical effects are dependent on proper immune reconstitution. T cells are responsible for both the beneficial graft-versus-leukemia (GVL) effect against malignant cells and protection against infections. The immune recovery of T cells relies initially on peripheral expansion of mature cells from the graft and later on the differentiation and maturation from donor-derived hematopoietic stem cells. The formation of new T cells occurs in the thymus and as a byproduct, T cell receptor excision circles (TRECs) are released upon rearrangement of the T cell receptor. Detection of TRECs by PCR is a reliable method for estimating the amount of newly formed T cells in the circulation and, indirectly, for estimating thymic function. Here, we discuss the role of TREC analysis in the prediction of clinical outcome after allogeneic HSCT. Due to the pivotal role of T cell reconstitution we propose that TREC analysis should be included as a key indicator in the post-HSCT follow-up. PMID:27727179

  15. Improved Methodology for Surface and Atmospheric Soundings, Error Estimates, and Quality Control Procedures: the AIRS Science Team Version-6 Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Blaisdell, John; Iredell, Lena

    2014-01-01

    The AIRS Science Team Version-6 AIRS/AMSU retrieval algorithm is now operational at the Goddard DISC. AIRS Version-6 level-2 products are generated near real-time at the Goddard DISC and all level-2 and level-3 products are available starting from September 2002. This paper describes some of the significant improvements in retrieval methodology contained in the Version-6 retrieval algorithm compared to that previously used in Version-5. In particular, the AIRS Science Team made major improvements with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the cloud clearing and retrieval procedures; and 3) derive error estimates and use them for Quality Control. Significant improvements have also been made in the generation of cloud parameters. In addition to the basic AIRS/AMSU mode, Version-6 also operates in an AIRS Only (AO) mode which produces results almost as good as those of the full AIRS/AMSU mode. This paper also demonstrates the improvements of some AIRS Version-6 and Version-6 AO products compared to those obtained using Version-5.

  16. Estimating Total-Test Scores from Partial Scores in a Matrix Sampling Design.

    ERIC Educational Resources Information Center

    Sachar, Jane; Suppes, Patrick

    1980-01-01

    The present study compared six methods, two of which utilize the content structure of items, to estimate total-test scores using 450 students and 60 items of the 110-item Stanford Mental Arithmetic Test. Three methods yielded fairly good estimates of the total-test score. (Author/RL)

  17. Surface plasmon resonance sensor for antibiotics detection based on photo-initiated polymerization molecularly imprinted array.

    PubMed

    Luo, Qiaohui; Yu, Neng; Shi, Chunfei; Wang, Xiaoping; Wu, Jianmin

    2016-12-01

    A surface plasmon resonance (SPR) sensor combined with a nanoscale molecularly imprinted polymer (MIP) film as the recognition element was developed for selective detection of the antibiotic ciprofloxacin (CIP). The MIP film on the SPR sensor chip was prepared by an in situ photo-initiated polymerization method, which has the advantages of short polymerization time, controllable thickness, and good uniformity. The surface wettability and thickness of the MIP film on the SPR sensor chip were characterized by static contact angle measurement and stylus profiler. The MIP-SPR sensor exhibited high selectivity, sensitivity, and good stability for ciprofloxacin. The imprinting factors of the MIP-SPR sensor for ciprofloxacin and its structural analogue ofloxacin were 2.63 and 3.80, much higher than those for azithromycin, dopamine, and penicillin. The SPR response had a good linear relation with CIP concentration over the range 10^-11 to 10^-7 mol L^-1. The MIP-SPR sensor also showed good repeatability and stability during cyclic detection. On the basis of the photo-initiated polymerization method, a surface plasmon resonance imaging (SPRi) chip modified with three types of MIP sensing spots was fabricated. The MIPs-SPRi sensor shows different response patterns to ciprofloxacin and azithromycin, revealing the ability to recognize different antibiotic molecules. Copyright © 2016 Elsevier B.V. All rights reserved.
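    A linear response over four decades of concentration implies a calibration on log-concentration. The Python sketch below fits and inverts such a curve; the angle-shift readings are invented, not the paper's measurements.

      import numpy as np

      conc = np.array([1e-11, 1e-10, 1e-9, 1e-8, 1e-7])       # mol/L standards
      shift = np.array([0.012, 0.031, 0.052, 0.070, 0.091])   # SPR response (invented)

      slope, intercept = np.polyfit(np.log10(conc), shift, 1)
      unknown = 0.060                                         # reading for an unknown
      est = 10 ** ((unknown - intercept) / slope)
      print(f"estimated CIP concentration ~ {est:.2e} mol/L")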

  18. Numerical modeling of axi-symmetrical cold forging process by ``Pseudo Inverse Approach''

    NASA Astrophysics Data System (ADS)

    Halouani, A.; Li, Y. M.; Abbes, B.; Guo, Y. Q.

    2011-05-01

    The incremental approach is widely used for forging process modeling; it gives good strain and stress estimation, but it is time consuming. A fast Inverse Approach (IA) has been developed for axi-symmetric cold forging modeling [1-2]. This approach makes maximal use of the knowledge of the final part's shape, and the assumptions of proportional loading and simplified tool actions make the IA simulation very fast. The IA has proved very useful for tool design and optimization because of its rapidity and good strain estimation. However, the assumptions mentioned above cannot provide good stress estimation, because the loading history is neglected. A new approach called the "Pseudo Inverse Approach" (PIA) was proposed by Batoz, Guo et al. [3] for sheet forming modeling, which keeps the IA's advantages but gives good stress estimation by taking the loading history into consideration. Our aim in this paper is to adapt the PIA to cold forging modeling. The main developments in the PIA are summarized as follows: a few intermediate configurations are generated for the given tool positions to account for the deformation history; the strain increment is calculated by the inverse method between the previous and current configurations; and an incremental algorithm of plastic integration is used in the PIA instead of the total constitutive law used in the IA. An example is used to show the effectiveness and limitations of the PIA for cold forging process modeling.

  19. Estimating time since infection in early homogeneous HIV-1 samples using a poisson model

    PubMed Central

    2010-01-01

    Background The occurrence of a genetic bottleneck in HIV sexual or mother-to-infant transmission has been well documented. This results in a majority of new infections being homogeneous, i.e., initiated by a single genetic strain. Early after infection, prior to the onset of the host immune response, the viral population grows exponentially. In this simple setting, an approach for estimating evolutionary and demographic parameters based on comparison of diversity measures is a feasible alternative to the existing Bayesian methods (e.g., BEAST), which are instead based on the simulation of genealogies. Results We have devised a web tool that analyzes genetic diversity in acutely infected HIV-1 patients by comparing it to a model of neutral growth. More specifically, we consider a homogeneous infection (i.e., initiated by a unique genetic strain) prior to the onset of host-induced selection, where we can assume a random accumulation of mutations. Previously, we have shown that such a model successfully describes about 80% of sexual HIV-1 transmissions provided the samples are drawn early enough in the infection. Violation of the model is an indicator of either a heterogeneous infection or the initiation of selection. Conclusions When the underlying assumptions of our model (homogeneous infection prior to selection and fast exponential growth) are met, we are in a very particular scenario in which we can use a forward approach (instead of the backwards-in-time approach of coalescent methods). This allows for more computationally efficient methods to derive the time since the most recent common ancestor. Furthermore, the tool performs statistical tests on the Hamming distance frequency distribution, and outputs summary statistics (mean of the best-fitting Poisson distribution, goodness-of-fit p-value, etc.). The tool runs within minutes and can readily accommodate the tens of thousands of sequences generated through new ultradeep pyrosequencing technologies. The tool is available on the LANL website. PMID:20973976
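
    The core computation is simple enough to sketch: fit a Poisson distribution to the Hamming distance frequencies, test the fit, and convert the Poisson mean into a time estimate. The snippet below uses invented distances and illustrative rate parameters (the mutation rate, sequence length and generation time are assumptions, not the tool's calibrated values); pairwise distances accumulate along two lineages, hence the factor of two.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical pairwise Hamming distances from one acute infection.
    hd = np.array([0, 1, 1, 2, 2, 2, 3, 3, 4, 1, 2, 0, 3, 2, 1])
    lam = hd.mean()                              # MLE of the Poisson mean

    # Goodness of fit: observed counts vs. Poisson expectations.
    k = np.arange(hd.max() + 1)
    observed = np.array([(hd == i).sum() for i in k])
    expected = stats.poisson.pmf(k, lam) * len(hd)
    expected *= observed.sum() / expected.sum()  # match totals for the test
    chi2, p = stats.chisquare(observed, expected)

    # Rough time since the MRCA (illustrative rates, not the tool's values).
    mu_per_site = 2.16e-5    # HIV-1 point mutation rate per site per generation
    seq_len = 2600           # length of the sequenced region (assumed)
    gen_days = 2.0           # generation time in days (assumed)
    t_days = lam / (2 * mu_per_site * seq_len) * gen_days
    print(f"lambda={lam:.2f}, GOF p={p:.2f}, ~{t_days:.0f} days since MRCA")
    ```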

  20. 14 CFR 16.21 - Pre-complaint resolution.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... complaint certifies that substantial and reasonable good faith efforts to resolve the disputed matter... person directly and substantially affected by the alleged noncompliance shall initiate and engage in good faith efforts to resolve the disputed matter informally with those individuals or entities believed...

  1. High performance UV and thermal cure hybrid epoxy adhesive

    NASA Astrophysics Data System (ADS)

    Chen, C. F.; Iwasaki, S.; Kanari, M.; Li, B.; Wang, C.; Lu, D. Q.

    2017-06-01

    A new type of one-component, UV- and thermally curable hybrid epoxy adhesive was successfully developed. The hybrid epoxy adhesive is a completely initiator-free composition: neither a photo-initiator nor a thermal initiator is contained. The hybrid adhesive is mainly composed of a specially designed liquid bismaleimide, a partially acrylated epoxy resin, an acrylic monomer, an epoxy resin and a latent curing agent. Its UV-light and thermal cure behavior was studied by FT-IR spectroscopy and FT-Raman spectroscopy. Adhesive samples cured under UV-only, thermal-only and UV + thermal cure conditions were investigated. From the calculated conversion rate of double bonds in both the acrylic component and the maleimide compound, satisfactory light curability of the hybrid epoxy adhesive was confirmed quantitatively. The investigation results also showed that its UV-cure components, the acrylic and the bismaleimide, possess good thermal curability as well. The initiator-free hybrid epoxy adhesive showed satisfactory UV curability, good thermal curability and high adhesion performance.

  2. A Model For Rapid Estimation of Economic Loss

    NASA Astrophysics Data System (ADS)

    Holliday, J. R.; Rundle, J. B.

    2012-12-01

    One of the loftier goals in seismic hazard analysis is the creation of an end-to-end earthquake prediction system: a "rupture to rafters" work flow that takes a prediction of fault rupture, propagates it with a ground shaking model, and outputs a damage or loss profile at a given location. So far, the initial prediction of an earthquake rupture (either as a point source or a fault system) has proven to be the most difficult and least solved step in this chain. However, this may soon change. The Collaboratory for the Study of Earthquake Predictability (CSEP) has amassed a suite of earthquake source models for assorted testing regions worldwide. These models are capable of providing rate-based forecasts for earthquake (point) sources over a range of time horizons. Furthermore, these rate forecasts can be easily refined into probabilistic source forecasts. While it's still difficult to fully assess the "goodness" of each of these models, progress is being made: new evaluation procedures are being devised and earthquake statistics continue to accumulate. The scientific community appears to be heading towards a better understanding of rupture predictability. Ground shaking mechanics are better understood, and many different sophisticated models exist. While these models tend to be computationally expensive and often regionally specific, they do a good job at matching empirical data. It is perhaps time to start addressing the third step in the seismic hazard prediction system. We present a model for rapid economic loss estimation using ground motion (PGA or PGV) and socioeconomic measures as its input. We show that the model can be calibrated on a global scale and applied worldwide. We also suggest how the model can be improved and generalized to non-seismic natural disasters such as hurricanes and severe wind storms.
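
    The abstract does not give the model's functional form. A minimal sketch of this kind of calibration, assuming a log-linear dependence of loss on ground motion and on a socioeconomic exposure proxy, with every number invented, might look like this:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical calibration table: PGA (g), a normalized socioeconomic
    # exposure proxy, and observed log10 economic loss (USD); all invented.
    pga = np.array([0.05, 0.10, 0.20, 0.35, 0.50, 0.65])
    exposure = np.array([0.3, 0.8, 0.5, 0.9, 0.6, 1.0])
    log_loss = np.array([5.1, 6.8, 6.5, 8.2, 7.6, 9.0])

    X = np.column_stack([np.log10(pga), exposure])
    model = LinearRegression().fit(X, log_loss)

    # Rapid estimate for a new event at PGA = 0.30 g, exposure = 0.7.
    new = np.array([[np.log10(0.30), 0.7]])
    print(f"predicted loss: 10^{model.predict(new)[0]:.1f} USD")
    ```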

  3. Using Socioeconomic Data to Calibrate Loss Estimates

    NASA Astrophysics Data System (ADS)

    Holliday, J. R.; Rundle, J. B.

    2013-12-01

    One of the loftier goals in seismic hazard analysis is the creation of an end-to-end earthquake prediction system: a "rupture to rafters" work flow that takes a prediction of fault rupture, propagates it with a ground shaking model, and outputs a damage or loss profile at a given location. So far, the initial prediction of an earthquake rupture (either as a point source or a fault system) has proven to be the most difficult and least solved step in this chain. However, this may soon change. The Collaboratory for the Study of Earthquake Predictability (CSEP) has amassed a suite of earthquake source models for assorted testing regions worldwide. These models are capable of providing rate-based forecasts for earthquake (point) sources over a range of time horizons. Furthermore, these rate forecasts can be easily refined into probabilistic source forecasts. While it's still difficult to fully assess the "goodness" of each of these models, progress is being made: new evaluation procedures are being devised and earthquake statistics continue to accumulate. The scientific community appears to be heading towards a better understanding of rupture predictability. Ground shaking mechanics are better understood, and many different sophisticated models exist. While these models tend to be computationally expensive and often regionally specific, they do a good job at matching empirical data. It is perhaps time to start addressing the third step in the seismic hazard prediction system. We present a model for rapid economic loss estimation using ground motion (PGA or PGV) and socioeconomic measures as its input. We show that the model can be calibrated on a global scale and applied worldwide. We also suggest how the model can be improved and generalized to non-seismic natural disasters such as hurricanes and severe wind storms.

  4. Modelling a model?!! Prediction of observed and calculated daily pan evaporation in New Mexico, U.S.A.

    NASA Astrophysics Data System (ADS)

    Beriro, D. J.; Abrahart, R. J.; Nathanail, C. P.

    2012-04-01

    Data-driven modelling is most commonly used to develop predictive models that will simulate natural processes. This paper, in contrast, uses Gene Expression Programming (GEP) to construct two alternative models of different pan evaporation estimations by means of symbolic regression: a simulator, a model of a real-world process developed on observed records, and an emulator, an imitator of some other model developed on predicted outputs calculated by that source model. The solutions are compared and contrasted for the purposes of determining whether any substantial differences exist between either option. This analysis addresses recent arguments over the impact of using downloaded hydrological modelling datasets originating from different initial sources, i.e. observed or calculated. These differences can easily be overlooked by modellers, resulting in a model of a model developed on estimations derived from deterministic empirical equations and producing exceptionally high goodness-of-fit. This paper uses different lines of evidence to evaluate model output and in so doing paves the way for a new protocol in machine learning applications. Transparent modelling tools such as symbolic regression offer huge potential for explaining stochastic processes; however, the basic tenets of data quality and recourse to first principles with regard to problem understanding should not be trivialised. GEP is found to be an effective tool for the prediction of observed and calculated pan evaporation, with results supported by an understanding of the records, and of the natural processes concerned, evaluated using one-at-a-time response function sensitivity analysis. The results show that both architectures and response functions are very similar, implying that previously observed differences in goodness-of-fit can be explained by whether models are applied to observed or calculated data.

  5. Recovery of Item Parameters in the Nominal Response Model: A Comparison of Marginal Maximum Likelihood Estimation and Markov Chain Monte Carlo Estimation.

    ERIC Educational Resources Information Center

    Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun

    2002-01-01

    Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)

  6. Non-contact, Ultrasound-based Indentation Method for Measuring Elastic Properties of Biological Tissues Using Harmonic Motion Imaging (HMI)

    PubMed Central

    Vappou, Jonathan; Hou, Gary Y.; Marquet, Fabrice; Shahmirzadi, Danial; Grondin, Julien; Konofagou, Elisa E.

    2015-01-01

    Noninvasive measurement of mechanical properties of biological tissues in vivo could play a significant role in improving the current understanding of tissue biomechanics. In this study, we propose a method for measuring elastic properties non-invasively by using internal indentation as generated by Harmonic Motion Imaging (HMI). In HMI, an oscillating acoustic radiation force is produced by a focused ultrasound transducer at the focal region, and the resulting displacements are estimated by tracking RF signals acquired by an imaging transducer. In this study, the focal spot region was modeled as a rigid cylindrical piston that exerts an oscillatory, uniform internal force on the underlying tissue. The HMI elastic modulus EHMI was defined as the ratio of the applied force to the axial strain measured by 1D ultrasound imaging. The accuracy and the precision of the EHMI estimate were assessed both numerically and experimentally in polyacrylamide tissue-mimicking phantoms. Initial feasibility of this method in soft tissues was also shown in canine liver specimens in vitro. Very good correlation and agreement were found between the actual Young's modulus and the HMI modulus in the numerical study (r2>0.99, relative error <10%) and on polyacrylamide gels (r2=0.95, relative error <24%). The average HMI modulus on five liver samples was found to be EHMI = 2.62 ± 0.41 kPa, compared to EMechTesting = 4.2 ± 2.58 kPa measured by rheometry. This study has demonstrated for the first time the initial feasibility of a non-invasive, model-independent method to estimate local elastic properties of biological tissues at a submillimeter scale using an internal indentation-like approach. Ongoing studies include in vitro experiments in a larger number of samples and feasibility testing in in vivo models as well as pathological human specimens. PMID:25776065
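
    As a back-of-the-envelope illustration of the EHMI definition, the sketch below estimates axial strain from displacements tracked at two depths and normalizes the applied force by the rigid piston's cross-section to obtain units of pressure (that normalization is my assumption; the abstract defines EHMI simply as a force-to-strain ratio). All numbers are invented.

    ```python
    import numpy as np

    # Toy numbers (not from the paper).
    force = 1e-5                   # N, radiation force amplitude (assumed)
    radius = 0.5e-3                # m, rigid-piston focal spot radius (assumed)
    u_top, u_bot = 22e-6, 14e-6    # m, displacement amplitudes at two gates
    gate_spacing = 2e-3            # m, axial distance between the gates

    strain = (u_top - u_bot) / gate_spacing   # axial strain from 1D tracking
    stress = force / (np.pi * radius**2)      # force over piston cross-section
    E_hmi = stress / strain
    print(f"E_HMI = {E_hmi / 1e3:.1f} kPa")
    ```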

  7. Non-contact, ultrasound-based indentation method for measuring elastic properties of biological tissues using harmonic motion imaging (HMI).

    PubMed

    Vappou, Jonathan; Hou, Gary Y; Marquet, Fabrice; Shahmirzadi, Danial; Grondin, Julien; Konofagou, Elisa E

    2015-04-07

    Noninvasive measurement of mechanical properties of biological tissues in vivo could play a significant role in improving the current understanding of tissue biomechanics. In this study, we propose a method for measuring elastic properties non-invasively by using internal indentation as generated by harmonic motion imaging (HMI). In HMI, an oscillating acoustic radiation force is produced by a focused ultrasound transducer at the focal region, and the resulting displacements are estimated by tracking radiofrequency signals acquired by an imaging transducer. In this study, the focal spot region was modeled as a rigid cylindrical piston that exerts an oscillatory, uniform internal force on the underlying tissue. The HMI elastic modulus EHMI was defined as the ratio of the applied force to the axial strain measured by 1D ultrasound imaging. The accuracy and the precision of the EHMI estimate were assessed both numerically and experimentally in polyacrylamide tissue-mimicking phantoms. Initial feasibility of this method in soft tissues was also shown in canine liver specimens in vitro. Very good correlation and agreement were found between the measured Young's modulus and the HMI modulus in the numerical study (r^2 > 0.99, relative error <10%) and on polyacrylamide gels (r^2 = 0.95, relative error <24%). The average HMI modulus on five liver samples was found to be EHMI = 2.62 ± 0.41 kPa, compared to EMechTesting = 4.2 ± 2.58 kPa measured by rheometry. This study has demonstrated for the first time the initial feasibility of a non-invasive, model-independent method to estimate local elastic properties of biological tissues at a submillimeter scale using an internal indentation-like approach. Ongoing studies include in vitro experiments in a larger number of samples and feasibility testing in in vivo models as well as pathological human specimens.

  8. Stellar mass functions and implications for a variable IMF

    NASA Astrophysics Data System (ADS)

    Bernardi, M.; Sheth, R. K.; Fischer, J.-L.; Meert, A.; Chae, K.-H.; Dominguez-Sanchez, H.; Huertas-Company, M.; Shankar, F.; Vikram, V.

    2018-03-01

    Spatially resolved kinematics of nearby galaxies has shown that the ratio of dynamical to stellar population-based estimates of the mass of a galaxy (M_*^JAM/M_*) correlates with σ_e, the light-weighted velocity dispersion within its half-light radius, if M_* is estimated using the same initial mass function (IMF) for all galaxies and the stellar mass-to-light ratio within each galaxy is constant. This correlation may indicate that, in fact, the IMF is more bottom-heavy or dwarf-rich for galaxies with large σ. We use this correlation to estimate a dynamical or IMF-corrected stellar mass, M_*^{α_JAM}, from M_* and σ_e for a sample of 6 × 10^5 Sloan Digital Sky Survey (SDSS) galaxies for which spatially resolved kinematics is not available. We also compute the 'virial' mass estimate k(n, R) R_e σ_R^2/G, where n is the Sérsic index, in the SDSS and ATLAS3D samples. We show that an n-dependent correction must be applied to the k(n, R) values provided by Prugniel & Simien. Our analysis also shows that the shape of the velocity dispersion profile in the ATLAS3D sample varies weakly with n: (σ_R/σ_e) = (R/R_e)^{-γ(n)}. The resulting stellar mass functions, based on M_*^{α_JAM} and the recalibrated virial mass, are in good agreement. Using a Fundamental Plane-based observational proxy for σ_e produces comparable results. The use of direct measurements for estimating the IMF-dependent stellar mass is prohibitively expensive for a large sample of galaxies. By demonstrating that cheaper proxies are sufficiently accurate, our analysis should enable a more reliable census of the mass in stars, especially at high redshift, at a fraction of the cost. Our results are provided in tabular form.
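
    For concreteness, here is a sketch of the virial estimator k(n) R_e σ^2/G using the widely quoted polynomial approximation to Prugniel & Simien's coefficient; the paper shows this term needs an n-dependent recalibration, so the k(n) below is only the standard starting point.

    ```python
    import numpy as np

    G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

    def virial_mass(sigma_e, r_e_kpc, n):
        """Virial mass k(n) * Re * sigma^2 / G with the common
        approximation k(n) = 8.87 - 0.831 n + 0.0241 n^2 to the
        Prugniel & Simien coefficient (pre-recalibration)."""
        k = 8.87 - 0.831 * n + 0.0241 * n**2
        return k * r_e_kpc * sigma_e**2 / G

    # Example: sigma_e = 200 km/s, Re = 5 kpc, Sersic n = 4.
    print(f"M_vir ~ {virial_mass(200.0, 5.0, 4.0):.2e} Msun")
    ```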

  9. Estimating water volume stored in the south-eastern Greenland firn aquifer using magnetic-resonance soundings

    NASA Astrophysics Data System (ADS)

    Legchenko, Anatoly; Miège, Clément; Koenig, Lora S.; Forster, Richard R.; Miller, Olivia; Solomon, D. K.; Schmerr, Nicholas; Montgomery, Lynn; Ligtenberg, Stefan; Brucker, Ludovic

    2018-03-01

    Recent observations of the Greenland ice sheet show an increase in the area affected by progressive melt of snow and ice, resulting in the production of additional meltwater. In 2011, a substantial store of meltwater was observed in the firn of S-E Greenland. This water does not freeze during the wintertime and forms a perennial firn aquifer. The aquifer's spatial extent was initially monitored with combined ground and airborne radar observations, but these geophysical techniques cannot quantify the amount of meltwater stored at depth. In this study, we use the magnetic resonance sounding (MRS) method to estimate the volume of water stored in the Greenland ice sheet firn and to map its spatial variability. Our study area covers a firn aquifer along a 16-km E-W transect, ranging between elevations of 1520 and 1760 m. In July 2015 and July 2016, we performed MRS measurements that allow us to estimate the water volume in the study area as well as its one-year evolution. Water storage is not homogeneous, fluctuating between 0.2 and 2 m^3/m^2, and contains discontinuities in the hydrodynamic properties. We estimate the average volume of water stored in the firn in 2016 to be 0.76 m^3/m^2, which corresponds to a 0.76-m-thick layer of bulk water. MRS monitoring reveals that from April 2015 to July 2016 the volume of water stored at the location of our transect increased by about 36%. We found the MRS-estimated depth to water to be in good agreement with that obtained with ground penetrating radar (GPR).

  10. American Society of Interventional Pain Physicians (ASIPP) guidelines for responsible opioid prescribing in chronic non-cancer pain: Part 2--guidance.

    PubMed

    Manchikanti, Laxmaiah; Abdi, Salahadin; Atluri, Sairam; Balog, Carl C; Benyamin, Ramsin M; Boswell, Mark V; Brown, Keith R; Bruel, Brian M; Bryce, David A; Burks, Patricia A; Burton, Allen W; Calodney, Aaron K; Caraway, David L; Cash, Kimberly A; Christo, Paul J; Damron, Kim S; Datta, Sukdeb; Deer, Timothy R; Diwan, Sudhir; Eriator, Ike; Falco, Frank J E; Fellows, Bert; Geffert, Stephanie; Gharibo, Christopher G; Glaser, Scott E; Grider, Jay S; Hameed, Haroon; Hameed, Mariam; Hansen, Hans; Harned, Michael E; Hayek, Salim M; Helm, Standiford; Hirsch, Joshua A; Janata, Jeffrey W; Kaye, Alan D; Kaye, Adam M; Kloth, David S; Koyyalagunta, Dhanalakshmi; Lee, Marion; Malla, Yogesh; Manchikanti, Kavita N; McManus, Carla D; Pampati, Vidyasagar; Parr, Allan T; Pasupuleti, Ramarao; Patel, Vikram B; Sehgal, Nalini; Silverman, Sanford M; Singh, Vijay; Smith, Howard S; Snook, Lee T; Solanki, Daneshvari R; Tracy, Deborah H; Vallejo, Ricardo; Wargo, Bradley W

    2012-07-01

    Part 2 of the guidelines on responsible opioid prescribing provides the following recommendations for initiating and maintaining chronic opioid therapy of 90 days or longer. 1. A) Comprehensive assessment and documentation is recommended before initiating opioid therapy, including documentation of comprehensive history, general medical condition, psychosocial history, psychiatric status, and substance use history. (good) B) Despite limited evidence for reliability and accuracy, screening for opioid use is recommended, as it will identify opioid abusers and reduce opioid abuse. (limited) C) Prescription monitoring programs must be implemented, as they provide data on patterns of prescription usage and reduce prescription drug abuse or doctor shopping. (good to fair) D) Urine drug testing (UDT) must be implemented from initiation along with subsequent adherence monitoring to decrease prescription drug abuse or illicit drug use when patients are in chronic pain management therapy. (good) 2. A) Establish appropriate physical diagnosis and psychological diagnosis if available prior to initiating opioid therapy. (good) B) Caution must be exercised in ordering various imaging and other evaluations, and in their interpretation and communication with the patient, to avoid increased fear, activity restriction, requests for increased opioids, and maladaptive behaviors. (good) C) Stratify patients into one of the 3 risk categories - low, medium, or high risk. D) A pain management consultation may assist non-pain physicians if high-dose opioid therapy is utilized. (fair) 3. It is essential to establish medical necessity prior to initiation or maintenance of opioid therapy. (good) 4. Establish treatment goals of opioid therapy with regard to pain relief and improvement in function. (good) 5. A) Long-acting opioids in high doses are recommended only in specific circumstances with severe intractable pain that is not amenable to short-acting or moderate doses of long-acting opioids, as there is no significant difference between long-acting and short-acting opioids for their effectiveness or adverse effects. (fair) B) The relative and absolute contraindications to opioid use in chronic non-cancer pain must be evaluated, including respiratory instability, acute psychiatric instability, uncontrolled suicide risk, active or history of alcohol or substance abuse, confirmed allergy to opioid agents, coadministration of drugs capable of inducing life-limiting drug interactions, concomitant use of benzodiazepines, active diversion of controlled substances, and concomitant use of heavy doses of central nervous system depressants. (fair to limited) 6. A robust agreement which is followed by all parties is essential in initiating and maintaining opioid therapy, as such agreements reduce overuse, misuse, abuse, and diversion. (fair) 7. A) Once medical necessity is established, opioid therapy may be initiated with low doses and short-acting drugs, with appropriate monitoring to provide effective relief and avoid side effects. (fair for short-term effectiveness, limited for long-term effectiveness) B) Up to 40 mg of morphine equivalent is considered a low dose, 41 to 90 mg of morphine equivalent a moderate dose, and greater than 91 mg of morphine equivalent a high dose. (fair) C) In reference to long-acting opioids, titration must be carried out with caution, and overdose and misuse must be avoided. (good) 8. A) Methadone is recommended for use in late stages after failure of other opioid therapy and only by clinicians with specific training in its risks and uses. (limited) B) The monitoring recommendation for methadone prescription is that an electrocardiogram should be obtained prior to initiation, at 30 days, and yearly thereafter. (fair) 9. In order to reduce prescription drug abuse and doctor shopping, adherence monitoring by UDT and PMDPs provides evidence that is essential to the identification of those patients who are non-compliant or abusing prescription drugs or illicit drugs. (fair) 10. Constipation must be closely monitored and a bowel regimen initiated as soon as deemed necessary. (good) 11. Chronic opioid therapy may be continued, with continuous adherence monitoring, in well-selected populations, in conjunction with or after failure of other modalities of treatment, with improvement in physical and functional status and minimal adverse effects. (fair). The guidelines are based on the best available evidence and do not constitute inflexible treatment recommendations. Due to the changing body of evidence, this document is not intended to be a "standard of care."

  11. Estimating the costs of human space exploration

    NASA Technical Reports Server (NTRS)

    Mandell, Humboldt C., Jr.

    1994-01-01

    The plan for NASA's new exploration initiative has the following strategic themes: (1) incremental, logical evolutionary development; (2) economic viability; and (3) excellence in management. The cost estimation process is involved in all of these themes, and each depends on the engineering cost estimator for success. The purpose of this paper is to articulate the issues associated with beginning this major new government initiative, to show how NASA intends to resolve them, and finally to demonstrate the vital importance of a leadership role by the cost estimation community.

  12. Convergence of the Full Compressible Navier-Stokes-Maxwell System to the Incompressible Magnetohydrodynamic Equations in a Bounded Domain II: Global Existence Case

    NASA Astrophysics Data System (ADS)

    Fan, Jishan; Li, Fucai; Nakamura, Gen

    2018-06-01

    In this paper we continue our study on the establishment of uniform estimates of strong solutions with respect to the Mach number and the dielectric constant to the full compressible Navier-Stokes-Maxwell system in a bounded domain Ω ⊂ R^3. In Fan et al. (Kinet Relat Models 9:443-453, 2016), the uniform estimates have been obtained for large initial data in a short time interval. Here we shall show that the uniform estimates exist globally if the initial data are small. Based on these uniform estimates, we obtain the convergence of the full compressible Navier-Stokes-Maxwell system to the incompressible magnetohydrodynamic equations for well-prepared initial data.

  13. Data assimilation for real-time prediction and reanalysis

    NASA Astrophysics Data System (ADS)

    Shprits, Y.; Kellerman, A. C.; Podladchikova, T.; Kondrashov, D. A.; Ghil, M.

    2015-12-01

    We discuss how data assimilation can be used for the analysis of individual satellite anomalies, for the reconstruction of long-term evolution that can be used in specification models, and for improving the nowcasting and forecasting of the radiation belts. We also discuss advanced data assimilation methods such as parameter estimation and smoothing. The 3D data-assimilative VERB code allows us to blend together data from GOES, RBSP A and RBSP B. A real-time prediction framework operating on our web site, based on GOES, RBSP A/B and ACE data and the 3D VERB code, is presented and discussed. In this paper we present a number of applications of data assimilation with the VERB 3D code. 1) The model with data assimilation allows us to propagate data to different pitch angles, energies, and L-shells and blends them together with the physics-based VERB code in an optimal way. We illustrate how we use this capability for the analysis of previous events and for obtaining a global and statistical view of the system. 2) The model predictions strongly depend on the initial conditions that are set up for the model; the model is only as good as the initial conditions it uses. To produce the best possible initial condition, data from different sources (GOES, RBSP A/B, and our empirical model predictions based on ACE) are blended together in an optimal way by means of data assimilation as described above. The resulting initial condition does not have gaps, which allows us to make more accurate predictions.
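
    The "optimal blending" step is, at heart, a Kalman-type update: a model forecast and an observation are combined with weights set by their uncertainties. The scalar sketch below is a generic illustration of that step, not the VERB code's actual assimilation scheme; all numbers are invented.

    ```python
    import numpy as np

    # Forecast (log10 electron flux) and observation, with variances (invented).
    x_model, var_model = 4.2, 0.30
    y_obs, var_obs = 4.8, 0.10

    gain = var_model / (var_model + var_obs)         # Kalman gain
    x_analysis = x_model + gain * (y_obs - x_model)  # blended estimate
    var_analysis = (1 - gain) * var_model            # reduced uncertainty
    print(f"analysis: {x_analysis:.2f} +/- {np.sqrt(var_analysis):.2f}")
    ```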

  14. The Psychosocial Assessment of Candidates for Transplantation: A Cohort Study of its Association With Survival Among Lung Transplant Recipients.

    PubMed

    Hitschfeld, Mario J; Schneekloth, Terry D; Kennedy, Cassie C; Rummans, Teresa A; Niazi, Shehzad K; Vasquez, Adriana R; Geske, Jennifer R; Petterson, Tanya M; Kremers, Walter K; Jowsey-Gregoire, Sheila G

    2016-01-01

    The United Network for Organ Sharing mandates a psychosocial assessment of transplant candidates before listing. A quantified measure for determining transplant candidacy is the Psychosocial Assessment of Candidates for Transplant (PACT) scale. This instrument's predictive value for survival has not been rigorously evaluated among lung transplantation recipients. We reviewed medical records of all patients who underwent lung transplantation at Mayo Clinic, Rochester from 2000-2012. A transplant psychiatrist had assessed lung transplant candidates for psychosocial risk with the PACT scale. Recipients were divided into high- and low psychosocial risk cohorts using a PACT score cutoff of 2. The main outcome variable was posttransplant survival. Mortality was analyzed using the Kaplan-Meier estimator and Cox proportional hazard models. This study included 110 lung recipients: 57 (51.8%) were females, 101 (91.8%) Whites, mean age: 56.4 years. Further, 7 (6.4%) recipients received an initial PACT score <2 (poor or borderline candidates) and later achieved a higher score, allowing transplant listing; 103 (93.6%) received initial scores ≥2 (acceptable, good or great candidates). An initial PACT score < 2 was modestly associated with higher mortality (adjusted hazard ratio = 2.73, p = 0.04). Lung transplant recipients who initially received a low score on the PACT scale, reflecting poor or borderline psychosocial candidacy, experienced greater likelihood of mortality. This primary finding suggests that the psychosocial assessment, as measured by the PACT scale, may provide additional mortality risk stratification for lung transplant candidates. Copyright © 2016 The Academy of Psychosomatic Medicine. Published by Elsevier Inc. All rights reserved.

  15. Full load estimation of an offshore wind turbine based on SCADA and accelerometer data

    NASA Astrophysics Data System (ADS)

    Noppe, N.; Iliopoulos, A.; Weijtjens, W.; Devriendt, C.

    2016-09-01

    As offshore wind farms (OWFs) grow older, the optimal use of the actual fatigue lifetime of an offshore wind turbine (OWT), and predominantly of its foundation, will become more important. In the case of OWTs, both quasi-static wind/thrust loads and dynamic loads, as induced by turbulence, waves and the turbine's dynamics, contribute to fatigue life progression. To estimate the remaining useful life of an OWT, the stresses acting on the fatigue-critical locations within the structure should be monitored continuously. Unfortunately, in the case of the most common monopile foundations, these locations are often situated below sea level and near the mud line, and are thus difficult or even impossible to access for existing OWTs. Actual strain measurements taken at accessible locations above sea level show a correlation between thrust load and several SCADA parameters. Therefore a model is created to estimate the thrust load using SCADA data and strain measurements. Afterwards, the thrust load acting on the OWT is estimated using the created model and SCADA data only. From this model the quasi-static loads on the foundation can be estimated over the lifetime of the OWT. To estimate the contribution of the dynamic loads, a modal decomposition and expansion based virtual sensing technique is applied. This method only uses acceleration measurements recorded at accessible locations on the tower. Superimposing both contributions leads to a so-called multi-band virtual sensing. The result is a method that allows the strain history to be estimated at any location on the foundation, and thus the full load, being a combination of both quasi-static and dynamic loads, acting on the entire structure. This approach is validated using data from an operating Belgian OWF. An initial good match between measured and predicted strains for a short period of time proves the concept.

  16. Estimation of regression laws for ground motion parameters using as case of study the Amatrice earthquake

    NASA Astrophysics Data System (ADS)

    Tiberi, Lara; Costa, Giovanni

    2017-04-01

    The possibility of directly associating damages with ground motion parameters is always a great challenge, in particular for civil protection. Indeed, a ground motion parameter, estimated in near real time, that can express the damage occurring after an earthquake is fundamental for arranging first assistance after an event. The aim of this work is to contribute to the estimation of the ground motion parameter that best describes the observed intensity, immediately after an event. This can be done by calculating, for each ground motion parameter estimated in near real time, a regression law which correlates the above-mentioned parameter to the observed macroseismic intensity. This estimation is done by collecting high-quality accelerometric data in the near field and filtering them at different frequency steps. The regression laws are calculated using two different techniques: the non-linear least-squares (NLLS) Marquardt-Levenberg algorithm and the orthogonal distance regression (ODR) methodology. The limits of the first methodology are the need for initial values of the parameters a and b (set to 1.0 in this study), and the constraint that the independent variable must be known with greater accuracy than the dependent variable. The second algorithm, instead, is based on errors estimated perpendicular to the line rather than just vertically. The vertical errors are the errors in the 'y' direction only, so only for the dependent variable, whereas the perpendicular errors take into account errors in both variables, the dependent and the independent. This also makes it possible to invert the relation directly, so the a and b values can be used to express the ground motion parameters as a function of I. For each law the standard deviation and R^2 value are estimated in order to test the quality and reliability of the found relation. The Amatrice earthquake of 24th August 2016 is used as a case study to test the goodness of the calculated regression laws.
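
    Both fitting techniques are available off the shelf; the sketch below contrasts them on invented intensity data, assuming the linear form I = a + b log(GMP) (the paper's exact parameterization may differ). curve_fit uses the Levenberg-Marquardt least-squares algorithm, while scipy.odr minimizes orthogonal distances with errors on both variables.

    ```python
    import numpy as np
    from scipy import odr
    from scipy.optimize import curve_fit

    # Hypothetical data: log10 of a ground motion parameter vs. observed
    # macroseismic intensity, with assumed uncertainties on both axes.
    gmp = np.array([-1.2, -0.8, -0.5, -0.1, 0.2, 0.5])
    intensity = np.array([4.1, 5.0, 5.6, 6.8, 7.3, 8.1])
    sx = np.full_like(gmp, 0.1)
    sy = np.full_like(intensity, 0.4)

    # NLLS (Levenberg-Marquardt): errors on y only, initial values required.
    f = lambda x, a, b: a + b * x
    (a_ls, b_ls), _ = curve_fit(f, gmp, intensity, p0=[1.0, 1.0])

    # ODR: perpendicular errors on both variables; the fitted relation can
    # then be inverted directly to express the GMP as a function of I.
    model = odr.Model(lambda B, x: B[0] + B[1] * x)
    data = odr.RealData(gmp, intensity, sx=sx, sy=sy)
    a_od, b_od = odr.ODR(data, model, beta0=[1.0, 1.0]).run().beta

    print(f"NLLS: I = {a_ls:.2f} + {b_ls:.2f} log(GMP)")
    print(f"ODR : I = {a_od:.2f} + {b_od:.2f} log(GMP)")
    ```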

  17. Parent-Child Communication and Marijuana Initiation: Evidence Using Discrete-Time Survival Analysis

    PubMed Central

    Nonnemaker, James M.; Silber-Ashley, Olivia; Farrelly, Matthew C.; Dench, Daniel

    2012-01-01

    This study supplements existing literature on the relationship between parent-child communication and adolescent drug use by exploring whether parental and/or adolescent recall of specific drug-related conversations differentially impact youth's likelihood of initiating marijuana use. Using discrete-time survival analysis, we estimated the hazard of marijuana initiation using a logit model to obtain an estimate of the relative risk of initiation. Our results suggest that parent-child communication about drug use is either not protective (no effect) or—in the case of youth reports of communication—potentially harmful (leading to increased likelihood of marijuana initiation). PMID:22958867
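
    Discrete-time survival models of this kind are usually fitted on person-period data, one row per youth per interval until initiation or censoring, with a logit link. The sketch below shows the mechanics on invented rows; the column names and data are hypothetical, not the study's covariates.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Toy person-period data: one row per youth per year at risk;
    # `talked` flags reported parent-child drug conversations (invented).
    df = pd.DataFrame({
        "age":       [13, 14, 15, 13, 14, 13, 14, 15, 16, 13],
        "talked":    [0,  0,  0,  1,  1,  0,  1,  1,  1,  0],
        "initiated": [0,  0,  1,  0,  1,  0,  0,  0,  1,  0],
    })

    # Logit hazard model: P(initiate at age t | not yet initiated).
    X = sm.add_constant(df[["age", "talked"]])
    fit = sm.Logit(df["initiated"], X).fit(disp=0)
    print(np.exp(fit.params))  # odds ratios; >1 for `talked` suggests risk
    ```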

  18. Back to the future: estimating pre-injury brain volume in patients with traumatic brain injury.

    PubMed

    Ross, David E; Ochs, Alfred L; D Zannoni, Megan; Seabaugh, Jan M

    2014-11-15

    A recent meta-analysis by Hedman et al. allows for accurate estimation of brain volume changes throughout the life span. Additionally, Tate et al. showed that intracranial volume at a later point in life can be used to reliably estimate brain volume at an earlier point in life. These advancements were combined to create a model which allowed the estimation of brain volume just prior to injury in a group of patients with mild or moderate traumatic brain injury (TBI). This volume estimation model was used in combination with actual measurements of brain volume to test hypotheses about progressive brain volume changes in the patients. Twenty-six patients with mild or moderate TBI were compared to 20 normal control subjects. NeuroQuant® was used to measure brain MRI volume. Brain volume after the injury (from MRI scans performed at t1 and t2) was compared to brain volume just before the injury (volume estimated at t0) using longitudinal designs. Groups were compared with respect to volume changes in whole brain parenchyma (WBP) and its 3 major subdivisions: cortical gray matter (GM), cerebral white matter (CWM) and subcortical nuclei + infratentorial regions (SCN+IFT). Using the normal control data, the volume estimation model was tested by comparing measured brain volume to estimated brain volume; reliability ranged from good to excellent. During the initial phase after injury (t0-t1), the TBI patients had abnormally rapid atrophy of WBP and CWM, and abnormally rapid enlargement of SCN+IFT. Rates of volume change during t0-t1 correlated with cross-sectional measures of volume change at t1, supporting the internal reliability of the volume estimation model. A logistic regression analysis using the volume change data produced a function which perfectly predicted group membership (TBI patients vs. normal control subjects). During the first few months after injury, patients with mild or moderate TBI have rapid atrophy of WBP and CWM, and rapid enlargement of SCN+IFT. The magnitude and pattern of the changes in volume may allow for the eventual development of diagnostic tools based on the volume estimation approach. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Curvature estimation for multilayer hinged structures with initial strains

    NASA Astrophysics Data System (ADS)

    Nikishkov, G. P.

    2003-10-01

    A closed-form estimate of curvature for hinged multilayer structures with initial strains is developed. The finite element method is used for modeling of self-positioning microstructures. The geometrically nonlinear problem with large rotations and large displacements is solved using a step procedure with node coordinate updates. Finite element results for the curvature of a hinged micromirror with variable width are compared to the closed-form estimates.
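
    The two-layer special case has a classical closed form, Timoshenko's bimetal result, which conveys what such a curvature estimate looks like; the paper's multilayer expression generalizes beyond it, so the function below is the textbook special case, not the paper's formula, and the numbers are illustrative.

    ```python
    # Timoshenko's bimetal curvature for a two-layer strip with an initial
    # strain mismatch eps0 (textbook special case, illustrative numbers).
    def bilayer_curvature(eps0, t1, t2, E1, E2):
        m = t1 / t2            # thickness ratio
        n = E1 / E2            # modulus ratio
        h = t1 + t2            # total thickness
        num = 6.0 * eps0 * (1.0 + m) ** 2
        den = h * (3.0 * (1.0 + m) ** 2
                   + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n)))
        return num / den       # curvature, 1/length

    # Example: 0.5% mismatch, 0.5/1.5 um layers, 70/170 GPa moduli (assumed).
    kappa = bilayer_curvature(0.005, 0.5e-6, 1.5e-6, 70e9, 170e9)
    print(f"radius of curvature ~ {1e6 / kappa:.0f} um")
    ```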

  20. Evaluation of Alternative Conceptual Models Using Interdisciplinary Information: An Application in Shallow Groundwater Recharge and Discharge

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Bajcsy, P.; Valocchi, A. J.; Kim, C.; Wang, J.

    2007-12-01

    Natural systems are complex, thus extensive data are needed for their characterization. However, data acquisition is expensive; consequently we develop models using sparse, uncertain information. When all uncertainties in the system are considered, the number of alternative conceptual models is large. Traditionally, the development of a conceptual model has relied on subjective professional judgment. Good judgment is based on experience in coordinating and understanding auxiliary information which is correlated with the model but difficult to quantify within the mathematical model. For example, groundwater recharge and discharge (R&D) processes are known to relate to multiple information sources such as soil type, river and lake location, irrigation patterns and land use. Although hydrologists have been trying to understand and model the interaction between each of these information sources and R&D processes, it is extremely difficult to quantify their correlations using a universal approach due to the complexity of the processes and their spatiotemporal distribution and uncertainty. There is currently no single method capable of estimating R&D rates and patterns for all practical applications. Chamberlin (1890) recommended the use of "multiple working hypotheses" (alternative conceptual models) for rapid advancement in the understanding of applied and theoretical problems. Therefore, cross-analyzing R&D rates and patterns from various estimation methods and related field information will likely be superior to using only a single estimation method. We have developed the Pattern Recognition Utility (PRU) to help GIS users recognize spatial patterns in noisy 2D images. This GIS plug-in utility has been applied to help hydrogeologists establish alternative R&D conceptual models in a more efficient way than conventional methods. The PRU uses numerical methods and image processing algorithms to estimate and visualize shallow R&D patterns and rates. It can provide a fast initial estimate prior to planning labor-intensive and time-consuming field R&D measurements. Furthermore, the Spatial Pattern 2 Learn (SP2L) tool was developed to cross-analyze results from the PRU with ancillary field information, such as land coverage, soil type, topographic maps and previous estimates. The learning process of SP2L cross-examines each initially recognized R&D pattern against the ancillary spatial dataset, and then calculates a quantifiable reliability index for each R&D map using a supervised machine learning technique, the decision tree. This Java-based software package is capable of generating alternative R&D maps if the user decides to apply certain conditions recognized by the learning process. The reliability indices from SP2L will improve the traditionally subjective approach to initiating conceptual models by providing objectively quantifiable conceptual bases for further probabilistic and uncertainty analyses. Both the PRU and SP2L have been designed to be user-friendly and universal utilities for pattern recognition and learning, to improve model predictions from sparse measurements by computer-assisted integration of spatially dense geospatial image data and machine learning of model dependencies.
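
    The reliability-index step can be sketched with any off-the-shelf decision tree: train it to predict the pattern labels from ancillary GIS attributes, then score a candidate map by how well the tree's predictions agree with it. Everything below (features, labels, the agreement score) is a hypothetical stand-in for SP2L's actual procedure.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical cells: ancillary attributes (soil class, land cover
    # class, slope) and labels from an initially recognized pattern
    # (1 = recharge, 0 = discharge); all values invented.
    X = np.array([[1, 2, 0.02], [1, 3, 0.01], [2, 1, 0.10],
                  [3, 1, 0.12], [2, 2, 0.05], [3, 3, 0.08]])
    y = np.array([1, 1, 0, 0, 1, 0])

    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

    # Crude reliability index for a candidate R&D map: mean agreement
    # between tree predictions from ancillary data and the map's labels.
    candidate_labels = np.array([1, 1, 0, 1, 1, 0])
    reliability = (tree.predict(X) == candidate_labels).mean()
    print(f"reliability index: {reliability:.2f}")
    ```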

  1. Two-Dimensional Echocardiography Estimates of Fetal Ventricular Mass throughout Gestation.

    PubMed

    Aye, Christina Y L; Lewandowski, Adam James; Ohuma, Eric O; Upton, Ross; Packham, Alice; Kenworthy, Yvonne; Roseman, Fenella; Norris, Tess; Molloholli, Malid; Wanyonyi, Sikolia; Papageorghiou, Aris T; Leeson, Paul

    2017-08-12

    Two-dimensional (2D) ultrasound quality has improved in recent years. Quantification of cardiac dimensions is important to screen and monitor certain fetal conditions. We assessed the feasibility and reproducibility of fetal ventricular measures using 2D echocardiography, reported normal ranges in our cohort, and compared estimates to other modalities. Mass and end-diastolic volume were estimated by manual contouring in the four-chamber view using TomTec Image Arena 4.6 in end diastole. Nomograms were created from smoothed centiles of measures, constructed using fractional polynomials after log transformation. The results were compared to those of previous studies using other modalities. A total of 294 scans from 146 fetuses from 15+0 to 41+6 weeks of gestation were included. Seven percent of scans were unanalysable and intraobserver variability was good (intraclass correlation coefficients for left and right ventricular mass 0.97 [0.87-0.99] and 0.99 [0.95-1.0], respectively). Mass and volume increased exponentially, showing good agreement with 3D mass estimates up to 28 weeks of gestation, after which our measurements were in better agreement with neonatal cardiac magnetic resonance imaging. There was good agreement with 4D volume estimates for the left ventricle. Current state-of-the-art 2D echocardiography platforms provide accurate, feasible, and reproducible fetal ventricular measures across gestation, and in certain circumstances may be the modality of choice. © 2017 S. Karger AG, Basel.

  2. Analysis of compaction initiation in human embryos by using time-lapse cinematography.

    PubMed

    Iwata, Kyoko; Yumoto, Keitaro; Sugishima, Minako; Mizoguchi, Chizuru; Kai, Yoshiteru; Iba, Yumiko; Mio, Yasuyuki

    2014-04-01

    To analyze the initiation of compaction in human embryos in vitro by using time-lapse cinematography (TLC), with the goal of determining the precise timing of compaction and clarifying the morphological changes underlying the compaction process. One hundred and fifteen embryos donated by couples with no further need for embryo-transfer were used in this study. Donated embryos were thawed and processed, and then their morphological behavior during the initiation of compaction was dynamically observed via time-lapse cinematography (TLC) for 5 days. Although the initiation of compaction occurred throughout the period from the 4-cell to 16-cell stage, 99 (86.1 %) embryos initiated compaction at the 8-cell stage or later, with initiation at the 8-cell stage being most frequent (22.6 %). Of these 99 embryos, 49.5 % developed into good-quality blastocysts. In contrast, of the 16 (13.9 %) embryos that initiated compaction prior to the 8-cell stage, only 18.8 % developed into good-quality blastocysts. Embryos that initiated compaction before the 8-cell stage showed significantly higher numbers of multinucleated blastomeres, due to asynchronism in nuclear division at the third mitotic division resulting from cytokinetic failure. The initiation of compaction primarily occurs at the third mitotic division or later in human embryos. Embryos that initiate compaction before the 8-cell stage are usually associated with aberrant embryonic development (i.e., cytokinetic failure accompanied by karyokinesis).

  3. Sharing Good Practices: Teenage Girls, Sport, and Physical Activities.

    ERIC Educational Resources Information Center

    Vescio, Johanna A.; Crosswhite, Janice J.

    2002-01-01

    Investigated initiatives to increase teenage girls' participation in sport and physical activities, examining good practice case studies from Australia. Surveys of national, state, and regional sporting organizations and various community and school organizations (including culturally diverse girls and girls with disabilities) highlighted three…

  4. Exploring the concept of quality care for the person who is dying.

    PubMed

    Stefanou, Nichola; Faircloth, Sandra

    2010-12-01

    The concept of good quality care for the patient who is dying is diverse and complex. Many of the actions being taken to increase the quality of care of the dying patient are based around outcome, uniformity of service and standardization of process. There are two main areas that are referred to when dealing with care of the dying patient: end-of-life care and palliative care. High-quality end-of-life care is increasingly recognized as an ethical obligation of health-care providers, clinicians and organizations, and yet there appears to be little evidence from the patients' perspective. There are many national and local initiatives taking place to improve the quality of care people receive towards the end of their life. This being said, initiatives alone will not achieve good quality care and deliver good patient experiences. Only clinicians working at the front line can truly influence the way in which quality is improved and good experiences delivered.

  5. Sediment transport simulation in an armoured stream

    USGS Publications Warehouse

    Milhous, Robert T.; Bradley, Jeffrey B.; Loeffler, Cindy L.

    1986-01-01

    Improved methods of calculating bed material stability and transport must be developed for a gravel bed stream having an armoured surface in order to use the HEC-6 model to examine channel change. Good possibilities exist for the use of a two-layer model based on the Schoklitsch and the Einstein-Brown transport equations. In Einstein-Brown, the D35 of the armour is used for stability and the D50 of the bed (sub-surface) is used for transport. Data on the armour and sub-surface size distributions need to be obtained as part of a bed material study in a gravel bed river; a "shovel" sample is not adequate. The Meyer-Peter and Müller equation should not be applied to a gravel bed stream with an armoured surface to estimate the initiation of transport or for calculation of transport at low effective bed shear stress.
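
    A sketch of the two-layer idea, the armour D35 setting the stability check and the subsurface D50 setting the transport rate, using the textbook Einstein-Brown form (Brown's Phi = 40 theta^3 with Rubey's fall-velocity factor) follows; the shear stress, grain sizes and Shields threshold are assumed values, and this is not the HEC-6 implementation.

    ```python
    import numpy as np

    # Illustrative two-layer check (not the HEC-6 implementation).
    rho, rho_s, g, nu = 1000.0, 2650.0, 9.81, 1.0e-6   # SI units
    s = rho_s / rho
    d35_armour, d50_sub = 0.045, 0.020                 # m (assumed)
    tau = 40.0                                         # N/m^2 (assumed)

    # Stability: Shields parameter of the armour D35 vs. a nominal threshold.
    theta_armour = tau / ((rho_s - rho) * g * d35_armour)
    if theta_armour > 0.047:
        # Transport: textbook Einstein-Brown with the subsurface D50.
        d = d50_sub
        theta = tau / ((rho_s - rho) * g * d)
        a = 36 * nu**2 / (g * d**3 * (s - 1))          # Rubey factor terms
        K = np.sqrt(2.0 / 3.0 + a) - np.sqrt(a)
        qs = 40 * theta**3 * K * np.sqrt(g * (s - 1) * d**3)
        print(f"transport ~ {qs:.2e} m^2/s per unit width")
    else:
        print("armour stable: no transport")
    ```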

  6. Biomechanical monitoring of healing bone based on acoustic emission technology.

    PubMed

    Hirasawa, Yasusuke; Takai, Shinro; Kim, Wook-Cheol; Takenaka, Nobuyuki; Yoshino, Nobuyuki; Watanabe, Yoshinobu

    2002-09-01

    Acoustic emission testing is a well-established method for assessment of the mechanical integrity of general construction projects. The purpose of the current study was to investigate the usefulness of acoustic emission technology in monitoring the yield strength of healing callus during external fixation. Thirty-five patients with 39 long bones treated with external fixation were evaluated for fracture healing by monitoring load for the initiation of acoustic emission signal (yield strength) under axial loading. The major criteria for functional bone union based on acoustic emission testing were (1) no acoustic emission signal on full weightbearing, and (2) a higher estimated strength than body weight. The yield strength monitored by acoustic emission testing increased with the time of healing. The external fixator could be removed safely and successfully in 97% of the patients. Thus, the acoustic emission method has good potential as a reliable method for monitoring the mechanical status of healing bone.

  7. A computational approach to the relationship between radiation induced double strand breaks and translocations

    NASA Technical Reports Server (NTRS)

    Holley, W. R.; Chatterjee, A.

    1994-01-01

    A theoretical framework is presented which provides a quantitative analysis of radiation-induced translocations between the abl oncogene on CH 9q34 and a breakpoint cluster region, bcr, on CH 22q11. Such translocations are frequently associated with chronic myelogenous leukemia. The theory is based on the assumption that double strand breaks produced concurrently within the 200 kbp intron region upstream of the second abl exon and within the 16.5 kbp region between bcr exon 2 and exon 6 can rejoin incorrectly or unfaithfully with each other, resulting in a fusion gene. For an x-ray dose of 100 Gy, there is good agreement between the theoretical estimate and the one available experimental result. The theory has been extended to provide dose response curves for these types of translocations. These curves are quadratic at low doses and become linear at high doses.

  8. Influence of external mass transfer limitation on apparent kinetic parameters of penicillin G acylase immobilized on nonporous ultrafine silica particles.

    PubMed

    Kheirolomoom, Azadeh; Khorasheh, Farhad; Fazelinia, Hossein

    2002-01-01

    Immobilization of enzymes on nonporous supports provides a suitable model for investigating the effect of external mass transfer limitation on the reaction rate in the absence of internal diffusional resistance. In this study, deacylation of penicillin G was investigated using penicillin acylase immobilized on ultrafine silica particles. Kinetic studies were performed within the low-substrate-concentration region, where the external mass transfer limitation becomes significant. To predict the apparent kinetic parameters and the overall effectiveness factor, knowledge of the external mass transfer coefficient, k_L a, is necessary. Although various correlations exist for estimation of k_L a, in this study an optimization scheme was utilized to obtain this coefficient. Using the optimum values of k_L a, the initial reaction rates were predicted and found to be in good agreement with the experimental data.

  9. A vortex-filament and core model for wings with edge vortex separation

    NASA Technical Reports Server (NTRS)

    Pao, J. L.; Lan, C. E.

    1981-01-01

    A method for predicting the aerodynamic characteristics of slender wings with edge vortex separation was developed. Semiempirical but simple methods were used to determine the initial positions of the free sheet and vortex core. Comparison with available data indicates that: the present method is generally accurate in predicting the lift and induced drag coefficients, but the predicted pitching moment is too positive; the spanwise lifting pressure distributions estimated by the one-vortex-core solution of the present method are significantly better than the results of Mehrotra's method relative to the pressure peak values for the flat delta; the two-vortex-core system applied to the double delta and strake wings produces overall aerodynamic characteristics in good agreement with data except for the pitching moment; and the computer time for the present method is about two thirds of that of Mehrotra's method.

  10. ProFound: Source Extraction and Application to Modern Survey Data

    NASA Astrophysics Data System (ADS)

    Robotham, A. S. G.

    2018-04-01

    ProFound detects sources in noisy images, generates segmentation maps identifying the pixels belonging to each source, and measures statistics like flux, size, and ellipticity. These inputs are key requirements of ProFit (ascl:1612.004), our galaxy profiling package; used in unison, the two packages semi-automatically profile large samples of galaxies. The key novel feature introduced in ProFound is that all photometry is executed on dilated segmentation maps that fully contain the identifiable flux, rather than using more traditional circular or ellipse-based photometry. Also, to be less sensitive to pathological segmentation issues, the de-blending is performed across saddle points in flux. ProFound offers good initial parameter estimation for ProFit, as well as segmentation maps that follow the sometimes complex geometry of resolved sources whilst capturing nearly all of the flux. A number of bulge-disc decomposition projects are already making use of the ProFound and ProFit pipeline.

  11. Evaluation of ERTS data for certain oceanographic uses. [sunglint, algal bloom, water temperature, upwelling, and turbidity of Great Lakes waters

    NASA Technical Reports Server (NTRS)

    Strong, A. E. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. (1) Sunglint effects over water can be expected in ERTS-1 images whenever solar elevations exceed 55 deg. (2) Upwellings were viewed coincidentally by ERTS-1 and NOAA-2 in Lake Michigan on two occasions during August 1973. (3) A large oil slick was identified 100 km off the Maryland coast in the Atlantic Ocean. The volume of the oil was estimated to be at least 200,000 liters (50,000 gallons). (4) ERTS-1 observations of turbidity patterns in Lake St. Clair provide circulation information that correlates well with physical model studies made 10 years ago. (5) Good correlation has been established between ERTS-1 water color densities and NOAA-2 thermal infrared surface temperature measurements. Initial comparisons have been made in Lake Erie during March 1973.

  12. Flares, ejections, proton events

    NASA Astrophysics Data System (ADS)

    Belov, A. V.

    2017-11-01

    Statistical analysis is performed for the relationship of coronal mass ejections (CMEs) and X-ray flares with the fluxes of solar protons with energies >10 and >100 MeV observed near the Earth. The basis for this analysis was the set of events that took place in 1976-2015 for which there are reliable observations of X-ray flares from the GOES satellites and CME observations from the SOHO/LASCO coronagraphs. A fairly good correlation has been revealed between the magnitude of proton enhancements and the power and duration of flares, as well as the initial CME speed. The statistics do not give a clear advantage to either the CMEs or the flares concerning their relation to proton events, but the characteristics of the flares and ejections complement each other well, and it is reasonable to use them together in forecast models. Numerical dependences are obtained that allow estimation of the proton fluxes expected at Earth from solar observations; possibilities for improving the model are discussed.

  13. Model updating strategy for structures with localised nonlinearities using frequency response measurements

    NASA Astrophysics Data System (ADS)

    Wang, Xing; Hill, Thomas L.; Neild, Simon A.; Shaw, Alexander D.; Haddad Khodaparast, Hamed; Friswell, Michael I.

    2018-02-01

    This paper proposes a model updating strategy for localised nonlinear structures. It utilises an initial finite-element (FE) model of the structure and primary harmonic response data taken from low and high amplitude excitations. The underlying linear part of the FE model is first updated using low-amplitude test data with established techniques. Then, using this linear FE model, the nonlinear elements are localised, characterised, and quantified with primary harmonic response data measured under stepped-sine or swept-sine excitations. Finally, the resulting model is validated by comparing the analytical predictions with both the measured responses used in the updating and with additional test data. The proposed strategy is applied to a clamped beam with a nonlinear mechanism and good agreements between the analytical predictions and measured responses are achieved. Discussions on issues of damping estimation and dealing with data from amplitude-varying force input in the updating process are also provided.

  14. Sensory acceptability of squid rings gamma irradiated for shelf-life extension

    NASA Astrophysics Data System (ADS)

    Tomac, Alejandra; Cova, María C.; Narvaiz, Patricia; Yeannes, María I.

    2017-01-01

    The feasibility of extending the shelf-life of a squid product by gamma irradiation was analyzed. Illex argentinus rings were irradiated at 4 and 8 kGy and stored at 4±1 °C for 77 days. No mesophilic bacteria, Enterobacteriaceae, or coliforms were detected in irradiated rings during storage. Psychrotrophic bacteria were significantly reduced by irradiation; their counts were fitted to a growth model which was further used for shelf-life estimations: 3 and 27 days for 0 and 4 kGy, respectively. Initially, non-irradiated as well as irradiated rings had very good sensory scores. The overall acceptability of 4 and 8 kGy rings did not decrease during 27 and 64 days, respectively, while control samples spoiled after 3 days. A radiation dose range for squid ring preservation was defined which attained the technological shelf-life extension objective without impairing sensory quality.

  15. A vortex-filament and core model for wings with edge vortex separation

    NASA Technical Reports Server (NTRS)

    Pao, J. L.; Lan, C. E.

    1982-01-01

    A vortex filament-vortex core method for predicting the aerodynamic characteristics of slender wings with edge vortex separation was developed. Semi-empirical but simple methods were used to determine the initial positions of the free sheet and vortex core. Comparison with available data indicates that: (1) the present method is generally accurate in predicting the lift and induced drag coefficients, but the predicted pitching moment is too positive; (2) the spanwise lifting pressure distributions estimated by the one vortex core solution of the present method are significantly better than the results of Mehrotra's method relative to the pressure peak values for the flat delta; (3) the two vortex core system applied to the double delta and strake wings produces overall aerodynamic characteristics in good agreement with data except for the pitching moment; and (4) the computer time for the present method is about two thirds of that of Mehrotra's method.

  16. Magma ocean formation due to giant impacts

    NASA Technical Reports Server (NTRS)

    Tonks, W. B.; Melosh, H. J.

    1993-01-01

    The thermal effects of giant impacts are studied by estimating the melt volume generated by the initial shock wave and corresponding magma ocean depths. Additionally, the effects of the planet's initial temperature on the generated melt volume are examined. The shock pressure required to completely melt the material is determined using the Hugoniot curve plotted in pressure-entropy space. Once the melting pressure is known, an impact melting model is used to estimate the radial distance melting occurred from the impact site. The melt region's geometry then determines the associated melt volume. The model is also used to estimate the partial melt volume. Magma ocean depths resulting from both excavated and retained melt are calculated, and the melt fraction not excavated during the formation of the crater is estimated. The fraction of a planet melted by the initial shock wave is also estimated using the model.

  17. Effect of initial moisture content on the in-vessel composting under air pressure of organic fraction of municipal solid waste in Morocco.

    PubMed

    Makan, Abdelhadi; Assobhei, Omar; Mountadar, Mohammed

    2013-01-03

    This study aimed to evaluate the effect of initial moisture content on the in-vessel composting under air pressure of the organic fraction of municipal solid waste in Morocco in terms of internal temperature, produced gas quantity, organic matter conversion rate, and the quality of the final composts. For this purpose, an in-vessel bioreactor was designed and used to evaluate both the appropriate initial air pressure and the appropriate initial moisture content for the composting process. Moreover, 5 experiments were carried out with initial moisture contents of 55%, 65%, 70%, 75% and 85%. The initial air pressure and the initial moisture content of the mixture showed a significant effect on the aerobic composting. The experimental results demonstrated that for composting organic waste, relatively high moisture contents are better at achieving higher temperatures and retaining them for longer times. This study suggested that an initial moisture content of around 75%, under 0.6 bar, can be considered suitable for efficient composting of the organic fraction of municipal solid waste. These conditions allowed a maximum temperature to be reached and gave a final composting product with good physicochemical properties as well as higher organic matter degradation and higher gas production. Moreover, the final compost obtained showed good maturity levels and can be used for agricultural applications.

  18. A method for the estimate of the wall diffusion for non-axisymmetric fields using rotating external fields

    NASA Astrophysics Data System (ADS)

    Frassinetti, L.; Olofsson, K. E. J.; Fridström, R.; Setiadi, A. C.; Brunsell, P. R.; Volpe, F. A.; Drake, J.

    2013-08-01

    A new method for the estimation of the wall diffusion time of non-axisymmetric fields is developed. The method, based on rotating external fields and on measurement of the wall frequency response, is developed and tested in EXTRAP T2R. The method allows the experimental estimation of the wall diffusion time for each Fourier harmonic and of the wall diffusion toroidal asymmetries. The method intrinsically considers the effects of three-dimensional structures and of the shell gaps. Far from the gaps, experimental results are in good agreement with the diffusion time estimated with a simple cylindrical model that assumes a homogeneous wall. The method is also applied with non-standard configurations of the coil array, in order to mimic tokamak-relevant settings with partial wall coverage and active coils of large toroidal extent. The comparison with the full coverage results shows good agreement if the effects of the relevant sidebands are considered.

  19. An Anisotropic A posteriori Error Estimator for CFD

    NASA Astrophysics Data System (ADS)

    Feijóo, Raúl A.; Padra, Claudio; Quintana, Fernando

    In this article, a robust anisotropic adaptive algorithm is presented, to solve compressible-flow equations using a stabilized CFD solver and automatic mesh generators. The association includes a mesh generator, a flow solver, and an a posteriori error-estimator code. The estimator was selected among several choices available (Almeida et al. (2000). Comput. Methods Appl. Mech. Engng, 182, 379-400; Borges et al. (1998). "Computational mechanics: new trends and applications". Proceedings of the 4th World Congress on Computational Mechanics, Bs.As., Argentina) giving a powerful computational tool. The main aim is to capture solution discontinuities, in this case, shocks, using the least amount of computational resources, i.e. elements, compatible with a solution of good quality. This leads to high aspect-ratio elements (stretching). To achieve this, a directional error estimator was specifically selected. The numerical results show good behavior of the error estimator, resulting in strongly-adapted meshes in few steps, typically three or four iterations, enough to capture shocks using a moderate and well-distributed amount of elements.

  20. Massive yet grossly underestimated global costs of invasive insects

    PubMed Central

    Bradshaw, Corey J. A.; Leroy, Boris; Bellard, Céline; Roiz, David; Albert, Céline; Fournier, Alice; Barbet-Massin, Morgane; Salles, Jean-Michel; Simard, Frédéric; Courchamp, Franck

    2016-01-01

    Insects have presented human society with some of its greatest development challenges by spreading diseases, consuming crops and damaging infrastructure. Despite the massive human and financial toll of invasive insects, cost estimates of their impacts remain sporadic, spatially incomplete and of questionable quality. Here we compile a comprehensive database of economic costs of invasive insects. Taking all reported goods and service estimates, invasive insects cost a minimum of US$70.0 billion per year globally, while associated health costs exceed US$6.9 billion per year. Total costs rise as the number of estimates increases, although many of the worst costs have already been estimated (especially those related to human health). A lack of dedicated studies, especially for reproducible goods and service estimates, implies gross underestimation of global costs. Global warming, rising human population densities and intensifying international trade will allow these costly insects to spread into new areas, but substantial savings could be achieved by increasing surveillance, containment and public awareness. PMID:27698460

  1. Adaptive recovery of motion blur point spread function from differently exposed images

    NASA Astrophysics Data System (ADS)

    Albu, Felix; Florea, Corneliu; Drîmbarean, Alexandru; Zamfir, Adrian

    2010-01-01

    Motion due to digital camera movement during the image capture process is a major factor that degrades the quality of images, and many methods for camera motion removal have been developed. Central to all techniques is the correct recovery of what is known as the Point Spread Function (PSF). A very popular technique to estimate the PSF relies on using a pair of gyroscopic sensors to measure the hand motion. However, the errors caused either by the loss of the translational component of the movement or by the lack of precision in gyro-sensor measurements impede the achievement of a good quality restored image. In order to compensate for this, we propose a method that begins with an estimation of the PSF obtained from 2 gyro sensors and uses an under-exposed image together with the blurred image to adaptively improve it. The luminance of the under-exposed image is equalized with that of the blurred image. An initial estimation of the PSF is generated from the output signal of the 2 gyro sensors. The PSF coefficients are updated using 2D-Least Mean Square (LMS) algorithms with a coarse-to-fine approach on a grid of points selected from both images. This refined PSF is used to process the blurred image using known deblurring methods. Our results show that the proposed method leads to superior PSF support and coefficient estimation. The quality of the restored image is also improved compared to the 2-gyro-only approach or to blind image deconvolution results.
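
    A minimal sketch of the LMS refinement step described above, assuming the luminance-equalized under-exposed frame approximates the un-blurred scene; the grid stride, step size and loop structure are illustrative and do not reproduce the authors' coarse-to-fine 2D-LMS implementation:

    ```python
    import numpy as np

    def lms_refine_psf(sharp, blurred, psf0, mu=1e-4, n_iter=5):
        """Refine a PSF so that the sharp frame, blurred by it, matches the blurred frame.

        sharp   : luminance-equalized under-exposed (approximately un-blurred) image
        blurred : long-exposure blurred image of the same scene
        psf0    : initial PSF guess (e.g. integrated from gyro data), shape (k, k)
        """
        psf = psf0.copy()
        k = psf.shape[0]
        r = k // 2
        h, w = sharp.shape
        for _ in range(n_iter):
            # Sweep a coarse grid of interior pixels; a real implementation
            # would pick well-textured points and refine coarse-to-fine.
            for y in range(r, h - r, 8):
                for x in range(r, w - r, 8):
                    patch = sharp[y - r:y + r + 1, x - r:x + r + 1]
                    pred = np.sum(patch * psf)        # predicted blurred pixel
                    err = blurred[y, x] - pred        # instantaneous LMS error
                    psf += mu * err * patch           # LMS coefficient update
            psf = np.clip(psf, 0, None)
            psf /= psf.sum()                          # keep the PSF normalized
        return psf
    ```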

  2. Fast Flood damage estimation coupling hydraulic modeling and Multisensor Satellite data

    NASA Astrophysics Data System (ADS)

    Fiorini, M.; Rudari, R.; Delogu, F.; Candela, L.; Corina, A.; Boni, G.

    2011-12-01

    Damage estimation requires a good representation of the elements at risk and their vulnerability, knowledge of the flooded area extension and a description of the hydraulic forcing. In this work the real-time use of a simplified two-dimensional hydraulic model constrained by satellite-retrieved flooded areas is analyzed. The main features of such a model are computational speed and simple start-up, with no need to insert complex information beyond a subset of simplified boundary and initial conditions. Those characteristics allow the model to be fast enough to be used in real time for the simulation of flooding events. The model fills the gap of information left by single satellite scenes of flooded area, allowing for the estimation of the maximum flooding extension and magnitude. The static information provided by earth observation (like SAR extension of flooded areas at a certain time) is interpreted in a dynamically consistent way, and very useful hydraulic information (e.g., water depth, water speed and the evolution of flooded areas) is provided. This information is merged with satellite identification of elements exposed to risk, characterized in terms of their vulnerability to floods, in order to obtain fast estimates of flood damages. The model has been applied to several flooding events that occurred worldwide. Amongst the other activations, events in the Mediterranean area such as Veneto (IT) (October 2010), Basilicata (IT) (March 2011) and Shkoder (January 2010 and December 2010) are considered and compared with larger types of floods like the one in Queensland in December 2010.

  3. 18 CFR 1301.19 - Appeals on initial adverse agency determination on correction or amendment.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 2 2010-04-01 2010-04-01 false Appeals on initial... Water Resources TENNESSEE VALLEY AUTHORITY PROCEDURES Privacy Act § 1301.19 Appeals on initial adverse.... If the reviewing official finds good cause for an extension, TVA will inform the appellant in writing...

  4. Estimation of stochastic volatility by using Ornstein-Uhlenbeck type models

    NASA Astrophysics Data System (ADS)

    Mariani, Maria C.; Bhuiyan, Md Al Masum; Tweneboah, Osei K.

    2018-02-01

    In this study, we develop a technique for estimating the stochastic volatility (SV) of a financial time series by using Ornstein-Uhlenbeck type models. Using the daily closing prices from developed and emergent stock markets, we conclude that the incorporation of stochastic volatility into the time-varying parameter estimation, via Maximum Likelihood Estimation, significantly improves the forecasting performance. Furthermore, our estimation algorithm is feasible with large data sets and has good convergence properties.
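
    The abstract does not give the estimation details; as a hedged sketch, an Ornstein-Uhlenbeck model can be fit by maximum likelihood using its exact Gaussian transition density, applied here to a log-volatility proxy series (the file name and starting values are placeholders):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def ou_negloglik(params, x, dt):
        """Exact Gaussian negative log-likelihood of an OU process sampled at spacing dt."""
        theta, mu, sigma = params
        if theta <= 0 or sigma <= 0:
            return np.inf
        a = np.exp(-theta * dt)
        mean = mu + (x[:-1] - mu) * a                 # conditional mean
        var = sigma**2 * (1 - a**2) / (2 * theta)     # conditional variance
        resid = x[1:] - mean
        return 0.5 * np.sum(np.log(2 * np.pi * var) + resid**2 / var)

    # Fit to a log-volatility proxy, e.g. log absolute daily returns:
    # prices = np.loadtxt("close.csv")                        # hypothetical input
    # logvol = np.log(np.abs(np.diff(np.log(prices))) + 1e-8)
    # fit = minimize(ou_negloglik, x0=[1.0, logvol.mean(), 1.0],
    #                args=(logvol, 1.0), method="Nelder-Mead")
    ```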

  5. The influence of school demographic factors and perceived student discrimination on delinquency trajectory in adolescence.

    PubMed

    Le, Thao N; Stockdale, Gary

    2011-10-01

    The purpose of this study was to examine the effects of school demographic factors and youth's perception of discrimination on delinquency in adolescence and into young adulthood for African American, Asian, Hispanic, and white racial/ethnic groups. Using data from the National Longitudinal Study of Adolescent Health (Add Health), models testing the effect of school-related variables on delinquency trajectories were evaluated for the four racial/ethnic groups using Mplus 5.21 statistical software. Results revealed that greater student ethnic diversity and perceived discrimination, but not teacher ethnic diversity, resulted in higher initial delinquency estimates at 13 years of age for all groups. However, except for African Americans, having a greater proportion of female teachers in the school decreased initial delinquency estimates. For African Americans and whites, a larger school size also increased the initial estimates. Additionally, lower socioeconomic status increased the initial estimates for whites, and being born in the United States increased the initial estimates for Asians and Hispanics. Finally, regardless of the initial delinquency estimate at age 13 and the effect of the school variables, all groups eventually converged to extremely low delinquency in young adulthood, at the age of 21 years. Educators and public policy makers seeking to prevent and reduce delinquency can modify individual risks by modifying characteristics of the school environment. Policies that promote respect for diversity and intolerance toward discrimination, as well as training to help teachers recognize the precursors and signs of aggression and/or violence, may also facilitate a positive school environment, resulting in lower delinquency. Copyright © 2011 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  6. Dual Extended Kalman Filter for the Identification of Time-Varying Human Manual Control Behavior

    NASA Technical Reports Server (NTRS)

    Popovici, Alexandru; Zaal, Peter M. T.; Pool, Daan M.

    2017-01-01

    A Dual Extended Kalman Filter was implemented for the identification of time-varying human manual control behavior. Two filters that run concurrently were used, a state filter that estimates the equalization dynamics, and a parameter filter that estimates the neuromuscular parameters and time delay. Time-varying parameters were modeled as a random walk. The filter successfully estimated time-varying human control behavior in both simulated and experimental data. Simple guidelines are proposed for the tuning of the process and measurement covariance matrices and the initial parameter estimates. The tuning was performed on simulation data, and when applied on experimental data, only an increase in measurement process noise power was required in order for the filter to converge and estimate all parameters. A sensitivity analysis to initial parameter estimates showed that the filter is more sensitive to poor initial choices of neuromuscular parameters than equalization parameters, and bad choices for initial parameters can result in divergence, slow convergence, or parameter estimates that do not have a real physical interpretation. The promising results when applied to experimental data, together with its simple tuning and low dimension of the state-space, make the use of the Dual Extended Kalman Filter a viable option for identifying time-varying human control parameters in manual tracking tasks, which could be used in real-time human state monitoring and adaptive human-vehicle haptic interfaces.
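
    A toy scalar analogue of the dual-filter structure (a state filter plus a random-walk parameter filter); this is not the authors' human-operator model, which estimates several neuromuscular parameters and a time delay, and the noise settings below are arbitrary assumptions:

    ```python
    import numpy as np

    def dual_kf(y, q_x=0.01, q_a=1e-4, r=0.1):
        """Toy dual filter for y_k = x_k + v,  x_k = a * x_{k-1} + w,  unknown a.

        Two filters run side by side each step: a state filter that holds the
        current parameter estimate fixed, and a parameter filter that models
        a as a random walk and uses the latest state estimate in its
        (linearized) measurement model.
        """
        x, Px = 0.0, 1.0        # state estimate and variance
        a, Pa = 0.5, 1.0        # parameter estimate and variance
        for yk in y:
            # --- state filter (standard KF, parameter frozen at current a) ---
            x_pred, P_pred = a * x, a * Px * a + q_x
            Kx = P_pred / (P_pred + r)
            x_prev = x                          # keep previous state posterior
            x = x_pred + Kx * (yk - x_pred)
            Px = (1 - Kx) * P_pred
            # --- parameter filter (random-walk a; measurement y ≈ a * x_prev) ---
            Pa = Pa + q_a                       # random-walk prediction
            H = x_prev                          # d(y)/d(a) at the state estimate
            Ka = Pa * H / (H * Pa * H + r + q_x)
            a = a + Ka * (yk - a * x_prev)
            Pa = (1 - Ka * H) * Pa
        return x, a                             # final state and parameter estimates
    ```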

  7. General relativistic satellite astrometry. II. Modeling parallax and proper motion

    NASA Astrophysics Data System (ADS)

    de Felice, F.; Bucciarelli, B.; Lattanzi, M. G.; Vecchiato, A.

    2001-07-01

    The non-perturbative general relativistic approach to global astrometry introduced by de Felice et al. is here extended to account for the star motions on the Schwarzschild celestial sphere. A new expression of the observables, i.e. angular distances among stars, is provided, which takes into account the effects of parallax and proper motions. This dynamical model is then tested on an end-to-end simulation of the global astrometry mission GAIA. The results confirm the findings of our earlier work, which applied to the case of a static (angular coordinates only) sphere. In particular, measurements of large arcs among stars (each measurement good to ~100 μarcsec, as expected for V ~ 17 mag stars) repeated over an observing period comparable to the mission lifetime foreseen for GAIA can be modeled to yield estimates of positions, parallaxes, and annual proper motions good to ~15 μarcsec. This second round of experiments confirms, within the limitations of the simulation and the assumptions of the current relativistic model, that the space-borne global astrometry initiated with Hipparcos can be pushed down to the 10^-5 arcsec accuracy level proposed with the GAIA mission. Finally, the simplified case we have solved can be used as reference for testing the limiting behavior of more realistic models as they become available.

  8. High Temperature Tensile Properties of Unidirectional Hi-Nicalon/Celsian Composites In Air

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, John Z.; Bansal, Narottam P.

    2000-01-01

    High temperature tensile properties of unidirectional BN/SiC-coated Hi-Nicalon SiC fiber reinforced celsian matrix composites have been measured from room temperature to 1200 C (2190 F) in air. Young's modulus, the first matrix cracking stress, and the ultimate strength decreased from room temperature to 1200 C (2190 F). The applicability of various micromechanical models, in predicting room temperature values of various mechanical properties for this CMC, has also been investigated. The simple rule of mixtures produced an accurate estimate of the primary composite modulus. The first matrix cracking stress estimated from ACK theory was in good agreement with the experimental value. The modified fiber bundle failure theory of Evans gave a good estimate of the ultimate strength.
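
    For reference, the rule of mixtures mentioned above is a one-line estimate of the longitudinal composite modulus; the fibre and matrix numbers below are illustrative placeholders, not the measured properties from the paper:

    ```python
    # Rule of mixtures for the longitudinal modulus of a unidirectional composite:
    # E_c = V_f * E_f + (1 - V_f) * E_m, where V_f is the fibre volume fraction.
    E_f, E_m, V_f = 270.0, 96.0, 0.42   # GPa, GPa, fraction (illustrative values)
    E_c = V_f * E_f + (1 - V_f) * E_m
    print(f"Estimated composite modulus: {E_c:.1f} GPa")
    ```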

  9. An Integrated Approach to Indoor and Outdoor Localization

    DTIC Science & Technology

    2017-04-17

    A two-step process is proposed that performs an initial localization estimate, followed by particle filter based tracking. Initial localization is performed using WiFi and image observations. For tracking we ... source. ... mapped, it is possible to use them for localization [20, 21, 22]. Haverinen et al. show that these fields could be used with a particle filter to ...

  10. Integration and Analysis of Neighbor Discovery and Link Quality Estimation in Wireless Sensor Networks

    PubMed Central

    Radi, Marjan; Dezfouli, Behnam; Abu Bakar, Kamalrulnizam; Abd Razak, Shukor

    2014-01-01

    Network connectivity and link quality information are the fundamental requirements of wireless sensor network protocols to perform their desired functionality. Most of the existing discovery protocols have focused only on the neighbor discovery problem, while only a few of them provide integrated neighbor search and link estimation. As these protocols require careful parameter adjustment before network deployment, they cannot provide scalable and accurate network initialization in large-scale dense wireless sensor networks with random topology. Furthermore, the performance of these protocols has not been fully evaluated yet. In this paper, we perform a comprehensive simulation study on the efficiency of employing adaptive protocols compared to the existing nonadaptive protocols for initializing sensor networks with random topology. In this regard, we propose adaptive network initialization protocols which integrate the initial neighbor discovery with the link quality estimation process to initialize large-scale dense wireless sensor networks without requiring any parameter adjustment before network deployment. To the best of our knowledge, this work is the first attempt to provide a detailed simulation study on the performance of integrated neighbor discovery and link quality estimation protocols for initializing sensor networks. This study can help system designers to determine the most appropriate approach for different applications. PMID:24678277

  11. 78 FR 57405 - Agency Information Collection Activities: Transportation Entry and Manifest of Goods Subject to...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-18

    ... keepers from the collection of information (a total capital/startup costs and operations and maintenance.... Estimated Time per Response: 10 minutes. Estimated Total Annual Burden Hours: 896,400 hours. Dated...

  12. Constitutive Modeling of Porcine Liver in Indentation Using 3D Ultrasound Imaging

    PubMed Central

    Jordan, P.; Socrate, S.; Zickler, T.E.; Howe, R.D.

    2009-01-01

    In this work we present an inverse finite-element modeling framework for constitutive modeling and parameter estimation of soft tissues using full-field volumetric deformation data obtained from 3D ultrasound. The finite-element model is coupled to full-field visual measurements by regularization springs attached at nodal locations. The free ends of the springs are displaced according to the locally estimated tissue motion and the normalized potential energy stored in all springs serves as a measure of model-experiment agreement for material parameter optimization. We demonstrate good accuracy of estimated parameters and consistent convergence properties on synthetically generated data. We present constitutive model selection and parameter estimation for perfused porcine liver in indentation and demonstrate that a quasilinear viscoelastic model with shear modulus relaxation offers good model-experiment agreement in terms of indenter displacement (0.19 mm RMS error) and tissue displacement field (0.97 mm RMS error). PMID:19627823

  13. 40 CFR 89.319 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... approved in advance by the Administrator. (1) Follow good engineering practices for initial instrument.... (2) Optimize the FID's response on the most common operating range. The response is to be optimized... different hydrocarbon species that are expected to be in the exhaust. Good engineering judgment is to be...

  14. Integrated Corridor Management (ICM) Initiative : ICM Surveillance and Detection Requirements for Arterial and Transit Networks

    DOT National Transportation Integrated Search

    2008-10-01

    The primary objective of the ICM Initiative is to demonstrate how Intelligent Transportation System (ITS) technologies can efficiently and proactively facilitate the movement of people and goods through major transportation corridors that comprise a ...

  15. British Thoracic Society Guideline for the initial outpatient management of pulmonary embolism

    PubMed Central

    Howard, Luke S; Barden, Steven; Condliffe, Robin; Connolly, Vincent; Davies, Chris; Donaldson, James; Everett, Bernard; Free, Catherine; Horner, Daniel; Hunter, Laura; Kaler, Jasvinder; Nelson-Piercy, Catherine; O’Dowd, Emma; Patel, Raj; Preston, Wendy; Sheares, Karen; Tait, Campbell

    2018-01-01

    The following is a summary of the recommendations and good practice points for the BTS Guideline for the initial outpatient management of pulmonary embolism. Please refer to the full guideline for full information about each section.

  16. Create a good learning environment and motivate active learning enthusiasm

    NASA Astrophysics Data System (ADS)

    Bi, Weihong; Fu, Guangwei; Fu, Xinghu; Zhang, Baojun; Liu, Qiang; Jin, Wa

    2017-08-01

    In view of the current poor initiative in learning among undergraduates, the idea of creating a good learning environment and motivating enthusiasm for active learning is proposed. In practice, a professional tutor is assigned and a professional introduction course is opened for college freshmen. This promotes communication between professional teachers and students as early as possible, and guides students to understand and engage with professional knowledge from the outset. Practice results show that these measures can improve students' interest and initiative in learning, so that active learning and self-learning become habits in the classroom.

  17. MANAGEMENT PROGRAMS FOR REDUCING RISKS OF ASTHMA IN CHILDREN

    EPA Science Inventory

    This paper reviews available national cost of asthma estimates and updates them to 1997, accounting for increases in prices of medical goods and services, changes in the usage of asthma-related medical goods and services, and changes in asthma prevalence and mortality. Available ...

  18. 40 CFR 1065.125 - Engine intake air.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... engines with multiple intakes with separate humidity measurements at each intake, use a flow-weighted average humidity for NOX corrections. If individual flows of each intake are not measured, use good engineering judgment to estimate a flow-weighted average humidity. (3) Temperature. Good engineering judgment...

  19. Estimation of Organic Vapor Breakthrough in Humidified Activated Carbon Beds: -Application of Wheeler-Jonas Equation, NIOSH MultiVapor™ and RBT (Relative Breakthrough Time)

    PubMed Central

    Abiko, Hironobu; Furuse, Mitsuya; Takano, Tsuguo

    2016-01-01

    Objectives: In the use of activated carbon beds as adsorbents for various types of organic vapor in respirator gas filters, water adsorption of the bed and test gas humidity are expected to alter the accuracy in the estimation of breakthrough data. There is increasing interest in the effects of moisture on estimation methods, and this study has investigated the effects with actual breakthrough data. Methods: We prepared several activated carbon beds preconditioned by equilibration with moisture at different relative humidities (RH=40%-70%) and a constant temperature of 20°C. Then, we measured breakthrough curves in the early region of breakthrough time for 10 types of organic vapor, and investigated the effects of moisture on estimation using the Wheeler-Jonas equation, the simulation software NIOSH MultiVapor™ 2.2.3, and RBT (Relative Breakthrough Time) proposed by Tanaka et al. Results: The Wheeler-Jonas equation showed good accordance with breakthrough curves at all RH in this study. However, the correlation coefficient decreased gradually with increasing RH regardless of type of organic vapor. Estimation of breakthrough time by MultiVapor showed good accordance with experimental data at RH=50%. In contrast, it showed discordance at high RH (>50%). RBTs reported previously were consistent with experimental data at RH=50%. On the other hand, the values of RBT changed markedly with increasing RH. Conclusions: The results of each estimation method showed good accordance with experimental data under comparatively dry conditions (RH≤50%). However, there were discrepancies under high humidified conditions, and further studies are warranted. PMID:27725483
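
    A minimal sketch of a Wheeler-Jonas breakthrough-time calculation of the kind applied above; the example numbers are placeholders, and humidity effects enter only through the capacity and rate parameters supplied by the user:

    ```python
    import numpy as np

    def wheeler_jonas_tb(We, W, Q, c_in, c_out, rho_b, kv):
        """Breakthrough time (min) from the Wheeler-Jonas equation.

        We    : equilibrium adsorption capacity (g vapor / g carbon)
        W     : mass of carbon in the bed (g)
        Q     : volumetric flow rate (cm^3/min)
        c_in  : inlet vapor concentration (g/cm^3)
        c_out : breakthrough concentration (g/cm^3)
        rho_b : bulk density of the packed bed (g/cm^3)
        kv    : overall adsorption rate coefficient (1/min)
        """
        capacity_term = We * W / (Q * c_in)
        kinetic_term = (We * rho_b / (kv * c_in)) * np.log((c_in - c_out) / c_out)
        return capacity_term - kinetic_term

    # Illustrative numbers only (humidity shortens the usable capacity We):
    # print(wheeler_jonas_tb(We=0.3, W=50, Q=30000, c_in=1e-6, c_out=1e-8,
    #                        rho_b=0.45, kv=2000))
    ```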

  20. Using aircraft measurements to estimate the magnitude and uncertainty of the shortwave direct radiative forcing of southern African biomass burning aerosol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magi, Brian; Fu, Q.; Redemann, Jens

    2008-03-13

    We estimate the shortwave, diurnally-averaged direct radiative forcing (RF) of the biomass burning aerosol characterized by measurements made from the University of Washington (UW) research aircraft during the Southern African Regional Science Initiative in August and September 2000 (SAFARI-2000). We describe the methodology used to arrive at the best estimates of the measurement-based RF and discuss the confidence intervals of the estimates of RF that arise from uncertainties in measurements and assumptions necessary to describe the aerosol optical properties. We apply the methodology to the UW aircraft vertical profiles and estimate that the top of the atmosphere RF (RFtoa) ranges from -1.5±3.2 to -14.4±3.5 W m^-2, while the surface RF (RFsfc) ranges from -10.5±2.4 to -81.3±7.5 W m^-2. These estimates imply that the aerosol RF of the atmosphere (RFatm) ranges from 5.0±2.3 to 73.3±11.0 W m^-2. We compare some of the estimates to RF that we estimate using Aerosol Robotic Network (AERONET) aerosol optical properties, and show that the agreement is good for RFtoa, but poor for RFsfc. We also show that linear models accurately describe the relationship of RF with the aerosol optical depth at a wavelength of 550 nm (τ550). This relationship is known as the radiative forcing efficiency (RFE) and we find that RFtoa (unlike RFatm and RFsfc) depends not only on variations in τ550, but that the linear model itself is dependent on the magnitude of τ550. We then apply the models for RFE to daily τ550 derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) satellite to estimate the RF over southern Africa from March 2000 to December 2006. Using the combination of UW and MODIS data, we find that the annual RFtoa, RFatm, and RFsfc over the region is -4.7±2.7 W m^-2, 11.4±5.7 W m^-2, and -18.3±5.8 W m^-2, respectively.

  1. Parent-child communication and marijuana initiation: evidence using discrete-time survival analysis.

    PubMed

    Nonnemaker, James M; Silber-Ashley, Olivia; Farrelly, Matthew C; Dench, Daniel

    2012-12-01

    This study supplements existing literature on the relationship between parent-child communication and adolescent drug use by exploring whether parental and/or adolescent recall of specific drug-related conversations differentially impact youth's likelihood of initiating marijuana use. Using discrete-time survival analysis, we estimated the hazard of marijuana initiation using a logit model to obtain an estimate of the relative risk of initiation. Our results suggest that parent-child communication about drug use is either not protective (no effect) or - in the case of youth reports of communication - potentially harmful (leading to increased likelihood of marijuana initiation). Copyright © 2012 Elsevier Ltd. All rights reserved.
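
    Schematically, discrete-time survival analysis expands each respondent into person-period records and fits a logit model for the per-period hazard of initiation; the variable names below are illustrative, not the study's actual covariates:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def person_period(ages_at_risk, age_init, talked):
        """Expand one respondent into person-period rows for discrete-time survival.

        ages_at_risk : iterable of ages at which the youth was observed at risk
        age_init     : age of first marijuana use (np.inf if never initiated)
        talked       : 1 if parent-child drug communication was reported, else 0
        """
        rows, events = [], []
        for age in ages_at_risk:
            if age > age_init:
                break                     # no longer at risk after initiation
            rows.append([age, talked])
            events.append(1 if age == age_init else 0)
        return rows, events

    # Stack person-period rows from all respondents, then fit the logit hazard:
    # X, y = np.array(all_rows), np.array(all_events)
    # model = LogisticRegression().fit(X, y)
    # exp(coefficient) on the communication indicator approximates the relative
    # odds of initiating in a given period, the quantity discussed above.
    ```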

  2. The reliability of the Glasgow Coma Scale: a systematic review.

    PubMed

    Reith, Florence C M; Van den Brande, Ruben; Synnot, Anneliese; Gruen, Russell; Maas, Andrew I R

    2016-01-01

    The Glasgow Coma Scale (GCS) provides a structured method for assessment of the level of consciousness. Its derived sum score is applied in research and adopted in intensive care unit scoring systems. Controversy exists on the reliability of the GCS. The aim of this systematic review was to summarize evidence on the reliability of the GCS. A literature search was undertaken in MEDLINE, EMBASE and CINAHL. Observational studies that assessed the reliability of the GCS, expressed by a statistical measure, were included. Methodological quality was evaluated with the consensus-based standards for the selection of health measurement instruments checklist and its influence on results considered. Reliability estimates were synthesized narratively. We identified 52 relevant studies that showed significant heterogeneity in the type of reliability estimates used, patients studied, setting and characteristics of observers. Methodological quality was good (n = 7), fair (n = 18) or poor (n = 27). In good quality studies, kappa values were ≥0.6 in 85%, and all intraclass correlation coefficients indicated excellent reliability. Poor quality studies showed lower reliability estimates. Reliability for the GCS components was higher than for the sum score. Factors that may influence reliability include education and training, the level of consciousness and type of stimuli used. Only 13% of studies were of good quality and inconsistency in reported reliability estimates was found. Although the reliability was adequate in good quality studies, further improvement is desirable. From a methodological perspective, the quality of reliability studies needs to be improved. From a clinical perspective, a renewed focus on training/education and standardization of assessment is required.

  3. Climate sensitivity uncertainty: when is good news bad?

    PubMed

    Freeman, Mark C; Wagner, Gernot; Zeckhauser, Richard J

    2015-11-28

    Climate change is real and dangerous. Exactly how bad it will get, however, is uncertain. Uncertainty is particularly relevant for estimates of one of the key parameters: equilibrium climate sensitivity--how eventual temperatures will react as atmospheric carbon dioxide concentrations double. Despite significant advances in climate science and increased confidence in the accuracy of the range itself, the 'likely' range has been 1.5-4.5°C for over three decades. In 2007, the Intergovernmental Panel on Climate Change (IPCC) narrowed it to 2-4.5°C, only to reverse its decision in 2013, reinstating the prior range. In addition, the 2013 IPCC report removed prior mention of 3°C as the 'best estimate'. We interpret the implications of the 2013 IPCC decision to lower the bottom of the range and excise a best estimate. Intuitively, it might seem that a lower bottom would be good news. Here we ask: when might apparently good news about climate sensitivity in fact be bad news in the sense that it lowers societal well-being? The lowered bottom value also implies higher uncertainty about the temperature increase, definitely bad news. Under reasonable assumptions, both the lowering of the lower bound and the removal of the 'best estimate' may well be bad news. © 2015 The Author(s).

  4. Is What Is Good for General Motors Good for Architecture?

    ERIC Educational Resources Information Center

    Myrick, Richard; And Others

    1966-01-01

    Problems of behavioral evaluation and determination of initial building stimuli are discussed in terms of architectural analysis. Application of management research techniques requires problem and goal definition. Analysis of both lower and higher order needs is contingent upon these definitions. Lower order needs relate to more abstract…

  5. Governance Frameworks for International Public Goods: The Case of Concerted Entrepreneurship

    ERIC Educational Resources Information Center

    Andersson, Thomas; Formica, Piero

    2007-01-01

    In the "participation age", emerging cross-border, transnational communities driven by innovation and entrepreneurship initiatives--in short, international entrepreneurial communities--give impetus to the rise of international public goods. With varying intensity, a non-voting international mobile public--still a small but an increasing fraction…

  6. New formulations for tsunami runup estimation

    NASA Astrophysics Data System (ADS)

    Kanoglu, U.; Aydin, B.; Ceylan, N.

    2017-12-01

    We evaluate shoreline motion and maximum runup in two ways. First, we use linear shallow water-wave equations over a sloping beach and solve them as an initial-boundary value problem, similar to the nonlinear solution of Aydın and Kanoglu (2017, Pure Appl. Geophys., https://doi.org/10.1007/s00024-017-1508-z). The methodology we present here is simple; it involves eigenfunction expansion and, hence, avoids integral transform techniques. We then use several different types of initial wave profiles with and without initial velocity, estimate shoreline properties and confirm the classical runup invariance between linear and nonlinear theories. Second, we use the nonlinear shallow water-wave solution of Kanoglu (2004, J. Fluid Mech. 513, 363-372) to estimate maximum runup. Kanoglu (2004) presented a simple integral solution for the nonlinear shallow water-wave equations using the classical Carrier and Greenspan transformation, and further reduced shoreline position and velocity to a simpler integral formulation. In addition, Tinti and Tonini (2005, J. Fluid Mech. 535, 33-64) defined an initial condition in a very convenient form for near-shore events. We use a Tinti and Tonini (2005) type initial condition in Kanoglu's (2004) shoreline integral solution, which leads to further simplified estimates for shoreline position and velocity, i.e. an algebraic relation. We then use this algebraic runup estimate to investigate the effect of earthquake source parameters on maximum runup and present results similar to Sepulveda and Liu (2016, Coast. Eng. 112, 57-68).

  7. Viscosity-adjusted estimation of pressure head and pump flow with quasi-pulsatile modulation of rotary blood pump for a total artificial heart.

    PubMed

    Yurimoto, Terumi; Hara, Shintaro; Isoyama, Takashi; Saito, Itsuro; Ono, Toshiya; Abe, Yusuke

    2016-09-01

    Estimation of pressure and flow has been an important subject in developing implantable artificial hearts. To realize real-time viscosity-adjusted estimation of pressure head and pump flow for a total artificial heart, we propose the table estimation method with quasi-pulsatile modulation of a rotary blood pump, in which systolic high-flow and diastolic low-flow phases are generated. The table estimation method utilizes three kinds of tables: viscosity, pressure and flow tables. Viscosity is estimated from the characteristic that the differential value in motor speed between the systolic and diastolic phases varies depending on viscosity. The potential of this estimation method was investigated using a mock circulation system. Glycerin solution diluted with salty water was used to adjust the viscosity of the fluid. In verification of this method using continuous flow data, fairly good estimation was possible when the differential pulse width modulation (PWM) value of the motor between the systolic and diastolic phases was high. In estimation under quasi-pulsatile conditions, inertia correction was applied and fairly good estimation was again possible when the differential PWM value was high, consistent with the verification results using continuous flow data. In the experiment on real-time estimation applying a moving average to the estimated viscosity, fair estimation was possible when the differential PWM value was high, showing that real-time viscosity-adjusted estimation of pressure head and pump flow would be possible with this novel estimation method when the differential PWM value is set high.

  8. Gaussian Decomposition of Laser Altimeter Waveforms

    NASA Technical Reports Server (NTRS)

    Hofton, Michelle A.; Minster, J. Bernard; Blair, J. Bryan

    1999-01-01

    We develop a method to decompose a laser altimeter return waveform into its Gaussian components assuming that the position of each Gaussian within the waveform can be used to calculate the mean elevation of a specific reflecting surface within the laser footprint. We estimate the number of Gaussian components from the number of inflection points of a smoothed copy of the laser waveform, and obtain initial estimates of the Gaussian half-widths and positions from the positions of its consecutive inflection points. Initial amplitude estimates are obtained using a non-negative least-squares method. To reduce the likelihood of fitting the background noise within the waveform and to minimize the number of Gaussians needed in the approximation, we rank the "importance" of each Gaussian in the decomposition using its initial half-width and amplitude estimates. The initial parameter estimates of all Gaussians ranked "important" are optimized using the Levenberg-Marquardt method. If the sum of the Gaussians does not approximate the return waveform to a prescribed accuracy, then additional Gaussians are included in the optimization procedure. The Gaussian decomposition method is demonstrated on data collected by the airborne Laser Vegetation Imaging Sensor (LVIS) in October 1997 over the Sequoia National Forest, California.
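
    A condensed sketch of the decomposition loop described above (inflection-point initialization followed by Levenberg-Marquardt refinement); the importance ranking and noise-floor handling from the paper are omitted:

    ```python
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.ndimage import gaussian_filter1d

    def decompose_waveform(t, w, smooth=2.0):
        """Fit a laser return waveform as a sum of Gaussians.

        The initial component count, centers and widths come from inflection
        points of a smoothed copy of the waveform, as in the method above.
        """
        ws = gaussian_filter1d(w, smooth)
        # Inflection points: sign changes of the second difference.
        d2 = np.diff(np.sign(np.diff(ws, 2)))
        infl = np.where(d2 != 0)[0] + 1
        # Pair consecutive inflection points -> initial center and half-width.
        p0 = []
        for a, b in zip(infl[:-1:2], infl[1::2]):
            mu0 = 0.5 * (t[a] + t[b])
            sig0 = max(0.5 * (t[b] - t[a]), 1e-3)
            amp0 = max(w[(a + b) // 2], 1e-3)
            p0 += [amp0, mu0, sig0]
        if not p0:
            return np.empty((0, 3))

        def model(p):
            out = np.zeros_like(t, dtype=float)
            for A, mu, sig in np.reshape(p, (-1, 3)):
                out += A * np.exp(-0.5 * ((t - mu) / sig) ** 2)
            return out

        # Levenberg-Marquardt refinement of all initial parameters at once.
        fit = least_squares(lambda p: model(p) - w, p0, method="lm")
        return np.reshape(fit.x, (-1, 3))   # rows of (amplitude, center, width)
    ```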

  9. An Early History of the Rural Community College Initiative: Reflections on the Past and Implications for the Future

    ERIC Educational Resources Information Center

    Kennamer, Mike; Katsinas, Stephen G.

    2011-01-01

    The $17.2 million Rural Community College Initiative (RCCI) demonstration grant program funded by the Ford Foundation which ran from 1994 to 2001 represents the largest philanthropic project specifically aimed at rural community colleges in United States history. While a good deal of literature has been published about this initiative, much was…

  10. The Generation, Radiation and Prediction of Supersonic Jet Noise. Volume 1

    DTIC Science & Technology

    1978-10-01

    standard, Gaussian correlation function model can yield a good noise spectrum prediction (at 90°), but the corresponding axial source distributions do not ... forms for the turbulence cross-correlation function. Good agreement was obtained between measured and calculated far-field noise spectra. However, the ... complementary error function profile (3.63) was found to provide a good fit to the axial velocity distribution for a wide range of Mach numbers in the initial ...

  11. Integrated corridor management (ICM) initiative : ICM surveillance and detection needs analysis for the transit data gap.

    DOT National Transportation Integrated Search

    2008-11-01

    The primary objective of the ICM Initiative is to demonstrate how Intelligent Transportation System (ITS) technologies can efficiently and proactively facilitate the movement of people and goods through major transportation corridors that comprise a ...

  12. Integrated corridor management (ICM) initiative : ICM surveillance and detection requirements needs analysis for the arterial data gap.

    DOT National Transportation Integrated Search

    2008-11-01

    The primary objective of the ICM Initiative is to demonstrate how Intelligent Transportation System (ITS) technologies can efficiently and proactively facilitate the movement of people and goods through major transportation corridors that comprise a ...

  13. Initial system design method for non-rotationally symmetric systems based on Gaussian brackets and Nodal aberration theory.

    PubMed

    Zhong, Yi; Gross, Herbert

    2017-05-01

    Freeform surfaces play important roles in improving the imaging performance of off-axis optical systems. However, for some systems with high requirements in specifications, the structure of the freeform surfaces can be very complicated and the number of freeform surfaces can be large. That brings challenges in fabrication and increases the cost. Therefore, achieving a good initial system with minimum aberrations and a reasonable structure before implementing freeform surfaces is essential for optical designers. Existing initial system design methods are limited to certain types of systems; a universal tool or method to achieve a good initial system efficiently is very important. In this paper, based on Nodal aberration theory and the system design method using Gaussian brackets, the initial system design method is extended from rotationally symmetric systems to general non-rotationally symmetric systems. The design steps are introduced and, on this basis, two off-axis three-mirror systems are pre-designed using spherical surfaces. The primary aberrations are minimized using a nonlinear least-squares solver. This work provides insight and guidance for the initial system design of off-axis mirror systems.

  14. Assessing efficiency of spatial sampling using combined coverage analysis in geographical and feature spaces

    NASA Astrophysics Data System (ADS)

    Hengl, Tomislav

    2015-04-01

    Efficiency of spatial sampling largely determines the success of model building. This is especially important for geostatistical mapping, where an initial sampling plan should provide a good representation or coverage of both geographical space (defined by the study area mask map) and feature space (defined by the multi-dimensional covariates). Otherwise the model will need to extrapolate and, hence, the overall uncertainty of the predictions will be high. In many cases, geostatisticians use point data sets which were produced using unknown or inconsistent sampling algorithms. Many point data sets in environmental sciences suffer from spatial clustering and systematic omission of feature space. But how can these 'representation' problems be quantified, and how can this knowledge be incorporated into model building? The author has developed a generic function called 'spsample.prob' (Global Soil Information Facilities package for R) which simultaneously determines (effective) inclusion probabilities as an average between kernel density estimation (geographical spreading of points; analysed using the spatstat package in R) and MaxEnt analysis (feature space spreading of points; analysed using the MaxEnt software used primarily for species distribution modelling). The output 'iprob' map indicates whether the sampling plan has systematically missed some important locations and/or features, and can also be used as an input for geostatistical modelling, e.g. as a weight map for geostatistical model fitting. The spsample.prob function can also be used in combination with accessibility analysis (costs of field survey are usually a function of distance from the road network, slope and land cover) to allow for simultaneous maximization of average inclusion probabilities and minimization of total survey costs. The author postulates that, by estimating effective inclusion probabilities using combined geographical and feature space analysis, and by comparing survey costs to representation efficiency, an optimal initial sampling plan can be produced which satisfies both criteria: (a) good representation (i.e. within a tolerance threshold), and (b) minimized survey costs. This sampling analysis framework could become especially interesting for generating sampling plans in new areas, e.g. areas for which no previous spatial prediction model exists. The presentation includes data processing demos with the standard soil sampling data sets Ebergotzen (Germany) and Edgeroi (Australia), also available via the GSIF package.

  15. Hoarse voice in adults: an evidence-based approach to the 12 minute consultation.

    PubMed

    Syed, I; Daniels, E; Bleach, N R

    2009-02-01

    The hoarse voice is a common presentation in the adult ENT clinic. It is estimated that otolaryngology/voice clinics receive over 50 000 patients with dysphonia each year. Good vocal function is estimated to be required for around 1/3 of the labour force to fulfil their job requirements. The assessment and management of the patient with a hoarse voice is potentially a complex and protracted process, as the aetiology is often multi-factorial. This article provides a guide for the clinician in the general ENT clinic to make a concise, thorough assessment of the hoarse patient and engage in an evidence-based approach to investigation and management. A literature search was performed on 4 October 2008 using the EMBASE, MEDLINE and Cochrane databases, with the subject headings hoarse voice or dysphonia in combination with diagnosis, management, investigation, treatment, intervention and surgery. General vocal hygiene is beneficial for non-organic dysphonia, but the evidence base for individual components is poor. There is a good evidence base for the use of voice therapy as first-line treatment of organic dysphonia such as vocal fold nodules and polyps. There is little evidence for surgical intervention as first-line therapy for most common benign vocal fold lesions. Surgery is, however, the treatment of choice for hoarseness due to papillomatosis. Both CO2 laser and microdissection are equally acceptable modalities for surgical resection of common benign vocal fold lesions. Laryngopharyngeal reflux is commonly cited as a cause of hoarseness, but the evidence base for treatment with gastric acid suppression is poor. Despite the widespread use of proton pump inhibitors for treating laryngopharyngeal reflux, there is high-quality evidence to suggest that they are no more effective than placebo. A concise and thorough approach to assessment in the general ENT clinic will provide the diagnosis and facilitate the management of the hoarse voice in the majority of cases. Voice therapy is an important tool that should be utilised in the general ENT clinic and should not be restricted to the specialist voice clinic. If there is no improvement after initial measures, the larynx appears normal and/or the patient has failed initial speech & language therapy, referral to a specialist voice clinic may be helpful. More research is still required, particularly with regard to laryngopharyngeal reflux, which is often cited as an important cause of hoarseness but is still poorly understood.

  16. Estimating added sugars in US consumer packaged goods: An application to beverages in 2007-08.

    PubMed

    Ng, Shu Wen; Bricker, Gregory; Li, Kuo-Ping; Yoon, Emily Ford; Kang, Jiyoung; Westrich, Brian

    2015-11-01

    This study developed a method to estimate added sugar content in consumer packaged goods (CPG) that can keep pace with the dynamic food system. A team including registered dietitians, a food scientist and programmers developed a batch-mode ingredient matching and linear programming (LP) approach to estimate the amount of each ingredient needed in a given product to produce a nutrient profile similar to that reported on its nutrition facts label (NFL). Added sugar content was estimated for 7021 products available in 2007-08 that contain sugar from ten beverage categories. Of these, flavored waters had the lowest added sugar amounts (4.3g/100g), while sweetened dairy and dairy alternative beverages had the smallest percentage of added sugars (65.6% of Total Sugars; 33.8% of Calories). Estimation validity was determined by comparing LP estimated values to NFL values, as well as in a small validation study. LP estimates appeared reasonable compared to NFL values for calories, carbohydrates and total sugars, and performed well in the validation test; however, further work is needed to obtain more definitive conclusions on the accuracy of added sugar estimates in CPGs. As nutrition labeling regulations evolve, this approach can be adapted to test for potential product-specific, category-level, and population-level implications.
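
    A hedged sketch of the batch LP step: minimize the L1 deviation between the label values and the nutrient totals implied by the ingredient amounts, subject to non-negativity, a 100 g total, and the descending-by-weight order of a US ingredient list. The matrix contents are assumed inputs, not the study's ingredient database:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def estimate_ingredients(N, label, n_ing):
        """LP sketch: grams of each ingredient per 100 g of product.

        N     : (n_nutrients, n_ing) nutrient content per gram of each ingredient
        label : (n_nutrients,) nutrition facts label values per 100 g of product
        """
        n_nut = N.shape[0]
        # Variables: [x (ingredient grams), e+ (n_nut), e- (n_nut)].
        c = np.concatenate([np.zeros(n_ing), np.ones(2 * n_nut)])  # minimize L1 error
        # Nutrient balance: N x + e+ - e- = label.
        A_eq = np.hstack([N, np.eye(n_nut), -np.eye(n_nut)])
        b_eq = label.copy()
        # Ingredients sum to 100 g.
        A_eq = np.vstack([A_eq, np.concatenate([np.ones(n_ing), np.zeros(2 * n_nut)])])
        b_eq = np.append(b_eq, 100.0)
        # Ordering: x[i+1] <= x[i] (ingredient list is descending by weight).
        A_ub = np.zeros((n_ing - 1, n_ing + 2 * n_nut))
        for i in range(n_ing - 1):
            A_ub[i, i], A_ub[i, i + 1] = -1.0, 1.0
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n_ing - 1),
                      A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
        return res.x[:n_ing]   # estimated grams of each ingredient per 100 g
    ```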

  17. Estimating added sugars in US consumer packaged goods: An application to beverages in 2007–08

    PubMed Central

    Ng, Shu Wen; Bricker, Gregory; Li, Kuo-ping; Yoon, Emily Ford; Kang, Jiyoung; Westrich, Brian

    2015-01-01

    This study developed a method to estimate added sugar content in consumer packaged goods (CPG) that can keep pace with the dynamic food system. A team including registered dietitians, a food scientist and programmers developed a batch-mode ingredient matching and linear programming (LP) approach to estimate the amount of each ingredient needed in a given product to produce a nutrient profile similar to that reported on its nutrition facts label (NFL). Added sugar content was estimated for 7021 products available in 2007–08 that contain sugar from ten beverage categories. Of these, flavored waters had the lowest added sugar amounts (4.3g/100g), while sweetened dairy and dairy alternative beverages had the smallest percentage of added sugars (65.6% of Total Sugars; 33.8% of Calories). Estimation validity was determined by comparing LP estimated values to NFL values, as well as in a small validation study. LP estimates appeared reasonable compared to NFL values for calories, carbohydrates and total sugars, and performed well in the validation test; however, further work is needed to obtain more definitive conclusions on the accuracy of added sugar estimates in CPGs. As nutrition labeling regulations evolve, this approach can be adapted to test for potential product-specific, category-level, and population-level implications. PMID:26273127

  18. Estimation of pelvis kinematics in level walking based on a single inertial sensor positioned close to the sacrum: validation on healthy subjects with stereophotogrammetric system.

    PubMed

    Buganè, Francesca; Benedetti, Maria Grazia; D'Angeli, Valentina; Leardini, Alberto

    2014-10-21

    Kinematics measures from inertial sensors have a value in the clinical assessment of pathological gait, to track quantitatively the outcome of interventions and rehabilitation programs. To become a standard tool for clinicians, it is necessary to evaluate their capability to provide reliable and comprehensible information, possibly by comparing this with that provided by traditional gait analysis. The aim of this study was to assess by state-of-the-art gait analysis the reliability of a single inertial device attached to the sacrum to measure pelvis kinematics during level walking. The output signals of the three-axis gyroscope were processed to estimate the spatial orientation of the pelvis in the sagittal (tilt angle), frontal (obliquity) and transverse (rotation) anatomical planes. These estimated angles were compared with those provided by an 8-camera stereophotogrammetric system utilizing a standard experimental protocol, with four markers on the pelvis. This was observed in a group of sixteen healthy subjects while performing three repetitions of level walking along a 10 meter walkway at slow, normal and fast speeds. The determination coefficient, the scale factor and the bias of a linear regression model were calculated to represent the differences between the angular patterns from the two measurement systems. For the intra-subject variability, one volunteer was asked to repeat walking at normal speed 10 times. A good match was observed for the obliquity and rotation angles. For the tilt angle, the pattern and range of motion were similar, but a bias was observed, due to the different initial inclination angle in the sagittal plane of the inertial sensor with respect to the pelvis anatomical frame. Good intra-subject consistency was also shown by the small variability of the pelvic angles as estimated by the new system, confirmed by very small values of standard deviation for all three angles. These results suggest that this inertial device is a reliable alternative to stereophotogrammetric systems for pelvis kinematics measurements, in addition to being easier to use and cheaper. The device can provide to the patient and to the examiner reliable feedback in real-time during routine clinical tests.
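
    As a simplified sketch of turning gyroscope rates into pelvic angle curves (the paper's actual processing is not specified beyond the summary above), naive integration with a per-stride linear drift correction looks like the following; the assumption that orientation repeats at each foot-strike is ours:

    ```python
    import numpy as np

    def gyro_to_angles(omega, fs, stride_idx):
        """Integrate 3-axis angular rate (rad/s, pelvis frame) into angle curves.

        omega      : (n_samples, 3) angular rates about the pelvis axes
        fs         : sampling frequency (Hz)
        stride_idx : indices of successive foot-strikes (stride boundaries)
        """
        dt = 1.0 / fs
        angles = np.cumsum(omega, axis=0) * dt        # naive integration
        corrected = angles.copy()
        for a, b in zip(stride_idx[:-1], stride_idx[1:]):
            drift = angles[b] - angles[a]             # accumulated error per stride
            ramp = np.linspace(0.0, 1.0, b - a)[:, None]
            # Remove a linear trend so each stride starts and ends aligned.
            corrected[a:b] = angles[a:b] - angles[a] - ramp * drift
        return np.degrees(corrected)   # tilt, obliquity, rotation in degrees
    ```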

  19. Development of good modelling practice for physiologically based pharmacokinetic models for use in risk assessment: The first steps

    EPA Science Inventory

    The increasing use of tissue dosimetry estimated using pharmacokinetic models in chemical risk assessments in multiple countries necessitates the need to develop internationally recognized good modelling practices. These practices would facilitate sharing of models and model eva...

  20. Spectrum-based estimators of the bivariate Hurst exponent

    NASA Astrophysics Data System (ADS)

    Kristoufek, Ladislav

    2014-12-01

    We discuss two alternative spectrum-based estimators of the bivariate Hurst exponent in the power-law cross-correlations setting, the cross-periodogram and local X-Whittle estimators, as generalizations of their univariate counterparts. Because the spectrum-based estimators depend on the part of the spectrum taken into consideration during estimation, a simulation study is provided showing the performance of the estimators under a varying bandwidth parameter as well as varying correlation between the processes and their specification. These estimators are less biased than the existing averaged periodogram estimator, which, however, has slightly lower variance. The spectrum-based estimators can serve as a good complement to the popular time domain estimators.
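
    As a rough illustration of the cross-periodogram idea, the sketch below regresses the log absolute cross-periodogram on log frequency over the lowest Fourier frequencies, using the convention that the cross-spectrum scales as lambda^(1 - 2 H_xy) near the origin; the bandwidth choice and the bias corrections studied in the paper are not reproduced:

```python
import numpy as np

def cross_periodogram_hurst(x, y, m):
    """Estimate the bivariate Hurst exponent from the slope of the
    log absolute cross-periodogram over the m lowest Fourier
    frequencies: |I_xy(lambda)| ~ lambda^(1 - 2 H_xy) near zero."""
    n = len(x)
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    fx = np.fft.fft(x - np.mean(x))
    fy = np.fft.fft(y - np.mean(y))
    I_xy = fx[1:m + 1] * np.conj(fy[1:m + 1]) / (2 * np.pi * n)
    slope, _ = np.polyfit(np.log(freqs), np.log(np.abs(I_xy)), 1)
    return 0.5 * (1.0 - slope)

# Hypothetical check on two correlated random walks (integrated
# white noise has H ~ 1.5, so the estimate should land near 1.5).
rng = np.random.default_rng(0)
e = rng.standard_normal((2, 4096))
x = np.cumsum(e[0])
y = np.cumsum(0.8 * e[0] + 0.6 * e[1])
print(cross_periodogram_hurst(x, y, m=int(4096 ** 0.5)))
```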

  1. Genetic and environmental influences on cannabis use initiation and problematic use: a meta-analysis of twin studies

    PubMed Central

    Verweij, Karin J.H.; Zietsch, Brendan P.; Lynskey, Michael T.; Medland, Sarah E.; Neale, Michael C.; Martin, Nicholas G.; Boomsma, Dorret I.; Vink, Jacqueline M.

    2009-01-01

    Background Because cannabis use is associated with social, physical and psychological problems, it is important to know what causes some individuals to initiate cannabis use and a subset of those to become problematic users. Previous twin studies found evidence for both genetic and environmental influences on vulnerability, but due to considerable variation in the results it is difficult to draw clear conclusions regarding the relative magnitude of these influences. Method A systematic literature search identified 28 twin studies on cannabis use initiation and 24 studies on problematic cannabis use. The proportion of total variance accounted for by genes (A), shared environment (C), and unshared environment (E) in (1) initiation of cannabis use and (2) problematic cannabis use was calculated by averaging corresponding A, C, and E estimates across studies from independent cohorts and weighting by sample size. Results For cannabis use initiation, A, C, and E estimates were 48%, 25% and 27% in males and 40%, 39% and 21% in females. For problematic cannabis use A, C, and E estimates were 51%, 20% and 29% for males and 59%, 15% and 26% for females. Confidence intervals of these estimates are considerably narrower than those in the source studies. Conclusions Our results indicate that vulnerability to both cannabis use initiation and problematic use was significantly influenced by A, C, and E. There was a trend for a greater C and lesser A component for cannabis initiation as compared to problematic use for females. PMID:20402985
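
    The pooling step described above is plain sample-size weighting; a minimal sketch with made-up cohort numbers (not values from the meta-analysis):

```python
# Average A, C, E estimates across independent cohorts, weighting by
# sample size, as in the meta-analysis. Numbers are illustrative only.
studies = [
    {"n": 1200, "A": 0.45, "C": 0.30, "E": 0.25},
    {"n": 800,  "A": 0.55, "C": 0.18, "E": 0.27},
    {"n": 2000, "A": 0.47, "C": 0.26, "E": 0.27},
]
total_n = sum(s["n"] for s in studies)
pooled = {k: sum(s["n"] * s[k] for s in studies) / total_n
          for k in ("A", "C", "E")}
print(pooled)  # weighted A, C, E; the components still sum to 1
```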

  2. Industrial point source CO2 emission strength estimation with aircraft measurements and dispersion modelling.

    PubMed

    Carotenuto, Federico; Gualtieri, Giovanni; Miglietta, Franco; Riccio, Angelo; Toscano, Piero; Wohlfahrt, Georg; Gioli, Beniamino

    2018-02-22

    CO2 remains the greenhouse gas that contributes most to anthropogenic global warming, and the evaluation of its emissions is of major interest for both research and regulatory purposes. Emission inventories generally provide quite reliable estimates of CO2 emissions. However, because of intrinsic uncertainties associated with these estimates, it is of great importance to validate emission inventories against independent estimates. This paper describes an integrated approach combining aircraft measurements and a puff dispersion modelling framework by considering a CO2 industrial point source, located in Biganos, France. CO2 density measurements were obtained by applying the mass balance method, while CO2 emission estimates were derived by implementing the CALMET/CALPUFF model chain. For the latter, three meteorological initializations were used: (i) WRF-modelled outputs initialized by ECMWF reanalyses; (ii) WRF-modelled outputs initialized by CFSR reanalyses and (iii) local in situ observations. Governmental inventorial data were used as reference for all applications. The strengths and weaknesses of the different approaches and how they affect emission estimation uncertainty were investigated. The mass balance based on aircraft measurements was quite successful in capturing the point source emission strength (at worst with a 16% bias), while the accuracy of the dispersion modelling, markedly when using ECMWF initialization through the WRF model, was only slightly lower (estimation with an 18% bias). The analysis will help in highlighting some methodological best practices that can be used as guidelines for future experiments.
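
    A minimal sketch of the aircraft mass-balance idea, assuming the CO2 density and wind fields have already been interpolated onto a crosswind "screen" downwind of the stack; the grid, background value and plume below are hypothetical:

```python
import numpy as np

def mass_balance_flux(conc, background, wind_speed, dy, dz):
    """Integrate the CO2 enhancement times the wind component normal
    to the flight screen over the crosswind/vertical plane.

    conc, wind_speed: 2D arrays (n_z, n_y) of CO2 density (kg m^-3)
    and normal wind (m s^-1) on the screen; background: upwind CO2
    density (kg m^-3); dy, dz: grid spacing (m). Returns kg s^-1.
    """
    enhancement = conc - background
    return float(np.sum(enhancement * wind_speed) * dy * dz)

# Hypothetical screen: a 2 km x 1 km plane at 100 m resolution.
ny, nz, dy, dz = 20, 10, 100.0, 100.0
conc = np.full((nz, ny), 0.72e-3)      # ~background CO2 density
conc[2:5, 8:12] += 0.05e-3             # plume enhancement
wind = np.full((nz, ny), 5.0)          # 5 m/s wind normal to the screen
print(mass_balance_flux(conc, 0.72e-3, wind, dy, dz), "kg/s")
```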

  3. Influence of Initial Inclined Surface Crack on Estimated Residual Fatigue Lifetime of Railway Axle

    NASA Astrophysics Data System (ADS)

    Náhlík, Luboš; Pokorný, Pavel; Ševčík, Martin; Hutař, Pavel

    2016-11-01

    Railway axles are subjected to cyclic loading, which can lead to fatigue failure. For the safe operation of railway axles, a damage tolerance approach taking into account a possible defect on the railway axle surface is often required. This contribution deals with an estimation of the residual fatigue lifetime of a railway axle with an initial inclined surface crack. A 3D numerical model of an inclined semi-elliptical surface crack in a railway axle was developed, and its curved propagation through the axle was simulated by the finite element method. The presence of a press-fitted wheel in the vicinity of the initial crack was taken into account. A typical loading spectrum of a railway axle was considered, and the residual fatigue lifetime was estimated by the NASGRO approach. Material properties of the typical axle steel EA4T were considered in the numerical calculations and lifetime estimation.
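
    The NASGRO relation adds threshold and toughness terms to the basic crack-growth law; as a simplified stand-in, the sketch below integrates a plain Paris law cycle block by cycle block, with illustrative constants (not the EA4T data or the load spectrum of the paper):

```python
import numpy as np

def residual_life_paris(a0, a_crit, C, m, delta_sigma, Y=0.65):
    """Integrate da/dN = C * (dK)^m until the crack reaches a_crit,
    with dK = Y * delta_sigma * sqrt(pi * a). Simplified stand-in for
    the NASGRO relation (no threshold or crack-closure terms).

    a0, a_crit in metres; delta_sigma in MPa; C, m are Paris constants
    for dK in MPa*sqrt(m). All values below are illustrative.
    """
    a, cycles, dN = a0, 0, 1000        # integrate in 1000-cycle blocks
    while a < a_crit:
        dK = Y * delta_sigma * np.sqrt(np.pi * a)
        a += C * dK ** m * dN
        cycles += dN
    return cycles

# Hypothetical constants and a 2 mm initial surface crack.
print(residual_life_paris(a0=2e-3, a_crit=30e-3, C=3e-12, m=3.0,
                          delta_sigma=80.0))
```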

  4. Estimating satellite pose and motion parameters using a novelty filter and neural net tracker

    NASA Technical Reports Server (NTRS)

    Lee, Andrew J.; Casasent, David; Vermeulen, Pieter; Barnard, Etienne

    1989-01-01

    A system for determining the position, orientation and motion of a satellite with respect to a robotic spacecraft using video data is advanced. This system utilizes two levels of pose and motion estimation: an initial system which provides coarse estimates of pose and motion, and a second system which uses the coarse estimates and further processing to provide finer pose and motion estimates. The present paper emphasizes the initial coarse pose and motion estimation subsystem. This subsystem utilizes novelty detection and filtering for locating novel parts and a neural net tracker to track these parts over time. Results of using this system on a sequence of images of a spin-stabilized satellite are presented.

  5. The prevalence and effects of adult attention-deficit/hyperactivity disorder (ADHD) on the performance of workers: results from the WHO World Mental Health Survey Initiative.

    PubMed

    de Graaf, R; Kessler, R C; Fayyad, J; ten Have, M; Alonso, J; Angermeyer, M; Borges, G; Demyttenaere, K; Gasquet, I; de Girolamo, G; Haro, J M; Jin, R; Karam, E G; Ormel, J; Posada-Villa, J

    2008-12-01

    To estimate the prevalence and workplace consequences of adult attention-deficit/hyperactivity disorder (ADHD). An ADHD screen was administered to 18-44-year-old respondents in 10 national surveys in the WHO World Mental Health (WMH) Survey Initiative (n = 7075 in paid or self-employment; response rate 45.9-87.7% across countries). Blinded clinical reappraisal interviews were administered in the USA to calibrate the screen. Days out of role were measured using the WHO Disability Assessment Schedule (WHO-DAS). Questions were also asked about ADHD treatment. An average of 3.5% of workers in the 10 countries were estimated to meet DSM-IV criteria for adult ADHD (inter-quartile range: 1.3-4.9%). ADHD was more common among males than females and less common among professionals than other workers. ADHD was associated with a statistically significant 22.1 annual days of excess lost role performance compared to otherwise similar respondents without ADHD. No difference in the magnitude of this effect was found by occupation, education, age, gender or partner status. This effect was most pronounced in Colombia, Italy, Lebanon and the USA. Although only a small minority of workers with ADHD ever received treatment for this condition, higher proportions were treated for comorbid mental/substance disorders. ADHD is a relatively common condition among working people in the countries studied and is associated with high work impairment in these countries. This impairment, in conjunction with the low treatment rate and the availability of cost-effective therapies, suggests that ADHD would be a good candidate for targeted workplace screening and treatment programs.

  6. Predictors of perceived asthma control among patients managed in primary care clinics.

    PubMed

    Eilayyan, Owis; Gogovor, Amede; Mayo, Nancy; Ernst, Pierre; Ahmed, Sara

    2015-01-01

    To estimate the extent to which symptom status, physical activity, beliefs about medications, self-efficacy, emotional status, and healthcare utilization predict perceived asthma control over a period of 16 months among a primary care population. The current study is a secondary analysis of data from a longitudinal study that examined health outcomes of asthma among participants recruited from primary care clinics. Path analysis, based on the Wilson and Cleary and International Classification of Functioning, Disability and Health frameworks, was used to estimate the predictors of perceived asthma control. The path analysis identified initial perceived asthma control (β = 0.43, p < 0.0001), symptoms (β = 0.35, p < 0.0001), physical activity (β = 0.27, p < 0.0001), and self-efficacy (β = 0.29, p < 0.0001) as significant predictors of perceived asthma control (total effects, i.e., direct and indirect), while emotional status (β = 0.08, p = 0.03) was a significant indirect predictor through physical activity. The model explained 24% of the variance of perceived asthma control. Overall, the model fits the data well (χ² = 6.65, df = 6, p value = 0.35, root-mean-square error of approximation = 0.02, Comparative Fit Index = 0.999, and weighted root-mean-square residual = 0.27). Initial perceived asthma control, current symptom status, physical activity, and self-efficacy can be used to identify individuals likely to have good perceived asthma control in the future. Emotional status also has an impact on perceived asthma control mediated through physical activity and should be considered when planning patient management. Identifying these predictors is important to help the care team tailor interventions that will allow individuals to optimally manage their asthma, to prevent exacerbations, to prevent other respiratory-related chronic disease, and to maximize quality of life.

  7. Cosmological Perturbation Theory and the Spherical Collapse model - I. Gaussian initial conditions

    NASA Astrophysics Data System (ADS)

    Fosalba, Pablo; Gaztanaga, Enrique

    1998-12-01

    We present a simple and intuitive approximation for solving the perturbation theory (PT) of small cosmic fluctuations. We consider only the spherically symmetric or monopole contribution to the PT integrals, which yields the exact result for tree-graphs (i.e. at leading order). We find that the non-linear evolution in Lagrangian space is then given by a simple local transformation over the initial conditions, although it is not local in Euler space. This transformation is found to be described by the spherical collapse (SC) dynamics, as it is the exact solution in the shearless (and therefore local) approximation in Lagrangian space. Taking advantage of this property, it is straightforward to derive the one-point cumulants, xi_J, for both the unsmoothed and smoothed density fields to arbitrary order in the perturbative regime. To leading-order this reproduces, and provides us with a simple explanation for, the exact results obtained by Bernardeau. We then show that the SC model leads to accurate estimates for the next corrective terms when compared with the results derived in the exact perturbation theory making use of the loop calculations. The agreement is within a few per cent for the hierarchical ratios S_J = xi_J / xi_2^(J-1). We compare our analytic results with N-body simulations, which turn out to be in very good agreement up to scales where sigma ~ 1. A similar treatment is presented to estimate higher order corrections in the Zel'dovich approximation. These results represent a powerful and readily usable tool to produce analytical predictions that describe the gravitational clustering of large-scale structure in the weakly non-linear regime.

  8. The Properties and Fate of the Galactic Center G2 Cloud

    NASA Astrophysics Data System (ADS)

    Shcherbakov, Roman V.

    2014-03-01

    The object G2 was recently discovered descending into the gravitational potential of the supermassive black hole (BH) Sgr A*. We test the photoionized cloud scenario, determine the cloud properties, and estimate the emission during the pericenter passage. The incident radiation is computed starting from the individual stars at the locations of G2. The radiative transfer calculations are conducted with the CLOUDY code, and the 2011 broadband and line luminosities are fitted. The spherically symmetric, tidally distorted, and magnetically arrested cloud shapes are tested with both the interstellar medium dust and 10 nm graphite dust. The best-fitting magnetically arrested model has the initial density n_init = 1.8 × 10^5 cm^-3, initial radius R_init = 2.2 × 10^15 cm = 17 mas, mass m_cloud = 4 M_Earth, and dust relative abundance A = 0.072. It provides a good fit to 2011 data, is consistent with the luminosities in 2004 and 2008, and agrees with the observed size. We revise down the predicted radio and X-ray bow shock luminosities to be below the quiescent level of Sgr A*, which readily leads to non-detection in agreement with observations. The magnetic energy dissipation in the cloud at the pericenter coupled with more powerful irradiation may lead to an infrared source with an apparent magnitude m_L' ≈ 13.0. No shock into the cloud and no X-rays are expected from cloud squeezing by the ambient gas pressure. A larger than previously estimated cloud mass m_cloud = (4-20) M_Earth may produce a higher accretion rate and a brighter state of Sgr A* as the debris descend onto the BH.

  9. Radar cross section models for limited aspect angle windows

    NASA Astrophysics Data System (ADS)

    Robinson, Mark C.

    1992-12-01

    This thesis presents a method for building Radar Cross Section (RCS) models of aircraft based on static data taken from limited aspect angle windows. These models statistically characterize static RCS. This is done to show that a limited number of samples can be used to effectively characterize static aircraft RCS. The optimum models are determined by performing both a Kolmogorov and a Chi-Square goodness-of-fit test comparing the static RCS data with a variety of probability density functions (pdf) that are known to be effective at approximating the static RCS of aircraft. The optimum parameter estimator is likewise selected by the goodness-of-fit tests when the pdf parameters obtained by the Maximum Likelihood Estimator (MLE) and the Method of Moments (MoM) estimators differ.
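
    A minimal sketch of the fitting-and-scoring loop, with a lognormal as the candidate pdf and a Kolmogorov-Smirnov statistic as a single goodness-of-fit score (the thesis uses both Kolmogorov and Chi-Square tests); the data are synthetic stand-ins for static RCS samples:

```python
import numpy as np
from scipy import stats

# Fit one candidate pdf by two estimators and keep the better fit.
rng = np.random.default_rng(1)
rcs = rng.lognormal(mean=1.0, sigma=0.5, size=500)   # stand-in data

# Maximum likelihood fit of a lognormal (floc=0 fixes the shift).
shape_mle, _, scale_mle = stats.lognorm.fit(rcs, floc=0)

# Method-of-moments fit: match the lognormal mean and variance.
m, v = rcs.mean(), rcs.var()
sigma2 = np.log(1.0 + v / m ** 2)
shape_mom, scale_mom = np.sqrt(sigma2), m * np.exp(-0.5 * sigma2)

for label, (s, sc) in [("MLE", (shape_mle, scale_mle)),
                       ("MoM", (shape_mom, scale_mom))]:
    stat = stats.kstest(rcs, "lognorm", args=(s, 0, sc)).statistic
    print(label, "KS statistic:", round(stat, 4))
```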

  10. Robust point matching via vector field consensus.

    PubMed

    Jiayi Ma; Ji Zhao; Jinwen Tian; Yuille, Alan L; Zhuowen Tu

    2014-04-01

    In this paper, we propose an efficient algorithm, called vector field consensus, for establishing robust point correspondences between two sets of points. Our algorithm starts by creating a set of putative correspondences which can contain a very large number of false correspondences, or outliers, in addition to a limited number of true correspondences (inliers). Next, we solve for correspondence by interpolating a vector field between the two point sets, which involves estimating a consensus of inlier points whose matching follows a nonparametric geometrical constraint. We formulate this as maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose nonparametric geometrical constraints on the correspondence, as a prior distribution, using Tikhonov regularizers in a reproducing kernel Hilbert space. MAP estimation is performed by the EM algorithm which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We illustrate this method on data sets in 2D and 3D and demonstrate that it is robust to a very large number of outliers (even up to 90%). We also show that in the special case where there is an underlying parametric geometrical model (e.g., the epipolar line constraint), we obtain better results than standard alternatives like RANSAC if a large number of outliers are present. This suggests a two-stage strategy, where we use our nonparametric model to reduce the size of the putative set and then apply a parametric variant of our approach to estimate the geometric parameters. Our algorithm is computationally efficient and we provide code for others to use it. In addition, our approach is general and can be applied to other problems, such as learning with a badly corrupted training data set.

  11. The Model Parameter Estimation Experiment (MOPEX): Its structure, connection to other international initiatives and future directions

    USGS Publications Warehouse

    Wagener, T.; Hogue, T.; Schaake, J.; Duan, Q.; Gupta, H.; Andreassian, V.; Hall, A.; Leavesley, G.

    2006-01-01

    The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrological models and in land surface parameterization schemes connected to atmospheric models. The MOPEX science strategy involves: database creation, a priori parameter estimation methodology development, parameter refinement or calibration, and the demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrological basins in the United States (US) and in other countries. This database is being continuously expanded to include basins from various hydroclimatic regimes throughout the world. MOPEX research has largely been driven by a series of international workshops that have brought interested hydrologists and land surface modellers together to exchange knowledge and experience in developing and applying parameter estimation techniques. With its focus on parameter estimation, MOPEX plays an important role in the international context of other initiatives such as GEWEX, HEPEX, PUB and PILPS. This paper outlines the MOPEX initiative, discusses its role in the scientific community, and briefly states future directions.

  12. Battery state-of-charge estimation using approximate least squares

    NASA Astrophysics Data System (ADS)

    Unterrieder, C.; Zhang, C.; Lunglmayr, M.; Priewasser, R.; Marsili, S.; Huemer, M.

    2015-03-01

    In recent years, much effort has been spent to extend the runtime of battery-powered electronic applications. In order to improve the utilization of the available cell capacity, high precision estimation approaches for battery-specific parameters are needed. In this work, an approximate least squares estimation scheme is proposed for the estimation of the battery state-of-charge (SoC). The SoC is determined based on the prediction of the battery's electromotive force. The proposed approach allows for an improved re-initialization of the Coulomb counting (CC) based SoC estimation method. Experimental results for an implementation of the estimation scheme on a fuel gauge system on chip are illustrated. Implementation details and design guidelines are presented. The performance of the presented concept is evaluated for realistic operating conditions (temperature effects, aging, standby current, etc.). For the considered test case of a GSM/UMTS load current pattern of a mobile phone, the proposed method is able to re-initialize the CC-method with a high accuracy, while state-of-the-art methods fail to perform a re-initialization.
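
    A minimal sketch of the re-initialization idea, assuming a toy linear EMF-vs-SoC curve and a hypothetical relaxation transient; the paper's approximate least squares machinery is reduced here to a least-squares line fitted to the settling terminal voltage:

```python
import numpy as np

# Predict the electromotive force from the relaxing terminal voltage,
# then invert a tabulated EMF-vs-SoC curve to reset the Coulomb
# counter. Curve and samples are hypothetical.
soc_grid = np.linspace(0.0, 1.0, 11)              # 0 .. 100 %
emf_grid = 3.4 + 0.8 * soc_grid                   # toy EMF model (V)

t = np.linspace(0.0, 60.0, 61)                    # 60 s of relaxation
v = 3.9 - 0.05 * np.exp(-t / 20.0)                # measured voltage

# Least-squares line through the late, nearly settled samples,
# extrapolated to the end of the window as the EMF prediction.
a, b = np.polyfit(t[30:], v[30:], 1)
emf_pred = a * t[-1] + b

soc0 = np.interp(emf_pred, emf_grid, soc_grid)    # invert EMF(SoC)
print(f"re-initialized SoC: {soc0:.2%}")

# Coulomb counting then integrates current from this anchor:
# soc(t) = soc0 - integral(i dt) / capacity.
```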

  13. Modern psychometrics for assessing achievement goal orientation: a Rasch analysis.

    PubMed

    Muis, Krista R; Winne, Philip H; Edwards, Ordene V

    2009-09-01

    A program of research is needed that assesses the psychometric properties of instruments designed to quantify students' achievement goal orientations, to clarify inconsistencies across previous studies and to provide a stronger basis for future research. We conducted traditional psychometric and modern Rasch-model analyses of the Achievement Goals Questionnaire (AGQ, Elliot & McGregor, 2001) and the Patterns of Adaptive Learning Scale (PALS, Midgley et al., 2000) to provide an in-depth analysis of the two most popular instruments in educational psychology. For Study 1, 217 undergraduate students enrolled in educational psychology courses participated. Thirty-four were male and 181 were female (two did not respond). Participants completed the AGQ in the context of their educational psychology class. For Study 2, 126 undergraduate students enrolled in educational psychology courses participated. Thirty were male and 95 were female (one did not respond). Participants completed the PALS in the context of their educational psychology class. Traditional psychometric assessments of the AGQ and PALS replicated previous studies. For both, reliability estimates ranged from good to very good for raw subscale scores, and fit for the models of goal orientations was good. Based on traditional psychometrics, the AGQ and PALS are valid and reliable indicators of achievement goals. Rasch analyses revealed that estimates of reliability for items were very good but respondent ability estimates varied from poor to good for both the AGQ and PALS. These findings indicate that items validly and reliably reflect a group's aggregate goal orientation, but using either instrument to characterize an individual's goal orientation is hazardous.

  14. Effect of initial moisture content on the in-vessel composting under air pressure of organic fraction of municipal solid waste in Morocco

    PubMed Central

    2013-01-01

    This study aimed to evaluate the effect of initial moisture content on the in-vessel composting under air pressure of the organic fraction of municipal solid waste in Morocco in terms of internal temperature, produced gas quantity, organic matter conversion rate, and the quality of the final composts. For this purpose, an in-vessel bioreactor was designed and used to evaluate both the appropriate initial air pressure and the appropriate initial moisture content for the composting process. Five experiments were carried out with initial moisture contents of 55%, 65%, 70%, 75% and 85%. The initial air pressure and the initial moisture content of the mixture showed a significant effect on the aerobic composting. The experimental results demonstrated that for composting organic waste, relatively high moisture contents are better at achieving higher temperatures and retaining them for longer times. This study suggested that an initial moisture content of around 75%, under 0.6 bar, can be considered suitable for efficient composting of the organic fraction of municipal solid waste. These conditions allowed the maximum temperature value and a final composting product with good physicochemical properties as well as higher organic matter degradation and higher gas production. Moreover, the final compost obtained showed good maturity levels and can be used for agricultural applications. PMID:23369502

  15. Local Intrinsic Dimension Estimation by Generalized Linear Modeling.

    PubMed

    Hino, Hideitsu; Fujiki, Jun; Akaho, Shotaro; Murata, Noboru

    2017-07-01

    We propose a method for intrinsic dimension estimation. By fitting a regression model to the power-law relation between the distance from an inspection point and the number of samples inside a ball of radius equal to that distance, we estimate the goodness of fit. Then, using the maximum likelihood method, we estimate the local intrinsic dimension around the inspection point. The proposed method is shown to be comparable to conventional methods in global intrinsic dimension estimation experiments. Furthermore, we experimentally show that the proposed method outperforms a conventional local dimension estimation method.
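
    A minimal sketch of the ball-counting idea: near an inspection point, the number of samples within radius r grows like r^d. The paper refines this with a generalized linear model and maximum likelihood; the sketch keeps only the log-log regression skeleton:

```python
import numpy as np

def local_dimension(data, point, k_max=50):
    """Slope of log(count) against log(radius) around an inspection
    point, estimating the local intrinsic dimension d from the
    power-law growth count(r) ~ r^d."""
    # Sorted distances to the point; drop the zero self-distance.
    r = np.sort(np.linalg.norm(data - point, axis=1))[1:k_max + 1]
    k = np.arange(1, k_max + 1)          # count inside each radius
    slope, _ = np.polyfit(np.log(r), np.log(k), 1)
    return slope

# Hypothetical test: a 2D plane embedded in 5D space.
rng = np.random.default_rng(2)
flat = rng.standard_normal((2000, 2))
data = np.hstack([flat, np.zeros((2000, 3))])
print(local_dimension(data, data[0]))    # expect a value near 2
```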

  16. Estimation of Ecosystem Parameters of the Community Land Model with DREAM: Evaluation of the Potential for Upscaling Net Ecosystem Exchange

    NASA Astrophysics Data System (ADS)

    Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.

    2015-12-01

    Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is the estimation of plant functional type (PFT) specific parameters for sites with measurement data like NEE and application of the parameters at other sites with the same PFT and no measurements. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE time series of one year in length, or a time series of only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions. DREAM(zs) is not limited to Gaussian distributions and can condition on large time series of measurement data simultaneously. DREAM(zs) was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period. In addition, the parameter estimates were evaluated at other, independent sites situated >500 km away with the same PFT. The main conclusions are: i) simulations with estimated parameters reproduced the NEE measurement data better in the verification periods, including the annual NEE sum (23% improvement), annual NEE cycle and average diurnal NEE course (error reduction by a factor of 1.6); ii) estimated parameters based on seasonal NEE data outperformed estimated parameters based on yearly data; iii) in addition, those seasonal parameters were often also significantly different from their yearly equivalents; iv) estimated parameters were significantly different if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters improve land surface model predictions significantly at independent verification sites and for independent verification periods, so their potential for upscaling is demonstrated. However, simulation results also indicate that the estimated parameters possibly mask other model errors. This would imply that their application at climatic time scales would not improve model predictions. A central question is whether the integration of many different data streams (e.g., biomass, remotely sensed LAI) could solve the problems indicated here.

  17. Overall Economy

    ERIC Educational Resources Information Center

    Occupational Outlook Quarterly, 2010

    2010-01-01

    The economy's need for workers originates in the demand for the goods and services that they provide. So, to project employment, the Bureau of Labor Statistics (BLS) starts by projecting the gross domestic product (GDP) for 2018. GDP is the value of the final goods produced and services provided in the United States. Then, BLS estimates the…

  18. Evapotranspiration and Dual Crop Coefficients (Sonisa Sharma, Ayse Irmak, Anne Parkhurst, Elizabeth Walter-Shea and Kenneth G. Hubbard; School of Natural Resources, Civil Engineering, and Department of Statistics, University of Nebraska-Lincoln)

    NASA Astrophysics Data System (ADS)

    Sharma, S.

    2012-12-01

    Accurate estimation of water content in the crop root zone is most important for water conservation and management practices like irrigation. The objective of this study is to use the FAO-56 dual crop coefficients, the basal crop coefficient Kcb and the soil evaporation coefficient Ke, for a large corn/soybean field in the year 2005 at the Mead Turf Farm in the state of Nebraska, USA. Dual crop coefficients can be used to estimate both transpiration from crops and evaporation from soil. The Kcb has a low value of 0.15 (Kcb,ini) during the initial period, increases rapidly to a maximum of 1.14 (Kcb,mid) for the entire midseason and decreases rapidly to 0.5 at the end of the corn growing season (Kcb,end). When examined together with precipitation, the dual crop coefficient was higher following rainfall or irrigation, as expected. The data suggest that the dual crop coefficient approach gives a good estimate of water loss from well-watered crops. Irrigation can be scheduled to replace the loss of water from the crop/soil system. Similarly, comparing the measured daily ET with the ET calculated from the dual crop coefficients gives an R² of 98%. (Figure: comparison of ET calculated from the dual crop coefficient approach with weather-station ET.)
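
    A minimal sketch of the FAO-56 dual coefficient bookkeeping, ETc = (Kcb + Ke) * ETo, with Kcb interpolated piecewise-linearly across growth stages; the Kcb values follow the abstract, while the stage lengths, Ke and ETo are hypothetical:

```python
def kcb(day, l_ini=30, l_dev=40, l_mid=50, l_late=30,
        kcb_ini=0.15, kcb_mid=1.14, kcb_end=0.5):
    """Piecewise-linear basal crop coefficient over the season."""
    if day <= l_ini:
        return kcb_ini
    if day <= l_ini + l_dev:                       # development ramp
        f = (day - l_ini) / l_dev
        return kcb_ini + f * (kcb_mid - kcb_ini)
    if day <= l_ini + l_dev + l_mid:               # midseason plateau
        return kcb_mid
    f = (day - l_ini - l_dev - l_mid) / l_late     # late-season decline
    return kcb_mid + f * (kcb_end - kcb_mid)

eto, ke = 6.0, 0.25    # hypothetical reference ET (mm/day) and Ke
for day in (10, 60, 100, 145):
    print(day, round((kcb(day) + ke) * eto, 2), "mm/day")
```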

  19. Leptospirosis disease mapping with standardized morbidity ratio and Poisson-Gamma model: An analysis of Leptospirosis disease in Kelantan, Malaysia

    NASA Astrophysics Data System (ADS)

    Che Awang, Aznida; Azah Samat, Nor

    2017-09-01

    Leptospirosis is a disease caused by infection with pathogenic species from the genus Leptospira. Humans can be infected with leptospirosis through direct or indirect exposure to the urine of infected animals. The excretion of urine from an animal host that carries pathogenic Leptospira causes the soil or water to be contaminated. Therefore, people can become infected when they are exposed to contaminated soil and water through a cut in the skin or an open wound. The bacteria can also enter the human body through mucous membranes such as the nose, eyes and mouth, for example by splashing contaminated water or urine into the eyes or swallowing contaminated water or food. Currently, there is no vaccine available for the prevention or treatment of leptospirosis, but the disease can be treated if it is diagnosed early, avoiding complications. Disease risk mapping is important for the control and prevention of disease, and a good choice of statistical model will produce a good disease risk map. Therefore, the aim of this study is to estimate the relative risk for leptospirosis disease based initially on the most common statistic used in disease mapping, the Standardized Morbidity Ratio (SMR), and then on the Poisson-gamma model. This paper begins by providing a review of the SMR method and the Poisson-gamma model, which we then apply to leptospirosis data from Kelantan, Malaysia. Both results are displayed and compared using graphs, tables and maps. The results show that the Poisson-gamma model produces better relative risk estimates than the SMR method. This is because the Poisson-gamma model can overcome the drawback of the SMR, whose relative risk becomes zero when there is no observed leptospirosis case in a region. However, the Poisson-gamma model also has problems: covariate adjustment for this model is difficult, and there is no possibility of allowing for spatial correlation between risks in neighbouring areas. These problems have motivated many researchers to introduce other alternative methods for estimating the risk.
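
    A minimal sketch contrasting the two estimators, with hypothetical district counts. The Poisson-gamma (empirical Bayes) posterior mean shrinks each SMR towards the prior mean and stays positive even where no case was observed, which is exactly the drawback of the raw SMR noted above:

```python
import numpy as np

# Hypothetical district-level leptospirosis counts.
observed = np.array([0, 3, 12, 7])          # cases per district
expected = np.array([2.1, 4.0, 9.5, 6.2])   # overall rate x population

smr = observed / expected                   # zero wherever observed == 0

# Poisson-gamma: with O_i ~ Poisson(theta_i * E_i) and a
# Gamma(alpha, beta) prior on theta, the posterior mean is
# (O + alpha) / (E + beta).
alpha, beta = 2.0, 2.0                      # prior chosen for illustration
rr_pg = (observed + alpha) / (expected + beta)

print("SMR:          ", np.round(smr, 2))
print("Poisson-gamma:", np.round(rr_pg, 2))
```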

  20. Tracking a convoy of multiple targets using acoustic sensor data

    NASA Astrophysics Data System (ADS)

    Damarla, T. R.

    2003-08-01

    In this paper we present an algorithm to track a convoy of several targets in a scene using acoustic sensor array data. The tracking algorithm is based on a template of the direction of arrival (DOA) angles for the leading target. Often the first target is the closest target to the sensor array and hence the loudest, with a good signal-to-noise ratio. Several steps were used to generate a template of the DOA angle for the leading target, namely: (a) the angle at the present instant should be close to the angle at the previous instant, and (b) the angle at the present instant should be within error bounds of the predicted value based on the previous values. Once the template of the DOA angles of the leading target is developed, it is used to predict the DOA angle tracks of the remaining targets. In order to generate the tracks for the remaining targets, a track is established if the angles correspond to the initial track values of the first target. Second, the time delays between the first track and the remaining tracks are estimated at the points of highest correlation between the first track and the remaining tracks. As the vehicles move at different speeds, the tracks either compress or expand depending on whether a target is moving fast or slow compared to the first target. The expansion and compression ratios are estimated and used to estimate the predicted DOA angle values of the remaining targets. Based on these predicted DOA angles, the DOA angles obtained from the MVDR or incoherent MUSIC are assigned to the proper tracks. Several other rules were developed to avoid mixing the tracks. The algorithm was tested on data collected at Aberdeen Proving Ground with convoys of 3, 4 and 5 vehicles. Some of the vehicles are tracked and some are wheeled. The tracking algorithm results are found to be good. The results will be presented at the conference and in the paper.

  1. Hybrid method to estimate two-layered superficial tissue optical properties from simulated data of diffuse reflectance spectroscopy.

    PubMed

    Hsieh, Hong-Po; Ko, Fan-Hua; Sung, Kung-Bin

    2018-04-20

    An iterative curve fitting method has been applied in both simulation [J. Biomed. Opt. 17, 107003 (2012)] and phantom [J. Biomed. Opt. 19, 077002 (2014)] studies to accurately extract the optical properties and top layer thickness of a two-layered superficial tissue model from diffuse reflectance spectroscopy (DRS) data. This paper describes a hybrid two-step parameter estimation procedure that addresses the two main issues of the previous method: (1) high computational intensity and (2) convergence to local minima. The parameter estimation procedure contains a novel initial estimation step to obtain an initial guess, which is used by a subsequent iterative fitting step to optimize the parameter estimation. A lookup table is used in both steps to quickly obtain reflectance spectra and reduce computational intensity. On simulated DRS data, the proposed parameter estimation procedure achieved high estimation accuracy and a 95% reduction of computational time compared to previous studies. Furthermore, the proposed initial estimation step led to better convergence of the subsequent fitting step. The strategies used in the proposed procedure could benefit both the modeling and the experimental data processing not only of DRS but also of related approaches such as near-infrared spectroscopy.
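
    A minimal sketch of the hybrid two-step procedure, with a toy two-parameter spectral shape standing in for the layered diffuse-reflectance forward model: a coarse pre-computed lookup table supplies the initial guess, and an iterative fit refines it:

```python
import numpy as np
from scipy.optimize import least_squares

wl = np.linspace(450.0, 650.0, 50)                  # wavelengths (nm)

def forward(p):
    amp, slope = p                                  # toy spectral shape
    return amp * np.exp(-slope * (wl - 450.0) / 200.0)

# Step 1: initial estimation from a pre-computed lookup table.
grid = [(a, s) for a in np.linspace(0.1, 1.0, 10)
               for s in np.linspace(0.1, 2.0, 20)]
table = np.array([forward(p) for p in grid])        # built once, reused

rng = np.random.default_rng(3)
measured = forward((0.62, 0.85)) + 0.005 * rng.standard_normal(wl.size)
p0 = grid[int(np.argmin(((table - measured) ** 2).sum(axis=1)))]

# Step 2: iterative curve fitting started from the lookup result,
# which keeps the optimizer away from distant local minima.
fit = least_squares(lambda p: forward(p) - measured, p0)
print("lookup guess:", p0, "refined:", np.round(fit.x, 3))
```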

  2. Estimation of teleported and gained parameters in a non-inertial frame

    NASA Astrophysics Data System (ADS)

    Metwally, N.

    2017-04-01

    Quantum Fisher information is introduced as a measure of estimating the teleported information between two users, one of which is uniformly accelerated. We show that the final teleported state depends on the initial parameters, in addition to the gained parameters during the teleportation process. The estimation degree of these parameters depends on the value of the acceleration, the used single mode approximation (within/beyond), the type of encoded information (classic/quantum) in the teleported state, and the entanglement of the initial communication channel. The estimation degree of the parameters can be maximized if the partners teleport classical information.

  3. Preparing for Evaluation: Lessons from the Evaluability Assessment of the Teagle Foundation's College-Community Connections Initiative. Report

    ERIC Educational Resources Information Center

    Black, Kristin

    2016-01-01

    Funders, policymakers, and program leaders recognize the value of high-quality evidence. To make good use of a program evaluation, initiatives must contend with a set of fundamental questions first. Some of these are about the initiative itself: What outcomes does it seek to affect? Are daily activities in line with long-term goals? Others are…

  4. The "Good Faith" Requirement in School Desegregation Cases.

    ERIC Educational Resources Information Center

    Patin, Charles L., Jr.; Gordon, William M.

    The good-faith requirement in school desegregation was initially discussed by the United States Supreme Court in "Brown II." However, it was not until recently, in "Freeman v. Pitts," that the Court was to provide a definitive statement as to the meaning of the requirement, indicate the need for specific findings with respect…

  5. Younger and Older Adults' "Good-Enough" Interpretations of Garden-Path Sentences

    ERIC Educational Resources Information Center

    Christianson, Kiel; Williams, Carrick C.; Zacks, Rose T.; Ferreira, Fernanda

    2006-01-01

    We report 3 experiments that examined younger and older adults' reliance on "good-enough" interpretations for garden-path sentences (e.g., "While Anna dressed the baby played in the crib") as indicated by their responding "Yes" to questions probing the initial, syntactically unlicensed interpretation (e.g., "Did Anna dress the baby?"). The…

  6. Good Moments in Gestalt Therapy: A Descriptive Analysis of Two Perls Sessions.

    ERIC Educational Resources Information Center

    Boulet, Donald; And Others

    1993-01-01

    Analyzed two Gestalt therapy sessions conducted by Fritz Perls using category system for identifying in-session client behaviors valued by Gestalt therapists. Four judges independently rated 210 client statements. Found common pattern of therapeutic movement: initial phase dominated by building block good moments and second phase characterized by…

  7. 20 Suggestions for Improving the Departmental Procedures for Hiring Teachers of Sociology.

    ERIC Educational Resources Information Center

    Ewens, Bill

    Twenty suggestions are given to help university sociology departments develop procedures for hiring good teachers in the field. The first five ideas are about publicizing the position and initial screening of applications. Jobs should be announced in professional journals and at graduate departments with good reputations. Standardized forms should…

  8. Student Accommodation Projects: A Guide to PFI Contracts. Good Practice.

    ERIC Educational Resources Information Center

    Curtis, Pinsent

    This guide is intended for higher education institutions in England that are about to embark on student residential accommodation projects. It focuses on procurements under the Private Financial Initiative (PFI), a form of Public Private Partnership in the United Kingdom, but other approaches are considered. The guide draws on good practices from…

  9. Anti-Counterfeiting

    NASA Astrophysics Data System (ADS)

    Tuyls, Pim; Guajardo, Jorge; Batina, Lejla; Kerins, Tim

    Counterfeiting of goods is becoming a huge problem for our society. It not only has a global economic impact, but it also poses a serious threat to our global safety and health. Currently, global economic damage across all industries due to the counterfeiting of goods is estimated at over 600 billion annually [2]. In the United States, seizures of counterfeit goods have tripled in the last 5 years, and in Europe, over 100 million pirated and counterfeit goods were seized in 2004. Fake products cost businesses in the United Kingdom approximately 17 billion [2]. In India, 15% of fast-moving consumer goods and 38% of auto parts are counterfeit. Other industries in which many goods are counterfeited are the toy industry, content and software, cosmetics, publishing, food and beverages, tobacco, apparel, sports goods, cards, and so forth.

  10. Rewards and the evolution of cooperation in public good games.

    PubMed

    Sasaki, Tatsuya; Uchida, Satoshi

    2014-01-01

    Properly coordinating cooperation is relevant for resolving public good problems, such as clean energy and environmental protection. However, little is known about how individuals can coordinate themselves for a certain level of cooperation in large populations of strangers. In a typical situation, a consensus-building process rarely succeeds, owing to a lack of face and standing. The evolution of cooperation in this type of situation is studied here using threshold public good games, in which cooperation prevails when it is initially sufficient and otherwise perishes. While punishment is a powerful tool for shaping human behaviours, institutional punishment is often too costly to start with only a few contributors, which is another coordination problem. Here, we show that whatever the initial conditions, reward funds based on voluntary contribution can evolve. The voluntary reward paves the way for effectively overcoming the coordination problem and efficiently transforms freeloaders into cooperators with a perceived small risk of collective failure.

  11. Computational algorithm for lifetime exposure to antimicrobials in pigs using register data - the LEA algorithm.

    PubMed

    Birkegård, Anna Camilla; Andersen, Vibe Dalhoff; Halasa, Tariq; Jensen, Vibeke Frøkjær; Toft, Nils; Vigre, Håkan

    2017-10-01

    Accurate and detailed data on antimicrobial exposure in pig production are essential when studying the association between antimicrobial exposure and antimicrobial resistance. Due to difficulties in obtaining primary data on antimicrobial exposure in a large number of farms, there is a need for a robust and valid method to estimate the exposure using register data. An approach that estimates the antimicrobial exposure in every rearing period during the lifetime of a pig using register data was developed into a computational algorithm. In this approach, data from national registers on antimicrobial purchases, movements of pigs and farm demographics registered at farm level are used. The algorithm traces batches of pigs retrospectively from slaughter to the farm(s) that housed the pigs during their finisher, weaner, and piglet periods. Subsequently, the algorithm estimates the antimicrobial exposure as the number of Animal Defined Daily Doses for treatment of one kg pig in each of the rearing periods. Thus, the antimicrobial purchase data at farm level are translated into antimicrobial exposure estimates at batch level. A batch of pigs is defined here as pigs sent to slaughter on the same day from the same farm. In this study we present, validate, and optimise a computational algorithm that calculates the lifetime exposure of antimicrobials for slaughter pigs. The algorithm was evaluated by comparing the computed estimates to data on antimicrobial usage from farm records in 15 farm units. We found a good positive correlation between the two estimates. The algorithm was run for Danish slaughter pigs sent to slaughter in January to March 2015 from farms with more than 200 finishers to estimate the proportion of farms that it was applicable for. In the final process, the algorithm was successfully run for batches of pigs originating from 3026 farms with finisher units (77% of the initial population). This number can be increased if more accurate register data can be obtained. The algorithm provides a systematic and repeatable approach to estimating the antimicrobial exposure throughout the rearing period, independent of rearing site for finisher batches, as a lifetime exposure measurement.

  12. Multiple scene attitude estimator performance for LANDSAT-1

    NASA Technical Reports Server (NTRS)

    Rifman, S. S.; Monuki, A. T.; Shortwell, C. P.

    1979-01-01

    Initial results are presented to demonstrate the performance of a linear sequential estimator (Kalman filter) used to estimate a LANDSAT 1 spacecraft attitude time series defined for four scenes. With the revised estimator, a GCP-poor scene - a scene with no usable geodetic control points (GCPs) - can be rectified to higher accuracies than otherwise, based on the use of GCPs in adjacent scenes. Attitude estimation errors were determined by the use of GCPs located in the GCP-poor test scene but not used to update the Kalman filter. Initial results indicate that errors of 500 m (rms) can be attained for the GCP-poor scenes. Operational factors are related to various scenarios.
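
    A minimal sketch of a linear sequential (Kalman) estimator for a single attitude angle modelled as a random walk, updated only where a GCP measurement exists; the noise levels and measurements are hypothetical:

```python
q = 1e-8          # process noise variance per step (rad^2)
r = 1e-6          # GCP measurement noise variance (rad^2)
x, p = 0.0, 1e-4  # initial angle estimate and its variance

gcp_measurements = [2.0e-3, 2.1e-3, None, 1.9e-3, None, 2.05e-3]
for z in gcp_measurements:
    p += q                        # predict: random-walk attitude drift
    if z is not None:             # update only where a GCP is available
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)
        p *= (1.0 - k)
    print(f"angle={x:.5f} rad, var={p:.2e}")
# Scenes without usable GCPs ("GCP-poor") still carry a filtered
# attitude propagated from neighbouring scenes, as in the paper.
```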

  13. Malaria transmission rates estimated from serological data.

    PubMed Central

    Burattini, M. N.; Massad, E.; Coutinho, F. A.

    1993-01-01

    A mathematical model was used to estimate malaria transmission rates based on serological data. The model is minimally stochastic and assumes an age-dependent force of infection for malaria. The estimated transmission rates were applied to a simple compartmental model in order to mimic malaria transmission. The model showed a good capacity to retrieve serological and parasite prevalence data. PMID:8270011
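
    A minimal sketch of estimating a force of infection from age-stratified seroprevalence with the simple catalytic model P(a) = 1 - exp(-lambda a); the survey numbers are invented, and the paper itself uses an age-dependent force of infection rather than the constant one fitted here:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical age-stratified seroprevalence survey.
age = np.array([2.0, 5.0, 10.0, 20.0, 30.0, 45.0])      # mean ages (yr)
seropos = np.array([0.09, 0.22, 0.39, 0.63, 0.77, 0.89])

# Catalytic model: probability of being seropositive by age a.
model = lambda a, lam: 1.0 - np.exp(-lam * a)
(lam_hat,), _ = curve_fit(model, age, seropos, p0=[0.05])
print(f"estimated force of infection: {lam_hat:.3f} per year")
```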

  14. The application of mean field theory to image motion estimation.

    PubMed

    Zhang, J; Hanauer, G G

    1995-01-01

    Previously, Markov random field (MRF) model-based techniques have been proposed for image motion estimation. Since motion estimation is usually an ill-posed problem, various constraints are needed to obtain a unique and stable solution. The main advantage of the MRF approach is its capacity to incorporate such constraints, for instance, motion continuity within an object and motion discontinuity at the boundaries between objects. In the MRF approach, motion estimation is often formulated as an optimization problem, and two frequently used optimization methods are simulated annealing (SA) and iterative-conditional mode (ICM). Although the SA is theoretically optimal in the sense of finding the global optimum, it usually takes many iterations to converge. The ICM, on the other hand, converges quickly, but its results are often unsatisfactory due to its "hard decision" nature. Previously, the authors have applied the mean field theory to image segmentation and image restoration problems. It provides results nearly as good as SA but with much faster convergence. The present paper shows how the mean field theory can be applied to MRF model-based motion estimation. This approach is demonstrated on both synthetic and real-world images, where it produced good motion estimates.

  15. Estimating population genetic parameters and comparing model goodness-of-fit using DNA sequences with error

    PubMed Central

    Liu, Xiaoming; Fu, Yun-Xin; Maxwell, Taylor J.; Boerwinkle, Eric

    2010-01-01

    It is known that sequencing error can bias estimation of evolutionary or population genetic parameters. This problem is more prominent in deep resequencing studies because of their large sample size n, and a higher probability of error at each nucleotide site. We propose a new method based on the composite likelihood of the observed SNP configurations to infer population mutation rate θ = 4N_e μ, population exponential growth rate R, and error rate ɛ, simultaneously. Using simulation, we show the combined effects of the parameters, θ, n, ɛ, and R on the accuracy of parameter estimation. We compared our maximum composite likelihood estimator (MCLE) of θ with other θ estimators that take into account the error. The results show the MCLE performs well when the sample size is large or the error rate is high. Using parametric bootstrap, composite likelihood can also be used as a statistic for testing the model goodness-of-fit of the observed DNA sequences. The MCLE method is applied to sequence data on the ANGPTL4 gene in 1832 African American and 1045 European American individuals. PMID:19952140

  16. A method for modeling bias in a person's estimates of likelihoods of events

    NASA Technical Reports Server (NTRS)

    Nygren, Thomas E.; Morera, Osvaldo

    1988-01-01

    It is of practical importance in decision situations involving risk to train individuals to transform uncertainties into subjective probability estimates that are both accurate and unbiased. We have found that in decision situations involving risk, people often introduce subjective bias in their estimation of the likelihoods of events depending on whether the possible outcomes are perceived as being good or bad. Until now, however, the successful measurement of individual differences in the magnitude of such biases has not been attempted. In this paper we illustrate a modification of a procedure originally outlined by Davidson, Suppes, and Siegel (3) to allow for a quantitatively-based methodology for simultaneously estimating an individual's subjective utility and subjective probability functions. The procedure is now an interactive computer-based algorithm, DSS, that allows for the measurement of biases in probability estimation by obtaining independent measures of two subjective probability functions (S+ and S-) for winning (i.e., good outcomes) and for losing (i.e., bad outcomes) respectively for each individual, and for different experimental conditions within individuals. The algorithm and some recent empirical data are described.

  17. First Do No Harm. Carnegie Perspectives

    ERIC Educational Resources Information Center

    McCormick, Alexander C.

    2007-01-01

    The author expresses concern that launching an accountability initiative without careful consideration may do more harm than good. A well-designed accountability system, writes McCormick, motivates substantive change and useful diagnostic tools must not be undermined in the name of accountability. Several new college-quality initiatives offer…

  18. Brownfields: Recent federal and Massachusetts developments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abelson, N.; McCaffery, M.

    While EPA's recent efforts, including its Brownfields Action Agenda, are clearly positive developments, by far most of the action in the Brownfields area has been at the state level. The Massachusetts Clean Sites Initiative is one of more than twenty state programs adopted across the country to encourage Brownfields redevelopment. The Clean Sites Initiative is a good example of using a carrot and not only a stick to address hazardous waste problems. It is also a good example of government, the business community, and other affected stakeholders working together to develop a program that helps achieve shared goals, which is effectively a requirement in the Brownfields area.

  19. Estimation of chaotic coupled map lattices using symbolic vector dynamics

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Pei, Wenjiang; Cheung, Yiu-ming; Shen, Yi; He, Zhenya

    2010-01-01

    In [K. Wang, W.J. Pei, Z.Y. He, Y.M. Cheung, Phys. Lett. A 367 (2007) 316], an original symbolic vector dynamics based method was proposed for initial condition estimation in an additive white Gaussian noise environment. The estimation precision of that method is determined by the symbolic errors in the symbolic vector sequence obtained by symbolizing the received signal. This Letter further develops the symbolic vector dynamical estimation method. We correct symbolic errors using the backward vector and the values estimated with different symbols, and thus the estimation precision can be improved. Both theoretical and experimental results show that this algorithm enables us to recover the initial condition of a coupled map lattice exactly in both noisy and noise-free cases. We thereby provide novel analytical techniques for understanding turbulence in coupled map lattices.
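
    A minimal sketch of the underlying symbolic-dynamics idea on a single tent map (not the coupled map lattice or noisy setting of the Letter): iterating the inverse branch selected by each observed symbol backwards contracts onto the true initial condition:

```python
def tent(x):
    return 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)

def estimate_x0(symbols):
    """Backward iteration: each symbol selects an inverse branch of
    the tent map, and every step halves the uncertainty, so the
    iterate converges to the true initial condition."""
    x = 0.5                              # arbitrary seed for x_n
    for s in reversed(symbols):
        x = x / 2.0 if s == 0 else 1.0 - x / 2.0
    return x

# Generate a trajectory, symbolize it (0 if x < 0.5 else 1), estimate.
x0_true, n = 0.3141592653589793, 50
x, symbols = x0_true, []
for _ in range(n):
    symbols.append(0 if x < 0.5 else 1)
    x = tent(x)

print(estimate_x0(symbols))              # converges to x0_true
```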

  20. Relation between the structural parameters of metallic glasses at the onset crystallization temperatures and threshold values of the effective diffusion coefficients

    NASA Astrophysics Data System (ADS)

    Tkatch, V. I.; Svyrydova, K. A.; Vasiliev, S. V.; Kovalenko, O. V.

    2017-08-01

    Using the results of differential scanning calorimetry and X-ray diffractometry, an analysis has been carried out of the initial stages of the eutectic and primary mechanisms of crystallization of a series of metallic glasses based on Fe and Al with established temperature dependences of the effective diffusion coefficients. Analytical relationships have been proposed that relate the volume density of crystallites formed in the glasses at the temperatures of the onset of crystallization to the values of the effective diffusion coefficients at these temperatures. It has been established that, in glasses whose crystallization begins at the lower boundary of the threshold values of the effective diffusion coefficients (~10^-20 m^2/s), structures are formed with a volume density of crystallites on the order of 10^23-10^24 m^-3, while at the upper boundary (~10^-18 m^2/s) the densities are of the order of 10^18 and 10^20 m^-3 in the glasses that crystallize via the eutectic and primary mechanisms, respectively. Good agreement between the calculated and experimental estimates indicates that the threshold values of the effective diffusion coefficients are the main factors that determine the structure of the glasses at the initial stages of crystallization.

  1. Single-footprint retrievals of temperature, water vapor and cloud properties from AIRS

    NASA Astrophysics Data System (ADS)

    Irion, Fredrick W.; Kahn, Brian H.; Schreier, Mathias M.; Fetzer, Eric J.; Fishbein, Evan; Fu, Dejian; Kalmus, Peter; Wilson, R. Chris; Wong, Sun; Yue, Qing

    2018-02-01

    Single-footprint Atmospheric Infrared Sounder spectra are used in an optimal estimation-based algorithm (AIRS-OE) for simultaneous retrieval of atmospheric temperature, water vapor, surface temperature, cloud-top temperature, effective cloud optical depth and effective cloud particle radius. In a departure from currently operational AIRS retrievals (AIRS V6), cloud scattering and absorption are in the radiative transfer forward model and AIRS single-footprint thermal infrared data are used directly rather than cloud-cleared spectra (which are calculated using nine adjacent AIRS infrared footprints). Coincident MODIS cloud data are used for cloud a priori data. Using single-footprint spectra improves the horizontal resolution of the AIRS retrieval from ~45 to ~13.5 km at nadir, but as microwave data are not used, the retrieval is not made at altitudes below thick clouds. An outline of the AIRS-OE retrieval procedure and information content analysis is presented. Initial comparisons of AIRS-OE to AIRS V6 results show increased horizontal detail in the water vapor and relative humidity fields in the free troposphere above the clouds. Initial comparisons of temperature, water vapor and relative humidity profiles with coincident radiosondes show good agreement. Future improvements to the retrieval algorithm, and to the forward model in particular, are discussed.
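
    A minimal sketch of one Gauss-Newton step of the optimal estimation machinery behind retrievals of this kind, combining a prior state x_a (covariance S_a) with measurements y (covariance S_e) through the forward-model Jacobian K; the two-element state and linear forward model below are toys, not the AIRS-OE setup:

```python
import numpy as np

def oe_step(x_i, x_a, y, F, K, S_a, S_e):
    """One Gauss-Newton iteration of the optimal estimation update:
    x = x_a + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - F(x_i) + K(x_i - x_a))."""
    A = K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a)
    b = K.T @ np.linalg.inv(S_e) @ (y - F(x_i) + K @ (x_i - x_a))
    return x_a + np.linalg.solve(A, b)

K = np.array([[1.0, 0.3], [0.2, 1.5], [0.7, 0.7]])   # toy Jacobian
F = lambda x: K @ x                                  # linear forward model
x_true = np.array([2.0, -1.0])
y = F(x_true) + np.array([0.01, -0.02, 0.005])       # noisy measurement
x_a = np.zeros(2)                                    # prior state
S_a, S_e = np.eye(2) * 4.0, np.eye(3) * 1e-3

print(oe_step(x_a, x_a, y, F, K, S_a, S_e))          # close to x_true
```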

  2. Fracture in Phenolic Impregnated Carbon Ablator

    NASA Technical Reports Server (NTRS)

    Agrawal, Parul; Chavez-Garcia, Jose F.

    2011-01-01

    The thermal protection materials used for spacecraft heat shields are subjected to various thermal-mechanical loads during an atmospheric entry, which can threaten the structural integrity of the system. This paper discusses the development of a novel technique to understand the failure mechanisms inside thermal protection materials. The focus of the research is Phenolic Impregnated Carbon Ablator (PICA). It has successfully flown on the Stardust spacecraft and is the TPS material chosen for the Mars Science Laboratory (MSL) and Dragon spacecraft. Although PICA has good thermal properties, it is structurally a weak material. In order to thoroughly understand failure in PICA, fracture tests were performed on FiberForm (the precursor of PICA) and on virgin and charred PICA materials. Several samples of these materials were tested to investigate failure mechanisms at a microstructural scale. Stress-strain data were obtained simultaneously to estimate the fracture toughness. It was found that cracks initiated and grew in the FiberForm when a critical stress limit was reached, such that the carbon fibers separated from the binder. However, for both virgin and charred PICA, crack initiation and growth occurred in the matrix (phenolic) phase. Both virgin and charred PICA showed greater strength values compared to FiberForm coupons, confirming that the presence of the porous matrix helps in absorbing the fracture energy.

  3. Statistical errors and systematic biases in the calibration of the convective core overshooting with eclipsing binaries. A case study: TZ Fornacis

    NASA Astrophysics Data System (ADS)

    Valle, G.; Dell'Omodarme, M.; Prada Moroni, P. G.; Degl'Innocenti, S.

    2017-04-01

    Context. Recently published work has made high-precision fundamental parameters available for the binary system TZ Fornacis, making it an ideal target for the calibration of stellar models. Aims: Relying on these observations, we attempt to constrain the initial helium abundance, the age and the efficiency of the convective core overshooting. Our main aim is to point out the biases in the results caused by not accounting for some sources of uncertainty. Methods: We adopt the SCEPtER pipeline, a maximum likelihood technique based on fine grids of stellar models computed for various values of metallicity, initial helium abundance and overshooting efficiency by means of two independent stellar evolutionary codes, namely FRANEC and MESA. Results: Besides the degeneracy between the estimated age and overshooting efficiency, we found the existence of multiple independent groups of solutions. The best one suggests a system of age 1.10 ± 0.07 Gyr composed of a primary star in the central helium-burning stage and a secondary in the sub-giant branch (SGB). The resulting initial helium abundance is consistent with a helium-to-metal enrichment ratio of ΔY/ΔZ = 1; the core overshooting parameter is β = 0.15 ± 0.01 for FRANEC and fov = 0.013 ± 0.001 for MESA. The second class of solutions, characterised by a worse goodness-of-fit, still suggests a primary star in the central helium-burning stage but a secondary in the overall contraction phase, at the end of the main sequence (MS). In this case, the FRANEC grid provides an age of Gyr and a core overshooting parameter , while the MESA grid gives 1.23 ± 0.03 Gyr and fov = 0.025 ± 0.003. We analyse the impact on the results of a larger, but typical, mass uncertainty and of neglecting the uncertainty in the initial helium content of the system. We show that very precise mass determinations, with uncertainties of a few thousandths of a solar mass, are required to obtain reliable determinations of stellar parameters, as mass errors larger than approximately 1% lead to estimates that are not only less precise but also biased. Moreover, we show that a fit obtained with a grid of models computed at a fixed ΔY/ΔZ - thus neglecting the current uncertainty in the initial helium content of the system - can provide severely biased age and overshooting estimates. The possibility of independent overshooting efficiencies for the two stars of the system is also explored. Conclusions: The present analysis confirms that constraining the core overshooting parameter by means of binary systems is a very difficult task that requires an observational precision still rarely achieved and a robust statistical treatment of the error sources.
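
    The grid-based maximum-likelihood recovery that such pipelines perform can be sketched generically: evaluate a Gaussian likelihood of the observed quantities over a precomputed model grid, and propagate the observational errors by Monte Carlo resampling. This is an illustration under assumed names, not the SCEPtER code.

    ```python
    import numpy as np

    def grid_fit(obs, sigma, grid_params, grid_preds):
        """Return the grid point minimizing chi^2 (i.e., maximizing a
        Gaussian likelihood).

        obs, sigma : observed quantities (e.g., masses, radii, Teff) and errors
        grid_params: (N, P) model parameters (age, helium, overshooting, ...)
        grid_preds : (N, Q) model-predicted observables
        """
        chi2 = np.sum(((grid_preds - obs) / sigma) ** 2, axis=1)
        best = np.argmin(chi2)
        return grid_params[best], chi2[best]

    def error_propagation(obs, sigma, grid_params, grid_preds, n=5000, seed=0):
        """Perturb the observations within their errors and refit, yielding a
        sample of parameter estimates whose spread (and possible multimodality,
        as found for TZ For) can be inspected directly."""
        rng = np.random.default_rng(seed)
        draws = obs + sigma * rng.standard_normal((n, obs.size))
        return np.array([grid_fit(d, sigma, grid_params, grid_preds)[0]
                         for d in draws])
    ```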

  4. Exploring the Impact of Different Input Data Types on Soil Variable Estimation Using the ICRAF-ISRIC Global Soil Spectral Database.

    PubMed

    Aitkenhead, Matt J; Black, Helaina I J

    2018-02-01

    Using the International Centre for Research in Agroforestry-International Soil Reference and Information Centre (ICRAF-ISRIC) global soil spectroscopy database, models were developed to estimate a number of soil variables using different input data types. These input types included: (1) site data only; (2) visible-near-infrared (Vis-NIR) diffuse reflectance spectroscopy only; (3) combined site and Vis-NIR data; (4) red-green-blue (RGB) color data only; and (5) combined site and RGB color data. The models produced variable estimation accuracy, with RGB only being generally worst and spectroscopy plus site being best. However, we showed that for certain variables, estimation accuracy levels achieved with the "site plus RGB input data" were sufficiently good to provide useful estimates (r² > 0.7). These included major elements (Ca, Si, Al, Fe), organic carbon, and cation exchange capacity. Estimates for bulk density, carbon-to-nitrogen ratio (C/N), and P were moderately good, but K was not well estimated using this model type. For the "spectra plus site" model, many more variables were well estimated, including many that are important indicators for agricultural productivity and soil health. Sum of cations, electrical conductivity, Si, Ca, and Al oxides, and C/N ratio were estimated using this approach with r² values > 0.9. This work provides a mechanism for identifying the cost-effectiveness of using different model input data, with associated costs, for estimating soil variables to required levels of accuracy.
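
    The cost-effectiveness comparison reduces to scoring the same target variable against different feature sets. A minimal sketch with a generic regressor follows; the paper's own models are not reproduced, and all names are placeholders.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    def compare_input_types(feature_sets, y):
        """Cross-validated r^2 for each candidate input type.

        feature_sets: dict mapping a name ("site", "site+RGB", ...) to an
                      (n_samples, n_features) array
        y           : soil variable to estimate (e.g., organic carbon)
        """
        scores = {}
        for name, X in feature_sets.items():
            model = RandomForestRegressor(n_estimators=200, random_state=0)
            scores[name] = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
        return scores

    # e.g. compare_input_types({"site": X_site,
    #                           "site+RGB": np.hstack([X_site, X_rgb]),
    #                           "site+VisNIR": np.hstack([X_site, X_nir])}, y_oc)
    ```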

  5. Trajectory-Based Takeoff Time Predictions Applied to Tactical Departure Scheduling: Concept Description, System Design, and Initial Observations

    NASA Technical Reports Server (NTRS)

    Engelland, Shawn A.; Capps, Alan

    2011-01-01

    Current aircraft departure release times are based on manual estimates of aircraft takeoff times. Uncertainty in takeoff time estimates may result in missed opportunities to merge into constrained en route streams and lead to lost throughput. However, technology exists to improve takeoff time estimates by using the aircraft surface trajectory predictions that enable air traffic control tower (ATCT) decision support tools. NASA's Precision Departure Release Capability (PDRC) is designed to use automated surface trajectory-based takeoff time estimates to improve en route tactical departure scheduling. This is accomplished by integrating an ATCT decision support tool with an en route tactical departure scheduling decision support tool. The PDRC concept and prototype software have been developed, and an initial test was completed at air traffic control facilities in Dallas/Fort Worth. This paper describes the PDRC operational concept, system design, and initial observations.

  6. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.

    PubMed

    Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki

    2017-12-09

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.

  7. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor

    PubMed Central

    Park, Jinho; Park, Hasil

    2017-01-01

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system. PMID:29232826
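
    The step from disparity to an initial transmission map follows from pinhole stereo geometry plus the single-scattering (Koschmieder) fog model; a minimal sketch under those assumptions is below. The paper's iterative refinement and color-line airlight estimation are not reproduced here, and all names are placeholders.

    ```python
    import numpy as np

    def transmission_from_disparity(disparity, focal, baseline, beta, t_min=0.1):
        """Initial transmission map from a stereo disparity map.

        depth = focal * baseline / disparity   (pinhole stereo geometry)
        t     = exp(-beta * depth)             (Koschmieder scattering model)
        """
        depth = focal * baseline / np.maximum(disparity, 1e-6)
        return np.maximum(np.exp(-beta * depth), t_min)

    def defog(image, transmission, airlight):
        """Invert the fog model I = J*t + A*(1 - t) per pixel."""
        t = transmission[..., None]   # broadcast over the color channels
        return (image - airlight) / t + airlight
    ```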

  8. Robust estimation of thermodynamic parameters (ΔH, ΔS and ΔCp) for prediction of retention time in gas chromatography - Part II (Application).

    PubMed

    Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira

    2015-12-18

    For this work, an analysis of parameter estimation for the retention factor in a GC model was performed, considering two different criteria: sum of squared errors, and maximum error in absolute value; relevant statistics are described for each case. The main contribution of this work is the implementation of a specialized initialization scheme for the estimated parameters, which features fast convergence (low computational time) and is based on knowledge of the surface of the error criterion. In an application to a series of alkanes, specialized initialization resulted in a significant reduction in the number of evaluations of the objective function (reducing computational time) in the parameter estimation. The reduction was between one and two orders of magnitude compared with simple random initialization.
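
    The value of a specialized initialization is easiest to see when part of the model is linear in its parameters. The standard three-parameter retention-factor form ln k(T) = a + b/T + c·ln T (where a, b, c map onto ΔS, ΔH and ΔCp) can be solved by ordinary least squares to produce a near-optimal starting point for nonlinear refinement. This is a generic sketch of that idea, not the authors' scheme; the paper itself fits retention times through a temperature-program model, and only the retention-factor step is shown here.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def init_params(T, lnk):
        """Linear least-squares solve of ln k = a + b/T + c*ln T, used as a
        cheap, well-informed initial guess."""
        X = np.column_stack([np.ones_like(T), 1.0 / T, np.log(T)])
        p0, *_ = np.linalg.lstsq(X, lnk, rcond=None)
        return p0

    def refine(T, lnk, p0):
        """Nonlinear refinement starting from the specialized initial guess."""
        resid = lambda p: p[0] + p[1] / T + p[2] * np.log(T) - lnk
        return least_squares(resid, p0).x
    ```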

  9. Preliminary comparison between real-time in-vivo spectral and transverse oscillation velocity estimates

    NASA Astrophysics Data System (ADS)

    Pedersen, Mads Møller; Pihl, Michael Johannes; Haugaard, Per; Hansen, Jens Munk; Lindskov Hansen, Kristoffer; Bachmann Nielsen, Michael; Jensen, Jørgen Arendt

    2011-03-01

    Spectral velocity estimation is considered the gold standard in medical ultrasound. Peak systole (PS), end diastole (ED), and resistive index (RI) are used clinically. Angle correction is performed using a flow angle set manually. With Transverse Oscillation (TO) velocity estimates the flow angle, peak systole (PSTO), end diastole (EDTO), and resistive index (RITO) are estimated. This study investigates if these clinical parameters are estimated equally well using spectral and TO data. The right common carotid arteries of three healthy volunteers were scanned longitudinally. Average TO flow angles and std were calculated { 52+/-18 ; 55+/-23 ; 60+/-16 }°. Spectral angles { 52 ; 56 ; 52 }° were obtained from the B-mode images. Obtained values are: PSTO { 76+/-15 ; 89+/-28 ; 77+/-7 } cm/s, spectral PS { 77 ; 110 ; 76 } cm/s, EDTO { 10+/-3 ; 14+/-8 ; 15+/-3 } cm/s, spectral ED { 18 ; 13 ; 20 } cm/s, RITO { 0.87+/-0.05 ; 0.79+/-0.21 ; 0.79+/-0.06 }, and spectral RI { 0.77 ; 0.88 ; 0.73 }. Vector angles are within +/-two std of the spectral angle. TO velocity estimates are within +/-three std of the spectral estimates. RITO are within +/-two std of the spectral estimates. Preliminary data indicate that the TO and spectral velocity estimates are equally good. With TO there is no manual angle setting and no flow angle limitation. TO velocity estimation can also automatically handle situations where the angle varies over the cardiac cycle. More detailed temporal and spatial vector estimates with diagnostic potential are available with the TO velocity estimation.

  10. Estimating initial contaminant mass based on fitting mass-depletion functions to contaminant mass discharge data: Testing method efficacy with SVE operations data

    NASA Astrophysics Data System (ADS)

    Mainhagu, J.; Brusseau, M. L.

    2016-09-01

    The mass of contaminant present at a site, particularly in the source zones, is one of the key parameters for assessing the risk posed by contaminated sites, and for setting and evaluating remediation goals and objectives. This quantity is rarely known and is challenging to estimate accurately. This work investigated the efficacy of fitting mass-depletion functions to temporal contaminant mass discharge (CMD) data as a means of estimating initial mass. Two common mass-depletion functions, exponential and power functions, were applied to historic soil vapor extraction (SVE) CMD data collected from 11 contaminated sites for which the SVE operations are considered to be at or close to essentially complete mass removal. The functions were applied to the entire available data set for each site, as well as to the early-time data (the initial 1/3 of the data available). Additionally, a complete differential-time analysis was conducted. The latter two analyses were conducted to investigate the impact of limited data on method performance, given that the primary mode of application would be to use the method during the early stages of a remediation effort. The estimated initial masses were compared to the total masses removed for the SVE operations. The mass estimates obtained from application to the full data sets were reasonably similar to the measured masses removed for both functions (13 and 15% mean error). The use of the early-time data resulted in a minimally higher variation for the exponential function (17%) but a much higher error (51%) for the power function. These results suggest that the method can produce reasonable estimates of initial mass useful for planning and assessing remediation efforts.
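
    A minimal version of the fitting step can be sketched as follows, assuming the exponential depletion form CMD(t) = c0·exp(-k·t), whose integral to infinite time gives the initial mass c0/k. The exact functional forms and fitting choices used in the paper are not reproduced; names are placeholders.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def cmd_exp(t, c0, k):
        """Exponential mass-depletion model: CMD(t) = c0 * exp(-k t)."""
        return c0 * np.exp(-k * t)

    def estimate_initial_mass(t, cmd, use_fraction=1.0):
        """Fit the (possibly early-time) CMD record, then integrate the fitted
        curve to t -> infinity to estimate the initial contaminant mass."""
        n = max(3, int(round(len(t) * use_fraction)))
        (c0, k), _ = curve_fit(cmd_exp, t[:n], cmd[:n],
                               p0=(cmd[0], 1.0 / t[-1]))
        return c0 / k

    # Early-time application, as tested in the paper:
    # m0_early = estimate_initial_mass(t, cmd, use_fraction=1/3)
    ```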

  11. A Field Study on Simulation of CO2 Injection and ECBM Production and Prediction of CO2 Storage Capacity in Unmineable Coal Seam

    DOE PAGES

    He, Qin; Mohaghegh, Shahab D.; Gholami, Vida

    2013-01-01

    CO2 sequestration into a coal seam project was studied and a numerical model was developed in this paper to simulate the primary and secondary coal bed methane production (CBM/ECBM) and carbon dioxide (CO2) injection. The key geological and reservoir parameters, which are germane to driving enhanced coal bed methane (ECBM) and CO2 sequestration processes, including cleat permeability, cleat porosity, CH4 adsorption time, CO2 adsorption time, CH4 Langmuir isotherm, CO2 Langmuir isotherm, and Palmer and Mansoori parameters, have been analyzed within a reasonable range. The model simulation results showed good matches for both CBM/ECBM production and CO2 injection compared with the field data. The history-matched model was used to estimate the total CO2 sequestration capacity in the field. The model forecast showed that the total CO2 injection capacity in the coal seam could be 22,817 tons, which is in agreement with the initial estimations based on the Langmuir isotherm experiment. Total CO2 injected in the first three years was 2,600 tons, which according to the model has increased methane recovery (due to ECBM) by 6,700 scf/d.

  12. Mapping auditory nerve firing density using high-level compound action potentials and high-pass noise masking

    PubMed Central

    Earl, Brian R.; Chertoff, Mark E.

    2012-01-01

    Future implementation of regenerative treatments for sensorineural hearing loss may be hindered by the lack of diagnostic tools that specify the target(s) within the cochlea and auditory nerve for delivery of therapeutic agents. Recent research has indicated that the amplitude of high-level compound action potentials (CAPs) is a good predictor of overall auditory nerve survival, but does not pinpoint the location of neural damage. A location-specific estimate of nerve pathology may be possible by using a masking paradigm and high-level CAPs to map auditory nerve firing density throughout the cochlea. This initial study in gerbil utilized a high-pass masking paradigm to determine normative ranges for CAP-derived neural firing density functions using broadband chirp stimuli and low-frequency tonebursts, and to determine if cochlear outer hair cell (OHC) pathology alters the distribution of neural firing in the cochlea. Neural firing distributions for moderate-intensity (60 dB pSPL) chirps were affected by OHC pathology whereas those derived with high-level (90 dB pSPL) chirps were not. These results suggest that CAP-derived neural firing distributions for high-level chirps may provide an estimate of auditory nerve survival that is independent of OHC pathology. PMID:22280596

  13. Contour Tracking in Echocardiographic Sequences via Sparse Representation and Dictionary Learning

    PubMed Central

    Huang, Xiaojie; Dione, Donald P.; Compas, Colin B.; Papademetris, Xenophon; Lin, Ben A.; Bregasi, Alda; Sinusas, Albert J.; Staib, Lawrence H.; Duncan, James S.

    2013-01-01

    This paper presents a dynamical appearance model based on sparse representation and dictionary learning for tracking both endocardial and epicardial contours of the left ventricle in echocardiographic sequences. Instead of learning offline spatiotemporal priors from databases, we exploit the inherent spatiotemporal coherence of individual data to constrain cardiac contour estimation. The contour tracker is initialized with a manual tracing of the first frame. It employs multiscale sparse representation of local image appearance and learns online multiscale appearance dictionaries in a boosting framework as the image sequence is segmented frame-by-frame sequentially. The weights of multiscale appearance dictionaries are optimized automatically. Our region-based level set segmentation integrates a spectrum of complementary multilevel information including intensity, multiscale local appearance, and dynamical shape prediction. The approach is validated on twenty-six 4D canine echocardiographic images acquired from both healthy and post-infarct canines. The segmentation results agree well with expert manual tracings. The ejection fraction estimates also show good agreement with manual results. Advantages of our approach are demonstrated by comparisons with a conventional pure intensity model, a registration-based contour tracker, and a state-of-the-art database-dependent offline dynamical shape model. We also demonstrate the feasibility of clinical application by applying the method to four 4D human data sets. PMID:24292554

  14. A fracture mechanics approach for estimating fatigue crack initiation in carbon and low-alloy steels in LWR coolant environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, H. B.; Chopra, O. K.

    2000-04-10

    A fracture mechanics approach for elastic-plastic materials has been used to evaluate the effects of light water reactor (LWR) coolant environments on the fatigue lives of carbon and low-alloy steels. The fatigue life of such steel, defined as the number of cycles required to form an engineering-size crack, i.e., 3-mm deep, is considered to be composed of the growth of (a) microstructurally small cracks and (b) mechanically small cracks. The growth of the latter was characterized in terms of ΔJ and crack growth rate (da/dN) data in air and LWR environments; in water, the growth rates from long crack tests had to be decreased to match the rates from fatigue S-N data. The growth of microstructurally small cracks was expressed by a modified Hobson relationship in air and by a slip dissolution/oxidation model in water. The crack length for transition from a microstructurally small crack to a mechanically small crack was based on studies on small crack growth. The estimated fatigue S-N curves show good agreement with the experimental data for these steels in air and water environments. At low strain amplitudes, the predicted lives in water can be significantly lower than the experimental values.

  15. 3D gravity inversion and uncertainty assessment of basement relief via Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Pallero, J. L. G.; Fernández-Martínez, J. L.; Bonvalot, S.; Fudym, O.

    2017-04-01

    Nonlinear gravity inversion in sedimentary basins is a classical problem in applied geophysics. Although a 2D approximation is widely used, 3D models have also been proposed to better take into account the basin geometry. A common nonlinear approach to this 3D problem consists in modeling the basin as a set of right rectangular prisms with prescribed density contrast, whose depths are the unknowns. Then, the problem is iteratively solved via local optimization techniques from an initial model computed using some simplifications or estimated from prior geophysical models. Nevertheless, this kind of approach is highly dependent on the prior information that is used, and lacks a correct solution appraisal (nonlinear uncertainty analysis). In this paper, we use the family of global Particle Swarm Optimization (PSO) optimizers for the 3D gravity inversion and model appraisal of the solution that is adopted for basement relief estimation in sedimentary basins. Synthetic and real cases are illustrated, showing that robust results are obtained. Therefore, PSO seems to be a very good alternative for 3D gravity inversion and uncertainty assessment of basement relief when used in a "sampling while optimizing" approach. In that way, important geological questions can be answered probabilistically in order to perform risk assessment in the decisions that are made.
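
    A minimal particle swarm optimizer makes the "sampling while optimizing" point concrete: the final swarm both locates the best model and samples the low-misfit region for appraisal. This is a generic PSO sketch, not the authors' family of PSO variants; the misfit callable and all parameters are placeholders.

    ```python
    import numpy as np

    def pso(misfit, bounds, n_particles=40, n_iter=200,
            w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal PSO for, e.g., prism-depth inversion.

        misfit : callable taking an (n_dim,) model vector (prism depths)
                 and returning the gravity-data misfit to minimize
        bounds : (n_dim, 2) array of lower/upper depth limits
        """
        rng = np.random.default_rng(seed)
        lo, hi = bounds[:, 0], bounds[:, 1]
        x = rng.uniform(lo, hi, (n_particles, lo.size))
        v = np.zeros_like(x)
        p_best, p_val = x.copy(), np.array([misfit(m) for m in x])
        g_best = p_best[p_val.argmin()]
        for _ in range(n_iter):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
            x = np.clip(x + v, lo, hi)
            val = np.array([misfit(m) for m in x])
            better = val < p_val
            p_best[better], p_val[better] = x[better], val[better]
            g_best = p_best[p_val.argmin()]
        # The swarm history around g_best can be retained for uncertainty appraisal.
        return g_best, p_val.min()
    ```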

  16. Subnanosecond measurements of detonation fronts in solid high explosives

    NASA Astrophysics Data System (ADS)

    Sheffield, S. A.; Bloomquist, D. D.; Tarver, C. M.

    1984-04-01

    Detonation fronts in solid high explosives have been examined through measurements of particle velocity histories resulting from the interaction of a detonation wave with a thin metal foil backed by a water window. Using a high time resolution velocity-interferometer system, experiments were conducted on three explosives: a TATB (1,3,5-triamino-2,4,6-trinitrobenzene)-based explosive called PBX-9502, TNT (2,4,6-trinitrotoluene), and CP (2-(5-cyanotetrazolato)pentaamminecobalt(III) perchlorate). In all cases, detonation-front rise times were found to be less than the 300 ps resolution of the interferometer system. The thermodynamic state in the front of the detonation wave was estimated to be near the unreacted state determined from an extrapolation of low-pressure unreacted Hugoniot data for both TNT and PBX-9502 explosives. Computer calculations based on an ignition and growth model of a Zeldovich-von Neumann-Doering (ZND) detonation wave show good agreement with the measurements. By using the unreacted Hugoniot and a JWL equation of state for the reaction products, we estimated the initial reaction rate in the high explosive after the detonation wave front interacted with the foil to be 40 μs⁻¹ for CP, 60 μs⁻¹ for TNT, and 80 μs⁻¹ for PBX-9502. The shape of the profiles indicates the reaction rate decreases as reaction proceeds.

  17. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems

    PubMed Central

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-01-01

    Background We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Conclusion Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems. PMID:17081289

  18. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems.

    PubMed

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-11-02

    We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems.
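
    The global-plus-local structure these methods rely on can be illustrated with a toy multistart: scatter candidate parameter vectors, keep a small reference set of the best ones, and polish each with a local solver. This is only a sketch in the spirit of scatter search, not the authors' metaheuristic; all names are placeholders.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def hybrid_estimate(cost, bounds, n_scatter=50, n_refine=5, seed=0):
        """Toy global/local hybrid for model calibration.

        cost   : objective, e.g. sum of squared residuals of a dynamic model
        bounds : (n_dim, 2) array of parameter bounds
        """
        rng = np.random.default_rng(seed)
        lo, hi = bounds[:, 0], bounds[:, 1]
        pop = rng.uniform(lo, hi, (n_scatter, lo.size))
        scores = np.array([cost(p) for p in pop])
        ref = pop[np.argsort(scores)[:n_refine]]   # reference set of good guesses
        fits = [minimize(cost, p, bounds=[tuple(b) for b in bounds]) for p in ref]
        best = min(fits, key=lambda r: r.fun)
        return best.x, best.fun
    ```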

  19. Want to Change the World? Here's How

    ERIC Educational Resources Information Center

    Mangan, Katherine

    2009-01-01

    About 1,000 students from 50 states and 60 countries attended the second annual Clinton Global Initiative University. The project, an outgrowth of the Clinton Global Initiative for world leaders, challenges participants to take "good intentions and turn them into measurable changes in other people's lives" by submitting detailed…

  20. Validation of an Algorithm to Predict the Likelihood of an 8/8 HLA-Matched Unrelated Donor at Search Initiation.

    PubMed

    Davis, Eric; Devlin, Sean; Cooper, Candice; Nhaissi, Melissa; Paulson, Jennifer; Wells, Deborah; Scaradavou, Andromachi; Giralt, Sergio; Papadopoulos, Esperanza; Kernan, Nancy A; Byam, Courtney; Barker, Juliet N

    2018-05-01

    A strategy to rapidly determine if a matched unrelated donor (URD) can be secured for allograft recipients is needed. We sought to validate the accuracy of (1) HapLogic match predictions and (2) a resultant novel Search Prognosis (SP) patient categorization that could predict 8/8 HLA-matched URD(s) likelihood at search initiation. Patient prognosis categories at search initiation were correlated with URD confirmatory typing results. HapLogic-based SP categorizations accurately predicted the likelihood of an 8/8 HLA-match in 830 patients (1530 donors tested). Sixty percent of patients had 8/8 URD(s) identified. Patient SP categories (217 very good, 104 good, 178 fair, 33 poor, 153 very poor, 145 futile) were associated with a marked progressive decrease in 8/8 URD identification and transplantation. Very good to good categories were highly predictive of identifying and receiving an 8/8 URD regardless of ancestry. Europeans in fair/poor categories were more likely to identify and receive an 8/8 URD compared with non-Europeans. In all ancestries very poor and futile categories predicted no 8/8 URDs. HapLogic permits URD search results to be predicted once patient HLA typing and ancestry are obtained, dramatically improving search efficiency. Poor, very poor, and futile searches can be immediately recognized, thereby facilitating prompt pursuit of alternative donors.

  1. Evaluating Principal Surrogate Markers in Vaccine Trials in the Presence of Multiphase Sampling

    PubMed Central

    Huang, Ying

    2017-01-01

    Summary: This paper focuses on the evaluation of vaccine-induced immune responses as principal surrogate markers for predicting a given vaccine's effect on the clinical endpoint of interest. To address the problem of missing potential outcomes under the principal surrogate framework, we can utilize baseline predictors of the immune biomarker(s) or vaccinate uninfected placebo recipients at the end of the trial and measure their immune biomarkers. Examples of good baseline predictors are baseline immune responses when subjects enrolled in the trial have been previously exposed to the same antigen, as in our motivating application of the Zostavax Efficacy and Safety Trial (ZEST). However, laboratory assays of these baseline predictors are expensive and therefore their subsampling among participants is commonly performed. In this paper we develop a methodology for estimating principal surrogate values in the presence of baseline predictor subsampling. Under a multiphase sampling framework, we propose a semiparametric pseudo-score estimator based on conditional likelihood and also develop several alternative semiparametric pseudo-score or estimated likelihood estimators. We derive corresponding asymptotic theories and analytic variance formulas for these estimators. Through extensive numerical studies, we demonstrate good finite sample performance of these estimators and the efficiency advantage of the proposed pseudo-score estimator in various sampling schemes. We illustrate the application of our proposed estimators using data from an immune biomarker study nested within the ZEST trial. PMID:28653408

  2. Do Liberal Arts Colleges Really Foster Good Practices in Undergraduate Education?

    ERIC Educational Resources Information Center

    Pascarella, Ernest T.; Cruce, Ty M.; Wolniak, Gregory C.; Blaich, Charles F.

    2004-01-01

    Researchers estimated the net effects of liberal arts colleges on 19 measures of good practices in undergraduate education grouped into seven categories. Analyses of 3-year longitudinal data from five liberal arts colleges, four research universities, and seven regional universities were conducted. Net of a battery of student precollege…

  3. Acorn production in red oak

    Treesearch

    Daniel C. Dey

    1995-01-01

    Manipulation of stand stocking through thinning can increase the amount of oak in the upper crown classes and enhance individual tree characteristics that promote good acorn production. Identification of good acorn producers before thinning or shelterwood harvests can be used to retain them in a stand. Stocking charts can be used to time thinnings and to estimate acorn...

  4. Nonparametric Estimation of the Probability of Discovering a New Species.

    DTIC Science & Technology

    1986-01-01

    see Good (1953, 1965), Good and Toulmin (1956), Goodman (1949), Harris (1959, 1968), Knott (1967) and Robbins (1968). We note that our model is not...and Toulmin, G. (1956). The number of new species and the increase of population coverage, when a sample is increased. Biometrika 43, 45-63. Goodman

  5. 77 FR 15187 - Released Rates of Motor Common Carriers of Household Goods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-14

    ... available cargo-liability options on the written estimate form--the first form that a moving company...-goods freight forwarders. Finally, the Board established April 2, 2011, as the effective date for moving companies to comply with the changes outlined in the two decisions. These Board decisions are available on...

  6. Average discharge, perennial flow initiation, and channel initiation - small southern Appalachian basins

    Treesearch

    B. Lane Rivenbark; C. Rhett Jackson

    2004-01-01

    Regional average evapotranspiration estimates developed by water balance techniques are frequently used to estimate average discharge in ungaged streams. However, the lower stream size range for the validity of these techniques has not been explored. Flow records were collected and evaluated for 16 small streams in the Southern Appalachians to test whether the...

  7. On the Error of the Dixon Plot for Estimating the Inhibition Constant between Enzyme and Inhibitor

    ERIC Educational Resources Information Center

    Fukushima, Yoshihiro; Ushimaru, Makoto; Takahara, Satoshi

    2002-01-01

    In textbook treatments of enzyme inhibition kinetics, adjustment of the initial inhibitor concentration for inhibitor bound to enzyme is often neglected. For example, in graphical plots such as the Dixon plot for estimation of an inhibition constant, the initial concentration of inhibitor is usually plotted instead of the true inhibitor…
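
    The correction this abstract alludes to can be made explicit with the standard tight-binding quadratic for the enzyme-inhibitor complex; the sketch below is generic, and the numbers are purely illustrative.

    ```python
    import math

    def free_inhibitor(e_total, i_total, ki):
        """Free inhibitor concentration after subtracting inhibitor bound to
        enzyme. [EI] solves [EI]^2 - (E_t + I_t + Ki)[EI] + E_t*I_t = 0
        (from Ki = [E][I]/[EI] with mass balance on E and I)."""
        b = e_total + i_total + ki
        ei = (b - math.sqrt(b * b - 4.0 * e_total * i_total)) / 2.0
        return i_total - ei

    # When E_t is not negligible relative to I_t, plotting the free rather than
    # the total inhibitor concentration shifts the Dixon-plot intercept and the
    # inferred Ki.
    print(free_inhibitor(e_total=1.0, i_total=2.0, ki=0.5))  # concentrations in µM
    ```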

  8. 32 CFR Appendix D to Part 169a - Commercial Activities Management Information System (CAMIS)

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... to provide an initial estimate of the manpower associated with the activity (or activities). The initial estimate of the manpower in this section of the CCR will be in all cases those manpower figures... Medical Program of the Uniformed Services (CHAMPUS) [3D1] E—Defense Advanced Research Projects Agency F...

  9. 32 CFR Appendix D to Part 169a - Commercial Activities Management Information System (CAMIS)

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... to provide an initial estimate of the manpower associated with the activity (or activities). The initial estimate of the manpower in this section of the CCR will be in all cases those manpower figures... Medical Program of the Uniformed Services (CHAMPUS) [3D1] E—Defense Advanced Research Projects Agency F...

  10. 32 CFR Appendix D to Part 169a - Commercial Activities Management Information System (CAMIS)

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... to provide an initial estimate of the manpower associated with the activity (or activities). The initial estimate of the manpower in this section of the CCR will be in all cases those manpower figures... Medical Program of the Uniformed Services (CHAMPUS) [3D1] E—Defense Advanced Research Projects Agency F...

  11. 32 CFR Appendix D to Part 169a - Commercial Activities Management Information System (CAMIS)

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... to provide an initial estimate of the manpower associated with the activity (or activities). The initial estimate of the manpower in this section of the CCR will be in all cases those manpower figures... Medical Program of the Uniformed Services (CHAMPUS) [3D1] E—Defense Advanced Research Projects Agency F...

  12. Robust 3D-2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation

    NASA Astrophysics Data System (ADS)

    Otake, Yoshito; Wang, Adam S.; Webster Stayman, J.; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Khanna, A. Jay; Gokaslan, Ziya L.; Siewerdsen, Jeffrey H.

    2013-12-01

    We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14,400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with 'success' defined as PDE <5 mm) using 1,718,664 ± 96,582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial run) the same registration could be solved with 99.993% success in 6.3 s. The ability to register CT to fluoroscopy in a manner robust to patient deformation could be valuable in applications such as radiation therapy, interventional radiology, and as an aid to target localization (e.g., vertebral labeling) in image-guided spine surgery.

  13. Building upon the Great Waters Initiative: Scoping study for potential polyaromatic hydrocarbon deposition into San Diego Bay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koehler, J.; Sylte, W.W.

    1997-12-31

    The deposition of atmospheric polyaromatic hydrocarbons (PAHs) into San Diego Bay was evaluated at an initial study level. This study was part of an overall initial estimate of PAH waste loading to San Diego Bay from all environmental pathways. The study of air pollutant deposition to water bodies has gained increased attention both as a component of Total Maximum Daily Load (TMDL) determinations required under the Clean Water Act and pursuant to federal funding authorized by the 1990 Clean Air Act Amendments to study the atmospheric deposition of hazardous air pollutants to the Great Waters, which includes coastal waters. To date, studies under the Clean Air Act have included the Great Lakes, Chesapeake Bay, Lake Champlain, and Delaware Bay. Given the limited resources of this initial study for San Diego Bay, the focus was on maximizing the use of existing data and information. The approach developed included the statistical evaluation of measured atmospheric PAH concentrations in the San Diego area, the extrapolation of EPA study results of atmospheric PAH concentrations above Lake Michigan to supplement the San Diego data, the estimation of dry and wet deposition with published calculation methods considering local wind and rainfall data, and the comparison of resulting PAH deposition estimates for San Diego Bay with estimated PAH emissions from ship and commercial boat activity in the San Diego area. The resulting PAH deposition and ship emission estimates were within the same order of magnitude. Since a significant contributor to the atmospheric deposition of PAHs to the Bay is expected to be from shipping traffic, this result provides a check on the order of magnitude of the PAH deposition estimate. Also, when compared against initial estimates of PAH loading to San Diego Bay from other environmental pathways, the atmospheric deposition pathway appears to be a significant contributor.
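
    The deposition arithmetic in such scoping studies reduces to flux = deposition velocity × concentration, scaled by surface area and time. The sketch below shows only that bookkeeping; every number is a placeholder, not a value from the study.

    ```python
    # Dry-deposition load from published calculation methods: F = v_d * C.
    v_d = 0.5e-2            # deposition velocity, m/s (assumed)
    C = 1.0e-8              # atmospheric PAH concentration, g/m^3 (assumed)
    area = 4.5e7            # water-body surface area, m^2 (assumed)
    seconds_per_year = 3.15e7

    annual_dry_deposition_kg = v_d * C * area * seconds_per_year / 1e3
    print(f"~{annual_dry_deposition_kg:.0f} kg/yr dry deposition")
    # Wet deposition adds a rainfall term (washout ratio x concentration x
    # precipitation depth), handled analogously.
    ```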

  14. Confidence Intervals for the Probability of Superiority Effect Size Measure and the Area under a Receiver Operating Characteristic Curve

    ERIC Educational Resources Information Center

    Ruscio, John; Mullen, Tara

    2012-01-01

    It is good scientific practice to report an appropriate estimate of effect size and a confidence interval (CI) to indicate the precision with which a population effect was estimated. For comparisons of 2 independent groups, a probability-based effect size estimator (A) that is equal to the area under a receiver operating characteristic curve…

  15. Empirical Allometric Models to Estimate Total Needle Biomass For Loblolly Pine

    Treesearch

    Hector M. de los Santos-Posadas; Bruce E. Borders

    2002-01-01

    Empirical geometric models based on the cone surface formula were adapted and used to estimate total dry needle biomass (TNB) and live branch basal area (LBBA). The results suggest that the empirical geometric equations produced good fits and stable parameters when estimating TNB and LBBA. The data used include trees from a 12-year-old spacing study and a set of...

  16. 45 CFR 160.548 - Appeal of the ALJ's decision.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... extension within the initial 30 day period and shows good cause. (b) If a party files a timely notice of... such hearing is relevant and material and that there were reasonable grounds for the failure to adduce... the Board for good cause shown. Reply briefs are not permitted. (4) The Board must rule on the motion...

  17. Singing in Primary Schools: Case Studies of Good Practice in Whole Class Vocal Tuition

    ERIC Educational Resources Information Center

    Lamont, Alexandra; Daubney, Alison; Spruce, Gary

    2012-01-01

    Within the context of British initiatives in music education such as the Wider Opportunities programme in England and the recommendations of the Music Manifesto emphasising the importance of singing in primary schools, the current paper explores examples of good practice in whole-class vocal tuition. The research included seven different primary…

  18. Loss of Response to Melatonin Treatment Is Associated with Slow Melatonin Metabolism

    ERIC Educational Resources Information Center

    Braam, W.; van Geijlswijk, I.; Keijzer, Henry; Smits, Marcel G.; Didden, Robert; Curfs, Leopold M. G.

    2010-01-01

    Background: In some of our patients with intellectual disability (ID) and sleep problems, the initial good response to melatonin disappeared within a few weeks after starting treatment, while the good response returned only after considerable dose reduction. The cause for this loss of response to melatonin is yet unknown. We hypothesise that this…

  19. The Pollyanna Principle in Business Writing: Initial Results, Suggestions for Research.

    ERIC Educational Resources Information Center

    Hildebrandt, Herbert W.

    A study was conducted to determine whether there was a linguistic correlation between a financially good year and a bad year as expressed in the annual reports of company presidents to their shareholders. Specifically the study tested the "Pollyanna Principle," which states (1) that regardless of whether the year was financially good or bad, the…

  20. Differential Effects of Context and Feedback on Orthographic Learning: How Good Is Good Enough?

    ERIC Educational Resources Information Center

    Martin-Chang, Sandra; Ouellette, Gene; Bond, Linda

    2017-01-01

    In this study, students in Grade 2 read different sets of words under 4 experimental training conditions (context/feedback, isolation/feedback, context/no-feedback, isolation/no-feedback). Training took place over 10 trials, followed by a spelling test and a delayed reading posttest. Reading in context boosted reading accuracy initially; in…

  1. Leading a Community of Learners: Learning to Be Moral by Engaging the Morality of Learning

    ERIC Educational Resources Information Center

    Starratt, Robert J.

    2007-01-01

    This article attempts to provide a foundational understanding of school learning as moral activity as well as intellectual activity. It first develops a distinction between general ethics and professional ethics, and provides an initial explanation of the moral good involved in learning. The moral good of learning is then connected to the…

  2. Metaphor for Teaching: Good Teaching Is Like Good Sex

    ERIC Educational Resources Information Center

    Delgado, Teresa

    2015-01-01

    Based on a real teaching experience in the classroom, the author reflects on the dynamics of gender, race/ethnicity, power, and privilege in the context of an undergraduate course in Christian sexual ethics. Through this analysis of pedagogical style and process initiated by a challenging moment at the midpoint of the semester, the author develops…

  3. Negative Emotions and Alcohol Use Initiation in High-Risk Boys: The Moderating Effect of Good Inhibitory Control

    ERIC Educational Resources Information Center

    Pardini, Dustin; Lochman, John; Wells, Karen

    2004-01-01

    Studies on the relation between negative affect and later alcohol use have provided mixed results. Because definitions of negative affect often include diverse emotions, researchers have begun to dismantle this higher-order construct in an attempt to explain these inconsistent findings. More recent evidence also indicates that good inhibitory…

  4. 10 CFR 2.317 - Separate hearings; consolidation of proceedings.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    .... (a) Separate hearings. On motion by the parties or upon request of the presiding officer for good... it is found that the action will be conducive to the proper dispatch of its business and to the ends...) Consolidation of proceedings. On motion and for good cause shown or on its own initiative, the Commission or the...

  5. Addressing the Challenge of Diversity in the Graduate Ranks: Good Practices Yield Good Outcomes

    ERIC Educational Resources Information Center

    Thompson, Nancy L.; Campbell, Andrew G.

    2013-01-01

    In this paper, we examine the impact of implementing three systemic practices on the diversity and institutional culture in biomedical and public health PhD training at Brown University. We hypothesized that these practices, designed as part of the National Institutes of Health-funded Initiative to Maximize Student Development (IMSD) program in…

  6. An Optimization-Based State Estimation Framework for Large-Scale Natural Gas Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jalving, Jordan; Zavala, Victor M.

    We propose an optimization-based state estimation framework to track internal spacetime flow and pressure profiles of natural gas networks during dynamic transients. We find that the estimation problem is ill-posed (because of the infinite-dimensional nature of the states) and that this leads to instability of the estimator when short estimation horizons are used. To circumvent this issue, we propose moving horizon strategies that incorporate prior information. In particular, we propose a strategy that initializes the prior using steady-state information and compare its performance against a strategy that does not initialize the prior. We find that both strategies are capable of tracking the state profiles but we also find that superior performance is obtained with steady-state prior initialization. We also find that, under the proposed framework, pressure sensor information at junctions is sufficient to track the state profiles. We also derive approximate transport models and show that some of these can be used to achieve significant computational speed-ups without sacrificing estimation performance. We show that the estimator can be easily implemented in the graph-based modeling framework Plasmo.jl and use a multipipeline network study to demonstrate the developments.
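
    Stripped of the pipeline physics, a moving-horizon estimator with a prior term has the structure sketched below: fit the state trajectory over a short window while penalizing departure of the first state from a prior (for instance a steady-state profile). This is only the generic MHE skeleton; the actual framework is formulated in Plasmo.jl with pipeline transport models, and all names here are placeholders.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def mhe_step(y_meas, h, f, x_prior, w_prior, w_meas, horizon):
        """One moving-horizon estimate.

        y_meas  : (horizon, n_y) measurements (e.g., junction pressures)
        h       : measurement model, h(x_k) -> predicted sensor values
        f       : one-step dynamics, f(x_k) -> x_{k+1}
        x_prior : prior state at the start of the window
        """
        n_x = x_prior.size

        def residuals(z):
            xs = z.reshape(horizon, n_x)
            r = [w_prior * (xs[0] - x_prior)]               # prior / arrival cost
            for k in range(horizon):
                r.append(w_meas * (h(xs[k]) - y_meas[k]))   # measurement misfit
                if k + 1 < horizon:
                    r.append(xs[k + 1] - f(xs[k]))          # dynamics defect
            return np.concatenate(r)

        sol = least_squares(residuals, np.tile(x_prior, horizon))
        return sol.x.reshape(horizon, n_x)
    ```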

  7. NASA Instrument Cost/Schedule Model

    NASA Technical Reports Server (NTRS)

    Habib-Agahi, Hamid; Mrozinski, Joe; Fox, George

    2011-01-01

    NASA's Office of Independent Program and Cost Evaluation (IPCE) has established a number of initiatives to improve its cost and schedule estimating capabilities. One of these initiatives has resulted in the JPL-developed NASA Instrument Cost Model (NICM). NICM is a cost and schedule estimator that contains: a system-level cost estimation tool; a subsystem-level cost estimation tool; a database of cost and technical parameters of over 140 previously flown remote sensing and in-situ instruments; a schedule estimator; a set of rules to estimate cost and schedule by life cycle phases (B/C/D); and a novel tool for developing joint probability distributions for cost and schedule risk (Joint Confidence Level (JCL)). This paper describes the development and use of NICM, including the data normalization processes, data mining methods (cluster analysis, principal components analysis, regression analysis and bootstrap cross validation), the estimating equations themselves and a demonstration of the NICM tool suite.

  8. Empirical Bayes Estimation of Coalescence Times from Nucleotide Sequence Data.

    PubMed

    King, Leandra; Wakeley, John

    2016-09-01

    We demonstrate the advantages of using information at many unlinked loci to better calibrate estimates of the time to the most recent common ancestor (TMRCA) at a given locus. To this end, we apply a simple empirical Bayes method to estimate the TMRCA. This method is asymptotically optimal, in the sense that the estimator converges to the true value when the number of unlinked loci for which we have information is large, and it has the advantage of not making any assumptions about demographic history. The algorithm works as follows: we first split the sample at each locus into inferred left and right clades to obtain many estimates of the TMRCA, which we can average to obtain an initial estimate of the TMRCA. We then use nucleotide sequence data from other unlinked loci to form an empirical distribution that we can use to improve this initial estimate.
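
    One way to make the second step concrete is a posterior-mean update against an empirical prior built from the unlinked-locus estimates; the sketch below assumes a Gaussian noise model for the initial estimator and is not the authors' exact procedure.

    ```python
    import numpy as np

    def eb_update(initial, unlinked, sd, n_bins=200):
        """Shrink one locus's initial TMRCA estimate toward the empirical
        distribution of estimates at many unlinked loci.

        initial  : initial TMRCA estimate at the focal locus
        unlinked : initial estimates at unlinked loci (empirical prior sample)
        sd       : assumed noise scale of the initial estimator
        """
        lo = 0.5 * min(unlinked.min(), initial)
        hi = 1.5 * max(unlinked.max(), initial)
        prior, edges = np.histogram(unlinked, bins=np.linspace(lo, hi, n_bins),
                                    density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        like = np.exp(-0.5 * ((initial - centers) / sd) ** 2)
        post = prior * like
        return np.sum(centers * post) / np.sum(post)   # posterior-mean TMRCA
    ```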

  9. Fast and unbiased estimator of the time-dependent Hurst exponent.

    PubMed

    Pianese, Augusto; Bianchi, Sergio; Palazzo, Anna Maria

    2018-03-01

    We combine two existing estimators of the local Hurst exponent to improve both the goodness of fit and the computational speed of the algorithm. An application with simulated time series is implemented, and a Monte Carlo simulation is performed to provide evidence of the improvement.

  10. Fast and unbiased estimator of the time-dependent Hurst exponent

    NASA Astrophysics Data System (ADS)

    Pianese, Augusto; Bianchi, Sergio; Palazzo, Anna Maria

    2018-03-01

    We combine two existing estimators of the local Hurst exponent to improve both the goodness of fit and the computational speed of the algorithm. An application with simulated time series is implemented, and a Monte Carlo simulation is performed to provide evidence of the improvement.

  11. A unified framework for constructing, tuning and assessing photometric redshift density estimates in a selection bias setting

    NASA Astrophysics Data System (ADS)

    Freeman, P. E.; Izbicki, R.; Lee, A. B.

    2017-07-01

    Photometric redshift estimation is an indispensable tool of precision cosmology. One problem that plagues the use of this tool in the era of large-scale sky surveys is that the bright galaxies that are selected for spectroscopic observation do not have properties that match those of (far more numerous) dimmer galaxies; thus, ill-designed empirical methods that produce accurate and precise redshift estimates for the former generally will not produce good estimates for the latter. In this paper, we provide a principled framework for generating conditional density estimates (i.e. photometric redshift PDFs) that takes into account selection bias and the covariate shift that this bias induces. We base our approach on the assumption that the probability that astronomers label a galaxy (i.e. determine its spectroscopic redshift) depends only on its measured (photometric and perhaps other) properties x and not on its true redshift. With this assumption, we can explicitly write down risk functions that allow us to both tune and compare methods for estimating importance weights (i.e. the ratio of densities of unlabelled and labelled galaxies for different values of x) and conditional densities. We also provide a method for combining multiple conditional density estimates for the same galaxy into a single estimate with better properties. We apply our risk functions to an analysis of ≈10⁶ galaxies, mostly observed by the Sloan Digital Sky Survey, and demonstrate through multiple diagnostic tests that our method achieves good conditional density estimates for the unlabelled galaxies.
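
    A standard way to estimate importance weights of this kind is the classifier trick: a probabilistic classifier separating labelled from unlabelled galaxies yields the density ratio up to a constant. The paper tunes and compares weight estimators with dedicated risk functions; the sketch below is only one common baseline, with placeholder names.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def importance_weights(X_labeled, X_unlabeled):
        """w(x) proportional to p_unlabeled(x) / p_labeled(x) via the odds of a
        probabilistic classifier, the usual covariate-shift estimator."""
        X = np.vstack([X_labeled, X_unlabeled])
        z = np.concatenate([np.zeros(len(X_labeled)), np.ones(len(X_unlabeled))])
        clf = LogisticRegression(max_iter=1000).fit(X, z)
        p = clf.predict_proba(X_labeled)[:, 1]
        return p / (1.0 - p)
    ```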

  12. Global estimation of ocean tides in deep and shallow waters from TOPEX/POSEIDON and numerical models with applications to geophysics, oceanography, and precision altimetry

    NASA Astrophysics Data System (ADS)

    Tierney, Craig Cristy

    Presented here are several investigations of ocean tides derived from TOPEX/POSEIDON (T/P) altimetry and numerical models. The purpose of these investigations is to study the short wavelength features in the T/P data and to preserve these wavelengths in global ocean tide models that are accurate in shallow and deep waters. With these new estimates, effects of the tides on loading, Earth's rotation, and tidal energetics are studied. To preserve tidal structure, tides have been estimated along the ground track of T/P by the harmonic and response methods using 4.5 years of data. Results show the two along-track (AT) estimates agree with each other and with other tide models for those components with minimal aliasing problems. Comparisons to global models show that there is tidal structure in the T/P data that is not preserved with current gridding methods. Error estimates suggest there is accurate information in the T/P data from shallow waters that can be used to improve tidal models. It has been shown by Ray and Mitchum (1996) that the first mode baroclinic tide can be separated from AT tide estimates by filtering. This method has been used to estimate the first mode semidiurnal baroclinic tides globally. Estimates for M2 show good correlation with known regions of baroclinic tide generation. Using gridded, filtered AT estimates, a lower bound on the energy contained in the M2 baroclinic tide is 50 PJ. Inspired by the structure found in the AT estimates, a gridding method is presented that preserves tidal structure in the T/P data. These estimates are assimilated into a nonlinear, finite difference, global barotropic tidal model. Results from the 8 major tidal constituents show the model performs equivalently to other models in the deep waters, and is significantly better in the shallow waters. Crossover variance is reduced from 14 cm to 10 cm in the shallow waters. Comparisons to Earth rotation show good agreement to results from VLBI data. Tidal energetics computed from the models show good agreement with previous results. PE/KE ratios and quality factors are more consistent in each frequency band than in previous results.

  13. Ninety-Degree Chevron Osteotomy for Correction of Hallux Valgus Deformity: Clinical Data and Finite Element Analysis

    PubMed Central

    Matzaroglou, Charalambos; Bougas, Panagiotis; Panagiotopoulos, Elias; Saridis, Alkis; Karanikolas, Menelaos; Kouzoudis, Dimitris

    2010-01-01

    Hallux valgus is a very common foot disorder, with its prevalence estimated at 33% in adult shoe-wearing populations. Conservative management is the initial treatment of choice for this condition, but surgery is sometimes needed. The 60° angle Chevron osteotomy is an accepted method for correction of mild to moderate hallux valgus in adults less than 60 years old. A modified 90° angle Chevron osteotomy has also been described; this modified technique can confer some advantages compared to the 60° angle method, and reported results are good. In the current work we present clinical data from a cohort of fifty-one female patients who had surgery for sixty-two hallux valgus deformities. In addition, in order to get a better physical insight and study the mechanical stresses along the two osteotomies, Finite Element Analysis (FEA) was also conducted. FEA indicated enhanced mechanical bonding with the modified 90° Chevron osteotomy, because the compressive stresses that keep the two bone parts together are stronger, and the shearing stresses that tend to slide the two bone parts apart are weaker, compared to the typical 60° technique. Follow-up data on our patient cohort show good or excellent long-term clinical results with the modified 90° angle technique. These results are consistent with the FEA-based hypothesis that a 90° Chevron osteotomy confers certain mechanical advantages compared to the typical 60° procedure. PMID:20648223

  14. Analytical monitoring of soil bioengineering structures in the Tuscan Emilian Apennines of Italy

    NASA Astrophysics Data System (ADS)

    Selli, Lavinia; Guastini, Enrico

    2014-05-01

    Soil bioengineering has been an appropriate solution to erosion problems and shallow landslides in the North Apennines, Italy. The objective of our research was to assess critical aspects of soil bioengineering works. We monitored works carried out in the Tuscan Emilian Apennines, testing the suitability of different plant species and analyzing in detail the timber elements of wooden crib walls. The plant species used were mainly Salix alba and Salix purpurea, which gave good sprouting and survival rates but showed some difficulty growing on the dry, sunny Apennine sites, where shrubs such as Spanish broom, blackthorn, cornel tree and eglantine would be more suitable. The localized analysis of the wooden elements was carried out by gathering parts of the poles and obtaining samples in order to determine their density. The initial density of the wood used in each structure was estimated, and the residual density was then calculated. This analysis allows us to determine the general condition of the wood, highlighting the structures in the worst condition (the one in Pianaccio shows a residual density close to 70%, versus roughly 90% for the other structures) and those whose most degraded wood has suffered the greatest damage (Pianaccio again, at 50%, followed by Campoferrario at 60% and Pian di Favale at 85%, a rather good value for the most degraded wood in a structure).
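
    The residual-density check described above is simply the ratio of measured density to the estimated initial (sound-wood) density. A minimal sketch follows; the density figures are placeholders, not the study's measurements.

      # Residual density = measured density / estimated initial density.
      # (initial, measured) in kg/m^3; all values are illustrative only.
      samples = {
          "Pianaccio": (450.0, 315.0),
          "Campoferrario": (450.0, 270.0),
          "Pian di Favale": (450.0, 383.0),
      }
      for site, (rho0, rho) in samples.items():
          print(f"{site}: residual density {100.0 * rho / rho0:.0f}%")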

  15. A method of recovering the initial vectors of globally coupled map lattices based on symbolic dynamics

    NASA Astrophysics Data System (ADS)

    Sun, Li-Sha; Kang, Xiao-Yun; Zhang, Qiong; Lin, Lan-Xin

    2011-12-01

    Based on symbolic dynamics, a novel computationally efficient algorithm is proposed to estimate the unknown initial vectors of globally coupled map lattices (CMLs). It is proved that not all inverse chaotic mapping functions satisfy the contraction mapping condition. It is found that the values in phase space do not always converge to their initial values under sufficient backward iteration of the symbolic vectors; whether they do depends on global convergence or divergence (CD). Both the CD property and the coupling strength are directly related to the mapping function of the underlying CML. Furthermore, the CD properties of the Logistic, Bernoulli, and Tent chaotic mapping functions are investigated and compared. Simulation results, including the performance of initial vector estimation at different signal-to-noise ratios (SNRs), are provided to validate the proposed algorithm. Finally, based on the spatiotemporal chaotic characteristics of the CML, the conditions for estimating the initial vectors using symbolic dynamics are discussed. The presented method provides both theoretical and experimental results for better understanding and characterization of the behaviours of spatiotemporal chaotic systems.
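
    To make the symbolic-dynamics idea concrete, here is a minimal Python sketch of the uncoupled, one-dimensional special case (a single Logistic map; the paper's algorithm handles full coupled lattices). Forward iteration records one symbol per step, and backward iteration of the inverse map, with each symbol selecting the correct preimage branch, contracts toward the true initial value.

      import numpy as np

      def logistic(x):
          return 4.0 * x * (1.0 - x)

      def record_symbols(x0, n):
          """Iterate forward, recording one symbol per step
          (0 if the state is left of 1/2, else 1)."""
          s, x = [], x0
          for _ in range(n):
              s.append(0 if x < 0.5 else 1)
              x = logistic(x)
          return s

      def recover_initial(x_end_guess, symbols):
          """Iterate the inverse map backward; each recorded symbol picks
          the preimage branch x = 1/2 -/+ sqrt(1 - y)/2."""
          x = x_end_guess
          for bit in reversed(symbols):
              r = np.sqrt(1.0 - x) / 2.0
              x = 0.5 - r if bit == 0 else 0.5 + r
          return x

      x0 = 0.3141592
      s = record_symbols(x0, 40)
      print(recover_initial(0.5, s))   # converges to ~0.3141592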

  16. Vehicle speed affects both pre-skid braking kinematics and average tire/roadway friction.

    PubMed

    Heinrichs, Bradley E; Allin, Boyd D; Bowler, James J; Siegmund, Gunter P

    2004-09-01

    Vehicles decelerate between brake application and skid onset. To better estimate a vehicle's speed and position at brake application, we investigated how vehicle deceleration varied with initial speed during both the pre-skid and skidding intervals on dry asphalt. Skid-to-stop tests were performed from four initial speeds (20, 40, 60, and 80 km/h) using three different grades of tire (economy, touring, and performance) on a single vehicle and a single road surface. Average skidding friction was found to vary with initial speed and tire type. The post-brake/pre-skid speed loss, elapsed time, distance travelled, and effective friction were found to vary with initial speed. Based on these data, a method using skid mark length to predict vehicle speed and position at brake application rather than skid onset was shown to improve estimates of initial vehicle speed by up to 10 km/h and estimates of vehicle position at brake application by up to 8 m compared to conventional methods that ignore the post-brake/pre-skid interval. Copyright 2003 Elsevier Ltd.
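
    The reconstruction arithmetic underlying this abstract is the standard skid-to-stop relation v = sqrt(2*mu*g*d), plus a correction for the speed lost during the pre-skid interval. A small Python sketch follows; the friction coefficient, skid length, and pre-skid speed-loss value are placeholders, not the paper's measured data.

      from math import sqrt

      G = 9.81  # gravitational acceleration, m/s^2

      def speed_at_skid_onset(mu, skid_length_m):
          """Classic skid-to-stop estimate: v = sqrt(2*mu*g*d)."""
          return sqrt(2.0 * mu * G * skid_length_m)

      def speed_at_brake_application(mu, skid_length_m, preskid_dv_kmh):
          """Add the speed lost between brake application and skid onset
          (the correction the paper shows to matter; the value passed in
          here is a placeholder, not its measurements)."""
          return speed_at_skid_onset(mu, skid_length_m) + preskid_dv_kmh / 3.6

      v = speed_at_brake_application(mu=0.75, skid_length_m=20.0, preskid_dv_kmh=8.0)
      print(f"estimated speed at brake application: {3.6 * v:.1f} km/h")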

  17. Simple Form of MMSE Estimator for Super-Gaussian Prior Densities

    NASA Astrophysics Data System (ADS)

    Kittisuwan, Pichid

    2015-04-01

    The denoising methods that have become popular in recent years for additive white Gaussian noise (AWGN) are Bayesian estimation techniques, e.g., maximum a posteriori (MAP) and minimum mean square error (MMSE) estimation. For super-Gaussian prior densities, it is well known that the MMSE estimator has a complicated form. In this work, we derive the MMSE estimator using a Taylor series expansion and show that the proposed estimator leads to a simple formula. An extension of this estimator to the Pearson type VII prior density is also offered. Experimental results show that the proposed estimator approximates the original MMSE nonlinearity reasonably well.
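
    The paper's Taylor-series form is not reproduced here, but the exact MMSE nonlinearity it approximates can be evaluated numerically for any prior via Tweedie's formula, x_hat(y) = y + sigma_n^2 * d/dy log p_Y(y). The sketch below uses a Laplacian (super-Gaussian) prior; the grid and parameter values are arbitrary choices for illustration.

      import numpy as np

      def mmse_nonlinearity(sigma_n=1.0, b=1.0):
          """Exact MMSE shrinkage for a Laplacian prior in AWGN, via
          Tweedie's formula: x_hat(y) = y + sigma_n^2 * (log p_Y)'(y)."""
          y = np.linspace(-10.0, 10.0, 2001)
          dy = y[1] - y[0]
          prior = np.exp(-np.abs(y) / b) / (2.0 * b)            # p_X (Laplacian)
          noise = np.exp(-y**2 / (2.0 * sigma_n**2))
          noise /= np.sqrt(2.0 * np.pi) * sigma_n               # N(0, sigma_n^2)
          p_y = np.convolve(prior, noise, mode="same") * dy     # p_Y = p_X * noise
          x_hat = y + sigma_n**2 * np.gradient(np.log(p_y), dy)
          return y, x_hat

      y, x_hat = mmse_nonlinearity()   # plot y vs x_hat to see the shrinkage curve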

  18. Development and pilot test of a new set of good practice indicators for chronic cancer pain management.

    PubMed

    Saturno, P J; Martinez-Nicolas, I; Robles-Garcia, I S; López-Soriano, F; Angel-García, D

    2015-01-01

    Pain is among the most important symptoms in terms of prevalence and cause of distress for cancer patients and their families. However, there is a lack of clearly defined measures of quality pain management to identify problems and monitor changes in improvement initiatives. We built a comprehensive set of evidence-based indicators following a four-step model: (1) review and systematization of existing guidelines to list evidence-based recommendations; (2) review and systematization of existing indicators matching the recommendations; (3) development of new indicators to complete a set of measures for the identified recommendations; and (4) pilot test (in hospital and primary care settings) for feasibility, reliability (kappa), and usefulness for the identification of quality problems using the lot quality acceptance sampling (LQAS) method and estimates of compliance. Twenty-two indicators were eventually pilot tested. Seventeen were feasible in hospitals and 12 in all settings. Feasibility barriers included difficulties in identifying target patients, deficient clinical records and low prevalence of cases for some indicators. Reliability was mostly very good or excellent (κ > 0.8). Four indicators, all of them related to medication and prevention of side effects, had acceptable compliance at the 75%/40% LQAS level. Other important medication-related indicators (i.e., adjustment to pain intensity, prescription for breakthrough pain) and indicators concerning patient-centred care (i.e., attention to psychological distress and educational needs) had very low compliance, highlighting specific quality gaps. A set of good practice indicators has been built and pilot tested as a feasible, reliable and useful quality monitoring tool, underscoring particular and important areas for improvement. © 2014 European Pain Federation - EFIC®
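
    LQAS turns a small audit sample into an accept/reject decision between two compliance standards (here 75% acceptable versus 40% unacceptable, as in the abstract). The following Python sketch shows how such a decision threshold can be derived from the binomial distribution; the sample size and error risks are illustrative assumptions, not the paper's design.

      from math import comb

      def binom_cdf(k, n, p):
          """P(X <= k) for X ~ Binomial(n, p)."""
          return sum(comb(n, i) * p**i * (1.0 - p) ** (n - i) for i in range(k + 1))

      def lqas_threshold(n, p_acceptable=0.75, p_unacceptable=0.40,
                         alpha=0.05, beta=0.05):
          """Smallest d such that judging 'acceptable' when at least d of n
          audited cases comply keeps both misclassification risks low."""
          for d in range(n + 1):
              reject_good = binom_cdf(d - 1, n, p_acceptable)       # miss a good provider
              accept_bad = 1.0 - binom_cdf(d - 1, n, p_unacceptable)  # pass a bad one
              if reject_good <= alpha and accept_bad <= beta:
                  return d
          return None  # no threshold meets both risks for this n

      print(lqas_threshold(30))  # e.g. the decision threshold for a sample of 30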

  19. Functional Status Score for the Intensive Care Unit (FSS-ICU): An International Clinimetric Analysis of Validity, Responsiveness, and Minimal Important Difference

    PubMed Central

    Huang, Minxuan; Chan, Kitty S.; Zanni, Jennifer M.; Parry, Selina M.; Neto, Saint-Clair G. B.; Neto, Jose A. A.; da Silva, Vinicius Z. M.; Kho, Michelle E.; Needham, Dale M.

    2017-01-01

    Objective: To evaluate the internal consistency, validity, responsiveness, and minimal important difference of the Functional Status Score for the Intensive Care Unit (FSS-ICU), a physical function measure designed for the intensive care unit (ICU). Design: Clinimetric analysis. Settings: Five international data sets from the United States, Australia, and Brazil. Patients: 819 ICU patients. Intervention: None. Measurements and Main Results: Clinimetric analyses were initially conducted separately for each data source and time point to examine generalizability of findings, with pooled analyses performed thereafter to increase the power of the analyses. The FSS-ICU demonstrated good to excellent internal consistency. There was good convergent and discriminant validity, with significant and positive correlations (r = 0.30 to 0.95) between the FSS-ICU and other physical function measures, and generally weaker correlations with non-physical measures (|r| = 0.01 to 0.70). Known-group validity was demonstrated by significantly higher FSS-ICU scores among patients without ICU-acquired weakness (Medical Research Council sum score ≥48 versus <48) and with hospital discharge to home (versus a healthcare facility). FSS-ICU at ICU discharge predicted post-ICU hospital length of stay and discharge location. Responsiveness was supported via increased FSS-ICU scores with improvements in muscle strength. Distribution-based methods indicated a minimal important difference of 2.0 to 5.0. Conclusions: The FSS-ICU has good internal consistency and is a valid and responsive measure of physical function for ICU patients. The estimated minimal important difference can be used in sample size calculations and in interpreting studies comparing the physical function of groups of ICU patients. PMID:27488220
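
    The distribution-based minimal important difference (MID) estimates mentioned in the abstract are typically anchored to the score's variability, e.g., half a standard deviation or one standard error of measurement. A minimal Python sketch follows; the scores and the reliability coefficient are assumed placeholders, not study data.

      import numpy as np

      def distribution_based_mid(scores, reliability=0.90):
          """Two common distribution-based MID anchors:
          0.5 * SD, and SEM = SD * sqrt(1 - reliability)."""
          sd = np.std(scores, ddof=1)
          return 0.5 * sd, sd * np.sqrt(1.0 - reliability)

      # Hypothetical FSS-ICU scores; not data from the study.
      scores = np.array([12, 18, 25, 9, 30, 22, 15, 27, 20, 11], dtype=float)
      half_sd, sem = distribution_based_mid(scores)
      print(f"0.5*SD = {half_sd:.1f}, SEM = {sem:.1f}")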

  20. Good soldiers and good actors: prosocial and impression management motives as interactive predictors of affiliative citizenship behaviors.

    PubMed

    Grant, Adam M; Mayer, David M

    2009-07-01

    Researchers have discovered inconsistent relationships between prosocial motives and citizenship behaviors. We draw on impression management theory to propose that impression management motives strengthen the association between prosocial motives and affiliative citizenship by encouraging employees to express citizenship in ways that both "do good" and "look good." We report 2 studies that examine the interactions of prosocial and impression management motives as predictors of affiliative citizenship using multisource data from 2 different field samples. Across the 2 studies, we find positive interactions between prosocial and impression management motives as predictors of affiliative citizenship behaviors directed toward other people (helping and courtesy) and the organization (initiative). Study 2 also shows that only prosocial motives predict voice, a challenging citizenship behavior. Our results suggest that employees who are both good soldiers and good actors are most likely to emerge as good citizens in promoting the status quo.
