Chang, Hsien-Yen; Weiner, Jonathan P
2010-01-18
Diagnosis-based risk adjustment is becoming an important issue globally as a result of its implications for payment, high-risk predictive modelling and provider performance assessment. The Taiwanese National Health Insurance (NHI) programme provides universal coverage and maintains a single national computerized claims database, which enables the application of diagnosis-based risk adjustment. However, research on risk adjustment in Taiwan is limited. This study aims to examine the performance of the Adjusted Clinical Group (ACG) case-mix system using claims-based diagnosis information from the Taiwanese NHI programme. A random sample of NHI enrollees was selected. Those continuously enrolled in 2002 were included for concurrent analyses (n = 173,234), while those enrolled in both 2002 and 2003 were included for prospective analyses (n = 164,562). Health status measures derived from 2002 diagnoses were used to explain 2002 and 2003 health expenditure. A multivariate linear regression model was adopted after comparing the performance of seven different statistical models. Split-sample validation was performed to avoid overfitting. The performance measures were adjusted R² and mean absolute prediction error for five types of expenditure at the individual level, and the predictive ratio of total expenditure at the group level. The more comprehensive models performed better when used to explain resource utilization. The adjusted R² for total expenditure in concurrent/prospective analyses was 4.2%/4.4% for the demographic model, 15%/10% for the ACG or ADG (Aggregated Diagnosis Group) models, and 40%/22% for the models containing EDCs (Expanded Diagnosis Clusters). When predicting expenditure for groups based on expenditure quintiles, all models underpredicted the highest expenditure group and overpredicted the other four groups. For groups based on morbidity burden, the ACG model had the best overall performance. 
Given the widespread availability of claims data and the superior explanatory power of claims-based risk adjustment models over demographics-only models, Taiwan's government should consider using claims-based models for policy-relevant applications. The performance of the ACG case-mix system in Taiwan was comparable to that found in other countries. This suggested that the ACG system could be applied to Taiwan's NHI even though it was originally developed in the USA. Many of the findings in this paper are likely to be relevant to other diagnosis-based risk adjustment methodologies.
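The individual- and group-level performance measures named in this study are straightforward to compute. A minimal sketch (function names and data are illustrative, not taken from the study):

```python
# Common performance measures for claims-based risk adjustment models:
# adjusted R^2, mean absolute prediction error, and group predictive ratio.
# All names and numbers here are invented for illustration.

def adjusted_r2(y, y_hat, n_predictors):
    """R^2 penalized for the number of predictors in the model."""
    n = len(y)
    y_bar = sum(y) / n
    ss_res = sum((a - p) ** 2 for a, p in zip(y, y_hat))
    ss_tot = sum((a - y_bar) ** 2 for a in y)
    r2 = 1 - ss_res / ss_tot
    return 1 - (1 - r2) * (n - 1) / (n - n_predictors - 1)

def mean_abs_prediction_error(y, y_hat):
    """Individual-level mean absolute prediction error."""
    return sum(abs(a - p) for a, p in zip(y, y_hat)) / len(y)

def predictive_ratio(y, y_hat):
    """Group-level predicted/actual; >1 overprediction, <1 underprediction."""
    return sum(y_hat) / sum(y)
```

A predictive ratio below 1 for the top expenditure quintile is exactly the "underprediction of the highest expenditure group" the abstract reports.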
Code of Federal Regulations, 2010 CFR
2016-10-01
...-based payment adjustment under the Home Health Value-Based Purchasing (HHVBP) Model. § 484.330 Section... (HHVBP) Model Components for Competing Home Health Agencies Within State Boundaries § 484.330 Process for determining and applying the value-based payment adjustment under the Home Health Value-Based Purchasing...
Tedeschi, L O; Seo, S; Fox, D G; Ruiz, R
2006-12-01
Current ration formulation systems used to formulate diets on farms and to evaluate experimental data estimate metabolizable energy (ME)-allowable and metabolizable protein (MP)-allowable milk production from the intake above animal requirements for maintenance, pregnancy, and growth. The changes in body reserves, measured via the body condition score (BCS), are not accounted for in predicting ME and MP balances. This paper presents 2 empirical models developed to adjust predicted diet-allowable milk production based on changes in BCS. Empirical reserves model 1 was based on the reserves model described by the 2001 National Research Council (NRC) Nutrient Requirements of Dairy Cattle, whereas empirical reserves model 2 was developed based on published data of body weight and composition changes in lactating dairy cows. A database containing 134 individually fed lactating dairy cows from 3 trials was used to evaluate these adjustments in milk prediction based on predicted first-limiting ME or MP by the 2001 Dairy NRC and Cornell Net Carbohydrate and Protein System models. The analysis of first-limiting ME or MP milk production without adjustments for BCS changes indicated that the predictions of both models were consistent (r² of the regression between observed and model-predicted values of 0.90 and 0.85), had mean biases different from zero (12.3 and 5.34%), and had moderate but different root mean square errors of prediction (5.42 and 4.77 kg/d) for the 2001 NRC model and the Cornell Net Carbohydrate and Protein System model, respectively. The adjustment of first-limiting ME- or MP-allowable milk to BCS changes improved the precision and accuracy of both models. We further investigated 2 methods of adjustment; the first method used only the first and last BCS values, whereas the second method used the mean of weekly BCS values to adjust ME- and MP-allowable milk production. 
The adjustment to BCS changes based on first and last BCS values was more accurate than the adjustment to BCS based on the mean of all BCS values, suggesting that adjusting milk production for mean weekly variations in BCS added more variability to model-predicted milk production. We concluded that both models adequately predicted the first-limiting ME- or MP-allowable milk after adjusting for changes in BCS.
Yu, Yuanyuan; Li, Hongkai; Sun, Xiaoru; Su, Ping; Wang, Tingting; Liu, Yi; Yuan, Zhongshang; Liu, Yanxun; Xue, Fuzhong
2017-12-28
Confounders can produce spurious associations between exposure and outcome in observational studies. For the majority of epidemiologists, adjusting for confounders with a logistic regression model is the habitual method, though it has some problems in accuracy and precision. It is therefore important to highlight the problems of logistic regression and to search for alternative methods. Four causal diagram models were defined to summarize confounding equivalence. Both theoretical proofs and simulation studies were performed to verify whether conditioning on different confounding equivalence sets had the same bias-reducing potential and then to select the optimal adjustment strategy, comparing the logistic regression model with the inverse probability weighting based marginal structural model (IPW-based-MSM). The "do-calculus" was used to calculate the true causal effect of exposure on outcome, and the bias and standard error were then used to evaluate the performance of the different strategies. Adjusting for different sets of confounding equivalence, as judged by identical Markov boundaries, produced different bias-reducing potential in the logistic regression model. For the sets satisfying G-admissibility, adjusting for the set including all the confounders reduced the bias to the same extent as adjusting for the set containing the parent nodes of the outcome, while the bias after adjusting for the parent nodes of exposure was not equivalent to them. In addition, all causal effect estimations through logistic regression were biased, although the estimation after adjusting for the parent nodes of exposure was nearest to the true causal effect. However, conditioning on different confounding equivalence sets had the same bias-reducing potential under IPW-based-MSM. 
Compared with logistic regression, IPW-based-MSM could obtain unbiased causal effect estimation when the adjusted confounders satisfied G-admissibility, and the optimal strategy was to adjust for the parent nodes of the outcome, which achieved the highest precision. All adjustment strategies through logistic regression were biased for causal effect estimation, while IPW-based-MSM could always obtain unbiased estimation when the adjusted set satisfied G-admissibility. Thus, IPW-based-MSM is recommended for adjusting for confounder sets.
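As a toy illustration of the IPW idea (not the study's actual simulations), consider one binary confounder C, exposure A and outcome Y with Y = A + C, so the true effect of A is 1; the naive comparison is confounded, while inverse probability weighting recovers the truth:

```python
# Toy inverse-probability-weighting (IPW) example with one binary confounder.
# Invented data: Y = A + C, P(A=1|C=1)=0.8, P(A=1|C=0)=0.2.
records = ([(1, 1, 2)] * 8 + [(1, 0, 1)] * 2 +   # C=1 stratum (c, a, y)
           [(0, 1, 1)] * 2 + [(0, 0, 0)] * 8)    # C=0 stratum
n = len(records)

# Empirical propensity scores P(A=1 | C=c)
p_a1 = {c: sum(1 for cc, a, _ in records if cc == c and a == 1) /
           sum(1 for cc, _, _ in records if cc == c) for c in (0, 1)}

# Horvitz-Thompson weighted means of Y under exposure and non-exposure
ey1 = sum(y / p_a1[c] for c, a, y in records if a == 1) / n
ey0 = sum(y / (1 - p_a1[c]) for c, a, y in records if a == 0) / n
ipw_effect = ey1 - ey0           # recovers the true effect of 1

# Naive (unweighted) contrast is biased upward by the confounder
treated = [y for _, a, y in records if a == 1]
control = [y for _, a, y in records if a == 0]
naive = sum(treated) / len(treated) - sum(control) / len(control)
```

Here the naive contrast is 1.6 while the IPW estimate equals the true effect of 1, mirroring the paper's point that weighting by the exposure model can remove confounding that outcome-model adjustment handles poorly.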
Health-Based Capitation Risk Adjustment in Minnesota Public Health Care Programs
Gifford, Gregory A.; Edwards, Kevan R.; Knutson, David J.
2004-01-01
This article documents the history and implementation of health-based capitation risk adjustment in Minnesota public health care programs, and identifies key implementation issues. Capitation payments in these programs are risk adjusted using a historical health plan risk score based on concurrent risk assessment. Phased implementation of capitation risk adjustment for these programs began January 1, 2000. Minnesota's experience with capitation risk adjustment suggests that: (1) implementation can accelerate encounter data submission, (2) administrative decisions made during implementation can create issues that impact payment model performance, and (3) changes in diagnosis data management during implementation may require changes to the payment model. PMID:25372356
Gómez, Fátima Somovilla; Lorza, Rubén Lostado; Bobadilla, Marina Corral; García, Rubén Escribano
2017-09-21
The kinematic behavior of models that are based on the finite element method (FEM) for modeling the human body depends greatly on an accurate estimate of the parameters that define such models. This task is complex, and any small difference between the actual biomaterial model and the simulation model based on FEM can be amplified enormously in the presence of nonlinearities. The current paper attempts to demonstrate how a combination of the FEM and the MRS methods with desirability functions can be used to obtain the material parameters that are most appropriate for use in defining the behavior of Finite Element (FE) models of the healthy human lumbar intervertebral disc (IVD). The FE model parameters were adjusted on the basis of experimental data from selected standard tests (compression, flexion, extension, shear, lateral bending, and torsion) and were developed as follows: First, three-dimensional parameterized FE models were generated on the basis of the mentioned standard tests. Then, 11 parameters were selected to define the proposed parameterized FE models. For each of the standard tests, regression models were generated using MRS to model the six stiffness values and nine bulges of the healthy IVD models that were created by changing the parameters of the FE models. The optimal combination of the 11 parameters was based on three different adjustment criteria. The latter, in turn, were based on the combination of stiffnesses and bulges that were obtained from the standard test FE simulations. The first adjustment criterion considered stiffness and bulges to be equally important in the adjustment of FE model parameters. The second criterion considered stiffness most important, whereas the third considered the bulges most important. The proposed adjustment methods were applied to a medium-sized human IVD that corresponded to the L3-L4 lumbar level with standard dimensions of width = 50 mm, depth = 35 mm, and height = 10 mm. 
Agreement between the kinematic behavior that was obtained with the optimized parameters and that obtained from the literature demonstrated that the proposed method is a powerful tool with which to adjust healthy IVD FE models when there are many parameters, stiffnesses, and bulges to which the models must adjust.
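The desirability-function approach mentioned above combines several responses (here, stiffnesses and bulges) into one score to optimize. A common Derringer-Suich-style form, sketched with invented targets rather than the paper's actual functions, normalizes each response to [0, 1] and takes the geometric mean:

```python
import math

def desirability_target(y, low, target, high):
    """Map a response to [0, 1]: 1 at the target, 0 outside [low, high].
    Piecewise-linear 'target is best' desirability."""
    if y <= low or y >= high:
        return 0.0
    if y <= target:
        return (y - low) / (target - low)
    return (high - y) / (high - target)

def overall_desirability(ds):
    """Geometric mean of individual desirabilities (0 if any is 0),
    so one unacceptable response vetoes the whole parameter set."""
    if any(d == 0 for d in ds):
        return 0.0
    return math.exp(sum(math.log(d) for d in ds) / len(ds))
```

Weighting some responses more heavily (as in the paper's second and third criteria, which favored stiffness or bulges) corresponds to raising the matching desirabilities to larger exponents before taking the mean.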
NASA Astrophysics Data System (ADS)
Darko, Deborah; Adjei, Kwaku A.; Appiah-Adjei, Emmanuel K.; Odai, Samuel N.; Obuobie, Emmanuel; Asmah, Ruby
2018-06-01
The extent to which statistical bias-adjusted outputs of two regional climate models alter the projected change signals for the mean (and extreme) rainfall and temperature over the Volta Basin is evaluated. The outputs from two regional climate models in the Coordinated Regional Climate Downscaling Experiment for Africa (CORDEX-Africa) are bias adjusted using the quantile mapping technique. Annual maxima rainfall and temperature with their 10- and 20-year return values for the present (1981-2010) and future (2051-2080) climates are estimated using extreme value analyses. Moderate extremes are evaluated using extreme indices (viz. percentile-based, duration-based, and intensity-based). Bias adjustment of the original (bias-unadjusted) models improves the reproduction of mean rainfall and temperature for the present climate. However, the bias-adjusted models poorly reproduce the 10- and 20-year return values for rainfall and maximum temperature whereas the extreme indices are reproduced satisfactorily for the present climate. Consequently, projected changes in rainfall and temperature extremes were weak. The bias adjustment results in the reduction of the change signals for the mean rainfall while the mean temperature signals are rather magnified. The projected changes for the original mean climate and extremes are not conserved after bias adjustment with the exception of duration-based extreme indices.
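Empirical quantile mapping of the kind applied here replaces each model value with the observed value at the same empirical quantile in a reference period. A minimal sketch (toy reference samples; real applications typically fit the mapping per season and treat rainfall zeros separately):

```python
import math

def empirical_cdf(sample, x):
    """Fraction of the reference sample at or below x."""
    return sum(v <= x for v in sample) / len(sample)

def empirical_quantile(sample, p):
    """Value at probability p in the sorted reference sample."""
    s = sorted(sample)
    i = max(0, math.ceil(p * len(s)) - 1)
    return s[i]

def quantile_map(x, model_ref, obs_ref):
    """Bias-adjust a raw model value: find its quantile in the model
    reference distribution, then return the observed value there."""
    return empirical_quantile(obs_ref, empirical_cdf(model_ref, x))
```

Because the mapping is calibrated on the present-climate distribution, it can rescale projected change signals, which is the effect on means and extremes that this study quantifies.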
Prinsze, Femmeke J; van Vliet, René C J A
Since 1991, risk-adjusted premium subsidies have existed in the Dutch social health insurance sector, which covered about two-thirds of the population until 2006. In 2002, pharmacy-based cost groups (PCGs) were included in the demographic risk adjustment model, which improved the goodness-of-fit, as measured by the R2, to 11.5%. The model's R2 reached 22.8% in 2004, when inpatient diagnostic information was added in the form of diagnostic cost groups (DCGs). PCGs and DCGs appear to be complementary in their ability to predict future costs. PCGs particularly improve the R2 for outpatient expenses, whereas DCGs improve the R2 for inpatient expenses. In 2006, this system of risk-adjusted premium subsidies was extended to cover the entire population.
Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua
2018-01-01
Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of the study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method to published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
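The resampling idea compared in this study can be sketched in a few lines: re-estimate the quantity of interest on bootstrap resamples and read the uncertainty off the percentiles (toy data and a simple mean; a real RPSFT analysis would re-run the full switching adjustment inside the loop):

```python
import random

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for stat(data)."""
    rng = random.Random(seed)
    reps = sorted(stat([rng.choice(data) for _ in data])
                  for _ in range(n_boot))
    lo = reps[int(n_boot * alpha / 2)]
    hi = reps[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

mean = lambda xs: sum(xs) / len(xs)
sample = [4.1, 5.3, 6.0, 4.8, 5.5, 6.2, 5.0, 4.6]  # e.g. life-years
lo, hi = bootstrap_ci(sample, mean)
```

Propagating each resample's estimate through the decision model, rather than using the point estimate alone, is what widens the confidence intervals relative to the no-adjustment approach described above.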
NASA Astrophysics Data System (ADS)
Borup, Morten; Grum, Morten; Linde, Jens Jørgen; Mikkelsen, Peter Steen
2016-08-01
Numerous studies have shown that radar rainfall estimates need to be adjusted against rain gauge measurements in order to be useful for hydrological modelling. In the current study we investigate whether adjustment can improve radar rainfall estimates to the point where they can be used for modelling overflows from urban drainage systems, and we furthermore investigate the importance of the aggregation period of the adjustment scheme. This is done by continuously adjusting X-band radar data based on the previous 5-30 min of rain data recorded by multiple rain gauges and propagating the rainfall estimates through a hydraulic urban drainage model. The model is built entirely from physical data, without any calibration, to avoid bias towards any specific type of rainfall estimate. The performance is assessed by comparing measured and modelled water levels at a weir downstream of a highly impermeable, well-defined, 64 ha urban catchment, for nine overflow-generating rain events. The dynamically adjusted radar data perform best when the aggregation period is as small as 10-20 min, in which case they perform much better than statically adjusted radar data and data from rain gauges situated 2-3 km away.
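A dynamic gauge adjustment of this general type can be sketched as a moving-window mean-field bias factor (invented data and thresholds; the study's actual scheme uses multiple gauges and X-band specifics not reproduced here):

```python
def mean_field_bias(gauge_window, radar_window, min_total=0.1):
    """Adjustment factor from gauge and radar rainfall sums over the
    last N minutes; falls back to 1.0 (no adjustment) when there is
    too little rain in the window to estimate a stable ratio."""
    g, r = sum(gauge_window), sum(radar_window)
    if g < min_total or r < min_total:
        return 1.0
    return g / r

def adjust(radar_value, gauge_window, radar_window):
    """Scale the current radar estimate by the recent bias factor."""
    return radar_value * mean_field_bias(gauge_window, radar_window)
```

Shortening the window makes the factor track the current event (the 10-20 min behaviour found above) at the cost of noisier ratios, which is the aggregation-period trade-off the study examines.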
Contact angle adjustment in equation-of-state-based pseudopotential model.
Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong
2016-05-01
The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on the geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension.
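When validating a wetting condition like the geometric one proposed here, the realized contact angle is often measured from the simulated droplet's shape; for a spherical-cap droplet of height h and base width w, the standard geometric relation tan(θ/2) = 2h/w gives the angle directly (a generic measurement formula, not the paper's adjustment method):

```python
import math

def contact_angle_deg(height, base_width):
    """Contact angle of a spherical-cap droplet from its height and
    base width on the wall, via tan(theta/2) = 2*h / w."""
    return math.degrees(2.0 * math.atan(2.0 * height / base_width))
```

A hemispherical droplet (height equal to half the base width) gives 90 degrees, the neutral-wetting case; flatter caps give smaller, more hydrophilic angles.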
Risk adjustment alternatives in paying for behavioral health care under Medicaid.
Ettner, S L; Frank, R G; McGuire, T G; Hermann, R C
2001-01-01
OBJECTIVE: To compare the performance of various risk adjustment models in behavioral health applications such as setting mental health and substance abuse (MH/SA) capitation payments or overall capitation payments for populations including MH/SA users. DATA SOURCES/STUDY DESIGN: The 1991-93 administrative data from the Michigan Medicaid program were used. We compared mean absolute prediction error for several risk adjustment models and simulated the profits and losses that behavioral health care carve outs and integrated health plans would experience under risk adjustment if they enrolled beneficiaries with a history of MH/SA problems. Models included basic demographic adjustment, Adjusted Diagnostic Groups, Hierarchical Condition Categories, and specifications designed for behavioral health. PRINCIPAL FINDINGS: Differences in predictive ability among risk adjustment models were small and generally insignificant. Specifications based on relatively few MH/SA diagnostic categories did as well as or better than models controlling for additional variables such as medical diagnoses at predicting MH/SA expenditures among adults. Simulation analyses revealed that among both adults and minors considerable scope remained for behavioral health care carve outs to make profits or losses after risk adjustment based on differential enrollment of severely ill patients. Similarly, integrated health plans have strong financial incentives to avoid MH/SA users even after adjustment. CONCLUSIONS: Current risk adjustment methodologies do not eliminate the financial incentives for integrated health plans and behavioral health care carve-out plans to avoid high-utilizing patients with psychiatric disorders. PMID:11508640
Hoos, Anne B.; Patel, Anant R.
1996-01-01
Model-adjustment procedures were applied to the combined data bases of storm-runoff quality for Chattanooga, Knoxville, and Nashville, Tennessee, to improve predictive accuracy for storm-runoff quality for urban watersheds in these three cities and throughout Middle and East Tennessee. Data for 45 storms at 15 different sites (five sites in each city) constitute the data base. Comparison of observed values of storm-runoff load and event-mean concentration to the predicted values from the regional regression models for 10 constituents shows prediction errors as large as 806,000 percent. Model-adjustment procedures, which combine the regional model predictions with local data, are applied to improve predictive accuracy. Standard error of estimate after model adjustment ranges from 67 to 322 percent. Calibration results may be biased due to sampling error in the Tennessee data base. The relatively large values of standard error of estimate for some of the constituent models, although representing significant reduction (at least 50 percent) in prediction error compared to estimation with unadjusted regional models, may be unacceptable for some applications. The user may wish to collect additional local data for these constituents and repeat the analysis, or calibrate an independent local regression model.
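One simple variant of combining regional predictions with local data is to refit a line through paired (regional prediction, local observation) values and apply it to new predictions (a sketch with invented numbers; the report's actual procedures also involve log-space regression not shown here):

```python
def fit_line(x, y):
    """Ordinary least squares for y = a + b*x (single predictor)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Hypothetical paired values: unadjusted regional estimates vs. local
# observations of the same storms (here local = 2 + 0.6 * regional).
regional_pred = [10.0, 20.0, 30.0, 40.0]
local_obs = [8.0, 14.0, 20.0, 26.0]
a, b = fit_line(regional_pred, local_obs)

def adjusted(p):
    """Locally adjusted prediction from a regional-model prediction."""
    return a + b * p
```

The slope and intercept absorb the systematic local bias of the regional model, which is how adjustment can cut very large regional prediction errors down to the standard errors quoted above.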
Risk adjustment models for short-term outcomes after surgical resection for oesophagogastric cancer.
Fischer, C; Lingsma, H; Hardwick, R; Cromwell, D A; Steyerberg, E; Groene, O
2016-01-01
Outcomes for oesophagogastric cancer surgery are compared with the aim of benchmarking quality of care. Adjusting for patient characteristics is crucial to avoid biased comparisons between providers. The study objective was to develop a case-mix adjustment model for comparing 30- and 90-day mortality and anastomotic leakage rates after oesophagogastric cancer resections. The study reviewed existing models, considered expert opinion and examined audit data in order to select predictors that were consequently used to develop a case-mix adjustment model for the National Oesophago-Gastric Cancer Audit, covering England and Wales. Models were developed on patients undergoing surgical resection between April 2011 and March 2013 using logistic regression. Model calibration and discrimination were quantified using a bootstrap procedure. Most existing risk models for oesophagogastric resections were methodologically weak, outdated or based on detailed laboratory data that are not generally available. In 4882 patients with oesophagogastric cancer used for model development, 30- and 90-day mortality rates were 2.3 and 4.4 per cent respectively, and 6.2 per cent of patients developed an anastomotic leak. The internally validated models, based on predictors selected from the literature, showed moderate discrimination (area under the receiver operating characteristic (ROC) curve 0.646 for 30-day mortality, 0.664 for 90-day mortality and 0.587 for anastomotic leakage) and good calibration. Based on available data, three case-mix adjustment models for postoperative outcomes in patients undergoing curative surgery for oesophagogastric cancer were developed. These models should be used for risk adjustment when assessing hospital performance in the National Health Service, and tested in other large health systems. © 2015 BJS Society Ltd Published by John Wiley & Sons Ltd.
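The discrimination statistic reported here (area under the ROC curve) equals the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case; a direct pairwise sketch (fine for illustration, not for large audits):

```python
def roc_auc(scores_pos, scores_neg):
    """AUC as the fraction of (case, non-case) pairs ranked correctly,
    counting ties as half. O(n*m) pairwise version for clarity."""
    wins = sum((p > q) + 0.5 * (p == q)
               for p in scores_pos for q in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.5 is chance-level ranking and 1.0 is perfect separation, which puts the 0.587-0.664 values above in the "moderate discrimination" range the authors describe.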
Finite element fatigue analysis of rectangular clutch spring of automatic slack adjuster
NASA Astrophysics Data System (ADS)
Xu, Chen-jie; Luo, Zai; Hu, Xiao-feng; Jiang, Wen-song
2015-02-01
The failure of the rectangular clutch spring of an automatic slack adjuster directly affects the operation of the automatic slack adjuster. We establish a structural mechanics model of the automatic slack adjuster's rectangular clutch spring based on its working principle and mechanical structure. In addition, we load this structural mechanics model into the ANSYS Workbench FEA system to predict the fatigue life of the rectangular clutch spring. FEA results show that the fatigue life of the rectangular clutch spring is 2.0403 × 10⁵ cycles under the effect of braking loads. Meanwhile, fatigue tests of 20 automatic slack adjusters were carried out on a fatigue test bench to verify the conclusion of the structural mechanics model. The experimental results show that the mean fatigue life of the rectangular clutch spring is 1.9101 × 10⁵ cycles, which agrees with the results of the finite element analysis using the ANSYS Workbench FEA system.
Rutter, Martin K.; Massaro, Joseph M.; Hoffmann, Udo; O’Donnell, Christopher J.; Fox, Caroline S.
2012-01-01
OBJECTIVE Our objective was to assess whether impaired fasting glucose (IFG) and obesity are independently related to coronary artery calcification (CAC) in a community-based population. RESEARCH DESIGN AND METHODS We assessed CAC using multidetector computed tomography in 3,054 Framingham Heart Study participants (mean [SD] age was 50 [10] years, 49% were women, 29% had IFG, and 25% were obese) free from known vascular disease or diabetes. We tested the hypothesis that IFG (5.6–6.9 mmol/L) and obesity (BMI ≥30 kg/m2) were independently associated with high CAC (>90th percentile for age and sex) after adjusting for hypertension, lipids, smoking, and medication. RESULTS High CAC was significantly related to IFG in an age- and sex-adjusted model (odds ratio 1.4 [95% CI 1.1–1.7], P = 0.002; referent: normal fasting glucose) and after further adjustment for obesity (1.3 [1.0–1.6], P = 0.045). However, IFG was not associated with high CAC in multivariable-adjusted models before (1.2 [0.9–1.4], P = 0.20) or after adjustment for obesity. Obesity was associated with high CAC in age- and sex-adjusted models (1.6 [1.3–2.0], P < 0.001) and in multivariable models that included IFG (1.4 [1.1–1.7], P = 0.005). Multivariable-adjusted spline regression models suggested nonlinear relationships linking high CAC with BMI (J-shaped), waist circumference (J-shaped), and fasting glucose. CONCLUSIONS In this community-based cohort, CAC was associated with obesity, but not IFG, after adjusting for important confounders. With the increasing worldwide prevalence of obesity and nondiabetic hyperglycemia, these data underscore the importance of obesity in the pathogenesis of CAC. PMID:22773705
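The adjusted odds ratios quoted in this study come from exponentiating logistic-regression coefficients; the conversion, with a Wald 95% confidence interval, is one line each (illustrative numbers only, not the study's actual coefficients):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and Wald 95% CI from a logistic-regression
    coefficient and its standard error."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# e.g. a hypothetical coefficient of ln(1.4) with SE 0.11
or_, lo, hi = odds_ratio_ci(math.log(1.4), 0.11)
```

A confidence interval whose lower bound sits above 1 (like the 1.1-1.7 interval reported for IFG in the age- and sex-adjusted model) corresponds to a coefficient significantly greater than zero.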
Diagnosis-Based Risk Adjustment for Medicare Capitation Payments
Ellis, Randall P.; Pope, Gregory C.; Iezzoni, Lisa I.; Ayanian, John Z.; Bates, David W.; Burstin, Helen; Ash, Arlene S.
1996-01-01
Using 1991-92 data for a 5-percent Medicare sample, we develop, estimate, and evaluate risk-adjustment models that utilize diagnostic information from both inpatient and ambulatory claims to adjust payments for aged and disabled Medicare enrollees. Hierarchical coexisting conditions (HCC) models achieve greater explanatory power than diagnostic cost group (DCG) models by taking account of multiple coexisting medical conditions. Prospective models predict average costs of individuals with chronic conditions nearly as well as concurrent models. All models predict medical costs far more accurately than the current health maintenance organization (HMO) payment formula. PMID:10172666
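The regression machinery behind such claims-based risk adjustment can be sketched in a few lines: annual cost is regressed on demographics plus binary condition-group indicators, and the fit yields person-level predicted costs whose explanatory power (R²) is the usual yardstick. This is a minimal illustration on simulated data with assumed coefficients, not the HCC/DCG models themselves.

```python
import numpy as np

# Simulated population: demographics plus two illustrative condition flags.
rng = np.random.default_rng(0)
n = 500
age = rng.integers(65, 95, n).astype(float)
female = rng.integers(0, 2, n).astype(float)
diabetes = rng.integers(0, 2, n).astype(float)
chf = rng.integers(0, 2, n).astype(float)
# Assumed "true" cost process, for illustration only.
cost = (2000 + 30 * age + 400 * female + 3000 * diabetes + 8000 * chf
        + rng.normal(0, 500, n))

# Fit the risk-adjustment regression and predict person-level cost.
X = np.column_stack([np.ones(n), age, female, diabetes, chf])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
predicted = X @ beta

# Explanatory power (R^2) of the model.
ss_res = np.sum((cost - predicted) ** 2)
ss_tot = np.sum((cost - cost.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

In real applications the condition indicators come from grouped diagnosis codes on inpatient and ambulatory claims, and a prospective model would regress next-year cost on this-year indicators.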
Projecting School Psychology Staffing Needs Using a Risk-Adjusted Model.
ERIC Educational Resources Information Center
Stellwagen, Kurt
A model is proposed to project optimal school psychology service ratios based upon the percentages of at-risk students enrolled within a given school population. Using the standard 1:1,000 service ratio advocated by The National Association of School Psychologists (NASP) as a starting point, ratios are then adjusted based upon the size of three…
Boehm, K. -J.; Gibson, C. R.; Hollaway, J. R.; ...
2016-09-01
This study presents the design of a flexure-based mount allowing adjustment in three rotational degrees of freedom (DOFs) through high-precision set-screw actuators. The requirements of the application called for small but controlled angular adjustments for mounting a cantilevered beam. The proposed design is based on an array of parallel beams to provide sufficiently high stiffness in the translational directions while allowing angular adjustment through the actuators. A simplified physical model in combination with standard beam theory was applied to estimate the deflection profile and maximum stresses in the beams. A finite element model was built to calculate the stresses and beam profiles for scenarios in which the flexure is simultaneously actuated in more than one DOF.
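The standard beam-theory estimates the abstract alludes to can be illustrated for a single cantilevered rectangular beam under a tip load; the material properties, dimensions, and load below are illustrative assumptions, not values from the study.

```python
# Cantilever beam under a tip load: classic small-deflection beam theory.
E = 200e9            # Young's modulus (steel), Pa -- assumed
b, h = 5e-3, 1e-3    # beam width and thickness, m -- assumed
L = 30e-3            # beam length, m -- assumed
F = 2.0              # tip load, N -- assumed

I = b * h ** 3 / 12.0                # second moment of area, m^4
delta = F * L ** 3 / (3.0 * E * I)   # tip deflection, m
sigma_max = F * L * (h / 2.0) / I    # max bending stress at the fixed end, Pa
```

For these numbers the tip deflects about 0.22 mm with a peak stress of 72 MPa, the kind of back-of-envelope check a finite element model then refines for combined multi-DOF actuation.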
NASA Astrophysics Data System (ADS)
Zhu, Wenlong; Ma, Shoufeng; Tian, Junfang; Li, Geng
2016-11-01
Travelers' route adjustment behaviors in a congested road traffic network are widely regarded as a dynamic game process among travelers. Proportional-Switch Adjustment Process (PSAP) models have been extensively investigated to characterize travelers' route choice behaviors because of their concise structure and intuitive behavioral rule. Unfortunately, most of these models have limitations, e.g., the flow over-adjustment problem in the discrete PSAP model and the reliance on absolute cost differences to drive route adjustment. This paper proposes a relative-Proportion-based Route Adjustment Process (rePRAP) that maintains the advantages of PSAP while overcoming these limitations. The rePRAP describes the situation in which travelers on a higher-cost route switch to lower-cost alternatives at a rate that depends solely on the relative cost differences between the higher-cost route and its alternatives. It is verified to be consistent with the principle of the rational behavior adjustment process. The equivalence among user equilibrium (UE), the stationary path flow pattern, and the stationary link flow pattern is established, which can be applied to judge whether a given network traffic flow has reached UE by detecting whether the link flow pattern is stationary. The stability theorem is proved by the Lyapunov function approach. A simple example is tested to demonstrate the effectiveness of the rePRAP model.
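A minimal two-route sketch in the spirit of such a relative-proportion adjustment (the linear link costs, step size, and switch rule here are illustrative assumptions, not the paper's exact formulation): flow leaves the higher-cost route at a rate proportional to the *relative* cost difference, and the process settles at user equilibrium, where route costs equalize.

```python
def route_costs(flows, a=(1.0, 2.0), b=(0.02, 0.01)):
    """Assumed linear link costs c_i = a_i + b_i * f_i."""
    return [a[i] + b[i] * flows[i] for i in range(2)]

def adjust(flows, step=0.5, iters=200):
    """Iteratively shift flow from the costlier route to the cheaper one."""
    f = list(flows)
    for _ in range(iters):
        c = route_costs(f)
        hi, lo = (0, 1) if c[0] > c[1] else (1, 0)
        # Switch rate depends on the RELATIVE cost difference, capped by flow.
        shift = min(f[hi], step * f[hi] * (c[hi] - c[lo]) / c[hi])
        f[hi] -= shift
        f[lo] += shift
    return f

f = adjust([100.0, 0.0])   # total demand of 100 starts on route 0
c = route_costs(f)
```

For these costs the user equilibrium splits the 100 units as (200/3, 100/3), at which point both routes cost the same, so the switch rate vanishes and the flow pattern is stationary.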
Tang, Yongqiang
2017-12-01
Control-based pattern mixture models (PMM) and delta-adjusted PMMs are commonly used as sensitivity analyses in clinical trials with non-ignorable dropout. These PMMs assume that the statistical behavior of outcomes varies by pattern in the experimental arm in the imputation procedure, but the imputed data are typically analyzed by a standard method such as the primary analysis model. In the multiple imputation (MI) inference, Rubin's variance estimator is generally biased when the imputation and analysis models are uncongenial. One objective of the article is to quantify the bias of Rubin's variance estimator in the control-based and delta-adjusted PMMs for longitudinal continuous outcomes. These PMMs assume the same observed data distribution as the mixed effects model for repeated measures (MMRM). We derive analytic expressions for the MI treatment effect estimator and the associated Rubin's variance in these PMMs and MMRM as functions of the maximum likelihood estimator from the MMRM analysis and the observed proportion of subjects in each dropout pattern when the number of imputations is infinite. The asymptotic bias is generally small or negligible in the delta-adjusted PMM, but can be sizable in the control-based PMM. This indicates that the inference based on Rubin's rule is approximately valid in the delta-adjusted PMM. A simple variance estimator is proposed to ensure asymptotically valid MI inferences in these PMMs, and compared with the bootstrap variance. The proposed method is illustrated by the analysis of an antidepressant trial, and its performance is further evaluated via a simulation study. © 2017, The International Biometric Society.
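Rubin's variance estimator, whose bias is at issue above, combines the m completed-data analyses in a few lines: the pooled estimate is the mean, and the total variance adds the between-imputation component inflated by (1 + 1/m). The numbers below are illustrative, not trial data.

```python
import statistics

# Per-imputation treatment-effect estimates and their squared standard errors
# (illustrative values for m = 5 imputed data sets).
estimates = [1.8, 2.1, 2.0, 1.9, 2.2]
within = [0.10, 0.12, 0.11, 0.09, 0.13]
m = len(estimates)

pooled = sum(estimates) / m
W = sum(within) / m                 # average within-imputation variance
B = statistics.variance(estimates)  # between-imputation variance (sample)
T = W + (1 + 1 / m) * B             # Rubin's total variance
```

Uncongeniality between the imputation and analysis models shows up as a mismatch between T and the true sampling variance of the pooled estimator, which is exactly what the article quantifies for control-based and delta-adjusted PMMs.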
Johnson, Brent A
2009-10-01
We consider estimation and variable selection in the partial linear model for censored data. The partial linear model for censored data is a direct extension of the accelerated failure time model, the latter of which is a very important alternative to the proportional hazards model. We extend rank-based lasso-type estimators to a model that may contain nonlinear effects. Variable selection in such a partial linear model has direct application to high-dimensional survival analyses that attempt to adjust for clinical predictors. In the microarray setting, previous methods can adjust for other clinical predictors by assuming that clinical and gene expression data enter the model linearly in the same fashion. Here, we select important variables after adjusting for prognostic clinical variables, but the clinical effects are assumed to be nonlinear. Our estimator is based on stratification and can be extended naturally to account for multiple nonlinear effects. We illustrate the utility of our method through simulation studies and application to the Wisconsin prognostic breast cancer data set.
Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les
2008-01-01
To compare three predictive models based on logistic regression to estimate adjusted likelihood ratios allowing for interdependency between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models, and a statistical extension of methods and application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. The Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods because it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes' approach when it is important to adjust for dependency of test errors. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.
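The likelihood-ratio arithmetic these models adjust can be sketched directly: likelihood ratios update pretest odds, and under naive independence the LRs of several tests simply multiply — which is exactly the assumption the adjusted-LR methods correct when tests are correlated. The probabilities and LR values below are illustrative.

```python
def posttest_prob(pretest_prob, *likelihood_ratios):
    """Bayes' rule on the odds scale: odds are multiplied by each LR in turn
    (the naive independence assumption when several LRs are supplied)."""
    odds = pretest_prob / (1 - pretest_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

p = posttest_prob(0.2, 4.0)         # one positive test with LR+ = 4
p2 = posttest_prob(0.2, 4.0, 2.0)   # naive product for a second test, LR+ = 2
```

An adjusted-LR approach would replace the raw LRs in the product with shrunken or offset-adjusted values so that correlated tests do not double-count the same evidence.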
Yock, Adam D.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Kudchadker, Rajat J.; Court, Laurence E.
2014-01-01
Purpose: To create models that forecast longitudinal trends in changing tumor morphology and to evaluate and compare their predictive potential throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe 35 gross tumor volumes (GTVs) throughout the course of intensity-modulated radiation therapy for oropharyngeal tumors. The feature vectors comprised the coordinates of the GTV centroids and a description of GTV shape using either interlandmark distances or a spherical harmonic decomposition of these distances. The change in the morphology feature vector observed at 33 time points throughout the course of treatment was described using static, linear, and mean models. Models were adjusted at 0, 1, 2, 3, or 5 different time points (adjustment points) to improve prediction accuracy. The potential of these models to forecast GTV morphology was evaluated using leave-one-out cross-validation, and the accuracy of the models was compared using Wilcoxon signed-rank tests. Results: Adding a single adjustment point to the static model without any adjustment points decreased the median error in forecasting the position of GTV surface landmarks by the largest amount (1.2 mm). Additional adjustment points further decreased the forecast error by about 0.4 mm each. Selection of the linear model decreased the forecast error for both the distance-based and spherical harmonic morphology descriptors (0.2 mm), while the mean model decreased the forecast error for the distance-based descriptor only (0.2 mm). The magnitude and statistical significance of these improvements decreased with each additional adjustment point, and the effect from model selection was not as large as that from adding the initial points. Conclusions: The authors present models that anticipate longitudinal changes in tumor morphology using various models and model adjustment schemes. 
The accuracy of these models depended on their form, and the utility of these models includes the characterization of patient-specific response with implications for treatment management and research study design. PMID:25086518
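The three model forms can be sketched with a scalar stand-in for the morphology feature vector: after an adjustment point at time t0, the static model holds the last observed value, the linear model extrapolates the most recent slope, and the mean model applies a population-average trend. The trajectory and slope values here are illustrative assumptions.

```python
def forecast(obs, t, t0, mode, pop_slope=0.0):
    """Predict the value at time t from observations up to adjustment point t0."""
    if mode == "static":
        return obs[t0]
    if mode == "linear":                  # slope from the last two observations
        return obs[t0] + (obs[t0] - obs[t0 - 1]) * (t - t0)
    if mode == "mean":                    # population-average change per step
        return obs[t0] + pop_slope * (t - t0)
    raise ValueError(mode)

traj = [10.0, 9.8, 9.6, 9.4, 9.2]        # a steadily shrinking GTV measure
pred_static = forecast(traj, t=4, t0=2, mode="static")
pred_linear = forecast(traj, t=4, t0=2, mode="linear")
pred_mean = forecast(traj, t=4, t0=2, mode="mean", pop_slope=-0.2)
```

For a steadily shrinking target the static model lags behind, while the linear and mean models track the trend — mirroring the paper's finding that both trend models reduce forecast error relative to the static model, with diminishing returns from each additional adjustment point.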
Kroniger, Konstantin; Banerjee, Tirtha; De Roo, Frederik; ...
2017-10-06
A two-dimensional analytical model for describing the mean flow behavior inside a vegetation canopy after a leading edge in neutral conditions was developed and tested by means of large eddy simulations (LES) employing the LES code PALM. The analytical model is developed for the region directly after the canopy edge, the adjustment region, where one-dimensional canopy models fail due to the sharp change in roughness. The derivation of this adjustment region model is based on an analytic solution of the two-dimensional Reynolds-averaged Navier–Stokes equation in neutral conditions for a canopy with constant plant area density (PAD). The main assumptions for solving the governing equations are the separability of the velocity components with respect to the spatial variables and the neglect of the Reynolds stress gradients. These two assumptions are verified by means of LES. To determine the emerging model parameters, a simultaneous fitting scheme was applied to the velocity and pressure data of a reference LES simulation. Furthermore, a sensitivity analysis of the adjustment region model, equipped with the previously calculated parameters, was performed by varying the three relevant lengths, the canopy height (h), the canopy length, and the adjustment length (Lc), in additional LES. Although the model parameters are, in general, functions of h/Lc, the model was found to be capable of predicting the flow quantities in various cases when using constant parameters. Subsequently, the adjustment region model is combined with the one-dimensional model of Massman, which is applicable to the interior of the canopy, to obtain an analytical model capable of describing the mean flow for the full canopy domain. Finally, the model is tested against an analytical model based on a linearization approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yock, A; UT Graduate School of Biomedical Sciences, Houston, TX; Rao, A
2014-06-15
Purpose: To generate, evaluate, and compare models that predict longitudinal changes in tumor morphology throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe the size, shape, and position of 35 oropharyngeal GTVs at each treatment fraction during intensity-modulated radiation therapy. The feature vectors comprised the coordinates of the GTV centroids and one of two shape descriptors. One shape descriptor was based on radial distances between the GTV centroid and 614 GTV surface landmarks. The other was based on a spherical harmonic decomposition of these distances. Feature vectors over the course of therapy were described using static, linear, and mean models. The error of these models in forecasting GTV morphology was evaluated with leave-one-out cross-validation, and their accuracy was compared using Wilcoxon signed-rank tests. The effect of adjusting model parameters at 1, 2, 3, or 5 time points (adjustment points) was also evaluated. Results: The addition of a single adjustment point to the static model decreased the median error in forecasting the position of GTV surface landmarks by 1.2 mm (p<0.001). Additional adjustment points further decreased forecast error by about 0.4 mm each. The linear model decreased forecast error compared to the static model for feature vectors based on both shape descriptors (0.2 mm), while the mean model did so only for those based on the inter-landmark distances (0.2 mm). The decrease in forecast error due to adding adjustment points was greater than that due to model selection. Both effects diminished with subsequent adjustment points. Conclusion: Models of tumor morphology that include information from prior patients and/or prior treatment fractions are able to predict the tumor surface at each treatment fraction during radiation therapy.
The predicted tumor morphology can be compared with patient anatomy or dose distributions, opening the possibility of anticipatory re-planning. American Legion Auxiliary Fellowship; The University of Texas Graduate School of Biomedical Sciences at Houston.
NASA Astrophysics Data System (ADS)
Smith, C. J.; Forster, P.; Richardson, T.; Myhre, G.
2016-12-01
Effective radiative forcing (ERF), rather than "traditional" radiative forcing (RF), has become an increasingly popular metric in recent years, as it more closely links the difference in the Earth's top-of-atmosphere (TOA) energy budget to equilibrium near-surface temperature rise. One method to diagnose ERF is to take the difference of TOA radiative fluxes from two climate model runs (a perturbation and a control) with prescribed sea-surface temperatures and sea-ice coverage. ERF can be thought of as the sum of a direct forcing, which is the pure radiative effect of a forcing agent, plus rapid adjustments, which are changes in climate state triggered by the forcing agent that themselves affect the TOA energy budget and are unrelated to surface temperature changes. In addition to the classic experiment of doubling CO2 (2xCO2), we analyse rapid adjustments to a tripling of methane (3xCH4), a quintupling of sulphate aerosol (5xSul), a tenfold increase in black carbon (10xBC) and a 2% increase in the solar constant (2%Sol). We use CMIP-style climate model diagnostics from six participating models of the Precipitation Driver Response Model Intercomparison Project (PDRMIP). Assuming approximately linear contributions to the TOA flux differences, the rapid adjustments from changes in atmospheric temperature, surface temperature, surface albedo and water vapour can be cleanly and simply separated from the direct forcing by radiative kernels. The rapid adjustments are in turn decomposed into stratospheric and tropospheric components. We introduce kernels based on the HadGEM2 climate model and find similar results to those based on other models. Cloud adjustments are evaluated as a residual of the TOA radiative fluxes between all-sky and clear-sky runs once the direct forcing and rapid adjustments have been subtracted. The cloud adjustments are also calculated online within the HadGEM2 model using the ISCCP simulator.
For aerosol forcing experiments, rapid adjustments vary substantially between models. Much of the contribution to this model spread is in the cloud adjustments. We also notice a spread in the model calculations of direct forcing for greenhouse gases, which suggests differences in the radiative transfer parameterisations used by each model.
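The kernel-based decomposition reduces to simple budget arithmetic once the individual terms are diagnosed: the cloud adjustment is whatever part of the ERF the direct forcing and the kernel-derived non-cloud adjustments do not account for. The W m⁻² values below are illustrative, not PDRMIP output.

```python
# Illustrative ERF budget (all values in W m^-2, invented for the sketch).
erf = 3.8                     # total effective radiative forcing
direct_forcing = 3.0          # pure radiative effect of the forcing agent
adjustments = {               # kernel-derived rapid adjustments
    "stratospheric_T": 0.9,
    "tropospheric_T": -0.6,
    "surface_T": 0.0,
    "surface_albedo": 0.1,
    "water_vapour": 0.2,
}

# Cloud adjustment diagnosed as the residual of the budget.
cloud_adjustment = erf - direct_forcing - sum(adjustments.values())
```

Because the cloud term absorbs every error in the other terms, inter-model spread in the non-cloud adjustments or in the direct forcing leaks into the diagnosed cloud adjustment — one reason the cloud component dominates the spread noted above.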
Liu, Chuan-Fen; Sales, Anne E; Sharp, Nancy D; Fishman, Paul; Sloan, Kevin L; Todd-Stenberg, Jeff; Nichol, W Paul; Rosen, Amy K; Loveland, Susan
2003-01-01
Objective To compare the rankings for health care utilization performance measures at the facility level in a Veterans Health Administration (VHA) health care delivery network using pharmacy- and diagnosis-based case-mix adjustment measures. Data Sources/Study Setting The study included veterans who used inpatient or outpatient services in Veterans Integrated Service Network (VISN) 20 during fiscal year 1998 (October 1997 to September 1998; N=126,076). Utilization and pharmacy data were extracted from VHA national databases and the VISN 20 data warehouse. Study Design We estimated concurrent regression models using pharmacy or diagnosis information in the base year (FY1998) to predict health service utilization in the same year. Utilization measures included bed days of care for inpatient care and provider visits for outpatient care. Principal Findings Rankings of predicted utilization measures across facilities vary by case-mix adjustment measure. There is greater consistency within the diagnosis-based models than between the diagnosis- and pharmacy-based models. The eight facilities were ranked differently by the diagnosis- and pharmacy-based models. Conclusions Choice of case-mix adjustment measure affects rankings of facilities on performance measures, raising concerns about the validity of profiling practices. Differences in rankings may reflect differences in comparability of data capture across facilities between pharmacy and diagnosis data sources, and unstable estimates due to small numbers of patients in a facility. PMID:14596393
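Rank agreement between two case-mix adjusters — the crux of the profiling comparison — can be quantified with a Spearman correlation over facility-level predictions. The eight facility values below are illustrative, and this minimal implementation assumes no tied predictions.

```python
def spearman(xs, ys):
    """Spearman rank correlation via 1 - 6*sum(d^2)/(n(n^2-1)); no ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Predicted bed days per enrollee for eight facilities (invented numbers)
dx_based = [5.1, 4.2, 6.3, 3.9, 5.8, 4.7, 6.0, 4.4]   # diagnosis-based model
rx_based = [4.9, 4.6, 5.9, 4.1, 5.2, 5.5, 5.7, 4.0]   # pharmacy-based model
rho = spearman(dx_based, rx_based)
```

A rho well below 1 means the two adjusters order the facilities differently, which is precisely the validity concern the study raises about profiling practices.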
Adaptively Adjusted Event-Triggering Mechanism on Fault Detection for Networked Control Systems.
Wang, Yu-Long; Lim, Cheng-Chew; Shi, Peng
2016-12-08
This paper studies the problem of adaptively adjusted event-triggering mechanism-based fault detection for a class of discrete-time networked control systems (NCSs) with applications to aircraft dynamics. By taking into account the fault occurrence detection progress and the fault occurrence probability, and by introducing an adaptively adjusted event-triggering parameter, a novel event-triggering mechanism is proposed to achieve efficient utilization of the communication network bandwidth. Both the sensor-to-control-station and the control-station-to-actuator network-induced delays are taken into account. The event-triggered sensor and the event-triggered control station are utilized simultaneously to establish new network-based closed-loop models for the NCS subject to faults. Based on the established models, the event-triggered simultaneous design of the fault detection filter (FDF) and controller is presented. A new algorithm for handling the adaptively adjusted event-triggering parameter is proposed. Performance analysis verifies the effectiveness of the adaptively adjusted event-triggering mechanism and of the simultaneous design of the FDF and controller.
NASA Astrophysics Data System (ADS)
Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai
2017-10-01
With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate estimation of battery parameters to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent circuit battery model is employed to describe battery dynamic characteristics. Then, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of model parameters to ensure filter stability and reduce the convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate battery SOC and model parameters on-line. Considering that the EKF is essentially a first-order Taylor approximation of the battery model, which contains inevitable model errors, a proportional-integral-based error adjustment technique is employed to improve the performance of the EKF method and correct the model parameters. Finally, experimental results on lithium-ion batteries indicate that the proposed EKF with proportional-integral-based error adjustment can provide a robust and accurate battery model and on-line parameter estimation.
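A minimal sketch of this estimation setup, assuming a *linear* OCV(SOC) curve so the EKF reduces to a plain Kalman filter (the paper's OCV curve, identified parameters, and proportional-integral correction are not reproduced; all values below are illustrative): the state is [SOC, V1], where V1 is the voltage across the single RC pair, and the measured terminal voltage corrects a deliberately wrong initial SOC guess.

```python
import numpy as np

# First-order equivalent circuit: OCV source, ohmic R0, one RC pair R1/C1.
Q = 7200.0                 # capacity in coulombs (2 Ah) -- assumed
R0, R1, C1 = 0.05, 0.02, 1000.0   # assumed circuit parameters
dt = 1.0
a_rc = np.exp(-dt / (R1 * C1))

def ocv(soc):
    """Assumed linear open-circuit voltage curve."""
    return 3.0 + 1.2 * soc

def step_truth(soc, v1, i):
    """Simulate the 'true' battery one step under discharge current i (A)."""
    soc_n = soc - i * dt / Q
    v1_n = a_rc * v1 + R1 * (1.0 - a_rc) * i
    return soc_n, v1_n, ocv(soc_n) - R0 * i - v1_n

# Kalman filter over the state x = [SOC, V1].
F = np.array([[1.0, 0.0], [0.0, a_rc]])
B = np.array([-dt / Q, R1 * (1.0 - a_rc)])
H = np.array([1.2, -1.0])            # gradient of terminal V w.r.t. [SOC, V1]
Qn = np.diag([1e-10, 1e-8])          # process noise
Rn = 0.002 ** 2                      # measurement noise variance

x = np.array([0.5, 0.0])             # deliberately wrong initial SOC guess
P = np.eye(2) * 0.1
soc_t, v1_t = 0.9, 0.0
rng = np.random.default_rng(1)
for _ in range(600):
    i = 2.0                                   # constant 1C discharge
    soc_t, v1_t, v_meas = step_truth(soc_t, v1_t, i)
    v_meas += rng.normal(0.0, 0.002)          # voltage sensor noise
    x = F @ x + B * i                         # predict
    P = F @ P @ F.T + Qn
    y = v_meas - (3.0 + H @ x - R0 * i)       # innovation
    S = H @ P @ H + Rn
    K = P @ H / S
    x = x + K * y                             # update
    P = (np.eye(2) - np.outer(K, H)) @ P
```

With a nonlinear OCV curve the same loop becomes a true EKF by re-linearizing H at each step; the paper additionally layers a proportional-integral error correction on top to compensate for model mismatch.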
Walendzik, A; Trottmann, M; Leonhardt, R; Wasem, J
2013-04-01
In the 2009 reform of the German collective remuneration system for outpatient medical care, the morbidity risk at the level of overall remuneration was transferred to the health funds, fulfilling a long-standing demand of physicians. Nevertheless, not extending morbidity adjustment to the levels of physician groups and individual practices can lead to budgets unrelated to patient needs and to incentives for risk selection by individual doctors. The systematics of the distribution of overall remuneration in the German remuneration system for outpatient care are analysed with a focus on morbidity adjustment. Using diagnostic and pharmaceutical information on about half a million insured subjects, a risk adjustment model able to predict individual expenditures for outpatient care for different provider groups is presented. This model makes it possible to additively split the individual care burden into several parts attributed to different physician groups. Conditions for the use of the model in the distribution of overall remuneration between physician groups are developed. A simulation of the use of diagnosis-based risk adjustment in standard service volumes then highlights the conditions for a successful implementation of standard service volumes with a higher degree of risk adjustment. The presented estimation model is generally applicable to the distribution of overall remuneration to different physician groups. The simulation of standard service volumes using diagnosis-based risk adjustment does not provide a more accurate prediction of expenditures at the level of physician practices than the age-based calculation currently used in the German remuneration system for outpatient medical care.
Using elements of morbidity-based risk adjustment, the current German collective system for outpatient medical care could be transformed towards a higher degree of distributional justice in medical care for patients and towards more appropriate incentives that avoid risk selection. Limitations of the applicability of risk adjustment become especially apparent when a high share of lump-sum payments is used for the remuneration of some physician groups. © Georg Thieme Verlag KG Stuttgart · New York.
Code of Federal Regulations, 2010 CFR
2017-10-01
... Purchasing (HHVBP) Model. CMS will determine a payment adjustment up to the maximum applicable percentage... Total Performance Score using a linear exchange function. Payment adjustments made under the HHVBP Model... 42 Public Health 5 2017-10-01 2017-10-01 false Payments for home health services under Home Health...
Code of Federal Regulations, 2010 CFR
2016-10-01
... Purchasing (HHVBP) Model. CMS will determine a payment adjustment up to the maximum applicable percentage... Total Performance Score using a linear exchange function. Payment adjustments made under the HHVBP Model... 42 Public Health 5 2016-10-01 2016-10-01 false Payments for home health services under Home Health...
Stromal-epithelial dynamics in response to fractionated radiotherapy
NASA Astrophysics Data System (ADS)
Rong, Panying
The speech of individuals with velopharyngeal incompetency (VPI) is characterized by hypernasality, a speech quality related to excessive emission of acoustic energy through the nose caused by failure of velopharyngeal closure. As an attempt to reduce hypernasality and, in turn, improve the quality of VPI-related hypernasal speech, this study is dedicated to developing an approach that uses speech-dependent articulatory adjustments to reduce hypernasality caused by excessive velopharyngeal opening. A preliminary study derived such articulatory adjustments for hypernasal /i/ vowels based on the simulation of an articulatory model (Speech Processing and Synthesis Toolboxes, Childers (2000)). Nasal /i/ vowels both with and without articulatory adjustments were synthesized by the model. Spectral analysis found that nasal acoustic features were attenuated and oral formant structures were restored after articulatory adjustments. In addition, comparisons of perceptual ratings of nasality between the two types of nasal vowels showed that the articulatory adjustments generated by the model significantly reduced the perception of nasality for nasal /i/ vowels. Such articulatory adjustments for nasal /i/ have two patterns: 1) a consistent adjustment pattern, which corresponds to an expansion at the velopharynx, and 2) speech-dependent fine-tuning adjustment patterns, including adjustments in the lip area and the upper pharynx. The long-term goal of this study is to apply this approach of articulatory adjustment as a therapeutic tool in clinical speech treatment to detect and correct the maladaptive articulatory behaviors developed spontaneously by speakers with VPI on an individual basis. This study constructed a speaker-adaptive articulatory model on the basis of the framework of Childers's vocal tract model to simulate articulatory adjustments aimed at compensating for the acoustic outcome caused by velopharyngeal opening and reducing nasality.
To construct such a speaker-adaptive articulatory model, (1) an articulatory-acoustic-aerodynamic database was recorded using articulography and aerodynamic instruments to provide point-wise articulatory data to be fitted into the framework of Childers's standard vocal tract model; (2) the length and transverse dimension of the vocal tract were adjusted to fit each individual speaker by minimizing the acoustic discrepancy between the model simulation and the target derived from the acoustic signal in the database using the simulated annealing algorithm; and (3) the articulatory space of the model was adjusted to fit individual articulatory features by adapting the movement ranges of all articulators. With the speaker-adaptive articulatory model, the articulatory configurations of the oral and nasal vowels in the database were simulated and synthesized. Given the acoustic targets derived from the oral vowels in the database, speech-dependent articulatory adjustments were simulated to compensate for the acoustic outcome caused by VPO. The resultant articulatory configurations correspond to nasal vowels with articulatory adjustment, which were synthesized to serve as the perceptual stimuli for a listening task of nasality rating. The oral and nasal vowels synthesized based on the oral and nasal vowel targets in the database also served as perceptual stimuli. The results suggest both acoustic and perceptual effects of the model-generated articulatory adjustment on the nasal vowels /a/, /i/ and /u/. In terms of acoustics, the articulatory adjustment (1) restores the formant structures altered by nasal coupling, including shifted formant frequency, attenuated formant intensity and expanded formant bandwidth, and (2) attenuates the peaks and zeros caused by nasal resonances. Perceptually, the articulatory adjustment generated by the speaker-adaptive model significantly reduces the perceived nasality for all three vowels (/a/, /i/, /u/).
The acoustic and perceptual effects of articulatory adjustment suggest that both the acoustic goal of compensating for the acoustic discrepancy caused by VPO and the auditory goal of reducing the perception of nasality were achieved. Such a finding is consistent with motor equivalence (Hughes and Abbs, 1976; Maeda, 1990), which enables inter-articulator coordination to compensate for the deviation from the acoustic/auditory goal caused by the shifted position of an articulator. The articulatory adjustment responsible for these acoustic and perceptual effects was decomposed into a set of empirical orthogonal modes (Story and Titze, 1998). Both gross articulatory patterns and fine-tuning adjustments were found in the principal orthogonal modes, which led to the acoustic compensation and reduction of nasality. For /a/ and /i/, a direct relationship was found among the acoustic features, nasality, and articulatory adjustment patterns. Specifically, the articulatory adjustments indicated by the principal orthogonal modes of the adjusted nasal /a/ and /i/ were directly correlated with the attenuation of the acoustic cues of nasality (i.e., shifting of F1 and F2 frequencies) and the reduction of nasality ratings. For /u/, such a direct relationship among the acoustic features, nasality and articulatory adjustment was not as prominent, suggesting the possibility of acoustic correlates of nasality other than F1 and F2. The findings of this study demonstrate the possibility of using articulatory adjustment to reduce the perception of nasality through model simulation. A speaker-adaptive articulatory model is able to simulate individual-based articulatory adjustment strategies that can be applied in clinical settings as articulatory targets for correcting the maladaptive articulatory behaviors developed spontaneously by speakers with hypernasal speech. 
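The decomposition into empirical orthogonal modes (Story and Titze, 1998) amounts to a principal-component analysis of the articulatory data. A minimal sketch on synthetic vocal-tract area functions (the dimensions and values below are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 40 frames of a 44-section area function.
n_frames, n_sections = 40, 44
mean_shape = 2.0 + np.sin(np.linspace(0.0, np.pi, n_sections))
areas = mean_shape + 0.3 * rng.standard_normal((n_frames, n_sections))

# Remove the mean area function, then extract orthogonal modes via SVD.
centered = areas - areas.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

modes = Vt                               # empirical orthogonal modes (rows)
variance_explained = s**2 / np.sum(s**2)

# Any area function is approximated by the mean plus a weighted sum of
# the leading modes; the leading modes capture gross articulatory
# patterns, later modes capture fine-tuning adjustments.
k = 2
coeffs = centered @ modes[:k].T
reconstruction = areas.mean(axis=0) + coeffs @ modes[:k]
```

Inspecting the leading rows of `modes` is the analogue of examining the principal orthogonal modes of the articulatory adjustment.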
Such a speaker-adaptive articulatory model also offers an intuitive means of articulatory learning and self-training, through which speakers with VPI can acquire appropriate articulatory strategies via model-speaker interaction.
Cartographic symbol library considering symbol relations based on anti-aliasing graphic library
NASA Astrophysics Data System (ADS)
Mei, Yang; Li, Lin
2007-06-01
Cartographic visualization represents geographic information in map form, which enables us to retrieve useful geospatial information. In a digital environment, the cartographic symbol library is the basis of cartographic visualization and an essential component of a Geographic Information System. Existing cartographic symbol libraries have two flaws: poor display quality and the inability to adjust symbol relations. Statistical data presented in this paper indicate that aliasing is a major factor degrading symbol display quality on graphic display devices. Therefore, effective graphic anti-aliasing methods based on a new anti-aliasing algorithm are presented and encapsulated in an anti-aliasing graphic library in the form of a Component Object Model. Furthermore, cartographic visualization should represent feature relations by correctly adjusting symbol relations, in addition to displaying individual features, but current cartographic symbol libraries lack this capability. This paper creates a cartographic symbol design model to implement symbol-relation adjustment. Consequently, a cartographic symbol library based on this design model can provide cartographic visualization with relation-adjusting capability. The anti-aliasing graphic library and the cartographic symbol library were sampled, and the results show that both libraries achieve better efficiency and visual quality.
Lai, Zhi-Hui; Leng, Yong-Gang
2015-08-28
A two-dimensional Duffing oscillator which can produce stochastic resonance (SR) is studied in this paper. We introduce its SR mechanism and, given the necessity of parameter adjustment, present a generalized parameter-adjusted SR (GPASR) model of this oscillator. The Kramers rate is chosen as the theoretical basis for a function that judges the occurrence of SR in this model, and for analyzing and summarizing the parameter-adjustment rules under unmatched signal amplitude, frequency, and/or noise intensity. Furthermore, we propose a weak-signal detection approach based on this GPASR model. Finally, we employ two practical examples to demonstrate the feasibility of the proposed approach in practical engineering applications.
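For reference, the Kramers escape rate on which such a judgmental function is based has a closed form for the classic one-dimensional bistable quartic potential. A sketch (this is the standard textbook expression, not necessarily the exact form used for the two-dimensional oscillator in the paper):

```python
import math

def kramers_rate(a, b, noise_intensity):
    """Kramers escape rate for the bistable quartic potential
    U(x) = -a*x**2/2 + b*x**4/4 with a, b > 0:

        r_K = (omega_min * omega_barrier / (2*pi)) * exp(-dU / D)

    where omega_min = sqrt(2a), omega_barrier = sqrt(a), and the
    barrier height is dU = a**2 / (4*b). D is the noise intensity."""
    barrier = a**2 / (4.0 * b)
    prefactor = a / (math.sqrt(2.0) * math.pi)   # sqrt(2a)*sqrt(a)/(2*pi)
    return prefactor * math.exp(-barrier / noise_intensity)
```

The rate grows with noise intensity, which is the basic relationship a parameter-adjusted SR scheme exploits when matching system parameters to an unmatched signal and noise level.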
Risk-adjusted antibiotic consumption in 34 public acute hospitals in Ireland, 2006 to 2014
Oza, Ajay; Donohue, Fionnuala; Johnson, Howard; Cunney, Robert
2016-01-01
As antibiotic consumption rates between hospitals can vary depending on the characteristics of the patients treated, risk-adjustment that compensates for the patient-based variation is required to assess the impact of any stewardship measures. The aim of this study was to investigate the usefulness of patient-based administrative data variables for adjusting aggregate hospital antibiotic consumption rates. Data on total inpatient antibiotics and six broad subclasses were sourced from 34 acute hospitals from 2006 to 2014. Aggregate annual patient administration data were divided into explanatory variables, including major diagnostic categories, for each hospital. Multivariable regression models were used to identify factors affecting antibiotic consumption. Coefficient of variation of the root mean squared errors (CV-RMSE) for the total antibiotic usage model was very good (11%), however, the value for two of the models was poor (> 30%). The overall inpatient antibiotic consumption increased from 82.5 defined daily doses (DDD)/100 bed-days used in 2006 to 89.2 DDD/100 bed-days used in 2014; the increase was not significant after risk-adjustment. During the same period, consumption of carbapenems increased significantly, while usage of fluoroquinolones decreased. In conclusion, patient-based administrative data variables are useful for adjusting hospital antibiotic consumption rates, although additional variables should also be employed. PMID:27541730
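The CV-RMSE criterion used to assess the regression models can be sketched as follows (the exact normalization is assumed, not taken from the study):

```python
import numpy as np

def cv_rmse(observed, predicted):
    """Coefficient of variation of the RMSE, in percent: the root mean
    squared prediction error divided by the mean observed value. Lower
    values indicate a better-fitting consumption model."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return 100.0 * rmse / observed.mean()
```

Under this definition, the reported 11% for the total-usage model means the typical prediction error was about one-ninth of mean consumption, while values above 30% indicate errors of nearly a third of the mean.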
Martinez Sanchez, MaryAnn
2018-01-01
Introduction: Recent reviews have reinforced the notion that having a supportive spouse can help with the process of coping with and adjusting to cancer. Congruence between spouses' perspectives has been proposed as one mechanism in that process, yet alternative models of congruence have not been examined closely. This study assessed alternative models of congruence in perceptions of coping and their mediating effects on adjustment to breast cancer. Methods: Seventy-two women in treatment for breast cancer and their husbands completed measures of marital adjustment, self-efficacy for coping, and adjustment to cancer. Karnofsky Performance Status was obtained from medical records. Wives completed a measure of self-efficacy for coping (wives' ratings of self-efficacy for coping [WSEC]) and husbands completed a measure of self-efficacy for coping (husbands' ratings of wives' self-efficacy for coping [HSEC]) based on their perceptions of their wives' coping efficacy. Results: Interestingly, the correlation between WSEC and HSEC was only 0.207; thus, they are relatively independent perspectives. The following three models were tested to determine the nature of the relationship between WSEC and HSEC: discrepancy model (WSEC − HSEC), additive model (WSEC + HSEC), and multiplicative model (WSEC × HSEC). The discrepancy model was not related to wives' adjustment; however, the additive (B=0.205, P<0.001) and multiplicative (B=0.001, P<0.001) models were significantly related to wives' adjustment. Also, the additive model mediated the relationship between performance status and adjustment. Discussion: Husbands' perception of their wives' coping efficacy contributed marginally to their wives' adjustment, and the combination of WSEC and HSEC mediated the relationship between functional status and wives' adjustment, thus positively impacting wives' adjustment to cancer. 
Future research is needed to determine the quality of the differences between HSEC and WSEC in order to develop interventions to optimize the impact of these two relatively independent perspectives on cancer outcomes. PMID:29491720
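The three congruence models are simple transformations of the two spouses' scores. A minimal sketch with invented WSEC/HSEC values (variable names and data are ours, purely for illustration):

```python
import numpy as np

# Hypothetical self-efficacy ratings for four couples.
wsec = np.array([7.0, 5.5, 8.0, 4.0])    # wives' ratings of own coping
hsec = np.array([6.0, 6.5, 7.0, 5.0])    # husbands' ratings of wives' coping

discrepancy = wsec - hsec                 # discrepancy model (WSEC - HSEC)
additive = wsec + hsec                    # additive model (WSEC + HSEC)
multiplicative = wsec * hsec              # multiplicative model (WSEC x HSEC)

# Each derived score would then enter a regression (or mediation) model
# as a predictor of the wife's adjustment to cancer.
r = np.corrcoef(wsec, hsec)[0, 1]         # congruence between perspectives
```

In the study, only the additive and multiplicative composites predicted adjustment, consistent with the two perspectives contributing jointly rather than through their disagreement.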
Rein, David B
2005-01-01
Objective: To stratify traditional risk-adjustment models by health severity classes in a way that is empirically based, is accessible to policy makers, and improves predictions of inpatient costs. Data Sources: Secondary data created from the administrative claims of all 829,356 children aged 21 years and under enrolled in Georgia Medicaid in 1999. Study Design: A finite mixture model was used to assign child Medicaid patients to health severity classes. These class assignments were then used to stratify both portions of a traditional two-part risk-adjustment model predicting inpatient Medicaid expenditures. Traditional model results were compared with the stratified model using actuarial statistics. Principal Findings: The finite mixture model identified four classes of children: a majority healthy class and three illness classes with increasing levels of severity. Stratifying the traditional two-part risk-adjustment model by health severity classes improved its R² from 0.17 to 0.25. The majority of additional predictive power resulted from stratifying the second part of the two-part model. Further, the preference for the stratified model was unaffected by months of patient enrollment time. Conclusions: Stratifying health care populations based on measures of health severity is a powerful method to achieve more accurate cost predictions. Insurers who ignore the predictive advances of sample stratification in setting risk-adjusted premiums may create strong financial incentives for adverse selection. Finite mixture models provide an empirically based, replicable methodology for stratification that should be accessible to most health care financial managers. PMID:16033501
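A two-part cost model of the kind stratified in this study first models the probability of any use, then the cost among users. A toy sketch on synthetic data (the severity classes, use probabilities and cost distributions below are invented, and closed-form class means stand in for the regression models actually used):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic population: each child gets a severity class 0-3.
n = 1000
severity = rng.integers(0, 4, size=n)
p_use = np.array([0.05, 0.15, 0.35, 0.60])[severity]   # P(any inpatient use)
used = rng.random(n) < p_use
# Costs among users are lognormal, rising with severity; zero otherwise.
cost = np.where(used, np.exp(rng.normal(7.0 + 0.5 * severity, 1.0)), 0.0)

# Two-part prediction per class:
#   part 1: probability of any use; part 2: mean cost among users.
expected = np.zeros(4)
for k in range(4):
    mask = severity == k
    p_hat = used[mask].mean()
    users = mask & used
    mean_cost = cost[users].mean() if users.any() else 0.0
    expected[k] = p_hat * mean_cost        # E[cost] = P(use) * E[cost | use]
```

Stratifying by severity class, as the study does, lets both parts of the model take different parameters in each class rather than forcing one fit across a mostly healthy population.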
Evaluating diagnosis-based risk-adjustment methods in a population with spinal cord dysfunction.
Warner, Grace; Hoenig, Helen; Montez, Maria; Wang, Fei; Rosen, Amy
2004-02-01
To examine the performance of models in predicting health care utilization for individuals with spinal cord dysfunction. Regression models compared two diagnosis-based risk-adjustment methods, the adjusted clinical groups (ACGs) and diagnostic cost groups (DCGs). To improve prediction, we added to our model: (1) spinal cord dysfunction-specific diagnostic information, (2) limitations in self-care function, and (3) both 1 and 2. Models were replicated in 3 populations. Samples from 3 populations: (1) 40% of veterans using Veterans Health Administration services in fiscal year 1997 (FY97) (N=1,046,803), (2) a veteran sample with spinal cord dysfunction identified by codes from the International Statistical Classification of Diseases, 9th Revision, Clinical Modifications (N=7666), and (3) a veteran sample identified in the Veterans Affairs Spinal Cord Dysfunction Registry (N=5888). Not applicable. Inpatient, outpatient, and total days of care in FY97. The DCG models (R² range, 0.22-0.38) performed better than the ACG models (R² range, 0.04-0.34) for all outcomes. Spinal cord dysfunction-specific diagnostic information improved prediction more in the ACG model than in the DCG model (R² range for ACG, 0.14-0.34; R² range for DCG, 0.24-0.38). Information on self-care function slightly improved performance (R² increases ranged from 0 to 0.04). The DCG risk-adjustment models predicted health care utilization better than the ACG models, and ACG model prediction was improved by adding condition-specific diagnostic information.
The lack of theoretical support for using person trade-offs in QALY-type models.
Østerdal, Lars Peter
2009-10-01
Considerable support for the use of person trade-off methods to assess the quality-adjustment factor in quality-adjusted life years (QALY) models has been expressed in the literature. The WHO has occasionally used similar methods to assess the disability weights for calculation of disability-adjusted life years (DALYs). This paper discusses the theoretical support for the use of person trade-offs in QALY-type measurement of (changes in) population health. It argues that measures of this type based on such quality-adjustment factors almost always violate the Pareto principle, and so lack normative justification.
Milly, Paul C.D.; Dunne, Krista A.
2011-01-01
Hydrologic models often are applied to adjust projections of hydroclimatic change that come from climate models. Such adjustment includes climate-bias correction, spatial refinement ("downscaling"), and consideration of the roles of hydrologic processes that were neglected in the climate model. Described herein is a quantitative analysis of the effects of hydrologic adjustment on the projections of runoff change associated with projected twenty-first-century climate change. In a case study including three climate models and 10 river basins in the contiguous United States, the authors find that relative (i.e., fractional or percentage) runoff change computed with hydrologic adjustment more often than not was less positive (or, equivalently, more negative) than what was projected by the climate models. The dominant contributor to this decrease in runoff was a ubiquitous change in runoff (median -11%) caused by the hydrologic model’s apparent amplification of the climate-model-implied growth in potential evapotranspiration. Analysis suggests that the hydrologic model, on the basis of the empirical, temperature-based modified Jensen–Haise formula, calculates a change in potential evapotranspiration that is typically 3 times the change implied by the climate models, which explicitly track surface energy budgets. In comparison with the amplification of potential evapotranspiration, central tendencies of other contributions from hydrologic adjustment (spatial refinement, climate-bias adjustment, and process refinement) were relatively small. The authors’ findings highlight the need for caution when projecting changes in potential evapotranspiration for use in hydrologic models or drought indices to evaluate climate-change impacts on water.
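The temperature sensitivity at issue can be seen directly in the Jensen–Haise form of potential evapotranspiration. A sketch using commonly cited default coefficients (the "modified" calibration in the hydrologic model discussed may differ):

```python
def jensen_haise_pet(t_mean_c, rs_mm_per_day, ct=0.025, tx=-3.0):
    """Temperature-based Jensen-Haise potential evapotranspiration:

        PET = Ct * (T - Tx) * Rs

    with T the mean air temperature (deg C), Rs solar radiation expressed
    as equivalent evaporation (mm/day), and Ct, Tx empirical coefficients
    (defaults here are commonly cited values, given as an assumption).
    PET is floored at zero for very cold conditions."""
    return max(0.0, ct * (t_mean_c - tx) * rs_mm_per_day)
```

Because dPET/dT = Ct·Rs is constant in this form, any warming translates linearly into added PET regardless of the surface energy budget, which illustrates how a temperature-based formula can respond more strongly to projected warming than the climate models' explicit energy-budget calculations.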
ERIC Educational Resources Information Center
Hoglund, Wendy L. G.; Jones, Stephanie M.; Brown, Joshua L.; Aber, J. Lawrence
2015-01-01
The current study examines 3 alternative conceptual models of the directional associations between parent involvement in schooling (homework assistance, home-school conferencing, school-based support) and child adjustment (academic and social competence, aggressive behaviors). The parent socialization model tests the hypothesis that parent…
A Four-Part Model of Autonomy during Emerging Adulthood: Associations with Adjustment
ERIC Educational Resources Information Center
Lamborn, Susie D.; Groh, Kelly
2009-01-01
We found support for a four-part model of autonomy that links connectedness, separation, detachment, and agency to adjustment during emerging adulthood. Based on self-report surveys of 285 American college students, expected associations among the autonomy variables were found. In addition, agency, as measured by self-reliance, predicted lower…
When methods meet politics: how risk adjustment became part of Medicare managed care.
Weissman, Joel S; Wachterman, Melissa; Blumenthal, David
2005-06-01
Health-based risk adjustment has long been touted as key to the success of competitive models of health care. Because it decreases the incentive to enroll only healthy patients in insurance plans, risk adjustment was incorporated into Medicare policy via the Balanced Budget Act of 1997. However, full implementation of risk adjustment was delayed due to clashes with the managed care industry over payment policy, concerns over perverse incentives, and problems of data burden. We review the history of risk adjustment leading up to the Balanced Budget Act and examine the controversies surrounding attempts to stop or delay its implementation during the years that followed. The article provides lessons for the future of health-based risk adjustment and possible alternatives.
Schilling, Peter L; Bozic, Kevin J
2016-01-06
Comparing outcomes across providers requires risk-adjustment models that account for differences in case mix. The burden of data collection from the clinical record can make risk-adjusted outcomes difficult to measure. The purpose of this study was to develop risk-adjustment models for hip fracture repair (HFR), total hip arthroplasty (THA), and total knee arthroplasty (TKA) that weigh adequacy of risk adjustment against data-collection burden. We used data from the American College of Surgeons National Surgical Quality Improvement Program to create derivation cohorts for HFR (n = 7000), THA (n = 17,336), and TKA (n = 28,661). We developed logistic regression models for each procedure using age, sex, American Society of Anesthesiologists (ASA) physical status classification, comorbidities, laboratory values, and vital signs-based comorbidities as covariates, and validated the models with use of data from 2012. The derivation models' C-statistics for mortality were 80%, 81%, 75%, and 92% and for adverse events were 68%, 68%, 60%, and 70% for HFR, THA, TKA, and combined procedure cohorts. Age, sex, and ASA classification accounted for a large share of the explained variation in mortality (50%, 58%, 70%, and 67%) and adverse events (43%, 45%, 46%, and 68%). For THA and TKA, these three variables were nearly as predictive as models utilizing all covariates. HFR model discrimination improved with the addition of comorbidities and laboratory values; among the important covariates were functional status, low albumin, high creatinine, disseminated cancer, dyspnea, and body mass index. Model performance was similar in validation cohorts. Risk-adjustment models using data from health records demonstrated good discrimination and calibration for HFR, THA, and TKA. It is possible to provide adequate risk adjustment using only the most predictive variables commonly available within the clinical record. 
This finding helps to inform the trade-off between model performance and data-collection burden as well as the need to define priorities for data capture from electronic health records. These models can be used to make fair comparisons of outcome measures intended to characterize provider quality of care for value-based-purchasing and registry initiatives. Copyright © 2016 by The Journal of Bone and Joint Surgery, Incorporated.
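The C-statistic used to evaluate these models is the probability that a randomly chosen case with the outcome receives a higher predicted risk than a randomly chosen case without it. A plain sketch (not the study's implementation):

```python
import itertools

def c_statistic(labels, scores):
    """Concordance (C) statistic for a binary outcome: the fraction of
    positive/negative pairs in which the positive case has the higher
    predicted risk, counting ties as one half. Equivalent to the area
    under the ROC curve. Assumes at least one case of each class."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in itertools.product(pos, neg))
    return wins / (len(pos) * len(neg))
```

A value of 0.5 means the model ranks cases no better than chance, while the 0.75-0.92 mortality values reported indicate good to excellent discrimination.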
Modeling and Control of the Redundant Parallel Adjustment Mechanism on a Deployable Antenna Panel
Tian, Lili; Bao, Hong; Wang, Meng; Duan, Xuechao
2016-01-01
With the aim of developing multiple input and multiple output (MIMO) coupling systems with a redundant parallel adjustment mechanism on the deployable antenna panel, a structural control integrated design methodology is proposed in this paper. Firstly, the modal information is extracted from the finite element model of the structure of the antenna panel, and the mathematical model is established with the Hamilton principle. Secondly, a discrete Linear Quadratic Regulator (LQR) controller is added to the model in order to control the actuators and adjust the shape of the panel. Finally, the engineering practicality of the modeling and control method is verified based on finite element analysis simulation. PMID:27706076
A case-mix classification system for explaining healthcare costs using administrative data in Italy.
Corti, Maria Chiara; Avossa, Francesco; Schievano, Elena; Gallina, Pietro; Ferroni, Eliana; Alba, Natalia; Dotto, Matilde; Basso, Cristina; Netti, Silvia Tiozzo; Fedeli, Ugo; Mantoan, Domenico
2018-03-04
The Italian National Health Service (NHS) provides universal coverage to all citizens, granting primary and hospital care with a copayment system for outpatient and drug services. Financing of Local Health Trusts (LHTs) is based on a capitation system adjusted only for age, gender and area of residence. We applied a risk-adjustment system (Johns Hopkins Adjusted Clinical Groups System, ACG® System) in order to explain health care costs using routinely collected administrative data in the Veneto Region (North-eastern Italy). All residents in the Veneto Region were included in the study. The ACG system was applied to classify the regional population based on the following information sources for the year 2015: hospital discharges, emergency room visits, the chronic disease registry for copayment exemptions, ambulatory visits, medications, the home care database, and drug prescriptions. Simple linear regressions were used to contrast an age-gender model with models incorporating more comprehensive risk measures aimed at predicting health care costs. A simple age-gender model explained only 8% of the variance of 2015 total costs. Adding diagnosis-related variables provided a 23% increase, while pharmacy-based variables provided an additional 17% increase in explained variance. The adjusted R-squared of the comprehensive model was 6 times that of the simple age-gender model. The ACG System provides a substantial improvement in predicting health care costs when compared to simple age-gender adjustments. Aging itself is not the main determinant of the increase in health care costs, which is better explained by the accumulation of chronic conditions and the resulting multimorbidity. Copyright © 2018. Published by Elsevier B.V.
Terluin, Berend; Eekhout, Iris; Terwee, Caroline B
2017-03-01
Patients have their individual minimal important changes (iMICs) as their personal benchmarks to determine whether a perceived health-related quality of life (HRQOL) change constitutes a (minimally) important change for them. We denote the mean iMIC in a group of patients as the "genuine MIC" (gMIC). The aims of this paper are (1) to examine the relationship between the gMIC and the anchor-based minimal important change (MIC), determined by receiver operating characteristic analysis or by predictive modeling; (2) to examine the impact of the proportion of improved patients on these MICs; and (3) to explore the possibility of adjusting the MIC for the influence of the proportion of improved patients. Multiple simulations were performed of patient samples involved in anchor-based MIC studies with different characteristics of HRQOL (change) scores and distributions of iMICs. In addition, a real data set is analyzed for illustration. The receiver operating characteristic-based and predictive modeling MICs equal the gMIC when the proportion of improved patients equals 0.5. The MIC is estimated higher than the gMIC when the proportion improved is greater than 0.5, and the MIC is estimated lower than the gMIC when the proportion improved is less than 0.5. Using an equation including the predictive modeling MIC, the log-odds of improvement, the standard deviation of the HRQOL change score, and the correlation between the HRQOL change score and the anchor results in an adjusted MIC reflecting the gMIC irrespective of the proportion of improved patients. Adjusting the predictive modeling MIC for the proportion of improved patients assures that the adjusted MIC reflects the gMIC. We assumed normal distributions and global perceived change scores that were independent of the follow-up score. Additionally, floor and ceiling effects were not taken into account. Copyright © 2017 Elsevier Inc. All rights reserved.
Sun, Shanxia; Delgado, Michael S; Sesmero, Juan P
2016-07-15
Input- and output-based economic policies designed to reduce water pollution from fertilizer runoff by adjusting management practices are theoretically justified and well-understood. Yet, in practice, adjustment in fertilizer application or land allocation may be sluggish. We provide practical guidance for policymakers regarding the relative magnitude and speed of adjustment of input- and output-based policies. Through a dynamic dual model of corn production that takes fertilizer as one of several production inputs, we measure the short- and long-term effects of policies that affect the relative prices of inputs and outputs through the short- and long-term price elasticities of fertilizer application, and also the total time required for different policies to affect fertilizer application through the adjustment rates of capital and land. These estimates allow us to compare input- and output-based policies based on their relative cost-effectiveness. Using data from Indiana and Illinois, we find that input-based policies are more cost-effective than their output-based counterparts in achieving a target reduction in fertilizer application. We show that input- and output-based policies yield adjustment in fertilizer application at the same speed, and that most of the adjustment takes place in the short-term. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Review on Methods of Risk Adjustment and their Use in Integrated Healthcare Systems
Juhnke, Christin; Bethge, Susanne
2016-01-01
Introduction: Effective risk adjustment is given more and more weight against the background of competitive health insurance systems and viable healthcare systems. The objective of this review was to obtain an overview of existing models of risk adjustment as well as of the key factors used in risk adjustment. Moreover, the predictive performance of selected methods in international healthcare systems was to be analysed. Theory and methods: A comprehensive, systematic literature review on methods of risk adjustment was conducted as an encompassing, interdisciplinary examination of the related disciplines. Results: In general, several distinctions can be made: in terms of risk horizons, in terms of risk factors, or in terms of the combination of indicators included. Within these, a further differentiation by three levels seems reasonable: methods based on mortality risks, methods based on morbidity risks, and those based on information on (self-reported) health status. Conclusions and discussion: The final examination of different methods of risk adjustment showed that the methodology used to adjust risks varies. The models differ greatly in terms of their included morbidity indicators. The findings of this review can be used in the evaluation of integrated healthcare delivery systems and can be integrated into quality- and patient-oriented reimbursement of care providers in the design of healthcare contracts. PMID:28316544
NASA Astrophysics Data System (ADS)
Jamróz, Weronika
2016-06-01
The paper shows how energy-based models approximate the mechanical properties of hyperelastic materials. The main goal of the research was to create a method for finding the set of material constants included in the strain energy function that constitutes the heart of an energy-based model. The optimal set of material constants determines the best adjustment of the theoretical stress-strain relation to the experimental one, and such an adjustment enables better prediction of the behaviour of a chosen material. To obtain a more precise solution, the approximation was made using data obtained in a modern experiment described in detail in [1]. To save computation time, the main algorithm is based on genetic algorithms.
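As a hypothetical sketch of fitting the material constants of a strain energy function with a genetic-style search (the two-parameter Mooney-Rivlin model and all settings below are illustrative assumptions; the paper does not specify its model or algorithm details):

```python
import random

def mooney_rivlin_stress(stretch, c1, c2):
    """Uniaxial nominal stress for an incompressible two-parameter
    Mooney-Rivlin strain energy function, a common energy-based model:
    P = 2*(lambda - lambda**-2)*(C1 + C2/lambda)."""
    return 2.0 * (stretch - stretch**-2) * (c1 + c2 / stretch)

def fit_constants(stretches, stresses, generations=200, pop=30, seed=1):
    """Toy elitist genetic-style search for (c1, c2) minimizing the
    squared misfit between theoretical and experimental stress."""
    rng = random.Random(seed)

    def sse(c):
        return sum((mooney_rivlin_stress(l, *c) - s) ** 2
                   for l, s in zip(stretches, stresses))

    population = [(rng.uniform(0.0, 1.0), rng.uniform(0.0, 1.0))
                  for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=sse)
        parents = population[:pop // 2]           # selection (elitist)
        children = [(abs(c1 + rng.gauss(0.0, 0.05)),  # Gaussian mutation,
                     abs(c2 + rng.gauss(0.0, 0.05)))  # constants kept >= 0
                    for c1, c2 in parents]
        population = parents + children
    return min(population, key=sse)
```

A real implementation would add crossover and use the experimental stress-strain data from [1]; the structure of the search (select, perturb, keep the best) is the same.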
Controlled cooling of an electronic system based on projected conditions
David, Milnes P.; Iyengar, Madhusudan K.; Schmidt, Roger R.
2016-05-17
Energy efficient control of a cooling system cooling an electronic system is provided based, in part, on projected conditions. The control includes automatically determining an adjusted control setting(s) for an adjustable cooling component(s) of the cooling system. The automatically determining is based, at least in part, on projected power consumed by the electronic system at a future time and projected temperature at the future time of a heat sink to which heat extracted is rejected. The automatically determining operates to reduce power consumption of the cooling system and/or the electronic system while ensuring that at least one targeted temperature associated with the cooling system or the electronic system is within a desired range. The automatically determining may be based, at least in part, on an experimentally obtained model(s) relating the targeted temperature and power consumption of the adjustable cooling component(s) of the cooling system.
Milly, P.C.D.; Dunne, K.A.
2011-01-01
Hydrologic models often are applied to adjust projections of hydroclimatic change that come from climate models. Such adjustment includes climate-bias correction, spatial refinement ("downscaling"), and consideration of the roles of hydrologic processes that were neglected in the climate model. Described herein is a quantitative analysis of the effects of hydrologic adjustment on the projections of runoff change associated with projected twenty-first-century climate change. In a case study including three climate models and 10 river basins in the contiguous United States, the authors find that relative (i.e., fractional or percentage) runoff change computed with hydrologic adjustment more often than not was less positive (or, equivalently, more negative) than what was projected by the climate models. The dominant contributor to this decrease in runoff was a ubiquitous change in runoff (median -11%) caused by the hydrologic model's apparent amplification of the climate-model-implied growth in potential evapotranspiration. Analysis suggests that the hydrologic model, on the basis of the empirical, temperature-based modified Jensen-Haise formula, calculates a change in potential evapotranspiration that is typically 3 times the change implied by the climate models, which explicitly track surface energy budgets. In comparison with the amplification of potential evapotranspiration, central tendencies of other contributions from hydrologic adjustment (spatial refinement, climate-bias adjustment, and process refinement) were relatively small. The authors' findings highlight the need for caution when projecting changes in potential evapotranspiration for use in hydrologic models or drought indices to evaluate climate-change impacts on water. © 2011.
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schreckenghost, Debra K.
2001-01-01
The Adjustable Autonomy Testbed (AAT) is a simulation-based testbed located in the Intelligent Systems Laboratory in the Automation, Robotics and Simulation Division at NASA Johnson Space Center. The purpose of the testbed is to support evaluation and validation of prototypes of adjustable autonomous agent software for control and fault management for complex systems. The AAT project has developed prototype adjustable autonomous agent software and human interfaces for cooperative fault management. This software builds on current autonomous agent technology by altering the architecture, components and interfaces for effective teamwork between autonomous systems and human experts. Autonomous agents include a planner, flexible executive, low-level control and deductive model-based fault isolation. Adjustable autonomy is intended to increase the flexibility and effectiveness of fault management with an autonomous system. The test domain for this work is control of advanced life support systems for habitats for planetary exploration. The CONFIG hybrid discrete event simulation environment provides flexible and dynamically reconfigurable models of the behavior of components and fluids in the life support systems. Both discrete event and continuous (discrete time) simulation are supported, and flows and pressures are computed globally. This provides fast dynamic simulations of interacting hardware systems in closed loops that can be reconfigured during operations scenarios, producing complex cascading effects of operations and failures. Current object-oriented model libraries support modeling of fluid systems, and models have been developed of physico-chemical and biological subsystems for processing advanced life support gases. In FY01, water recovery system models will be developed.
Medeiros, Stephen; Hagen, Scott; Weishampel, John; ...
2015-03-25
Digital elevation models (DEMs) derived from airborne lidar are traditionally unreliable in coastal salt marshes due to the inability of the laser to penetrate the dense grasses and reach the underlying soil. To that end, we present a novel processing methodology that uses ASTER Band 2 (visible red), an interferometric SAR (IfSAR) digital surface model, and lidar-derived canopy height to classify biomass density using both a three-class scheme (high, medium and low) and a two-class scheme (high and low). Elevation adjustments associated with these classes using both median and quartile approaches were applied to bring lidar-derived elevation values closer to the true bare-earth elevation. The performance of the method was tested on 229 elevation points in the lower Apalachicola River Marsh. The two-class quartile-based adjusted DEM produced the best results, reducing the RMS error in elevation from 0.65 m to 0.40 m, a 38% improvement. The raw mean errors for the lidar DEM and the adjusted DEM were 0.61 ± 0.24 m and 0.32 ± 0.24 m, respectively, thereby reducing the high bias by approximately 49%.
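The per-class elevation correction described above reduces to subtracting a class-specific bias from each lidar return. A minimal sketch follows; the offsets and check points are illustrative placeholders, not values from the study:

```python
import math

# Hypothetical per-class vertical bias offsets (m); in the method above these
# would be estimated from training points (e.g. the median lidar-minus-survey
# error within each biomass-density class). Values here are illustrative only.
CLASS_OFFSETS = {"high": 0.61, "low": 0.25}

def adjust_dem(points):
    """Subtract the class-specific bias from each lidar elevation.

    points: list of (lidar_z, biomass_class, true_z) tuples.
    Returns a list of (adjusted_z, true_z) pairs.
    """
    return [(z - CLASS_OFFSETS[cls], true_z) for z, cls, true_z in points]

def rmse(pairs):
    """Root-mean-square error over (estimate, truth) pairs."""
    return math.sqrt(sum((a - b) ** 2 for a, b in pairs) / len(pairs))

# Toy check points: lidar reads high over dense marsh grass.
pts = [(1.80, "high", 1.20), (1.10, "low", 0.90), (2.05, "high", 1.40)]
rmse_raw = rmse([(z, t) for z, _, t in pts])
rmse_adj = rmse(adjust_dem(pts))
```

With these toy numbers the adjustment sharply reduces the RMS error, qualitatively mirroring the 0.65 m to 0.40 m improvement reported above.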
Friedrich, D T; Sommer, F; Scheithauer, M O; Greve, J; Hoffmann, T K; Schuler, P J
2017-12-01
Objective Advanced transnasal sinus and skull base surgery remains a challenging discipline for head and neck surgeons. Restricted access and space for instrumentation can impede advanced interventions. Thus, we present the combination of an innovative robotic endoscope guidance system and a specific endoscope with adjustable viewing angle to facilitate transnasal surgery in a human cadaver model. Materials and Methods The applicability of the robotic endoscope guidance system with custom foot pedal controller was tested for advanced transnasal surgery on a fresh frozen human cadaver head. Visualization was enabled using a commercially available endoscope with adjustable viewing angle (15-90 degrees). Results Visualization and instrumentation of all paranasal sinuses, including the anterior and middle skull base, were feasible with the presented setup. Controlling the robotic endoscope guidance system was effectively precise, and the adjustable endoscope lens extended the view in the surgical field without the common change of fixed viewing angle endoscopes. Conclusion The combination of a robotic endoscope guidance system and an advanced endoscope with adjustable viewing angle enables bimanual surgery in transnasal interventions of the paranasal sinuses and the anterior skull base in a human cadaver model. The adjustable lens allows for the abandonment of fixed-angle endoscopes, saving time and resources, without reducing the quality of imaging.
Radiometric Block Adjustment and Digital Radiometric Model Generation
NASA Astrophysics Data System (ADS)
Pros, A.; Colomina, I.; Navarro, J. A.; Antequera, R.; Andrinal, P.
2013-05-01
In this paper we present a radiometric block adjustment method that is related to geometric block adjustment and to the concept of a terrain Digital Radiometric Model (DRM) as a complement to the terrain digital elevation and surface models. A DRM, in our concept, is a function that for each ground point returns a reflectance value and a Bidirectional Reflectance Distribution Function (BRDF). In a similar way to the terrain geometric reconstruction procedure, given an image block of some terrain area, we split the DRM generation into two phases: radiometric block adjustment and DRM generation. In the paper we concentrate on the radiometric block adjustment step, but we also describe a preliminary DRM generator. In the block adjustment step, after a radiometric pre-calibration step, local atmospheric radiative transfer parameters, ground reflectances and BRDFs at the radiometric tie points are estimated. This radiometric block adjustment is based on atmospheric radiative transfer (ART) models, pre-selected BRDF models and radiometric ground control points. The proposed concept is implemented and applied in an experimental campaign, and the obtained results are presented. The DRM and orthophoto mosaics are generated showing no radiometric differences at the seam lines.
NASA Astrophysics Data System (ADS)
Wang, L.-P.; Ochoa-Rodríguez, S.; Onof, C.; Willems, P.
2015-09-01
Gauge-based radar rainfall adjustment techniques have been widely used to improve the applicability of radar rainfall estimates to large-scale hydrological modelling. However, their use for urban hydrological applications is limited as they were mostly developed based upon Gaussian approximations and therefore tend to smooth off so-called "singularities" (features of a non-Gaussian field) that can be observed in the fine-scale rainfall structure. Overlooking the singularities could be critical, given that their distribution is highly consistent with that of local extreme magnitudes. This deficiency may cause large errors in the subsequent urban hydrological modelling. To address this limitation and improve the applicability of adjustment techniques at urban scales, a method is proposed herein which incorporates a local singularity analysis into existing adjustment techniques and allows the preservation of the singularity structures throughout the adjustment process. In this paper the proposed singularity analysis is incorporated into the Bayesian merging technique and the performance of the resulting singularity-sensitive method is compared with that of the original Bayesian (non-singularity-sensitive) technique and the commonly used mean field bias adjustment. This test is conducted using as case study four storm events observed in the Portobello catchment (53 km2) (Edinburgh, UK) during 2011 and for which radar estimates, dense rain gauge and sewer flow records, as well as a recently calibrated urban drainage model were available. The results suggest that, in general, the proposed singularity-sensitive method can effectively preserve the non-normality in local rainfall structure, while retaining the ability of the original adjustment techniques to generate nearly unbiased estimates. Moreover, the ability of the singularity-sensitive technique to preserve the non-normality in rainfall estimates often leads to better reproduction of the urban drainage system's dynamics, particularly of peak runoff flows.
Clearing margin system in the futures markets—Applying the value-at-risk model to Taiwanese data
NASA Astrophysics Data System (ADS)
Chiu, Chien-Liang; Chiang, Shu-Mei; Hung, Jui-Cheng; Chen, Yu-Lung
2006-07-01
This article investigates whether the TAIFEX has an adequate clearing-margin adjustment system, using the unconditional coverage test, the conditional coverage test and the mean relative scaled bias to assess the performance of three value-at-risk (VaR) models (i.e., the TAIFEX, RiskMetrics and GARCH-t). For the same model, original and absolute returns are compared to explore which can accurately capture the true risk. For the same return, daily and tiered adjustment methods are examined to evaluate which corresponds to risk best. The results indicate that the clearing margin adjustment of the TAIFEX cannot reflect true risks. The adjustment rules, including the use of absolute returns and tiered adjustment of the clearing margin, have distorted VaR-based margin requirements. The results also suggest that the TAIFEX should use original returns to compute VaR and a daily adjustment system to set the clearing margin. This approach would improve the efficiency of funds operation and the liquidity of the futures markets.
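The unconditional coverage backtest referenced above is commonly implemented as Kupiec's likelihood-ratio test on the count of VaR violations. A hedged sketch using historical-simulation VaR on synthetic returns (the TAIFEX margin rules and the GARCH-t specifics are not reproduced here):

```python
import math
import random

def var_historical(returns, alpha=0.99):
    """Historical-simulation VaR: the loss threshold not exceeded with
    probability alpha, returned as a positive number."""
    s = sorted(returns)
    idx = max(0, int((1 - alpha) * len(s)) - 1)
    return -s[idx]

def kupiec_lr(n_obs, n_violations, alpha=0.99):
    """Kupiec's unconditional-coverage likelihood ratio: compares the
    observed violation rate with the expected rate (1 - alpha)."""
    p = 1 - alpha
    x, n = n_violations, n_obs
    if x == 0:
        return -2 * n * math.log(1 - p)
    phat = x / n  # sketch only; assumes 0 < x < n
    ll_null = (n - x) * math.log(1 - p) + x * math.log(p)
    ll_alt = (n - x) * math.log(1 - phat) + x * math.log(phat)
    return -2 * (ll_null - ll_alt)

random.seed(0)
rets = [random.gauss(0.0, 0.01) for _ in range(1000)]  # synthetic daily returns
v99 = var_historical(rets, alpha=0.99)
violations = sum(1 for r in rets if -r > v99)
lr_uc = kupiec_lr(len(rets), violations, alpha=0.99)
```

A large `lr_uc` (compared with a chi-squared critical value with one degree of freedom) would indicate that the margin model's coverage is inadequate.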
Su, Fei; Wang, Jiang; Niu, Shuangxia; Li, Huiyan; Deng, Bin; Liu, Chen; Wei, Xile
2018-02-01
The efficacy of deep brain stimulation (DBS) for Parkinson's disease (PD) depends in part on the post-operative programming of stimulation parameters. Closed-loop stimulation is one method to realize the frequent adjustment of stimulation parameters. This paper introduced the nonlinear predictive control method into the online adjustment of DBS amplitude and frequency. This approach was tested in a computational model of the basal ganglia-thalamic network. The autoregressive Volterra model was used to identify the process model based on physiological data. Simulation results illustrated the efficiency of closed-loop stimulation methods (amplitude adjustment and frequency adjustment) in improving the relay reliability of thalamic neurons compared with the PD state. Compared with constant 130 Hz DBS, the closed-loop stimulation methods can also significantly reduce energy consumption. Through analysis of the inter-spike interval (ISI) distribution of basal ganglia neurons, the network activity evoked by closed-loop frequency adjustment stimulation was closer to the normal state. Copyright © 2017 Elsevier Ltd. All rights reserved.
Parental Perceptions of Family Adjustment in Childhood Developmental Disabilities
ERIC Educational Resources Information Center
Thompson, Sandra; Hiebert-Murphy, Diane; Trute, Barry
2013-01-01
Based on the adjustment phase of the double ABC-X model of family stress (McCubbin and Patterson, 1983), this study examined the impact of parenting stress, positive appraisal of the impact of child disability on the family, and parental self-esteem on parental perceptions of family adjustment in families of children with disabilities. For mothers,…
Glaser, Robert; Venus, Joachim
2017-04-01
The data presented in this article are related to the research article entitled "Model-based characterization of growth performance and L-lactic acid production with high optical purity by thermophilic Bacillus coagulans in a lignin-supplemented mixed substrate medium" (R. Glaser and J. Venus, 2016) [1]. This data survey provides information on the characterization of three Bacillus coagulans strains, including cofermentation of lignocellulose-related sugars in lignin-containing media. Basic characterization data are supported by optical-density high-throughput screening and parameter adjustment to logistic growth models. Lab-scale fermentation procedures are examined by model adjustment of a Monod kinetics-based growth model. Lignin consumption is analyzed using data on the decolorization of a lignin-supplemented minimal medium.
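The logistic growth models used for parameter adjustment have the standard closed form N(t) = K / (1 + ((K - N0)/N0) * exp(-r*t)). A minimal sketch with illustrative parameters (not fitted values from the study):

```python
import math

def logistic_growth(t, n0, k, r):
    """Logistic growth curve N(t) = K / (1 + ((K - N0) / N0) * exp(-r * t)),
    with initial biomass N0, carrying capacity K and maximum rate r."""
    return k / (1.0 + (k - n0) / n0 * math.exp(-r * t))

# Illustrative parameters only: initial OD 0.1, capacity 2.0, rate 0.5 / h.
n0, k, r = 0.1, 2.0, 0.5
curve = [logistic_growth(t, n0, k, r) for t in range(0, 25)]
```

In a fitting workflow, n0, k and r would be adjusted (e.g. by least squares) against the optical-density screening data rather than fixed as here.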
Geographic Model and Biomarker-Derived Measures of Pesticide Exposure and Parkinson’s Disease
RITZ, BEATE; COSTELLO, SADIE
2013-01-01
For more than two decades, reports have suggested that pesticides and herbicides may be an etiologic factor in idiopathic Parkinson’s disease (PD). To date, no clear associations with any specific pesticide have been demonstrated from epidemiological studies perhaps, in part, because methods of reliably estimating exposures are lacking. We tested the validity of a Geographic Information Systems (GIS)-based exposure assessment model that estimates potential environmental exposures at residences from pesticide applications to agricultural crops based on California Pesticide Use Reports (PUR). Using lipid-adjusted dichlorodiphenyldichloroethylene (DDE) serum levels as the “gold standard” for pesticide exposure, we conducted a validation study in a sample taken from an ongoing, population-based case–control study of PD in Central California. Residential, occupational, and other risk factor data were collected for 22 cases and 24 controls from Kern county, California. Environmental GIS–PUR-based organochlorine (OC) estimates were derived for each subject and compared to lipid-adjusted DDE serum levels. Relying on a linear regression model, we predicted log-transformed lipid-adjusted DDE serum levels. GIS–PUR-derived OC measure, body mass index, age, gender, mixing and loading pesticides by hand, and using pesticides in the home, together explained 47% of the DDE serum level variance (adjusted r2 = 0.47). The specificity of using our environmental GIS–PUR-derived OC measures to identify those with high-serum DDE levels was reasonably good (87%). Our environmental GIS–PUR-based approach appears to provide a valid model for assessing residential exposures to agricultural pesticides. PMID:17119217
Bias-adjusted satellite-based rainfall estimates for predicting floods: Narayani Basin
Shrestha, M.S.; Artan, G.A.; Bajracharya, S.R.; Gautam, D.K.; Tokar, S.A.
2011-01-01
In Nepal, as the spatial distribution of rain gauges is not sufficient to provide a detailed perspective on the highly varied spatial nature of rainfall, satellite-based rainfall estimates provide the opportunity for timely estimation. This paper presents the flood prediction of the Narayani Basin at the Devghat hydrometric station (32,000 km2) using bias-adjusted satellite rainfall estimates and the Geospatial Stream Flow Model (GeoSFM), a spatially distributed, physically based hydrologic model. The GeoSFM with gridded gauge-observed rainfall inputs, interpolated by kriging, was calibrated on 2003 data and validated on 2004 data to simulate stream flow, with both years having a Nash-Sutcliffe efficiency above 0.7. With the National Oceanic and Atmospheric Administration Climate Prediction Centre's rainfall estimates (CPC-RFE2.0) and the same calibrated parameters, model performance for 2003 deteriorated, but improved after recalibration with CPC-RFE2.0, indicating the need to recalibrate the model with satellite-based rainfall estimates. Adjusting the CPC-RFE2.0 by seasonal, monthly and 7-day moving average ratios improved model performance. Furthermore, new gauge-satellite merged rainfall estimates obtained from ingestion of local rain gauge data resulted in a significant improvement in flood predictability. The results indicate the applicability of satellite-based rainfall estimates in flood prediction with appropriate bias correction. © 2011 The Authors. Journal of Flood Risk Management © 2011 The Chartered Institution of Water and Environmental Management.
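The moving-average ratio adjustment mentioned above amounts to a multiplicative bias correction that scales the satellite series toward the gauge series. A simplified sketch, assuming a trailing window and toy daily totals (not data from the Narayani Basin):

```python
def moving_ratio_adjust(sat, gauge, window=7):
    """Multiplicative bias correction: scale each satellite value by the
    ratio of gauge to satellite rainfall accumulated over a trailing window."""
    adjusted = []
    for i in range(len(sat)):
        lo = max(0, i - window + 1)
        g_sum = sum(gauge[lo:i + 1])
        s_sum = sum(sat[lo:i + 1])
        ratio = g_sum / s_sum if s_sum > 0 else 1.0  # no correction if no rain
        adjusted.append(sat[i] * ratio)
    return adjusted

# Toy daily series in which the satellite underestimates by roughly 30%.
gauge = [10, 0, 5, 20, 8, 0, 12, 30, 6, 2]
sat = [7, 0, 3, 15, 6, 0, 8, 22, 4, 1]
adj = moving_ratio_adjust(sat, gauge)
```

The adjusted totals track the gauge totals much more closely than the raw satellite series, which is the intended effect of the bias correction before the hydrologic model is run.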
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weaver, Virginia M., E-mail: vweaver@jhsph.edu; Johns Hopkins University School of Medicine, Baltimore, MD; Welch Center for Prevention, Epidemiology, and Clinical Research, Johns Hopkins Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD
Positive associations between urine toxicant levels and measures of glomerular filtration rate (GFR) have been reported recently in a range of populations. The explanation for these associations, in a direction opposite that of traditional nephrotoxicity, is uncertain. Variation in associations by urine concentration adjustment approach has also been observed. Associations of urine cadmium, thallium and uranium in models of serum creatinine- and cystatin-C-based estimated GFR (eGFR) were examined using multiple linear regression in a cross-sectional study of adolescents residing near a lead smelter complex. Urine concentration adjustment approaches compared included urine creatinine, urine osmolality and no adjustment. Median age, blood lead and urine cadmium, thallium and uranium were 13.9 years, 4.0 μg/dL, 0.22, 0.27 and 0.04 g/g creatinine, respectively, in 512 adolescents. Urine cadmium and thallium were positively associated with serum creatinine-based eGFR only when urine creatinine was used to adjust for urine concentration (β coefficient = 3.1 mL/min/1.73 m²; 95% confidence interval = 1.4, 4.8 per each doubling of urine cadmium). Weaker positive associations, also only with urine creatinine adjustment, were observed between these metals and serum cystatin-C-based eGFR and between urine uranium and serum creatinine-based eGFR. Additional research using non-creatinine-based methods of adjustment for urine concentration is necessary. - Highlights: • Positive associations between urine metals and creatinine-based eGFR are unexpected. • Optimal approach to urine concentration adjustment for urine biomarkers uncertain. • We compared urine concentration adjustment methods. • Positive associations observed only with urine creatinine adjustment. • Additional research using non-creatinine-based methods of adjustment needed.
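The two concentration-adjustment approaches compared above are simple normalizations of the spot-sample concentration. A sketch; the reference osmolality of 300 mOsm/kg and the sample values are illustrative assumptions, not values from the study:

```python
def creatinine_adjusted(metal_ug_per_L, creatinine_g_per_L):
    """Express a urine metal concentration per gram of creatinine (ug/g)."""
    return metal_ug_per_L / creatinine_g_per_L

def osmolality_adjusted(metal_ug_per_L, osmolality, reference_osm=300.0):
    """Scale a urine metal concentration to a reference urine osmolality
    (mOsm/kg); the reference value here is an illustrative assumption."""
    return metal_ug_per_L * reference_osm / osmolality

# The same hypothetical spot sample adjusted both ways.
thallium = 0.33           # ug/L, illustrative
by_creatinine = creatinine_adjusted(thallium, 1.5)    # ug/g creatinine
by_osmolality = osmolality_adjusted(thallium, 450.0)  # ug/L at 300 mOsm/kg
```

Because creatinine excretion itself varies with muscle mass, age and kidney function, the two approaches can diverge in regression models, which is the comparison the study exploits.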
Antioch, Kathryn M; Walsh, Michael K
2004-06-01
Hospitals throughout the world using funding based on diagnosis-related groups (DRG) have incurred substantial budgetary deficits, despite high efficiency. We identify the limitations of DRG funding that lack risk (severity) adjustment for State-wide referral services. Methods to risk adjust DRGs are instructive. The average price in casemix funding in the Australian State of Victoria is policy based, not benchmarked. Average cost weights are too low for high-complexity DRGs relating to State-wide referral services such as heart and lung transplantation and trauma. Risk-adjusted specified grants (RASG) are required for five high-complexity respiratory, cardiology and stroke DRGs incurring annual deficits of $3.6 million due to high casemix complexity and government under-funding despite high efficiency. Five stepwise linear regressions for each DRG excluded non-significant variables and assessed heteroskedasticity and multicollinearity. Cost per patient was the dependent variable. Significant independent variables were age, length-of-stay outliers, number of disease types, diagnoses, procedures and emergency status. Diagnosis and procedure severity markers were identified. The methodology and the work of the State-wide Risk Adjustment Working Group can facilitate risk adjustment of DRGs State-wide and for Treasury negotiations for expenditure growth. The Alfred Hospital previously negotiated RASG of $14 million over 5 years for three trauma and chronic DRGs. Some chronic diseases require risk-adjusted capitation funding models for Australian Health Maintenance Organizations as an alternative to casemix funding. The use of Diagnostic Cost Groups can facilitate State and Federal government reform via new population-based risk adjusted funding models that measure health need.
Breslow, Norman E.; Lumley, Thomas; Ballantyne, Christie M; Chambless, Lloyd E.; Kulich, Michal
2009-01-01
The case-cohort study involves two-phase sampling: simple random sampling from an infinite super-population at phase one and stratified random sampling from a finite cohort at phase two. Standard analyses of case-cohort data involve solution of inverse probability weighted (IPW) estimating equations, with weights determined by the known phase two sampling fractions. The variance of parameter estimates in (semi)parametric models, including the Cox model, is the sum of two terms: (i) the model based variance of the usual estimates that would be calculated if full data were available for the entire cohort; and (ii) the design based variance from IPW estimation of the unknown cohort total of the efficient influence function (IF) contributions. This second variance component may be reduced by adjusting the sampling weights, either by calibration to known cohort totals of auxiliary variables correlated with the IF contributions or by their estimation using these same auxiliary variables. Both adjustment methods are implemented in the R survey package. We derive the limit laws of coefficients estimated using adjusted weights. The asymptotic results suggest practical methods for construction of auxiliary variables that are evaluated by simulation of case-cohort samples from the National Wilms Tumor Study and by log-linear modeling of case-cohort data from the Atherosclerosis Risk in Communities Study. Although not semiparametric efficient, estimators based on adjusted weights may come close to achieving full efficiency within the class of augmented IPW estimators. PMID:20174455
Linden, Ariel
2017-08-01
When a randomized controlled trial is not feasible, health researchers typically use observational data and rely on statistical methods to adjust for confounding when estimating treatment effects. These methods generally fall into 3 categories: (1) estimators based on a model for the outcome using conventional regression adjustment; (2) weighted estimators based on the propensity score (ie, a model for the treatment assignment); and (3) "doubly robust" (DR) estimators that model both the outcome and propensity score within the same framework. In this paper, we introduce a new DR estimator that utilizes marginal mean weighting through stratification (MMWS) as the basis for weighted adjustment. This estimator may prove more accurate than treatment effect estimators because MMWS has been shown to be more accurate than other models when the propensity score is misspecified. We therefore compare the performance of this new estimator to other commonly used treatment effects estimators. Monte Carlo simulation is used to compare the DR-MMWS estimator to regression adjustment, 2 weighted estimators based on the propensity score and 2 other DR methods. To assess performance under varied conditions, we vary the level of misspecification of the propensity score model as well as misspecify the outcome model. Overall, DR estimators generally outperform methods that model one or the other components (eg, propensity score or outcome). The DR-MMWS estimator outperforms all other estimators when both the propensity score and outcome models are misspecified and performs equally as well as other DR estimators when only the propensity score is misspecified. Health researchers should consider using DR-MMWS as the principal evaluation strategy in observational studies, as this estimator appears to outperform other estimators in its class. © 2017 John Wiley & Sons, Ltd.
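Marginal mean weighting through stratification (MMWS) can be sketched as follows, under the usual simplification of equal-size propensity-score strata; this follows the general MMWS idea (weight = n_s * Pr(T = t) / n_ts) rather than any specific implementation from the paper:

```python
from collections import Counter

def mmws_weights(ps, treat, n_strata=5):
    """Marginal mean weighting through stratification (simplified sketch).

    Units are binned into equal-size strata by propensity-score rank; a unit
    with treatment t in stratum s receives weight n_s * Pr(T = t) / n_ts, so
    each stratum contributes in proportion to its size under the marginal
    treatment distribution.
    """
    n = len(ps)
    order = sorted(range(n), key=lambda i: ps[i])
    stratum = [0] * n
    for rank, i in enumerate(order):
        stratum[i] = min(rank * n_strata // n, n_strata - 1)
    p_treat = sum(treat) / n
    marginal = {1: p_treat, 0: 1.0 - p_treat}
    n_s = Counter(stratum)                  # units per stratum
    n_ts = Counter(zip(stratum, treat))     # units per (stratum, treatment) cell
    return [n_s[stratum[i]] * marginal[treat[i]] / n_ts[(stratum[i], treat[i])]
            for i in range(n)]

# Toy data: 10 units, 2 strata, half treated.
ps = [0.10, 0.15, 0.20, 0.30, 0.35, 0.40, 0.55, 0.60, 0.70, 0.80]
treat = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
w = mmws_weights(ps, treat, n_strata=2)
```

In the doubly robust variant described above, these weights would then be used in a weighted outcome regression, so that consistency holds if either the propensity model or the outcome model is correct.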
NASA Technical Reports Server (NTRS)
Rackl, Robert; Weston, Adam
2005-01-01
The literature on turbulent boundary layer pressure fluctuations provides several empirical models which were compared to the measured TU-144 data. The Efimtsov model showed the best agreement. Adjustments were made to improve its agreement further, consisting of the addition of a broad band peak in the mid frequencies, and a minor modification to the high frequency rolloff. The adjusted Efimtsov predicted and measured results are compared for both subsonic and supersonic flight conditions. Measurements in the forward and middle portions of the fuselage have better agreement with the model than those from the aft portion. For High Speed Civil Transport supersonic cruise, interior levels predicted by use of this model are expected to increase by 1-3 dB due to the adjustments to the Efimtsov model. The space-time cross-correlations and cross-spectra of the fluctuating surface pressure were also investigated. This analysis is an important ingredient in structural acoustic models of aircraft interior noise. Once again the measured data were compared to the predicted levels from the Efimtsov model.
Jackson, Sarah S; Leekha, Surbhi; Magder, Laurence S; Pineles, Lisa; Anderson, Deverick J; Trick, William E; Woeltje, Keith F; Kaye, Keith S; Stafford, Kristen; Thom, Kerri; Lowe, Timothy J; Harris, Anthony D
2017-09-01
BACKGROUND Risk adjustment is needed to fairly compare central-line-associated bloodstream infection (CLABSI) rates between hospitals. Until 2017, the Centers for Disease Control and Prevention (CDC) methodology adjusted CLABSI rates only by type of intensive care unit (ICU). The 2017 CDC models also adjust for hospital size and medical school affiliation. We hypothesized that risk adjustment would be improved by including patient demographics and comorbidities from electronically available hospital discharge codes. METHODS Using a cohort design across 22 hospitals, we analyzed data from ICU patients admitted between January 2012 and December 2013. Demographics and International Classification of Diseases, Ninth Edition, Clinical Modification (ICD-9-CM) discharge codes were obtained for each patient, and CLABSIs were identified by trained infection preventionists. Models adjusting only for ICU type and for ICU type plus patient case mix were built and compared using discrimination and standardized infection ratio (SIR). Hospitals were ranked by SIR for each model to examine and compare the changes in rank. RESULTS Overall, 85,849 ICU patients were analyzed and 162 (0.2%) developed CLABSI. The significant variables added to the ICU model were coagulopathy, paralysis, renal failure, malnutrition, and age. The C statistics were 0.55 (95% CI, 0.51-0.59) for the ICU-type model and 0.64 (95% CI, 0.60-0.69) for the ICU-type plus patient case-mix model. When the hospitals were ranked by adjusted SIRs, 10 hospitals (45%) changed rank when comorbidity was added to the ICU-type model. CONCLUSIONS Our risk-adjustment model for CLABSI using electronically available comorbidities demonstrated better discrimination than did the CDC model. The CDC should strongly consider comorbidity-based risk adjustment to more accurately compare CLABSI rates across hospitals. Infect Control Hosp Epidemiol 2017;38:1019-1024.
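The standardized infection ratio (SIR) used above to rank hospitals is the ratio of observed infections to the expected count implied by the risk model. A minimal sketch with illustrative numbers (not data from the 22-hospital cohort):

```python
def standardized_infection_ratio(observed, predicted_risks):
    """SIR = observed infections / expected infections, where the expected
    count is the sum of the model-predicted per-patient probabilities."""
    expected = sum(predicted_risks)
    return observed / expected if expected > 0 else float("nan")

# Toy hospital: 1000 ICU patients; a sicker subgroup carries a higher
# case-mix-predicted CLABSI risk (all numbers are illustrative).
risks = [0.001] * 800 + [0.005] * 200
observed = 3
sir = standardized_infection_ratio(observed, risks)
```

Adding comorbidity terms to the risk model changes each patient's predicted probability, hence the expected count, hence the SIR, which is why hospital rankings shifted when case mix was added.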
A concordance-based study to assess doctors’ and nurses’ mental models in Internal Medicine
Chan, K. C. Gary; Muller-Juge, Virginie; Cullati, Stéphane; Hudelson, Patricia; Maître, Fabienne; Vu, Nu V.; Savoldelli, Georges L.; Nendaz, Mathieu R.
2017-01-01
Interprofessional collaboration between doctors and nurses is based on team mental models, in particular for each professional’s roles. Our objective was to identify factors influencing concordance on the expectations of doctors’ and nurses’ roles and responsibilities in an Internal Medicine ward. Using a dataset of 196 doctor-nurse pairs (14x14 = 196), we analyzed choices and prioritized management actions of 14 doctors and 14 nurses in six clinical nurse role scenarios, and in five doctor role scenarios (6 options per scenario). In logistic regression models with a non-nested correlation structure, we evaluated concordance among doctors and nurses, and adjusted for potential confounders (including prior experience in Internal Medicine, acuteness of case and gender). Concordance was associated with number of female professionals (adjusted OR 1.32, 95% CI 1.02 to 1.73), for acute situations (adjusted OR 2.02, 95% CI 1.13 to 3.62), and in doctor role scenarios (adjusted OR 2.19, 95% CI 1.32 to 3.65). Prior experience and country of training were not significant predictors of concordance. In conclusion, our concordance-based approach helped us identify areas of lower concordance in expected doctor-nurse roles and responsibilities, particularly in non-acute situations, which can be targeted by future interprofessional, educational interventions. PMID:28792524
A biologically-based dose response (BBDR) model for the hypothalamic-pituitary-thyroid (HPT) axis in the lactating rat and nursing pup was developed to describe the perturbations caused by iodide deficiency on the HPT axis. Model calibrations, carried out by adjusting key model p...
Fleischmann-Struzek, Carolin; Rüddel, Hendrik; Reinhart, Konrad; Thomas-Rüddel, Daniel O.
2018-01-01
Background Sepsis is a major cause of preventable deaths in hospitals. Feasible and valid methods for comparing quality of sepsis care between hospitals are needed. The aim of this study was to develop a risk-adjustment model suitable for comparing sepsis-related mortality between German hospitals. Methods We developed a risk-model using national German claims data. Since these data are available with a time-lag of 1.5 years only, the stability of the model across time was investigated. The model was derived from inpatient cases with severe sepsis or septic shock treated in 2013 using logistic regression with backward selection and generalized estimating equations to correct for clustering. It was validated among cases treated in 2015. Finally, the model development was repeated in 2015. To investigate secular changes, the risk-adjusted trajectory of mortality across the years 2010–2015 was analyzed. Results The 2013 derivation sample consisted of 113,750 cases; the 2015 validation sample consisted of 134,851 cases. The model developed in 2013 showed good validity regarding discrimination (AUC = 0.74), calibration (observed mortality in 1st and 10th risk-decile: 11%-78%), and fit (R2 = 0.16). Validity remained stable when the model was applied to 2015 (AUC = 0.74, 1st and 10th risk-decile: 10%-77%, R2 = 0.17). There was no indication of overfitting of the model. The final model developed in 2015 contained 40 risk-factors. Between 2010 and 2015 hospital mortality in sepsis decreased from 48% to 42%. Adjusted for risk-factors the trajectory of decrease was still significant. Conclusions The risk-model shows good predictive validity and stability across time. The model is suitable to be used as an external algorithm for comparing risk-adjusted sepsis mortality among German hospitals or regions based on administrative claims data, but secular changes need to be taken into account when interpreting risk-adjusted mortality. PMID:29558486
Schwarzkopf, Daniel; Fleischmann-Struzek, Carolin; Rüddel, Hendrik; Reinhart, Konrad; Thomas-Rüddel, Daniel O
2018-01-01
Sepsis is a major cause of preventable deaths in hospitals. Feasible and valid methods for comparing quality of sepsis care between hospitals are needed. The aim of this study was to develop a risk-adjustment model suitable for comparing sepsis-related mortality between German hospitals. We developed a risk-model using national German claims data. Since these data are available with a time-lag of 1.5 years only, the stability of the model across time was investigated. The model was derived from inpatient cases with severe sepsis or septic shock treated in 2013 using logistic regression with backward selection and generalized estimating equations to correct for clustering. It was validated among cases treated in 2015. Finally, the model development was repeated in 2015. To investigate secular changes, the risk-adjusted trajectory of mortality across the years 2010-2015 was analyzed. The 2013 derivation sample consisted of 113,750 cases; the 2015 validation sample consisted of 134,851 cases. The model developed in 2013 showed good validity regarding discrimination (AUC = 0.74), calibration (observed mortality in 1st and 10th risk-decile: 11%-78%), and fit (R2 = 0.16). Validity remained stable when the model was applied to 2015 (AUC = 0.74, 1st and 10th risk-decile: 10%-77%, R2 = 0.17). There was no indication of overfitting of the model. The final model developed in 2015 contained 40 risk-factors. Between 2010 and 2015 hospital mortality in sepsis decreased from 48% to 42%. Adjusted for risk-factors the trajectory of decrease was still significant. The risk-model shows good predictive validity and stability across time. The model is suitable to be used as an external algorithm for comparing risk-adjusted sepsis mortality among German hospitals or regions based on administrative claims data, but secular changes need to be taken into account when interpreting risk-adjusted mortality.
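The discrimination measure reported in this abstract (AUC, or C statistic) has a simple rank interpretation that can be computed directly. A self-contained sketch with illustrative risk scores, not the sepsis model's outputs:

```python
# C statistic: probability that a randomly chosen case with the outcome
# (e.g., death) received a higher predicted risk than a randomly chosen
# case without it; ties count half. Data below are illustrative.

def c_statistic(scores, outcomes):
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

risks = [0.9, 0.8, 0.4, 0.3, 0.2]   # predicted mortality risks
died = [1, 1, 0, 1, 0]              # observed outcomes
print(c_statistic(risks, died))
```

A value of 0.5 is no better than chance; the 0.74 reported above indicates moderately good separation of deaths from survivors.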
Accuracy Validation of Large-scale Block Adjustment without Control of ZY3 Images over China
NASA Astrophysics Data System (ADS)
Yang, Bo
2016-06-01
Mapping from optical satellite images without ground control is one of the goals of photogrammetry. Using 8802 three-linear-array stereo images (a total of 26406 images) of ZY3 over China, we propose a large-scale block adjustment method for optical satellite images without ground control, based on the RPC model, in which each single image is treated as an adjustment unit. To overcome the block distortion caused by unstable adjustment without ground control and the excessive accumulation of errors, we use virtual control points created from the initial RPC model of the images as weighted observations and add them to the adjustment model to refine the adjustment. We use 8000 uniformly distributed high-precision check points to evaluate the geometric accuracy of the DOM (Digital Ortho Model) and DSM (Digital Surface Model) products, for which the standard deviations in plane and elevation are 3.6 m and 4.2 m, respectively. The geometric accuracy is consistent across the whole block and the mosaic accuracy of neighboring DOMs is within one pixel, enabling seamless mosaicking. This method achieves mapping accuracy better than 5 m for the whole of China from ZY3 satellite images without ground control.
Cummings, E. Mark; Merrilees, Christine E.; Schermerhorn, Alice C.; Goeke-Morey, Marcie C.; Shirlow, Peter; Cairns, Ed
2013-01-01
Relations between political violence and child adjustment are matters of international concern. Past research demonstrates the significance of community, family and child psychological processes in child adjustment, supporting study of inter-relations between multiple social ecological factors and child adjustment in contexts of political violence. Testing a social ecological model, 300 mothers and their children (M = 12.28 years, SD = 1.77) from Catholic and Protestant working class neighborhoods in Belfast, Northern Ireland completed measures of community discord, family relations, and children’s regulatory processes (i.e., emotional security) and outcomes. Historical political violence in neighborhoods based on objective records (i.e., politically motivated deaths) was related to family members’ reports of current sectarian and non-sectarian antisocial behavior. Interparental conflict and parental monitoring and children’s emotional security about both the community and family contributed to explanatory pathways for relations between sectarian antisocial behavior in communities and children’s adjustment problems. The discussion evaluates support for social ecological models for relations between political violence and child adjustment and its implications for understanding relations in other parts of the world. PMID:20423550
Cummings, E Mark; Merrilees, Christine E; Schermerhorn, Alice C; Goeke-Morey, Marcie C; Shirlow, Peter; Cairns, Ed
2010-05-01
Relations between political violence and child adjustment are matters of international concern. Past research demonstrates the significance of community, family, and child psychological processes in child adjustment, supporting study of interrelations between multiple social ecological factors and child adjustment in contexts of political violence. Testing a social ecological model, 300 mothers and their children (M = 12.28 years, SD = 1.77) from Catholic and Protestant working class neighborhoods in Belfast, Northern Ireland, completed measures of community discord, family relations, and children's regulatory processes (i.e., emotional security) and outcomes. Historical political violence in neighborhoods based on objective records (i.e., politically motivated deaths) was related to family members' reports of current sectarian antisocial behavior and nonsectarian antisocial behavior. Interparental conflict and parental monitoring and children's emotional security about both the community and family contributed to explanatory pathways for relations between sectarian antisocial behavior in communities and children's adjustment problems. The discussion evaluates support for social ecological models for relations between political violence and child adjustment and its implications for understanding relations in other parts of the world.
NASA Astrophysics Data System (ADS)
Helman, David; Lensky, Itamar M.; Osem, Yagil; Rohatyn, Shani; Rotenberg, Eyal; Yakir, Dan
2017-09-01
Estimations of ecosystem-level evapotranspiration (ET) and CO2 uptake in water-limited environments are scarce and scaling up ground-level measurements is not straightforward. A biophysical approach using remote sensing (RS) and meteorological data (RS-Met) is adjusted to extreme high-energy water-limited Mediterranean ecosystems that suffer from continuous stress conditions to provide daily estimations of ET and CO2 uptake (measured as gross primary production, GPP) at a spatial resolution of 250 m. The RS-Met was adjusted using a seasonal water deficit factor (fWD) based on daily rainfall, temperature and radiation data. We validated our adjusted RS-Met with eddy covariance flux measurements using a newly developed mobile lab system and the single active FLUXNET station operating in this region (Yatir pine forest station) at a total of seven forest and non-forest sites across a climatic transect in Israel (280-770 mm yr-1). RS-Met was also compared to the satellite-borne MODIS-based ET and GPP products (MOD16 and MOD17, respectively) at these sites. Results show that the inclusion of the fWD significantly improved the model, with R = 0.64-0.91 for the ET-adjusted model (compared to 0.05-0.80 for the unadjusted model) and R = 0.72-0.92 for the adjusted GPP model (compared to R = 0.56-0.90 for the non-adjusted model). The RS-Met (with the fWD) successfully tracked observed changes in ET and GPP between dry and wet seasons across the sites. ET and GPP estimates from the adjusted RS-Met also agreed well with eddy covariance estimates on an annual timescale at the FLUXNET station of Yatir (266 ± 61 vs. 257 ± 58 mm yr-1 and 765 ± 112 vs. 748 ± 124 gC m-2 yr-1 for ET and GPP, respectively). Comparison with MODIS products showed consistently lower estimates from the MODIS-based models, particularly at the forest sites.
Using the adjusted RS-Met, we show that afforestation significantly increased the water use efficiency (the ratio of carbon uptake to ET) in this region, with the positive effect decreasing when moving from dry to more humid environments, strengthening the importance of drylands afforestation. This simple yet robust biophysical approach shows promise for reliable ecosystem-level estimations of ET and CO2 uptake in extreme high-energy water-limited environments.
NASA Astrophysics Data System (ADS)
Yamana, Teresa K.; Eltahir, Elfatih A. B.
2011-02-01
This paper describes the use of satellite-based estimates of rainfall to force the Hydrology, Entomology and Malaria Transmission Simulator (HYDREMATS), a hydrology-based mechanistic model of malaria transmission. We first examined the temporal resolution of rainfall input required by HYDREMATS. Simulations conducted over Banizoumbou village in Niger showed that for reasonably accurate simulation of mosquito populations, the model requires rainfall data with at least 1 h resolution. We then investigated whether HYDREMATS could be effectively forced by satellite-based estimates of rainfall instead of ground-based observations. The Climate Prediction Center morphing technique (CMORPH) precipitation estimates distributed by the National Oceanic and Atmospheric Administration are available at a 30 min temporal resolution and 8 km spatial resolution. We compared mosquito populations simulated by HYDREMATS when the model is forced by adjusted CMORPH estimates and by ground observations. The results demonstrate that adjusted rainfall estimates from satellites can be used with a mechanistic model to accurately simulate the dynamics of mosquito populations.
Predicting cost of care using self-reported health status data.
Boscardin, Christy K; Gonzales, Ralph; Bradley, Kent L; Raven, Maria C
2015-09-23
We examined whether self-reported employee health status data can improve the performance of administrative data-based models for predicting future high health costs, and developed a predictive model for identifying new high cost individuals. This retrospective cohort study used data from 8,917 Safeway employees self-insured by Safeway during 2008 and 2009. We created models using step-wise multivariable logistic regression starting with health services use data, then socio-demographic data, and finally adding the self-reported health status data to the model. Adding self-reported health data to the baseline model that included only administrative data (health services use and demographic variables; c-statistic = 0.63) increased the model's predictive power (c-statistic = 0.70). Risk factors associated with being a new high cost individual in 2009 were: 1) had one or more ED visits in 2008 (adjusted OR: 1.87, 95 % CI: 1.52, 2.30), 2) had one or more hospitalizations in 2008 (adjusted OR: 1.95, 95 % CI: 1.38, 2.77), 3) being female (adjusted OR: 1.34, 95 % CI: 1.16, 1.55), 4) increasing age (compared with age 18-35, adjusted OR for 36-49 years: 1.28; 95 % CI: 1.03, 1.60; adjusted OR for 50-64 years: 1.92, 95 % CI: 1.55, 2.39; adjusted OR for 65+ years: 3.75, 95 % CI: 2.67, 2.23), 5) the presence of self-reported depression (adjusted OR: 1.53, 95 % CI: 1.29, 1.81), 6) chronic pain (adjusted OR: 2.22, 95 % CI: 1.81, 2.72), 7) diabetes (adjusted OR: 1.73, 95 % CI: 1.35, 2.23), 8) high blood pressure (adjusted OR: 1.42, 95 % CI: 1.21, 1.67), and 9) above average BMI (adjusted OR: 1.20, 95 % CI: 1.04, 1.38). The comparison of the models between the full sample and the sample without the previous high cost members indicated significant differences in the predictors.
This has important implications for models using only health service use (administrative data), given that past high cost is significantly correlated with future high cost and often drives the predictive models. Self-reported health data improved the ability of our model to identify individuals at risk for being high cost beyond what was possible with administrative data alone.
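The adjusted odds ratios above come from multivariable logistic regression; for intuition, the crude (unadjusted) analogue for a single risk factor can be computed from a 2x2 table. A sketch using the standard Woolf log-OR method, with made-up counts rather than the Safeway cohort's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and 95% CI from a 2x2 table.
    a, b: exposed with / without the outcome
    c, d: unexposed with / without the outcome
    The counts used below are illustrative only."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# e.g., 20 of 100 with an ED visit became high cost vs 10 of 100 without
print(odds_ratio_ci(20, 80, 10, 90))
```

A multivariable model adjusts each such estimate for the other covariates, which is why the reported ORs are "adjusted" rather than crude table ratios.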
ERIC Educational Resources Information Center
Kalkan, Melek; Ersanli, Ercumend
2008-01-01
The aim of this study is to investigate the effects of the marriage enrichment program based on the cognitive-behavioral approach on levels of marital adjustment of individuals. The experimental and control group of this research was totally composed of 30 individuals. A pre-test post-test research model with control group was used in this…
Modelling the impact of new patient visits on risk adjusted access at 2 clinics.
Kolber, Michael A; Rueda, Germán; Sory, John B
2018-06-01
To evaluate the effect that new outpatient clinic visits have on the availability of follow-up visits for established patients when patient visit frequency is risk adjusted. Diagnosis codes for patients from 2 Internal Medicine Clinics were extracted through billing data. The HHS-HCC risk adjusted scores for each clinic were determined based upon the average of all clinic practitioners' profiles. These scores were then used to project encounter frequencies for established patients, and for new patients entering the clinic based on risk and time of entry into the clinics. A distinct mean risk frequency distribution for physicians in each clinic could be defined, providing model parameters. Within the model, follow-up visit utilization at the highest risk adjusted visit frequencies would require more follow-up slots than currently available when new patient no-show rates and annual patient loss are included. Patients seen at an intermediate or lower risk adjusted visit frequency could be accommodated when new patient no-show rates and annual patient clinic loss are considered. Value-based care is driven by control of cost while maintaining quality of care. In order to control cost, there has been a drive to increase visit frequency in primary care for those patients at increased risk. Adding new patients to primary care clinics limits the availability of follow-up slots that accrue over time for those at highest risk, thereby limiting disease and, potentially, cost control. If the frequency of established care visits can be reduced by improved disease control, closing the practice to new patients, hiring health care extenders, or providing non-face-to-face care models, then quality and cost of care may be improved. © 2018 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Sun, Xiaoqiang; Cai, Yingfeng; Wang, Shaohua; Liu, Yanling; Chen, Long
2016-01-01
The control problems associated with vehicle height adjustment of electronically controlled air suspension (ECAS) still pose theoretical challenges for researchers, which manifest themselves in the publications on this subject over recent years. This paper deals with modeling and control of a vehicle height adjustment system for ECAS, which is an example of a hybrid dynamical system due to the coexistence and coupling of continuous variables and discrete events. A mixed logical dynamical (MLD) modeling approach is chosen for capturing enough details of the vehicle height adjustment process. The hybrid dynamic model is constructed on the basis of some assumptions and piecewise linear approximation for component nonlinearities. Then, the on-off statuses of solenoid valves and the piecewise approximation process are described by propositional logic, and the hybrid system is transformed into a set of linear mixed-integer equalities and inequalities, denoted as the MLD model, automatically by HYSDEL. Using this model, a hybrid model predictive controller (HMPC) is tuned based on online mixed-integer quadratic optimization (MIQP). Two different scenarios are considered in the simulation, whose results verify the height adjustment effectiveness of the proposed approach. Explicit solutions of the controller are computed to control the vehicle height adjustment system in real time using an offline multi-parametric programming technology (MPT), thus converting the controller into an equivalent explicit piecewise affine form. Finally, bench experiments for vehicle height lifting, holding and lowering procedures are conducted, which demonstrate that the HMPC can adjust the vehicle height by controlling the on-off statuses of solenoid valves directly. This research proposes a new modeling and control method for vehicle height adjustment of ECAS, which leads to a closed-loop system with favorable dynamical properties.
USDA-ARS?s Scientific Manuscript database
A comprehensive stream bank erosion model based on excess shear stress has been developed and incorporated in the hydrological model Soil and Water Assessment Tool (SWAT). It takes into account processes such as weathering, vegetative cover, and channel meanders to adjust critical and effective str...
Gustafson, Paul; Gilbert, Mark; Xia, Michelle; Michelow, Warren; Robert, Wayne; Trussler, Terry; McGuire, Marissa; Paquette, Dana; Moore, David M; Gustafson, Reka
2013-05-15
Venue sampling is a common sampling method for populations of men who have sex with men (MSM); however, men who visit venues frequently are more likely to be recruited. While statistical adjustment methods are recommended, these have received scant attention in the literature. We developed a novel approach to adjust for frequency of venue attendance (FVA) and assess the impact of associated bias in the ManCount Study, a venue-based survey of MSM conducted in Vancouver, British Columbia, Canada, in 2008-2009 to measure the prevalence of human immunodeficiency virus and other infections and associated behaviors. Sampling weights were determined from an abbreviated list of questions on venue attendance and were used to adjust estimates of prevalence for health and behavioral indicators using a Bayesian, model-based approach. We found little effect of FVA adjustment on biological or sexual behavior indicators (primary outcomes); however, adjustment for FVA did result in differences in the prevalence of demographic indicators, testing behaviors, and a small number of additional variables. While these findings are reassuring and lend credence to unadjusted prevalence estimates from this venue-based survey, adjustment for FVA did shed important insights on MSM subpopulations that were not well represented in the sample.
Ion flux through membrane channels--an enhanced algorithm for the Poisson-Nernst-Planck model.
Dyrka, Witold; Augousti, Andy T; Kotulska, Malgorzata
2008-09-01
A novel algorithmic scheme for numerical solution of the 3D Poisson-Nernst-Planck model is proposed. The algorithmic improvements are universal and independent of the detailed physical model. They include three major steps: an adjustable gradient-based step value, an adjustable relaxation coefficient, and an optimized segmentation of the modeled space. The enhanced algorithm significantly accelerates the speed of computation and reduces the computational demands. The theoretical model was tested on a regular artificial channel and validated on a real protein channel, alpha-hemolysin, proving its efficiency. (c) 2008 Wiley Periodicals, Inc.
Selmer, Randi; Haglund, Bengt; Furu, Kari; Andersen, Morten; Nørgaard, Mette; Zoëga, Helga; Kieler, Helle
2016-10-01
Compare analyses of a pooled data set on the individual level with aggregate meta-analysis in a multi-database study. We reanalysed data on 2.3 million births in a Nordic register based cohort study. We compared estimated odds ratios (OR) for the effect of selective serotonin reuptake inhibitors (SSRI) and venlafaxine use in pregnancy on any cardiovascular birth defect and the rare outcome right ventricular outflow tract obstructions (RVOTO). Common covariates included maternal age, calendar year, birth order, maternal diabetes, and co-medication. Additional covariates were added in analyses with country-optimized adjustment. Country adjusted OR (95%CI) for any cardiovascular birth defect in the individual-based pooled analysis was 1.27 (1.17-1.39), 1.17 (1.07-1.27) adjusted for common covariates and 1.15 (1.05-1.26) adjusted for all covariates. In fixed effects meta-analyses pooled OR was 1.29 (1.19-1.41) based on crude country specific ORs, 1.19 (1.09-1.29) adjusted for common covariates, and 1.16 (1.06-1.27) for country-optimized adjustment. In a random effects model the adjusted OR was 1.07 (0.87-1.32). For RVOTO, OR was 1.48 (1.15-1.89) adjusted for all covariates in the pooled data set, and 1.53 (1.19-1.96) after country-optimized adjustment. Country-specific adjusted analyses at the substance level were not possible for RVOTO. Results of fixed effects meta-analysis and individual-based analyses of a pooled dataset were similar in this study on the association of SSRI/venlafaxine and cardiovascular birth defects. Country-optimized adjustment attenuated the estimates more than adjustment for common covariates only. When data are sparse pooled data on the individual level are needed for adjusted analyses. Copyright © 2016 John Wiley & Sons, Ltd.
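The fixed-effects meta-analysis contrasted here with individual-level pooled analysis is inverse-variance weighting of the country-specific log odds ratios. A sketch with illustrative (OR, CI lower, CI upper) inputs, not the Nordic estimates:

```python
import math

def fixed_effects_pool(estimates, z=1.96):
    """Inverse-variance (fixed-effects) pooling of (OR, ci_lo, ci_hi) tuples.
    The SE of each log-OR is recovered from the CI width; inputs used below
    are illustrative country-specific estimates."""
    logs, weights = [], []
    for or_, lo, hi in estimates:
        se = (math.log(hi) - math.log(lo)) / (2 * z)
        logs.append(math.log(or_))
        weights.append(1 / se ** 2)
    pooled = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return (math.exp(pooled),
            math.exp(pooled - z * se_pooled),
            math.exp(pooled + z * se_pooled))

# two hypothetical country-specific adjusted estimates
print(fixed_effects_pool([(1.25, 1.10, 1.42), (1.10, 0.95, 1.27)]))
```

Because each country contributes only its summary estimate, covariate adjustment must happen within each country first, which is exactly why sparse outcomes such as RVOTO required the individual-level pooled data.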
Wang, Haiyin; Jin, Chunlin; Jiang, Qingwu
2017-11-20
Traditional Chinese medicine (TCM) is an important part of China's medical system. Due to the prolonged low price of TCM procedures and the lack of an effective mechanism for dynamic price adjustment, the development of TCM has markedly lagged behind Western medicine. The World Health Organization (WHO) has emphasized the need to enhance the development of alternative and traditional medicine when creating national health care systems. The establishment of scientific and appropriate mechanisms to adjust the price of medical procedures in TCM is crucial to promoting the development of TCM. This study examined the incorporation of value indicators and data on basic manpower expended, time spent, technical difficulty, and degree of risk from the latest standards for the price of medical procedures in China, and it offers a price adjustment model with the relative price ratio as a key index. This study examined 144 TCM procedures and found that prices of TCM procedures were mainly based on the value of medical care provided; on average, medical care provided accounted for 89% of the price. Current price levels were generally low; on average, the current price accounted for 56% of the standardized value of a procedure. Current prices accounted for a markedly lower share of the standardized value for acupuncture, moxibustion, special treatment with TCM, and comprehensive TCM procedures. This study selected a total of 79 procedures and adjusted them by priority. After adjustment, the relationship between the price of TCM procedures and the suggested price was significantly improved (p < 0.01). This study suggests that adjustment of the price of medical procedures based on a standardized value parity model is a scientific and suitable method of price adjustment that can serve as a reference for other provinces and municipalities in China and for other countries and regions that mainly have fee-for-service (FFS) medical care.
Analysis of Nonlinear Dynamics in Linear Compressors Driven by Linear Motors
NASA Astrophysics Data System (ADS)
Chen, Liangyuan
2018-03-01
The analysis of the dynamic characteristics of the mechatronic system is of great significance for linear motor design and control. Steady-state nonlinear response characteristics of a linear compressor are investigated theoretically based on linearized and nonlinear models. First, the influencing factors, including the nonlinear gas force load, were analyzed. Then, a simple linearized model was set up to analyze their influence on the stroke and resonance frequency. Finally, the nonlinear model was set up to analyze the effects of piston mass, spring stiffness, and driving force as examples of design parameter variation. The simulation results show that the stroke can be set by adjusting the excitation amplitude and frequency, that the equilibrium position can be shifted by adjusting the DC input, and that for the most efficient operation the operating frequency must equal the resonance frequency.
Adjusted hospital death rates: a potential screen for quality of medical care.
Dubois, R W; Brook, R H; Rogers, W H
1987-09-01
Increased economic pressure on hospitals has accelerated the need to develop a screening tool for identifying hospitals that potentially provide poor quality care. Based upon data from 93 hospitals and 205,000 admissions, we used a multiple regression model to adjust each hospital's crude death rate. The adjustment process used age, origin of patient from the emergency department or nursing home, and a hospital case mix index based on DRGs (diagnosis-related groups). Before adjustment, hospital death rates ranged from 0.3 to 5.8 per 100 admissions. After adjustment, hospital death ratios (actual death rate divided by predicted death rate) ranged from 0.36 to 1.36. Eleven hospitals (12 per cent) were identified where the actual death rate exceeded the predicted death rate by more than two standard deviations. In nine hospitals (10 per cent), the predicted death rate exceeded the actual death rate by a similar statistical margin. The 11 hospitals with higher than predicted death rates may provide inadequate quality of care or have uniquely ill patient populations. The adjusted death rate model needs to be validated and generalized before it can be used routinely to screen hospitals. However, the remaining large differences in observed versus predicted death rates lead us to believe that important differences in hospital performance may exist.
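The screening step described above, flagging hospitals whose actual death rate departs from the regression-predicted rate by more than two standard deviations, can be sketched as follows (rates are illustrative, not the 93-hospital data):

```python
import statistics

# death rates per 100 admissions; "predicted" would come from the
# regression adjustment model. All values below are illustrative.
actual = {"H1": 1.9, "H2": 2.0, "H3": 2.1, "H4": 1.96,
          "H5": 2.04, "H6": 1.94, "H7": 2.06, "H8": 4.8}
predicted = {h: 2.0 for h in actual}

ratios = {h: actual[h] / predicted[h] for h in actual}  # death ratio
mean = statistics.mean(ratios.values())
sd = statistics.stdev(ratios.values())
outliers = sorted(h for h, r in ratios.items() if abs(r - mean) > 2 * sd)
print(outliers)  # hospitals flagged for quality review
```

As the abstract cautions, a flag is only a screen: an outlying ratio may reflect poor care or an unusually ill patient population that the model fails to capture.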
Health-based risk adjustment: is inpatient and outpatient diagnostic information sufficient?
Lamers, L M
Adequate risk adjustment is critical to the success of market-oriented health care reforms in many countries. Currently used risk adjusters based on demographic and diagnostic cost groups (DCGs) do not reflect expected costs accurately. This study examines the simultaneous predictive accuracy of inpatient and outpatient morbidity measures and prior costs. DCGs, pharmacy cost groups (PCGs), and prior year's costs improve the predictive accuracy of the demographic model substantially. DCGs and PCGs seem complementary in their ability to predict future costs. However, this study shows that the combination of DCGs and PCGs still leaves room for cream skimming.
A High-Resolution Integrated Model of the National Ignition Campaign Cryogenic Layered Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, O. S.; Callahan, D. A.; Cerjan, C. J.
A detailed simulation-based model of the June 2011 National Ignition Campaign (NIC) cryogenic DT experiments is presented. The model is based on integrated hohlraum-capsule simulations that utilize the best available models for the hohlraum wall, ablator, and DT equations of state and opacities. The calculated radiation drive was adjusted by changing the input laser power to match the experimentally measured shock speeds, shock merger times, peak implosion velocity, and bangtime. The crossbeam energy transfer model was tuned to match the measured time-dependent symmetry. Mid-mode mix was included by directly modeling the ablator and ice surface perturbations up to mode 60. Simulated experimental values were extracted from the simulation and compared against the experiment. The model adjustments brought much of the simulated data into closer agreement with the experiment, with the notable exception of the measured yields, which were 15-40% of the calculated yields.
A High-Resolution Integrated Model of the National Ignition Campaign Cryogenic Layered Experiments
Jones, O. S.; Callahan, D. A.; Cerjan, C. J.; ...
2012-05-29
A detailed simulation-based model of the June 2011 National Ignition Campaign (NIC) cryogenic DT experiments is presented. The model is based on integrated hohlraum-capsule simulations that utilize the best available models for the hohlraum wall, ablator, and DT equations of state and opacities. The calculated radiation drive was adjusted by changing the input laser power to match the experimentally measured shock speeds, shock merger times, peak implosion velocity, and bangtime. The crossbeam energy transfer model was tuned to match the measured time-dependent symmetry. Mid-mode mix was included by directly modeling the ablator and ice surface perturbations up to mode 60. Simulated experimental values were extracted from the simulation and compared against the experiment. The model adjustments brought much of the simulated data into closer agreement with the experiment, with the notable exception of the measured yields, which were 15-40% of the calculated yields.
Towards an Integrated Model of the NIC Layered Implosions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, O S; Callahan, D A; Cerjan, C J
A detailed simulation-based model of the June 2011 National Ignition Campaign (NIC) cryogenic DT experiments is presented. The model is based on integrated hohlraum-capsule simulations that utilize the best available models for the hohlraum wall, ablator, and DT equations of state and opacities. The calculated radiation drive was adjusted by changing the input laser power to match the experimentally measured shock speeds, shock merger times, peak implosion velocity, and bangtime. The crossbeam energy transfer model was tuned to match the measured time-dependent symmetry. Mid-mode mix was included by directly modeling the ablator and ice surface perturbations up to mode 60. Simulated experimental values were extracted from the simulation and compared against the experiment. The model adjustments brought much of the simulated data into closer agreement with the experiment, with the notable exception of the measured yields, which were 15-45% of the calculated yields.
Estimation of group means when adjusting for covariates in generalized linear models.
Qu, Yongming; Luo, Junxiang
2015-01-01
Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve the estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group for the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models could be seriously biased for the true group means. We propose a new method to estimate the group mean consistently, with the corresponding variance estimation. Simulation showed that the proposed method produces an unbiased estimator of the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
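The bias described above is easy to reproduce numerically: with a nonlinear link, the response evaluated at the mean covariate differs from the mean of the individual predicted responses. A minimal sketch with an illustrative logistic model (the coefficients and covariate distribution are invented, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative logistic model: P(y = 1 | x) = sigmoid(b0 + b1 * x)
b0, b1 = -1.0, 2.0
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(0.0, 1.5, size=100_000)  # a baseline covariate

# "Model-based" group mean as reported by many packages:
# the predicted response evaluated at the mean covariate.
mean_at_xbar = sigmoid(b0 + b1 * x.mean())

# Population group mean: average the predicted responses over subjects.
mean_of_preds = sigmoid(b0 + b1 * x).mean()

print(mean_at_xbar)   # response at the mean covariate
print(mean_of_preds)  # mean response for the group (larger here)
```

Because the logistic link is nonlinear, the two quantities disagree whenever the covariate varies across subjects; only in a linear model do they coincide.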
Sturge-Apple, Melissa L; Davies, Patrick T; Cicchetti, Dante; Fittoria, Michael G
2014-11-01
The present study incorporates a person-based approach to identify spillover and compartmentalization patterns of interpartner conflict and maternal parenting practices in an ethnically diverse sample of 192 2-year-old children and their mothers who had experienced higher levels of socioeconomic risk. In addition, we tested whether sociocontextual variables were differentially predictive of these profiles and examined how interpartner-parenting profiles were associated with children's physiological and psychological adjustment over time. As expected, latent class analyses extracted three primary profiles of functioning: adequate functioning, spillover, and compartmentalizing families. Furthermore, interpartner-parenting profiles were differentially associated with both sociocontextual predictors and children's adjustment trajectories. The findings highlight the developmental utility of incorporating person-based approaches into models of interpartner conflict and maternal parenting practices.
Hawkins, Amy L; Haskett, Mary E
2014-01-01
Abused children's internal working models (IWM) of relationships are known to relate to their socioemotional adjustment, but mechanisms through which negative representations increase vulnerability to maladjustment have not been explored. We sought to expand the understanding of individual differences in IWM of abused children and investigate the mediating role of self-regulation in links between IWM and adjustment. Cluster analysis was used to subgroup 74 physically abused children based on their IWM. Internal working models were identified by children's representations, as measured by a narrative story stem task. Self-regulation was assessed by teacher report and a behavioral task, and adjustment was measured by teacher report. Cluster analyses indicated two subgroups of abused children with distinct patterns of IWMs. Cluster membership predicted internalizing and externalizing problems. Associations between cluster membership and adjustment were mediated by children's regulation, as measured by teacher reports of many aspects of regulation. There was no support for mediation when regulation was measured by a behavioral task that tapped more narrow facets of regulation. Abused children exhibit clinically relevant individual differences in their IWMs; these models are linked to adjustment in the school setting, possibly through children's self-regulation. © 2013 The Authors. Journal of Child Psychology and Psychiatry © 2013 Association for Child and Adolescent Mental Health.
Detection technology research on the one-way clutch of automatic brake adjuster
NASA Astrophysics Data System (ADS)
Jiang, Wensong; Luo, Zai; Lu, Yi
2013-10-01
In this article, we provide a new testing method to evaluate the acceptable quality of the one-way clutch of an automatic brake adjuster. To analyze the adjusting brake moment that keeps the automatic brake adjuster from failing, we build a mechanical model of the one-way clutch according to its structure and working principle. The ranges of the adjusting brake moment, both clockwise and anti-clockwise, can be calculated from this mechanical model. The critical moments are taken as the ideal values of the adjusting brake moment for evaluating the acceptable quality of the one-way clutch. We calculate these ideal critical moments from the mechanical model, for each one-way clutch structure, before the adjusting brake moment test begins. In addition, an experimental apparatus with a measurement uncertainty of ±0.1 N·m is specially designed to test the adjusting brake moment both clockwise and anti-clockwise. We can then judge the acceptable quality of the one-way clutch by comparing the test results against the ideal values rather than against empirical values (EXP). In fact, the evaluation standard for the adjusting brake moment currently applied in engineering practice in China still relies on the EXP provided by the manufacturer, which becomes unavailable when the clutch material changes. Five kinds of automatic brake adjusters were used in a verification experiment to verify the accuracy of the test method. The experimental results show that the measured adjusting brake moments, both clockwise and anti-clockwise, fall within the ranges of the theoretical results, and that the testing method meets the requirements of the manufacturer's standard.
Uranium Associations with Kidney Outcomes Vary by Urine Concentration Adjustment Method
Shelley, Rebecca; Kim, Nam-Soo; Parsons, Patrick J.; Lee, Byung-Kook; Agnew, Jacqueline; Jaar, Bernard G.; Steuerwald, Amy J.; Matanoski, Genevieve; Fadrowski, Jeffrey; Schwartz, Brian S.; Todd, Andrew C.; Simon, David; Weaver, Virginia M.
2017-01-01
Uranium is a ubiquitous metal that is nephrotoxic at high doses. Few epidemiologic studies have examined the kidney filtration impact of chronic environmental exposure. In 684 lead workers environmentally exposed to uranium, multiple linear regression was used to examine associations of uranium measured in a four-hour urine collection with measured creatinine clearance, serum creatinine- and cystatin-C-based estimated glomerular filtration rates, and N-acetyl-β-D-glucosaminidase (NAG). Three methods were utilized, in separate models, to adjust uranium levels for urine concentration: μg uranium/g creatinine; μg uranium/L with urine creatinine as a separate covariate; and μg uranium/4 hr. Median urine uranium levels were 0.07 μg/g creatinine and 0.02 μg/4 hr and were highly correlated (rs = 0.95). After adjustment, higher ln-urine uranium was associated with lower measured creatinine clearance and higher NAG in models that used urine creatinine to adjust for urine concentration but not in models that used total uranium excreted (μg/4 hr). These results suggest that, in some instances, associations between urine toxicants and kidney outcomes may be statistical, due to the use of urine creatinine in both exposure and outcome metrics, rather than nephrotoxic. These findings support consideration of non-creatinine-based methods of adjustment for urine concentration in nephrotoxicant research. PMID:23591699
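The first two urine-concentration adjustment methods correspond to different regression specifications. A hypothetical sketch of that distinction (the data are synthetic and all variable names are illustrative; the third method, μg/4 hr, would swap total timed excretion in as the exposure variable and needs collection volume, omitted here):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 684  # matching the cohort size reported above; the data are synthetic

log_u_per_L = rng.normal(-2.5, 0.8, n)   # ln(urine uranium, ug/L)
log_ucr = rng.normal(0.0, 0.5, n)        # ln(urine creatinine, g/L)
outcome = rng.normal(100.0, 15.0, n)     # e.g. creatinine clearance, mL/min

def fit(covariates, y):
    """OLS coefficient estimates with an intercept column."""
    X = np.column_stack([np.ones(len(y)), *covariates])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Method 1: creatinine-standardized exposure (ln of ug/g creatinine).
b1 = fit([log_u_per_L - log_ucr], outcome)

# Method 2: ug/L exposure with urine creatinine as a separate covariate,
# so the creatinine coefficient is estimated rather than fixed at -1.
b2 = fit([log_u_per_L, log_ucr], outcome)

print(b1)  # [intercept, standardized-exposure slope]
print(b2)  # [intercept, exposure slope, creatinine slope]
```

Method 1 is the special case of method 2 in which the creatinine coefficient is constrained to −1, which is why the two can yield different exposure-outcome associations.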
Lithium-ion Open Circuit Voltage (OCV) curve modelling and its ageing adjustment
NASA Astrophysics Data System (ADS)
Lavigne, L.; Sabatier, J.; Francisco, J. Mbala; Guillemard, F.; Noury, A.
2016-08-01
This paper is a contribution to lithium-ion battery modelling that takes ageing effects into account. It first analyses the impact of ageing on electrode stoichiometry and then on the lithium-ion cell Open Circuit Voltage (OCV) curve. Through some hypotheses and an appropriate definition of the cell state of charge, it shows that each electrode equilibrium potential, and also the whole-cell equilibrium potential, can be modelled by a polynomial that requires only one adjustment parameter during ageing. An adjustment algorithm, based on the idea that for two fixed OCVs the state of charge between these two equilibrium states is unique for a given ageing level, is then proposed. Its efficiency is evaluated on a battery pack constituted of four cells.
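The idea of a fixed polynomial OCV curve with a single ageing adjustment parameter can be sketched as follows. The polynomial coefficients, the way the parameter rescales the state of charge, and the two-point fit are all illustrative assumptions, not the authors' actual formulation:

```python
import numpy as np

# Synthetic OCV polynomial for a fresh cell, highest degree first (volts).
# p'(x) = 2.7x^2 - 3.6x + 1.5 has negative discriminant, so the curve
# is strictly increasing on [0, 1].
coeffs = np.array([0.9, -1.8, 1.5, 3.2])

def ocv(soc, k=1.0):
    """OCV with a single ageing parameter k that rescales the effective
    stoichiometric window (k = 1.0 represents the fresh cell)."""
    return np.polyval(coeffs, np.clip(k * soc, 0.0, 1.0))

# Suppose the aged cell's OCV was measured at two equilibrium rest points.
k_true = 0.85
soc_meas = np.array([0.3, 0.8])
v_meas = ocv(soc_meas, k_true)

# One-parameter adjustment by coarse grid search (no SciPy needed).
grid = np.linspace(0.5, 1.0, 5001)
errs = [np.sum((ocv(soc_meas, k) - v_meas) ** 2) for k in grid]
k_hat = grid[int(np.argmin(errs))]
print(k_hat)  # recovers ~0.85
```

Because the curve is monotone, two OCV measurements pin down the single parameter uniquely, which mirrors the uniqueness idea behind the authors' adjustment algorithm.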
Early Parental Adjustment and Bereavement after Childhood Cancer Death
ERIC Educational Resources Information Center
Barrera, Maru; O'connor, Kathleen; D'Agostino, Norma Mammone; Spencer, Lynlee; Nicholas, David; Jovcevska, Vesna; Tallet, Susan; Schneiderman, Gerald
2009-01-01
This study comprehensively explored parental bereavement and adjustment at 6 months post-loss due to childhood cancer. Interviews were conducted with 18 mothers and 13 fathers. Interviews were transcribed verbatim and analyzed based on qualitative methodology. A model describing early parental bereavement and adaptation emerged with 3 domains:…
Applying the compound Poisson process model to the reporting of injury-related mortality rates.
Kegler, Scott R
2007-02-16
Injury-related mortality rate estimates are often analyzed under the assumption that case counts follow a Poisson distribution. Certain types of injury incidents occasionally involve multiple fatalities, however, resulting in dependencies between cases that are not reflected in the simple Poisson model and which can affect even basic statistical analyses. This paper explores the compound Poisson process model as an alternative, emphasizing adjustments to some commonly used interval estimators for population-based rates and rate ratios. The adjusted estimators involve relatively simple closed-form computations, which in the absence of multiple-case incidents reduce to familiar estimators based on the simpler Poisson model. Summary data from the National Violent Death Reporting System are referenced in several examples demonstrating application of the proposed methodology.
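The overdispersion that motivates the adjusted estimators is easy to see in simulation: when each incident can involve multiple fatalities, the case-count variance exceeds the simple-Poisson variance by a factor of E[size²]/E[size]. A sketch with an invented fatality-size distribution (not data from the National Violent Death Reporting System):

```python
import numpy as np

rng = np.random.default_rng(1)
n_periods = 200_000  # simulated reporting periods
lam = 3.0            # expected incidents per period

# Incidents per period follow a simple Poisson process.
incidents = rng.poisson(lam, size=n_periods)

# Each incident involves 1 fatality with probability 0.9, or 3 with
# probability 0.1 (an invented multiple-death size distribution).
triple = rng.binomial(incidents, 0.1)   # incidents with 3 fatalities
cases = incidents + 2 * triple          # total fatalities per period

# Simple Poisson would force variance == mean; the compound process
# inflates the variance by E[size^2]/E[size] = 1.8/1.2 = 1.5.
print(cases.mean())                 # ~ lam * E[size] = 3.6
print(cases.var() / cases.mean())   # ~ 1.5, not 1
```

Interval estimators built on the simple Poisson assumption would use the mean as the variance here and so come out too narrow by roughly this dispersion factor.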
Hagell, Peter; Westergren, Albert
Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N ≤ 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes of around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors, under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
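The two corrections discussed above can be illustrated generically. RUMM's exact formulas are not reproduced here; the rescaling rule below is only the common textbook form of an algebraic sample size adjustment, and the numbers are illustrative:

```python
def bonferroni_alpha(alpha, n_tests):
    """Bonferroni-corrected per-test significance level: with many
    item-level fit tests, each test uses alpha / n_tests."""
    return alpha / n_tests

def adjust_chi_square(chi2, n, n_adjusted):
    """Generic algebraic sample-size adjustment: rescale a chi-square
    fit statistic as if it had been computed on n_adjusted persons.
    Downward adjustment (n_adjusted < n) shrinks the statistic and so
    guards against Type I errors in very large samples."""
    return chi2 * (n_adjusted / n)

# A chi-square of 45 observed on 2500 persons, rescaled to 500 persons:
print(adjust_chi_square(45.0, 2500, 500))  # -> 9.0
# Per-test alpha for 25 items at an overall 0.05 level:
print(bonferroni_alpha(0.05, 25))          # -> 0.002
```

The rescaling makes clear why upward adjustment of small samples is risky: multiplying a noisy statistic by n_adjusted/n > 1 inflates sampling noise along with any true misfit.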
An Efficient Bundle Adjustment Model Based on Parallax Parametrization for Environmental Monitoring
NASA Astrophysics Data System (ADS)
Chen, R.; Sun, Y. Y.; Lei, Y.
2017-12-01
With the rapid development of Unmanned Aircraft Systems (UAS), more and more research fields have been successfully equipped with this mature technology, among which is environmental monitoring. One difficult task is acquiring the accurate position of ground objects in order to reconstruct the scene more accurately. To handle this problem, we combine the bundle adjustment method from photogrammetry with the parallax parametrization from computer vision to create a new method called APCP (aerial polar-coordinate photogrammetry). One impressive advantage of this method over the traditional one is that a 3-dimensional point in space is represented by three angles (elevation angle, azimuth angle and parallax angle) rather than by its XYZ value. As the basis for APCP, bundle adjustment can be used to optimize the UAS sensors' poses accurately and reconstruct 3D models of the environment, thus serving as the criterion of accurate position for monitoring. To verify the effectiveness of the proposed method, we test on several UAV datasets obtained by non-metric digital cameras with large attitude angles, and we find that our method achieves one to two times the efficiency of traditional ones with no loss of accuracy. The classical nonlinear optimization of the bundle adjustment model based on rectangular coordinates depends heavily on the initial values, so it cannot converge quickly or to a stable state. In contrast, APCP can handle quite complex UAS monitoring conditions because it represents points in space with angles, including the condition that sequential images focusing on one object have a zero parallax angle. In brief, this paper presents the parametrization of 3D feature points based on APCP and derives the full bundle adjustment model and the corresponding nonlinear optimization problems based on this method.
In addition, we analyze the influence of the initial values on convergence through mathematical formulas. Finally, the paper conducts experiments using real aviation data and shows that the new model can, to a certain degree, resolve the bottlenecks of the classical method, providing a new idea and solution for faster and more efficient environmental monitoring.
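The angle-based point representation at the heart of this style of parametrization can be sketched as follows; the exact conventions (reference frame, choice of main and associated camera) are illustrative assumptions rather than the paper's definitions:

```python
import numpy as np

def parallax_parametrization(point, main_cam, assoc_cam):
    """Represent a 3D point by (elevation, azimuth, parallax) angles
    relative to a main and an associated camera center. This is a
    sketch of the angle-based idea; conventions are illustrative."""
    ray_main = point - main_cam
    x, y, z = ray_main
    # Elevation/azimuth of the main-camera ray in the world frame.
    azimuth = np.arctan2(y, x)
    elevation = np.arctan2(z, np.hypot(x, y))
    # Parallax angle: angle subtended at the point by the two centers.
    ray_assoc = point - assoc_cam
    cosang = (ray_main @ ray_assoc) / (
        np.linalg.norm(ray_main) * np.linalg.norm(ray_assoc))
    parallax = np.arccos(np.clip(cosang, -1.0, 1.0))
    return elevation, azimuth, parallax

# A point 10 units ahead, seen from two cameras 1 unit apart.
elev, azim, par = parallax_parametrization(
    np.array([10.0, 5.0, 2.0]), np.zeros(3), np.array([1.0, 0.0, 0.0]))
print(elev, azim, par)
```

A distant point gives a parallax angle near zero, which a rectangular XYZ parametrization handles poorly (the depth is unobservable) but the angular form keeps well conditioned.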
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Na; Makhmalbaf, Atefe; Srivastava, Viraj
This paper presents a new technique for, and the results of, normalizing building energy consumption to enable a fair comparison among various types of buildings located near different weather stations across the U.S. The method was developed for the U.S. Building Energy Asset Score, a whole-building energy efficiency rating system focusing on building envelope, mechanical systems, and lighting systems. The Asset Score is calculated based on simulated energy use under standard operating conditions. Existing weather normalization methods, such as those based on heating and cooling degree days, are not robust enough to adjust for all climatic factors, such as humidity and solar radiation. In this work, over 1000 sets of climate coefficients were developed to separately adjust building heating, cooling, and fan energy use at each weather station in the United States. This paper also presents a robust, standardized weather station mapping based on climate similarity rather than choosing the closest weather station. The proposed simulation-based climate adjustment was validated through testing on several hundred thousand modeled buildings. Results indicated that the developed climate coefficients can isolate and adjust for the impacts of local climate for asset rating.
Evaluation of a lake whitefish bioenergetics model
Madenjian, Charles P.; O'Connor, Daniel V.; Pothoven, Steven A.; Schneeberger, Philip J.; Rediske, Richard R.; O'Keefe, James P.; Bergstedt, Roger A.; Argyle, Ray L.; Brandt, Stephen B.
2006-01-01
We evaluated the Wisconsin bioenergetics model for lake whitefish Coregonus clupeaformis in the laboratory and in the field. For the laboratory evaluation, lake whitefish were fed rainbow smelt Osmerus mordax in four laboratory tanks during a 133-d experiment. Based on a comparison of bioenergetics model predictions of lake whitefish food consumption and growth with observed consumption and growth, we concluded that the bioenergetics model furnished significantly biased estimates of both food consumption and growth. On average, the model overestimated consumption by 61% and underestimated growth by 16%. The source of the bias was probably an overestimation of the respiration rate. We therefore adjusted the respiration component of the bioenergetics model to obtain a good fit of the model to the observed consumption and growth in our laboratory tanks. Based on the adjusted model, predictions of food consumption over the 133-d period fell within 5% of observed consumption in three of the four tanks and within 9% of observed consumption in the remaining tank. We used polychlorinated biphenyls (PCBs) as a tracer to evaluate model performance in the field. Based on our laboratory experiment, the efficiency with which lake whitefish retained PCBs from their food (ρ) was estimated at 0.45. We applied the bioenergetics model to Lake Michigan lake whitefish and then used PCB determinations of both lake whitefish and their prey from Lake Michigan to estimate ρ in the field. Application of the original model to Lake Michigan lake whitefish yielded a field ρ estimate of 0.28, implying that the original formulation of the model overestimated consumption in Lake Michigan by 61%. Application of the bioenergetics model with the adjusted respiration component resulted in a field ρ estimate of 0.56, implying that this revised model underestimated consumption by 20%.
Attar-Schwartz, Shalhevet
2015-09-01
Warm and emotionally close relationships with parents and grandparents have been found in previous studies to be linked with better adolescent adjustment. The present study, informed by Family Systems Theory and Intergenerational Solidarity Theory, uses a moderated mediation model analyzing the contribution of the dynamics of these intergenerational relationships to adolescent adjustment. Specifically, it examines the mediating role of emotional closeness to the closest grandparent in the relationship between emotional closeness to a parent (the offspring of the closest grandparent) and adolescent adjustment difficulties. The model also examines the moderating role of emotional closeness to parents in the relationship between emotional closeness to grandparents and adjustment difficulties. The study was based on a sample of 1,405 Jewish Israeli secondary school students (ages 12-18) who completed a structured questionnaire. It was found that emotional closeness to the closest grandparent was more strongly associated with reduced adjustment difficulties among adolescents with higher levels of emotional closeness to their parents. In addition, the link between adolescent adjustment and emotional closeness to parents was partially mediated by emotional closeness to grandparents. Examining the family conditions under which adolescents' relationships with grandparents are stronger and more beneficial for them can help elucidate variations in grandparent-grandchild ties and expand our understanding of the mechanisms that shape child outcomes. (c) 2015 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, X.; Anagnostou, E. N.; Schwartz, C. S.
2017-12-01
Satellite precipitation products tend to have significant biases over complex terrain. Our research investigates a statistical approach for satellite precipitation adjustment based solely on numerical weather simulations. This approach has been evaluated in two mid-latitude (Zhang et al. 2013*1, Zhang et al. 2016*2) and three tropical mountainous regions by using the WRF model to adjust two high-resolution satellite products: i) the National Oceanic and Atmospheric Administration (NOAA) Climate Prediction Center morphing technique (CMORPH) and ii) Global Satellite Mapping of Precipitation (GSMaP). Results show the adjustment effectively reduces the satellite underestimation of high rain rates, which provides a solid proof-of-concept for continuing research on NWP-based satellite correction. In this study we investigate the feasibility of using NCAR Real-time Ensemble Forecasts*3 for adjusting near-real-time satellite precipitation datasets over complex terrain areas in the Continental United States (CONUS), such as the Olympic Peninsula, the California coastal mountain ranges, the Rocky Mountains and the Southern Appalachians. The research will focus on flood-inducing storms that occurred from May 2015 to December 2016 and four satellite precipitation products (CMORPH, GSMaP, PERSIANN-CCS and IMERG). The error correction performance evaluation will be based on comparisons against the gauge-adjusted Stage IV precipitation data. *1 Zhang, Xinxuan, et al. "Using NWP simulations in satellite rainfall estimation of heavy precipitation events over mountainous areas." Journal of Hydrometeorology 14.6 (2013): 1844-1858. *2 Zhang, Xinxuan, et al. "Hydrologic Evaluation of NWP-Adjusted CMORPH Estimates of Hurricane-Induced Precipitation in the Southern Appalachians." Journal of Hydrometeorology 17.4 (2016): 1087-1099. *3 Schwartz, Craig S., et al. "NCAR's experimental real-time convection-allowing ensemble prediction system." Weather and Forecasting 30.6 (2015): 1645-1654.
Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M
2018-01-01
Spoken Language Identification (LID) is the process of determining and classifying natural language from given content and a dataset. Typically, data must be processed to extract useful features to perform LID. Feature extraction for LID is, based on the literature, a mature process: standard features have already been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC), the Gaussian Mixture Model (GMM) and, most recently, the i-vector based framework. However, the learning process based on the extracted features remains to be improved (i.e. optimised) to capture all the knowledge embedded in those features. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful for training a single-hidden-layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) owing to the random selection of weights within the input hidden layer. In this study, the ELM is selected as a learning model for LID based on standard feature extraction. One of the optimisation approaches for ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed by incorporating both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). The results are generated based on LID with datasets created from eight different languages. The results of the study showed the clear superiority of the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) over the SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, as compared to only 95.00% for SA-ELM LID.
Ding, Feng; Yang, Xianhai; Chen, Guosong; Liu, Jining; Shi, Lili; Chen, Jingwen
2017-10-01
The partition coefficients between bovine serum albumin (BSA) and water (KBSA/w) for ionogenic organic chemicals (IOCs) differ greatly from those of neutral organic chemicals (NOCs). For NOCs, several excellent models have been developed to predict log KBSA/w. However, conventional descriptors were found to be inappropriate for modelling the log KBSA/w of IOCs. Thus, alternative approaches are urgently needed to develop predictive models for the KBSA/w of IOCs. In this study, molecular descriptors that can characterize ionization effects (e.g. chemical-form-adjusted descriptors) were calculated and used to develop predictive models for the log KBSA/w of IOCs. The models developed had high goodness-of-fit, robustness, and predictive ability. The predictor variables selected to construct the models included the chemical-form-adjusted average of the negative potentials on the molecular surface (Vs-adj−), the chemical-form-adjusted molecular dipole moment (dipolemoment-adj), and the logarithm of the n-octanol/water distribution coefficient (log D). As these molecular descriptors can be calculated directly from molecular structures, the developed models can easily be used to fill the log KBSA/w data gap for other IOCs within the applicability domain. Furthermore, the chemical-form-adjusted descriptors calculated in this study could also be used to construct predictive models for other endpoints of IOCs. Copyright © 2017 Elsevier Inc. All rights reserved.
Martin, Meredith J.; Sturge-Apple, Melissa L.; Davies, Patrick T.; Romero, Christine V.; Buckholz, Abigail
2017-01-01
Drawing on a two-wave, multimethod, multi-informant design, this study provides the first test of a process model of spillover specifying why and how disruptions in the coparenting relationship influence the parent–adolescent attachment relationship. One hundred ninety-four families with an adolescent aged 12–14 (M age = 12.4) were followed for 1 year. Mothers and adolescents participated in two experimental tasks designed to elicit behavioral expressions of parent and adolescent functioning within the attachment relationship. Using a novel observational approach, maternal safe haven, secure base, and harshness (i.e., hostility and control) were compared as potential unique mediators of the association between conflict in the coparenting relationship and adolescent problems. Path models indicated that, although coparenting conflicts were broadly associated with maternal parenting difficulties, only secure base explained the link to adolescent adjustment. Adding further specificity to the process model, maternal secure base support was uniquely associated with adolescent adjustment through deficits in adolescents’ secure exploration. Results support the hypothesis that coparenting disagreements undermine adolescent adjustment in multiple domains specifically by disrupting mothers’ ability to provide a caregiving environment that supports adolescent exploration during a developmental period in which developing autonomy is a crucial stage-salient task. PMID:28401834
Advani, Aneel; Jones, Neil; Shahar, Yuval; Goldstein, Mary K; Musen, Mark A
2004-01-01
We develop a method and algorithm for deciding the optimal approach to creating quality-auditing protocols for guideline-based clinical performance measures. An important element of the audit protocol design problem is deciding which guideline elements to audit. Specifically, the problem is how and when to aggregate individual patient case-specific guideline elements into population-based quality measures. The key statistical issue involved is the trade-off between increased reliability with more general population-based quality measures versus increased validity from individually case-adjusted but more restricted measures done at a greater audit cost. Our intelligent algorithm for auditing protocol design is based on hierarchically modeling incrementally case-adjusted quality constraints. We select quality constraints to measure using an optimization criterion based on statistical generalizability coefficients. We present results of the approach from a deployed decision support system for a hypertension guideline.
Introducing memory and association mechanism into a biologically inspired visual model.
Qiao, Hong; Li, Yinlin; Tang, Tang; Wang, Peng
2014-09-01
A famous biologically inspired hierarchical model (the HMAX model), which was proposed recently and corresponds to V1 to V4 of the ventral pathway in the primate visual cortex, has been successfully applied to multiple visual recognition tasks. The model achieves position- and scale-tolerant recognition, a central problem in pattern recognition. In this paper, based on other biological experimental evidence, we introduce a memory and association mechanism into the HMAX model. The main contributions of the work are: 1) mimicking the active memory and association mechanism and adding top-down adjustment to the HMAX model, which is the first attempt to add active adjustment to this famous model; and 2) from the perspective of information, algorithms based on the new model can reduce computation and storage while achieving good recognition performance. The new model is also applied to object recognition processes. The primary experimental results show that our method is efficient with a much lower memory requirement.
Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.
Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk
2018-07-01
Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements because of severely increasing computation complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. Previous JNDs were modeled by adding white Gaussian noise or specific signal patterns into the original images, which was not appropriate for finding JND thresholds due to distortion with energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. The proposed ERJND model is then extended to two learning-based just-noticeable-quantization-distortion (JNQD) models as preprocessing that can be applied for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two JNQD models, called LR-JNQD, is based on linear regression and determines the model parameters for JNQD from extracted handcrafted features. The other JNQD model, called CNN-JNQD, is based on a convolutional neural network (CNN). To the best of our knowledge, ours is the first approach to automatically adjust JND levels according to quantization step sizes for preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing applied.
The Pathways from Parents' Marital Quality to Adolescents' School Adjustment in South Korea
ERIC Educational Resources Information Center
Jeong, Yu-Jin; Chun, Young-Ju
2010-01-01
This study tested the hypothesized pathways from parents' marital quality to Korean adolescents' school adjustment through the perception of self and parent-child relations. Based on previous literature and two major family theories, the authors hypothesized a path model to explain the process of how parents' marital quality influenced school…
Response Monitoring and Adjustment: Differential Relations with Psychopathic Traits
Bresin, Konrad; Finy, M. Sima; Sprague, Jenessa; Verona, Edelyn
2014-01-01
Studies on the relation between psychopathy and cognitive functioning often show mixed results, partially because different factors of psychopathy have not been considered fully. Based on previous research, we predicted divergent results based on a two-factor model of psychopathy (interpersonal-affective traits and impulsive-antisocial traits). Specifically, we predicted that the unique variance of interpersonal-affective traits would be related to increased monitoring (i.e., error-related negativity) and adjusting to errors (i.e., post-error slowing), whereas impulsive-antisocial traits would be related to reductions in these processes. Three studies using a diverse selection of assessment tools, samples, and methods are presented to identify response monitoring correlates of the two main factors of psychopathy. In Studies 1 (undergraduates), 2 (adolescents), and 3 (offenders), interpersonal-affective traits were related to increased adjustment following errors and, in Study 3, to enhanced monitoring of errors. Impulsive-antisocial traits were not consistently related to error adjustment across the studies, although these traits were related to a deficient monitoring of errors in Study 3. The results may help explain previous mixed findings and advance implications for etiological models of psychopathy. PMID:24933282
NASA Astrophysics Data System (ADS)
Bora, S. S.; Scherbaum, F.; Kuehn, N. M.; Stafford, P.; Edwards, B.
2014-12-01
In a probabilistic seismic hazard assessment (PSHA) framework, it remains a challenge to adjust ground motion prediction equations (GMPEs) for application in different seismological environments. In this context, this study presents a complete framework for developing a response spectral GMPE that is easily adjustable to different seismological conditions and does not suffer from the technical problems associated with adjustment in the response spectral domain. Essentially, the approach combines an empirical FAS (Fourier Amplitude Spectrum) model and a ground-motion duration model within the random vibration theory (RVT) framework to obtain full response spectral ordinates. Additionally, the FAS of individual acceleration records are extrapolated beyond the frequency range defined by the data using the stochastic FAS model obtained by inversion, as described in Edwards & Faeh (2013). To that end, an empirical duration model is derived, tuned at each oscillator frequency to optimize the fit between RVT-based and observed response spectral ordinates. Although the main motivation of the presented approach was to address the adjustability issues of response spectral GMPEs, comparison of median predicted response spectra with other regional models indicates that the approach can also be used as a stand-alone model. Moreover, a significantly lower aleatory variability (σ < 0.5 in log units) than other regional models at shorter periods makes it a potentially viable alternative to classical regression (on response spectral ordinates) based GMPEs for seismic hazard studies in the near future. The dataset used for the presented analysis is a subset of the recently compiled RESORCE-2012 database covering Europe, the Middle East, and the Mediterranean region.
An analysis of security price risk and return among publicly traded pharmacy corporations.
Gilligan, Adrienne M; Skrepnek, Grant H
2013-01-01
Community pharmacies have been subject to intense and increasing competition in the past several decades. To determine the security price risk and rate of return of publicly traded pharmacy corporations present on the major U.S. stock exchanges from 1930 to 2009. The Center for Research in Security Prices (CRSP) database was used to examine monthly security-level stock market prices in this observational retrospective study. The primary outcome of interest was the equity risk premium, with analyses focusing on financial metrics associated with risk and return based upon modern portfolio theory (MPT): abnormal returns (i.e., alpha), volatility (i.e., beta), and percentage of returns explained (i.e., adjusted R2). Three equilibrium models were estimated using random-effects generalized least squares (GLS): 1) the Capital Asset Pricing Model (CAPM); 2) the Fama-French Three-Factor Model; and 3) the Carhart Four-Factor Model. Seventy-five companies were examined from 1930 to 2009, with overall adjusted R2 values ranging from 0.13 with the CAPM to 0.16 with the Four-Factor Model. Alpha was not significant within any of the equilibrium models across the entire 80-year period, though from 1999 to 2009 the Three- and Four-Factor models showed a large, significant, and negative risk-adjusted abnormal return of -33.84%. Volatility varied across specific time periods depending on the financial model employed. This investigation of risk and return within publicly listed pharmacy corporations from 1930 to 2009 found that substantial losses were incurred particularly from 1999 to 2009, with risk-adjusted security valuations decreasing by one-third. Copyright © 2013 Elsevier Inc. All rights reserved.
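The CAPM estimation underlying the alpha, beta, and adjusted R2 metrics reduces to an OLS regression of a security's excess returns on market excess returns. A minimal sketch on synthetic monthly data (not CRSP data, and simple OLS rather than the study's random-effects GLS):

```python
import numpy as np

# Synthetic monthly excess returns for one stock and the market.
rng = np.random.default_rng(0)
market = rng.normal(0.005, 0.04, 240)                      # market excess return
stock = 0.001 + 1.2 * market + rng.normal(0, 0.02, 240)    # true alpha=0.1%/mo, beta=1.2

# CAPM: R_i - R_f = alpha + beta * (R_m - R_f) + e, estimated by OLS.
A = np.column_stack([np.ones_like(market), market])
(alpha, beta), *_ = np.linalg.lstsq(A, stock, rcond=None)

# Adjusted R^2: share of return variance explained, penalized for parameters.
resid = stock - A @ np.array([alpha, beta])
ss_res = resid @ resid
ss_tot = ((stock - stock.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
adj_r2 = 1 - (1 - r2) * (len(stock) - 1) / (len(stock) - 2)
```

A significantly negative alpha over a subperiod, as the study reports for 1999-2009, means returns below what the estimated market exposure would predict.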
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendt, Fabian F; Robertson, Amy N; Jonkman, Jason
During the course of the Offshore Code Comparison Collaboration, Continued, with Correlation (OC5) project, which focused on the validation of numerical methods through comparison against tank test data, the authors created a numerical FAST model of the 1:50-scale DeepCwind semisubmersible system that was tested at the Maritime Research Institute Netherlands ocean basin in 2013. This paper discusses several model calibration studies that were conducted to identify model adjustments that improve the agreement between the numerical simulations and the experimental test data. These calibration studies cover wind-field-specific parameters (coherence, turbulence), hydrodynamic and aerodynamic modeling approaches, as well as rotor model (blade-pitch and blade-mass imbalances) and tower model (structural tower damping coefficient) adjustments. These calibration studies were conducted based on relatively simple calibration load cases (wave only/wind only). The agreement between the final FAST model and experimental measurements is then assessed based on more-complex combined wind and wave validation cases.
Stukel, Thérèse A.; Fisher, Elliott S; Wennberg, David E.; Alter, David A.; Gottlieb, Daniel J.; Vermeulen, Marian J.
2007-01-01
Context Comparisons of outcomes between patients treated and untreated in observational studies may be biased due to differences in patient prognosis between groups, often because of unobserved treatment selection biases. Objective To compare 4 analytic methods for removing the effects of selection bias in observational studies: multivariable model risk adjustment, propensity score risk adjustment, propensity-based matching, and instrumental variable analysis. Design, Setting, and Patients A national cohort of 122 124 patients who were elderly (aged 65–84 years), receiving Medicare, and hospitalized with acute myocardial infarction (AMI) in 1994–1995, and who were eligible for cardiac catheterization. Baseline chart reviews were taken from the Cooperative Cardiovascular Project and linked to Medicare health administrative data to provide a rich set of prognostic variables. Patients were followed up for 7 years through December 31, 2001, to assess the association between long-term survival and cardiac catheterization within 30 days of hospital admission. Main Outcome Measure Risk-adjusted relative mortality rate using each of the analytic methods. Results Patients who received cardiac catheterization (n=73 238) were younger and had lower AMI severity than those who did not. After adjustment for prognostic factors by using standard statistical risk-adjustment methods, cardiac catheterization was associated with a 50% relative decrease in mortality (for multivariable model risk adjustment: adjusted relative risk [RR], 0.51; 95% confidence interval [CI], 0.50–0.52; for propensity score risk adjustment: adjusted RR, 0.54; 95% CI, 0.53–0.55; and for propensity-based matching: adjusted RR, 0.54; 95% CI, 0.52–0.56). Using regional catheterization rate as an instrument, instrumental variable analysis showed a 16% relative decrease in mortality (adjusted RR, 0.84; 95% CI, 0.79–0.90). 
The survival benefits of routine invasive care from randomized clinical trials are between 8% and 21%. Conclusions Estimates of the observational association of cardiac catheterization with long-term AMI mortality are highly sensitive to analytic method. All standard risk-adjustment methods have the same limitations regarding removal of unmeasured treatment selection biases. Compared with standard modeling, instrumental variable analysis may produce less biased estimates of treatment effects, but is more suited to answering policy questions than specific clinical questions. PMID:17227979
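Propensity score risk adjustment of the kind compared in this study can be illustrated with a small simulation: fit a logistic model for treatment given a confounder, then reweight outcomes by inverse propensity (an IPTW variant; the study also used propensity matching and covariate adjustment). Everything below is synthetic, not the Medicare cohort:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
severity = rng.normal(size=n)                        # confounder: illness severity
# Treatment (e.g., catheterization) is less likely for sicker patients.
p_treat = 1 / (1 + np.exp(-(-0.5 - 1.0 * severity)))
treat = rng.random(n) < p_treat
# Mortality depends on severity plus a true protective treatment effect.
p_die = 1 / (1 + np.exp(-(-1.0 + 1.5 * severity - 0.3 * treat)))
died = rng.random(n) < p_die

# Fit the propensity model P(treat | severity) by Newton-Raphson logistic regression.
X = np.column_stack([np.ones(n), severity])
b = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ b))
    W = p * (1 - p)
    b += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (treat - p))
ps = 1 / (1 + np.exp(-X @ b))

# Inverse-probability-of-treatment weighting: compare weighted mortality rates.
w = np.where(treat, 1 / ps, 1 / (1 - ps))
rate_t = np.average(died[treat], weights=w[treat])
rate_u = np.average(died[~treat], weights=w[~treat])
adjusted_rr = rate_t / rate_u
crude_rr = died[treat].mean() / died[~treat].mean()
```

Here the crude relative risk exaggerates the benefit because healthier patients are treated more often; weighting removes the measured confounding, mirroring why the adjusted RRs in the study (0.51-0.54) still differed from the instrumental variable estimate (0.84), which also addresses unmeasured selection.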
The elimination of colour blocks in remote sensing images in VR
NASA Astrophysics Data System (ADS)
Zhao, Xiuying; Li, Guohui; Su, Zhenyu
2018-02-01
Aiming at the characteristics, in HSI colour space, of remote sensing images taken at different times in VR, a unified colour algorithm is proposed. First, the method converts the original image from RGB colour space to HSI colour space. Then, based on the invariance of hue before and after colour adjustment in HSI colour space and the translational behaviour of image brightness under colour adjustment, a linear model satisfying these characteristics is established, and the ranges of the model parameters are determined. Finally, the established colour adjustment model is verified experimentally. The experimental results show that the proposed model can effectively recover a clear image and that the algorithm is fast; it effectively enhances image clarity and solves the colour block problem well.
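The RGB-to-HSI conversion that the method starts from, together with a hue-preserving linear intensity adjustment, can be sketched as follows (the gain/offset values are illustrative, not the paper's fitted parameters):

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert normalized RGB (0-1) to HSI; H in degrees, S and I in 0-1."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0                      # achromatic pixel: hue undefined, use 0
    else:
        h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        if b > g:
            h = 360.0 - h
    return h, s, i

def adjust_intensity(h, s, i, gain, offset):
    """Linear brightness adjustment with hue held invariant, in the spirit
    of the paper's model (gain/offset here are arbitrary examples)."""
    return h, s, max(0.0, min(1.0, gain * i + offset))
```

Operating on I alone is what guarantees the hue-invariance property the linear model is built on.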
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-01
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as with GPS and VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada published in 2000 on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements, and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
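The contrast between ordinary LS and an adjustment that accounts for multiplicative errors can be illustrated with a one-dimensional simulation. This is a simplified one-step weighting (weights proportional to 1/y²), not the paper's exact adjustment schemes:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(1.0, 10.0, 50)
true_y = 2.0 + 3.0 * x
# Multiplicative errors: noise standard deviation proportional to the true
# value, as with LiDAR/GPS-type distance measurements.
y = true_y * (1 + rng.normal(0, 0.05, x.size))

A = np.column_stack([np.ones_like(x), x])

# Ordinary LS ignores that sigma_i grows with y_i.
beta_ols, *_ = np.linalg.lstsq(A, y, rcond=None)

# Weighted LS for the multiplicative model: weight ~ 1 / y_i^2.
w = 1.0 / y**2
beta_wls = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))

# Estimator of the variance of unit weight from weighted residuals,
# analogous in spirit to the estimators analyzed in the paper.
res = y - A @ beta_wls
sigma0_sq = (w * res**2).sum() / (len(y) - 2)
```

With a 5% relative error, sigma0_sq should estimate roughly 0.05² = 0.0025, i.e., the variance of the relative (not absolute) noise.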
Mapping and localization for extraterrestrial robotic explorations
NASA Astrophysics Data System (ADS)
Xu, Fengliang
In the exploration of an extraterrestrial environment such as Mars, orbital data, such as high-resolution imagery from the Mars Orbital Camera-Narrow Angle (MOC-NA), laser ranging data from the Mars Orbital Laser Altimeter (MOLA), and multi-spectral imagery from the Thermal Emission Imaging System (THEMIS), play increasingly important roles. However, these remote sensing techniques can never replace the role of landers and rovers, which provide a close-up, inside view. Similarly, orbital mapping cannot compete with ground-level close-range mapping in resolution, precision, and speed. This dissertation addresses two tasks related to robotic extraterrestrial exploration: mapping and rover localization. Image registration is also discussed as an important aspect of both. Techniques from computer vision and photogrammetry are applied for automation and precision. Image registration is classified into three sub-categories according to the relationship between the stereo images: intra-stereo, inter-stereo, and cross-site. For intra-stereo registration, the most fundamental sub-category, interest point-based registration with verification by parallax continuity in the principal direction is proposed. Two other techniques, inter-scanline search with constrained dynamic programming for far-range matching and Markov Random Field (MRF) based registration for large terrain variation, are explored as possible improvements. Mapping with rover ground images mainly involves the generation of a Digital Terrain Model (DTM) and an ortho-rectified map (orthomap). The first task is to derive spatial distribution statistics from the first panorama and model the DTM with a dual polynomial model. This model is used for interpolation of the DTM, using Kriging in the close range and a Triangular Irregular Network (TIN) in the far range. To generate a uniformly illuminated orthomap from the DTM, a least-squares-based automatic intensity balancing method is proposed.
Finally, a seamless orthomap is constructed by a split-and-merge technique: the mapped area is subdivided into small regions of image overlap, each small map piece is processed, and all the pieces are merged to form a seamless map. Rover localization has three stages, all of which use a least-squares adjustment procedure: (1) an initial localization accomplished by adjustment over features common to rover images and orbital images, (2) an adjustment of image pointing angles at a single site through inter- and intra-stereo tie points, and (3) an adjustment of the rover traverse through manual cross-site tie points. The first stage is based on adjustment of the observation angles of features; the second and third stages are based on bundle adjustment. In the third stage, an incremental adjustment method is proposed. Automation in rover localization includes automatic intra/inter-stereo tie point selection, computer-assisted cross-site tie point selection, and automatic verification of accuracy. (Abstract shortened by UMI.)
PACE and the Medicare+Choice risk-adjusted payment model.
Temkin-Greener, H; Meiners, M R; Gruenberg, L
2001-01-01
This paper investigates the impact of the Medicare principal inpatient diagnostic cost group (PIP-DCG) payment model on the Program of All-Inclusive Care for the Elderly (PACE). Currently, more than 6,000 Medicare beneficiaries who are nursing home certifiable receive care from PACE, a program poised for expansion under the Balanced Budget Act of 1997. Overall, our analysis suggests that the application of the PIP-DCG model to the PACE program would reduce Medicare payments to PACE, on average, by 38%. The PIP-DCG payment model bases its risk adjustment on inpatient diagnoses and does not capture adequately the risk of caring for a population with functional impairments.
Case-Mix Adjustment of the Bereaved Family Survey.
Kutney-Lee, Ann; Carpenter, Joan; Smith, Dawn; Thorpe, Joshua; Tudose, Alina; Ersek, Mary
2018-01-01
Surveys of bereaved family members are increasingly being used to evaluate end-of-life (EOL) care and to measure organizational performance in EOL care quality. The Bereaved Family Survey (BFS) is used to monitor EOL care quality and benchmark performance in the Veterans Affairs (VA) health-care system. The objective of this study was to develop a case-mix adjustment model for the BFS and to examine changes in facility-level scores following adjustment, in order to provide fair comparisons across facilities. We conducted a cross-sectional secondary analysis of medical record and survey data from veterans and their family members across 146 VA medical centers. Following adjustment using model-based propensity weighting, the mean change in the BFS-Performance Measure score across facilities was -0.6 with a range of -2.6 to 0.6. Fifty-five (38%) facilities changed within ±0.5 percentage points of their unadjusted score. On average, facilities that benefited most from adjustment cared for patients with greater comorbidity burden and were located in urban areas in the Northwest and Midwestern regions of the country. Case-mix adjustment results in minor changes to facility-level BFS scores but allows for fairer comparisons of EOL care quality. Case-mix adjustment of the BFS positions this National Quality Forum-endorsed measure for use in public reporting and internal quality dashboards for VA leadership and may inform the development and refinement of case-mix adjustment models for other surveys of bereaved family members.
Liu, Zun-lei; Yuan, Xing-wei; Yang, Lin-lin; Yan, Li-ping; Zhang, Hui; Cheng, Jia-hua
2015-02-01
Multiple hypotheses are available to explain recruitment rate. Model selection methods can be used to identify the model that best supports a particular hypothesis. However, using a single model for estimating recruitment success is often inadequate for an overexploited population because of high model uncertainty. In this study, stock-recruitment data of small yellow croaker in the East China Sea, collected from fishery-dependent and independent surveys between 1992 and 2012, were used to examine density-dependent effects on recruitment success. Model selection methods based on frequentist criteria (AIC, maximum adjusted R2 and P-values) and a Bayesian approach (Bayesian model averaging, BMA) were applied to identify the relationship between recruitment and environmental conditions. Interannual variability of the East China Sea environment was indicated by sea surface temperature (SST), meridional wind stress (MWS), zonal wind stress (ZWS), sea surface pressure (SPP) and runoff of the Changjiang River (RCR). Mean absolute error, mean squared predictive error and continuous ranked probability score were calculated to evaluate the predictive performance for recruitment success. The results showed that model structures were not consistent across the three model selection methods: the predictive variables selected were spawning abundance and MWS by AIC, spawning abundance alone by P-values, and spawning abundance, MWS and RCR by maximum adjusted R2. Recruitment success decreased linearly with stock abundance (P < 0.01), suggesting that the overcompensation effect in recruitment success might be due to cannibalism or food competition. Meridional wind intensity showed a marginally significant positive effect on recruitment success (P = 0.06), while runoff of the Changjiang River showed a marginally negative effect (P = 0.07).
Based on mean absolute error and continuous ranked probability score, the predictive error associated with models obtained from BMA was the smallest among the approaches, while that from models selected on the P-values of the independent variables was the highest; by mean squared predictive error, models selected by maximum adjusted R2 had the highest error. We found that the BMA method could improve the prediction of recruitment success, derive more accurate prediction intervals and quantitatively evaluate model uncertainty.
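AIC-based selection among nested recruitment models of the kind compared here can be sketched as follows. The data are synthetic; the covariate names echo the abstract's variables but the values and effect sizes are invented:

```python
import numpy as np

def aic_linear(y, X):
    """AIC of a Gaussian linear model fit by OLS (k = coefficients + variance)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = ((y - X @ beta) ** 2).sum()
    return n * np.log(rss / n) + 2 * (k + 1)

rng = np.random.default_rng(3)
n = 60
spawn = rng.normal(size=n)      # spawning abundance (standardized)
mws = rng.normal(size=n)        # meridional wind stress
rcr = rng.normal(size=n)        # river runoff (irrelevant by construction)
recruit = 1.0 - 0.8 * spawn + 0.4 * mws + rng.normal(0, 0.5, n)

ones = np.ones(n)
candidates = {
    "S":         np.column_stack([ones, spawn]),
    "S+MWS":     np.column_stack([ones, spawn, mws]),
    "S+MWS+RCR": np.column_stack([ones, spawn, mws, rcr]),
}
aics = {name: aic_linear(recruit, X) for name, X in candidates.items()}
best = min(aics, key=aics.get)
```

BMA, by contrast, would average predictions across all three candidates weighted by posterior model probability instead of committing to `best`, which is how it propagates the model uncertainty the abstract emphasizes.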
Comparison of the predictive validity of diagnosis-based risk adjusters for clinical outcomes.
Petersen, Laura A; Pietz, Kenneth; Woodard, LeChauncy D; Byrne, Margaret
2005-01-01
Many possible methods of risk adjustment exist, but there is a dearth of comparative data on their performance. We compared the predictive validity of 2 widely used methods (Diagnostic Cost Groups [DCGs] and Adjusted Clinical Groups [ACGs]) for 2 clinical outcomes using a large national sample of patients. We studied all patients who used Veterans Health Administration (VA) medical services in fiscal year (FY) 2001 (n = 3,069,168) and assigned both a DCG and an ACG to each. We used logistic regression analyses to compare predictive ability for death or long-term care (LTC) hospitalization for age/gender models, DCG models, and ACG models. We also assessed the effect of adding age to the DCG and ACG models. Patients in the highest DCG categories, indicating higher severity of illness, were more likely to die or to require LTC hospitalization. Surprisingly, the age/gender model predicted death slightly more accurately than the ACG model (c-statistic of 0.710 versus 0.700, respectively). The addition of age to the ACG model improved the c-statistic to 0.768. The highest c-statistic for prediction of death was obtained with a DCG/age model (0.830). The lowest c-statistics were obtained for age/gender models for LTC hospitalization (c-statistic 0.593). The c-statistic for use of ACGs to predict LTC hospitalization was 0.783, and improved to 0.792 with the addition of age. The c-statistics for use of DCGs and DCG/age to predict LTC hospitalization were 0.885 and 0.890, respectively, indicating the best prediction. We found that risk adjusters based upon diagnoses predicted an increased likelihood of death or LTC hospitalization, exhibiting good predictive validity. In this comparative analysis using VA data, DCG models were generally superior to ACG models in predicting clinical outcomes, although ACG model performance was enhanced by the addition of age.
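The c-statistic reported throughout this comparison is the probability that a randomly chosen patient who experienced the outcome receives a higher predicted risk than one who did not (ties counting one half). A minimal implementation with toy risk scores, not the VA data:

```python
def c_statistic(scores, outcomes):
    """Concordance (equivalently, ROC AUC): fraction of event/non-event
    pairs in which the event case has the higher score; ties count 0.5."""
    events = [s for s, y in zip(scores, outcomes) if y == 1]
    non_events = [s for s, y in zip(scores, outcomes) if y == 0]
    pairs = concordant = 0.0
    for e in events:
        for ne in non_events:
            pairs += 1
            if e > ne:
                concordant += 1
            elif e == ne:
                concordant += 0.5
    return concordant / pairs

# Toy predicted risks of death from two hypothetical risk adjusters.
risk_dcg = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
risk_age = [0.6, 0.5, 0.9, 0.5, 0.4, 0.3]
died =     [1,   1,   0,   1,   0,   0]
```

A value of 0.5 is no better than chance and 1.0 is perfect discrimination, which is the scale on which the DCG/age model's 0.890 for LTC hospitalization should be read.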
Bundle Block Adjustment of Airborne Three-Line Array Imagery Based on Rotation Angles
Zhang, Yongjun; Zheng, Maoteng; Huang, Xu; Xiong, Jinxin
2014-01-01
In the midst of the rapid developments in electronic instruments and remote sensing technologies, airborne three-line array sensors and their applications are being widely promoted and plentiful research related to data processing and high precision geo-referencing technologies is under way. The exterior orientation parameters (EOPs), which are measured by the integrated positioning and orientation system (POS) of airborne three-line sensors, however, have inevitable systematic errors, so the level of precision of direct geo-referencing is not sufficiently accurate for surveying and mapping applications. Consequently, a few ground control points are necessary to refine the exterior orientation parameters, and this paper will discuss bundle block adjustment models based on the systematic error compensation and the orientation image, considering the principle of an image sensor and the characteristics of the integrated POS. Unlike the models available in the literature, which mainly use a quaternion to represent the rotation matrix of exterior orientation, three rotation angles are directly used in order to effectively model and eliminate the systematic errors of the POS observations. Very good experimental results have been achieved with several real datasets that verify the correctness and effectiveness of the proposed adjustment models. PMID:24811075
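The core of the angle-based formulation is composing the exterior-orientation rotation matrix directly from three rotation angles, so that systematic POS angle errors can be modeled as additive corrections to the angles themselves rather than as quaternion perturbations. A sketch under one common photogrammetric convention (the paper's exact ordering may differ):

```python
import numpy as np

def rotation_from_angles(omega, phi, kappa):
    """Rotation matrix from three angles (radians), composed as
    R = R_x(omega) @ R_y(phi) @ R_z(kappa); a common convention,
    assumed here for illustration."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

# A systematic POS bias enters simply as an angle offset to be estimated:
# R_corrected = rotation_from_angles(omega + d_omega, phi + d_phi, kappa + d_kappa)
```

In the bundle block adjustment, the partial derivatives of R with respect to these angles give the linearized observation equations directly, which is the practical advantage over the quaternion parameterization for error compensation.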
Ground Motion Prediction Models for Caucasus Region
NASA Astrophysics Data System (ADS)
Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino
2016-04-01
Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is fundamental to earthquake hazard assessment. The most commonly used parameters in attenuation relations are peak ground acceleration and spectral acceleration, because these parameters provide the information needed for seismic hazard assessment. Development of the Georgian Digital Seismic Network began in 2003. In this study, new GMP models are obtained based on new data from the Georgian seismic network and from neighboring countries. The models are estimated in the classical statistical way, by regression analysis. Site ground conditions are additionally considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models (GMPMs) require adjustment to make them appropriate for site-specific scenarios; however, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) GMPE that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.
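The classical regression estimation mentioned here typically fits the coefficients of an attenuation relation such as ln(PGA) = c0 + c1·M + c2·ln(R) + c3·S by least squares, with a site dummy S capturing ground conditions. A sketch on synthetic data (this functional form is a common textbook choice, not the authors' final model):

```python
import numpy as np

# Synthetic strong-motion records: magnitude, distance, site class.
rng = np.random.default_rng(4)
n = 200
mag = rng.uniform(4.0, 7.0, n)
dist = rng.uniform(5.0, 150.0, n)
soft_site = rng.integers(0, 2, n)            # 1 = soft-soil station
ln_pga = (-1.5 + 1.1 * mag - 1.3 * np.log(dist)
          + 0.4 * soft_site + rng.normal(0, 0.5, n))

# Classical regression estimate of the GMPE coefficients.
X = np.column_stack([np.ones(n), mag, np.log(dist), soft_site])
coef, *_ = np.linalg.lstsq(X, ln_pga, rcond=None)
sigma = np.std(ln_pga - X @ coef)            # aleatory variability (log units)
```

The residual standard deviation `sigma` is the aleatory variability that GMPE comparisons (including the σ < 0.5 claim above) are judged on.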
[Morbidity Differences by Health Insurance Status in Old Age].
Hajek, A; Bock, J-O; Saum, K-U; Schöttker, B; Brenner, H; Heider, D; König, H-H
2018-06-01
Morbidity differences between older members of private and statutory health insurance in Germany have rarely been examined. Thus, we aimed to determine these differences in old age. This study used data from two follow-up waves, three years apart, of a population-based prospective cohort study (ESTHER study) in Saarland, Germany. Morbidity was assessed by participants' GPs using a generic instrument (Cumulative Illness Rating Scale for Geriatrics). The between estimator, which exclusively quantifies inter-individual variation, was used. Adjusting for sex and age, we investigated the association between health insurance and morbidity in the main model. In additional models, we adjusted incrementally for education, family status and income. Regression models not adjusting for income showed that members of private health insurance had a lower morbidity score than members of statutory health insurance. This effect was considerably smaller in models adjusting for income, but remained statistically significant (except for men). Observed differences in morbidity between older members of private and statutory health insurance can thus partly be explained by income differences. Our findings highlight the role of model specification in determining the relation between morbidity and health insurance. © Georg Thieme Verlag KG Stuttgart · New York.
A Revised Thermosphere for the Mars Global Reference Atmospheric Model (Mars-GRAM Version 3.4)
NASA Technical Reports Server (NTRS)
Justus, C. G.; Johnson, D. L.; James, B. F.
1996-01-01
This report describes the newly-revised model thermosphere for the Mars Global Reference Atmospheric Model (Mars-GRAM, Version 3.4). It also provides descriptions of other changes made to the program since publication of the programmer's guide for Mars-GRAM Version 3.34. The original Mars-GRAM model thermosphere was based on the global-mean model of Stewart. The revised thermosphere is based largely on parameterizations derived from output data from the three-dimensional Mars Thermospheric Global Circulation Model (MTGCM). The new thermospheric model includes revised dependence on the 10.7 cm solar flux for the global means of exospheric temperature, temperature of the base of the thermosphere, and scale height for the thermospheric temperature variations, as well as revised dependence on orbital position for global mean height of the base of the thermosphere. Other features of the new thermospheric model are: (1) realistic variations of temperature and density with latitude and time of day, (2) more realistic wind magnitudes, based on improved estimates of horizontal pressure gradients, and (3) allowance for user-input adjustments to the model values for mean exospheric temperature and for height and temperature at the base of the thermosphere. Other new features of Mars-GRAM 3.4 include: (1) allowance for user-input values of climatic adjustment factors for temperature profiles from the surface to 75 km, and (2) a revised method for computing the sub-solar longitude position in the 'ORBIT' subroutine.
Chen, Li-Sheng; Yen, Amy Ming-Fang; Duffy, Stephen W; Tabar, Laszlo; Lin, Wen-Chou; Chen, Hsiu-Hsi
2010-10-01
Population-based routine service screening has gained popularity following an era of randomized controlled trials. The evaluation of these service screening programs is subject to study design, data availability, and precise data analysis for adjusting bias. We developed a computer-aided system that unifies these aspects of evaluating population-based service screening and that facilitates and guides the program assessor in efficiently performing an evaluation. This system supports two experimental designs, the posttest-only non-equivalent design and the one-group pretest-posttest design, and demonstrates the type of data required at both the population and individual levels. Three major analyses were developed: cumulative mortality analysis, survival analysis with lead-time adjustment, and self-selection bias adjustment. We used SAS AF software to develop a graphic interface system with a pull-down menu style. We demonstrate the application of this system with data obtained from a Swedish population-based service screening program, a population-based randomized controlled trial for the screening of breast, colorectal, and prostate cancer, and one service screening program for cervical cancer with Pap smears. The system provided automated descriptive results based on the various sources of available data and cumulative mortality curves corresponding to the study designs. The comparison of cumulative survival between clinically and screen-detected cases without a lead-time adjustment is also demonstrated. Intention-to-treat and noncompliance analyses with self-selection bias adjustments are also shown to assess the effectiveness of the population-based service screening program. Model validation consisted of a comparison between our adjusted self-selection bias estimates and the empirical results on effectiveness reported in the literature.
We demonstrate a computer-aided system allowing the evaluation of population-based service screening programs with an adjustment for self-selection and lead-time bias. This is achieved by providing a tutorial guide from the study design to the data analysis, with bias adjustment. Copyright © 2010 Elsevier Inc. All rights reserved.
The Political Economy of Interlibrary Organizations: Two Case Studies.
ERIC Educational Resources Information Center
Townley, Charles T.
J. Kenneth Benson's political economy model for interlibrary cooperation identifies linkages and describes interactions between the environment, the interlibrary organization, and member libraries. A tentative general model for interlibrary organizations based on the Benson model was developed, and the fit of this adjusted model to the realities…
Hart, John
2011-03-01
This study describes a model for statistically analyzing follow-up numeric-based chiropractic spinal assessments for an individual patient based on his or her own baseline. Ten mastoid fossa temperature differential readings (MFTD) obtained from a chiropractic patient were used in the study. The first eight readings served as baseline and were compared to post-adjustment readings. One of the two post-adjustment MFTD readings fell outside two standard deviations of the baseline mean and therefore theoretically represents improvement according to pattern analysis theory. This study showed how standard deviation analysis may be used to identify future outliers for an individual patient based on his or her own baseline data. Copyright © 2011 National University of Health Sciences. Published by Elsevier Inc. All rights reserved.
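The within-patient check described above can be sketched in a few lines; the readings below are hypothetical, not the study's MFTD data, and the two-standard-deviation band is the pattern-analysis convention the abstract mentions:

```python
from statistics import mean, stdev

def flag_outliers(baseline, followup, k=2.0):
    """Flag follow-up readings that fall outside k sample standard
    deviations of the patient's own baseline mean."""
    m, s = mean(baseline), stdev(baseline)
    lo, hi = m - k * s, m + k * s
    return [(x, not lo <= x <= hi) for x in followup]

# Eight hypothetical baseline readings and two follow-up readings;
# only the second follow-up reading falls outside the 2-SD band.
baseline = [0.6, 0.8, 0.7, 0.9, 0.6, 0.7, 0.8, 0.7]
flags = flag_outliers(baseline, [0.7, 1.2])
```

Because the band is built from the individual's own baseline, the same reading can be an outlier for one patient and unremarkable for another.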
Zimmerman, Tammy M.; Breen, Kevin J.
2012-01-01
Pesticide concentration data for waters from selected carbonate-rock aquifers in agricultural areas of Pennsylvania were collected in 1993–2009 for occurrence and distribution assessments. A set of 30 wells was visited once in 1993–1995 and again in 2008–2009 to assess concentration changes. The data include censored matched pairs (nondetections of a compound in one or both samples of a pair). A potentially improved approach for assessing concentration changes is presented in which (i) concentrations are adjusted with models of matrix-spike recovery and (ii) area-wide temporal change is tested with the paired Prentice-Wilcoxon (PPW) statistical test. The PPW results for atrazine, simazine, metolachlor, prometon, and an atrazine degradate, deethylatrazine (DEA), are compared using recovery-adjusted and unadjusted concentrations. For changes from 1993–1995 to 2008–2009, results for adjusted and unadjusted concentrations were similar for atrazine and simazine (significant decrease; 95% confidence level) and metolachlor (no change) but differed for DEA (adjusted, decrease; unadjusted, increase) and prometon (adjusted, decrease; unadjusted, no change). Thus, the PPW results differed between recovery-adjusted and unadjusted concentrations. Not accounting for variability in recovery can mask a true change, misidentify a change when no true change exists, or assign a direction opposite to the true change in concentration that resulted from matrix influences on extraction and laboratory method performance. However, matrix-based recovery models derived, as here, from a multi-study laboratory performance dataset for national assessment, rather than from time- and study-specific recoveries, may introduce uncertainty into recovery adjustments for individual samples, and this uncertainty should be considered when assessing change.
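Step (i) above amounts to scaling each measured concentration up by its modeled matrix-spike recovery. A minimal sketch of that scaling follows; the recovery value is illustrative, and the PPW test for censored pairs is not reproduced here:

```python
def recovery_adjust(measured, recovery_fraction):
    """Adjust a measured concentration for incomplete matrix-spike
    recovery, e.g. recovery_fraction = 0.80 when the laboratory
    recovers 80% of a known spike from the sample matrix."""
    if recovery_fraction <= 0:
        raise ValueError("recovery fraction must be positive")
    return measured / recovery_fraction

# A measured 0.40 ug/L with 80% recovery implies ~0.50 ug/L in the water.
adjusted = recovery_adjust(0.40, 0.80)
```

In the study the recovery fraction itself comes from a matrix-based model fitted to laboratory performance data, not from a single spiked sample as in this sketch.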
Moore, Lynne; Turgeon, Alexis F; Sirois, Marie-Josée; Murat, Valérie; Lavoie, André
2011-09-01
Trauma center performance evaluations generally include adjustment for injury severity, age, and comorbidity. However, disparities across trauma centers may be due to other differences in source populations that are not accounted for, such as socioeconomic status (SES). We aimed to evaluate whether SES influences trauma center performance evaluations in an inclusive trauma system with universal access to health care. The study was based on data collected between 1999 and 2006 in a Canadian trauma system. Patient SES was quantified using an ecologic index of social and material deprivation. Performance evaluations were based on mortality adjusted using the Trauma Risk Adjustment Model. Agreement between performance results with and without additional adjustment for SES was evaluated with correlation coefficients. The study sample comprised a total of 71,784 patients from 48 trauma centers, including 3,828 deaths within 30 days (4.5%) and 5,549 deaths within 6 months (7.7%). The proportion of patients in the highest quintile of social and material deprivation varied from 3% to 43% and from 11% to 90% across hospitals, respectively. The correlation between performance results with or without adjustment for SES was almost perfect (r = 0.997; 95% CI 0.995-0.998) and the same hospital outliers were identified. We observed an important variation in SES across trauma centers but no change in risk-adjusted mortality estimates when SES was added to adjustment models. Results suggest that after adjustment for injury severity, age, comorbidity, and transfer status, disparities in SES across trauma center source populations do not influence trauma center performance evaluations in a system offering universal health coverage. Copyright © 2011 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven
2013-01-01
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.
RGB-D SLAM Based on Extended Bundle Adjustment with 2D and 3D Information
Di, Kaichang; Zhao, Qiang; Wan, Wenhui; Wang, Yexin; Gao, Yunjun
2016-01-01
In the study of SLAM problem using an RGB-D camera, depth information and visual information as two types of primary measurement data are rarely tightly coupled during refinement of camera pose estimation. In this paper, a new method of RGB-D camera SLAM is proposed based on extended bundle adjustment with integrated 2D and 3D information on the basis of a new projection model. First, the geometric relationship between the image plane coordinates and the depth values is constructed through RGB-D camera calibration. Then, 2D and 3D feature points are automatically extracted and matched between consecutive frames to build a continuous image network. Finally, extended bundle adjustment based on the new projection model, which takes both image and depth measurements into consideration, is applied to the image network for high-precision pose estimation. Field experiments show that the proposed method has a notably better performance than the traditional method, and the experimental results demonstrate the effectiveness of the proposed method in improving localization accuracy. PMID:27529256
Zachariah, Justin P; Hwang, Susan; Hamburg, Naomi M; Benjamin, Emelia J; Larson, Martin G; Levy, Daniel; Vita, Joseph A; Sullivan, Lisa M; Mitchell, Gary F; Vasan, Ramachandran S
2016-02-01
Adipokines may be potential mediators of the association between excess adiposity and vascular dysfunction. We assessed the cross-sectional associations of circulating adipokines with vascular stiffness in a community-based cohort of younger adults. We related circulating concentrations of leptin and leptin receptor, adiponectin, retinol-binding protein 4, and fatty acid-binding protein 4 to vascular stiffness measured by arterial tonometry in 3505 Framingham Third Generation cohort participants free of cardiovascular disease (mean age 40 years, 53% women). Separate regression models estimated the relations of each adipokine to mean arterial pressure and aortic stiffness, as carotid femoral pulse wave velocity, adjusting for age, sex, smoking, heart rate, height, antihypertensive treatment, total and high-density lipoprotein cholesterol, diabetes mellitus, alcohol consumption, estimated glomerular filtration rate, glucose, and C-reactive protein. Models evaluating aortic stiffness also were adjusted for mean arterial pressure. Mean arterial pressure was positively associated with blood retinol-binding protein 4, fatty acid-binding protein 4, and leptin concentrations (all P<0.001) and inversely with adiponectin (P=0.002). In fully adjusted models, mean arterial pressure was positively associated with retinol-binding protein 4 and leptin receptor levels (P<0.002 both). In fully adjusted models, aortic stiffness was positively associated with fatty acid-binding protein 4 concentrations (P=0.02), but inversely with leptin and leptin receptor levels (P≤0.03 both). In our large community-based sample, circulating concentrations of select adipokines were associated with vascular stiffness measures, consistent with the hypothesis that adipokines may influence vascular function and may contribute to the relation between obesity and hypertension. © 2015 American Heart Association, Inc.
NASA Astrophysics Data System (ADS)
Ji, S.; Yuan, X.
2016-06-01
A generic probabilistic model, based on Bayes' rule and the Markov assumption, is introduced to integrate the process of mobile platform localization with optical sensors. From it, three relatively independent solutions, bundle adjustment, Kalman filtering, and particle filtering, are deduced under different additional restrictions. We aim to show, first, that Kalman filtering can be a better supplier of initial values for bundle adjustment than traditional relative orientation in irregular strips and networks, or when tie-point extraction fails. Second, under highly noisy conditions, particle filtering can bridge gaps when a large number of gross errors cause Kalman filtering or bundle adjustment to fail. Third, both filtering methods, which help reduce error propagation and eliminate gross errors, safeguard a global, static bundle adjustment, which requires the strictest initial values and control conditions. The main innovation is the integrated processing of stochastic and gross errors in sensor observations, and the integration of the three most widely used solutions, bundle adjustment, Kalman filtering, and particle filtering, into a generic probabilistic localization model. Tests in noisy and restricted situations are designed and examined to support these claims.
Maciejewski, Matthew L; Liu, Chuan-Fen; Fihn, Stephan D
2009-01-01
To compare the ability of generic comorbidity and risk adjustment measures, a diabetes-specific measure, and a self-reported functional status measure to explain variation in health care expenditures for individuals with diabetes. This study included a retrospective cohort of 3,092 diabetic veterans participating in a multisite trial. Two comorbidity measures, four risk adjusters, a functional status measure, a diabetes complication count, and baseline expenditures were constructed from administrative and survey data. Outpatient, inpatient, and total expenditure models were estimated using ordinary least squares regression. Adjusted R2 statistics and predictive ratios were compared across measures to assess overall explanatory power and explanatory power of low- and high-cost subgroups. Administrative data-based risk adjusters performed better than the comorbidity, functional status, and diabetes-specific measures in all expenditure models. The diagnostic cost groups (DCGs) measure had the greatest predictive power overall and for the low- and high-cost subgroups, while the diabetes-specific measure had the lowest predictive power. A model with DCGs and the diabetes-specific measure modestly improved predictive power. Existing generic measures can be useful for diabetes-specific research and policy applications, but more predictive diabetes-specific measures are needed.
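The group-level predictive ratio used in studies like this one is simply total predicted over total observed expenditure for a subgroup. A sketch with made-up numbers for an underpredicted high-cost subgroup:

```python
def predictive_ratio(predicted, observed):
    """Ratio of total predicted to total observed expenditure for a
    subgroup: 1.0 indicates perfect calibration, values below 1.0
    indicate underprediction, values above 1.0 overprediction."""
    return sum(predicted) / sum(observed)

# Hypothetical high-cost subgroup: the model predicts only 75% of
# the expenditure actually observed.
pr = predictive_ratio([900, 1100, 1000], [1200, 1500, 1300])
```

Risk adjusters typically underpredict for the highest-cost subgroup and overpredict for the lowest, which is exactly the pattern these ratios are designed to expose.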
Ma, Xiaosu; Chien, Jenny Y; Johnson, Jennal; Malone, James; Sinha, Vikram
2017-08-01
The purpose of this prospective, model-based simulation approach was to evaluate the impact of various rapid-acting mealtime insulin dose-titration algorithms on glycemic control (hemoglobin A1c [HbA1c]). Seven stepwise, glucose-driven insulin dose-titration algorithms were evaluated with a model-based simulation approach by using insulin lispro. Pre-meal blood glucose readings were used to adjust insulin lispro doses. Two control dosing algorithms were included for comparison: no insulin lispro (basal insulin+metformin only) or insulin lispro with fixed doses without titration. Of the seven dosing algorithms assessed, daily adjustment of insulin lispro dose, when glucose targets were met at pre-breakfast, pre-lunch, and pre-dinner, sequentially, demonstrated greater HbA1c reduction at 24 weeks, compared with the other dosing algorithms. Hypoglycemic rates were comparable among the dosing algorithms except for higher rates with the insulin lispro fixed-dose scenario (no titration), as expected. The inferior HbA1c response for the "basal plus metformin only" arm supports the additional glycemic benefit with prandial insulin lispro. Our model-based simulations support a simplified dosing algorithm that does not include carbohydrate counting, but that includes glucose targets for daily dose adjustment to maintain glycemic control with a low risk of hypoglycemia.
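A stepwise, glucose-driven titration rule of the general kind evaluated above can be sketched as follows; the target, hypoglycemia threshold, and step size here are hypothetical illustrations, not the algorithms simulated in the study:

```python
def titrate(dose_units, premeal_glucose_mgdl, target=110.0, step=1.0):
    """One glucose-driven adjustment of a mealtime insulin dose:
    raise the dose while the next pre-meal glucose is above target,
    lower it when glucose is hypoglycemic, otherwise hold.
    All numeric values are hypothetical."""
    if premeal_glucose_mgdl < 70.0:
        return max(dose_units - step, 0.0)  # back off on hypoglycemia
    if premeal_glucose_mgdl > target:
        return dose_units + step            # titrate up toward target
    return dose_units                       # target met: hold the dose
```

Note the rule uses only a pre-meal glucose reading and a fixed step, which is what makes such algorithms simpler for patients than carbohydrate counting.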
A causal examination of the effects of confounding factors on multimetric indices
Schoolmaster, Donald R.; Grace, James B.; Schweiger, E. William; Mitchell, Brian R.; Guntenspergen, Glenn R.
2013-01-01
The development of multimetric indices (MMIs) as a means of providing integrative measures of ecosystem condition is becoming widespread. An increasingly recognized problem for the interpretability of MMIs is controlling for the potentially confounding influences of environmental covariates. Most common approaches to handling covariates are based on simple notions of statistical control, leaving the causal implications of covariates and their adjustment unstated. In this paper, we use graphical models to examine some of the potential impacts of environmental covariates on the observed signals between human disturbance and potential response metrics. Using simulations based on various causal networks, we show how environmental covariates can both obscure and exaggerate the effects of human disturbance on individual metrics. We then examine from a causal interpretation standpoint the common practice of adjusting ecological metrics for environmental influences using only the set of sites deemed to be in reference condition. We present and examine the performance of an alternative approach to metric adjustment that uses the whole set of sites and models both environmental and human disturbance effects simultaneously. The findings from our analyses indicate that failing to model and adjust metrics can result in a systematic bias towards those metrics in which environmental covariates function to artificially strengthen the metric–disturbance relationship resulting in MMIs that do not accurately measure impacts of human disturbance. We also find that a “whole-set modeling approach” requires fewer assumptions and is more efficient with the given information than the more commonly applied “reference-set” approach.
An evaluation of gender equity in different models of primary care practices in Ontario
2010-01-01
Background The World Health Organization calls for more work evaluating the effect of health care reforms on gender equity in developed countries. We performed this evaluation in Ontario, Canada, where primary care models resulting from reforms co-exist. Methods This cross-sectional study of primary care practices uses data collected in 2005-2006. Healthcare service models included in the study are fee-for-service (FFS) based, salaried, and capitation based. We compared the quality of care delivered to women and men in practices of each model. We performed multi-level, multivariate regressions adjusting for patient socio-demographic and economic factors to evaluate vertical equity, and adjusting for these and health factors to evaluate horizontal equity. We measured seven dimensions of health service delivery (e.g. accessibility and continuity) and three dimensions of quality of care using patient surveys (n = 5,361) and chart abstractions (n = 4,108). Results Health service delivery measures were comparable in women and men, with differences ≤ 2.2% in all seven dimensions and in all models. Significant gender differences in the health promotion subjects addressed were observed. Female-specific preventive manoeuvres were more likely to be performed than other preventive care. Men attending FFS practices were more likely than women to receive influenza immunization (adjusted odds ratio: 1.75, 95% confidence interval (CI) 1.05, 2.92). There was no difference in the other three prevention indicators. FFS practices were also more likely to provide recommended care for chronic diseases to men than to women (adjusted difference of -11.2%, CI -21.7, -0.8). A similar trend was observed in Community Health Centers (CHC). Conclusions The observed differences in the type of health promotion subjects discussed are likely an appropriate response to differential healthcare needs between genders. 
Chronic disease care is inequitable in FFS but not in capitation-based models. We recommend that efforts to monitor and address gender-based differences in the delivery of chronic disease management in primary care be pursued. PMID:20331861
Valentine, William J; Van Brunt, Kate; Boye, Kristina S; Pollock, Richard F
2018-06-01
The aim of the present study was to evaluate the cost effectiveness of rapid-acting analog insulin relative to regular human insulin in adults with type 1 diabetes mellitus in Germany. The PRIME Diabetes Model, a patient-level, discrete event simulation model, was used to project long-term clinical and cost outcomes for patients with type 1 diabetes from the perspective of a German healthcare payer. Simulated patients had a mean age of 21.5 years, duration of diabetes of 8.6 years, and baseline glycosylated hemoglobin of 7.39%. Regular human insulin and rapid-acting analog insulin regimens reduced glycosylated hemoglobin by 0.312 and 0.402%, respectively. Compared with human insulin, hypoglycemia rate ratios with rapid-acting analog insulin were 0.51 (non-severe nocturnal) and 0.80 (severe). No differences in non-severe diurnal hypoglycemia were modeled. Discount rates of 3% were applied to future costs and clinical benefits accrued over the 50-year time horizon. In the base-case analysis, rapid-acting analog insulin was associated with an improvement in quality-adjusted life expectancy of 1.01 quality-adjusted life-years per patient (12.54 vs. 11.53 quality-adjusted life-years). Rapid-acting analog insulin was also associated with an increase in direct costs of €4490, resulting in an incremental cost-effectiveness ratio of €4427 per quality-adjusted life-year gained vs. human insulin. Sensitivity analyses showed that the base case was driven predominantly by differences in hypoglycemia; abolishing these differences reduced incremental quality-adjusted life expectancy to 0.07 quality-adjusted life-years, yielding an incremental cost-effectiveness ratio of €74,622 per quality-adjusted life-year gained. Rapid-acting analog insulin is associated with beneficial outcomes in patients with type 1 diabetes and is likely to be considered cost effective in the German setting vs. regular human insulin.
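The incremental cost-effectiveness ratio reported above is incremental cost divided by incremental effect. With the abstract's rounded figures the ratio comes out near, but not exactly at, the reported EUR 4427 per QALY, which reflects unrounded model outputs:

```python
def icer(delta_cost, delta_effect):
    """Incremental cost-effectiveness ratio: the extra cost per unit
    of extra effect (here, per quality-adjusted life-year gained)."""
    return delta_cost / delta_effect

# Rounded inputs from the abstract: EUR 4490 extra cost, 1.01 extra
# QALYs. The result is ~EUR 4446/QALY, close to the reported 4427.
approx = icer(4490.0, 1.01)
```

The sensitivity analysis in the abstract is the same arithmetic with a much smaller QALY gain (0.07), which is why the ratio jumps to roughly EUR 74,622 per QALY.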
Twinomujuni, Cyprian; Nuwaha, Fred; Babirye, Juliet Ndimwibo
2015-01-01
Cervical cancer is one of the leading causes of cancer deaths among women globally, and its impact is felt mostly in developing countries such as Uganda, where its prevalence is higher and utilization of cancer screening services is low. This study aimed to identify factors associated with intention to screen for cervical cancer among women of reproductive age in Masaka, Uganda, using the attitude, social influence and self-efficacy (ASE) model. A descriptive community-based survey was conducted among 416 women. A semi-structured, interviewer-administered questionnaire was used to collect data. Unadjusted and adjusted prevalence ratios (PR) were computed using a generalized linear model with Poisson family and a log link in STATA 12. Only 7% (29/416) of the respondents had ever screened for cervical cancer, although a higher proportion (63%, 262/416) reported an intention to screen. The intention to screen for cervical cancer was higher among those who said they were at risk of developing cervical cancer (adjusted prevalence ratio [PR] 2.0, 95% CI 1.60-2.58), those who said they would refer other women for screening (adjusted PR 1.4, 95% CI 1.06-1.88), and those who were unafraid of being diagnosed with cervical cancer (adjusted PR 1.6, 95% CI 1.36-1.93). Those who reported discussions on cervical cancer with health care providers (adjusted PR 1.2, 95% CI 1.05-1.44), those living with a sexual partner (adjusted PR 1.4, 95% CI 1.11-1.68), and those who were formally employed (adjusted PR 1.2, 95% CI 1.03-1.35) more frequently reported an intention to screen. In conclusion, health education that increases risk perception, improves women's attitudes towards screening, and addresses the fears women hold would increase intention to screen for cervical cancer. Interventions should also target increased discussions with health workers.
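The adjusted PRs above come from a Poisson-family GLM with a log link; the crude (unadjusted) prevalence ratio underlying them is just a ratio of proportions, sketched here with hypothetical counts rather than the study's data:

```python
def prevalence_ratio(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """Crude prevalence ratio: prevalence of the outcome in the exposed
    group divided by prevalence in the unexposed group. No covariate
    adjustment; the study's adjusted PRs require a regression model."""
    return (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)

# Hypothetical: 80/100 exposed vs 40/100 unexposed report the outcome.
pr = prevalence_ratio(80, 100, 40, 100)
```

Prevalence ratios are preferred over odds ratios here because the outcome (intention to screen, 63%) is common, and odds ratios would overstate the association.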
Shi, Chune; Fernando, H J S; Hyde, Peter
2012-02-01
Phoenix, Arizona, has been an ozone nonattainment area for the past several years and it remains so. Mitigation strategies call for improved modeling methodologies as well as understanding of ozone formation and destruction mechanisms during seasons of high ozone events. To this end, the efficacy of lateral boundary conditions (LBCs) based on satellite measurements (adjusted LBCs) was investigated, vis-à-vis the default LBCs, for improving the predictions of the Models-3/CMAQ photochemical air quality modeling system. The model evaluations were conducted using hourly ground-level ozone and NO2 concentrations as well as tropospheric NO2 columns and ozone concentrations in the middle to upper troposphere, with the 'design' periods being June and July of 2006. Both included high ozone episodes, but the June (pre-monsoon) period was characterized by local thermal circulation whereas the July (monsoon) period was characterized by synoptic influence. Overall, improved simulations were noted for adjusted-LBC runs for ozone concentrations both at ground level and in the middle to upper troposphere, based on EPA-recommended model performance metrics. The probability of detection (POD) of ozone exceedances (>75 ppb, 8-h averages) for the entire domain increased from 20.8% for the default-LBC run to 33.7% for the adjusted-LBC run. A process analysis of modeling results revealed that ozone within the PBL during the bulk of the pre-monsoon season is contributed by local photochemistry and vertical advection, while the contributions of horizontal and vertical advection are comparable in the monsoon season. The process analysis with adjusted-LBC runs confirms the contributions of vertical advection to episodic high ozone days, and hence elucidates the importance of improving the predictability of upper levels with improved LBCs. Copyright © 2011 Elsevier B.V. All rights reserved.
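The probability of detection quoted for ozone exceedances is the standard forecast-verification ratio of hits to observed events; a minimal sketch with illustrative counts:

```python
def probability_of_detection(hits, misses):
    """POD for exceedance forecasts: the fraction of observed
    exceedances (e.g. >75 ppb 8-h average ozone) that the model
    also predicted. hits + misses = observed exceedances."""
    return hits / (hits + misses)

# Illustrative: 3 of 10 observed exceedance days predicted -> POD 0.3,
# in the same range as the abstract's 20.8% and 33.7% figures.
pod = probability_of_detection(3, 7)
```

POD alone rewards over-forecasting, which is why EPA-style evaluations pair it with false-alarm and bias metrics.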
Cross-cultural adjustment to the United States: the role of contextualized extraversion change
Liu, Mengqiao; Huang, Jason L.
2015-01-01
Personality traits can predict how well sojourners and expatriates adjust to new cultures, but the adjustment process remains largely unexamined. Based on recent findings that personality traits both predict and respond to life events and experiences, this research focuses on within-person change in contextualized extraversion and its predictive validity for cross-cultural adjustment in international students newly arrived at US colleges. We proposed that the initial level as well as the rate of change in school extraversion (i.e., contextualized extraversion reflecting behavioral tendencies in school settings) would predict cross-cultural adjustment, withdrawal cognitions, and school satisfaction. Latent growth modeling of three-wave longitudinal surveys of 215 new international students (54% female, mean age 24 years) revealed that the initial level of school extraversion significantly predicted cross-cultural adjustment, (lower) withdrawal cognitions, and satisfaction, while the rate of change (increase) in school extraversion predicted cross-cultural adjustment and (lower) withdrawal cognitions. We further modeled global extraversion and cross-cultural motivation as antecedents and explored within-person change in school extraversion as a proximal factor that affects adjustment outcomes. The findings highlight the malleability of contextualized personality and, more importantly, the importance of understanding within-person change in contextualized personality in a cross-cultural adjustment context. The study points to the need for more research that explicates the process of personality change in other contexts. PMID:26579033
Model-Based Economic Evaluation of Treatments for Depression: A Systematic Literature Review.
Kolovos, Spyros; Bosmans, Judith E; Riper, Heleen; Chevreul, Karine; Coupé, Veerle M H; van Tulder, Maurits W
2017-09-01
An increasing number of model-based studies that evaluate the cost effectiveness of treatments for depression are being published. These studies have different characteristics and use different simulation methods. We aimed to systematically review model-based studies evaluating the cost effectiveness of treatments for depression and examine which modelling technique is most appropriate for simulating the natural course of depression. The literature search was conducted in the databases PubMed, EMBASE and PsycInfo between 1 January 2002 and 1 October 2016. Studies were eligible if they used a health economic model with quality-adjusted life-years or disability-adjusted life-years as an outcome measure. Data related to various methodological characteristics were extracted from the included studies. The available modelling techniques were evaluated based on 11 predefined criteria. This methodological review included 41 model-based studies, of which 21 used decision trees (DTs), 15 used cohort-based state-transition Markov models (CMMs), two used individual-based state-transition models (ISMs), and three used discrete-event simulation (DES) models. Just over half of the studies (54%) evaluated antidepressants compared with a control condition. The data sources, time horizons, cycle lengths, perspectives adopted and number of health states/events all varied widely between the included studies. DTs scored positively in four of the 11 criteria, CMMs in five, ISMs in six, and DES models in seven. There were substantial methodological differences between the studies. Since the individual history of each patient is important for the prognosis of depression, DES and ISM simulation methods may be more appropriate than the others for a pragmatic representation of the course of depression. However, direct comparisons between the available modelling techniques are necessary to yield firm conclusions.
Adjoint-Based Sensitivity Kernels for Glacial Isostatic Adjustment in a Laterally Varying Earth
NASA Astrophysics Data System (ADS)
Crawford, O.; Al-Attar, D.; Tromp, J.; Mitrovica, J. X.; Austermann, J.; Lau, H. C. P.
2017-12-01
We consider a new approach to both the forward and inverse problems in glacial isostatic adjustment. We present a method for forward modelling GIA in compressible and laterally heterogeneous earth models with a variety of linear and non-linear rheologies. Instead of using the so-called sea level equation, which must be solved iteratively, the forward theory we present consists of a number of coupled evolution equations that can be straightforwardly numerically integrated. We also apply the adjoint method to the inverse problem in order to calculate the derivatives of measurements of GIA with respect to the viscosity structure of the Earth. Such derivatives quantify the sensitivity of the measurements to the model. The adjoint method enables efficient calculation of continuous and laterally varying derivatives, allowing us to calculate the sensitivity of measurements of glacial isostatic adjustment to the Earth's three-dimensional viscosity structure. The derivatives have a number of applications within the inverse method. Firstly, they can be used within a gradient-based optimisation method to find a model which minimises some data misfit function. The derivatives can also be used to quantify the uncertainty in such a model and hence to provide understanding of which parts of the model are well constrained. Finally, they enable construction of measurements which provide sensitivity to a particular part of the model space. We illustrate both the forward and inverse aspects with numerical examples in a spherically symmetric earth model.
Sensitivity of TRIM projections to management, harvest, yield, and stocking adjustment assumptions.
Susan J. Alexander
1991-01-01
The Timber Resource Inventory Model (TRIM) was used to make several projections of forest industry timber supply for the Douglas-fir region. The sensitivity of these projections to assumptions about management and yields is discussed. A base run is compared to runs in which yields were altered, stocking adjustment was eliminated, harvest assumptions were changed, and...
ERIC Educational Resources Information Center
Chung, Grace H.; Yoo, Joan P.
2013-01-01
The present study proposes a model of using the Multicultural Family Support Centers and adjustment among foreign brides and their interethnic and interracial families in South Korea based on the narratives of 10 foreign brides married to Korean men and 11 service providers who directly interact with these women and their families. The results…
ERIC Educational Resources Information Center
Marlowe, Mike
1979-01-01
A study investigated the effectiveness of a therapeutic motor development program in increasing the social adjustment and peer acceptance of a mainstreamed 10-year-old educable mentally retarded boy. The motor development program was based on the games analysis model and involved the S and 13 of his normal classmates. (Author/DLS)
Choy, Yun Ho; Mahboob, Alam; Cho, Chung Il; Choi, Jae Gwan; Choi, Im Soo; Choi, Tae Jeong; Cho, Kwang Hyun; Park, Byoung Ho
2015-01-01
The objective of this study was to compare the effects of body weight growth adjustment methods on the genetic parameters of body growth and tissue traits in three pig breeds. Data collected on 101,820 Landrace, 281,411 Yorkshire, and 78,068 Duroc pigs, born in Korean swine breeder farms since 2000, were analyzed. Records included body weights on test day and amplitude (A)-mode ultrasound carcass measures of backfat thickness (BF), eye muscle area (EMA), and retail cut percentage (RCP). Days to 90 kg body weight (DAYS90) were obtained by adjusting the age based on the body weight on the test day. The ultrasound measures were also pre-adjusted (ABF, AEMA, ARCP) based on their test day measures. The (co)variance components were obtained with 3 multi-trait animal models using the REMLF90 software package. Model I included DAYS90 and the unadjusted ultrasound traits, whereas models II and III included DAYS90 and the pre-adjusted ultrasound traits. Fixed factors were sex and contemporary group (herd-year-month of birth) for all traits in all models. Additionally, models I and II fitted a linear covariate of final weight on the ultrasound traits. Heritability (h2) estimates for DAYS90, BF, EMA, and RCP ranged from 0.36 to 0.42, 0.34 to 0.43, 0.20 to 0.22, and 0.39 to 0.45, respectively, among the models. The h2 estimates of DAYS90 from models II and III were similar. The h2 estimates for ABF, AEMA, and ARCP were 0.35 to 0.44, 0.20 to 0.25, and 0.41 to 0.46, respectively. Heritability estimates varied mostly among the breeds. The genetic correlations (rG) were moderately negative between DAYS90 and BF (−0.29 to −0.38) and between DAYS90 and EMA (−0.16 to −0.26). BF had a strong rG with RCP (−0.87 to −0.93). Moderately positive rG existed between DAYS90 and RCP (0.20 to 0.28) and between EMA and RCP (0.35 to 0.44) among the breeds.
In models II and III, the correlations of DAYS90 with ABF, AEMA, and ARCP were mostly low or negligible, except the rG between DAYS90 and AEMA from model III (0.27 to 0.30). The rG of AEMA with ABF and with ARCP were moderate, with negative and positive signs respectively, which also reflected the influence of the pre-adjustments. However, the rG between BF and RCP remained insensitive to trait pre-adjustment or covariate fitting. We therefore conclude that ultrasound measures taken at a test endpoint of about 90 kg body weight should be adjusted for body weight growth. Our adjustment formulas, particularly those for BF and EMA, should be revised further to accommodate the added variation due to different performance testing endpoints with regard to differential growth in body composition. PMID:26580436
Perez-Rodriguez, M Mercedes; Garcia-Nieto, Rebeca; Fernandez-Navarro, Pablo; Galfalvy, Hanga; de Leon, Jose; Baca-Garcia, Enrique
2012-01-01
Objectives To investigate trends in, and the correlation between, gross domestic product (GDP) per capita adjusted for purchasing power parity (PPP) and suicide rates in 10 WHO regions during the past 30 years. Design Analyses of databases of PPP-adjusted GDP per capita and suicide rates. Countries were grouped according to the Global Burden of Disease regional classification system. Data sources World Bank's official website and WHO's mortality database. Statistical analyses After graphically displaying PPP-adjusted GDP per capita and suicide rates, mixed effect models were used for representing and analysing clustered data. Results Three different groups of countries, based on the correlation between PPP-adjusted GDP per capita and suicide rates, are reported: (1) positive correlation: developing (lower middle and upper middle income) Latin-American and Caribbean countries, developing countries in the South East Asian Region including India, some countries in the Western Pacific Region (such as China and South Korea) and high-income Asian countries, including Japan; (2) negative correlation: high-income and developing European countries, Canada, Australia and New Zealand; and (3) no correlation: an African country. Conclusions PPP-adjusted GDP per capita may offer a simple measure for designing the type of preventive interventions aimed at lowering suicide rates that can be used across countries. Public health interventions might be more suitable for developing countries. In high-income countries, however, preventive measures based on the medical model might prove more useful. PMID:22586285
Block, H C; Klopfenstein, T J; Erickson, G E
2006-04-01
Two data sets were developed to evaluate and refine feed energy predictions with the beef National Research Council (NRC, 1996) model level 1. The first data set included pen means of group-fed cattle from 31 growing trials (201 observations) and 17 finishing trials (154 observations) representing over 7,700 animals fed outside in dirt lots. The second data set consisted of 15 studies with individually fed cattle (916 observations) fed in a barn. In each data set, actual ADG was compared with ADG predicted with the NRC model level 1, assuming thermoneutral environmental conditions. Next, the observed ADG (kg), TDN intake (kg/d), and TDN concentration (kg/kg of DM) were used to develop equations to adjust the level 1 predicted diet NEm and NEg (diet NE adjusters) to be applied to more accurately predict ADG. In both data sets, the NRC (1996) model level 1 inaccurately predicted ADG (P < 0.001 for slope = 1; intercept = 0 when observed ADG was regressed on predicted ADG). The following nonlinear relationships to adjust NE based on observed ADG, TDN intake, and TDN concentration were all significant (P < 0.001): NE adjuster = 0.7011 x 10^(-0.8562 x ADG) + 0.8042, R2 = 0.325, s(y.x) = 0.136 kg; NE adjuster = 4.795 x 10^(-0.3689 x TDN intake) + 0.8233, R2 = 0.714, s(y.x) = 0.157 kg; and NE adjuster = 357 x 10^(-5.449 x TDN concentration) + 0.8138, R2 = 0.754, s(y.x) = 0.127 kg. An NE adjuster < 1 indicates overprediction of ADG. The average NE adjustment required for the pen-fed finishing trials was 0.820, whereas the significantly different (P < 0.001) adjustment of 0.906 for individually fed cattle indicates that the pen-fed environment increased NE requirements. The use of these equations should improve ADG prediction by the NRC (1996) model level 1, although the equations reflect limitations of the data from which they were developed and are appropriate only over the range of the developmental data set.
There is a need for independent evaluation of the ability of the equations to improve ADG prediction by the NRC (1996) model level 1.
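The TDN-concentration form of the reported adjuster equation can be applied directly; the sketch below implements it exactly as printed in the abstract. The example diet value (~0.83 kg TDN/kg DM) is an assumed input for illustration, and no extrapolation beyond the developmental data is implied:

```python
# NE adjuster from dietary TDN concentration (kg/kg of DM), as reported:
# NE adjuster = 357 x 10^(-5.449 x TDN concentration) + 0.8138
def ne_adjuster_tdn_conc(tdn_conc):
    return 357.0 * 10.0 ** (-5.449 * tdn_conc) + 0.8138

# A high-concentrate finishing diet (assumed TDN ~0.83 kg/kg DM) yields an
# adjuster below 1, i.e. level 1 overpredicts ADG for such diets.
adj = ne_adjuster_tdn_conc(0.83)
print(round(adj, 3))
```

The adjuster multiplies the level 1 predicted diet NEm and NEg before ADG is recomputed, which is how the authors propose the correction be used.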
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendt, Fabian F; Robertson, Amy N; Jonkman, Jason
During the course of the Offshore Code Comparison Collaboration, Continued, with Correlation (OC5) project, which focused on the validation of numerical methods through comparison against tank test data, the authors created a numerical FAST model of the 1:50-scale DeepCwind semisubmersible system that was tested at the Maritime Research Institute Netherlands ocean basin in 2013. This paper discusses several model calibration studies that were conducted to identify model adjustments that improve the agreement between the numerical simulations and the experimental test data. These calibration studies cover wind-field-specific parameters (coherence, turbulence), hydrodynamic and aerodynamic modeling approaches, as well as rotor model (blade-pitch and blade-mass imbalances) and tower model (structural tower damping coefficient) adjustments. These calibration studies were conducted based on relatively simple calibration load cases (wave only/wind only). The agreement between the final FAST model and experimental measurements is then assessed based on more-complex combined wind and wave validation cases.
Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard
2016-10-01
In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer within the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the covariates are equal is strong and may fail to account for overdispersion, i.e. variability of the rate parameter such that the variance exceeds the mean. Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including a quasi-likelihood, robust standard error estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed significant inherent overdispersion (p-value < 0.001), but the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for the non-flexible piecewise exponential models). We showed that there were no major differences between the methods. However, flexible piecewise regression modelling, with either a quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true (inherent) overdispersion.
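A quick way to see the kind of problem being tested is the Pearson dispersion statistic for a Poisson fit: the Pearson chi-square divided by its degrees of freedom should be near 1 if the Poisson mean-variance assumption holds. The sketch below uses simulated counts and an intercept-only fit; it illustrates the idea only, not the authors' regression-based score test:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate overdispersed counts: a negative binomial has variance > mean
# (here mean ~8, variance ~40).
y = rng.negative_binomial(n=2, p=0.2, size=500)

# Intercept-only Poisson fit: the MLE of the rate is just the sample mean.
mu = y.mean()

# Pearson dispersion: sum((y - mu)^2 / mu) / (n - p); ~1 if Poisson holds,
# substantially > 1 under overdispersion.
dispersion = np.sum((y - mu) ** 2 / mu) / (y.size - 1)
print(round(dispersion, 2))
```

In practice the same statistic computed from a fitted piecewise exponential model flags whether a quasi-likelihood or negative binomial correction is worth pursuing.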
Mechanical design and analysis of focal plate for gravity deformation
NASA Astrophysics Data System (ADS)
Wang, Jianping; Chu, Jiaru; Hu, Hongzhuan; Li, Kexuan; Zhou, Zengxiang
2014-07-01
The surface accuracy of an astronomical telescope's focal plate is a key determinant of precise stellar observation. To conduct accurate deformation measurements of the focal plate in different attitudes, a 6-DOF hexapod platform was used for attitude adjustment. Because a classic 6-DOF hexapod platform has a small adjustment range, an improved structural arrangement is proposed in this paper to achieve the extreme attitudes of the focal plate in the horizontal and vertical directions. To validate the feasibility of this method, a model of the angle changes at the ball hinges of the moving and base plates was set up. Simulation results in MATLAB suggest that the ball hinge angle changes of the moving and base plates remain within the limiting angle as the platform adjusts to its extreme attitudes. The proposed method has guiding significance for accurate surface measurement of focal plates.
Li, Bo; Zhang, Lian-Jun; Guo, Li-Wei; Fu, Ting-Ming; Zhu, Hua-Xu
2014-01-01
To optimize the pretreatment of Huanglian Jiedu decoction before ceramic membrane microfiltration, and to verify the effect of different pretreatments on the multiple model systems present in Chinese herb aqueous extracts, the solution environment of Huanglian Jiedu decoction was adjusted by different pretreatments. Microfiltration flux, transmittance of the ingredients and removal rate of common polymers were used as indicators to study the effect of the different solution environments. Flocculation gave the highest stable permeate flux, followed by vacuum filtration and adjustment of the pH to 9. The removal rate of common polymers was comparatively high, while the removal rate of protein was slightly lower than in the simulated solution. The transmittance of the index components was higher after pH adjustment and after flocculation. Membrane blocking resistance was the major factor in membrane fouling. Based on the above indicators, the effect of flocculation was the most significant, followed by adjusting the pH to 9.
Tangen, C M; Koch, G G
1999-03-01
In the randomized clinical trial setting, controlling for covariates is expected to produce variance reduction for the treatment parameter estimate and to adjust for random imbalances of covariates between the treatment groups. However, for the logistic regression model, variance reduction is not obviously obtained. This can lead to concerns about the assumptions of the logistic model. We introduce a complementary nonparametric method for covariate adjustment. It provides results that are usually compatible with expectations for analysis of covariance. The only assumptions required are based on randomization and sampling arguments. The resulting treatment parameter is an (unconditional) population average log-odds ratio that has been adjusted for random imbalance of covariates. Data from a randomized clinical trial are used to compare results from the traditional maximum likelihood logistic method with those from the nonparametric logistic method. We examine treatment parameter estimates, corresponding standard errors, and significance levels in models with and without covariate adjustment. In addition, we discuss differences between unconditional population average treatment parameters and conditional subpopulation average treatment parameters. Additional features of the nonparametric method, including stratified (multicenter) and multivariate (multivisit) analyses, are illustrated. Extensions of this methodology to the proportional odds model are also made.
Assessing Medicare's Approach To Covering New Drugs In Bundled Payments For Oncology.
Muldoon, L Daniel; Pelizzari, Pamela M; Lang, Kelsey A; Vandigo, Joe; Pyenson, Bruce S
2018-05-01
New oncology therapies can contribute to survival or quality of life, but payers and policy makers have raised concerns about the cost of these therapies. Similar concerns extend beyond cancer. In seeking a solution, payers are increasingly turning toward value-based payment models in which providers take financial risk for costs and outcomes. These models, including episode payment and bundled payment, create financial gains for providers who reduce cost, but they also create concerns about potential stinting on necessary treatments. One approach, which the Centers for Medicare and Medicaid Services adopted in the Oncology Care Model (OCM), is to partially adjust medical practices' budgets for their use of novel therapies, defined in this case as new oncology drugs or new indications for existing drugs approved after December 31, 2014. In an analysis of the OCM novel therapies adjustment using historical Medicare claims data, we found that the adjustment may provide important financial protection for practices. In a simulation we performed, the adjustment reduced the average loss per treatment episode by $758 (from $807 to $49) for large practices that use novel therapies often. Lessons from the OCM can have implications for other alternative payment models.
Wang, Li-Pen; Ochoa-Rodríguez, Susana; Simões, Nuno Eduardo; Onof, Christian; Maksimović, Cedo
2013-01-01
The existing operational radar and raingauge networks are of limited applicability for urban hydrology. Radar rainfall estimates provide a good description of the spatiotemporal variability of rainfall; however, their accuracy is in general insufficient. It is therefore necessary to adjust radar measurements using raingauge data, which provide accurate point rainfall information. Several gauge-based radar rainfall adjustment techniques have been developed and mainly applied at coarser spatial and temporal scales; however, their suitability for small-scale urban hydrology is seldom explored. In this paper a review of gauge-based adjustment techniques is first provided. After that, two techniques, based respectively upon the ideas of mean bias reduction and error variance minimisation, were selected and tested using as a case study an urban catchment (∼8.65 km²) in North-East London. The radar rainfall estimates of four historical events (2010-2012) were adjusted using in situ raingauge estimates and the adjusted rainfall fields were applied to the hydraulic model of the study area. The results show that both techniques can effectively reduce mean bias; however, the technique based upon error variance minimisation can in general better reproduce the spatial and temporal variability of rainfall, which proved to have a significant impact on the subsequent hydraulic outputs. This suggests that error variance minimisation based methods may be more appropriate for urban-scale hydrological applications.
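The first of the two ideas, mean bias reduction, has a particularly compact form: scale the whole radar field by the ratio of gauge totals to collocated radar totals. A minimal numpy sketch with synthetic values and a single uniform adjustment factor (error variance minimisation would instead weight the correction spatially):

```python
import numpy as np

# Synthetic radar rainfall field (mm) and raingauge observations at known pixels.
radar_field = np.array([[2.0, 3.5, 1.0],
                        [4.0, 2.5, 0.5],
                        [1.5, 3.0, 2.0]])
gauge_pixels = [(0, 1), (1, 0), (2, 2)]        # pixel indices with gauges
gauge_values = np.array([4.2, 5.1, 2.3])       # collocated gauge readings (mm)

# Mean-field bias adjustment: one multiplicative factor for the whole field.
radar_at_gauges = np.array([radar_field[i, j] for i, j in gauge_pixels])
factor = gauge_values.sum() / radar_at_gauges.sum()
adjusted_field = radar_field * factor

print(round(factor, 3))
```

After adjustment the field totals at the gauge pixels match the gauge totals, which removes mean bias but leaves the radar's spatial pattern untouched.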
The on-orbit calibration of geometric parameters of the Tian-Hui 1 (TH-1) satellite
NASA Astrophysics Data System (ADS)
Wang, Jianrong; Wang, Renxiang; Hu, Xin; Su, Zhongbo
2017-02-01
The on-orbit calibration of geometric parameters is a key step in improving the location accuracy of satellite images without using Ground Control Points (GCPs). Most on-orbit calibration methods are based on self-calibration using additional parameters; however, different numbers of additional parameters may lead to different results. Triangulation bundle adjustment is another way to calibrate the geometric parameters of a camera, and it can describe the changes in each geometric parameter. When the triangulation bundle adjustment method is applied to calibrate geometric parameters, a prerequisite is that the strip model can avoid the systematic deformation caused by the rate of attitude changes. For a stereo camera, the influence of the intersection angle should also be considered during calibration. The Equivalent Frame Photo (EFP) bundle adjustment based on the Line-Matrix CCD (LMCCD) image can resolve the systematic distortion of the strip model and obtain high location accuracy without using GCPs. In this paper, triangulation bundle adjustment is used to calibrate the geometric parameters of the TH-1 satellite cameras based on LMCCD imagery. During the bundle adjustment, the three-line array cameras are reconstructed by adopting the principle of inverse triangulation. Finally, the geometric accuracy is validated before and after on-orbit calibration using 5 testing fields. After on-orbit calibration, the 3D geometric accuracy improved from 170 m to 11.8 m. The results show that the location accuracy of TH-1 without using GCPs is significantly improved by the on-orbit calibration of the geometric parameters.
Hocking, Matthew C.; McCurdy, Mark; Turner, Elise; Kazak, Anne E.; Noll, Robert B.; Phillips, Peter; Barakat, Lamia P.
2014-01-01
Pediatric brain tumor (BT) survivors are at risk for psychosocial late effects across many domains of functioning, including neurocognitive and social. The literature on the social competence of pediatric BT survivors is still developing and future research is needed that integrates developmental and cognitive neuroscience research methodologies to identify predictors of survivor social adjustment and interventions to ameliorate problems. This review discusses the current literature on survivor social functioning through a model of social competence in childhood brain disorder and suggests future directions based on this model. Interventions pursuing change in survivor social adjustment should consider targeting social ecological factors. PMID:25382825
Sung, Sheng-Feng; Hsieh, Cheng-Yang; Kao Yang, Yea-Huei; Lin, Huey-Juan; Chen, Chih-Hung; Chen, Yu-Wei; Hu, Ya-Han
2015-11-01
Case-mix adjustment is difficult for stroke outcome studies using administrative data. However, relevant prescription, laboratory, procedure, and service claims might be surrogates for stroke severity. This study proposes a method for developing a stroke severity index (SSI) by using administrative data. We identified 3,577 patients with acute ischemic stroke from a hospital-based registry and analyzed claims data with a large number of candidate features. Stroke severity was measured using the National Institutes of Health Stroke Scale (NIHSS). We used two data mining methods and conventional multiple linear regression (MLR) to develop prediction models, comparing model performance according to the Pearson correlation coefficient between the SSI and the NIHSS. We validated these models in four independent cohorts by using hospital-based registry data linked to a nationwide administrative database. We identified seven predictive features and developed three models. The k-nearest neighbor model (correlation coefficient, 0.743; 95% confidence interval: 0.737, 0.749) performed slightly better than the MLR model (0.742; 0.736, 0.747), followed by the regression tree model (0.737; 0.731, 0.742). In the validation cohorts, the correlation coefficients were between 0.677 and 0.725 for all three models. The claims-based SSI enables adjusting for disease severity in stroke studies using administrative data.
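The k-nearest-neighbor step can be sketched with plain numpy: each patient's predicted severity is the average NIHSS of the k training patients with the most similar claims-feature vectors. The feature values and k below are illustrative, not the study's actual seven features or tuned k:

```python
import numpy as np

def knn_predict(X_train, y_train, x_new, k=3):
    """Predict a continuous severity score as the mean outcome of the
    k nearest training points in feature space (Euclidean distance)."""
    dists = np.linalg.norm(X_train - x_new, axis=1)
    nearest = np.argsort(dists)[:k]
    return y_train[nearest].mean()

# Toy claims-derived features (hypothetical indicators/counts, e.g. airway
# procedure, NG tube, ICU days) paired with registry NIHSS scores.
X_train = np.array([[1, 1, 5], [0, 1, 2], [0, 0, 0], [1, 0, 3], [0, 0, 1]], float)
y_train = np.array([22.0, 12.0, 2.0, 15.0, 4.0])

print(knn_predict(X_train, y_train, np.array([1, 1, 4.0]), k=3))
```

In a real application the features would be standardized first so that no single claim count dominates the distance metric.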
Phenologically-tuned MODIS NDVI-based production anomaly estimates for Zimbabwe
Funk, Chris; Budde, Michael E.
2009-01-01
For thirty years, simple crop water balance models have been used by the early warning community to monitor agricultural drought. These models estimate and accumulate actual crop evapotranspiration, evaluating environmental conditions based on crop water requirements. Unlike seasonal rainfall totals, these models take into account the phenology of the crop, emphasizing conditions during the peak grain filling phase of crop growth. In this paper we describe an analogous metric of crop performance based on time series of Moderate Resolution Imaging Spectroradiometer (MODIS) Normalized Difference Vegetation Index (NDVI) imagery. A special temporal filter is used to screen for cloud contamination. Regional NDVI time series are then composited for cultivated areas, and adjusted temporally according to the timing of the rainy season. This adjustment standardizes the NDVI response vis-à-vis the expected phenological response of maize. A national time series index is then created by taking the cropped-area weighted average of the regional series. This national time series provides an effective summary of vegetation response in agricultural areas, and allows for the identification of NDVI green-up during grain filling. Onset-adjusted NDVI values following the grain filling period are well correlated with U.S. Department of Agriculture production figures, possess desirable linear characteristics, and perform better than more common indices such as maximum seasonal NDVI or seasonally averaged NDVI. Thus, just as appropriately calibrated crop water balance models can provide more information than seasonal rainfall totals, the appropriate agro-phenological filtering of NDVI can improve the utility and accuracy of space-based agricultural monitoring.
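The onset adjustment and cropped-area weighting can be illustrated in a few lines: each regional NDVI series is shifted so that its dekad of season onset aligns across regions, then the national series is the cropped-area weighted average of the aligned series. Onset dekads, area weights and NDVI values below are invented for illustration:

```python
import numpy as np

# Regional NDVI time series (rows = regions, columns = dekads), illustrative.
ndvi = np.array([[0.30, 0.35, 0.50, 0.65, 0.60, 0.45],
                 [0.28, 0.30, 0.33, 0.48, 0.62, 0.58],
                 [0.32, 0.45, 0.60, 0.66, 0.55, 0.40]])
onset = np.array([1, 2, 0])               # dekad of rainy-season onset per region
area_weights = np.array([0.5, 0.3, 0.2])  # cropped-area share per region

# Align each region so that column 0 corresponds to its own season onset.
n_keep = ndvi.shape[1] - onset.max()
aligned = np.stack([row[o:o + n_keep] for row, o in zip(ndvi, onset)])

# National onset-adjusted series: cropped-area weighted average.
national = area_weights @ aligned
print(np.round(national, 3))
```

Because every region's series now starts at its own onset, a fixed offset into the national series (e.g. the grain-filling window) compares phenologically equivalent periods across regions and years.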
Quality of workplace social relationships and perceived health.
Rydstedt, Leif W; Head, Jenny; Stansfeld, Stephen A; Woodley-Jones, Davina
2012-06-01
Associations between the quality of social relationships at work and mental and self-reported physical health were examined to assess whether these associations were independent of job strain. The study was based on cross-sectional survey data from 728 employees (response rate 58%) and included the Demand-Control-(Support) (DC-S) model, six items on the quality of social relationships at the workplace, the General Health Questionnaire (GHQ-30), and an item on self-reported physical health. Logistic regression analyses were used. A first set of models was run with adjustment for age, sex, and socioeconomic group. A second set of models was additionally adjusted for the dimensions of the DC-S model. Positive associations were found between the quality of social relationships and mental health as well as self-rated physical health, and these associations remained significant even after adjustment for the DC-S dimensions. The findings add support to the Health and Safety Executive stress management standards on social relationships at the workplace.
Real-time physics-based 3D biped character animation using an inverted pendulum model.
Tsai, Yao-Yang; Lin, Wen-Chieh; Cheng, Kuangyou B; Lee, Jehee; Lee, Tong-Yee
2010-01-01
We present a physics-based approach to generate 3D biped character animation that can react to dynamical environments in real time. Our approach utilizes an inverted pendulum model to online adjust the desired motion trajectory from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using Proportional-Derivative controllers whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method which computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transition, and motion adaptation to different limb sizes can also be generated without difficulty.
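The velocity-driven alternative to PD tracking that the authors describe computes joint torques from desired angular velocities rather than position errors. A one-joint sketch, with the gain, inertia and target velocity all assumed for illustration:

```python
# Velocity-driven tracking for one joint: torque proportional to the error
# between desired and current angular velocity (gain k_v is an assumed value).
def velocity_driven_torque(omega_desired, omega_current, k_v=50.0):
    return k_v * (omega_desired - omega_current)

# Simple simulation of one joint (inertia I) tracking a constant desired
# angular velocity via explicit Euler integration.
I, dt = 0.8, 0.001
omega, omega_des = 0.0, 2.0
for _ in range(2000):                     # 2 s of simulation
    tau = velocity_driven_torque(omega_des, omega)
    omega += (tau / I) * dt               # integrate angular acceleration
print(round(omega, 3))
```

The appeal noted in the abstract is that a single velocity gain is far easier to set than the stiffness/damping pairs of a PD controller, since it needs no tuning against gravity or link mass.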
Gurnani, Ashita S; John, Samantha E; Gavett, Brandon E
2015-05-01
The current study developed regression-based normative adjustments for a bi-factor model of the Brief Test of Adult Cognition by Telephone (BTACT). Archival data from the Midlife Development in the United States-II Cognitive Project were used to develop eight separate linear regression models that predicted bi-factor BTACT scores, accounting for age, education, gender, and occupation, alone and in various combinations. All regression models provided statistically significant fit to the data. A three-predictor regression model fit best and accounted for 32.8% of the variance in the global bi-factor BTACT score. The fit of the regression models was not improved by gender. Eight different regression models are presented to allow the user flexibility in applying demographic corrections to the bi-factor BTACT scores. Occupation corrections, while not widely used, may provide useful demographic adjustments for adult populations or for those individuals who have attained an occupational status not commensurate with expected educational attainment.
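Regression-based normative adjustment works by predicting an examinee's expected score from demographics and standardizing the observed score against that expectation. The coefficients and residual SD below are invented for illustration and are not the published BTACT norms:

```python
# Hypothetical normative regression: expected composite = b0 + b1*age + b2*educ.
b0, b_age, b_educ = 1.20, -0.015, 0.045     # assumed coefficients
sd_resid = 0.90                             # assumed residual SD of the model

def adjusted_z(observed, age, education_years):
    """Demographically adjusted z-score: (observed - predicted) / residual SD."""
    predicted = b0 + b_age * age + b_educ * education_years
    return (observed - predicted) / sd_resid

# A 60-year-old with 16 years of education scoring 1.1 on the composite:
print(round(adjusted_z(1.1, 60, 16), 3))
```

The eight models in the study differ only in which demographic terms enter the prediction equation; the standardization step is the same throughout.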
Design of motion adjusting system for space camera based on ultrasonic motor
NASA Astrophysics Data System (ADS)
Xu, Kai; Jin, Guang; Gu, Song; Yan, Yong; Sun, Zhiyuan
2011-08-01
Drift angle is the transverse intersection angle of the image-motion vector of a space camera; adjusting this angle reduces its influence on image quality. The ultrasonic motor (USM) is a new type of actuator driven by ultrasonic waves excited in piezoelectric ceramics, and it has many advantages over conventional electromagnetic motors. In this paper, improvements were made to the control system of the drift adjusting mechanism. A drift adjusting system was designed around the ultrasonic motor T-60; it is composed of the drift adjusting mechanical frame, the ultrasonic motor, the motor driver, a photoelectric encoder and the drift adjusting controller. A TMS320F28335 DSP was adopted as the computation and control processor, the photoelectric encoder served as the sensor of the position closed loop, and a voltage driving circuit was designed as the ultrasonic wave generator. A mathematical model of the drive circuit of the T-60 motor was built using MATLAB modules. To verify the validity of the drift adjusting system, a disturbance source was introduced and simulation analyses were performed. The motor drive control system for drift adjustment was designed with improved PID control. The drift angle adjusting system offers a small footprint, simple configuration, high position-control precision, fine repeatability, a self-locking property and low power consumption. The results show that the system can accomplish the drift angle adjusting mission well.
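The PID position loop at the heart of such a controller can be sketched generically. The gains, plant model and step size below are illustrative, not the paper's tuned values or its "improved" PID variant:

```python
# Discrete PID position controller for a drift-adjusting axis (illustrative
# gains; the actual controller and plant parameters are not given here).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy first-order axis: angle rate proportional to the commanded drive signal.
pid, angle, dt = PID(kp=4.0, ki=2.0, kd=0.05, dt=0.01), 0.0, 0.01
for _ in range(2000):                    # 20 s to settle on a 1-degree step
    u = pid.update(1.0, angle)
    angle += u * dt                      # simple plant: d(angle)/dt = u
print(round(angle, 3))
```

In the real system the PID output would drive the USM voltage circuit and the measured angle would come from the photoelectric encoder rather than an idealized plant.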
Pernik, Meribeth
1987-01-01
The sensitivity of a multilayer finite-difference regional flow model was tested by changing the calibrated values for five parameters in the steady-state model and one in the transient-state model. The parameters that changed under the steady-state condition were those that had been routinely adjusted during the calibration process as part of the effort to match pre-development potentiometric surfaces and elements of the water budget. The tested steady-state parameters include: recharge, riverbed conductance, transmissivity, confining unit leakance, and boundary location. In the transient-state model, the storage coefficient was adjusted. The sensitivity of the model to changes in the calibrated values of these parameters was evaluated with respect to the simulated response of net base flow to the rivers and the mean value of the absolute head residual. To provide a standard measurement of sensitivity from one parameter to another, the standard deviation of the absolute head residual was calculated. The steady-state model was shown to be most sensitive to changes in rates of recharge. When the recharge rate was held constant, the model was more sensitive to variations in transmissivity. Near the rivers, riverbed conductance becomes the dominant parameter in controlling the heads. Changes in confining unit leakance had little effect on simulated base flow but greatly affected head residuals. The model was relatively insensitive to changes in the location of no-flow boundaries and to moderate changes in the altitude of constant head boundaries. The storage coefficient was adjusted under transient conditions to illustrate the model's sensitivity to changes in storativity. The model is less sensitive to an increase in storage coefficient than to a decrease. As the storage coefficient decreased, aquifer drawdown increased and base flow decreased; the opposite response occurred when the storage coefficient was increased.
NASA Astrophysics Data System (ADS)
Bataille, Christopher G. F.
2005-11-01
Are further energy efficiency gains, or more recently greenhouse gas reductions, expensive or cheap? Analysts provide conflicting advice to policy makers based on divergent modelling perspectives, a 'top-down/bottom-up debate' in which economists use equation-based models that equilibrate markets by maximizing consumer welfare, and technologists use technology simulation models that minimize the financial cost of providing energy services. This thesis summarizes a long-term research project to find a middle ground between these two positions that is more useful to policy makers. Starting with the individual components of a behaviourally realistic and technologically explicit, or "hybrid", simulation model (ISTUM: Inter Sectoral Technology Use Model), the individual sectors of the economy are linked using a framework of micro- and macroeconomic feedbacks. These feedbacks are taken from the economic theory that informs the computable general equilibrium (CGE) family of models. Speaking the languages of both economists and engineers, the resulting "physical" equilibrium model of Canada (CIMS: Canadian Integrated Modeling System) equilibrates energy and end-product markets, including imports and exports, for seven regions and 15 economic sectors, including primary industry, manufacturing, transportation, commerce, residences, governmental infrastructure and the energy supply sectors. Several different policy experiments demonstrate the value added by the model and how its results compare to top-down and bottom-up practice. In general, the results show that technical adjustments make up about half the response to simulated energy policy, and macroeconomic demand adjustments the other half. Induced technical adjustments predominate with minor policies, while the importance of macroeconomic demand adjustment increases with the strength of the policy.
Results are also shown for an experiment to derive estimates of future elasticity of substitution (ESUB) and autonomous energy efficiency indices (AEEI) from the model, parameters that could be used in long-run computable general equilibrium (CGE) analysis. The thesis concludes with a summary of the strengths and weaknesses of the new model as a policy tool, a work plan for its further improvement, and a discussion of the general potential for technologically explicit general equilibrium modelling.
NASA Astrophysics Data System (ADS)
Easton, Z. M.; Fuka, D.; Collick, A.; Kleinman, P. J. A.; Auerbach, D.; Sommerlot, A.; Wagena, M. B.
2015-12-01
Topography exerts critical controls on many hydrologic, geomorphologic, and environmental biophysical processes. Unfortunately, many watershed modeling systems use topography only to define basin boundaries and stream channels and do not explicitly account for the topographic controls on processes such as soil genesis, soil moisture distributions and hydrological response. We develop and demonstrate a method that uses topography to spatially adjust soil morphological and soil hydrological attributes [soil texture, depth to the C-horizon, saturated conductivity, bulk density, porosity, and the field capacities at 33 kPa (~ field capacity) and 1500 kPa (~ wilting point) tensions]. To test the performance of the method, the topographically adjusted soils and the standard SSURGO soils (available at 1:20,000 scale) were overlaid on soil pedon pit data at the Grassland Soil and Water Research Lab in Riesel, TX. The topographically adjusted soils exhibited significant correlations with measurements from the soil pits, while the SSURGO soil data showed almost no correlation with measured data. We also applied the method to 15 separate fields in the Grassland Soil and Water Research watershed using the Soil and Water Assessment Tool (SWAT) model as a proxy to propagate changes in soil properties into field-scale hydrological responses. Results of this test showed that the topographically adjusted soils produced better model predictions of field runoff in 50% of the fields, with the SSURGO soils performing better in the remainder. However, the topographically adjusted soils generally predicted baseflow response more accurately, reflecting the influence of these soil properties on non-storm responses. These results indicate that adjusting soil properties based on topography can result in more accurate soil characterization and, in some cases, improve model performance.
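The flavor of such a topographic adjustment can be sketched with the classic topographic wetness index. This is a heavily hedged illustration: the TWI formula is standard, but the linear depth-scaling rule and all numeric parameters below are invented, not the paper's actual adjustment functions.

```python
# Hedged sketch: scale a mapped soil attribute by the topographic wetness
# index TWI = ln(a / tan(beta)), so convergent, low-slope positions get
# deeper profiles than steep, divergent ones. Scaling rule is illustrative.
import math

def topographic_wetness_index(upslope_area_m2, slope_rad):
    """TWI = ln(a / tan(beta)); slope is floored to avoid division by zero."""
    return math.log(upslope_area_m2 / max(math.tan(slope_rad), 1e-3))

def adjust_soil_depth(mapped_depth_cm, twi, twi_mean, sensitivity=0.05):
    """Deepen soil where TWI exceeds the landscape mean, thin it below."""
    return mapped_depth_cm * (1.0 + sensitivity * (twi - twi_mean))

twi_ridge = topographic_wetness_index(50.0, math.radians(15.0))    # steep shoulder
twi_hollow = topographic_wetness_index(5000.0, math.radians(2.0))  # convergent hollow
twi_mean = (twi_ridge + twi_hollow) / 2

depth_ridge = adjust_soil_depth(100.0, twi_ridge, twi_mean)
depth_hollow = adjust_soil_depth(100.0, twi_hollow, twi_mean)
print(round(depth_ridge, 1), round(depth_hollow, 1))
```

The same mapped 100 cm profile is thinned on the ridge and deepened in the hollow, which is the qualitative behavior the abstract describes for depth to the C-horizon.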
Bayesian effect estimation accounting for adjustment uncertainty.
Wang, Chi; Parmigiani, Giovanni; Dominici, Francesca
2012-09-01
Model-based estimation of the effect of an exposure on an outcome is generally sensitive to the choice of which confounding factors are included in the model. We propose a new approach, which we call Bayesian adjustment for confounding (BAC), to estimate the effect of an exposure of interest on the outcome while accounting for the uncertainty in the choice of confounders. Our approach is based on specifying two models: (1) the outcome as a function of the exposure and the potential confounders (the outcome model); and (2) the exposure as a function of the potential confounders (the exposure model). We consider Bayesian variable selection on both models and link the two by introducing a dependence parameter, ω, denoting the prior odds of including a predictor in the outcome model, given that the same predictor is in the exposure model. In the absence of dependence (ω = 1), BAC reduces to traditional Bayesian model averaging (BMA). In simulation studies, we show that BAC with ω > 1 estimates the exposure effect with smaller bias than traditional BMA, and with improved coverage. We then compare BAC, a recent approach of Crainiceanu, Dominici, and Parmigiani (2008, Biometrika 95, 635-651), and traditional BMA in a time series data set of hospital admissions, air pollution levels, and weather variables in Nassau, NY for the period 1999-2005. Using each approach, we estimate the short-term effects of air pollution on emergency admissions for cardiovascular diseases, accounting for confounding. This application illustrates the potentially significant pitfalls of misusing variable selection methods in the context of adjustment uncertainty. © 2012, The International Biometric Society.
Study of Personalized Network Tutoring System Based on Emotional-cognitive Interaction
NASA Astrophysics Data System (ADS)
Qi, Manfei; Ma, Ding; Wang, Wansen
To address the lack of emotional interaction in present network tutoring systems, the resulting negative effects are analyzed and corresponding countermeasures are proposed. A model of a personalized network tutoring system based on emotional-cognitive interaction is constructed in the paper. The key techniques for realizing the system, such as constructing the emotional model and adjusting teaching strategies, are also introduced.
Error analysis of mechanical system and wavelength calibration of monochromator
NASA Astrophysics Data System (ADS)
Zhang, Fudong; Chen, Chen; Liu, Jie; Wang, Zhihong
2018-02-01
This study focuses on improving the accuracy of a grating monochromator on the basis of the grating diffraction equation in combination with an analysis of the mechanical transmission relationship between the grating, the sine bar, and the screw of the scanning mechanism. First, the relationship between the mechanical error in the monochromator with the sine drive and the wavelength error is analyzed. Second, a mathematical model of the wavelength error and mechanical error is developed, and an accurate wavelength calibration method based on the sine bar's length adjustment and error compensation is proposed. Based on the mathematical model and calibration method, experiments using a standard light source with known spectral lines and a pre-adjusted sine bar length are conducted. The model parameter equations are solved, and subsequent parameter optimization simulations are performed to determine the optimal length ratio. Lastly, the length of the sine bar is adjusted. The experimental results indicate that the wavelength accuracy is ±0.3 nm, which is better than the original accuracy of ±2.6 nm. The results confirm the validity of the error analysis of the mechanical system of the monochromator as well as the validity of the calibration method.
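The sine-drive geometry behind this calibration can be made concrete with a small numeric sketch. This is a simplified illustration, not the paper's model: the grating period, the constant `k` standing in for the geometry-dependent factor of the grating equation, and all lengths below are assumed values.

```python
# Hedged sketch of a sine-bar monochromator: the grating angle satisfies
# sin(theta) = x / L (screw position x, sine-bar length L), so wavelength
# is proportional to x / L. An error in the assumed L maps directly into
# a wavelength error, which is what the calibration compensates.
def wavelength_nm(x_mm, L_mm, grating_period_nm=1000.0, order=1, k=2.0):
    """lambda = k * d * sin(theta) / m, with sin(theta) = x / L (illustrative k, d)."""
    return k * grating_period_nm * (x_mm / L_mm) / order

L_nominal, L_true = 100.0, 100.5   # sine-bar length: assumed vs. actual (mm)
x = 25.0                           # screw position (mm)

lam_indicated = wavelength_nm(x, L_nominal)  # what the dial reads
lam_actual = wavelength_nm(x, L_true)        # what the grating delivers
error = lam_indicated - lam_actual
print(f"indicated {lam_indicated:.2f} nm, actual {lam_actual:.2f} nm, error {error:.3f} nm")
```

A 0.5 mm error in a 100 mm sine bar already shifts the delivered wavelength by a few nanometres at mid-scan, which is why adjusting the sine-bar length is an effective calibration knob.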
A three-stage birandom program for unit commitment with wind power uncertainty.
Zhang, Na; Li, Weidong; Liu, Rao; Lv, Quan; Sun, Liang
2014-01-01
The integration of large-scale wind power adds significant uncertainty to power system planning and operation. Wind forecast error decreases as the forecast horizon shortens, particularly from one day to several hours ahead. Integrating an intraday unit commitment (UC) adjustment process based on updated ultra-short-term wind forecast information is one way to improve dispatching results. A novel three-stage UC decision method is presented, in which the day-ahead UC decisions are determined in the first stage, the intraday UC adjustment decisions of sub-fast-start units in the second stage, and the UC decisions of fast-start units and the dispatching decisions in the third stage. Accordingly, a three-stage birandom UC model is presented, in which the intraday hours-ahead forecasted wind power is formulated as a birandom variable, and the intraday UC adjustment event is formulated as a birandom event. An equilibrium chance constraint is employed to ensure the reliability requirement. A birandom-simulation-based hybrid genetic algorithm is designed to solve the proposed model. Computational results indicate that the proposed model provides UC decisions with lower expected total costs.
Independent Verification of Mars-GRAM 2010 with Mars Climate Sounder Data
NASA Technical Reports Server (NTRS)
Justh, Hilary L.; Burns, Kerry L.
2014-01-01
The Mars Global Reference Atmospheric Model (Mars-GRAM) is an engineering-level atmospheric model widely used for diverse mission and engineering applications. Applications of Mars-GRAM include systems design, performance analysis, and operations planning for aerobraking, entry, descent and landing, and aerocapture. Atmospheric influences on landing site selection and long-term mission conceptualization and development can also be addressed utilizing Mars-GRAM. Mars-GRAM's perturbation modeling capability is commonly used, in a Monte Carlo mode, to perform high-fidelity engineering end-to-end simulations for entry, descent, and landing. Mars-GRAM is an evolving software package resulting in improved accuracy and additional features. Mars-GRAM 2005 has been validated against Radio Science data, and both nadir and limb data from the Thermal Emission Spectrometer (TES). From the surface to 80 km altitude, Mars-GRAM is based on the NASA Ames Mars General Circulation Model (MGCM). Above 80 km, Mars-GRAM is based on the University of Michigan Mars Thermospheric General Circulation Model (MTGCM). The most recent release of Mars-GRAM 2010 includes an update to Fortran 90/95 and the addition of adjustment factors. These adjustment factors are applied to the input data from the MGCM and the MTGCM for the mapping year 0 user-controlled dust case. The adjustment factors are expressed as a function of height (z), latitude and areocentric solar longitude (Ls).
Wang, Chi-Chuan; Lin, Chia-Hui; Lin, Kuan-Yin; Chuang, Yu-Chung; Sheng, Wang-Huei
2016-01-01
Community-acquired pneumonia (CAP) is a common but potentially life-threatening condition, but limited information exists on the effectiveness of fluoroquinolones compared to β-lactams in outpatient settings. We aimed to compare the effectiveness and outcomes of penicillins versus respiratory fluoroquinolones for CAP at outpatient clinics. This was a claims-based retrospective cohort study. Patients aged 20 years or older with at least 1 new pneumonia treatment episode were included, and the index penicillin or respiratory fluoroquinolone therapies for a pneumonia episode were at least 5 days in duration. The 2 groups were matched by propensity scores. Cox proportional hazard models were used to compare the rates of hospitalizations/emergency service visits and 30-day mortality. A logistic model was used to compare the likelihood of treatment failure between the 2 groups. After propensity score matching, 2622 matched pairs were included in the final model. The likelihood of treatment failure of fluoroquinolone-based therapy was lower than that of penicillin-based therapy (adjusted odds ratio [AOR], 0.88; 95% confidence interval [95% CI], 0.77–0.99), but no differences were found in hospitalization/emergency service (ES) visits (adjusted hazard ratio [HR], 1.27; 95% CI, 0.92–1.74) or 30-day mortality (adjusted HR, 0.69; 95% CI, 0.30–1.62) between the 2 groups. The likelihood of treatment failure of fluoroquinolone-based therapy was lower than that of penicillin-based therapy for CAP on an outpatient clinic basis. However, this effect may be marginal. Further investigation into the comparative effectiveness of these 2 treatment options is warranted. PMID:26871827
Automated map sharpening by maximization of detail and connectivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terwilliger, Thomas C.; Sobolev, Oleg V.; Afonine, Pavel V.
2018-05-18
An algorithm for automatic map sharpening is presented that is based on optimization of the detail and connectivity of the sharpened map. The detail in the map is reflected in the surface area of an iso-contour surface that contains a fixed fraction of the volume of the map, where a map with a high level of detail has a high surface area. The connectivity of the sharpened map is reflected in the number of connected regions defined by the same iso-contour surfaces, where a map with high connectivity has a small number of connected regions. By combining these two measures in a metric termed the 'adjusted surface area', map quality can be evaluated in an automated fashion. This metric was used to choose optimal map-sharpening parameters without reference to a model or other interpretations of the map. Map sharpening by optimization of the adjusted surface area can be carried out for a map as a whole or it can be carried out locally, yielding a locally sharpened map. To evaluate the performance of various approaches, a simple metric based on map–model correlation that can reproduce visual choices of optimally sharpened maps was used. The map–model correlation is calculated using a model with B factors (atomic displacement parameters; ADPs) set to zero. Finally, this model-based metric was used to evaluate map sharpening and map-sharpening approaches, and it was found that optimization of the adjusted surface area can be an effective tool for map sharpening.
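The detail-versus-connectivity trade-off can be illustrated on a voxel grid. This is a toy proxy, not the published algorithm: it counts exposed voxel faces instead of true iso-surface area, and combines the two measures with an invented linear penalty rather than the paper's actual 'adjusted surface area' formula.

```python
# Hedged sketch: threshold a 3-D map at a fixed volume fraction, then score
# it by a surface-area proxy (exposed voxel faces) minus a penalty per
# connected region. A compact feature scores higher than scattered noise.
import numpy as np
from collections import deque

def count_regions(mask):
    """Number of 6-connected regions of True voxels (BFS flood fill)."""
    seen = np.zeros(mask.shape, dtype=bool)
    regions = 0
    for start in map(tuple, np.argwhere(mask)):
        if seen[start]:
            continue
        regions += 1
        seen[start] = True
        queue = deque([start])
        while queue:
            x, y, z = queue.popleft()
            for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                nb = (x + dx, y + dy, z + dz)
                if (all(0 <= nb[i] < mask.shape[i] for i in range(3))
                        and mask[nb] and not seen[nb]):
                    seen[nb] = True
                    queue.append(nb)
    return regions

def adjusted_surface_area(density, volume_fraction, penalty=10.0):
    """Surface-area proxy minus an (illustrative) penalty per region."""
    thr = np.quantile(density, 1.0 - volume_fraction)
    mask = density >= thr
    faces = sum(int(np.sum(np.swapaxes(mask, 0, ax)[1:]
                           != np.swapaxes(mask, 0, ax)[:-1]))
                for ax in range(3))
    return faces - penalty * count_regions(mask)

blob = np.zeros((6, 6, 6))
blob[2:4, 2:4, 2:4] = 1.0          # one compact 2x2x2 feature
scattered = np.zeros((6, 6, 6))
for p in [(1, 1, 1), (1, 1, 4), (1, 4, 1), (1, 4, 4),
          (4, 1, 1), (4, 1, 4), (4, 4, 1), (4, 4, 4)]:
    scattered[p] = 1.0             # eight isolated voxels, same volume

frac = 8 / 216
score_blob = adjusted_surface_area(blob, frac)
score_scattered = adjusted_surface_area(scattered, frac)
print(score_blob, score_scattered)
```

The scattered map has more raw surface but many regions, so the penalty flips the ranking, mirroring how the published metric rewards detail while disfavoring fragmentation.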
Performance of diagnosis-based risk adjustment measures in a population of sick Australians.
Duckett, S J; Agius, P A
2002-12-01
Australia is beginning to explore 'managed competition' as an organising framework for the health care system. This requires setting fair capitation rates, i.e. rates that adjust for the risk profile of covered lives. This paper tests two US-developed risk adjustment approaches using Australian data. Data from the 'co-ordinated care' dataset (which incorporates all service costs of 16,538 participants in a large health service research project conducted in 1996-99) were grouped into homogeneous risk categories using risk adjustment 'grouper software'. The grouper products yielded three sets of homogeneous categories, including Diagnostic Groups and Diagnostic Cost Groups. A two-stage analysis of predictive power was used: probability of any service use in the concurrent year, next year and the year after (logistic regression) and, for service users, a regression of logged cost of service use. The independent variables were age, gender, an SES variable and the diagnosis-based risk categories. Age, gender and diagnosis-based risk adjustment measures explain around 40-45% of variation in costs of service use in the current year for untrimmed data (compared with around 15% for age and gender alone). Prediction of subsequent use is much poorer (around 20%). Using more information to assign people to risk categories generally improves prediction. The predictive power of diagnosis-based risk adjusters on this Australian dataset is similar to that reported elsewhere. Low predictive power carries policy risks of cream skimming rather than managing population health and care. Competitive funding models with risk adjustment on prior year experience could reduce system efficiency if implemented with current risk adjustment technology.
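The two-stage ("two-part") structure described above can be sketched numerically. The coefficients below are toy values, not estimates from the co-ordinated care dataset, and the model names and smearing factor are illustrative assumptions.

```python
# Hedged sketch of a two-part cost model: stage 1 is a logistic model for
# any service use; stage 2 models log(cost) among users; expected cost is
# their product. All coefficients are invented for illustration.
import math

def p_any_use(age, morbidity, b0=-2.0, b_age=0.02, b_morb=0.8):
    """Stage 1: probability of any service use (logistic)."""
    z = b0 + b_age * age + b_morb * morbidity
    return 1.0 / (1.0 + math.exp(-z))

def expected_cost_if_user(age, morbidity, g0=6.0, g_age=0.01, g_morb=0.5,
                          smearing=1.1):
    """Stage 2: log-cost model; 'smearing' retransforms E[log cost] to E[cost]."""
    return smearing * math.exp(g0 + g_age * age + g_morb * morbidity)

def expected_cost(age, morbidity):
    return p_any_use(age, morbidity) * expected_cost_if_user(age, morbidity)

low = expected_cost(age=40, morbidity=0)
high = expected_cost(age=75, morbidity=3)
print(round(low), round(high))
```

Splitting the model this way handles the mass of zero-cost enrollees separately from the skewed positive costs, which is why two-stage analyses are common in capitation research.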
The AFIS tree growth model for updating annual forest inventories in Minnesota
Margaret R. Holdaway
2000-01-01
As the Forest Service moves towards annual inventories, states may use model predictions of growth to update unmeasured plots. A tree growth model (AFIS) based on the scaled Weibull function and using the average-adjusted model form is presented. Annual diameter growth for four species was modeled using undisturbed plots from Minnesota's Aspen-Birch and Northern...
Pfeiffer, R M; Riedl, R
2015-08-15
We assess the asymptotic bias of estimates of exposure effects conditional on covariates when summary scores of confounders, instead of the confounders themselves, are used to analyze observational data. First, we study regression models for cohort data that are adjusted for summary scores. Second, we derive the asymptotic bias for case-control studies when cases and controls are matched on a summary score, and then analyzed either using conditional logistic regression or by unconditional logistic regression adjusted for the summary score. Two scores, the propensity score (PS) and the disease risk score (DRS), are studied in detail. For cohort analysis, when regression models are adjusted for the PS, the estimated conditional treatment effect is unbiased only for linear models, or at the null for non-linear models. Adjustment of cohort data for DRS yields unbiased estimates only for linear regression; all other estimates of exposure effects are biased. Matching cases and controls on DRS and analyzing them using conditional logistic regression yields unbiased estimates of exposure effect, whereas adjusting for the DRS in unconditional logistic regression yields biased estimates, even under the null hypothesis of no association. Matching cases and controls on the PS yields unbiased estimates only under the null for both conditional and unconditional logistic regression, adjusted for the PS. We study the bias for various confounding scenarios and compare our asymptotic results with those from simulations with limited sample sizes. To create realistic correlations among multiple confounders, we also based simulations on a real dataset. Copyright © 2015 John Wiley & Sons, Ltd.
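The linear-model case discussed above, where PS adjustment recovers the conditional effect, can be demonstrated with a small synthetic simulation. The data-generating process and effect sizes below are invented; with a single confounder the true PS is an invertible function of it, so adjusting for logit(PS) is equivalent to adjusting for the confounder itself.

```python
# Hedged sketch: a single confounder c drives both exposure x and outcome y.
# The unadjusted regression of y on x is biased; a linear model adjusted
# for logit of the true propensity score recovers the effect (true value 2).
import numpy as np

rng = np.random.default_rng(1)
n = 20000
c = rng.normal(0.0, 1.0, n)                       # confounder
ps = 1.0 / (1.0 + np.exp(-c))                     # true propensity score
x = (rng.uniform(size=n) < ps).astype(float)      # exposure depends on c
y = 2.0 * x + 1.5 * c + rng.normal(0.0, 1.0, n)   # true exposure effect = 2

def ols_coef_on_x(y, cols):
    X = np.column_stack([np.ones(len(y))] + cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]                                # coefficient on x

naive = ols_coef_on_x(y, [x])                     # confounded estimate
adjusted = ols_coef_on_x(y, [x, np.log(ps / (1.0 - ps))])  # PS-adjusted
print(round(naive, 2), round(adjusted, 2))
```

The unadjusted coefficient is inflated well above 2, while the PS-adjusted estimate is close to the true value, consistent with the paper's linear-model result; the same recipe does not carry over to non-linear models, which is the paper's central warning.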
Huesch, Marco D; Currid-Halkett, Elizabeth; Doctor, Jason N
2014-05-01
The rate of prelabor cesarean delivery in women without a prior cesarean is an important quality measure, yet one that is seldom tracked. We estimated patient-level risks and calculated how sensitive hospital rankings on this proposed quality metric were to risk adjustment. This retrospective cohort study linked Californian patient data from the Agency for Healthcare Research and Quality with hospital-level operational and financial data. Using the outcome of primary prelabor cesarean, we estimated patient-level logistic regressions in progressively more detailed models. We assessed incremental fit and discrimination, and aggregated the predicted patient-level event probabilities to construct hospital-level rankings. Of 408,355 deliveries by women without prior cesareans at 254 hospitals, 11.0% were prelabor cesareans. Including age, ethnicity, race, insurance, weekend and unscheduled admission, and 12 well-known patient risk factors yielded a model c-statistic of 0.83. Further maternal comorbidities and hospital and obstetric unit characteristics only marginally improved fit. Risk-adjusting the hospital rankings led to a median absolute change in rank of 44 places compared to rankings based on observed rates. Of the 48 (49) hospitals identified as in the best (worst) quintile on observed rates, only 23 (18) were so identified by the risk-adjusted model. Models predict primary prelabor cesareans with good discrimination. Systematic hospital-level variation in patient risk factors requires risk adjustment to avoid considerably different classification of hospitals by outcome performance. An opportunity exists to define this metric and report such risk-adjusted outcomes to stakeholders. Copyright © 2014 Mosby, Inc. All rights reserved.
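The aggregation step, turning patient-level predicted probabilities into risk-adjusted hospital rankings, can be sketched with an observed/expected (O/E) ratio. The data and the O/E ranking rule below are illustrative assumptions, not the study's exact methodology.

```python
# Hedged sketch of risk-adjusted ranking: sum each hospital's predicted
# event probabilities into an expected count, then rank on observed/expected
# rather than the raw rate. Toy data only.
def rank_hospitals(records):
    """records: list of (hospital, observed_event_0_or_1, predicted_risk)."""
    totals = {}
    for hosp, obs, pred in records:
        o, e = totals.get(hosp, (0.0, 0.0))
        totals[hosp] = (o + obs, e + pred)
    oe = {h: (o / e if e else float("nan")) for h, (o, e) in totals.items()}
    return sorted(oe, key=oe.get)  # best (lowest O/E) first

# Both hospitals have the same raw event rate (2 of 3), but hospital A
# treats high-predicted-risk patients while hospital B treats low-risk ones.
toy = [("A", 1, 0.9), ("A", 1, 0.8), ("A", 0, 0.7),
       ("B", 1, 0.1), ("B", 1, 0.2), ("B", 0, 0.1)]
ranking = rank_hospitals(toy)
print(ranking)
```

Unadjusted rankings would tie the two hospitals; the O/E ratio separates them, which is the mechanism behind the large rank shifts the abstract reports.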
Maciejewski, Matthew L.; Liu, Chuan-Fen; Fihn, Stephan D.
2009-01-01
OBJECTIVE—To compare the ability of generic comorbidity and risk adjustment measures, a diabetes-specific measure, and a self-reported functional status measure to explain variation in health care expenditures for individuals with diabetes. RESEARCH DESIGN AND METHODS—This study included a retrospective cohort of 3,092 diabetic veterans participating in a multisite trial. Two comorbidity measures, four risk adjusters, a functional status measure, a diabetes complication count, and baseline expenditures were constructed from administrative and survey data. Outpatient, inpatient, and total expenditure models were estimated using ordinary least squares regression. Adjusted R2 statistics and predictive ratios were compared across measures to assess overall explanatory power and explanatory power of low- and high-cost subgroups. RESULTS—Administrative data–based risk adjusters performed better than the comorbidity, functional status, and diabetes-specific measures in all expenditure models. The diagnostic cost groups (DCGs) measure had the greatest predictive power overall and for the low- and high-cost subgroups, while the diabetes-specific measure had the lowest predictive power. A model with DCGs and the diabetes-specific measure modestly improved predictive power. CONCLUSIONS—Existing generic measures can be useful for diabetes-specific research and policy applications, but more predictive diabetes-specific measures are needed. PMID:18945927
TREATMENT SWITCHING: STATISTICAL AND DECISION-MAKING CHALLENGES AND APPROACHES.
Latimer, Nicholas R; Henshall, Chris; Siebert, Uwe; Bell, Helen
2016-01-01
Treatment switching refers to the situation in a randomized controlled trial where patients switch from their randomly assigned treatment onto an alternative. Often, switching is from the control group onto the experimental treatment. In this instance, a standard intention-to-treat analysis does not identify the true comparative effectiveness of the treatments under investigation. We aim to describe statistical methods for adjusting for treatment switching in a comprehensible way for nonstatisticians, and to summarize views on these methods expressed by stakeholders at the 2014 Adelaide International Workshop on Treatment Switching in Clinical Trials. We describe three statistical methods used to adjust for treatment switching: marginal structural models, two-stage adjustment, and rank preserving structural failure time models. We draw upon discussion heard at the Adelaide International Workshop to explore the views of stakeholders on the acceptability of these methods. Stakeholders noted that adjustment methods are based on assumptions, the validity of which may often be questionable. There was disagreement on the acceptability of adjustment methods, but consensus that when these are used, they should be justified rigorously. The utility of adjustment methods depends upon the decision being made and the processes used by the decision-maker. Treatment switching makes estimating the true comparative effect of a new treatment challenging. However, many decision-makers have reservations with adjustment methods. These, and how they affect the utility of adjustment methods, require further exploration. Further technical work is required to develop adjustment methods to meet real world needs, to enhance their acceptability to decision-makers.
Bailit, Jennifer L; Grobman, William A; Rice, Madeline Murguia; Spong, Catherine Y; Wapner, Ronald J; Varner, Michael W; Thorp, John M; Leveno, Kenneth J; Caritis, Steve N; Shubert, Phillip J; Tita, Alan T; Saade, George; Sorokin, Yoram; Rouse, Dwight J; Blackwell, Sean C; Tolosa, Jorge E; Van Dorsten, J Peter
2013-11-01
Regulatory bodies and insurers evaluate hospital quality using obstetrical outcomes; however, meaningful comparisons should take preexisting patient characteristics into account. Furthermore, if risk-adjusted outcomes are consistent within a hospital, fewer measures and resources would be needed to assess obstetrical quality. Our objective was to establish risk-adjusted models for 5 obstetric outcomes and assess hospital performance across these outcomes. We studied a cohort of 115,502 women and their neonates born in 25 hospitals in the United States from March 2008 through February 2011. Hospitals were ranked according to their unadjusted and risk-adjusted frequency of venous thromboembolism, postpartum hemorrhage, peripartum infection, severe perineal laceration, and a composite neonatal adverse outcome. Correlations between hospital risk-adjusted outcome frequencies were assessed. Venous thromboembolism occurred too infrequently (0.03%; 95% confidence interval [CI], 0.02-0.04%) for meaningful assessment. Other outcomes occurred frequently enough for assessment (postpartum hemorrhage, 2.29%; 95% CI, 2.20-2.38, peripartum infection, 5.06%; 95% CI, 4.93-5.19, severe perineal laceration at spontaneous vaginal delivery, 2.16%; 95% CI, 2.06-2.27, neonatal composite, 2.73%; 95% CI, 2.63-2.84). Although there was high concordance between unadjusted and adjusted hospital rankings, several individual hospitals had an adjusted rank that was substantially different (as much as 12 rank tiers) from their unadjusted rank. None of the correlations between hospital-adjusted outcome frequencies was significant. For example, the hospital with the lowest adjusted frequency of peripartum infection had the highest adjusted frequency of severe perineal laceration. Evaluations based on a single risk-adjusted outcome cannot be generalized to overall hospital obstetric performance. Copyright © 2013 Mosby, Inc. All rights reserved.
Cranford, James A.; Floyd, Frank J.; Schulenberg, John E.; Zucker, Robert A.
2011-01-01
This longitudinal study tested the hypothesis that marital interactions mediate the associations between wives' and husbands' lifetime alcoholism status and their subsequent marital adjustment. Participants were 105 couples from the Michigan Longitudinal Study (MLS), an ongoing multimethod investigation of substance use in a community-based sample of alcoholics, nonalcoholics, and their families. At baseline (T1), husbands and wives completed a series of diagnostic measures, and lifetime DSM-IV diagnosis of alcohol use disorder (AUD) was assessed. Couples completed a problem-solving marital interaction task 3 years later at T2, from which the ratio of positive to negative behaviors (P/N) was calculated. Couples also completed the Dyadic Adjustment Scale (DAS; Spanier, 1976) at T4 (9 years after T1 and 6 years after T2). Moderate to strong positive correlations were observed between husbands' and wives' lifetime AUD, P/N ratio, and dyadic adjustment. Based on an Actor-Partner Interdependence Model (APIM) framework, results from structural equation modeling showed that husbands' lifetime AUD was negatively associated with wives' P/N ratio at the 3-year point, but was not related to their own or their wives' marital adjustment 9 years from baseline. However, wives' lifetime AUD had direct negative associations with their own and their husbands' marital satisfaction 9 years later, and wives' P/N ratio was positively related to their own and their husbands' marital satisfaction 6 years later. Results indicate that marital adjustment in alcoholic couples may be driven more by the wives' than the husbands' AUD and marital behavior. PMID:21133510
The evolution of reputation-based partner-switching behaviors with a cost
Li, Yixiao
2014-01-01
Humans constantly adjust their social relationships and choose new partners of good reputations, thereby promoting the evolution of cooperation. Individuals have to pay a cost to build a reputation, obtain others' information and then make partnership adjustments, yet the conditions under which such costly behaviors are able to evolve remain to be explored. In this model, I assume that individuals have to pay a cost to adjust their partnerships. Furthermore, whether an individual can adjust his partnership based on reputation is determined by his strategic preference, which is updated via coevolution. Using the metaphor of a public goods game where the collective benefit is shared among all members of a group, the coupling dynamics of cooperation and partnership adjustment were numerically simulated. Partner-switching behavior cannot evolve in a public goods game with a low amplification factor. However, such an effect can be exempted by raising the productivity of public goods or the frequency of partnership adjustment. Moreover, costly partner-switching behavior is remarkably promoted by the condition that the mechanism of reputation evaluation considers its prosociality. A mechanism of reputation evaluation that praises both cooperative and partner-switching behaviors allows them to coevolve. PMID:25091006
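The public-goods payoff structure underlying such models can be written in a few lines. This is a generic sketch of the standard game, not the paper's simulation: the amplification factor, contribution size, and the simple switch-cost rule are illustrative assumptions.

```python
# Hedged sketch of a public goods game with a costly switching rule: each
# cooperator contributes c, the pot is multiplied by r and shared equally;
# an individual switches partners only when the expected gain from a better
# group outweighs the cost of tracking reputations and moving.
def pgg_payoffs(n_cooperators, group_size, r, c=1.0):
    pot = r * c * n_cooperators
    share = pot / group_size
    return share - c, share   # (cooperator payoff, defector payoff)

coop_pay, defect_pay = pgg_payoffs(n_cooperators=3, group_size=5, r=3.0)
print(coop_pay, defect_pay)   # defectors free-ride on the shared pot

def switch_if_worthwhile(defector_fraction, switch_cost, expected_gain=1.0):
    """Illustrative rule: switch when the expected payoff gain from leaving
    a defector-heavy group exceeds the cost of the move."""
    return expected_gain * defector_fraction > switch_cost
```

Because the defector always earns the cooperator's share plus the saved contribution, cooperation needs an extra mechanism, here reputation-based partner switching, to be sustained; the switch-cost rule captures why a low amplification factor (small gains) suppresses costly switching.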
Enfield, Kyle B; Schafer, Katherine; Zlupko, Mike; Herasevich, Vitaly; Novicoff, Wendy M; Gajic, Ognjen; Hoke, Tracey R; Truwit, Jonathon D
2012-01-01
Hospitals are increasingly compared based on clinical outcomes adjusted for severity of illness. Multiple methods exist to adjust for differences between patients. The challenge for consumers of this information, both the public and healthcare providers, is interpreting differences in risk adjustment models, particularly when models differ in their use of administrative and physiologic data. We set out to examine how administrative and physiologic models compare to each other when applied to critically ill patients. We prospectively abstracted variables for a physiologic and an administrative model of mortality from two intensive care units in the United States. Predicted mortality was compared through the Pearson product-moment correlation coefficient and Bland-Altman analysis. A subgroup of patients admitted directly from the emergency department was analyzed to remove potential confounding from changes in condition prior to ICU admission. We included 556 patients from two academic medical centers in this analysis. The administrative and physiologic models' predicted mortalities for the combined cohort were 15.3% (95% CI 13.7%, 16.8%) and 24.6% (95% CI 22.7%, 26.5%), respectively (t-test p-value<0.001). The r(2) for these models was 0.297. The Bland-Altman plot suggests that at low predicted mortality there was good agreement; however, as mortality increased, the models diverged. Similar results were found when analyzing a subgroup of patients admitted directly from the emergency department. When comparing the two hospitals, there was a statistical difference when using the administrative model but not the physiologic model. Unexplained mortality, defined as patients who died despite a predicted mortality of less than 10%, was a rare event by either model. 
In conclusion, while it has been shown that administrative models provide estimates of mortality similar to those of physiologic models in non-critically ill patients with pneumonia, our results suggest this finding cannot be applied globally to patients admitted to intensive care units. As patients and providers increasingly use publicly reported information in making health care decisions and referrals, it is critical that the provided information be understood. Our results suggest that severity of illness may influence the mortality index in administrative models. We suggest that when interpreting "report cards" or metrics, health care providers determine how the risk adjustment was made and how it compares to other risk adjustment models.
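As a rough illustration of the agreement analysis named above, the following sketch computes a Bland-Altman bias and 95% limits of agreement for two sets of predicted mortalities; the numbers are invented for illustration, not the study's data.

```python
def bland_altman(pred_a, pred_b):
    """Return the mean difference (bias) and 95% limits of agreement."""
    diffs = [a - b for a, b in zip(pred_a, pred_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical predicted mortalities: administrative vs physiologic model
admin = [0.05, 0.10, 0.20, 0.40, 0.60]
physio = [0.06, 0.12, 0.28, 0.55, 0.80]
bias, (lo, hi) = bland_altman(admin, physio)
print(f"bias={bias:.3f}, 95% LoA=({lo:.3f}, {hi:.3f})")
```

In this toy input the differences grow with predicted mortality, mirroring the divergence at high risk that the abstract reports.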
Modeling erosion under future climates with the WEPP model
Timothy Bayley; William Elliot; Mark A. Nearing; D. Phillp Guertin; Thomas Johnson; David Goodrich; Dennis Flanagan
2010-01-01
The Water Erosion Prediction Project Climate Assessment Tool (WEPPCAT) was developed to be an easy-to-use, web-based erosion model that allows users to adjust climate inputs for user-specified climate scenarios. WEPPCAT allows the user to modify monthly mean climate parameters, including maximum and minimum temperatures, number of wet days, precipitation, and...
NASA Astrophysics Data System (ADS)
Verma, Aman; Mahesh, Krishnan
2012-08-01
The dynamic Lagrangian averaging approach for the dynamic Smagorinsky model for large eddy simulation is extended to an unstructured grid framework and applied to complex flows. The Lagrangian time scale is dynamically computed from the solution and does not need any adjustable parameter. The time scale used in the standard Lagrangian model contains an adjustable parameter θ. The dynamic time scale is computed based on a "surrogate-correlation" of the Germano-identity error (GIE). Also, a simple material derivative relation is used to approximate GIE at different events along a pathline instead of Lagrangian tracking or multi-linear interpolation. Previously, the time scale for homogeneous flows was computed by averaging along directions of homogeneity. The present work proposes modifications for inhomogeneous flows. This development allows the Lagrangian averaged dynamic model to be applied to inhomogeneous flows without any adjustable parameter. The proposed model is applied to LES of turbulent channel flow on unstructured zonal grids at various Reynolds numbers. Improvement is observed when compared to other averaging procedures for the dynamic Smagorinsky model, especially at coarse resolutions. The model is also applied to flow over a cylinder at two Reynolds numbers and good agreement with previous computations and experiments is obtained. Noticeable improvement is obtained using the proposed model over the standard Lagrangian model. The improvement is attributed to a physically consistent Lagrangian time scale. The model also shows good performance when applied to flow past a marine propeller in an off-design condition; it regularizes the eddy viscosity and adjusts locally to the dominant flow features.
Lee, Kwangsoo; Lee, Sangil
2007-05-01
This study explored the effects of the diagnosis-related group (DRG)-based prospective payment system (PPS) operated by voluntarily participating organizations on the cesarean section (CS) rates, and analyzed whether the participating health care organizations had similar CS rates despite the varied participation periods. The study sample included delivery claims data from the Korean national health insurance program for the year 2003. Risk factors were identified and used in the adjustment model to distinguish the main reason for CS. Their risk-adjusted CS rates were compared by the reimbursement methods, and the organizations' internal and external environments were controlled. The final risk-adjustment model for the CS rates meets the criteria for an effective model. There were no significant differences of CS rates between providers in the DRG and fee-for-service system after controlling for organizational variables. The CS rates did not vary significantly depending on the providers' DRG participation periods. The results provide evidence that the DRG payment system operated by volunteering health care organizations had no impact on the CS rates, which can lower the quality of care. Although the providers joined the DRG system in different years, there were no differences in the CS rates among the DRG providers. These results support the future expansion of the DRG-based PPS plan to all health care services in Korea.
Kang, Le; Carter, Randy; Darcy, Kathleen; Kauderer, James; Liao, Shu-Yuan
2013-01-01
In this article we use a latent class model (LCM) with prevalence modeled as a function of covariates to assess diagnostic test accuracy in situations where the true disease status is not observed, but observations on three or more conditionally independent diagnostic tests are available. A fast Monte Carlo EM (MCEM) algorithm with binary (disease) diagnostic data is implemented to estimate parameters of interest; namely, sensitivity, specificity, and prevalence of the disease as a function of covariates. To obtain standard errors for confidence interval construction of estimated parameters, the missing information principle is applied to adjust information matrix estimates. We compare the adjusted information matrix based standard error estimates with the bootstrap standard error estimates both obtained using the fast MCEM algorithm through an extensive Monte Carlo study. Simulation demonstrates that the adjusted information matrix approach estimates the standard error similarly with the bootstrap methods under certain scenarios. The bootstrap percentile intervals have satisfactory coverage probabilities. We then apply the LCM analysis to a real data set of 122 subjects from a Gynecologic Oncology Group (GOG) study of significant cervical lesion (S-CL) diagnosis in women with atypical glandular cells of undetermined significance (AGC) to compare the diagnostic accuracy of a histology-based evaluation, a CA-IX biomarker-based test and a human papillomavirus (HPV) DNA test. PMID:24163493
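The bootstrap standard errors compared in the abstract above can be sketched as follows; the binary data and the simple prevalence estimator are hypothetical stand-ins, not the GOG study's latent class model.

```python
import random

def bootstrap_se(data, estimator, n_boot=2000, seed=7):
    """Bootstrap standard error: resample with replacement, re-estimate,
    and take the standard deviation of the resampled estimates."""
    rng = random.Random(seed)
    n = len(data)
    stats = [estimator([data[rng.randrange(n)] for _ in range(n)])
             for _ in range(n_boot)]
    mean = sum(stats) / n_boot
    return (sum((s - mean) ** 2 for s in stats) / (n_boot - 1)) ** 0.5

# Hypothetical binary disease indicators for 122 subjects (30 diseased)
data = [1] * 30 + [0] * 92
prevalence = lambda xs: sum(xs) / len(xs)
se_boot = bootstrap_se(data, prevalence)
se_analytic = (prevalence(data) * (1 - prevalence(data)) / len(data)) ** 0.5
print(f"bootstrap SE {se_boot:.4f} vs analytic SE {se_analytic:.4f}")
```

For this simple estimator the bootstrap and analytic standard errors agree closely, which is the kind of concordance the study reports between the bootstrap and adjusted information matrix approaches.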
Calhoun, William J.; Ameredes, Bill T.; King, Tonya S.; Icitovic, Nikolina; Bleecker, Eugene R.; Castro, Mario; Cherniack, Reuben M.; Chinchilli, Vernon M.; Craig, Timothy; Denlinger, Loren; DiMango, Emily A.; Engle, Linda L.; Fahy, John V.; Grant, J. Andrew; Israel, Elliot; Jarjour, Nizar; Kazani, Shamsah D.; Kraft, Monica; Kunselman, Susan J.; Lazarus, Stephen C.; Lemanske, Robert F.; Lugogo, Njira; Martin, Richard J.; Meyers, Deborah A.; Moore, Wendy C.; Pascual, Rodolfo; Peters, Stephen P.; Ramsdell, Joe; Sorkness, Christine A.; Sutherland, E. Rand; Szefler, Stanley J.; Wasserman, Stephen I.; Walter, Michael J.; Wechsler, Michael E.; Boushey, Homer A.
2013-01-01
Context No consensus exists for adjusting inhaled corticosteroid therapy in patients with asthma. Approaches include adjustment at outpatient visits guided by physician assessment of asthma control (symptoms, rescue therapy, pulmonary function), based on exhaled nitric oxide, or on a day-to-day basis guided by symptoms. Objective To determine if adjustment of inhaled corticosteroid therapy based on exhaled nitric oxide or day-to-day symptoms is superior to guideline-informed, physician assessment–based adjustment in preventing treatment failure in adults with mild to moderate asthma. Design, Setting, and Participants A randomized, parallel, 3-group, placebo-controlled, multiply-blinded trial of 342 adults with mild to moderate asthma controlled by low-dose inhaled corticosteroid therapy (n=114 assigned to physician assessment–based adjustment [101 completed], n=115 to biomarker-based [exhaled nitric oxide] adjustment [92 completed], and n=113 to symptom-based adjustment [97 completed]), the Best Adjustment Strategy for Asthma in the Long Term (BASALT) trial was conducted by the Asthma Clinical Research Network at 10 academic medical centers in the United States for 9 months between June 2007 and July 2010. Interventions For physician assessment–based adjustment and biomarker-based (exhaled nitric oxide) adjustment, the dose of inhaled corticosteroids was adjusted every 6 weeks; for symptom-based adjustment, inhaled corticosteroids were taken with each albuterol rescue use. Main Outcome Measure The primary outcome was time to treatment failure. Results There were no significant differences in time to treatment failure. The 9-month Kaplan-Meier failure rates were 22% (97.5% CI, 14%-33%; 24 events) for physician assessment–based adjustment, 20% (97.5% CI, 13%-30%; 21 events) for biomarker-based adjustment, and 15% (97.5% CI, 9%-25%; 16 events) for symptom-based adjustment. 
The hazard ratio for physician assessment–based adjustment vs biomarker-based adjustment was 1.2 (97.5% CI, 0.6-2.3). The hazard ratio for physician assessment–based adjustment vs symptom-based adjustment was 1.6 (97.5% CI, 0.8-3.3). Conclusion Among adults with mild to moderate persistent asthma controlled with low-dose inhaled corticosteroid therapy, the use of either biomarker-based or symptom-based adjustment of inhaled corticosteroids was not superior to physician assessment–based adjustment of inhaled corticosteroids in time to treatment failure. Trial Registration clinicaltrials.gov Identifier: NCT00495157 PMID:22968888
APPLICATION OF BIAS AND ADJUSTMENT TECHNIQUES TO THE ETA-CMAQ AIR QUALITY FORECAST
The current air quality forecast system, based on linking NOAA's Eta meteorological model with EPA's Community Multiscale Air Quality (CMAQ) model, consistently overpredicts surface ozone concentrations, but simulates its day-to-day variability quite well. The ability of bias cor...
The evaluation system of city's smart growth success rates
NASA Astrophysics Data System (ADS)
Huang, Yifan
2018-04-01
"Smart growth" pursues the best integrated performance across the economically prosperous, socially equitable, and environmentally sustainable (3E) dimensions. First, we establish the smart growth evaluation system (SGI) and the sustainable development evaluation system (SDI), based on the ten principles of smart growth and the definition of the three E's of sustainability. Using the Z-score method and principal component analysis, we evaluate and quantify the indexes synthetically. We then define the success of smart growth (SSG) as the ratio of the SDI to the SGI composite score growth rate. We select two cities, Canberra and Durres, as the objects of our model. Based on the development plans and key data of these two cities, we compute the success of their smart growth. According to our model, we then adjust some of the growth indicators for both cities, observe the results before and after adjustment, and finally verify the accuracy of the model.
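A minimal sketch of the index construction described above, with invented indicator values: each indicator is standardized with Z-scores and combined into an equal-weight composite (standing in for the paper's Z-score/principal-component scoring), and the success ratio is a quotient of two assumed growth rates.

```python
def z_scores(values):
    """Standardize one indicator across cities (population Z-scores)."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]

def composite(indicators):
    """Equal-weight composite of standardized indicators, one score per city."""
    zs = [z_scores(col) for col in indicators]
    n_cities = len(indicators[0])
    return [sum(z[i] for z in zs) / len(indicators) for i in range(n_cities)]

# Hypothetical 3E indicators for three cities (economy, equity, environment)
econ = [2.1, 1.4, 0.9]
equity = [0.7, 0.8, 0.6]
env = [55.0, 62.0, 70.0]
scores = composite([econ, equity, env])

# Success of smart growth (SSG): ratio of the SDI growth rate to the SGI
# growth rate, with invented rates for illustration
ssg = 0.06 / 0.04
print([round(s, 3) for s in scores], round(ssg, 2))
```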
Adaptive Failure Compensation for Aircraft Tracking Control Using Engine Differential Based Model
NASA Technical Reports Server (NTRS)
Liu, Yu; Tang, Xidong; Tao, Gang; Joshi, Suresh M.
2006-01-01
An aircraft model that incorporates independently adjustable engine throttles and ailerons is employed to develop an adaptive control scheme in the presence of actuator failures. This model captures the key features of aircraft flight dynamics when in the engine differential mode. Based on this model an adaptive feedback control scheme for asymptotic state tracking is developed and applied to a transport aircraft model in the presence of two types of failures during operation, rudder failure and aileron failure. Simulation results are presented to demonstrate the adaptive failure compensation scheme.
SensA: web-based sensitivity analysis of SBML models.
Floettmann, Max; Uhlendorf, Jannis; Scharp, Till; Klipp, Edda; Spiesser, Thomas W
2014-10-01
SensA is a web-based application for sensitivity analysis of mathematical models. The sensitivity analysis is based on metabolic control analysis, computing the local, global and time-dependent properties of model components. Interactive visualization facilitates interpretation of usually complex results. SensA can contribute to the analysis, adjustment and understanding of mathematical models for dynamic systems. SensA is available at http://gofid.biologie.hu-berlin.de/ and can be used with any modern browser. The source code can be found at https://bitbucket.org/floettma/sensa/ (MIT license) © The Author 2014. Published by Oxford University Press.
Kristensen, P; Nordhagen, R; Wergeland, E; Bjerkedal, T
2008-08-01
Pregnant women at work have special needs, and sick leave is common. Accordingly, job adjustment in pregnancy is addressed in European legislation. Our main objective was to examine whether job adjustment was associated with reduced absence. This study is based on the Norwegian Mother and Child Cohort Study (MoBa) conducted by the Norwegian Institute of Public Health. 28,611 employed women filled in questionnaires in weeks 17 and 30 of pregnancy. The risk of absence for more than 2 weeks was studied among those who were not absent in week 17 (n = 22,932), and the probability of return to work in week 30 among those who were absent in week 17 (n = 5679). Data were based on self-report. The influence of job adjustment (three categories: not needed, needed but not obtained, needed and obtained) was analysed in additive models using multivariable binomial regression. Associations with other job characteristics and work environment factors were also analysed. The risk of absence for more than 2 weeks was 0.308 and the probability of return to work was 0.137. Compared with women who needed but did not obtain job adjustment, obtained job adjustment was associated with a 0.107 decreased risk of absence (95% confidence interval 0.090 to 0.125) in a model including other job characteristics and work environment factors. Job adjustment was correspondingly associated with a 0.041 (0.023 to 0.059) increased probability of return to work. Absence was associated with an adverse work environment, whereas the opposite pattern was found for return to work among those who started off being absent. Job adjustment was associated with reduced absence from work in pregnancy. Results should be interpreted cautiously because of low participation in MoBa and potential information bias from self-reported data.
NASA Technical Reports Server (NTRS)
Martin, C. F.; Oh, I. H.
1979-01-01
Range rate tracking of GEOS 3 through the ATS 6 satellite was used, along with ground tracking of GEOS 3, to estimate the geocentric gravitational constant (GM). Using multiple half-day arcs, a GM of 398,600.52 ± 0.12 km³/s² was estimated using the GEM 10 gravity model, based on a speed of light of 299,792.458 km/s. Tracking station coordinates were simultaneously adjusted, leaving geopotential model error as the dominant error source. Baselines between the adjusted NASA laser sites show better than 15 cm agreement with multiple short-arc GEOS 3 solutions.
Bower, Hannah; Andersson, Therese M-L; Crowther, Michael J; Dickman, Paul W; Lambe, Mats; Lambert, Paul C
2018-04-01
Expected or reference mortality rates are commonly used in the calculation of measures such as relative survival in population-based cancer survival studies and standardized mortality ratios. These expected rates are usually presented according to age, sex, and calendar year. In certain situations, stratification of expected rates by other factors is required to avoid potential bias if interest lies in quantifying measures according to such factors as, for example, socioeconomic status. If data are not available on a population level, information from a control population could be used to adjust expected rates. We have presented two approaches for adjusting expected mortality rates using information from a control population: a Poisson generalized linear model and a flexible parametric survival model. We used a control group from BCBaSe-a register-based, matched breast cancer cohort in Sweden with diagnoses between 1992 and 2012-to illustrate the two methods using socioeconomic status as a risk factor of interest. Results showed that Poisson and flexible parametric survival approaches estimate similar adjusted mortality rates according to socioeconomic status. Additional uncertainty involved in the methods to estimate stratified, expected mortality rates described in this study can be accounted for using a parametric bootstrap, but this might make little difference if using a large control population.
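As a hedged illustration of the Poisson approach described above (toy counts, not BCBaSe data): with a single binary covariate, a saturated Poisson GLM with a log person-years offset reduces to a ratio of crude rates, which can then scale a baseline expected rate for the stratum of interest.

```python
def ses_rate_ratio(deaths_low, py_low, deaths_high, py_high):
    """exp(beta) from a saturated Poisson GLM with a log person-years offset;
    with one binary covariate this is just the ratio of crude rates."""
    return (deaths_low / py_low) / (deaths_high / py_high)

# Hypothetical control-population counts by socioeconomic status (SES)
rr = ses_rate_ratio(deaths_low=150, py_low=50_000, deaths_high=90, py_high=45_000)

# Scale a baseline expected rate (e.g. from an age/sex/year life table)
# to obtain a stratified expected rate for the low-SES group
baseline_rate = 0.002
adjusted_rate = baseline_rate * rr
print(f"SES rate ratio {rr:.2f}; adjusted expected rate {adjusted_rate:.4f}")
```

With several covariates or continuous effects, the full GLM (or the flexible parametric survival model the study compares it with) would be fitted instead; this closed form only holds in the one-binary-covariate case.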
Influence of birth cohort on age of onset cluster analysis in bipolar I disorder.
Bauer, M; Glenn, T; Alda, M; Andreassen, O A; Angelopoulos, E; Ardau, R; Baethge, C; Bauer, R; Bellivier, F; Belmaker, R H; Berk, M; Bjella, T D; Bossini, L; Bersudsky, Y; Cheung, E Y W; Conell, J; Del Zompo, M; Dodd, S; Etain, B; Fagiolini, A; Frye, M A; Fountoulakis, K N; Garneau-Fournier, J; Gonzalez-Pinto, A; Harima, H; Hassel, S; Henry, C; Iacovides, A; Isometsä, E T; Kapczinski, F; Kliwicki, S; König, B; Krogh, R; Kunz, M; Lafer, B; Larsen, E R; Lewitzka, U; Lopez-Jaramillo, C; MacQueen, G; Manchia, M; Marsh, W; Martinez-Cengotitabengoa, M; Melle, I; Monteith, S; Morken, G; Munoz, R; Nery, F G; O'Donovan, C; Osher, Y; Pfennig, A; Quiroz, D; Ramesar, R; Rasgon, N; Reif, A; Ritter, P; Rybakowski, J K; Sagduyu, K; Scippa, A M; Severus, E; Simhandl, C; Stein, D J; Strejilevich, S; Hatim Sulaiman, A; Suominen, K; Tagata, H; Tatebayashi, Y; Torrent, C; Vieta, E; Viswanath, B; Wanchoo, M J; Zetin, M; Whybrow, P C
2015-01-01
Two common approaches to identify subgroups of patients with bipolar disorder are clustering methodology (mixture analysis) based on the age of onset, and a birth cohort analysis. This study investigates if a birth cohort effect will influence the results of clustering on the age of onset, using a large, international database. The database includes 4037 patients with a diagnosis of bipolar I disorder, previously collected at 36 collection sites in 23 countries. Generalized estimating equations (GEE) were used to adjust the data for country median age, and in some models, birth cohort. Model-based clustering (mixture analysis) was then performed on the age of onset data using the residuals. Clinical variables in subgroups were compared. There was a strong birth cohort effect. Without adjusting for the birth cohort, three subgroups were found by clustering. After adjusting for the birth cohort or when considering only those born after 1959, two subgroups were found. With results of either two or three subgroups, the youngest subgroup was more likely to have a family history of mood disorders and a first episode with depressed polarity. However, without adjusting for birth cohort (three subgroups), family history and polarity of the first episode could not be distinguished between the middle and oldest subgroups. These results using international data confirm prior findings using single country data, that there are subgroups of bipolar I disorder based on the age of onset, and that there is a birth cohort effect. Including the birth cohort adjustment altered the number and characteristics of subgroups detected when clustering by age of onset. Further investigation is needed to determine if combining both approaches will identify subgroups that are more useful for research. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
A general solution for the registration of optical multispectral scanners
NASA Technical Reports Server (NTRS)
Rader, M. L.
1974-01-01
The paper documents a general theory for registration (mapping) of data sets gathered by optical scanners such as the ERTS satellite MSS and the Skylab S-192 MSS. This solution is generally applicable to scanners which have rotating optics. Navigation data and ground control points are used in a statistically weighted adjustment based on a mathematical model of the dynamics of the spacecraft and the scanner system. This adjustment is very similar to the well known photogrammetric adjustments used in aerial mapping. Actual tests have been completed on NASA aircraft 24 channel MSS data, and the results are very encouraging.
Using risk-adjustment models to identify high-cost risks.
Meenan, Richard T; Goodman, Michael J; Fishman, Paul A; Hornbrook, Mark C; O'Keeffe-Rosetti, Maureen C; Bachman, Donald J
2003-11-01
We examine the ability of various publicly available risk models to identify high-cost individuals and enrollee groups using multi-HMO administrative data. Five risk-adjustment models (the Global Risk-Adjustment Model [GRAM], Diagnostic Cost Groups [DCGs], Adjusted Clinical Groups [ACGs], RxRisk, and Prior-expense) were estimated on a multi-HMO administrative data set of 1.5 million individual-level observations for 1995-1996. Models produced distributions of individual-level annual expense forecasts for comparison to actual values. Prespecified "high-cost" thresholds were set within each distribution. The area under the receiver operating characteristic curve (AUC) for "high-cost" prevalences of 1% and 0.5% was calculated, as was the proportion of "high-cost" dollars correctly identified. Results are based on a separate 106,000-observation validation dataset. For "high-cost" prevalence targets of 1% and 0.5%, ACGs, DCGs, GRAM, and Prior-expense are very comparable in overall discrimination (AUCs, 0.83-0.86). Given a 0.5% prevalence target and a 0.5% prediction threshold, DCGs, GRAM, and Prior-expense captured $963,000 (approximately 3%) more "high-cost" sample dollars than other models. DCGs captured the most "high-cost" dollars among enrollees with asthma, diabetes, and depression; predictive performance among demographic groups (Medicaid members, members over 64, and children under 13) varied across models. Risk models can efficiently identify enrollees who are likely to generate future high costs and who could benefit from case management. The dollar value of improved prediction performance of the most accurate risk models should be meaningful to decision-makers and encourage their broader use for identifying high costs.
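The AUC used above to judge how well a risk model discriminates "high-cost" members from the rest can be computed directly from the rank-sum (Mann-Whitney) identity; the forecasts and flags below are invented for illustration, not drawn from the multi-HMO dataset.

```python
def auc(scores, labels):
    """AUC = P(score of a random positive > score of a random negative),
    with ties counted as 1/2 (the Mann-Whitney identity)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented annual expense forecasts and actual "high-cost" flags
forecasts = [1200, 300, 8000, 150, 700, 5000]
high_cost = [0, 0, 1, 0, 1, 0]
print(f"AUC = {auc(forecasts, high_cost):.2f}")
```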
Adolescent leisure dimensions, psychosocial adjustment, and gender effects.
Bradley, Graham L; Inglis, Brad C
2012-10-01
Leisure provides the context for much of adolescent behaviour and development. While both theory and research point to the benefits of participation in leisure activities that are highly structured, the association between structured leisure and psychosocial adjustment is not uniformly high. This paper presents a model of adolescent leisure comprising three dimensions: structure, effort, and social contact. Adolescent adjustment is hypothesized to increase with participation in activities characterized by each of these attributes. Adjustment is also predicted to vary with gender, and with the interaction of gender and leisure participation. These propositions were tested in a questionnaire-based study of 433 Australian adolescents. Results revealed majority support for hypotheses pertaining to the positive effects of the leisure dimensions, and for gender differences in leisure participation and adjustment. Evidence was also obtained of gender-differentiated effects of leisure on adjustment, with social leisure predicting adjustment more strongly in females than males. Copyright © 2012 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
Development of mathematical models of environmental physiology
NASA Technical Reports Server (NTRS)
Stolwijk, J. A. J.; Mitchell, J. W.; Nadel, E. R.
1971-01-01
Selected articles concerned with mathematical or simulation models of human thermoregulation are presented. The articles presented include: (1) development and use of simulation models in medicine, (2) model of cardio-vascular adjustments during exercise, (3) effective temperature scale based on simple model of human physiological regulatory response, (4) behavioral approach to thermoregulatory set point during exercise, and (5) importance of skin temperature in sweat regulation.
NASA Astrophysics Data System (ADS)
Sun, Xiaoqiang; Yuan, Chaochun; Cai, Yingfeng; Wang, Shaohua; Chen, Long
2017-09-01
This paper presents the hybrid modeling and model predictive control of an air suspension system with a multi-mode switching damper. Unlike a traditional damper with continuously adjustable damping, in this study a new damper with four discrete damping modes is applied to a vehicle semi-active air suspension. The new damper can achieve the different damping modes by controlling only the on-off statuses of two solenoid valves, which makes its damping adjustment more efficient and more reliable. However, since the damping mode switching induces different modes of operation, the air suspension system with the new damper poses a challenging hybrid control problem. To model both the continuous/discrete dynamics and the switching between damping modes, the framework of mixed logical dynamical (MLD) systems is used to establish the hybrid system model. Based on the resulting hybrid dynamical model, the control problem is recast as a model predictive control (MPC) problem, which allows us to optimize the switching sequences of the damping modes while taking into account the suspension performance requirements. Numerical simulation results demonstrate the efficacy of the proposed control method.
Zero adjusted models with applications to analysing helminths count data.
Chipeta, Michael G; Ngwira, Bagrey M; Simoonga, Christopher; Kazembe, Lawrence N
2014-11-27
It is common in public health and epidemiology that the outcome of interest is counts of event occurrences. Analysing these data using classical linear models is mostly inappropriate, even after transformation of the outcome variables, due to overdispersion. Zero-adjusted mixture count models such as zero-inflated and hurdle count models are applied to count data when overdispersion and excess zeros exist. The main objective of the current paper is to apply such models to analyse risk factors associated with human helminths (S. haematobium), particularly in cases with a high proportion of zero counts. The data were collected during a community-based randomised controlled trial assessing the impact of mass drug administration (MDA) with praziquantel in Malawi, and a school-based cross-sectional epidemiological survey in Zambia. Count data models, including traditional (Poisson and negative binomial) models, zero-modified models (zero-inflated Poisson and zero-inflated negative binomial) and hurdle models (Poisson logit hurdle and negative binomial logit hurdle), were fitted and compared. Using the Akaike information criterion (AIC), the negative binomial logit hurdle (NBLH) and zero-inflated negative binomial (ZINB) models showed the best performance in both datasets. With regard to capturing zero counts, these models performed better than the other models. This paper showed that the zero-modified NBLH and ZINB models are more appropriate methods for the analysis of data with excess zeros. The choice between hurdle and zero-inflated models should be based on the aim and endpoints of the study.
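A minimal sketch of the AIC comparison behind this kind of model selection, using toy counts with excess zeros rather than the helminth data; the zero-inflated Poisson (ZIP) fit here is a crude grid search over its two parameters, not the paper's estimation method.

```python
import math

def poisson_loglik(counts, lam):
    """Log-likelihood of a plain Poisson model."""
    return sum(-lam + k * math.log(lam) - math.lgamma(k + 1) for k in counts)

def zip_loglik(counts, pi, lam):
    """Log-likelihood of a ZIP model: pi = structural-zero probability."""
    ll = 0.0
    for k in counts:
        if k == 0:
            ll += math.log(pi + (1 - pi) * math.exp(-lam))
        else:
            ll += math.log(1 - pi) - lam + k * math.log(lam) - math.lgamma(k + 1)
    return ll

# Toy data: 40 zeros plus a handful of positive counts (excess zeros)
counts = [0] * 40 + [1, 1, 2, 2, 3, 3, 4, 5, 6, 8]

lam_hat = sum(counts) / len(counts)            # Poisson MLE
aic_pois = 2 * 1 - 2 * poisson_loglik(counts, lam_hat)

# Crude grid search for the ZIP maximum likelihood estimates
best_ll = max(zip_loglik(counts, p / 100, l / 10)
              for p in range(1, 100) for l in range(1, 100))
aic_zip = 2 * 2 - 2 * best_ll
print(f"AIC Poisson {aic_pois:.1f} vs ZIP {aic_zip:.1f}")
```

On zero-heavy data like this, the ZIP model attains a markedly lower AIC than the plain Poisson, which is the pattern that led the study to prefer zero-adjusted models.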
Relevance of the c-statistic when evaluating risk-adjustment models in surgery.
Merkow, Ryan P; Hall, Bruce L; Cohen, Mark E; Dimick, Justin B; Wang, Edward; Chow, Warren B; Ko, Clifford Y; Bilimoria, Karl Y
2012-05-01
The measurement of hospital quality based on outcomes requires risk adjustment. The c-statistic is a popular tool used to judge model performance, but can be limited, particularly when evaluating specific operations in focused populations. Our objectives were to examine the interpretation and relevance of the c-statistic when used in models with increasingly similar case mix and to consider an alternative perspective on model calibration based on a graphical depiction of model fit. From the American College of Surgeons National Surgical Quality Improvement Program (2008-2009), patients were identified who underwent a general surgery procedure, and procedure groups were increasingly restricted: colorectal-all, colorectal-elective cases only, and colorectal-elective cancer cases only. Mortality and serious morbidity outcomes were evaluated using logistic regression-based risk adjustment, and model c-statistics and calibration curves were used to compare model performance. During the study period, 323,427 general, 47,605 colorectal-all, 39,860 colorectal-elective, and 21,680 colorectal cancer patients were studied. Mortality ranged from 1.0% in general surgery to 4.1% in the colorectal-all group, and serious morbidity ranged from 3.9% in general surgery to 12.4% in the colorectal-all procedural group. As case mix was restricted, c-statistics progressively declined from the general to the colorectal cancer surgery cohorts for both mortality and serious morbidity (mortality: 0.949 to 0.866; serious morbidity: 0.861 to 0.668). Calibration was evaluated graphically by examining predicted vs observed number of events over risk deciles. For both mortality and serious morbidity, there was no qualitative difference in calibration identified between the procedure groups. 
In the present study, we demonstrate how the c-statistic can become less informative, and in certain circumstances can lead to incorrect model-based conclusions, as case mix is restricted and patients become more homogeneous. Although it remains an important tool, caution is advised when the c-statistic is advanced as the sole measure of model performance. Copyright © 2012 American College of Surgeons. All rights reserved.
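As the abstract notes, the c-statistic is the probability that a randomly chosen case with the outcome receives a higher predicted risk than a randomly chosen case without it (ties counted as half). A minimal pure-Python sketch with made-up risk scores (not NSQIP data):

```python
def c_statistic(preds, labels):
    """Concordance: P(score_event > score_nonevent), ties counted as 0.5."""
    events = [p for p, y in zip(preds, labels) if y == 1]
    nonevents = [p for p, y in zip(preds, labels) if y == 0]
    concordant = 0.0
    for e in events:
        for n in nonevents:
            if e > n:
                concordant += 1.0
            elif e == n:
                concordant += 0.5
    return concordant / (len(events) * len(nonevents))

# Toy example: 2 deaths, 3 survivors (hypothetical risk scores)
preds  = [0.9, 0.8, 0.1, 0.7, 0.8]
labels = [1,   1,   0,   0,   0]
print(round(c_statistic(preds, labels), 3))  # → 0.917
```

With a very homogeneous case mix, predicted risks cluster together and this pairwise separation shrinks, which is one intuition for why the statistic declines as the cohort is restricted.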
Worsøe, Päivi S; Sangild, Per T; van Goudoever, Johannes B; Koletzko, Berthold; van der Beek, Eline M; Abrahamse-Berkeveld, Marieke; Burrin, Douglas G; van de Heijning, Bert J M; Thymann, Thomas
2018-06-13
Current recommendations for protein levels in infant formula are intended to ensure that growth matches or exceeds that of breastfed infants, but may provide a surplus of amino acids (AAs). Recent infant studies with AA-based formulas support specific adjustment of the essential amino acid (EAA) composition, allowing for a potential lowering of total protein levels. Using a combination of intact protein and free EAAs, we designed a formula that meets these adjusted EAA requirements for infants. Our objective was to test whether this adjusted formula is safe and supports growth in a protein-restricted piglet model, and whether it shows better growth than an isonitrogenous formula based on free AAs. Term piglets (Landrace × Yorkshire × Duroc, n = 72) were fed 1 of 4 isoenergetic formulas containing 70% intact protein and 30% of an EAA mixture, or a complete AA-based control, for 20 d: standard formula (ST-100), ST-100 with a 25% reduction in proteinaceous nitrogen (ST-75), ST-75 with an adjusted EAA composition (O-75), or a diet as O-75 given as a complete AA-based diet (O-75AA). After an initial adaptation period, ST-75 and O-75 pigs showed similar growth rates, both lower than ST-100 pigs (∼25 compared with 31 g · kg-1 · d-1, respectively). The O-75AA pigs showed a further reduced growth rate (15 g · kg-1 · d-1) and fat proportion (both P < 0.05, relative to O-75). Formula based partly on intact protein is superior to AA-based formula in this experimental setting. The 25% lower, but EAA-adjusted, partially intact protein-based formula resulted in similar weight gain with concomitantly increased AA catabolism, compared with the 25% lower standard formula, in artificially reared, protein-restricted piglets. Further studies should investigate if and how the specific EAA adjustments that allow for lowering of total protein levels affect growth and body composition development in formula-fed infants.
A Model of Generating Visual Place Cells Based on Environment Perception and Similar Measure.
Zhou, Yang; Wu, Dewei
2016-01-01
Generating visual place cells (VPCs) is an important problem in the field of bioinspired navigation. By analyzing the firing characteristics of biological place cells and existing methods for generating VPCs, this paper abstracts a model for generating visual place cells based on environment perception and a similarity measure. The VPC generation process is divided into three phases: environment perception, similarity measurement, and recruitment of a new place cell. Following this process, a specific method for generating VPCs is presented. External reference landmarks are obtained from local invariant image features, and a similarity measure function is designed based on Euclidean distance and a Gaussian function. Simulations validate that the proposed method is feasible. The firing characteristics of the generated VPCs are similar to those of biological place cells, and the VPCs' firing fields can be adjusted flexibly by changing the adjustment factor of the firing field (AFFF) and the firing rate threshold (FRT).
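The similarity function described (Euclidean distance passed through a Gaussian) can be sketched as follows; `sigma` stands in for the firing-field adjustment factor and the recruitment threshold mirrors the FRT, with both values illustrative rather than taken from the paper:

```python
import math

def similarity(landmark_a, landmark_b, sigma=1.0):
    """Gaussian similarity over the Euclidean distance between two
    landmark feature vectors; sigma widens or narrows the firing field
    (illustrative analogue of the AFFF)."""
    d2 = sum((a - b) ** 2 for a, b in zip(landmark_a, landmark_b))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def should_recruit_new_cell(sim, frt=0.5):
    """Recruit a new place cell when similarity to every stored cell falls
    below the firing rate threshold (FRT); threshold value is assumed."""
    return sim < frt

current = [1.0, 2.0]
stored = [1.2, 2.1]
sim = similarity(current, stored)
print(sim > 0.9, should_recruit_new_cell(sim))  # → True False
```

A nearby landmark yields high similarity (the cell "fires"), so no new cell is recruited; a distant one drops below the FRT and triggers recruitment.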
Björ, Ove; Damber, Lena; Jonsson, Håkan; Nilsson, Tohr
2015-07-01
Iron-ore miners are exposed to extremely dusty and physically arduous work environments. The demanding activities of mining select healthier workers with longer work histories (i.e., the healthy worker survivor effect, HWSE), which can have a reversing effect on the exposure-response association. The objective of this study was to evaluate an iron-ore mining cohort to determine whether the effect of respirable dust was confounded by the presence of an HWSE. When an HWSE exists, standard modelling methods, such as Cox regression analysis, produce biased results. We compared results from g-estimation of accelerated failure-time modelling adjusted for the HWSE with corresponding unadjusted Cox regression modelling results. For all-cause mortality, when adjusting for the HWSE, cumulative exposure to respirable dust was associated with a 6% decrease in life expectancy for those exposed ≥15 years, compared with never being exposed. Respirable dust continued to be associated with mortality after censoring outcomes known to be associated with dust when adjusting for the HWSE. In contrast, results based on Cox regression analysis did not support the presence of an association. Adjustment for the HWSE made a difference when estimating the risk of mortality from respirable dust. The results of this study therefore support the recommendation that standard methods of analysis be complemented with structural modelling techniques, such as g-estimation of accelerated failure-time modelling, to adjust for the HWSE. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Alali, Aziz S; Naimark, David M J; Wilson, Jefferson R; Fowler, Robert A; Scales, Damon C; Golan, Eyal; Mainprize, Todd G; Ray, Joel G; Nathens, Avery B
2014-10-01
Decompressive craniectomy and barbiturate coma are often used as second-tier strategies when intracranial hypertension following severe traumatic brain injury is refractory to first-line treatments. Uncertainty surrounds the decision to choose either treatment option. We investigated which strategy is more economically attractive in this context. We performed a cost-utility analysis. A Markov Monte Carlo microsimulation model with a life-long time horizon was created to compare quality-adjusted survival and cost of the two treatment strategies, from the perspective of the healthcare payer. Model parameters were estimated from the literature. Two-dimensional simulation was used to incorporate parameter uncertainty into the model. Value-of-information analysis was conducted to identify major drivers of decision uncertainty and focus future research. The setting was trauma centers in the United States. The base case was a population of patients (mean age = 25 yr) who developed refractory intracranial hypertension following traumatic brain injury. We compared two treatment strategies: decompressive craniectomy and barbiturate coma. Decompressive craniectomy was associated with an average gain of 1.5 quality-adjusted life years relative to barbiturate coma, with an incremental cost-effectiveness ratio of $9,565/quality-adjusted life year gained. Decompressive craniectomy resulted in a greater quality-adjusted life expectancy 86% of the time and was more cost-effective than barbiturate coma in 78% of cases at a willingness-to-pay threshold of $50,000/quality-adjusted life year and in 82% of cases at a threshold of $100,000/quality-adjusted life year. At older ages, decompressive craniectomy continued to increase survival but at higher cost (incremental cost-effectiveness ratio = $197,906/quality-adjusted life year at mean age = 85 yr).
Based on available evidence, decompressive craniectomy for the treatment of refractory intracranial hypertension following traumatic brain injury provides better value in terms of costs and health gains than barbiturate coma. However, decompressive craniectomy might be less economically attractive for older patients. Further research, particularly on natural history of severe traumatic brain injury patients, is needed to make more informed treatment decisions.
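The incremental cost-effectiveness ratio used above is simply the extra cost divided by the extra QALYs gained; the incremental cost below is back-calculated from the reported ICER and QALY gain, for illustration only:

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra dollars per QALY gained."""
    return delta_cost / delta_qaly

# Reported base case: +1.5 QALYs at $9,565/QALY implies ~$14,348 extra cost
# (back-calculated, illustrative; the model's actual cost inputs differ)
delta_qaly = 1.5
delta_cost = 9_565 * delta_qaly
print(icer(delta_cost, delta_qaly))  # → 9565.0

# A strategy is preferred when its ICER falls below the willingness to pay
assert icer(delta_cost, delta_qaly) <= 50_000
```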
Delahanty, Ryan J; Kaufman, David; Jones, Spencer S
2018-06-01
Risk adjustment algorithms for ICU mortality are necessary for measuring and improving ICU performance. Existing risk adjustment algorithms are not widely adopted. Key barriers to adoption include licensing and implementation costs as well as labor costs associated with human-intensive data collection. Widespread adoption of electronic health records makes automated risk adjustment feasible. Using modern machine learning methods and open source tools, we developed and evaluated a retrospective risk adjustment algorithm for in-hospital mortality among ICU patients. The Risk of Inpatient Death score can be fully automated and is reliant upon data elements that are generated in the course of usual hospital processes. One hundred thirty-one ICUs in 53 hospitals operated by Tenet Healthcare. A cohort of 237,173 ICU patients discharged between January 2014 and December 2016. The data were randomly split into training (36 hospitals), and validation (17 hospitals) data sets. Feature selection and model training were carried out using the training set while the discrimination, calibration, and accuracy of the model were assessed in the validation data set. Model discrimination was evaluated based on the area under receiver operating characteristic curve; accuracy and calibration were assessed via adjusted Brier scores and visual analysis of calibration curves. Seventeen features, including a mix of clinical and administrative data elements, were retained in the final model. The Risk of Inpatient Death score demonstrated excellent discrimination (area under receiver operating characteristic curve = 0.94) and calibration (adjusted Brier score = 52.8%) in the validation dataset; these results compare favorably to the published performance statistics for the most commonly used mortality risk adjustment algorithms. Low adoption of ICU mortality risk adjustment algorithms impedes progress toward increasing the value of the healthcare delivered in ICUs. 
The Risk of Inpatient Death score has many attractive attributes that address the key barriers to adoption of ICU risk adjustment algorithms and performs comparably to existing human-intensive algorithms. Automated risk adjustment algorithms have the potential to obviate known barriers to adoption such as cost-prohibitive licensing fees and significant direct labor costs. Further evaluation is needed to ensure that the level of performance observed in this study could be achieved at independent sites.
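The Brier score is the mean squared error of predicted probabilities against binary outcomes; the paper's exact "adjusted" form is not specified here, so the sketch below scales against a no-skill baseline as one plausible variant:

```python
def brier(preds, outcomes):
    """Mean squared error of probabilistic predictions (lower is better)."""
    return sum((p - y) ** 2 for p, y in zip(preds, outcomes)) / len(preds)

def scaled_brier(preds, outcomes):
    """Skill relative to always predicting the overall event rate
    (an assumed form of an 'adjusted' Brier score): 1 - Brier/Brier_base.
    1.0 is perfect, 0.0 is no better than the baseline."""
    rate = sum(outcomes) / len(outcomes)
    baseline = brier([rate] * len(outcomes), outcomes)
    return 1.0 - brier(preds, outcomes) / baseline

# Hypothetical predictions for four ICU stays (1 = in-hospital death)
preds    = [0.9, 0.2, 0.1, 0.8]
outcomes = [1,   0,   0,   1]
print(round(scaled_brier(preds, outcomes), 3))  # → 0.9
```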
Kautter, John; Pope, Gregory C; Ingber, Melvin; Freeman, Sara; Patterson, Lindsey; Cohen, Michael; Keenan, Patricia
2014-01-01
Beginning in 2014, individuals and small businesses are able to purchase private health insurance through competitive Marketplaces. The Affordable Care Act (ACA) provides for a program of risk adjustment in the individual and small group markets in 2014 as Marketplaces are implemented and new market reforms take effect. The purpose of risk adjustment is to lessen or eliminate the influence of risk selection on the premiums that plans charge. The risk adjustment methodology includes the risk adjustment model and the risk transfer formula. This article is the second of three in this issue of the Review that describe the Department of Health and Human Services (HHS) risk adjustment methodology and focuses on the risk adjustment model. In our first companion article, we discuss the key issues and choices in developing the methodology. In this article, we present the risk adjustment model, which is named the HHS-Hierarchical Condition Categories (HHS-HCC) risk adjustment model. We first summarize the HHS-HCC diagnostic classification, which is the key element of the risk adjustment model. Then the data and methods, results, and evaluation of the risk adjustment model are presented. Fifteen separate models are developed. For each age group (adult, child, and infant), a model is developed for each cost sharing level (platinum, gold, silver, and bronze metal levels, as well as catastrophic plans). Evaluation of the risk adjustment models shows good predictive accuracy, both for individuals and for groups. Lastly, this article provides examples of how the model output is used to calculate risk scores, which are an input into the risk transfer formula. Our third companion paper describes the risk transfer formula. PMID:25360387
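Risk scores in HCC-style models are additive: a demographic coefficient plus one coefficient per hierarchical condition category recorded for the enrollee. The coefficients and category names below are hypothetical placeholders, not the published HHS-HCC weights:

```python
# Hypothetical coefficients -- NOT the actual HHS-HCC model weights
DEMOGRAPHIC = {("F", "45-54"): 0.30, ("M", "45-54"): 0.28}
HCC = {
    "diabetes_without_complication": 0.32,
    "congestive_heart_failure": 0.41,
}

def risk_score(sex, age_band, hccs):
    """Additive risk score: demographic factor plus one coefficient per
    condition category (hierarchies already applied upstream)."""
    return DEMOGRAPHIC[(sex, age_band)] + sum(HCC[h] for h in hccs)

score = risk_score("F", "45-54", ["diabetes_without_complication"])
print(round(score, 2))  # → 0.62
```

Plan average risk scores computed this way feed into the risk transfer formula described in the companion paper.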
Comparison between two photovoltaic module models based on transistors
NASA Astrophysics Data System (ADS)
Saint-Eve, Frédéric; Sawicki, Jean-Paul; Petit, Pierre; Maufay, Fabrice; Aillerie, Michel
2018-05-01
The main objective of this paper is to verify the possibility of reducing the behavioral simulation of an un-shaded photovoltaic (PV) module to a simple electronic circuit with very few components. In particular, two models based on well-tried elementary structures are analyzed: the Darlington structure in the first model, and voltage regulation with a programmable Zener diode in the second. Specifications extracted from the behavior of a real I-V characteristic of a panel are considered and the principal electrical variables are deduced. The two models are expected to match the open circuit voltage, maximum power point (MPP) and short circuit current, with realistic current slopes on both sides of the MPP. Robustness under varying irradiance is considered an additional fundamental property. For both models, two simulations are performed to identify the influence of some parameters. In the first model, a parameter allowing adjustment of the current slope on the left side of the MPP proves also to be important for the calculation of the open circuit voltage. Moreover, this model does not allow a complete adjustment of the I-V characteristic, and the MPP moves significantly away from its real value when irradiance increases. The second model, by contrast, appears to have only advantages: the open circuit voltage is easy to calculate, the current slopes are realistic, and it shows good robustness when irradiance variations are simulated by adjusting the short circuit current of the PV module. We have shown that these two simplified models should enable reliable and easier simulations of complex PV architectures integrating many different devices, such as PV modules or other renewable energy sources and storage capacities coupled in parallel.
Tangir, Gali; Dekel, Rachel; Lavi, Tamar; Gewirtz, Abigail H; Zamir, Osnat
2017-08-01
This study explored the behavioral and emotional adjustment of Israeli school-age children exposed to political violence. Based on Bronfenbrenner's (1986) ecological model and the ecological model of psychosocial trauma (Harvey, 2007), we examined the direct contribution of exposure, gender, maternal characteristics (mother's posttraumatic stress symptoms [PTSS], maternal care and maternal control), and community type (development town vs. kibbutz) to school-age children's adjustment. In addition, we examined whether maternal characteristics and community type moderated the association between exposure and adjustment. A total of 121 mother-child dyads participated, from the development town of Sderot (n = 62) and from the surrounding kibbutzim (n = 58). Results revealed that being a boy, living in Sderot, and mothers' higher PTSS contributed directly to children's total difficulties (i.e., externalizing and internalizing problems), and that maternal control moderated the association between personal exposure and children's total difficulties. Furthermore, being a girl, mother's higher PTSS and higher maternal control contributed directly to children's PTSS. Mother's PTSS moderated the association between personal exposure and children's PTSS. Maternal care was not associated with children's adjustment. Both the child's gender and the type of community in which he or she lives are associated with maternal distress and children's adjustment to political violence. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Duthie, A. Bradley; Bocedi, Greta; Reid, Jane M.
2016-01-01
Polyandry is often hypothesized to evolve to allow females to adjust the degree to which they inbreed. Multiple factors might affect such evolution, including inbreeding depression, direct costs, constraints on male availability, and the nature of polyandry as a threshold trait. Complex models are required to evaluate when evolution of polyandry to adjust inbreeding is predicted to arise. We used a genetically explicit individual‐based model to track the joint evolution of inbreeding strategy and polyandry defined as a polygenic threshold trait. Evolution of polyandry to avoid inbreeding only occurred given strong inbreeding depression, low direct costs, and severe restrictions on initial versus additional male availability. Evolution of polyandry to prefer inbreeding only occurred given zero inbreeding depression and direct costs, and given similarly severe restrictions on male availability. However, due to its threshold nature, phenotypic polyandry was frequently expressed even when strongly selected against and hence maladaptive. Further, the degree to which females adjusted inbreeding through polyandry was typically very small, and often reflected constraints on male availability rather than adaptive reproductive strategy. Evolution of polyandry solely to adjust inbreeding might consequently be highly restricted in nature, and such evolution cannot necessarily be directly inferred from observed magnitudes of inbreeding adjustment. PMID:27464756
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-23
... parts of the risk adjustment process--the risk adjustment model, the calculation of plan average... risk adjustment process. The risk adjustment model calculates individual risk scores. The calculation...'' to mean all data that are used in a risk adjustment model, the calculation of plan average actuarial...
NASA Astrophysics Data System (ADS)
Wang, Wu; Huang, Wei; Zhang, Yongjun
2018-03-01
The grid integration of photovoltaic-storage systems introduces uncertainties into the network. To make full use of the adjusting capability of a photovoltaic-storage system (PSS), this paper puts forward a reactive power optimization model whose objective function is constructed from power loss and device adjustment cost, including the energy storage adjustment cost. The optimization problem is solved with a Cataclysmic Genetic Algorithm and compared with other optimization methods. The results show that the dynamic extended reactive power optimization method proposed in this article can enhance the effect of reactive power optimization, reducing both power loss and device adjustment cost, while also taking voltage safety into consideration.
On the nullspace of TLS multi-station adjustment
NASA Astrophysics Data System (ADS)
Sterle, Oskar; Kogoj, Dušan; Stopar, Bojan; Kregar, Klemen
2018-07-01
In the article we present an analytic treatment of TLS multi-station least-squares adjustment, with the main focus on the datum problem. Compared with previously published research, the datum problem is theoretically analyzed and solved, the solution being based on a derivation of the nullspace of the mathematical model. The importance of the datum problem's solution lies in a complete description of TLS multi-station adjustment solutions as a set of all minimally constrained least-squares solutions. On the basis of the known nullspace, the estimable parameters are described and a geometric interpretation of all minimally constrained least-squares solutions is presented. Finally, a simulated example is used to analyze the results of TLS multi-station minimally constrained and inner-constrained least-squares adjustment solutions.
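The datum problem can be illustrated with a much simpler 1-D analogue (an assumption, not the full TLS multi-station model): a design matrix of observed differences between unknowns has the all-ones vector in its nullspace, so solutions are determined only up to a common shift, and only differences are estimable:

```python
def matvec(A, x):
    """Plain matrix-vector product."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# Observation equations for differences between 3 unknowns: each row
# observes x_j - x_i (1-D analogue of a multi-station adjustment).
A = [[-1, 1, 0],
     [0, -1, 1],
     [-1, 0, 1]]

# The all-ones vector spans the nullspace: shifting every unknown by the
# same constant leaves all observed differences unchanged (datum defect).
print(matvec(A, [1, 1, 1]))  # → [0, 0, 0]

# Two minimally constrained solutions (different datum choices) differ
# only by such a shift and reproduce identical estimable quantities:
x1 = [0.0, 2.0, 5.0]     # datum: fix x[0] = 0
x2 = [10.0, 12.0, 15.0]  # datum: fix x[0] = 10
print(matvec(A, x1) == matvec(A, x2))  # → True
```

In the full multi-station case the nullspace instead spans the rigid-body transformations (translations and rotations) of the combined point cloud.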
Development and evaluation of a biomedical search engine using a predicate-based vector space model.
Kwak, Myungjae; Leroy, Gondy; Martinez, Jesse D; Harwell, Jeffrey
2013-10-01
Although biomedical information available in articles and patents is increasing exponentially, we continue to rely on the same information retrieval methods and use very few keywords to search millions of documents. We are developing a fundamentally different approach for finding much more precise and complete information with a single query, using predicates instead of keywords for both query and document representation. Predicates are triples that are more complex data structures than keywords and contain more structured information. To make optimal use of them, we developed a new predicate-based vector space model and query-document similarity function with adjusted tf-idf and a boost function. Using a test bed of 107,367 PubMed abstracts, we evaluated the first essential function: retrieving information. Cancer researchers provided 20 realistic queries, for which the top 15 abstracts were retrieved using a predicate-based (new) and a keyword-based (baseline) approach. Each abstract was evaluated, double-blind, by cancer researchers on a 0-5 point scale to calculate precision (0 versus higher) and relevance (0-5 score). Precision was significantly higher (p < .001) for the predicate-based (80%) than for the keyword-based (71%) approach. Relevance was almost doubled with the predicate-based approach: 2.1 versus 1.6 without rank order adjustment (p < .001) and 1.34 versus 0.98 with rank order adjustment (p < .001) for the predicate- versus keyword-based approach, respectively. Predicates can support more precise searching than keywords, laying the foundation for rich and sophisticated information search. Copyright © 2013 Elsevier Inc. All rights reserved.
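A predicate-based vector space can be sketched by treating each (subject, relation, object) triple as a dimension. The tf-idf form and example triples below are assumptions for illustration; the paper's adjusted tf-idf, boost function, and rank adjustments are not reproduced:

```python
import math

def tfidf_vectors(docs):
    """docs: list of documents, each a list of predicate triples
    (subject, relation, object). Returns one {triple: weight} vector per
    document using a smoothed tf-idf (assumed form)."""
    n = len(docs)
    df = {}
    for doc in docs:
        for t in set(doc):
            df[t] = df.get(t, 0) + 1
    vecs = []
    for doc in docs:
        vec = {}
        for t in doc:
            tf = doc.count(t) / len(doc)
            idf = math.log(n / df[t]) + 1.0
            vec[t] = tf * idf
        vecs.append(vec)
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse {triple: weight} vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical predicate triples extracted from three abstracts
docs = [
    [("p53", "inhibits", "tumor_growth"), ("EGFR", "activates", "MAPK")],
    [("p53", "inhibits", "tumor_growth")],
    [("BRCA1", "repairs", "DNA")],
]
vecs = tfidf_vectors(docs)
print(cosine(vecs[0], vecs[1]) > cosine(vecs[0], vecs[2]))  # → True
```

A query would be parsed into triples and weighted the same way, so documents sharing its predicates, not just its words, rank highest.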
We describe a seagrass growth (SGG) model that is coupled to a water quality (WQ) model that includes the effects of phytoplankton (chlorophyll), colored dissolved organic matter (CDOM) and suspended solids (TSS) on water clarity. Phytoplankton growth was adjusted daily for PAR (...
ERIC Educational Resources Information Center
McRae, Elizabeth M.; Stoppelbein, Laura; O'Kelley, Sarah E.; Fite, Paula; Greening, Leilani
2018-01-01
Parental adjustment, parenting behaviors, and child routines have been linked to internalizing and externalizing child behavior. The purpose of the present study was to evaluate a comprehensive model examining relations among these variables in children with ASD and their parents. Based on Sameroff's Transactional Model of Development (Sameroff…
USDA-ARS?s Scientific Manuscript database
Classical, one-dimensional, mobile bed, sediment-transport models simulate vertical channel adjustment, raising or lowering cross-section node elevations to simulate erosion or deposition. This approach does not account for bank erosion processes including toe scour and mass failure. In many systems...
Acute health impacts of airborne particles estimated from satellite remote sensing.
Wang, Zhaoxi; Liu, Yang; Hu, Mu; Pan, Xiaochuan; Shi, Jing; Chen, Feng; He, Kebin; Koutrakis, Petros; Christiani, David C
2013-01-01
Satellite-based remote sensing provides a unique opportunity to monitor air quality from space at global, continental, national and regional scales. Most current research has focused on developing empirical models using ground measurements of ambient particulate matter. However, the application of satellite-based exposure assessment in environmental health is still limited, especially for acute effects, because the development of satellite PM(2.5) models depends on the availability of ground measurements. We tested the hypothesis that MODIS AOD (aerosol optical depth) exposure estimates, obtained from NASA satellites, are directly associated with daily health outcomes. Three independent healthcare databases were used: unscheduled outpatient visits, hospital admissions, and mortality collected in the Beijing metropolitan area, China, during 2006. We used generalized linear models to compare the short-term effects of air pollution assessed by ground monitoring (PM(10), with adjustment for absolute humidity (AH)) and by AH-calibrated AOD. Across all databases we found that both AH-calibrated AOD and PM(10) (adjusted for AH) were consistently associated with elevated daily events on the current day and/or lag days for cardiovascular diseases, ischemic heart diseases, and COPD. The relative risks estimated by AH-calibrated AOD and PM(10) (adjusted for AH) were similar. Additionally, compared to ground PM(10), we found that AH-calibrated AOD had narrower confidence intervals for all models and was more robust in estimating current-day and lag-day effects. Our preliminary findings suggest that, with proper adjustment for meteorological factors, satellite AOD can be used directly to estimate the acute health impacts of ambient particles without prior calibration to sparse ground monitoring networks. Copyright © 2012 Elsevier Ltd. All rights reserved.
Pancreatic β-Cell Function and Prognosis of Nondiabetic Patients With Ischemic Stroke.
Pan, Yuesong; Chen, Weiqi; Jing, Jing; Zheng, Huaguang; Jia, Qian; Li, Hao; Zhao, Xingquan; Liu, Liping; Wang, Yongjun; He, Yan; Wang, Yilong
2017-11-01
Pancreatic β-cell dysfunction is an important factor in the development of type 2 diabetes mellitus. This study aimed to estimate the association between β-cell dysfunction and prognosis of nondiabetic patients with ischemic stroke. Patients with ischemic stroke without a history of diabetes mellitus in the ACROSS-China (Abnormal Glucose Regulation in Patients with Acute Stroke across China) registry were included. Disposition index was estimated as computer-based model of homeostatic model assessment 2-β%/homeostatic model assessment 2-insulin resistance based on fasting C-peptide level. Outcomes included stroke recurrence, all-cause death, and dependency (modified Rankin Scale, 3-5) at 12 months after onset. Among 1171 patients, 37.2% were women with a mean age of 62.4 years. At 12 months, 167 (14.8%) patients had recurrent stroke, 110 (9.4%) died, and 184 (16.0%) had a dependency. The first quartile of the disposition index was associated with an increased risk of stroke recurrence (adjusted hazard ratio, 3.57; 95% confidence interval, 2.13-5.99) and dependency (adjusted hazard ratio, 2.30; 95% confidence interval, 1.21-4.38); both the first and second quartiles of the disposition index were associated with an increased risk of death (adjusted hazard ratio, 5.09; 95% confidence interval, 2.51-10.33; adjusted hazard ratio, 2.42; 95% confidence interval, 1.17-5.03) compared with the fourth quartile. Using a multivariable regression model with restricted cubic spline, we observed an L-shaped association between the disposition index and the risk of each end point. In this large-scale registry, β-cell dysfunction was associated with an increased risk of 12-month poor prognosis in nondiabetic patients with ischemic stroke. © 2017 American Heart Association, Inc.
Langhammer, Birgitta; Stanghelle, Johan K
2011-06-01
The primary aim of the present study was to investigate, based on data from our study in 2000, whether the Bobath approach enhanced quality of movement better than the Motor Relearning Programme (MRP) during rehabilitation of stroke patients. A randomized controlled stratified trial of acute stroke patients was conducted. The patients were treated according to the Motor Relearning Programme or the Bobath approach and assessed with the Motor Assessment Scale, Sødring Motor Evaluation Scale, Nottingham Health Profile and the Barthel Index. A triangulation of the test scores was made with reference to the Movement Quality Model and its biomechanical, physiological, psycho-socio-cultural and existential themes. The items arm (p = 0.02-0.04), sitting (p = 0.04) and hand (p = 0.01-0.03) were significantly better in the Motor Relearning Programme group than in the Bobath group, on both the Sødring Motor Evaluation Scale and the Motor Assessment Scale. Leg function, balance, transfer, walking and stair climbing did not differ between the groups. On the Movement Quality Model, the biomechanical, physiological and psycho-socio-cultural movement qualities scored higher in the Motor Relearning Programme group, indicating better quality of movement on all items. Regression analyses established significant models of motor performance with self-reported physical mobility (adjusted R(2) 0.30-0.68, p < 0.0001), energy (adjusted R(2) 0.13-0.14, p = 0.03-0.04), emotion (adjusted R(2) 0.30-0.38, p < 0.0001) and social interaction (arm function, adjusted R(2) 0.25, p = 0.0001). These analyses confirm that task-oriented exercises of the Motor Relearning Programme type are preferable with regard to quality of movement in the acute rehabilitation of patients with stroke. Copyright © 2010 John Wiley & Sons, Ltd.
Elizur, Y; Ziv, M
2001-01-01
While heterosexist family undermining has been demonstrated to be a developmental risk factor in the life of persons with same-gender orientation, the issue of protective family factors is both controversial and relatively neglected. In this study of Israeli gay males (N = 114), we focused on the interrelations of family support, family acceptance and family knowledge of gay orientation, and gay male identity formation, and their effects on mental health and self-esteem. A path model was proposed based on the hypotheses that family support, family acceptance, family knowledge, and gay identity formation have an impact on psychological adjustment, and that family support has an effect on gay identity formation that is mediated by family acceptance. The assessment of gay identity formation was based on an established stage model that was streamlined for cross-cultural practice by defining three basic processes of same-gender identity formation: self-definition, self-acceptance, and disclosure (Elizur & Mintzer, 2001). The testing of our conceptual path model demonstrated an excellent fit with the data. An alternative model that hypothesized effects of gay male identity on family acceptance and family knowledge did not fit the data. Interpreting these results, we propose that the main effect of family support/acceptance on gay identity is related to the process of disclosure, and that both general family support and family acceptance of same-gender orientation play a significant role in the psychological adjustment of gay men.
Kato, Koki; Fukuda, Haruhisa
2017-11-01
To quantify the difference between adjusted costs for home-based palliative care and hospital-based palliative care in terminally ill cancer patients. We carried out a case-control study of home-care patients (cases) who had died at home between January 2009 and December 2013, and hospital-care patients (controls) who had died at a hospital between April 2008 and December 2013. Data on patient characteristics were obtained from insurance claims data and medical records. We identified the determinants of home care using a multivariate logistic regression analysis. Cox proportional hazards analysis was used to examine treatment duration in both types of care, and a generalized linear model was used to estimate the reduction in treatment costs associated with home care. The case and control groups comprised 48 and 99 patients, respectively. Home care was associated with one or more person(s) living with the patient (adjusted OR 6.54, 95% CI 1.18-36.05), required assistance for activities of daily living (adjusted OR 3.61, 95% CI 1.12-10.51), non-use of oxygen inhalation therapy (adjusted OR 12.75, 95% CI 3.53-46.02), oral or suppository opioid use (adjusted OR 5.74, 95% CI 1.11-29.54) and transdermal patch opioid use (adjusted OR 8.30, 95% CI 1.97-34.93). The adjusted hazard ratio of home care for treatment duration was not significant (adjusted OR 0.95, 95% CI 0.59-1.53). However, home care was significantly associated with a reduction of $7523 (95% CI $7093-7991, P = 0.015) in treatment costs. Despite similar treatment durations between the groups, treatment costs were substantially lower in the home-care group. These findings might inform the policymaking process for improving the home-care support system. Geriatr Gerontol Int 2017; 17: 2247-2254. © 2017 Japan Geriatrics Society.
A perturbative approach for enhancing the performance of time series forecasting.
de Mattos Neto, Paulo S G; Ferreira, Tiago A E; Lima, Aranildo R; Vasconcelos, Germano C; Cavalcanti, George D C
2017-04-01
This paper proposes a method to perform time series prediction based on perturbation theory. The approach is based on continuously adjusting an initial forecasting model to asymptotically approximate a desired time series model. First, a predictive model generates an initial forecast for a time series. Second, a residual time series is calculated as the difference between the original time series and the initial forecast. If that residual series is not white noise, it can be used to improve the accuracy of the initial model, and a new predictive model is fitted to the residual series. The whole process is repeated until convergence or until the residual series becomes white noise. The output of the method is then given by summing the outputs of all trained predictive models in a perturbative sense. To test the method, an experimental investigation was conducted on six real-world time series. A comparison was made with six other methods and with ten results reported in the literature. Results show that the proposed method not only significantly improves the performance of the initial model but also outperforms the previously published results. Copyright © 2017 Elsevier Ltd. All rights reserved.
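The residual-correction loop described above can be sketched in a few lines. This is a hypothetical minimal version using a least-squares AR(1) fit as a stand-in base model (the paper's actual predictive models are not specified here); each round fits a new model to the current residual series, and the stage forecasts are summed.

```python
import numpy as np

def fit_ar1(y):
    """Least-squares AR(1) fit: y[t] ~ a*y[t-1] + b. Returns (a, b)."""
    X = np.column_stack([y[:-1], np.ones(len(y) - 1)])
    a, b = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    return a, b

def perturbative_fit(y, rounds=3):
    """Stage-wise residual correction in the spirit of the method above:
    each round fits a new AR(1) model to the current residual series, and
    the final in-sample forecast is the sum of all stage forecasts."""
    n = len(y)
    total = np.zeros(n - 1)           # one-step-ahead fitted values for y[1:]
    residual = y.copy()
    for _ in range(rounds):
        a, b = fit_ar1(residual)
        pred = a * residual[:-1] + b  # stage forecast for residual[1:]
        total[-len(pred):] += pred    # each round's support shrinks by one; align on the tail
        residual = residual[1:] - pred
        if np.allclose(residual, 0):  # residual is (numerically) exhausted
            break
    return total                      # combined forecast for y[1:]
```

A real implementation would stop when the residual passes a white-noise test rather than when it reaches zero.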
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoel, D.D.
1984-01-01
Two computer codes have been developed for operational use in performing real time evaluations of atmospheric releases from the Savannah River Plant (SRP) in South Carolina. These codes, based on mathematical models, are part of the SRP WIND (Weather Information and Display) automated emergency response system. Accuracy of ground level concentrations from a Gaussian puff-plume model and a two-dimensional sequential puff model are being evaluated with data from a series of short range diffusion experiments using sulfur hexafluoride as a tracer. The models use meteorological data collected from 7 towers on SRP and at the 300 m WJBF-TV tower about 15 km northwest of SRP. The winds and the stability, which is based on turbulence measurements, are measured at the 60 m stack heights. These results are compared to downwind concentrations using only standard meteorological data, i.e., adjusted 10 m winds and stability determined by the Pasquill-Turner stability classification method. Scattergrams and simple statistics were used for model evaluations. Results indicate predictions within accepted limits for the puff-plume code and a bias in the sequential puff model predictions using the meteorologist-adjusted nonstandard data.
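The ground-level concentration from a steady-state Gaussian plume (the class of model evaluated above) has a standard textbook closed form. The sketch below is that generic formula, not the SRP codes themselves; in practice the dispersion parameters sigma_y and sigma_z would be taken from stability-class curves such as Pasquill-Gifford.

```python
import numpy as np

def plume_concentration(q, u, y, H, sigma_y, sigma_z):
    """Ground-level (z = 0) concentration from a steady Gaussian plume.

    q: emission rate (g/s), u: wind speed at release height (m/s),
    y: crosswind distance (m), H: effective stack height (m),
    sigma_y, sigma_z: dispersion parameters (m) at the downwind distance
    of interest. With ground reflection, the (z - H) and (z + H)
    exponential terms coincide at z = 0, giving the factor of 2."""
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = 2.0 * np.exp(-H**2 / (2.0 * sigma_z**2))
    return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical
```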
Van Ryzin, Mark J; Gravely, Amy A; Roseth, Cary J
2009-01-01
Self-determination theory emphasizes the importance of school-based autonomy and belongingness to academic achievement and psychological adjustment, and the theory posits a model in which engagement in school mediates the influence of autonomy and belongingness on these outcomes. To date, this model has only been evaluated on academic outcomes. Utilizing short-term longitudinal data (5-month timeframe) from a set of secondary schools in the rural Midwest (N = 283, M age = 15.3, 51.9% male, 86.2% White), we extend the model to include a measure of positive adjustment (i.e., hope). We also find a direct link between peer-related belongingness (i.e., peer support) and positive adjustment that is not mediated by engagement in school. A reciprocal relationship between academic autonomy, teacher-related belongingness (i.e., teacher support) and engagement in learning is supported, but this reciprocal relationship does not extend to peer-related belongingness. The implications of these findings for secondary schools are discussed.
NASA Technical Reports Server (NTRS)
Lydon, Thomas J.; Fox, Peter A.; Sofia, Sabatino
1993-01-01
We have constructed a series of models of Alpha Centauri A and Alpha Centauri B for the purposes of testing the effects of convection modeling both by means of the mixing-length theory (MLT), and by means of parameterization of energy fluxes based upon numerical simulations of turbulent compressible convection. We demonstrate that while MLT, through its adjustable parameter alpha, can be used to match any given values of luminosities and radii, our treatment of convection, which lacks any adjustable parameters, makes specific predictions of stellar radii. Since the predicted radii of the Alpha Centauri system fall within the errors of the observed radii, our treatment of convection is applicable to other stars in the H-R diagram in addition to the sun. A second set of models is constructed using MLT, adjusting alpha to yield not the 'measured' radii but, instead, the radii predictions of our revised treatment of convection. We conclude by assessing the appropriateness of using a single value of alpha to model a wide variety of stars.
H. Li; X. Deng; Andy Dolloff; E. P. Smith
2015-01-01
A novel clustering method for bivariate functional data is proposed to group streams based on their water-air temperature relationship. A distance measure is developed for bivariate curves by using a time-varying coefficient model and a weighting scheme. This distance is also adjusted by spatial correlation of streams via the variogram. Therefore, the proposed...
Jang, Miyoung; Kim, Jiyoung
2018-04-01
Prospective studies have examined factors directly affecting psychosocial adjustment during breast cancer treatment. Survivorship stage may moderate a direct effect of stress on psychosocial adjustment. This study aimed to examine relationships between stress, social support, self-efficacy, coping, and psychosocial adjustment to construct a model of the effect pathways between those factors, and determine if survivorship stage moderates those effects. Six hundred people with breast cancer completed questionnaires. Examined stages of survivorship after treatment were as follows: acute (i.e., <2 years), extended (2-5 years), and lasting (>5 years). Stress (Perceived Stress Scale), social support (Multidimensional Scale of Perceived Social Support), self-efficacy (New General Self Efficacy Scale), coping (Ways of Coping Checklist), and psychosocial adjustment (Psychosocial Adjustment to Illness Scale-Self-Report-Korean Version) were measured. Self-efficacy significantly correlated with psychosocial adjustment in the acute survival stage (γ = -0.37, P < .001). Stress inversely correlated with coping only in the extended survival stage (γ = -0.56, P < .001). Social support's benefit to psychosocial adjustment was greater in the acute (γ = -0.42, P < .001) and extended survival stages (γ = -0.56, P < .001) than in the lasting survival stage. Stress's negative correlation with psychosocial adjustment was stronger in the lasting survival stage (β = 0.42, P < .001) than in the acute survival stage. Based on these results, improving self-efficacy and social support and managing stress according to survival stage are necessary for the psychosocial adjustment of female breast cancer patients. Copyright © 2018 Elsevier Ltd. All rights reserved.
Evolving Concepts on Adjusting Human Resting Energy Expenditure Measurements for Body Size
Heymsfield, Steven B.; Thomas, Diana; Bosy-Westphal, Anja; Shen, Wei; Peterson, Courtney M.; Müller, Manfred J.
2012-01-01
Establishing if an adult’s resting energy expenditure (REE) is high or low for their body size is a pervasive question in nutrition research. Early workers applied body mass and height as size measures and formulated the Surface Law and Kleiber’s Law, although each has limitations when adjusting REE. Body composition methods introduced during the mid-twentieth century provided a new opportunity to identify metabolically homogeneous “active” compartments. These compartments all show improved correlations with REE estimates over body mass-height approaches, but collectively share a common limitation: REE-body composition ratios are not “constant” but vary across men and women and with race, age, and body size. The now-accepted alternative to ratio-based norms is to adjust for predictors by applying regression models to calculate “residuals” that establish if a REE is relatively high or low. The distinguishing feature of statistical REE-body composition models is a “non-zero” intercept of unknown origin. The recent introduction of imaging methods has allowed development of physiological tissue-organ based REE prediction models. Herein we apply these imaging methods to provide a mechanistic explanation, supported by experimental data, for the non-zero intercept phenomenon and in that context propose future research directions for establishing between subject differences in relative energy metabolism. PMID:22863371
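The regression-residual approach described above can be sketched directly. This is an illustrative two-parameter version (REE regressed on fat-free mass plus an intercept), not any published prediction equation; real models add fat mass, age, sex, and race terms, and the fitted intercept corresponds to the "non-zero intercept" phenomenon discussed in the abstract.

```python
import numpy as np

def ree_residuals(ree, fat_free_mass):
    """Regress REE on fat-free mass with an intercept and return residuals:
    a positive residual marks a relatively high REE for body composition."""
    X = np.column_stack([fat_free_mass, np.ones(len(fat_free_mass))])
    coef, *_ = np.linalg.lstsq(X, ree, rcond=None)
    fitted = X @ coef
    return ree - fitted, coef  # residuals and (slope, intercept)
```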
Challenges of Electronic Medical Surveillance Systems
2004-06-01
More sophisticated approaches, such as regression models and classical autoregressive integrated moving average (ARIMA) models, make estimates based on ... with those predicted by a mathematical model. The primary benefit of ARIMA models is their ability to correct for local trends in the data, so that ... works well, for example, during a particularly severe flu season, where prolonged periods of high visit rates are adjusted to by the ARIMA model, thus ...
Lee, Jane J.; Yin, Xiaoyan; Hoffmann, Udo; Fox, Caroline S.; Benjamin, Emelia J.
2016-01-01
Obesity is associated with increased risk of developing atrial fibrillation (AF). Different fat depots may have differential associations with cardiac pathology. We examined the longitudinal associations between pericardial, intrathoracic, and visceral fat with incident AF. We studied Framingham Heart Study Offspring and Third Generation Cohorts who participated in the multi-detector computed tomography sub-study examination 1. We constructed multivariable-adjusted Cox proportional hazard models for risk of incident AF. Body mass index (BMI) was included in the multivariable-adjusted model as a secondary adjustment. We included 2,135 participants (53.3% women; mean age 58.8 years). During a median follow-up of 9.7 years, we identified 162 cases of incident AF. Across the increasing tertiles of pericardial fat volume, age- and sex-adjusted incident AF rate per 1000 person-years of follow-up were 8.4, 7.5, and 10.2. Based on an age- and sex-adjusted model, greater pericardial fat [hazard ratio (HR) 1.17, 95% confidence interval (CI) 1.03-1.34] and intrathoracic fat (HR 1.24, 95% CI 1.06-1.45) were associated with increased risk of incident AF. The HRs (95% CI) for incident AF were 1.13 (0.99-1.30) for pericardial fat, 1.19 (1.01-1.40) for intrathoracic fat, and 1.09 (0.93-1.28) for abdominal visceral fat after multivariable adjustment. After additional adjustment of BMI, none of the associations remained significant (all p>0.05). Our findings suggest that cardiac ectopic fat depots may share common risk factors with AF, which may have led to a lack of independence in the association between pericardial fat with incident AF. PMID:27666172
The effects of coping on adjustment: Re-examining the goodness of fit model of coping effectiveness.
Masel, C N; Terry, D J; Gribble, M
1996-01-01
The primary aim of the present study was to examine the extent to which the effects of coping on adjustment are moderated by levels of event controllability. Specifically, the research tested two revisions to the goodness of fit model of coping effectiveness. First, it was hypothesized that the effects of problem management coping (but not problem appraisal coping) would be moderated by levels of event controllability. Second, it was hypothesized that the effects of emotion-focused coping would be moderated by event controllability, but only in the acute phase of a stressful encounter. To test these predictions, a longitudinal study was undertaken (185 undergraduate students participated in all three stages of the research). Measures of initial adjustment (low depression and coping efficacy) were obtained at Time 1. Four weeks later (Time 2), coping responses to a current or a recent stressor were assessed. Based on subjects' descriptions of the event, objective and subjective measures of event controllability were also obtained. Measures of concurrent and subsequent adjustment were obtained at Times 2 and 3 (two weeks later), respectively. There was only weak support for the goodness of fit model of coping effectiveness. The beneficial effects of a high proportion of problem management coping (relative to total coping efforts) on Time 3 perceptions of coping efficacy were more evident in high control than in low control situations. Other results of the research revealed that, irrespective of the controllability of the event, problem appraisal coping strategies and emotion-focused strategies (escapism and self-denigration) were associated with high and low levels of concurrent adjustment, respectively. The effects of these coping responses on subsequent adjustment were mediated through concurrent levels of adjustment.
ERIC Educational Resources Information Center
Cooper, Kelt L.
2011-01-01
One major problem in developing school district budgets immune to cuts is the model administrators traditionally use--an expenditure model. The simplicity of this model is seductive: What were the revenues and expenditures last year? What are the expected revenues and expenditures this year? A few adjustments here and there and one has a budget.
Sun, Kainan; Field, R William; Steck, Daniel J
2010-01-01
The quantitative relationships between radon gas concentration, the surface-deposited activities of various radon progeny, the airborne radon progeny dose rate, and various residential environmental factors were investigated through a Monte Carlo simulation study based on the extended Jacobi room model. Airborne dose rates were calculated from the unattached and attached potential alpha-energy concentrations (PAECs) using two dosimetric models. Surface-deposited (218)Po and (214)Po were significantly correlated with radon concentration, PAECs, and airborne dose rate (p-values <0.0001) in both non-smoking and smoking environments. However, in non-smoking environments, the deposited radon progeny were not highly correlated to the attached PAEC. In multiple linear regression analysis, natural logarithm transformation was performed for airborne dose rate as a dependent variable, as well as for radon and deposited (218)Po and (214)Po as predictors. In non-smoking environments, after adjusting for the effect of radon, deposited (214)Po was a significant positive predictor for one dose model (RR 1.46, 95% CI 1.27-1.67), while deposited (218)Po was a negative predictor for the other dose model (RR 0.90, 95% CI 0.83-0.98). In smoking environments, after adjusting for radon and room size, deposited (218)Po was a significant positive predictor for one dose model (RR 1.10, 95% CI 1.02-1.19), while a significant negative predictor for the other model (RR 0.90, 95% CI 0.85-0.95). After adjusting for radon and deposited (218)Po, significant increases of 1.14 (95% CI 1.03-1.27) and 1.13 (95% CI 1.05-1.22) in the mean dose rates were found for large room sizes relative to small room sizes in the different dose models.
Essays in applied macroeconomics: Asymmetric price adjustment, exchange rate and treatment effect
NASA Astrophysics Data System (ADS)
Gu, Jingping
This dissertation consists of three essays. Chapter II examines the possible asymmetric response of gasoline prices to crude oil price changes using an error correction model with GARCH errors. Recent papers have looked at this issue. Some of these papers estimate a form of error correction model, but none of them accounts for autoregressive heteroskedasticity in estimation and testing for asymmetry and none of them takes the response of crude oil price into consideration. We find that time-varying volatility of gasoline price disturbances is an important feature of the data, and when we allow for asymmetric GARCH errors and investigate the system wide impulse response function, we find evidence of asymmetric adjustment to crude oil price changes in weekly retail gasoline prices. Chapter III discusses the relationship between fiscal deficit and exchange rate. Economic theory predicts that fiscal deficits can significantly affect real exchange rate movements, but existing empirical evidence reports only a weak impact of fiscal deficits on exchange rates. Based on US dollar-based real exchange rates in G5 countries and a flexible varying coefficient model, we show that the previously documented weak relationship between fiscal deficits and exchange rates may be the result of additive specifications, and that the relationship is stronger if we allow fiscal deficits to impact real exchange rates non-additively as well as nonlinearly. We find that the speed of exchange rate adjustment toward equilibrium depends on the state of the fiscal deficit; a fiscal contraction in the US can lead to less persistence in the deviation of exchange rates from fundamentals, and faster mean reversion to the equilibrium. Chapter IV proposes a kernel method to deal with the nonparametric regression model with only discrete covariates as regressors. This new approach is based on recently developed least squares cross-validation kernel smoothing method. 
It can not only automatically smooth the irrelevant variables out of the nonparametric regression model, but also avoid the problem of loss of efficiency related to the traditional nonparametric frequency-based method and the problem of misspecification based on parametric model.
Pignata, Maud; Chouaid, Christos; Le Lay, Katell; Luciani, Laura; McConnachie, Ceilidh; Gordon, James; Roze, Stéphane
2017-01-01
Background and aims: Lung cancer has the highest mortality rate of all cancers worldwide. Non-small-cell lung cancer (NSCLC) accounts for 85% of all lung cancers and has an extremely poor prognosis. Afatinib is an irreversible ErbB family blocker designed to suppress cellular signaling and inhibit cellular growth and is approved in Europe after platinum-based therapy for squamous NSCLC. The objective of the present analysis was to evaluate the cost-effectiveness of afatinib after platinum-based therapy for squamous NSCLC in France. Methods: The study population was based on the LUX-Lung 8 trial that compared afatinib with erlotinib in patients with squamous NSCLC. The analysis was performed from the perspective of all health care funders and affected patients. A partitioned survival model was developed to evaluate cost-effectiveness based on progression-free survival and overall survival in the trial. Life expectancy, quality-adjusted life expectancy and direct costs were evaluated over a 10-year time horizon. Future costs and clinical benefits were discounted at 4% annually. Deterministic and probabilistic sensitivity analyses were performed. Results: Model projections indicated that afatinib was associated with greater life expectancy (0.16 years) and quality-adjusted life expectancy (0.094 quality-adjusted life years [QALYs]) than that projected for erlotinib. The total cost of treatment over a 10-year time horizon was higher for afatinib than erlotinib, EUR12,364 versus EUR9,510, leading to an incremental cost-effectiveness ratio of EUR30,277 per QALY gained for afatinib versus erlotinib. Sensitivity analyses showed that the base case findings were stable under variation of a range of model inputs.
Conclusion: Based on data from the LUX-Lung 8 trial, afatinib was projected to improve clinical outcomes versus erlotinib, with a 97% probability of being cost-effective assuming a willingness to pay of EUR70,000 per QALY gained, after platinum-based therapy in patients with squamous NSCLC in France. PMID:29123418
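The cost-effectiveness arithmetic reported above reduces to a single ratio, which a one-line helper makes explicit. Note that recomputing from the rounded abstract figures (EUR2,854 extra cost over 0.094 QALYs) gives roughly EUR30,400, close to but not exactly the published EUR30,277, which was presumably computed from unrounded model outputs.

```python
def icer(cost_new, cost_ref, qaly_new, qaly_ref):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)
```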
NKG201xGIA - first results for a new model of glacial isostatic adjustment in Fennoscandia
NASA Astrophysics Data System (ADS)
Steffen, Holger; Barletta, Valentina; Kollo, Karin; Milne, Glenn A.; Nordman, Maaria; Olsson, Per-Anders; Simpson, Matthew J. R.; Tarasov, Lev; Ågren, Jonas
2016-04-01
Glacial isostatic adjustment (GIA) is a dominant process in northern Europe, which is observed with several geodetic and geophysical methods. The observed land uplift due to this process amounts to about 1 cm/year in the northern Gulf of Bothnia. GIA affects the establishment and maintenance of reliable geodetic and gravimetric reference networks in the Nordic countries. To support a high level of accuracy in the determination of position, adequate corrections have to be applied with dedicated models. Currently, there are efforts within a Nordic Geodetic Commission (NKG) activity towards a model of glacial isostatic adjustment for Fennoscandia. The new model, NKG201xGIA, to be developed in the near future will complement the forthcoming empirical NKG land uplift model, which will substitute the currently used empirical land uplift model NKG2005LU (Ågren & Svensson, 2007). Together, the models will be a reference for vertical and horizontal motion, gravity and geoid change and more. NKG201xGIA will also provide uncertainty estimates for each field. Following former investigations, the GIA model is based on a combination of an ice and an earth model. The selected reference ice model, GLAC, for Fennoscandia, the Barents/Kara seas and the British Isles is provided by Lev Tarasov and co-workers. Tests of different ice and earth models will be performed based on the expertise of each involved modeler. This includes studies on high resolution ice sheets, different rheologies, lateral variations in lithosphere and mantle viscosity and more. This will also be done in co-operation with scientists outside NKG who help in the development and testing of the model. References Ågren, J., Svensson, R. (2007): Postglacial Land Uplift Model and System Definition for the New Swedish Height System RH 2000. Reports in Geodesy and Geographical Information Systems Rapportserie, LMV-Rapport 4, Lantmäteriet, Gävle.
Research on air and missile defense task allocation based on extended contract net protocol
NASA Astrophysics Data System (ADS)
Zhang, Yunzhi; Wang, Gang
2017-10-01
Against the background of distributed cooperative engagement of air and missile defense elements, the problem of allocating interception tasks among multiple weapon units engaging multiple targets under networked conditions is analyzed. First, a mathematical model of task allocation is established through combat task decomposition. Second, an initial assignment based on auction contracts and an adjustment scheme based on swap contracts are introduced into the task allocation. Finally, through simulation of a typical situation, the model is shown to solve the task allocation problem in a complex combat environment.
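A minimal sketch of the two mechanisms named above, under the assumption that bids are engagement costs to be minimized; the cost matrix and the one-unit-per-target constraint are illustrative simplifications, not the paper's actual model.

```python
def auction_allocate(costs):
    """Greedy auction in the spirit of contract-net initialization: each
    target is announced in turn and awarded to the unassigned weapon unit
    bidding the lowest engagement cost. costs[i][j] is the (hypothetical)
    cost for unit i to intercept target j."""
    n_units, n_targets = len(costs), len(costs[0])
    assignment = {}                  # target -> unit
    free_units = set(range(n_units))
    for j in range(n_targets):
        if not free_units:
            break
        winner = min(free_units, key=lambda i: costs[i][j])
        assignment[j] = winner
        free_units.remove(winner)
    return assignment

def swap_improve(costs, assignment):
    """Swap-contract adjustment: exchange the units assigned to two
    targets whenever the swap lowers total cost, until no swap helps."""
    improved = True
    while improved:
        improved = False
        targets = list(assignment)
        for a in range(len(targets)):
            for b in range(a + 1, len(targets)):
                ta, tb = targets[a], targets[b]
                ua, ub = assignment[ta], assignment[tb]
                if costs[ub][ta] + costs[ua][tb] < costs[ua][ta] + costs[ub][tb]:
                    assignment[ta], assignment[tb] = ub, ua
                    improved = True
    return assignment
```

Greedy auctions can lock in poor early awards, which is exactly what the swap stage repairs.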
Potential human health risk from chemical exposure must often be assessed for conditions for which suitable human or animal data are not available, requiring extrapolation across duration and concentration. The default method for exposure-duration adjustment is based on Haber's r...
A Replicable, Zero-Based Model for Marketing Curriculum Innovation
ERIC Educational Resources Information Center
Borin, Norm; Metcalf, Lynn E.; Tietje, Brian C.
2007-01-01
As university curriculums inevitably change, their evolution typically occurs through a series of minor incremental adjustments to individual courses that cause the curriculum to lose strategic consistency and focus. This article demonstrates a zero-based approach to marketing curriculum innovation. The authors describe forces of change that led…
Medium-term electric power demand forecasting based on economic-electricity transmission model
NASA Astrophysics Data System (ADS)
Li, Wenfeng; Bao, Fangmin; Bai, Hongkun; Liu, Wei; Liu, Yongmin; Mao, Yubin; Wang, Jiangbo; Liu, Junhui
2018-06-01
Electricity demand forecasting is basic work for ensuring the safe operation of a power system. Based on the theories of experimental economics and econometrics, this paper introduces the Prognoz Platform 7.2 intelligent adaptive modelling platform and constructs an economic-electricity transmission model (EETM) that considers economic development scenarios and the dynamic adjustment of industrial structure to predict a region's annual electricity demand, realizing accurate prediction of the whole society's electricity consumption. First, based on the theories of experimental economics and econometrics, the paper identifies the economic indicator variables that most strongly drive the growth of electricity consumption and constructs an annual regional macroeconomic forecast model that takes into account the dynamic adjustment of industrial structure. Second, it puts forward an economic-electricity directed conduction theory and constructs an economic-power transfer function to produce grouped forecasts of electricity consumption for the primary industry plus rural residential use, urban residential use, the secondary industry, and the tertiary industry. By comparison with the actual values of economic output and electricity use in Henan province in 2016, the validity of the EETM model is demonstrated, and the electricity consumption of the whole province from 2017 to 2018 is finally predicted.
Modular Bundle Adjustment for Photogrammetric Computations
NASA Astrophysics Data System (ADS)
Börlin, N.; Murtiyoso, A.; Grussenmeyer, P.; Menna, F.; Nocerino, E.
2018-05-01
In this paper we investigate how the residuals in bundle adjustment can be split into a composition of simple functions. According to the chain rule, the Jacobian (linearisation) of the residual can be formed as a product of the Jacobians of the individual steps. When implemented, this enables a modularisation of the computation of the bundle adjustment residuals and Jacobians where each component has limited responsibility. This enables simple replacement of components to e.g. implement different projection or rotation models by exchanging a module. The technique has previously been used to implement bundle adjustment in the open-source package DBAT (Börlin and Grussenmeyer, 2013) based on the Photogrammetric and Computer Vision interpretations of Brown (1971) lens distortion model. In this paper, we applied the technique to investigate how affine distortions can be used to model the projection of a tilt-shift lens. Two extended distortion models were implemented to test the hypothesis that the ordering of the affine and lens distortion steps can be changed to reduce the size of the residuals of a tilt-shift lens calibration. Results on synthetic data confirm that the ordering of the affine and lens distortion steps matter and is detectable by DBAT. However, when applied to a real camera calibration data set of a tilt-shift lens, no difference between the extended models was seen. This suggests that the tested hypothesis is false and that other effects need to be modelled to better explain the projection. The relatively low implementation effort that was needed to generate the models suggest that the technique can be used to investigate other novel projection models in photogrammetry, including modelling changes in the 3D geometry to better understand the tilt-shift lens.
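The chain-rule composition described above can be illustrated with two toy modules. This is not DBAT's code, just a sketch of the modular pattern: each step returns its value and its Jacobian, and the residual Jacobian is the product of the per-module Jacobians.

```python
import numpy as np

def rotate(p, theta):
    """2-D rotation step (toy stand-in for a rotation module).
    Returns the rotated point and the Jacobian d(Rp)/dp = R."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return R @ p, R

def project(p):
    """Perspective-style projection step u = x/y with its Jacobian."""
    x, y = p
    return np.array([x / y]), np.array([[1.0 / y, -x / y**2]])

def composed_residual(p, theta, observed):
    """Chain the two modules; by the chain rule, the Jacobian of the
    composed residual w.r.t. p is the product of the module Jacobians."""
    q, J_rot = rotate(p, theta)
    u, J_proj = project(q)
    return u - observed, J_proj @ J_rot
```

Swapping the order of modules, or exchanging one module for another (e.g. a different distortion model), changes only which Jacobians are multiplied, which is the point of the modular design.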
Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.
Gao, Wei; Kwong, Sam; Jia, Yuheng
2017-08-25
In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter-frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of the CTU-level rate-distortion (R-D) model; the learning-based R-D model is proposed as a way around the legacy "chicken-and-egg" dilemma in video coding. Second, a cooperative bargaining game based on a mixed R-D model is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and the Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and the frame-level quantization parameter (QP) change. Lastly, the intra-frame QP and the inter-frame adaptive bit ratios are adjusted to give inter frames more bit resources, maintaining smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT-based RC method achieves much better R-D performance, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than other state-of-the-art one-pass RC methods, and the achieved R-D performance is very close to the performance limit of the FixedQP method.
Early parental adjustment and bereavement after childhood cancer death.
Barrera, Maru; O'Connor, Kathleen; D'Agostino, Norma Mammone; Spencer, Lynlee; Nicholas, David; Jovcevska, Vesna; Tallet, Susan; Schneiderman, Gerald
2009-07-01
This study comprehensively explored parental bereavement and adjustment at 6 months post-loss due to childhood cancer. Interviews were conducted with 18 mothers and 13 fathers. Interviews were transcribed verbatim and analyzed based on qualitative methodology. A model describing early parental bereavement and adaptation emerged with 3 domains: (1) Perception of the Child, describing bereavement and adjustment prior to and after the loss; (2) Perception of Others, including relationships with partners, surviving children, and their social network; and (3) Perception of the World, exploring parents' perceived meanings of the experience in the context of their worldview. Domains are illustrated by quotes. Profiles of parental bereavement emerged.
Time variation of effective climate sensitivity in GCMs
NASA Astrophysics Data System (ADS)
Williams, K. D.; Ingram, W. J.; Gregory, J. M.
2009-04-01
Effective climate sensitivity is often assumed to be constant (if uncertain), but some previous studies of General Circulation Model (GCM) simulations have found it varying as the simulation progresses. This complicates the fitting of simple models to such simulations, as well as having implications for the estimation of climate sensitivity from observations. This study examines the evolution of the feedbacks determining the climate sensitivity in GCMs submitted to the Coupled Model Intercomparison Project. Apparent centennial-timescale variations of effective climate sensitivity during stabilisation to a forcing can be considered an artefact of using conventional forcings which only allow for instantaneous effects and stratospheric adjustment. If the forcing is adjusted for processes occurring on timescales which are short compared to the climate stabilisation timescale then there is little centennial timescale evolution of effective climate sensitivity in any of the GCMs. We suggest that much of the apparent variation in effective climate sensitivity identified in previous studies is actually due to the comparatively fast forcing adjustment. Persistent differences are found in the strength of the feedbacks between the coupled atmosphere - ocean (AO) versions and their atmosphere - mixed-layer ocean (AML) counterparts, (the latter are often assumed to give the equilibrium climate sensitivity of the AOGCM). The AML model can typically only estimate the equilibrium climate sensitivity of the parallel AO version to within about 0.5K. The adjustment to the forcing to account for comparatively fast processes varies in magnitude and sign between GCMs, as well as differing between AO and AML versions of the same model. There is evidence from one AOGCM that the forcing adjustment may take a couple of decades, with implications for observationally based estimates of equilibrium climate sensitivity. 
We suggest that at least some of the spread in 21st century global temperature predictions between GCMs is due to differing adjustment processes, hence work to understand these differences should be a priority.
ERIC Educational Resources Information Center
Hobbs, Robert Dean
2012-01-01
Evidence-based outcomes in the literature have caused adjustments in neuro-psycholinguistic and sociolinguistic perspectives that indicate a need for a current model of education. Implications from research suggest the new model of education should use a multilingual framework: L3 enhances and reinforces L2 and L1, if L2 and L1 are supported. The…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Annoni, Jennifer; Gebraad, Pieter M. O.; Scholbrock, Andrew K.
2015-08-14
Wind turbines are typically operated to maximize their performance without considering the impact of wake effects on nearby turbines. Wind plant control concepts aim to increase overall wind plant performance by coordinating the operation of the turbines. This paper focuses on axial-induction-based wind plant control techniques, in which the generator torque or blade pitch degrees of freedom of the wind turbines are adjusted. The paper addresses discrepancies between a high-order wind plant model and an engineering wind plant model. Changes in the engineering model are proposed to better capture the effects of axial-induction-based control shown in the high-order model.
Hines, Cynthia J.; Deddens, James A.; Coble, Joseph; Kamel, Freya; Alavanja, Michael C. R.
2011-01-01
Objectives: To identify and quantify determinants of captan exposure among 74 private orchard pesticide applicators in the Agricultural Health Study (AHS). To adjust an algorithm used for estimating pesticide exposure intensity in the AHS based on these determinants and to compare the correlation of the adjusted and unadjusted algorithms with urinary captan metabolite levels. Methods: External exposure metrics included personal air, hand rinse, and dermal patch samples collected from each applicator on 2 days in 2002–2003. A 24-h urine sample was also collected. Exposure determinants were identified for each external metric using multiple linear regression models via the NLMIXED procedure in SAS. The AHS algorithm was adjusted, consistent with the identified determinants. Mixed-effect models were used to evaluate the correlation between the adjusted and unadjusted algorithm and urinary captan metabolite levels. Results: Consistent determinants of captan exposure were a measure of application size (kilograms of captan sprayed or application method), wearing chemical-resistant (CR) gloves and/or a coverall/suit, repairing spray equipment, and product formulation. Application by airblast was associated with a 4- to 5-fold increase in exposure as compared to hand spray. Wearing CR gloves reduced exposure to the hands, right thigh, and left forearm by ∼80% on average; wearing a coverall/suit reduced exposure to the right and left thighs and right forearm by ∼70%. Applicators using wettable powder formulations had significantly higher air, thigh, and forearm exposures than those using liquid formulations. Application method weights in the AHS algorithm were adjusted to nine for airblast and two for hand spray; protective equipment reduction factors were adjusted to 0.2 (CR gloves), 0.3 (coverall/suit), and 0.1 (both). 
Conclusions: Adjustment of application method, CR glove, and coverall weights in the AHS algorithm based on our exposure determinant findings substantially improved the correlation between the AHS algorithm and urinary metabolite levels. PMID:21427168
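The adjusted weights can be combined into a relative intensity score. This sketch assumes the AHS algorithm's general additive form, (MIX + APPLY + REPAIR) × PPE; the function name and the zeroed mixing/repair contributions are illustrative stand-ins, not values reported in this study.

```python
# Hedged sketch of an AHS-style exposure-intensity score using the adjusted
# weights reported above. MIX and REPAIR default to 0 as placeholders.

APPLICATION_WEIGHTS = {"airblast": 9, "hand_spray": 2}   # adjusted weights
PPE_FACTORS = {"cr_gloves": 0.2, "coverall": 0.3, "both": 0.1, "none": 1.0}

def intensity_score(application_method, ppe, mix_weight=0, repair_weight=0):
    """Relative exposure intensity for one applicator-day."""
    apply_weight = APPLICATION_WEIGHTS[application_method]
    return (mix_weight + apply_weight + repair_weight) * PPE_FACTORS[ppe]

# Airblast with no protective equipment vs. hand spray with CR gloves + coverall:
high = intensity_score("airblast", "none")       # 9 * 1.0 = 9.0
low = intensity_score("hand_spray", "both")      # 2 * 0.1 = 0.2
```

With these adjusted weights, the most exposed scenario scores roughly 45 times the least exposed one, consistent with the 4- to 5-fold application-method effect multiplied by the ∼90% protection from combined PPE.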
NASA Astrophysics Data System (ADS)
Huang, Wei; Zhang, Xingnan; Li, Chenming; Wang, Jianying
Management of group decision making is an important issue in water resources management. To overcome the lack of effective communication and cooperation in existing decision-making models, this paper proposes a multi-layer dynamic model for coordination in group decision making for water resource allocation and scheduling. By introducing a scheme-recognized cooperative satisfaction index and a scheme-adjusted rationality index, the proposed model addresses the poor convergence of multi-round decision-making processes in water resource allocation and scheduling. Furthermore, the coordination of group decision making under limited resources can be handled through distance-based group conflict resolution. The simulation results show that the proposed model converges better than existing models.
DL-ReSuMe: A Delay Learning-Based Remote Supervised Method for Spiking Neurons.
Taherkhani, Aboozar; Belatreche, Ammar; Li, Yuhua; Maguire, Liam P
2015-12-01
Recent research has shown the potential of spiking neural networks (SNNs) to model complex information processing in the brain. There is biological evidence that the precise timing of spikes is used for information coding. However, the exact learning mechanism by which a neuron is trained to fire at precise times remains an open problem. The majority of existing learning methods for SNNs are based on weight adjustment. However, there is also biological evidence that synaptic delay is not constant. In this paper, a learning method for spiking neurons, called the delay learning remote supervised method (DL-ReSuMe), is proposed, which merges a delay-shift approach with ReSuMe-based weight adjustment to enhance learning performance. DL-ReSuMe uses more biologically plausible properties, such as delay learning, and needs less weight adjustment than ReSuMe. Simulation results show that the proposed DL-ReSuMe approach improves learning accuracy and learning speed compared with ReSuMe.
NASA Astrophysics Data System (ADS)
Völker, Benjamin; Landis, Chad M.; Kamlah, Marc
2012-03-01
Within a knowledge-based multiscale simulation approach for ferroelectric materials, the atomic level can be linked to the mesoscale by transferring results from first-principles calculations into a phase-field model. A recently presented routine (Völker et al 2011 Contin. Mech. Thermodyn. 23 435-51) for adjusting the Helmholtz free energy coefficients to intrinsic and extrinsic ferroelectric material properties obtained by DFT calculations and atomistic simulations was subject to certain limitations: because too few degrees of freedom were available, an independent adjustment of the spontaneous strains and piezoelectric coefficients was not possible, and the elastic properties could only be considered in cubic instead of tetragonal symmetry. In this work we overcome these restrictions by expanding the formulation of the free energy function, i.e. by motivating and introducing new higher-order terms that have not appeared in the literature before. Subsequently we present an improved version of the adjustment procedure for the free energy coefficients that is based solely on input parameters from first-principles calculations performed by Marton and Elsässer, as documented in Völker et al (2011 Contin. Mech. Thermodyn. 23 435-51). Full sets of adjusted free energy coefficients for PbTiO3 and tetragonal Pb(Zr,Ti)O3 are presented, and the benefits of the newly introduced higher-order free energy terms are discussed.
NASA Astrophysics Data System (ADS)
Zack, J. W.
2015-12-01
Predictions from Numerical Weather Prediction (NWP) models are the foundation for wind power forecasts for day-ahead and longer forecast horizons. The NWP models directly produce three-dimensional wind forecasts on their respective computational grids. These can be interpolated to the location and time of interest. However, these direct predictions typically contain significant systematic errors ("biases"). This is due to a variety of factors including the limited space-time resolution of the NWP models and shortcomings in the model's representation of physical processes. It has become common practice to attempt to improve the raw NWP forecasts by statistically adjusting them through a procedure that is widely known as Model Output Statistics (MOS). The challenge is to identify complex patterns of systematic errors and then use this knowledge to adjust the NWP predictions. The MOS-based improvements are the basis for much of the value added by commercial wind power forecast providers. There are an enormous number of statistical approaches that can be used to generate the MOS adjustments to the raw NWP forecasts. In order to obtain insight into the potential value of some of the newer and more sophisticated statistical techniques often referred to as "machine learning methods" a MOS-method comparison experiment has been performed for wind power generation facilities in 6 wind resource areas of California. The underlying NWP models that provided the raw forecasts were the two primary operational models of the US National Weather Service: the GFS and NAM models. The focus was on 1- and 2-day ahead forecasts of the hourly wind-based generation. 
The statistical methods evaluated included: (1) screening multiple linear regression, which served as a baseline method, (2) artificial neural networks, (3) a decision-tree approach called random forests, (4) gradient boosted regression based upon a decision-tree algorithm, (5) support vector regression and (6) analog ensemble, which is a case-matching scheme. The presentation will provide (1) an overview of each method and the experimental design, (2) performance comparisons based on standard metrics such as bias, MAE and RMSE, (3) a summary of the performance characteristics of each approach and (4) a preview of further experiments to be conducted.
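The baseline screening-regression MOS step can be sketched as follows: fit observed generation on raw NWP-derived predictors over a training period, then apply the fitted relation to correct new raw forecasts. The data, coefficients, and single-predictor setup here are all invented for illustration.

```python
import numpy as np

# Synthetic example: raw NWP forecasts carry a systematic bias that a simple
# linear MOS relation removes.
rng = np.random.default_rng(0)
raw_wind = rng.uniform(3.0, 12.0, 500)                  # raw NWP wind forecast (m/s)
obs = 0.8 * raw_wind - 1.5 + rng.normal(0, 0.3, 500)    # observations with systematic error

X = np.column_stack([np.ones_like(raw_wind), raw_wind]) # intercept + predictor
coef, *_ = np.linalg.lstsq(X, obs, rcond=None)          # ordinary least squares fit

def mos_adjust(raw):
    """Bias-corrected forecast from the fitted linear MOS relation."""
    return coef[0] + coef[1] * raw

corrected = mos_adjust(raw_wind)
raw_bias = np.mean(obs - raw_wind)      # large systematic error in the raw forecast
mos_bias = np.mean(obs - corrected)     # essentially zero after MOS correction
```

In practice the screening step selects among many candidate predictors (wind components at several levels, stability measures, etc.); the machine-learning methods listed above replace the linear fit with more flexible function approximators.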
Apparatus and method for controlling autotroph cultivation
Fuxman, Adrian M; Tixier, Sebastien; Stewart, Gregory E; Haran, Frank M; Backstrom, Johan U; Gerbrandt, Kelsey
2013-07-02
A method includes receiving at least one measurement of a dissolved carbon dioxide concentration of a mixture of fluid containing an autotrophic organism. The method also includes determining an adjustment to one or more manipulated variables using the at least one measurement. The method further includes generating one or more signals to modify the one or more manipulated variables based on the determined adjustment. The one or more manipulated variables could include a carbon dioxide flow rate, an air flow rate, a water temperature, and an agitation level for the mixture. At least one model relates the dissolved carbon dioxide concentration to the one or more manipulated variables, and the adjustment could be determined by using the at least one model to drive the dissolved carbon dioxide concentration to at least one target that optimizes a goal function. The goal function could be to maximize biomass growth rate, nutrient removal, and/or lipid production.
Funding issues for Victorian hospitals: the risk-adjusted vision beyond casemix funding.
Antioch, K; Walsh, M
2000-01-01
This paper discusses casemix funding issues in Victoria that affect teaching hospitals. For casemix payments to be acceptable, the average price and cost weights must be set at an appropriate standard. The average price is set on a normative policy basis rather than through benchmarking. The 'averaging principle' inherent in cost weights has resulted in some AN-DRG weights being too low for teaching hospitals that are key State-wide providers of high-complexity services such as neurosurgery and trauma. Casemix data have been analysed using international risk adjustment methodologies to successfully negotiate with the Victorian State Government for specified grants for several high-complexity AN-DRGs. A risk-adjusted capitation funding model, called an Australian Health Maintenance Organisation (AHMO), has also been developed for cystic fibrosis patients treated by The Alfred. This will facilitate the development of similar models by both the Victorian and Federal governments.
Statistical primer: propensity score matching and its alternatives.
Benedetto, Umberto; Head, Stuart J; Angelini, Gianni D; Blackstone, Eugene H
2018-06-01
Propensity score (PS) methods offer certain advantages over more traditional regression methods to control for confounding by indication in observational studies. Although multivariable regression models adjust for confounders by modelling the relationship between covariates and outcome, PS methods estimate the treatment effect by modelling the relationship between confounders and treatment assignment. Therefore, methods based on the PS are not limited by the number of events, and their use may be warranted when the number of confounders is large or the number of outcomes is small. The PS is the probability for a subject to receive a treatment conditional on a set of baseline characteristics (confounders). The PS is commonly estimated using logistic regression, and it is used to match patients with a similar distribution of confounders so that the difference in outcomes gives an unbiased estimate of the treatment effect. This review summarizes basic concepts of PS matching and provides guidance in implementing matching and other methods based on the PS, such as stratification, weighting and covariate adjustment.
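The two-step workflow described above (estimate the PS, then match on it) can be sketched on synthetic data. The hand-rolled gradient-ascent logistic fit stands in for the usual statistics-package call; the confounder, treatment rule, and true effect (2.0) are all invented.

```python
import numpy as np

# Synthetic data: older patients are more likely to be treated, and age also
# raises the outcome, so the naive comparison is confounded.
rng = np.random.default_rng(1)
n = 2000
age = rng.normal(60.0, 10.0, n)                              # single confounder
treated = rng.random(n) < 1 / (1 + np.exp(-(age - 60) / 5))  # older -> more treatment
outcome = 0.1 * age + 2.0 * treated + rng.normal(0.0, 1.0, n)

# PS model: logistic regression of treatment on the (standardised) confounder.
z = (age - age.mean()) / age.std()
X = np.column_stack([np.ones(n), z])
w = np.zeros(2)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.05 / n * (X.T @ (treated - p))   # gradient ascent on the log-likelihood
ps = 1 / (1 + np.exp(-X @ w))

# 1:1 nearest-neighbour matching (with replacement) of treated to controls on the PS.
t_idx = np.flatnonzero(treated)
c_idx = np.flatnonzero(~treated)
nearest = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]

naive_effect = outcome[treated].mean() - outcome[~treated].mean()   # confounded
matched_effect = (outcome[t_idx] - outcome[nearest]).mean()         # close to 2.0
```

Matching balances the confounder between the groups being compared, so the matched difference approaches the true effect while the naive difference absorbs the age imbalance; real implementations add calipers, balance diagnostics, and matched-pair variance estimation.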
Gait Planning and Stability Control of a Quadruped Robot
Li, Junmin; Wang, Jinge; Yang, Simon X.; Zhou, Kedong; Tang, Huijuan
2016-01-01
In order to realize smooth gait planning and stability control of a quadruped robot, a new controller algorithm based on CPG-ZMP (central pattern generator-zero moment point) is put forward in this paper. To generate smooth gaits and shorten the adjustment time of the model's oscillation system, a new CPG model controller and its gait-switching strategy based on the Wilson-Cowan model are presented. The control signals of the knee and hip joints are obtained by the improved multi-DOF reduced-order control theory. To realize stability control, adaptive speed adjustment and gait switching are performed based on real-time computation of the ZMP. Experimental results show that the quadruped robot's gaits are efficiently generated and gait switching is smooth under the CPG control algorithm. Meanwhile, the stability of the robot's movement is greatly improved with the CPG-ZMP algorithm. The algorithm is practical and lays a foundation for the production of a robot prototype. PMID:27143959
Paternal age at childbirth and eating disorders in offspring.
Javaras, K N; Rickert, M E; Thornton, L M; Peat, C M; Baker, J H; Birgegård, A; Norring, C; Landén, M; Almqvist, C; Larsson, H; Lichtenstein, P; Bulik, C M; D'Onofrio, B M
2017-02-01
Advanced paternal age at childbirth is associated with psychiatric disorders in offspring, including schizophrenia, bipolar disorder and autism. However, few studies have investigated paternal age's relationship with eating disorders in offspring. In a large, population-based cohort, we examined the association between paternal age and offspring eating disorders, and whether that association remains after adjustment for potential confounders (e.g. parental education level) that may be related to late/early selection into fatherhood and to eating disorder incidence. Data for 2 276 809 individuals born in Sweden 1979-2001 were extracted from Swedish population and healthcare registers. The authors used Cox proportional hazards models to examine the effect of paternal age on the first incidence of healthcare-recorded anorexia nervosa (AN) and all eating disorders (AED) occurring 1987-2009. Models were adjusted for sex, birth order, maternal age at childbirth, and maternal and paternal covariates including country of birth, highest education level, and lifetime psychiatric and criminal history. Even after adjustment for covariates including maternal age, advanced paternal age was associated with increased risk, and younger paternal age with decreased risk, of AN and AED. For example, the fully adjusted hazard ratio for the 45+ years (v. the 25-29 years) paternal age category was 1.32 [95% confidence interval (CI) 1.14-1.53] for AN and 1.26 (95% CI 1.13-1.40) for AED. In this large, population-based cohort, paternal age at childbirth was positively associated with eating disorders in offspring, even after adjustment for potential confounders. Future research should further explore potential explanations for the association, including de novo mutations in the paternal germline.
Use of medical care biases associations between Parkinson disease and other medical conditions.
Gross, Anat; Racette, Brad A; Camacho-Soto, Alejandra; Dube, Umber; Searles Nielsen, Susan
2018-06-12
To examine how use of medical care biases the well-established associations between Parkinson disease (PD) and smoking, smoking-related cancers, and selected positively associated comorbidities. We conducted a population-based, case-control study of 89,790 incident PD cases and 118,095 randomly selected controls, all Medicare beneficiaries aged 66 to 90 years. We ascertained PD and other medical conditions using ICD-9-CM codes from comprehensive claims data for the 5 years before PD diagnosis/reference. We used logistic regression to estimate age-, sex-, and race-adjusted odds ratios (ORs) between PD and each other medical condition of interest. We then examined the effect of also adjusting for selected geographic- or individual-level indicators of use of care. Models without adjustment for use of care and those that adjusted for geographic-level indicators produced similar ORs. However, adjustment for individual-level indicators consistently decreased ORs: Relative to ORs without adjustment for use of care, all ORs were between 8% and 58% lower, depending on the medical condition and the individual-level indicator of use of care added to the model. ORs decreased regardless of whether the established association is known to be positive or inverse. Most notably, smoking and smoking-related cancers were positively associated with PD without adjustment for use of care, but appropriately became inversely associated with PD with adjustment for use of care. Use of care should be considered when evaluating associations between PD and other medical conditions to ensure that positive associations are not attributable to bias and that inverse associations are not masked. © 2018 American Academy of Neurology.
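The study adjusts for use-of-care indicators inside logistic regression; as a simpler stand-in, a stratified Mantel-Haenszel odds ratio over invented counts illustrates the same mechanism, where adjusting for use of care collapses an apparent association. All numbers below are fabricated for the example.

```python
# Within each use-of-care stratum the condition-PD odds ratio is 1.0, but heavy
# users of care have both more recorded PD and more recorded comorbidity, so
# pooling the strata manufactures a spurious positive association.

def odds_ratio(a, b, c, d):
    """OR for a 2x2 table: a,b = cases with/without the condition; c,d = controls."""
    return (a * d) / (b * c)

# (a, b, c, d) per use-of-care stratum (invented counts)
low_use  = (10, 90, 40, 360)    # light users: little recorded disease overall
high_use = (320, 80, 80, 20)    # heavy users: more PD and more recorded comorbidity

crude = odds_ratio(*(sum(x) for x in zip(low_use, high_use)))  # > 6, confounded

def mantel_haenszel(strata):
    """Stratum-adjusted common odds ratio."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

adjusted = mantel_haenszel([low_use, high_use])   # back to the within-stratum 1.0
```

The direction of the shift matches the abstract's finding: ignoring use of care inflates positive associations and can mask inverse ones.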
PID feedback controller used as a tactical asset allocation technique: The G.A.M. model
NASA Astrophysics Data System (ADS)
Gandolfi, G.; Sabatini, A.; Rossolini, M.
2007-09-01
The objective of this paper is to illustrate a tactical asset allocation technique utilizing the PID controller. The proportional-integral-derivative (PID) controller is widely applied in most industrial processes; it has been used successfully for over 50 years and is found in more than 95% of process plant control loops. It is a robust and easily understood algorithm that can provide excellent control performance in spite of the diverse dynamic characteristics of the process plant. In finance, the process plant controlled by the PID controller can be represented by financial market assets forming a portfolio. More specifically, in the present work, the plant is represented by a risk-adjusted return variable. The main target of money and portfolio managers is to achieve a relevant risk-adjusted return in their managing activities. In the literature and in the financial industry, numerous kinds of return/risk ratios are commonly studied and used. The aim of this work is to perform a tactical asset allocation technique consisting of the optimization of risk-adjusted return by means of asset allocation methodologies based on the PID model-free feedback control procedure. The process plant does not need to be mathematically modeled: the PID control action lies in altering the portfolio asset weights, according to the PID algorithm and its Ziegler-Nichols-tuned parameters, in order to approach the desired portfolio risk-adjusted return efficiently.
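The control loop described above can be sketched in a few lines. The gains, setpoint, and single rebalancing step below are invented for illustration (not Ziegler-Nichols tuned), and the clamped weight update is one plausible way to map controller output onto an allocation.

```python
# Discrete PID loop nudging a portfolio's risky-asset weight so that the
# realised risk-adjusted return tracks a setpoint.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured, dt=1.0):
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.5, ki=0.1, kd=0.05)
target_ratio = 1.0        # desired risk-adjusted return (e.g. a Sharpe-like ratio)
weight = 0.5              # current allocation to the risky asset

# One rebalancing step: the controller output adjusts the asset weight.
measured_ratio = 0.6      # ratio observed over the last period (stand-in value)
weight += pid.step(target_ratio, measured_ratio)
weight = min(max(weight, 0.0), 1.0)   # keep the weight a valid allocation
```

The appeal of the model-free framing is visible here: the controller never models how markets map weights to risk-adjusted return; it only reacts to the tracking error, exactly as an industrial PID reacts to a process variable.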
A Developmental Sequence Model to University Adjustment of International Undergraduate Students
ERIC Educational Resources Information Center
Chavoshi, Saeid; Wintre, Maxine Gallander; Dentakos, Stella; Wright, Lorna
2017-01-01
The current study proposes a Developmental Sequence Model to University Adjustment and uses a multifaceted measure, including academic, social and psychological adjustment, to examine factors predictive of undergraduate international student adjustment. A hierarchic regression model is carried out on the Student Adaptation to College Questionnaire…
ERIC Educational Resources Information Center
Lopez, Maria Jose Gonzalez
2006-01-01
The search for more flexibility in financial management of public universities demands adjustments in budgeting strategies. International studies on this topic recommend wider financial autonomy for management units, the use of budgeting models based on performance, the implementation of formula systems for the determination of financial needs of…
Development of a Medicaid Behavioral Health Case-Mix Model
ERIC Educational Resources Information Center
Robst, John
2009-01-01
Many Medicaid programs have either fully or partially carved out mental health services. The evaluation of carve-out plans requires a case-mix model that accounts for differing health status across Medicaid managed care plans. This article develops a diagnosis-based case-mix adjustment system specific to Medicaid behavioral health care. Several…
The subject paper describes a procedure for adjusting a risk model based upon a measure of personal exposure (the "UK personal exposure model") in order to attribute an expected rate of gastroenteritis among a group of swimmers to a mean recreational water quality value (enteroco...
Han, L. F; Plummer, Niel
2016-01-01
Numerous methods have been proposed to estimate the pre-nuclear-detonation 14C content of dissolved inorganic carbon (DIC) recharged to groundwater that has been corrected/adjusted for geochemical processes in the absence of radioactive decay (14C0) - a quantity that is essential for estimation of the radiocarbon age of DIC in groundwater. The models/approaches most commonly used are grouped as follows: (1) single-sample-based models, (2) a statistical approach based on the observed (curved) relationship between 14C and δ13C data for the aquifer, and (3) the geochemical mass-balance approach that constructs adjustment models accounting for all the geochemical reactions known to occur along a groundwater flow path. This review first discusses the geochemical processes behind each of the single-sample-based models, followed by discussions of the statistical approach and the geochemical mass-balance approach. Finally, the applications, advantages and limitations of the three groups of models/approaches are discussed. The single-sample-based models constitute the prevailing use of 14C data in hydrogeology and hydrological studies. This is in part because the models are applied to an individual water sample to estimate the 14C age, and therefore the measurement data are easily available. These models have been shown to provide realistic radiocarbon ages in many studies. However, they usually are limited to simple carbonate aquifers, and selection of the model may have significant effects on 14C0, often resulting in a wide range of estimates of 14C ages. Of the single-sample-based models, four are recommended for the estimation of 14C0 of DIC in groundwater: Pearson's model (Ingerson and Pearson, 1964; Pearson and White, 1967), Han & Plummer's model (Han and Plummer, 2013), the IAEA model (Gonfiantini, 1972; Salem et al., 1980), and Oeschger's model (Geyh, 2000). 
These four models include all processes considered in single-sample-based models and can be used over different ranges of δ13C values. In contrast to the single-sample-based models, the extended Gonfiantini & Zuppi model (Gonfiantini and Zuppi, 2003; Han et al., 2014) is a statistical approach. This approach can be used to estimate 14C ages when a curved relationship between the 14C and δ13C values of the DIC data is observed. In addition to estimation of groundwater ages, the relationship between 14C and δ13C data can be used to interpret hydrogeological characteristics of the aquifer, e.g. estimating apparent rates of geochemical reactions and revealing the complexity of the geochemical environment, and to identify samples that are not affected by the same set of reactions/processes as the rest of the dataset. The investigated water samples may have a wide range of ages, and for waters with very low values of 14C, the model based on statistics may give more reliable age estimates than those obtained from single-sample-based models. In the extended Gonfiantini & Zuppi model, a representative system-wide value of the initial 14C content is derived from the 14C and δ13C data of DIC and can differ from that used in single-sample-based models. Therefore, the extended Gonfiantini & Zuppi model usually avoids the effect of modern water components which might retain ‘bomb’ pulse signatures. The geochemical mass-balance approach constructs an adjustment model that accounts for all the geochemical reactions known to occur along an aquifer flow path (Plummer et al., 1983; Wigley et al., 1978; Plummer et al., 1994; Plummer and Glynn, 2013), and includes, in addition to DIC, dissolved organic carbon (DOC) and methane (CH4). If sufficient chemical, mineralogical and isotopic data are available, the geochemical mass-balance method can yield the most accurate estimates of the adjusted radiocarbon age. 
The main limitation of this approach is that it requires complete chemical, mineralogical and isotopic data, and such data are often unavailable. Failure to recognize the limitations and underlying assumptions on which the various models and approaches are based can result in a wide range of estimates of 14C0 and limit the usefulness of radiocarbon as a dating tool for groundwater. In each of the three generalized approaches (single-sample-based models, the statistical approach, and the geochemical mass-balance approach), successful application depends on scrutiny of the isotopic (14C and δ13C) and chemical data to conceptualize the reactions and processes that affect the 14C content of DIC in aquifers. The recently developed graphical analysis method is shown to aid in determining which approach is most appropriate for the isotopic and chemical data from a groundwater system.
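Of the recommended single-sample models, Pearson's adjustment is the simplest to illustrate: the fraction of DIC derived from soil CO2 is inferred from a δ13C mass balance, the initial 14C content is scaled by that fraction, and the age follows from radioactive decay. The end-member values below are typical illustrative choices, not measurements; 8267 yr is the 5730-yr half-life divided by ln 2.

```python
import math

def pearson_age(c14_meas, d13c_dic, d13c_soil=-23.0, d13c_carb=0.0, c14_soil=100.0):
    """Radiocarbon age (years) of DIC after Pearson's delta-13C mixing correction."""
    q = (d13c_dic - d13c_carb) / (d13c_soil - d13c_carb)  # soil-CO2 fraction of DIC
    c14_0 = q * c14_soil                                  # adjusted initial 14C (pmC)
    return 8267.0 * math.log(c14_0 / c14_meas)            # decay equation

# Half the DIC from soil CO2 (delta-13C = -11.5 permil) gives 14C0 = 50 pmC,
# so a measured 25 pmC yields roughly one half-life (~5730 yr) rather than the
# ~11,460 yr an unadjusted 14C0 = 100 pmC would imply.
age = pearson_age(c14_meas=25.0, d13c_dic=-11.5)
```

This makes the review's central point concrete: the choice of 14C0 model, not the 14C measurement itself, can change the apparent age by thousands of years.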
Research on Environmental Adjustment of Cloud Ranch Based on BP Neural Network PID Control
NASA Astrophysics Data System (ADS)
Ren, Jinzhi; Xiang, Wei; Zhao, Lin; Wu, Jianbo; Huang, Lianzhen; Tu, Qinggang; Zhao, Heming
2018-01-01
To gradually replace traditional manual management with intelligent ranch management, this paper proposes a pasture environment control system based on a cloud server and puts forward a PID control algorithm based on a BP neural network to better control temperature and humidity in the pasture environment. First, the temperature and humidity of the pasture (the controlled object) are modelled to obtain a transfer function. Then the traditional PID control algorithm and the BP-neural-network-based PID algorithm are applied to the transfer function. The resulting step-tracking curves show that the PID controller based on a BP neural network is clearly superior in settling time, error, and other measures. By calculating reasonable control parameters for temperature and humidity, this algorithm is well suited to environmental control on the cloud service platform.
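The core idea, PID gains adapted online rather than fixed, can be sketched with a single adaptive element standing in for the full BP network: an incremental PID whose gains follow a gradient step on the squared tracking error. The first-order plant, gains, and learning rate below are all invented for illustration.

```python
def plant(y, u):
    """First-order stand-in for the barn's temperature dynamics."""
    return 0.9 * y + 0.1 * u

setpoint, y = 25.0, 15.0      # target and initial temperature (degrees C)
kp, ki, kd = 0.5, 0.05, 0.01  # initial gains, adapted online below
lr = 1e-4                     # learning rate for the gain updates
e_prev = e_prev2 = 0.0
u = 0.0
for _ in range(300):
    e = setpoint - y
    # incremental PID: the control increment expressed in three error terms
    du_p, du_i, du_d = e - e_prev, e, e - 2 * e_prev + e_prev2
    u += kp * du_p + ki * du_i + kd * du_d
    # gradient step on E = e^2/2, taking the sign of the plant gain as +1
    kp += lr * e * du_p
    ki += lr * e * du_i
    kd += lr * e * du_d
    e_prev2, e_prev = e_prev, e
    y = plant(y, u)

final_error = abs(setpoint - y)   # small: the adapted loop tracks the setpoint
```

In the paper's scheme a BP network produces all three gains from the error signals each step; this sketch keeps only the gradient-update idea, which is what lets the controller shorten settling time compared to fixed gains.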
Yang, Y-M; Lee, J; Kim, Y-I; Cho, B-H; Park, S-B
2014-08-01
This study aimed to determine the viability of using axial cervical vertebrae (ACV) as biological indicators of skeletal maturation and to build models that estimate ossification level with improved explanatory power over models based only on chronological age. The study population comprised 74 female and 47 male patients with available hand-wrist radiographs and cone-beam computed tomography images. Generalized Procrustes analysis was used to analyze the shape, size, and form of the ACV regions of interest. The variabilities of these factors were analyzed by principal component analysis. Skeletal maturation was then estimated using a multiple regression model. Separate models were developed for male and female participants. For the female estimation model, the adjusted R² explained 84.8% of the variability of the Sempé maturation level (SML), representing a 7.9% increase in SML explanatory power over that using chronological age alone (76.9%). For the male estimation model, the adjusted R² was over 90%, representing a 1.7% increase relative to the reference model. The simplest possible ACV morphometric information provided a statistically significant explanation of the portion of skeletal-maturation variability not dependent on chronological age. These results verify that ACV is a strong biological indicator of ossification status. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
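The adjusted R² figures quoted above penalise model size, which matters when comparing models with different numbers of morphometric predictors. For reference, the standard formula applied to illustrative numbers (not the study's actual predictor counts):

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2 for n observations and p predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# e.g. a raw R^2 of 0.87 with 74 observations and 4 predictors (hypothetical
# values chosen only to show the size of the penalty):
adj = adjusted_r2(0.87, n=74, p=4)   # slightly below the raw 0.87
```

Because the penalty grows with p, adding ACV shape components only raises adjusted R² when their contribution outweighs the extra degrees of freedom they consume.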
Validation of a Parametric Approach for 3d Fortification Modelling: Application to Scale Models
NASA Astrophysics Data System (ADS)
Jacquot, K.; Chevrier, C.; Halin, G.
2013-02-01
The parametric modelling approach applied to cultural heritage virtual representation is a field of research that has been explored for years, since it can address many limitations of digitising tools. For example, essential historical sources for fortification virtual reconstructions, such as plans-reliefs, have several shortcomings when they are scanned. To overcome those problems, knowledge-based modelling can be used: knowledge models based on the analysis of theoretical literature of a specific domain, such as bastioned fortification treatises, can be the cornerstone of the creation of a parametric library of fortification components. Implemented in Grasshopper, these components are manually adjusted on the data available (i.e. 3D surveys of plans-reliefs or scanned maps). Most of the fortification area is now modelled, and the question of accuracy assessment arises. A specific method is used to evaluate the accuracy of the parametric components. The results of the assessment process will allow us to validate the parametric approach. The automation of the adjustment process can finally be planned. The virtual model of fortification is part of a larger project aimed at valorising and diffusing a very unique cultural heritage item: the collection of plans-reliefs. As such, knowledge models are precious assets when automation and semantic enhancements are considered.
Rose, Amanda J.; Rudolph, Karen D.
2011-01-01
Theory and research on sex differences in adjustment focus largely on parental, societal, and biological influences. However, it also is important to consider how peers contribute to girls’ and boys’ development. This paper provides a critical review of sex differences in several peer-relationship processes, including behavioral and social-cognitive styles, stress and coping, and relationship provisions. Based on this review, a speculative peer-socialization model is presented that considers the implications of these sex differences for girls’ and boys’ emotional and behavioral development. Central to this model is the idea that sex-linked relationship processes have costs and benefits for girls’ and boys’ adjustment. Finally, we present recent research testing certain model components and propose approaches for testing understudied aspects of the model. PMID:16435959
Procedures for adjusting regional regression models of urban-runoff quality using local data
Hoos, A.B.; Sisolak, J.K.
1993-01-01
Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local data base is used as a calibration data set. Regression coefficients are determined from the local data base, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model. The four MAPs examined in this study were: single-factor regression against the regional model prediction, P (termed MAP-1F-P); regression against P (termed MAP-R-P); regression against P and additional local variables (termed MAP-R-P+nV); and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test data bases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were tested for sensitivity to the size of a calibration data set. As expected, predictive accuracy of all MAPs for the verification data set decreased as the calibration data-set size decreased, but predictive accuracy was not as sensitive for the MAPs as it was for the local regression models.
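The core of the regression-against-the-regional-prediction procedures can be sketched in a few lines: fit local observations against the regional model's predictions, then use the fitted line to adjust new predictions. The data values below are made up purely for illustration.

```python
# Sketch of a MAP in the regression-against-P family: calibrate local observed
# storm loads against regional-model predictions P, then adjust new predictions.
# The five data points are invented for illustration only.
import numpy as np

regional_pred = np.array([1.2, 2.5, 0.8, 3.1, 1.9])   # P from regional model
local_obs     = np.array([1.5, 2.9, 1.1, 3.8, 2.2])   # observed local loads

# least-squares calibration: local ≈ b0 + b1 * P
b1, b0 = np.polyfit(regional_pred, local_obs, 1)

def adjusted_prediction(p_regional):
    """Adjusted storm-load prediction for an unmonitored local site."""
    return b0 + b1 * p_regional

adj = adjusted_prediction(2.0)
```

The single-factor variant would omit the intercept, and MAP-R-P+nV would add further local explanatory variables to the design matrix; the calibration step is otherwise the same.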
Duthie, A Bradley; Bocedi, Greta; Reid, Jane M
2016-09-01
Polyandry is often hypothesized to evolve to allow females to adjust the degree to which they inbreed. Multiple factors might affect such evolution, including inbreeding depression, direct costs, constraints on male availability, and the nature of polyandry as a threshold trait. Complex models are required to evaluate when evolution of polyandry to adjust inbreeding is predicted to arise. We used a genetically explicit individual-based model to track the joint evolution of inbreeding strategy and polyandry defined as a polygenic threshold trait. Evolution of polyandry to avoid inbreeding only occurred given strong inbreeding depression, low direct costs, and severe restrictions on initial versus additional male availability. Evolution of polyandry to prefer inbreeding only occurred given zero inbreeding depression and direct costs, and given similarly severe restrictions on male availability. However, due to its threshold nature, phenotypic polyandry was frequently expressed even when strongly selected against and hence maladaptive. Further, the degree to which females adjusted inbreeding through polyandry was typically very small, and often reflected constraints on male availability rather than adaptive reproductive strategy. Evolution of polyandry solely to adjust inbreeding might consequently be highly restricted in nature, and such evolution cannot necessarily be directly inferred from observed magnitudes of inbreeding adjustment. © 2016 The Author(s). Evolution published by Wiley Periodicals, Inc. on behalf of The Society for the Study of Evolution.
Scivoletto, Giorgio; Glass, Clive; Anderson, Kim D; Galili, Tal; Benjamin, Yoav; Front, Lilach; Aidinoff, Elena; Bluvshtein, Vadim; Itzkovich, Malka; Aito, Sergio; Baroncini, Ilaria; Benito-Penalva, Jesùs; Castellano, Simona; Osman, Aheed; Silva, Pedro; Catz, Amiram
2015-01-01
Background. A quadratic formula of the Spinal Cord Injury Ability Realization Measurement Index (SCI-ARMI) has previously been published. This formula was based on a model of Spinal Cord Independence Measure (SCIM95), the 95th percentile of the SCIM III values, which correspond with the American Spinal Injury Association Motor Scores (AMS) of SCI patients. Objective. To further develop the original formula. Setting. Spinal cord injury centers from 6 countries and the Statistical Laboratory, Tel-Aviv University, Israel. Methods. SCIM95 of 661 SCI patients was modeled, using a quantile regression with or without adjustment for age and gender, to calculate SCI-ARMI values. SCI-ARMI gain during rehabilitation and its correlations were examined. Results. A new quadratic SCIM95 model was created. This resembled the previously published model, which yielded similar SCIM95 values in all the countries, after adjustment for age and gender. Without this adjustment, however, only 86% of the non-Israeli SCIM III observations were lower than those SCIM95 values (P < .0001). Adding the variables age and gender to the new model affected the SCIM95 value significantly (P < .04). Adding country information did not add a significant effect (P > .1). SCI-ARMI gain was positive (38.8 ± 22 points, P < .0001) and correlated weakly with admission age and AMS. Conclusions. The original quadratic SCI-ARMI formula is valid for an international population after adjustment for age and gender. The new formula considers more factors that affect functional ability following SCI. © The Author(s) 2014.
Caffeine Citrate Dosing Adjustments to Assure Stable Caffeine Concentrations in Preterm Neonates.
Koch, Gilbert; Datta, Alexandre N; Jost, Kerstin; Schulzke, Sven M; van den Anker, John; Pfister, Marc
2017-12-01
To identify dosing strategies that will assure stable caffeine concentrations in preterm neonates despite changing caffeine clearance during the first 8 weeks of life. A 3-step simulation approach was used to compute caffeine doses that would achieve stable caffeine concentrations in the first 8 weeks after birth: (1) a mathematical weight change model was developed based on published weight distribution data; (2) a pharmacokinetic model was developed based on published models that accounts for individual body weight, postnatal, and gestational age on caffeine clearance and volume of distribution; and (3) caffeine concentrations were simulated for different dosing regimens. A standard dosing regimen of caffeine citrate (using a 20 mg/kg loading dose and 5 mg/kg/day maintenance dose) is associated with a maximal trough caffeine concentration of 15 mg/L after 1 week of treatment. However, trough concentrations subsequently exhibit a clinically relevant decrease because of increasing clearance. Model-based simulations indicate that an adjusted maintenance dose of 6 mg/kg/day in the second week, 7 mg/kg/day in the third to fourth week and 8 mg/kg/day in the fifth to eighth week assures stable caffeine concentrations with a target trough concentration of 15 mg/L. To assure stable caffeine concentrations during the first 8 weeks of life, the caffeine citrate maintenance dose needs to be increased by 1 mg/kg every 1-2 weeks. These simple adjustments are expected to maintain exposure to stable caffeine concentrations throughout this important developmental period and might enhance both the short- and long-term beneficial effects of caffeine treatment. Copyright © 2017 Elsevier Inc. All rights reserved.
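The reason a fixed 5 mg/kg/day dose lets trough concentrations fall can be seen from a toy one-compartment, first-order-elimination calculation. The clearance and volume values below are illustrative assumptions, not the published pharmacokinetic model.

```python
# Steady-state trough for repeated bolus dosing in a one-compartment model,
# showing why rising clearance requires a stepped-up maintenance dose.
# All parameter values are assumptions for illustration.
import math

def trough_conc(dose_mg_per_kg, cl_L_per_h_per_kg, v_L_per_kg=0.9, tau_h=24.0):
    """Steady-state trough concentration for dosing every tau hours."""
    k = cl_L_per_h_per_kg / v_L_per_kg                 # elimination rate constant
    c_peak = (dose_mg_per_kg / v_L_per_kg) / (1 - math.exp(-k * tau_h))
    return c_peak * math.exp(-k * tau_h)

week2         = trough_conc(5.0, 0.005)   # early life: low clearance
week8_fixed   = trough_conc(5.0, 0.010)   # clearance matured, dose unchanged
week8_stepped = trough_conc(8.0, 0.010)   # dose stepped up, as the paper proposes
```

With clearance roughly doubled, the fixed dose yields a markedly lower trough, while the stepped dose restores it; the paper's simulations do this with a full maturation model rather than two point values.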
Odonkor, Charles A.; Schonberger, Robert B.; Dai, Feng; Shelley, Kirk H.; Silverman, David G.; Barash, Paul G.
2013-01-01
Objective The primary aim of this study was to design prediction models based on a functional marker (preoperative gait speed) to predict readiness for home discharge time of ≤90 minutes, and to identify those at risk for unplanned admissions, after elective ambulatory surgery. Design This prospective observational cohort study evaluated all patients scheduled for elective ambulatory surgery. Home discharge readiness and unplanned admissions were the primary outcomes. Independent variables included preoperative gait speed, heart rate, and total anesthesia time. The relationship between all predictors and each primary outcome was determined in separate multivariable logistic regression models. Results After adjustment for covariates, gait speed, with adjusted odds ratio = 3.71 (95% CI: 1.21-11.26), p = 0.02, was independently associated with early home discharge readiness ≤90 minutes. Importantly, gait speed dichotomized as greater or less than 1 m/s predicted unplanned admissions, with odds ratio = 0.35 (95% CI: 0.16 to 0.76, p = 0.008) for those with speeds ≥1 m/s in comparison to those with speeds <1 m/s. In a separate model, prior history of cardiac surgery, with adjusted odds ratio = 7.5 (95% CI: 2.34-24.41, p = 0.001), was independently associated with unplanned admissions after elective ambulatory surgery, when other covariates were held constant. Conclusions This study demonstrates the use of novel prediction models based on gait speed testing to predict early home discharge and to identify those patients at risk for unplanned admissions, after elective ambulatory surgery. PMID:24051992
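To make the reported odds ratio concrete, a logistic model turns it into predicted probabilities as sketched below. The OR of 0.35 for fast walkers is from the abstract; the intercept is an assumed value for illustration only, since the abstract does not report one.

```python
# How a fitted logistic regression converts an odds ratio into probabilities.
# log_or comes from the abstract's OR = 0.35; the intercept is an assumption.
import math

def predicted_prob(intercept, log_or, x):
    """P(unplanned admission) for a binary predictor x (1 = gait speed >= 1 m/s)."""
    z = intercept + log_or * x
    return 1 / (1 + math.exp(-z))

log_or = math.log(0.35)     # fast walkers vs slow walkers
baseline = -1.5             # assumed (hypothetical) intercept

slow = predicted_prob(baseline, log_or, 0)   # slower walkers: higher risk
fast = predicted_prob(baseline, log_or, 1)   # faster walkers: lower risk
```

The same mechanics apply to the discharge-readiness model (OR = 3.71), with the sign of the log-odds shift reversed.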
Code of Federal Regulations, 2010 CFR
2017-10-01
... PROGRAMS EPISODE PAYMENT MODEL Pricing and Payment § 512.300 Determination of episode quality-adjusted... historical episode payments. (iii) For the AMI model, quality-adjusted target prices for anchor MS-DRGs 246... 100 percent regional historical episode payments. (iv) For the CABG model, quality-adjusted target...
Controlled cooling of an electronic system for reduced energy consumption
DOE Office of Scientific and Technical Information (OSTI.GOV)
David, Milnes P.; Iyengar, Madhusudan K.; Schmidt, Roger R.
Energy efficient control of a cooling system cooling an electronic system is provided. The control includes automatically determining at least one adjusted control setting for at least one adjustable cooling component of a cooling system cooling the electronic system. The automatically determining is based, at least in part, on power being consumed by the cooling system and temperature of a heat sink to which heat extracted by the cooling system is rejected. The automatically determining operates to reduce power consumption of the cooling system and/or the electronic system while ensuring that at least one targeted temperature associated with the cooling system or the electronic system is within a desired range. The automatically determining may be based, at least in part, on one or more experimentally obtained models relating the targeted temperature and power consumption of the one or more adjustable cooling components of the cooling system.
Financial Distress Prediction Using Discrete-time Hazard Model and Rating Transition Matrix Approach
NASA Astrophysics Data System (ADS)
Tsai, Bi-Huei; Chang, Chih-Huei
2009-08-01
Previous studies used a constant cut-off indicator to distinguish distressed firms from non-distressed ones in one-stage prediction models. However, the distressed cut-off indicator should shift with economic prosperity rather than remain fixed over time. This study focuses on Taiwanese listed firms and develops financial distress prediction models based on a two-stage method. First, firm-specific financial ratios and market factors are employed to measure the probability of financial distress with discrete-time hazard models. Second, macroeconomic factors are incorporated through a rating transition matrix approach to determine the distressed cut-off indicator. The prediction models are developed using a training sample from 1987 to 2004, and their accuracy is compared on a test sample from 2005 to 2007. For the one-stage prediction model, the model incorporating macroeconomic factors does not perform better than the model without them; this suggests that accuracy is not improved by pooling firm-specific and macroeconomic factors in one-stage models. For the two-stage models, the negative credit cycle index indicates worse economic conditions during the test period, so the distressed cut-off point is adjusted upward based on this negative index. When the two-stage models use this adjusted cut-off point to discriminate distressed from non-distressed firms, their misclassification error is lower than that of the one-stage models. The two-stage models presented in this paper thus have incremental usefulness in predicting financial distress.
Risk-adjusted payment and performance assessment for primary care.
Ash, Arlene S; Ellis, Randall P
2012-08-01
Many wish to change incentives for primary care practices through bundled population-based payments and substantial performance feedback and bonus payments. Recognizing patient differences in costs and outcomes is crucial, but customized risk adjustment for such purposes is underdeveloped. Using MarketScan's claims-based data on 17.4 million commercially insured lives, we modeled bundled payment to support expected primary care activity levels (PCAL) and 9 patient outcomes for performance assessment. We evaluated models using 457,000 people assigned to 436 primary care physician panels, and among 13,000 people in a distinct multipayer medical home implementation with commercially insured, Medicare, and Medicaid patients. Each outcome is separately predicted from age, sex, and diagnoses. We define the PCAL outcome as a subset of all costs that proxies the bundled payment needed for comprehensive primary care. Other expected outcomes are used to establish targets against which actual performance can be fairly judged. We evaluate model performance using R²s at patient and practice levels, and within policy-relevant subgroups. The PCAL model explains 67% of variation in its outcome, performing well across diverse patient ages, payers, plan types, and provider specialties; it explains 72% of practice-level variation. In 9 performance measures, the outcome-specific models explain 17%-86% of variation at the practice level, often substantially outperforming a generic score like the one used for full capitation payments in Medicare: for example, with grouped R²s of 47% versus 5% for predicting "prescriptions for antibiotics of concern." Existing data can support the risk-adjusted bundled payment calculations and performance assessments needed to encourage desired transformations in primary care.
Faculty Adaptation to an Experimental Curriculum.
ERIC Educational Resources Information Center
Moore-West, Maggi; And Others
The adjustment of medical school faculty members to a new curriculum, called problem-based learning, was studied. Nineteen faculty members who taught in both a lecture-based and tutorial program over 2 academic years were surveyed. Besides the teacher-centered approach, the other model of learning was student-centered and could be conducted in…
Block Adjustment and Image Matching of WORLDVIEW-3 Stereo Pairs and Accuracy Evaluation
NASA Astrophysics Data System (ADS)
Zuo, C.; Xiao, X.; Hou, Q.; Li, B.
2018-05-01
WorldView-3, a high-resolution commercial Earth observation satellite launched by DigitalGlobe, provides panchromatic imagery at 0.31 m resolution. Its positioning accuracy is better than 3.5 m CE90 without ground control, which can be used for large-scale topographic mapping. This paper presents block adjustment of WorldView-3 imagery based on the RPC model and achieves the accuracy required for 1 : 2000 scale topographic mapping with few control points. Based on the stereo orientation result, two image matching algorithms were applied for DSM extraction: LQM and SGM. Finally, the accuracy of the point clouds generated by the two image matching methods was compared against reference data acquired by an airborne laser scanner. The results show that the RPC adjustment model for WorldView-3 imagery with a small number of GCPs satisfies the requirements of the Chinese surveying and mapping regulations for 1 : 2000 scale topographic maps, and that the point cloud obtained through WorldView-3 stereo image matching has high elevation accuracy: the RMS elevation error is 0.45 m for bare ground, while for buildings the accuracy approaches 1 m.
Flexible Fusion Structure-Based Performance Optimization Learning for Multisensor Target Tracking
Ge, Quanbo; Wei, Zhongliang; Cheng, Tianfa; Chen, Shaodong; Wang, Xiangfeng
2017-01-01
Compared with a fixed fusion structure, a flexible fusion structure with mixed fusion methods has better adjustment performance for complex air task network systems and can effectively help the system achieve its goal under the given constraints. Because of the time-varying situation of the task network system induced by moving nodes and non-cooperative targets, and limitations such as communication bandwidth and measurement distance, it is necessary to dynamically adjust the system fusion structure, including sensors and fusion methods, in a given adjustment period. To this end, this paper studies the design of a flexible fusion algorithm using an optimization learning technique. The purpose is to dynamically determine the number of sensors and the associated sensors taking part in the centralized and distributed fusion processes, respectively, herein termed sensor subset selection. Firstly, two system performance indexes are introduced; in particular, the survivability index is presented and defined. Secondly, based on the two indexes and considering other conditions such as communication bandwidth and measurement distance, optimization models for both single-target tracking and multi-target tracking are established. Correspondingly, solution steps are given for the two optimization models in detail. Simulation examples are demonstrated to validate the proposed algorithms. PMID:28481243
An Interdependent Look at Perceptions of Spousal Drinking Problems and Marital Outcomes
Rodriguez, Lindsey M.; Neighbors, Clayton
2015-01-01
Research indicates a bidirectional association between heavy alcohol use and marital quality among couples. The current research extends previous research on the role of interpersonal perception by examining how partner drinking and perceiving one’s partner’s drinking as problematic are associated with subsequent marital outcomes. Moreover, we evaluated how perceiving one’s partner to have a drinking problem was associated with marital functioning, and whether that association differed based on the partner’s actual drinking. Married couples (N = 123 dyads) with at least one spouse who consumed alcohol regularly completed measures of alcohol use and consequences, the perception that their spouse’s drinking was problematic, and marital adjustment (i.e., relationship satisfaction, commitment, and trust). Results from actor-partner interdependence models using structural equations modeling indicated that for husbands, partner heavy drinking was associated with lower adjustment. Additionally, for husbands, perceiving their spouse had a drinking problem was associated with lower adjustment for both themselves and their wives. Moreover, significant interactions between partner drinking and the perception of partner drinking problem on marital adjustment emerged, controlling for amount of consumption. Specifically, perceiving one’s partner’s drinking as a problem was only negatively associated with relationship adjustment if the partner reported higher levels of heavy drinking. This pattern was stronger for husbands. Results illustrate the importance of interpersonal perception, gender differences, and the use of dyadic data to model the complex dynamic between spouses with regard to alcohol use and how it affects relationship outcomes. PMID:26091752
NASA Astrophysics Data System (ADS)
Kang, J. H.; Song, H. J.; Han, H. J.; Ha, J. H.
2016-12-01
The observation processing system KPOP (KIAPS Package for Observation Processing, where KIAPS is the Korea Institute of Atmospheric Prediction Systems) has been developed to provide optimal observations to the data assimilation system for the KIAPS Integrated Model (KIM). Currently, KPOP is capable of processing almost all of the observations used in the KMA (Korea Meteorological Administration) operational global data assimilation system. Height adjustment of SURFACE observations is essential for quality control because of the difference in height between the observation station and the model topography. For SURFACE observations, the height is usually adjusted using a lapse rate or the hypsometric equation, which determines the adjusted values mainly from the height difference. We question whether the height can be properly adjusted by a linear or exponential relationship that depends solely on the height difference and disregards the atmospheric conditions. In this study, we first analyse how surface variables such as temperature (T2m), pressure (Psfc), humidity (RH2m and Q2m), and wind components (U and V) change with the height difference. We then examine the relationships among the surface variables. The pressure difference shows a strong linear relationship with the height difference, but the temperature difference shows a stronger correlation with the difference in relative humidity than with the height difference. Based on these preliminary results, a more reliable model for the height adjustment of surface temperature is being developed.
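The conventional lapse-rate and hypsometric adjustments discussed above can be sketched in a few lines, under standard-atmosphere assumptions; these are the generic textbook formulas, not KPOP's implementation.

```python
# Conventional height adjustment of surface observations: a constant lapse
# rate for temperature and the hypsometric equation for pressure.
# Standard-atmosphere constants; a minimal sketch, not the KPOP code.
import math

G, RD, LAPSE = 9.80665, 287.05, 0.0065   # gravity (m/s²), dry-air R (J/kg/K), K/m

def adjust_temperature(t_obs_k, dz_m):
    """Move a temperature across dz = model height minus station height."""
    return t_obs_k - LAPSE * dz_m

def adjust_pressure(p_obs_hpa, t_mean_k, dz_m):
    """Hypsometric equation: pressure at a level dz above the station."""
    return p_obs_hpa * math.exp(-G * dz_m / (RD * t_mean_k))

# example: station 100 m below the model topography
t_model = adjust_temperature(288.15, 100.0)
p_model = adjust_pressure(1013.25, 287.8, 100.0)
```

Both formulas depend only on the height difference, which is exactly the assumption the abstract calls into question for temperature.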
Basu, Sanjay; Yudkin, John S; Sussman, Jeremy B; Millett, Christopher; Hayward, Rodney A
2016-03-01
The World Health Organization aims to reduce mortality from chronic diseases including cardiovascular disease (CVD) by 25% by 2025. High blood pressure is a leading CVD risk factor. We sought to compare 3 strategies for treating blood pressure in China and India: a treat-to-target (TTT) strategy emphasizing lowering blood pressure to a target, a benefit-based tailored treatment (BTT) strategy emphasizing lowering CVD risk, or a hybrid strategy currently recommended by the World Health Organization. We developed a microsimulation model of adults aged 30 to 70 years in China and in India to compare the 2 treatment approaches across a 10-year policy-planning horizon. In the model, a BTT strategy treating adults with a 10-year CVD event risk of ≥ 10% used similar financial resources but averted ≈ 5 million more disability-adjusted life-years in both China and India than a TTT approach based on current US guidelines. The hybrid strategy in the current World Health Organization guidelines produced no substantial benefits over TTT. BTT was more cost-effective at $205 to $272/disability-adjusted life-year averted, which was $142 to $182 less per disability-adjusted life-year than TTT or hybrid strategies. The comparative effectiveness of BTT was robust to uncertainties in CVD risk estimation and to variations in the age range analyzed, the BTT treatment threshold, or rates of treatment access, adherence, or concurrent statin therapy. In model-based analyses, a simple BTT strategy was more effective and cost-effective than TTT or hybrid strategies in reducing mortality. © 2016 American Heart Association, Inc.
Takeuchi, Masato; Yano, Ikuko; Ito, Satoko; Sugimoto, Mitsuhiro; Yamamoto, Shota; Yonezawa, Atsushi; Ikeda, Akio; Matsubara, Kazuo
2017-04-01
Topiramate is a second-generation antiepileptic drug used as monotherapy and adjunctive therapy in adults and children with partial seizures. A population pharmacokinetic (PPK) analysis was performed to improve the topiramate dosage adjustment for individualized treatment. Patients whose steady-state serum concentration of topiramate was routinely monitored at Kyoto University Hospital from April 2012 to March 2013 were included in the model-building data. A nonlinear mixed effects modeling program was used to evaluate the influence of covariates on topiramate pharmacokinetics. The obtained PPK model was evaluated by internal model validations, including goodness-of-fit plots and prediction-corrected visual predictive checks, and was externally confirmed using the validation data from January 2015 to December 2015. A total of 177 steady-state serum concentrations from 93 patients were used for the model-building analysis. The patients' age ranged from 2 to 68 years, and body weight ranged from 8.6 to 105 kg. The median serum concentration of topiramate was 1.7 mcg/mL, and half of the patients received carbamazepine coadministration. Based on a one-compartment model with first-order absorption and elimination, the apparent volume of distribution was 105 L/70 kg, and the apparent clearance was allometrically related to the body weight as 2.25 L/h per 70 kg without carbamazepine or phenytoin. Combination treatment with carbamazepine or phenytoin increased the apparent clearance to 3.51 L/h per 70 kg. Goodness-of-fit plots, prediction-corrected visual predictive check, and external validation using the validation data from 43 patients confirmed an appropriateness of the final model. Simulations based on the final model showed that dosage adjustments allometrically scaling to body weight can equalize the serum concentrations in children of various ages and adults.
The PPK model, using the power scaling of body weight, effectively elucidated the topiramate serum concentration profile ranging from pediatric to adult patients. Dosage adjustments based on body weight and concomitant antiepileptic drug help obtain the dosage of topiramate necessary to reach an effective concentration in each individual.
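The allometric power scaling described above can be sketched directly: the two clearance values are taken from the abstract, while the 0.75 exponent is the conventional allometric assumption (the abstract says only that clearance is allometrically related to weight).

```python
# Allometric scaling of apparent clearance, with enzyme-inducer co-therapy
# (carbamazepine/phenytoin) raising the typical value. The 0.75 exponent is
# the standard allometric assumption, not stated in the abstract.
def apparent_clearance(weight_kg, inducer=False):
    base = 3.51 if inducer else 2.25          # L/h for a typical 70-kg patient
    return base * (weight_kg / 70.0) ** 0.75

cl_child = apparent_clearance(10.0)           # 10-kg child, no inducer
# scaling the maintenance dose in proportion to CL keeps steady-state
# exposure equal across body sizes
dose_scale = cl_child / apparent_clearance(70.0)
```

Because the exponent is below 1, per-kilogram clearance is higher in small children, which is why weight-proportional dosing alone underexposes adults relative to children.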
Schachner, Maja K; He, Jia; Heizmann, Boris; Van de Vijver, Fons J R
2017-01-01
School adjustment determines long-term adjustment in society. Yet, immigrant youth do better in some countries than in others. Drawing on acculturation research (Berry, 1997; Ward, 2001) and self-determination theory (Ryan and Deci, 2000), we investigated indirect effects of adolescent immigrants' acculturation orientations on school adjustment (school-related attitudes, truancy, and mathematics achievement) through school belonging. Analyses were based on data from the Programme for International Student Assessment from six European countries, which were combined into three clusters based on their migrant integration and multicultural policies: Those with the most supportive policies (Belgium and Finland), those with moderately supportive policies (Italy and Portugal), and those with the most unsupportive policies (Denmark and Slovenia). In a multigroup path model, we confirmed most associations. As expected, mainstream orientation predicted higher belonging and better outcomes in all clusters, whereas the added value of students' ethnic orientation was only observed in some clusters. Results are discussed in terms of differences in acculturative climate and policies between countries of settlement.
NASA Astrophysics Data System (ADS)
Korotenko, K.
2003-04-01
An ultra-high-resolution version of DieCAST was adjusted for the Adriatic Sea and coupled with an oil spill model. Hydrodynamic module was developed on base of th low dissipative, four-order-accuracy version DieCAST with the resolution of ~2km. The oil spill model was developed on base of particle tracking technique The effect of evaporation is modeled with an original method developed on the base of the pseudo-component approach. A special dialog interface of this hybrid system allowing direct coupling to meteorlogical data collection systems or/and meteorological models. Experiments with hypothetic oil spill are analyzed for the Northern Adriatic Sea. Results (animations) of mesoscale circulation and oil slick modeling are presented at wabsite http://thayer.dartmouth.edu/~cushman/adriatic/movies/
Mostafa, Salama A; Mustapha, Aida; Mohammed, Mazin Abed; Ahmad, Mohd Sharifuddin; Mahmoud, Moamin A
2018-04-01
Autonomous agents are being widely used in many systems, such as ambient assisted-living systems, to perform tasks on behalf of humans. However, these systems usually operate in complex environments that entail uncertain, highly dynamic, or irregular workload. In such environments, autonomous agents tend to make decisions that lead to undesirable outcomes. In this paper, we propose a fuzzy-logic-based adjustable autonomy (FLAA) model to manage the autonomy of multi-agent systems that are operating in complex environments. This model aims to facilitate the autonomy management of agents and help them make competent autonomous decisions. The FLAA model employs fuzzy logic to quantitatively measure and distribute autonomy among several agents based on their performance. We implement and test this model in the Automated Elderly Movements Monitoring (AEMM-Care) system, which uses agents to monitor the daily movement activities of elderly users and perform fall detection and prevention tasks in a complex environment. The test results show that the FLAA model improves the accuracy and performance of these agents in detecting and preventing falls. Copyright © 2018 Elsevier B.V. All rights reserved.
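The fuzzy-logic mapping from agent performance to an autonomy level can be illustrated with a tiny membership-and-defuzzification sketch. The triangular membership shapes, breakpoints, and output centers below are invented for illustration; they are not the published FLAA rules.

```python
# Minimal fuzzy-logic sketch: map a performance score in [0, 1] to an
# autonomy weight via triangular memberships and weighted defuzzification.
# All shapes and numbers are hypothetical, not from the FLAA paper.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def autonomy_level(performance):
    """Defuzzify {low, medium, high} memberships into an autonomy weight."""
    weights = (tri(performance, -0.5, 0.0, 0.5),   # low performance
               tri(performance, 0.0, 0.5, 1.0),    # medium
               tri(performance, 0.5, 1.0, 1.5))    # high
    centers = (0.1, 0.5, 0.9)                      # autonomy granted per class
    return sum(c * w for c, w in zip(centers, weights)) / sum(weights)

high = autonomy_level(0.9)   # strong performer: granted more autonomy
low = autonomy_level(0.2)    # weak performer: granted less
```

In a multi-agent setting such as AEMM-Care, each agent's weight would then determine how much of the shared decision-making it handles autonomously.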
Temperature based Restricted Boltzmann Machines
NASA Astrophysics Data System (ADS)
Li, Guoqi; Deng, Lei; Xu, Yi; Wen, Changyun; Wang, Wei; Pei, Jing; Shi, Luping
2016-01-01
Restricted Boltzmann machines (RBMs), which apply graphical models to learning probability distributions over a set of inputs, have attracted much attention recently since being proposed as building blocks of multi-layer learning systems called deep belief networks (DBNs). Temperature is a key factor of the Boltzmann distribution from which RBMs originate, yet none of the existing schemes have considered its impact in the graphical model of DBNs. In this work, we propose temperature-based restricted Boltzmann machines (TRBMs), which reveal that temperature is an essential parameter controlling the selectivity of the firing neurons in the hidden layers. We theoretically prove that the effect of temperature can be adjusted by setting the sharpness parameter of the logistic function in the proposed TRBMs. The performance of RBMs can be improved by adjusting the temperature parameter of TRBMs. This work provides comprehensive insight into deep belief networks and deep learning architectures from a physical point of view.
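The role of temperature as a sharpness parameter can be sketched as follows (a toy illustration, not the authors' implementation; the weights and input are random): dividing the pre-activation by T rescales the logistic function, so low T pushes hidden-unit probabilities toward 0 or 1 (high selectivity) while high T flattens them toward 0.5.

```python
import numpy as np

def hidden_probs(v, W, b, T=1.0):
    """P(h=1 | v) in a temperature-based RBM sketch: the temperature T
    rescales the sharpness of the logistic activation (T=1 recovers a
    standard RBM)."""
    return 1.0 / (1.0 + np.exp(-(W @ v + b) / T))

rng = np.random.default_rng(1)
v = rng.normal(size=20)            # visible vector
W = rng.normal(size=(50, 20))      # random weights for illustration
b = np.zeros(50)

p_cold = hidden_probs(v, W, b, T=0.2)   # sharper: probabilities near 0 or 1
p_hot = hidden_probs(v, W, b, T=5.0)    # flatter: probabilities near 0.5
```

Selectivity can be read off as the mean distance of the probabilities from 0.5, which is larger at low temperature.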
Validation of DYSTOOL for unsteady aerodynamic modeling of 2D airfoils
NASA Astrophysics Data System (ADS)
González, A.; Gomez-Iradi, S.; Munduate, X.
2014-06-01
From the point of view of wind turbine modeling, an important group of tools is based on blade element momentum (BEM) theory, using 2D aerodynamic calculations on the blade elements. Because of the importance of this sectional computation of the blades, the National Renewable Wind Energy Center of Spain (CENER) developed DYSTOOL, an aerodynamic code for 2D airfoil modeling based on the Beddoes-Leishman model. The main focus here is on the model parameters, whose values depend on the airfoil or the operating conditions. In this work, the values of the parameters are adjusted using available experimental or CFD data. The present document mainly concerns the validation of the results of DYSTOOL for 2D airfoils. The results of the computations have been compared with unsteady experimental data for the S809 and NACA0015 profiles. Some of the cases have also been modeled using the CFD code WMB (Wind Multi Block), within the framework of a collaboration with ACCIONA Windpower. The validation has been performed using pitch oscillations with different reduced frequencies, Reynolds numbers, amplitudes and mean angles of attack. The results have shown good agreement using this methodology of parameter adjustment. DYSTOOL has proved to be a promising tool for 2D airfoil unsteady aerodynamic modeling.
On the Relationship between Observed NLDN Lightning ...
Lightning-produced nitrogen oxides (NOX=NO+NO2) in the middle and upper troposphere play an essential role in the production of ozone (O3) and influence the oxidizing capacity of the troposphere. Despite much effort in both observing and modeling lightning NOX during the past decade, considerable uncertainties still exist in the quantification of lightning NOX production and distribution in the troposphere. It is even more challenging for regional chemistry and transport models to accurately parameterize lightning NOX production and distribution in time and space. The Community Multiscale Air Quality Model (CMAQ) parameterizes lightning NO emissions using local scaling factors adjusted by the convective precipitation rate predicted by the upstream meteorological model; the adjustment is based on observed lightning strikes from the National Lightning Detection Network (NLDN). For this parameterization to be valid, an a priori reasonable relationship between the observed lightning strikes and the modeled convective precipitation rates must exist. In this study, we will present an analysis leveraging the observed NLDN lightning strikes and CMAQ model simulations over the continental United States for a time period spanning more than a decade. Based on the analysis, a new parameterization scheme for lightning NOX will be proposed and the results will be evaluated. The proposed scheme will be beneficial to modeling exercises where the obs
Squitieri, Lee; Chung, Kevin C
2017-07-01
In 2015, the U.S. Congress passed the Medicare Access and Children's Health Insurance Program Reauthorization Act, which effectively repealed the Centers for Medicare and Medicaid Services sustainable growth rate formula and established the Centers for Medicare and Medicaid Services Quality Payment Program. The Medicare Access and Children's Health Insurance Program Reauthorization Act represents an unparalleled acceleration toward value-based payment models and a departure from traditional volume-driven fee-for-service reimbursement. The Quality Payment Program includes two paths for provider participation: the Merit-Based Incentive Payment System and Advanced Alternative Payment Models. The Merit-Based Incentive Payment System pathway replaces existing quality reporting programs and adds several new measures to create a composite performance score for each provider (or provider group) that will be used to adjust reimbursed payment. The advanced alternative payment model pathway is available to providers who participate in qualifying Advanced Alternative Payment Models and is associated with an initial 5 percent payment incentive. The first performance period for the Merit-Based Incentive Payment System opens January 1, 2017, and closes on December 31, 2017, and is associated with payment adjustments in January of 2019. The Centers for Medicare and Medicaid Services estimates that the majority of providers will begin participation in 2017 through the Merit-Based Incentive Payment System pathway, but aims to have 50 percent of payments tied to quality or value through Advanced Alternative Payment Models by 2018. In this article, the authors describe key components of the Medicare Access and Children's Health Insurance Program Reauthorization Act to providers navigating through the Quality Payment Program and discuss how plastic surgeons may optimize their performance in this new value-based payment program.
The Economic Impact of Blindness in Europe.
Chakravarthy, Usha; Biundo, Eliana; Saka, Rasit Omer; Fasser, Christina; Bourne, Rupert; Little, Julie-Anne
2017-08-01
To estimate the annual loss of productivity from blindness and moderate to severe visual impairment (MSVI) in the population aged >50 years in the European Union (EU). We estimated the cost of lost productivity using three simple models reported in the literature based on (1) minimum wage (MW), (2) gross national income (GNI), and (3) purchasing power parity-adjusted gross domestic product (GDP-PPP) losses. In the first two models, assumptions included that all individuals worked until 65 years of age, and that half of all visual impairment cases in the >50-year age group would be in those aged between 50 and 65 years. Loss of productivity was estimated to be 100% for blind individuals and 30% for those with MSVI. None of these models included direct medical costs related to visual impairment. The estimated number of blind people in the EU population aged >50 years is ~1.28 million, with a further 9.99 million living with MSVI. Based on the three models, the estimated cost of blindness is €7.81 billion, €6.29 billion and €17.29 billion and that of MSVI €18.02 billion, €24.80 billion and €39.23 billion, with their combined costs €25.83 billion, €31.09 billion and €56.52 billion, respectively. The estimates from the MW and adjusted GDP-PPP models were generally comparable, whereas the GNI model estimates were higher, probably reflecting the lack of adjustment for unemployment. The cost of blindness and MSVI in the EU is substantial. Wider use of available cost-effective treatment and prevention strategies may reduce the burden significantly.
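The minimum-wage model described above reduces to simple arithmetic: only the working-age fraction of cases counts, blindness is assigned a 100% productivity loss and MSVI a 30% loss. A sketch under the abstract's stated assumptions (the €12,000 annual wage below is a hypothetical placeholder, not a figure from the study):

```python
def productivity_loss(n_blind, n_msvi, annual_wage,
                      working_age_fraction=0.5,
                      loss_blind=1.0, loss_msvi=0.3):
    """Annual productivity loss under the minimum-wage-style model in the
    abstract: half of cases in the >50 group are assumed to be of working
    age, blindness costs 100% of the wage and MSVI 30%."""
    blind_cost = n_blind * working_age_fraction * loss_blind * annual_wage
    msvi_cost = n_msvi * working_age_fraction * loss_msvi * annual_wage
    return blind_cost + msvi_cost

# EU counts from the abstract (1.28 million blind, 9.99 million MSVI) and a
# hypothetical EUR 12,000 annual wage (illustrative, not from the study)
cost = productivity_loss(1.28e6, 9.99e6, 12_000)
```

With these inputs the model yields a cost on the order of tens of billions of euros, the same order as the estimates reported in the abstract.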
An Energy-Aware Runtime Management of Multi-Core Sensory Swarms.
Kim, Sungchan; Yang, Hoeseok
2017-08-24
In sensory swarms, minimizing energy consumption under a performance constraint is one of the key objectives. One possible approach to this problem is to monitor the application workload, which is subject to change at runtime, and to adjust the system configuration adaptively to satisfy the performance goal. As today's sensory swarms are usually implemented using multi-core processors with adjustable clock frequency, we propose to monitor the CPU workload periodically and adjust the task-to-core allocation or clock frequency in an energy-efficient way in response to workload variations. In doing so, we present an online heuristic that determines the most energy-efficient adjustment that satisfies the performance requirement. The proposed method is based on a simple yet effective energy model built upon performance prediction using IPC (instructions per cycle) measured online and a power equation derived empirically. The use of IPC accounts for the memory intensity of a given workload, enabling accurate prediction of execution time. Hence, the model allows us to rapidly and accurately estimate the effect of the two control knobs, clock frequency adjustment and core allocation. The experiments show that the proposed technique delivers considerable energy saving of up to 45% compared to the state-of-the-art multi-core energy management technique.
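The selection of the two control knobs can be sketched as an exhaustive search over configurations using an IPC-based execution-time prediction and an empirical power equation (a simplified illustration with made-up coefficients and ideal core scaling, not the paper's actual model or heuristic):

```python
import itertools

def exec_time(instr, ipc, freq_hz, cores):
    """Predicted runtime from IPC measured online, assuming ideal core
    scaling (a simplification for this sketch)."""
    return instr / (ipc * freq_hz * cores)

def power(freq_hz, cores, a=2.0e-27, b=0.5):
    """Hypothetical empirical power equation: dynamic term ~ cores * f^3
    plus static power b. Coefficients are illustrative, not the paper's."""
    return a * cores * freq_hz ** 3 + b

def best_config(instr, ipc, deadline_s, freqs, core_counts):
    """Pick the (frequency, core count) pair with least predicted energy
    that still meets the performance requirement."""
    best = None
    for f, c in itertools.product(freqs, core_counts):
        t = exec_time(instr, ipc, f, c)
        if t > deadline_s:
            continue                      # misses the deadline
        energy = power(f, c) * t
        if best is None or energy < best[0]:
            best = (energy, f, c)
    return best

cfg = best_config(instr=2e9, ipc=1.5, deadline_s=1.0,
                  freqs=[0.6e9, 1.0e9, 1.4e9, 1.8e9], core_counts=[1, 2, 4])
```

With these illustrative coefficients the search favours the lowest frequency with enough cores to meet the deadline, reflecting the cubic cost of frequency in the dynamic power term.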
Disaster Hits Home: A Model of Displaced Family Adjustment after Hurricane Katrina
ERIC Educational Resources Information Center
Peek, Lori; Morrissey, Bridget; Marlatt, Holly
2011-01-01
The authors explored individual and family adjustment processes among parents (n = 30) and children (n = 55) who were displaced to Colorado after Hurricane Katrina. Drawing on in-depth interviews with 23 families, this article offers an inductive model of displaced family adjustment. Four stages of family adjustment are presented in the model: (a)…
Kiran, Tara; Kopp, Alexander; Moineddin, Rahim; Glazier, Richard H
2015-11-17
We evaluated a large-scale transition of primary care physicians to blended capitation models and team-based care in Ontario, Canada, to understand the effect of each type of reform on the management and prevention of chronic disease. We used population-based administrative data to assess monitoring of diabetes mellitus and screening for cervical, breast and colorectal cancer among patients belonging to team-based capitation, non-team-based capitation or enhanced fee-for-service medical homes as of Mar. 31, 2011 (n = 10 675 480). We used Poisson regression models to examine these associations for 2011. We then used a fitted nonlinear model to compare changes in outcomes between 2001 and 2011 by type of medical home. In 2011, patients in a team-based capitation setting were more likely than those in an enhanced fee-for-service setting to receive diabetes monitoring (39.7% v. 31.6%, adjusted relative risk [RR] 1.22, 95% confidence interval [CI] 1.18 to 1.25), mammography (76.6% v. 71.5%, adjusted RR 1.06, 95% CI 1.06 to 1.07) and colorectal cancer screening (63.0% v. 60.9%, adjusted RR 1.03, 95% CI 1.02 to 1.04). Over time, patients in medical homes with team-based capitation experienced the greatest improvement in diabetes monitoring (absolute difference in improvement 10.6% [95% CI 7.9% to 13.2%] compared with enhanced fee for service; 6.4% [95% CI 3.8% to 9.1%] compared with non-team-based capitation) and cervical cancer screening (absolute difference in improvement 7.0% [95% CI 5.5% to 8.5%] compared with enhanced fee for service; 5.3% [95% CI 3.8% to 6.8%] compared with non-team-based capitation). For breast and colorectal cancer screening, there were no significant differences in change over time between different types of medical homes. The shift to capitation payment and the addition of team-based care in Ontario were associated with moderate improvements in processes related to diabetes care, but the effects on cancer screening were less clear. 
© 2015 Canadian Medical Association or its licensors.
Yu, Xiaozhi; Ren, Jindong; Zhang, Qian; Liu, Qun; Liu, Honghao
2017-04-01
Reach envelopes are very useful for the design and layout of controls. In building reach envelopes, one of the key problems is to represent the reach limits accurately and conveniently. Spherical harmonics have proved to be an accurate and convenient method for fitting reach-capability envelopes; however, extensive study is required on which components of the spherical harmonics are needed in fitting the envelope surfaces. For applications in the vehicle industry, an inevitable issue is to construct reach limit surfaces that account for the drivers' seating positions, and it is desirable to use population envelopes rather than individual envelopes. However, it is relatively inconvenient to acquire reach envelopes via a test that considers the drivers' seating positions, and the acquired envelopes are usually unsuitable for use with other vehicle models because they depend on the current cab packaging parameters. It is therefore of great significance to construct reach envelopes for real vehicle conditions from individual capability data while considering seating positions. Moreover, traditional reach envelopes provide little information for assessing reach difficulty; reach envelopes that carry difficulty-rating information about reach operations will improve design quality. In this paper, using laboratory data of seated reach that include subjective difficulty ratings, a method of modeling reach envelopes based on spherical harmonics is studied. Surface fitting using spherical harmonics is conducted for circumstances both with and without seat adjustments. For use with an adjustable seat, a seating position model is introduced to re-locate the test data. Surface fitting is conducted for both population and individual reach envelopes, as well as for boundary envelopes.
Comparison of the envelopes for the adjustable seat with the SAE J287 control reach envelope shows that the latter lies near the middle difficulty level. It is also found that the ability of spherical-harmonic reach envelope models to express the shape of the reach limits depends both on the terms in the model expression and on the data used to fit the envelope surfaces. Copyright © 2016 Elsevier Ltd. All rights reserved.
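A minimal sketch of fitting a reach-limit surface in a real spherical-harmonic basis (degree 1 only, on synthetic data; the study uses higher-order expansions and measured reach points, and normalization constants are folded into the fitted coefficients):

```python
import numpy as np

def sh_basis(theta, phi):
    """Real spherical-harmonic basis up to degree 1, evaluated at polar
    angle theta and azimuth phi (normalization folded into coefficients)."""
    return np.stack([
        np.ones_like(theta),                 # l=0
        np.cos(theta),                       # l=1, m=0
        np.sin(theta) * np.cos(phi),         # l=1, m=1
        np.sin(theta) * np.sin(phi),         # l=1, m=-1
    ], axis=1)

def fit_envelope(theta, phi, r):
    """Least-squares fit of reach distance r(theta, phi) in the SH basis."""
    A = sh_basis(theta, phi)
    coef, *_ = np.linalg.lstsq(A, r, rcond=None)
    return coef

# Synthetic 'reach' data standing in for measured fingertip positions:
# a 0.7 m base radius with farther reach toward the front (phi = 0)
rng = np.random.default_rng(2)
theta = rng.uniform(0, np.pi, 500)
phi = rng.uniform(0, 2 * np.pi, 500)
r = 0.7 + 0.1 * np.sin(theta) * np.cos(phi)

coef = fit_envelope(theta, phi, r)
r_hat = sh_basis(theta, phi) @ coef
```

Because the synthetic surface lies exactly in the span of the degree-1 basis, the fit recovers it; real envelope data require more terms, which is the component-selection question the abstract raises.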
Spaceflight tracking and data network operational reliability assessment for Skylab
NASA Technical Reports Server (NTRS)
Seneca, V. I.; Mlynarczyk, R. H.
1974-01-01
Data on the spaceflight communications equipment status during the Skylab mission were subjected to an operational reliability assessment. Reliability models were revised to reflect pertinent equipment changes made before the start of the Skylab missions, and appropriate adjustments were made to fit the data to the models. The availabilities are based on failure events resulting in a station's inability to support a function or functions, while the MTBFs are based on all events, both 'can support' and 'cannot support'. Data were received from eleven land-based stations and one ship.
NASA Astrophysics Data System (ADS)
Wang, J.; Yin, H.; Chung, F.
2008-12-01
While population growth, future land-use change, and the desire for better environmental preservation and protection are adding pressure on water resources management in California, the state faces the extra challenge of addressing potential climate change impacts on water supply and demand. The concerns for water facilities planning and flood control raised by climate change include modified precipitation patterns and changes in snow levels and runoff patterns due to increased air temperatures. Although long-term climate projections are largely uncertain, there appears to be strong consistency in predicting the warming trend of future surface temperature and the resulting shift in the seasonal patterns of runoff; projected changes in precipitation (wetting or drying), which control annual runoff, are far less certain. This paper attempts to separate the effects of the warming trend from those of the precipitation trend on water planning, especially in California, where reservoir operations are more sensitive to the seasonal patterns of runoff than to the total annual runoff. The water resources systems planning model CALSIM2 is used to evaluate climate change impacts on water resource management in California. Rather than directly ingesting estimated streamflows from climate model projections into CALSIM2, a three-step perturbation-ratio method is proposed to introduce climate change impacts into the planning model. First, the monthly perturbation ratio of projected monthly inflow to simulated historical monthly inflow is applied to the observed historical monthly inflow to generate climate-change inflows to major dams and reservoirs. To isolate the effects of the warming trend on water resources, a further annual inflow adjustment is applied to the inflows generated in step one to preserve the volume of the observed annual inflow.
To re-introduce the effects of precipitation trend on water resources, an additional inflow trend adjustment is applied to the adjusted climate change inflow. Therefore, three CALSIM2 experiments will be implemented: (1) base run with the observed historic inflow (1921 to 2003); (2) sensitivity run with the adjusted climate change inflow through annual inflow adjustment; (3) sensitivity run with the adjusted climate change inflow through annual inflow adjustment and inflow trend adjustment. To account for the variability of various climate models in projecting future climates, the uncertainty in future emission scenarios, and the difference in different projection periods, estimated inflows from 6 climate models for 2 emission scenarios (A2 and B1) and two projection periods (2030-2059 and 2070-2099) are included in the CALSIM model experiments.
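The first two steps of the perturbation-ratio method can be sketched as follows (a simplified illustration on synthetic monthly inflows; the actual CALSIM2 preprocessing details may differ):

```python
import numpy as np

def perturbed_inflow(obs_monthly, sim_hist_monthly, proj_monthly,
                     preserve_annual_volume=True):
    """Perturbation-ratio sketch: scale observed monthly inflow by the
    ratio of projected to simulated-historical monthly inflow (step 1);
    optionally rescale each year so annual volume matches the observation
    (step 2), isolating the warming-driven seasonal shift from the
    precipitation trend."""
    ratio = proj_monthly / sim_hist_monthly          # step 1: monthly ratios
    adj = obs_monthly * ratio
    if preserve_annual_volume:                       # step 2: annual adjustment
        years = adj.reshape(-1, 12)
        obs_years = obs_monthly.reshape(-1, 12)
        scale = obs_years.sum(axis=1, keepdims=True) / years.sum(axis=1, keepdims=True)
        adj = (years * scale).reshape(-1)
    return adj

# Two years of synthetic monthly inflows (arbitrary units)
rng = np.random.default_rng(6)
obs = rng.uniform(50, 150, 24)
sim_hist = rng.uniform(50, 150, 24)
proj = rng.uniform(40, 160, 24)

adj = perturbed_inflow(obs, sim_hist, proj)
```

With the annual adjustment enabled, each year's total inflow volume equals the observed volume while the seasonal pattern follows the projection, which is exactly the separation the paper describes.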
Evolving concepts on adjusting human resting energy expenditure measurements for body size.
Heymsfield, S B; Thomas, D; Bosy-Westphal, A; Shen, W; Peterson, C M; Müller, M J
2012-11-01
Establishing if an adult's resting energy expenditure (REE) is high or low for their body size is a pervasive question in nutrition research. Early workers applied body mass and height as size measures and formulated the Surface Law and Kleiber's Law, although each has limitations when adjusting REE. Body composition methods introduced during the mid-20th century provided a new opportunity to identify metabolically homogeneous 'active' compartments. These compartments all show improved correlations with REE estimates over body mass-height approaches, but collectively share a common limitation: REE-body composition ratios are not 'constant' but vary across men and women and with race, age and body size. The now-accepted alternative to ratio-based norms is to adjust for predictors by applying regression models to calculate 'residuals' that establish if an REE is relatively high or low. The distinguishing feature of statistical REE-body composition models is a 'non-zero' intercept of unknown origin. The recent introduction of imaging methods has allowed development of physiological tissue-organ-based REE prediction models. Herein, we apply these imaging methods to provide a mechanistic explanation, supported by experimental data, for the non-zero intercept phenomenon and, in that context, propose future research directions for establishing between-subject differences in relative energy metabolism. © 2012 The Authors. obesity reviews © 2012 International Association for the Study of Obesity.
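The residual-based adjustment described above amounts to regressing REE on a body-composition predictor with an intercept and judging individuals by their residuals; the fitted intercept is typically non-zero, which is why simple REE-to-mass ratios mislead across body sizes. A sketch on synthetic data (the generating slope and intercept follow the familiar Katch-McArdle-style form and are illustrative, not the review's estimates):

```python
import numpy as np

def ree_residuals(ree, ffm):
    """Regress REE on fat-free mass (with intercept) by OLS and return
    residuals: positive residual = REE relatively high for body size,
    negative = relatively low."""
    X = np.column_stack([np.ones_like(ffm), ffm])
    beta, *_ = np.linalg.lstsq(X, ree, rcond=None)
    return ree - X @ beta, beta

# Illustrative data (kcal/day vs kg fat-free mass), not from the review
rng = np.random.default_rng(3)
ffm = rng.uniform(40, 80, 200)
ree = 370 + 21.6 * ffm + rng.normal(0, 60, 200)

resid, beta = ree_residuals(ree, ffm)
```

The fitted intercept comes out far from zero, reproducing the "non-zero intercept phenomenon" the review sets out to explain, and the residuals have mean zero by construction.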
Modelling the EDLC-based Power Supply Module for a Maneuvering System of a Nanosatellite
NASA Astrophysics Data System (ADS)
Kumarin, A. A.; Kudryavtsev, I. A.
2018-01-01
The development of a model of the power supply module of a maneuvering system of a nanosatellite is described. The module is based on an EDLC battery as an energy buffer, and the choice of EDLC is explained. Experiments were conducted to provide data for the model, and the power supply module was simulated for the battery charging and discharging processes. The difference between simulation and experiment does not exceed 0.5% for charging and 10% for discharging. The developed model can be used in early design and to adjust charger and load parameters, and it can be expanded to represent the entire power system.
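A first-order sketch of constant-current EDLC charging (an idealized capacitor-plus-ESR model with illustrative component values, not the authors' model):

```python
import numpy as np

def edlc_voltage(t, i_charge, capacitance, esr, v0=0.0):
    """Terminal voltage of an idealized EDLC charged at constant current:
    a linear capacitor ramp plus the ESR (equivalent series resistance)
    step. A first-order sketch only."""
    return v0 + i_charge * t / capacitance + i_charge * esr

t = np.linspace(0, 60, 601)                                    # s
v = edlc_voltage(t, i_charge=1.0, capacitance=50.0, esr=0.02)  # 50 F, 20 mOhm
```

Even this crude model shows the two effects a charger design must handle: the slow capacitive ramp and the instantaneous resistive step at the terminals.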
Mauder, Matthias; Genzel, Sandra; Fu, Jin; ...
2017-11-10
Here, we report that non-closure of the surface energy balance is a frequently observed phenomenon in hydrometeorological field measurements using the eddy-covariance method, and can be ascribed to an underestimation of the turbulent fluxes. Several approaches have been proposed to adjust the measured fluxes for this apparent systematic error. However, there are uncertainties about the partitioning of the energy balance residual between the sensible and latent heat fluxes, and about whether such a correction should be applied to 30-minute data or at longer time scales. The data for this study originate from two grassland sites in southern Germany, where measurements from weighable lysimeters are available as reference. The adjusted evapotranspiration rates are also compared with joint energy and water balance simulations using a physically based distributed hydrological model. We evaluate two adjustment methods: the first preserves the Bowen ratio, with the correction factor determined on a daily basis; the second attributes a smaller portion of the residual energy to the latent heat flux than to the sensible heat flux, closing the energy balance for every 30-minute flux integration interval. Both methods lead to improved agreement of the eddy-covariance-based fluxes with the independent lysimeter estimates and the physically based model simulations. The first method results in better comparability of evapotranspiration rates, and the second method leads to a smaller overall bias. These results are similar at both sites despite considerable differences in terrain complexity and grassland management. Moreover, we found that a daily adjustment factor leads to less scatter than a complete partitioning of the residual for every half-hour time interval.
Lastly, the vertical temperature gradient in the surface layer and the friction velocity were identified as important predictors for a potential future parameterization of the energy balance residual.
Fatoye, Francis; Haigh, Carol
2016-05-01
To examine the cost-effectiveness of a semi-rigid ankle brace in facilitating return to work following first-time acute ankle sprains. Economic evaluation based on cost-utility analysis. Ankle sprains are a source of morbidity and absenteeism from work, accounting for 15-20% of all sports injuries. Semi-rigid ankle bracing and taping are functional treatment interventions used by musculoskeletal physiotherapists and nurses to facilitate return to work following acute ankle sprains. A decision model analysis, based on cost-utility analysis from the perspective of the National Health Service, was used. The primary outcome measure was the incremental cost-effectiveness ratio, based on quality-adjusted life years. Costs and quality-of-life data were derived from the published literature, while the model's clinical probabilities were sourced from musculoskeletal physiotherapists. The cost and quality-adjusted life years gained using the semi-rigid ankle brace were £184 and 0.72, respectively; for taping they were £155 and 0.61, respectively. The incremental cost-effectiveness ratio for the semi-rigid brace was £263 per quality-adjusted life year. Probabilistic sensitivity analysis showed that the ankle brace provided the highest net benefit and was hence the preferred option. Taping is the cheaper intervention for facilitating return to work following first-time ankle sprains; however, the incremental cost-effectiveness ratio for the ankle brace was below the National Institute for Health and Care Excellence threshold and the intervention had the higher net benefit, suggesting that it is cost-effective. Decision-makers may be willing to pay £263 for an additional quality-adjusted life year.
The findings of this economic evaluation provide justification for the use of semi-rigid ankle brace by Musculoskeletal Physiotherapists and Nurses to facilitate return to work in individuals with first-time ankle sprains. © 2016 John Wiley & Sons Ltd.
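The reported ICER follows directly from the abstract's figures: the incremental cost is £184 - £155 = £29 over an incremental 0.72 - 0.61 = 0.11 quality-adjusted life years.

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Figures from the abstract: brace £184 / 0.72 QALYs, taping £155 / 0.61 QALYs
ratio = icer(184, 0.72, 155, 0.61)   # about £263.6 per QALY, reported as £263
```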
Ghorbani, Nima; Watson, P J
2005-06-01
This study examined the incremental validity of Hardiness scales in a sample of Iranian managers. Along with measures of the Five Factor Model and of Organizational and Psychological Adjustment, Hardiness scales were administered to 159 male managers (M age = 39.9, SD = 7.5) who had worked in their organizations for an average of 7.9 yr. (SD = 5.4). Hardiness predicted greater Job Satisfaction, higher Organization-based Self-esteem, and perceptions of the work environment as being less stressful and constraining. Hardiness also correlated positively with Assertiveness, Emotional Stability, Extraversion, Openness to Experience, Agreeableness, and Conscientiousness, and negatively with Depression, Anxiety, Perceived Stress, Chance External Control, and Powerful Others External Control. Evidence of incremental validity was obtained when the Hardiness scales supplemented the Five Factor Model in predicting organizational and psychological adjustment. These data documented the incremental validity of the Hardiness scales in a non-Western sample and thus confirmed once again that Hardiness has a relevance that extends beyond the culture in which it was developed.
Risk adjustment and the fear of markets: the case of Belgium.
Schokkaert, E; Van de Voorde, C
2000-02-01
In Belgium the management and administration of the compulsory and universal health insurance is left to a limited number of non-governmental, non-profit sickness funds. Since 1995 these sickness funds have been partially financed in a prospective way. The risk adjustment scheme is based on a regression model explaining medical expenditures for different social groups, with medical supply taken out of the formula to construct risk-adjusted capitation payments. The risk-adjustment formula still leaves scope for risk selection. At the same time, the sickness funds were not given the instruments to exert real influence on expenditures, and the health insurance market has not been opened to new entrants. As a consequence, Belgium runs the danger of ending up in a situation with few incentives for efficiency and considerable profits from cream skimming.
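The regression-based capitation idea can be sketched as follows (a toy with synthetic group-level spending; the Belgian formula involves many more risk adjusters and the deliberate exclusion of supply variables):

```python
import numpy as np

def capitation_payments(expend, group_dummies):
    """Fit expected expenditure per risk group by OLS on group indicators
    and use the fitted values as risk-adjusted capitation payments."""
    beta, *_ = np.linalg.lstsq(group_dummies, expend, rcond=None)
    return group_dummies @ beta

# Toy data: three social groups with different expected spending
rng = np.random.default_rng(4)
g = rng.integers(0, 3, 600)                    # group membership
dummies = np.eye(3)[g]                         # one-hot indicators
expend = np.array([800.0, 1500.0, 3000.0])[g] + rng.normal(0, 200, 600)

cap = capitation_payments(expend, dummies)
```

With a saturated dummy regression, each member's capitation payment is simply their group's mean expenditure; the scope for cream skimming arises from whatever within-group variation the adjusters fail to capture.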
Bernstein, Richard H
2007-01-01
"Care management" purposefully obscures the distinctions between disease and case management and stresses their common features: action in the present to prevent adverse future outcomes and costs. It includes identifying a high-need population by referrals, screening, or data analysis, assessing those likely to benefit from interventions, intervening, evaluating the intervention, and adjusting interventions when needed. High-risk individuals can be identified using at least 9 techniques, from referrals and questionnaires to retrospective claims analysis and predictive models. Other than referrals, software based on the risk-adjustment methodology that we have adapted can incorporate all these methodologies. Because the risk adjustment employs extensive case mix and severity adjustment, it provides care managers with 3 innovative ways to identify not only high-risk individuals but also high-opportunity cases.
ERIC Educational Resources Information Center
Vos, Hans J.
1994-01-01
Describes the construction of a model of computer-assisted instruction using a qualitative block diagram based on general systems theory (GST) as a framework. Subject matter representation is discussed, and appendices include system variables and system equations of the GST model, as well as an example of developing flexible courseware. (Contains…
ERIC Educational Resources Information Center
Sableski, Mary-Kate; Arnold, Jackie Marshall
2017-01-01
Catholic elementary and secondary schools across the country recently adopted standards reflective of the Common Core State Standards to align instruction with state and national guidelines requiring the revision of curriculum and the adjustment of instruction to meet the new standards from an ideological model. This article describes a…
A Statistical Model for Misreported Binary Outcomes in Clustered RCTs of Education Interventions
ERIC Educational Resources Information Center
Schochet, Peter Z.
2013-01-01
In education randomized control trials (RCTs), the misreporting of student outcome data could lead to biased estimates of average treatment effects (ATEs) and their standard errors. This article discusses a statistical model that adjusts for misreported binary outcomes for two-level, school-based RCTs, where it is assumed that misreporting could…
Applicability of the Dual-Factor Model of Mental Health for College Students
ERIC Educational Resources Information Center
Eklund, Katie; Dowdy, Erin; Jones, Camille; Furlong, Michael
2011-01-01
This study explores the utility of a dual-factor model of mental health in which the concepts of mental illness and mental wellness are integrated. Life satisfaction, emotional symptoms, personal adjustment, and clinical symptoms were assessed with a sample of 240 college students. Participants were organized into four groups based on levels of…
Antidepressant treatment and suicide attempts and self-inflicted injury in children and adolescents.
Gibbons, Robert D; Coca Perraillon, Marcelo; Hur, Kwan; Conti, Rena M; Valuck, Robert J; Brent, David A
2015-02-01
In 2004, the FDA placed a black box warning on antidepressants for risk of suicidal thoughts and behavior in children and adolescents. The purpose of this paper is to examine the risk of suicide attempt and self-inflicted injury in depressed children ages 5-17 treated with antidepressants in two large observational datasets, taking into account time-varying confounding. We analyzed two large US medical claims databases (MarketScan and LifeLink) containing 221,028 youth (ages 5-17) with new episodes of depression, with and without antidepressant treatment, during the period 2004-2009. Subjects were followed for up to 180 days. Marginal structural models were used to adjust for time-dependent confounding. For both datasets, a significantly increased risk of suicide attempts and self-inflicted injury was seen during antidepressant treatment episodes in the unadjusted and simple covariate-adjusted analyses. Marginal structural models revealed that the majority of the association is produced by dynamic confounding in the treatment selection process; estimated odds ratios were close to 1.0, consistent with the unadjusted and simple covariate-adjusted association being a product of chance alone. Our analysis suggests antidepressant treatment selection is a product of both static and dynamic patient characteristics. Lack of adjustment for treatment selection based on dynamic patient characteristics can lead to the appearance of an association between antidepressant treatment and suicide attempts and self-inflicted injury among youths in unadjusted and simple covariate-adjusted analyses. Marginal structural models can be used to adjust for static and dynamic treatment selection processes such as those likely encountered in observational studies of associations between antidepressant treatment selection, suicide and related behaviors in youth. Copyright © 2014 John Wiley & Sons, Ltd.
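The marginal structural model adjustment described above rests on inverse-probability-of-treatment weighting. The following sketch illustrates the mechanism on synthetic data (invented coefficients; the true propensity is used as the treatment model, which real analyses must estimate): a confounder drives both treatment and outcome, the crude odds ratio is inflated, and stabilized weights pull it back toward the null.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical confounder (e.g. symptom severity) driving both
# treatment selection and the outcome -- illustrative data only.
severity = rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-1.5 * severity))          # treatment model
treated = rng.binomial(1, p_treat)
# Outcome depends on severity but NOT on treatment (true effect = null)
p_event = 1 / (1 + np.exp(-(-3.0 + severity)))
event = rng.binomial(1, p_event)

def weighted_or(t, y, w):
    """Odds ratio from a (weighted) 2x2 table."""
    a = w[(t == 1) & (y == 1)].sum(); b = w[(t == 1) & (y == 0)].sum()
    c = w[(t == 0) & (y == 1)].sum(); d = w[(t == 0) & (y == 0)].sum()
    return (a * d) / (b * c)

# Unadjusted OR is inflated by confounding
or_crude = weighted_or(treated, event, np.ones(n))

# Stabilized inverse-probability-of-treatment weights
p_margin = treated.mean()
w = np.where(treated == 1, p_margin / p_treat, (1 - p_margin) / (1 - p_treat))
or_ipw = weighted_or(treated, event, w)

print(round(or_crude, 2), round(or_ipw, 2))  # weighted OR is near 1.0
```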
Through the Lens of Culture: Quality of Life Among Latina Breast Cancer Survivors
Graves, Kristi D.; Jensen, Roxanne E.; Cañar, Janet; Perret-Gentil, Monique; Leventhal, Kara-Grace; Gonzalez, Florencia; Caicedo, Larisa; Jandorf, Lina; Kelly, Scott; Mandelblatt, Jeanne
2012-01-01
BACKGROUND Latinas have lower quality of life than Caucasian cancer survivors but we know little about factors associated with quality of life in this growing population. METHODS Bilingual staff conducted interviews with a national cross-sectional sample of 264 Latina breast cancer survivors. Quality of life was measured using the Functional Assessment of Cancer Therapy-Breast (FACT-B). Regression models evaluated associations between culture, social and medical context and overall quality of life and its subdomains. RESULTS Latina survivors were 1-5 years post-diagnosis and reported a lower mean quality of life score compared to other published reports of non-Latina survivors (M=105; SD=19.4 on the FACT-B). Culturally-based feelings of breast cancer-related stigma and shame were consistently related to lower overall quality of life and lower well-being in each quality of life domain. Social and medical contextual factors were independently related to quality of life; together cultural, social and medical context factors uniquely accounted for 62% of the explained model variance of overall quality of life (Adjusted R2=0.53, P<.001). Similar relationships were seen for quality of life subdomains in which cultural, social and medical contextual variables independently contributed to the overall variance of each final model: physical well-being (Adjusted R2=0.23, P <.001), social well-being (Adjusted R2=0.51, P<.001), emotional well-being (Adjusted R2=0.28, P<.001), functional well-being (Adjusted R2=0.41, P<.001) and additional breast concerns (Adjusted R2=0.40, P<.001). CONCLUSIONS Efforts to improve Latinas’ survivorship experiences should consider cultural, social and medical contextual factors to close existing quality of life gaps between Latinas and other survivors. PMID:23085764
High-precision method of binocular camera calibration with a distortion model.
Li, Weimin; Shan, Siyu; Liu, Hui
2017-03-10
A high-precision camera calibration method for a binocular stereo vision system, based on a multi-view template and alternative bundle adjustment, is presented in this paper. The proposed method is carried out by taking several photos of a specially designed calibration template that has diverse encoded points in different orientations. The method utilizes an existing monocular camera calibration algorithm to obtain the initialization, which involves a camera model including radial and tangential lens distortion. We created a reference coordinate system based on the left camera coordinate to optimize the intrinsic parameters of the left camera through alternative bundle adjustment. Then, optimal intrinsic parameters of the right camera were obtained in the same way, with the reference coordinate system based on the right camera coordinate. All the acquired intrinsic parameters were then used to optimize the extrinsic parameters, yielding the optimal lens distortion, intrinsic and extrinsic parameters. Synthetic and real data were used to test the method. The simulation results demonstrate that the maximum mean absolute relative calibration errors are about 3.5e-6 and 1.2e-6 for the focal length and the principal point, respectively, under zero-mean Gaussian noise with 0.05 pixels standard deviation. The real result shows that the reprojection error of our model is about 0.045 pixels, with a relative standard deviation of 1.0e-6 over the intrinsic parameters. The proposed method is convenient, cost-efficient, highly precise, and simple to carry out.
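The radial-plus-tangential distortion model referred to above is conventionally the Brown-Conrady form. A minimal sketch of applying it to normalized image coordinates, with invented parameter values (not those calibrated in the paper), together with the reprojection-error metric used to score a calibration:

```python
import numpy as np

def project(points_xy, fx, fy, cx, cy, k1, k2, p1, p2):
    """Apply radial (k1, k2) + tangential (p1, p2) distortion to
    normalized coordinates, then map to pixels with focal lengths
    (fx, fy) and principal point (cx, cy). Standard Brown-Conrady form."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    r2 = x**2 + y**2
    radial = 1 + k1 * r2 + k2 * r2**2
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)
    y_d = y * radial + p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
    return np.stack([fx * x_d + cx, fy * y_d + cy], axis=1)

def reprojection_error(observed_px, projected_px):
    """Mean Euclidean distance between observed and reprojected points."""
    return np.linalg.norm(observed_px - projected_px, axis=1).mean()

# Illustrative parameter values only
pts = np.array([[0.0, 0.0], [0.1, -0.2], [-0.3, 0.15]])
px = project(pts, fx=800, fy=800, cx=640, cy=360,
             k1=-0.2, k2=0.05, p1=1e-3, p2=-5e-4)
print(px[0])  # the optical axis point maps to the principal point
```

In a bundle adjustment, these parameters would be iteratively tuned to minimize `reprojection_error` over all views.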
Cranioplasty prosthesis manufacturing based on reverse engineering technology
Chrzan, Robert; Urbanik, Andrzej; Karbowski, Krzysztof; Moskała, Marek; Polak, Jarosław; Pyrich, Marek
2012-01-01
Summary Background Most patients with large focal skull bone loss after craniectomy are referred for cranioplasty. Reverse engineering is a technology which creates a computer-aided design (CAD) model of a real structure. Rapid prototyping is a technology which produces physical objects from virtual CAD models. The aim of this study was to assess the clinical usefulness of these technologies in cranioplasty prosthesis manufacturing. Material/Methods CT was performed on 19 patients with focal skull bone loss after craniectomy, using a dedicated protocol. A material model of skull deficit was produced using computer numerical control (CNC) milling, and individually pre-operatively adjusted polypropylene-polyester prosthesis was prepared. In a control group of 20 patients a prosthesis was manually adjusted to each patient by a neurosurgeon during surgery, without using CT-based reverse engineering/rapid prototyping. In each case, the prosthesis was implanted into the patient. The mean operating times in both groups were compared. Results In the group of patients with reverse engineering/rapid prototyping-based cranioplasty, the mean operating time was shorter (120.3 min) compared to that in the control group (136.5 min). The neurosurgeons found the new technology particularly useful in more complicated bone deficits with different curvatures in various planes. Conclusions Reverse engineering and rapid prototyping may reduce the time needed for cranioplasty neurosurgery and improve the prosthesis fitting. Such technologies may utilize data obtained by commonly used spiral CT scanners. The manufacturing of individually adjusted prostheses should be commonly used in patients planned for cranioplasty with synthetic material. PMID:22207125
Satellite-based Flood Modeling Using TRMM-based Rainfall Products
Harris, Amanda; Rahman, Sayma; Hossain, Faisal; Yarborough, Lance; Bagtzoglou, Amvrossios C.; Easson, Greg
2007-01-01
An increasingly available and virtually uninterrupted supply of satellite-estimated rainfall data is gradually becoming a cost-effective source of input for flood prediction under a variety of circumstances. However, most real-time and quasi-global satellite rainfall products are currently available at spatial scales ranging from 0.25° to 0.50° and hence are considered somewhat coarse for dynamic hydrologic modeling of basin-scale flood events. This study assesses the question: what are the hydrologic implications of uncertainty of satellite rainfall data at the coarse scale? We investigated this question on the 970 km2 Upper Cumberland river basin of Kentucky. The satellite rainfall product assessed was NASA's Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) product called 3B41RT, which is available in pseudo real time with a latency of 6-10 hours. We observed that bias adjustment of satellite rainfall data can improve application in flood prediction to some extent, with the trade-off of more false alarms in peak flow. However, a more rational and regime-based adjustment procedure needs to be identified before the use of satellite data can be institutionalized among flood modelers. PMID:28903302
Mosaicing of airborne LiDAR bathymetry strips based on Monte Carlo matching
NASA Astrophysics Data System (ADS)
Yang, Fanlin; Su, Dianpeng; Zhang, Kai; Ma, Yue; Wang, Mingwei; Yang, Anxiu
2017-09-01
This study proposes a new methodology for mosaicing airborne light detection and ranging (LiDAR) bathymetry (ALB) data based on Monte Carlo matching. Various errors occur in ALB data due to imperfect system integration and other interference factors. To account for these errors, a Monte Carlo matching algorithm based on a nonlinear least-squares adjustment model is proposed. First, the raw data of strip overlap areas were filtered according to their relative drift of depths. Second, a Monte Carlo model and nonlinear least-squares adjustment model were combined to obtain seven transformation parameters. Then, the multibeam bathymetric data were used to correct the initial strip during strip mosaicing. Finally, to evaluate the proposed method, the experimental results were compared with the results of the Iterative Closest Points (ICP) and three-dimensional Normal Distributions Transform (3D-NDT) algorithms. The results demonstrate that the algorithm proposed in this study is more robust and effective. When the quality of the raw data is poor, the Monte Carlo matching algorithm can still achieve centimeter-level accuracy for overlapping areas, which meets the accuracy of bathymetry required by IHO Standards for Hydrographic Surveys Special Publication No.44.
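The two-stage idea above (random candidate transformations, then a least-squares adjustment) can be sketched on a deliberately simplified strip-mosaicing problem. This toy version estimates only a vertical bias and a planar tilt between two overlapping depth strips, not the seven transformation parameters of the paper; all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic overlap: a reference strip and a second strip carrying an
# unknown vertical bias and planar tilt (illustrative values only).
xy = rng.uniform(-50, 50, size=(500, 2))
ref_depth = 20 + 0.05 * xy[:, 0] + np.sin(xy[:, 1] / 10)
bias, tx, ty = 0.30, 0.002, -0.001            # unknown strip errors
strip2 = ref_depth + bias + tx * xy[:, 0] + ty * xy[:, 1]
strip2 += rng.normal(0, 0.01, size=500)       # measurement noise

def misfit(params):
    """Mean squared depth disagreement after removing (b, a, c)."""
    b, a, c = params
    corrected = strip2 - (b + a * xy[:, 0] + c * xy[:, 1])
    return np.mean((corrected - ref_depth) ** 2)

# Monte Carlo stage: sample candidate parameter sets, keep the best ...
candidates = rng.uniform([-1, -0.01, -0.01], [1, 0.01, 0.01], size=(2000, 3))
best = min(candidates, key=misfit)

# ... then refine with a least-squares adjustment (residual is linear
# in the parameters, so one lstsq solve gives the optimum here).
A = np.column_stack([np.ones(500), xy[:, 0], xy[:, 1]])
sol, *_ = np.linalg.lstsq(A, strip2 - ref_depth, rcond=None)
print(np.round(sol, 3))  # recovers approximately [0.30, 0.002, -0.001]
```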
Development of a resource allocation formula for substance misuse treatment services.
Jones, Andrew; Hayhurst, Karen P; Whittaker, Will; Mason, Thomas; Sutton, Matt
2017-11-23
Funding for substance misuse services comprises one-third of Public Health spend in England. The current allocation formula contains adjustments for actual activity, performance and need, proxied by the Standardized Mortality Ratio for under-75s (SMR < 75). Additional measures, such as deprivation, may better identify differential service need. We developed an age-standardized and an age-stratified model (over-18s, under-18s), with the outcome of expected/actual cost at postal sector/Local Authority level. A third, person-based model incorporated predictors of costs at the individual level. Each model incorporated both needs and supply variables, with the relative effects of their inclusion assessed. Mean estimated annual cost (2013/14) per English Local Authority area was £5 032 802 (sd: 3 951 158). Costs for drug misuse treatment represented the majority (83%) of costs. Models achieved adjusted R-squared values of 0.522 (age-standardized), 0.533 (age-stratified over-18s), 0.232 (age-stratified under-18s) and 0.470 (person-based). Improvements can be made to the existing resource allocation formulae to better reflect population need. The person-based model permits inclusion of a range of needs variables, in addition to strong predictors of cost based on the receipt of treatment in the previous year. Adoption of this revised person-based formula for substance misuse would shift resources towards more deprived areas. © The Author 2017. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Tramadol for noncancer pain and the risk of hyponatremia.
Fournier, Jean-Pascal; Yin, Hui; Nessim, Sharon J; Montastruc, Jean-Louis; Azoulay, Laurent
2015-04-01
Case reports have signaled a possible association between tramadol, a weak opioid analgesic, and hyponatremia. The objective of this study was to determine whether the use of tramadol is associated with an increased risk of hyponatremia, when compared with codeine. Using the UK Clinical Practice Research Datalink and Hospital Episodes Statistics database, a population-based cohort of 332,880 patients initiating tramadol or codeine was assembled from 1998 through 2012. Cox proportional hazards models were used to estimate hazard ratios (HRs) with 95% confidence intervals (CIs) of hospitalization for hyponatremia associated with the use of tramadol, compared with codeine, in the first 30 days after initiation. A similar analysis was conducted within a highly restricted sub-cohort, which additionally excluded patients with any serum sodium level abnormality in the year before cohort entry. All models were adjusted for propensity score quintiles. The incidence rates of hospitalization for hyponatremia were 4.6 (95% CI, 2.4-8.0) and 1.9 (95% CI, 1.4-2.5) per 10,000 person-months for tramadol and codeine users, respectively. In the adjusted model, the use of tramadol was associated with a 2-fold increased risk of hospitalization for hyponatremia, compared with codeine (adjusted HR 2.05; 95% CI, 1.08-3.86). In the highly restricted sub-cohort, the use of tramadol was associated with an over 3-fold increased risk of hospitalization for hyponatremia, compared with codeine (adjusted HR 3.54; 95% CI, 1.32-9.54). In this first population-based study, the use of tramadol was associated with an increased risk of hyponatremia requiring hospitalization. Copyright © 2015 Elsevier Inc. All rights reserved.
Sköldberg, Filip; Olén, Ola; Ekbom, Anders; Schmidt, Peter T
2018-07-01
Appendicitis and acute diverticulitis share clinical features and are both influenced by genetic and environmental factors. Appendectomy has been positively associated with diverticular disease in hospital-based case-control studies. The aim of the present study was to investigate, in a population-based setting, whether appendectomy, with or without appendicitis, is associated with an altered risk of hospitalization with diverticular disease. This was a population-based case-control study. The study was based on national healthcare and population registers. We studied 41,988 individuals hospitalized between 2000 and 2010 with a first-time diagnosis of colonic diverticular disease and 413,115 matched control subjects. The association between appendectomy with or without appendicitis and diverticular disease was investigated by conditional logistic regression, including a model adjusting for hospital use. A total of 2813 cases (6.7%) and 19,037 controls (4.6%) had a previous record of appendectomy (appendectomy with acute appendicitis: adjusted OR = 1.31 (95% CI, 1.24-1.39); without appendicitis: adjusted OR = 1.30 (95% CI, 1.23-1.38)). Appendectomy was most strongly associated with an increased risk of diverticular disease within 1 year (with appendicitis: adjusted OR = 2.26 (95% CI, 1.61-3.16); without appendicitis: adjusted OR = 3.98 (95% CI, 2.71-5.83)), but the association was still present ≥20 years after appendectomy (with appendicitis: adjusted OR = 1.22 (95% CI, 1.12-1.32); without appendicitis: adjusted OR = 1.19 (95% CI, 1.10-1.28)). Detailed clinical information on the cases was not available. There were unmeasured potential confounders, such as smoking and dietary factors. The findings are consistent with a hypothesis of appendectomy causing an increased risk of diverticular disease, for example, by affecting the mucosal immune system or the gut microbiome. 
However, several other mechanisms may contribute to, or account for, the positive association, including a propensity for abdominal pain increasing the risk of both the exposure and the outcome. See Video Abstract at http://links.lww.com/DCR/A604.
Donta, Balaiah; Dasgupta, Anindita; Ghule, Mohan; Battala, Madhusudana; Nair, Saritha; Silverman, Jay G.; Jadhav, Arun; Palaye, Prajakta; Saggurti, Niranjan; Raj, Anita
2015-01-01
Objective Evidence has linked economic hardship with increased intimate partner violence (IPV) perpetration among males. However, less is known about how economic debt or gender norms related to men's roles in relationships or the household, which often underlie IPV perpetration, intersect in or may explain these associations. We assessed the intersection of economic debt, attitudes toward gender norms, and IPV perpetration among married men in India. Methods Data were from the evaluation of a family planning intervention among young married couples (n=1,081) in rural Maharashtra, India. Crude and adjusted logistic regression models for dichotomous outcome variables and linear regression models for continuous outcomes were used to examine debt in relation to husbands' attitudes toward gender-based norms (i.e., beliefs supporting IPV and beliefs regarding male dominance in relationships and the household), as well as sexual and physical IPV perpetration. Results Twenty percent of husbands reported debt. In adjusted linear regression models, debt was associated with husbands' attitudes supportive of IPV (b=0.015, p=0.004) and norms supporting male dominance in relationships and the household (b=0.006, p=0.003). In logistic regression models adjusted for relevant demographics, debt was associated with perpetration of physical IPV (adjusted odds ratio [AOR] = 1.4, 95% confidence interval [CI] 1.1, 1.9) and sexual IPV (AOR=1.6, 95% CI 1.1, 2.1) from husbands. These findings related to debt and relation to IPV were slightly attenuated when further adjusted for men's attitudes toward gender norms. Conclusion Findings suggest the need for combined gender equity and economic promotion interventions to address high levels of debt and related IPV reported among married couples in rural India. PMID:26556938
Reed, Elizabeth; Donta, Balaiah; Dasgupta, Anindita; Ghule, Mohan; Battala, Madhusudana; Nair, Saritha; Silverman, Jay G; Jadhav, Arun; Palaye, Prajakta; Saggurti, Niranjan; Raj, Anita
2015-01-01
Evidence has linked economic hardship with increased intimate partner violence (IPV) perpetration among males. However, less is known about how economic debt or gender norms related to men's roles in relationships or the household, which often underlie IPV perpetration, intersect in or may explain these associations. We assessed the intersection of economic debt, attitudes toward gender norms, and IPV perpetration among married men in India. Data were from the evaluation of a family planning intervention among young married couples (n=1,081) in rural Maharashtra, India. Crude and adjusted logistic regression models for dichotomous outcome variables and linear regression models for continuous outcomes were used to examine debt in relation to husbands' attitudes toward gender-based norms (i.e., beliefs supporting IPV and beliefs regarding male dominance in relationships and the household), as well as sexual and physical IPV perpetration. Twenty percent of husbands reported debt. In adjusted linear regression models, debt was associated with husbands' attitudes supportive of IPV (b=0.015, p=0.004) and norms supporting male dominance in relationships and the household (b=0.006, p=0.003). In logistic regression models adjusted for relevant demographics, debt was associated with perpetration of physical IPV (adjusted odds ratio [AOR] = 1.4, 95% confidence interval [CI] 1.1, 1.9) and sexual IPV (AOR=1.6, 95% CI 1.1, 2.1) from husbands. These findings related to debt and relation to IPV were slightly attenuated when further adjusted for men's attitudes toward gender norms. Findings suggest the need for combined gender equity and economic promotion interventions to address high levels of debt and related IPV reported among married couples in rural India.
Grey matter correlates of susceptibility to scams in community-dwelling older adults.
Duke Han, S; Boyle, Patricia A; Yu, Lei; Arfanakis, Konstantinos; James, Bryan D; Fleischman, Debra A; Bennett, David A
2016-06-01
Susceptibility to scams is a significant issue among older adults, even among those with intact cognition. Age-related changes in brain macrostructure may be associated with susceptibility to scams; however, this has yet to be explored. Based on previous work implicating frontal and temporal lobe functioning as important in decision making, we tested the hypothesis that susceptibility to scams is associated with smaller grey matter volume in frontal and temporal lobe regions in a large community-dwelling cohort of non-demented older adults. Participants (N = 327, mean age = 81.55, mean education = 15.30, 78.9 % female) completed a self-report measure used to assess susceptibility to scams and an MRI brain scan. Results indicated an inverse association between overall grey matter and susceptibility to scams in models adjusted for age, education, and sex; and in models further adjusted for cognitive function. No significant associations were observed for white matter, cerebrospinal fluid, or total brain volume. Models adjusted for age, education, and sex revealed seven clusters showing smaller grey matter in the right parahippocampal/hippocampal/fusiform, left middle temporal, left orbitofrontal, right ventromedial prefrontal, right middle temporal, right precuneus, and right dorsolateral prefrontal regions. In models further adjusted for cognitive function, results revealed three significant clusters showing smaller grey matter in the right parahippocampal/hippocampal/fusiform, right hippocampal, and right middle temporal regions. Lower grey matter concentration in specific brain regions may be associated with susceptibility to scams, even after adjusting for cognitive ability. Future research is needed to determine whether grey matter reductions in these regions may be a biomarker for susceptibility to scams in old age.
How to Make Our Models More Physically-based
NASA Astrophysics Data System (ADS)
Savenije, H. H. G.
2016-12-01
Models that are generally called "physically-based" unfortunately only have a partial view of the physical processes at play in hydrology. Although the coupled partial differential equations in these models reflect the water balance equations and the flow descriptors at laboratory scale, they miss essential characteristics of what determines the functioning of catchments. The most important active agent in catchments is the ecosystem (and sometimes people). What these agents do is manipulate the substrate in a way that it supports the essential functions of survival and productivity: infiltration of water, retention of moisture, mobilization and retention of nutrients, and drainage. Ecosystems do this in the most efficient way, in agreement with the landscape, and in response to climatic drivers. In brief, our hydrological system is alive and has a strong capacity to adjust to prevailing and changing circumstances. Although most physically based models take Newtonian theory at heart, as best they can, what they generally miss is Darwinian thinking on how an ecosystem evolves and adjusts its environment to maintain crucial hydrological functions. If this active agent is not reflected in our models, then they miss essential physics. Through a Darwinian approach, we can determine the root zone storage capacity of ecosystems, as a crucial component of hydrological models, determining the partitioning of fluxes and the conservation of moisture to bridge periods of drought. Another crucial element of physical systems is the evolution of drainage patterns, both on and below the surface. On the surface, such patterns facilitate infiltration or surface drainage with minimal erosion; in the unsaturated zone, patterns facilitate efficient replenishment of moisture deficits and preferential drainage when there is excess moisture; in the groundwater, patterns facilitate the efficient and gradual drainage of groundwater, resulting in linear reservoir recession. 
Models that do not incorporate these patterns are not physical. The parameters in the equations may be adjusted to compensate for the lack of patterns, but this involves scale-dependent calibration. In contrast to what is widely believed, relatively simple conceptual models can accommodate these physical processes accurately and very efficiently.
Fund allocation within Australian dental care: an innovative approach to output based funding.
Tennant, M; Carrello, C; Kruger, E
2005-12-01
Over the last 15 years in Australia the process of funding government health care has changed significantly. The development of dental funding models that transparently meet both the service delivery needs for data at the treatment level and policy makers' need for health condition data is critical to the continued integration of dentistry into the wider health system. This paper presents a model of fund allocation that provides a communication construct that addresses the needs of both policy makers and service providers. In this model, dental treatments (dental item numbers) have been grouped into eight broad dental health conditions. Within each dental health condition, a weighted average price is determined using the Department of Veterans Affairs' (DVA) fee schedule as the benchmark, adjusted for the mix of care. The model also adjusts for the efficiency differences between sectors providing government funded dental care. In summary, the price to be applied to a dental health condition category is determined by the weighted average DVA price adjusted by the sector efficiency. This model allows governments and dental service providers to develop funding agreements that both quantify and justify the treatment to be provided. Such a process facilitates the continued integration of dental care into the wider health system.
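The pricing construct described above — a weighted average of benchmark DVA fees over the treatment mix within a dental health condition, scaled by a sector efficiency factor — can be illustrated with a hedged worked example. All shares, fees, and the efficiency factor below are invented for illustration; they are not figures from the paper.

```python
# Hypothetical treatment mix within one dental health condition.
# "share" is the treatment's weight in the mix, "dva_fee" the
# benchmark fee (both invented for this sketch).
treatments = {
    "exam":        {"share": 0.50, "dva_fee": 45.0},
    "restoration": {"share": 0.35, "dva_fee": 120.0},
    "extraction":  {"share": 0.15, "dva_fee": 150.0},
}

def condition_price(mix, efficiency):
    """Weighted average benchmark fee, adjusted by sector efficiency."""
    weighted_avg = sum(t["share"] * t["dva_fee"] for t in mix.values())
    return weighted_avg * efficiency

# A sector delivering care at 90% of the benchmark cost level
print(condition_price(treatments, efficiency=0.9))  # ~78.3
```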
Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Dali; Yuan, Fengming; Hernandez, Benjamin
Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in-situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.
Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations
Wang, Dali; Yuan, Fengming; Hernandez, Benjamin; ...
2017-01-01
Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in-situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.
Reduction of peak energy demand based on smart appliances energy consumption adjustment
NASA Astrophysics Data System (ADS)
Powroźnik, P.; Szulim, R.
2017-08-01
In the paper, the concept of an elastic model of energy management for smart grids and micro smart grids is presented. For the proposed model, a method for reducing peak demand in a micro smart grid has been defined. The idea of peak demand reduction in the elastic model of energy management is to introduce a balance between the demand and supply of current power for a given micro smart grid at a given moment. The results of the simulation studies are presented; they were carried out on real household data available in the UCI Machine Learning Repository. The results may have practical application in smart grid networks, where there is a need for smart appliance energy consumption adjustment. The article also proposes implementing the elastic model of energy management as a cloud computing solution. This approach to peak demand reduction might have application particularly in a large smart grid.
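A minimal sketch of the demand-supply balancing idea (not the authors' algorithm): when total appliance draw exceeds available supply at a given moment, shed the excess proportionally across each smart appliance's adjustable share of its load. All loads and flexibility fractions below are invented.

```python
def balance(demands_kw, flexible_fraction, supply_kw):
    """Scale down the flexible share of each appliance's demand so the
    total does not exceed supply; inflexible load is never touched."""
    total = sum(demands_kw)
    if total <= supply_kw:
        return list(demands_kw)                 # no adjustment needed
    excess = total - supply_kw
    flexible = [d * f for d, f in zip(demands_kw, flexible_fraction)]
    pool = sum(flexible)
    # Shed the excess proportionally to each appliance's flexible load
    cut = min(1.0, excess / pool) if pool > 0 else 0.0
    return [d - fl * cut for d, fl in zip(demands_kw, flexible)]

# Three appliances, 4.0 kW requested vs 3.5 kW available; the third
# appliance (e.g. a fridge) has no flexible share and is untouched.
adjusted = balance([2.0, 1.5, 0.5], [0.5, 0.2, 0.0], supply_kw=3.5)
print([round(a, 2) for a in adjusted], round(sum(adjusted), 2))
```

If the flexible pool is smaller than the excess, the cut saturates at the whole pool and the residual shortfall would need a different mechanism (e.g. deferring appliances), which this sketch does not model.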
Kuk's Model Adjusted for Protection and Efficiency
ERIC Educational Resources Information Center
Su, Shu-Ching; Sedory, Stephen A.; Singh, Sarjinder
2015-01-01
In this article, we adjust the Kuk randomized response model for collecting information on a sensitive characteristic for increased protection and efficiency by making use of forced "yes" and forced "no" responses. We first describe Kuk's model and then the proposed adjustment to Kuk's model. Next, by means of a simulation…
25 CFR 1000.109 - How are self-governance base budgets adjusted?
Code of Federal Regulations, 2010 CFR
2010-04-01
Self-governance base budgets must be adjusted as follows: (a) Congressional action. (1) Increases/decreases as a result...
25 CFR 1000.109 - How are self-governance base budgets adjusted?
Code of Federal Regulations, 2013 CFR
2013-04-01
Self-governance base budgets must be adjusted as follows: (a) Congressional action. (1) Increases/decreases as a result...
25 CFR 1000.109 - How are self-governance base budgets adjusted?
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 2 2011-04-01 2011-04-01 false How are self-governance base budgets adjusted? 1000.109... Self-Governance Base Budgets § 1000.109 How are self-governance base budgets adjusted? Self-governance base budgets must be adjusted as follows: (a) Congressional action. (1) Increases/decreases as a result...
25 CFR 1000.109 - How are self-governance base budgets adjusted?
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 2 2012-04-01 2012-04-01 false How are self-governance base budgets adjusted? 1000.109... Self-Governance Base Budgets § 1000.109 How are self-governance base budgets adjusted? Self-governance base budgets must be adjusted as follows: (a) Congressional action. (1) Increases/decreases as a result...
Comín-Colet, Josep; Rubio-Rodríguez, Darío; Rubio-Terrés, Carlos; Enjuanes-Grau, Cristina; Gutzwiller, Florian S; Anker, Stefan D; Ponikowski, Piotr
2015-10-01
Treatment with ferric carboxymaltose improves symptoms, functional capacity, and quality of life in patients with chronic heart failure and iron deficiency. The aim of this study was to assess the cost-effectiveness of ferric carboxymaltose treatment vs no treatment in these patients. We used an economic model based on the Spanish National Health System, with a time horizon of 24 weeks. Patient characteristics and ferric carboxymaltose effectiveness (quality-adjusted life years) were taken from the Ferinject® Assessment in patients with IRon deficiency and chronic Heart Failure trial. Health care resource use and unit costs were taken either from Spanish sources or from the above-mentioned trial. In the base case analysis, patients treated with and without ferric carboxymaltose treatment acquired 0.335 and 0.298 quality-adjusted life years, respectively, representing a gain of 0.037 quality-adjusted life years for each treated patient. The cost per patient was €824.17 and €597.59, respectively, resulting in an additional cost of €226.58 for each treated patient. The cost of gaining 1 quality-adjusted life year with ferric carboxymaltose was €6123.78. Sensitivity analyses confirmed the robustness of the model. The probability of ferric carboxymaltose being cost-effective (< €30,000 per quality-adjusted life year) and dominant (more effective and lower cost than no treatment) was 93.0% and 6.6%, respectively. Treatment with ferric carboxymaltose in patients with chronic heart failure and iron deficiency, with or without anemia, is cost-effective in Spain. Copyright © 2014 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
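The reported cost-utility figure follows directly from the incremental cost-effectiveness ratio (ICER), and the abstract's numbers can be reproduced:

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# figures from the abstract: ferric carboxymaltose vs no treatment
value = icer(824.17, 597.59, 0.335, 0.298)  # (226.58 EUR) / (0.037 QALY) ~ 6123.78
```

Since the result is well below the €30,000-per-QALY willingness-to-pay threshold, the treatment is judged cost-effective.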
[Risk-adjusted assessment: late-onset infection in neonates].
Gmyrek, Dieter; Koch, Rainer; Vogtmann, Christoph; Kaiser, Annette; Friedrich, Annette
2011-01-01
The weak point of the countrywide perinatal/neonatal quality surveillance is that it does not account for interhospital differences in the case mix of patients. As a result, this approach does not produce reliable benchmarking. The objective of this study was to adjust the late-onset infection incidence of different hospitals according to their patient risk profiles by multivariate analysis. The perinatal/neonatal database of 41,055 newborns from the Saxonian quality surveillance from 1998 to 2004 was analysed. Based on 18 possible risk factors, a logistic regression model was used to develop a specific risk predictor for the quality indicator "late-onset infection". The developed risk predictor for the incidence of late-onset infection could be described by 4 of the 18 analysed risk factors, namely gestational age, admission from home, hypoxic ischemic encephalopathy and B-streptococcal infection. The AUC(ROC) value of this quality indicator was 83.3%, which demonstrates its reliability. The hospital ranking based on the adjusted risk assessment was very different from hospital rankings before this adjustment. The average correction of ranking position was 4.96 for 35 clinics. The application of the risk adjustment method proposed here allows for a more objective comparison of the incidence of the quality indicator "late-onset infection" among different hospitals. Copyright © 2011. Published by Elsevier GmbH.
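The discrimination statistic reported above (the area under the ROC curve) can be computed directly from any risk score. A self-contained sketch on synthetic data (illustrative only; the variable names and effect sizes are invented, not the Saxonian model's coefficients):

```python
import numpy as np

def roc_auc(scores, labels):
    """Mann-Whitney form of the ROC AUC (assumes continuous, tie-free scores):
    the probability that a random case outscores a random control."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    ranks = np.empty(len(scores))
    ranks[scores.argsort()] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# synthetic cohort: late-onset infection risk rises as gestational age falls
rng = np.random.default_rng(0)
gest_age = rng.normal(38, 3, 5000)                    # weeks (invented numbers)
p = 1 / (1 + np.exp(-(-4 + 0.5 * (38 - gest_age))))   # assumed true risk model
infection = rng.random(5000) < p
auc = roc_auc(-gest_age, infection)                   # discrimination of the predictor
```

An AUC of 0.5 means no discrimination and 1.0 perfect separation; the study's 83.3% sits comfortably in the "reliable" range.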
NASA Astrophysics Data System (ADS)
Réveillet, Marion; Six, Delphine; Vincent, Christian; Rabatel, Antoine; Dumont, Marie; Lafaysse, Matthieu; Morin, Samuel; Vionnet, Vincent; Litt, Maxime
2018-04-01
This study focuses on simulations of the seasonal and annual surface mass balance (SMB) of Saint-Sorlin Glacier (French Alps) for the period 1996-2015 using the detailed SURFEX/ISBA-Crocus snowpack model. The model is forced by SAFRAN meteorological reanalysis data, adjusted with automatic weather station (AWS) measurements to ensure that simulations of all the energy balance components, in particular turbulent fluxes, are accurately represented with respect to the measured energy balance. Results indicate good model performance for the simulation of summer SMB when using meteorological forcing adjusted with in situ measurements. Model performance however strongly decreases without in situ meteorological measurements. The sensitivity of the model to meteorological forcing indicates a strong sensitivity to wind speed, higher than the sensitivity to ice albedo. Compared to an empirical approach, the model exhibited better performance for simulations of snow and firn melting in the accumulation area and similar performance in the ablation area when forced with meteorological data adjusted with nearby AWS measurements. When such measurements were not available close to the glacier, the empirical model performed better. Our results suggest that simulations of the evolution of future mass balance using an energy balance model require very accurate meteorological data. Given the uncertainties in the temporal evolution of the relevant meteorological variables and glacier surface properties in the future, empirical approaches based on temperature and precipitation could be more appropriate for simulations of glaciers in the future.
Chun, Heeran; Khang, Young-Ho; Kim, Il-Ho; Cho, Sung-Il
2008-09-01
This study examines and explains the gender disparity in health despite rapid modernization in South Korea, where the social structure is still based on traditional gender relations. A nationally representative sample of 2897 men and 3286 women aged 25-64 from the 2001 Korean National Health and Nutrition Examination Survey was analyzed. Health indicators included self-rated health and chronic disease. Age-adjusted prevalence was computed by gender, and odds ratios (OR) were derived from logistic regression. Percentage changes in OR upon inclusion of determinant variables (socio-structural, psychosocial, and behavioral) into the base logistic regression model were used to estimate the contributions to the gender gap in the two morbidity measures. Results showed a substantial female excess in ill-health in both measures, revealing an increasing disparity in the older age group. Group-specific age-adjusted prevalence of ill-health showed an inverse relationship to socioeconomic position. When adjusting for each determinant, employment status, education, and depression contributed the most to the gender gap. After adjusting for all suggested determinants, 78% of the excess OR for self-rated health and 86% for chronic disease could be explained. After stratifying for age, the full model provided a complete explanation for the female excess in chronic illness, but for self-rated health a female excess was still evident in the younger age group. Socio-structural factors played a crucial role in accounting for the female excess in ill-health. This result calls for greater attention to gender-based health inequality stemming from socio-structural determinants in South Korea. Cross-cultural validation studies are suggested for further discussion of the link between changing gender relations and the gender health gap in morbidity in diverse settings.
Coluccelli, Nicola
2010-08-01
The modeling of a real laser diode stack based on the Zemax ray-tracing software operating in nonsequential mode is reported. The implementation of the model is presented together with the geometric and optical parameters to be adjusted to calibrate the model and to match the simulated irradiance profiles with the experimental profiles. The calibration of the model is based on a near-field and a far-field measurement. The validation of the model has been accomplished by comparing the simulated and experimental transverse irradiance profiles at different positions along the caustic formed by a lens. Spot sizes and waist location are predicted with a maximum error below 6%.
Balentine, Courtney J; Vanness, David J; Schneider, David F
2018-01-01
We evaluated whether diagnostic thyroidectomy for indeterminate thyroid nodules would be more cost-effective than genetic testing after including the costs of long-term surveillance. We used a Markov decision model to estimate the cost-effectiveness of thyroid lobectomy versus genetic testing (Afirma®) for evaluation of indeterminate (Bethesda 3-4) thyroid nodules. The base case was a 40-year-old woman with a 1-cm indeterminate nodule. Probabilities and estimates of utilities were obtained from the literature. Cost estimates were based on Medicare reimbursements with a 3% discount rate for costs and quality-adjusted life-years. During a 5-year period after the diagnosis of indeterminate thyroid nodules, lobectomy was less costly and more effective than Afirma® (lobectomy: $6,100; 4.50 quality-adjusted life-years vs Afirma®: $9,400; 4.47 quality-adjusted life-years). Only in 253 of 10,000 simulations (2.5%) did Afirma® show a net benefit at a cost-effectiveness threshold of $100,000 per quality-adjusted life-year. There was only a 0.3% probability of Afirma® being cost saving and a 14.9% probability of improving quality-adjusted life-years. Our base case estimate suggests that diagnostic lobectomy dominates genetic testing as a strategy for ruling out malignancy of indeterminate thyroid nodules. These results, however, were highly sensitive to estimates of utilities after lobectomy and living under surveillance after Afirma®. Published by Elsevier Inc.
2013-10-21
depend on the quality of allocating resources. This work uses a reliability model of system and environmental covariates incorporating information at...state space. Further, the use of condition variables allows for the direct modeling of maintenance impact with the assumption that a nominal value ... value ), the model in the application of aviation maintenance can provide a useful estimation of reliability at multiple levels. Adjusted survival
Heat balance model for a human body in the form of wet bulb globe temperature indices.
Sakoi, Tomonori; Mochida, Tohru; Kurazumi, Yoshihito; Kuwabara, Kohei; Horiba, Yosuke; Sawada, Shin-Ichi
2018-01-01
The purpose of this study is to expand the empirically derived wet bulb globe temperature (WBGT) index to a rational thermal index based on the heat balance for a human body. We derive the heat balance model in the same form as the WBGT for a human engaged in moderate intensity work with a metabolic heat production of 174 W/m² while wearing typical vapor-permeable clothing under shady and sunny conditions. Two important relationships are revealed based on this derivation: (1) the natural wet bulb and black globe temperature coefficients in the WBGT coincide with the heat balance equation for a human body with a fixed skin wettedness of approximately 0.45 at a fixed skin temperature; and (2) the WBGT can be interpreted as the environmental potential to increase skin temperature rather than the heat storage rate of a human body. We propose an adjustment factor calculation method that supports the application of WBGT for humans dressed in various clothing types and working under various air velocity conditions. Concurrently, we note difficulties in adjusting the WBGT by using a single factor for humans wearing vapor-impermeable protective clothing. The WBGT for shady conditions does not need adjustment depending on the positive radiant field (i.e., when a radiant heat source exists), whereas that for the sunny condition requires adjustments because it underestimates heat stress, which may result in insufficient human protection measures. Copyright © 2017 Elsevier Ltd. All rights reserved.
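For reference, the conventional WBGT combinations that the paper re-derives from the heat balance use fixed coefficients (0.7/0.2/0.1 with solar load, 0.7/0.3 without, per ISO 7243); a direct implementation:

```python
def wbgt(t_nwb, t_g, t_a=None):
    """ISO 7243 WBGT. Outdoors with solar load: 0.7*Tnwb + 0.2*Tg + 0.1*Ta.
    Indoors or in shade (t_a omitted): 0.7*Tnwb + 0.3*Tg. Inputs in deg C:
    natural wet bulb, black globe and air temperature."""
    if t_a is None:
        return 0.7 * t_nwb + 0.3 * t_g
    return 0.7 * t_nwb + 0.2 * t_g + 0.1 * t_a
```

For example, wbgt(25, 40) gives 29.5 °C for the shade form, and wbgt(25, 40, 30) gives 28.5 °C for the solar-load form.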
Tabuchi, Takahiro; Fukuhara, Hiroyuki; Iso, Hiroyasu
2012-09-01
Perceived discrimination has been shown to be associated with health. However, it is uncertain whether discrimination based on geographical place of residence (geographically-based discrimination), such as Buraku or Nishinari discrimination in Japan, is associated with health. We conducted a cross-sectional study (response rate = 52.3%) from February to March 2009 in a Buraku district of Nishinari ward in Osaka city, one of the most deprived areas in Japan. We implemented sex-stratified and education-stratified multivariate regression models to examine the association between geographically-based discrimination and two mental health outcomes (depressive symptoms and diagnosis of mental illness) with adjustment for age, socioeconomic status, social relationships and lifestyle factors. A total of 1994 persons aged 25-79 years (928 men and 1066 women) living in the district were analyzed. In the fully-adjusted model, perceived geographically-based discrimination was significantly associated with depressive symptoms and diagnosis of mental illness. It was more strongly associated among men or highly educated people than among women or among less educated people. The effect of geographically-based discrimination on mental health is independent of socioeconomic status, social relationship and lifestyle factors. Geographically-based discrimination may be one of the social determinants of mental health. Copyright © 2012. Published by Elsevier Ltd.
Lower extremity EMG-driven modeling of walking with automated adjustment of musculoskeletal geometry
Meyer, Andrew J.; Patten, Carolynn
2017-01-01
Neuromusculoskeletal disorders affecting walking ability are often difficult to manage, in part due to limited understanding of how a patient’s lower extremity muscle excitations contribute to the patient’s lower extremity joint moments. To assist in the study of these disorders, researchers have developed electromyography (EMG) driven neuromusculoskeletal models utilizing scaled generic musculoskeletal geometry. While these models can predict individual muscle contributions to lower extremity joint moments during walking, the accuracy of the predictions can be hindered by errors in the scaled geometry. This study presents a novel EMG-driven modeling method that automatically adjusts surrogate representations of the patient’s musculoskeletal geometry to improve prediction of lower extremity joint moments during walking. In addition to commonly adjusted neuromusculoskeletal model parameters, the proposed method adjusts model parameters defining muscle-tendon lengths, velocities, and moment arms. We evaluated our EMG-driven modeling method using data collected from a high-functioning hemiparetic subject walking on an instrumented treadmill at speeds ranging from 0.4 to 0.8 m/s. EMG-driven model parameter values were calibrated to match inverse dynamic moments for five degrees of freedom in each leg while keeping musculoskeletal geometry close to that of an initial scaled musculoskeletal model. We found that our EMG-driven modeling method incorporating automated adjustment of musculoskeletal geometry predicted net joint moments during walking more accurately than did the same method without geometric adjustments. Geometric adjustments improved moment prediction errors by 25% on average and up to 52%, with the largest improvements occurring at the hip. Predicted adjustments to musculoskeletal geometry were comparable to errors reported in the literature between scaled generic geometric models and measurements made from imaging data. 
Our results demonstrate that with appropriate experimental data, joint moment predictions for walking generated by an EMG-driven model can be improved significantly when automated adjustment of musculoskeletal geometry is included in the model calibration process. PMID:28700708
Modeling wind adjustment factor and midflame wind speed for Rothermel's surface fire spread model
Patricia L. Andrews
2012-01-01
Rothermel's surface fire spread model was developed to use a value for the wind speed that affects surface fire, called midflame wind speed. Models have been developed to adjust 20-ft wind speed to midflame wind speed for sheltered and unsheltered surface fuel. In this report, Wind Adjustment Factor (WAF) model equations are given, and the BehavePlus fire modeling...
Cui, Xin-yue; Chen, Tian-jiao; Ma, Jun
2015-06-18
To study whether a socio-ecological model based on a "student-school-family" three-level strategy is effective in obesity prevention. A total of 3175 students aged 7 to 18 from 16 schools (4 urban primary schools, 4 rural primary schools, 4 urban secondary schools and 4 rural secondary schools, from each of which 2 intervention schools were selected) were recruited by a stratified cluster sampling method. A three-month intervention using the "student-school-family" socio-ecological model was conducted through health education and environment improvement. The intervention contents included knowledge on obesity, healthy diet and physical activities. Anthropometric indexes were recorded. The intervention prevented obesity (OR=1.12, P<0.05) and was effective for waist circumference (WC) and waist-hip ratio (WHR) (adjusted difference=0.63, 0.02, P<0.05). WC and WHR were reduced in girls (adjusted difference=0.52, 0.02, P<0.05), and obesity was prevented in girls (OR=1.18, P<0.05). WC and WHR were reduced in boys (adjusted difference=0.73, 0.01, P<0.05). WHR was reduced in urban areas (adjusted difference=0.01, P<0.05). WC and WHR were reduced (adjusted difference=1.05, 0.02, P<0.05) and obesity was prevented (OR=1.18, P<0.05) in rural areas. WHR was reduced (adjusted difference=0.01, P<0.05) and obesity was prevented (OR=1.21, P<0.05) in primary schools. WHR was reduced in secondary schools (adjusted difference=0.02, P<0.05). The intervention effect was better in girls than in boys, in rural areas than in urban areas, and in primary schools than in secondary schools. The overweight and obesity prevalence went down after the intervention (χ²=11.01, P<0.01). The intervention strategy is effective for central obesity indexes such as WC and WHR, and it can be used widely.
Modeling of a 5-cell direct methanol fuel cell using adaptive-network-based fuzzy inference systems
NASA Astrophysics Data System (ADS)
Wang, Rongrong; Qi, Liang; Xie, Xiaofeng; Ding, Qingqing; Li, Chunwen; Ma, ChenChi M.
The methanol concentration, temperature and current were considered as inputs and the cell voltage as the output, and the performance of a direct methanol fuel cell (DMFC) was modeled by adaptive-network-based fuzzy inference systems (ANFIS). Artificial neural network (ANN) and polynomial-based models were selected for comparison with the ANFIS model with respect to quality and accuracy. Based on the ANFIS model obtained, the characteristics of the DMFC were studied. The results show that temperature and methanol concentration greatly affect the performance of the DMFC. Within a restricted current range, the methanol concentration does not greatly affect the stack voltage. In order to obtain higher fuel utilization efficiency, the methanol concentration and temperature should be adjusted according to the load on the system.
McMillan, Matthew T; Soi, Sameer; Asbun, Horacio J; Ball, Chad G; Bassi, Claudio; Beane, Joal D; Behrman, Stephen W; Berger, Adam C; Bloomston, Mark; Callery, Mark P; Christein, John D; Dixon, Elijah; Drebin, Jeffrey A; Castillo, Carlos Fernandez-Del; Fisher, William E; Fong, Zhi Ven; House, Michael G; Hughes, Steven J; Kent, Tara S; Kunstman, John W; Malleo, Giuseppe; Miller, Benjamin C; Salem, Ronald R; Soares, Kevin; Valero, Vicente; Wolfgang, Christopher L; Vollmer, Charles M
2016-08-01
To evaluate surgical performance in pancreatoduodenectomy using clinically relevant postoperative pancreatic fistula (CR-POPF) occurrence as a quality indicator. Accurate assessment of surgeon and institutional performance requires (1) standardized definitions for the outcome of interest and (2) a comprehensive risk-adjustment process to control for differences in patient risk. This multinational, retrospective study of 4301 pancreatoduodenectomies involved 55 surgeons at 15 institutions. Risk for CR-POPF was assessed using the previously validated Fistula Risk Score, and pancreatic fistulas were stratified by International Study Group criteria. CR-POPF variability was evaluated and hierarchical regression analysis assessed individual surgeon and institutional performance. There was considerable variability in both CR-POPF risk and occurrence. Factors increasing the risk for CR-POPF development included increasing Fistula Risk Score (odds ratio 1.49 per point, P < 0.00001) and octreotide (odds ratio 3.30, P < 0.00001). When adjusting for risk, performance outliers were identified at the surgeon and institutional levels. Of the top 10 surgeons (≥15 cases) for nonrisk-adjusted performance, only 6 remained in this high-performing category following risk adjustment. This analysis of pancreatic fistulas following pancreatoduodenectomy demonstrates considerable variability in both the risk and occurrence of CR-POPF among surgeons and institutions. Disparities in patient risk between providers reinforce the need for comprehensive, risk-adjusted modeling when assessing performance based on procedure-specific complications. Furthermore, beyond inherent patient risk factors, surgical decision-making influences fistula outcomes.
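A common way to express such risk-adjusted provider performance (shown here as a generic sketch, not the hierarchical model used in the study) is the observed-to-expected ratio, where the expected event count is the sum of each case's model-predicted risk:

```python
def oe_ratio(observed_events, predicted_risks):
    """Observed-to-expected ratio: observed event count divided by the
    sum of per-case model-predicted probabilities (the expected count)."""
    return observed_events / sum(predicted_risks)

# hypothetical surgeon: 3 fistulas observed in 20 cases whose predicted
# CR-POPF risks sum to 4.0, giving O/E = 0.75 (fewer events than expected)
ratio = oe_ratio(3, [0.2] * 20)
```

An O/E below 1 indicates fewer fistulas than the case mix predicts, which is why surgeons who look strong on raw rates can drop out of the top tier once risk adjustment is applied.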
Hajdarbegovic, E; Blom, H; Verkouteren, J A C; Hofman, A; Hollestein, L M; Nijsten, T
2016-07-01
Epidermal barrier impairment and an altered immune system in atopic dermatitis (AD) may predispose to ultraviolet-induced DNA damage. To study the association between AD and actinic keratosis (AK) in a population-based cross-sectional study. AD was defined by a modified version of the U.K. working party's diagnostic criteria. AKs were diagnosed by physicians during a full-body skin examination, and keratinocyte cancers were identified via linkage to the national pathology database. The results were analysed in adjusted multivariable and multinomial models. A lower proportion of subjects with AD had AKs than those without AD: 16% vs. 24%, P = 0.002; unadjusted odds ratio (OR) 0.60, 95% confidence interval (CI) 0.42-0.83; adjusted OR 0.74, 95% CI 0.51-1.05; fully adjusted OR 0.69, 95% CI 0.47-1.07. In a multinomial model patients with AD were less likely to have ≥ 10 AKs (adjusted OR 0.28, 95% CI 0.09-0.90). No effect of AD on basal cell carcinoma or squamous cell carcinoma was found: adjusted OR 0.71, 95% CI 0.41-1.24 and adjusted OR 1.54, 95% CI 0.66-3.62, respectively. AD in community-dwelling patients is not associated with AK. © 2016 British Association of Dermatologists.
Empirical agreement in model validation.
Jebeile, Julie; Barberousse, Anouk
2016-04-01
Empirical agreement is often used as an important criterion when assessing the validity of scientific models. However, it is by no means a sufficient criterion as a model can be so adjusted as to fit available data even though it is based on hypotheses whose plausibility is known to be questionable. Our aim in this paper is to investigate into the uses of empirical agreement within the process of model validation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Cost-effectiveness of population based BRCA testing with varying Ashkenazi Jewish ancestry.
Manchanda, Ranjit; Patel, Shreeya; Antoniou, Antonis C; Levy-Lahad, Ephrat; Turnbull, Clare; Evans, D Gareth; Hopper, John L; Macinnis, Robert J; Menon, Usha; Jacobs, Ian; Legood, Rosa
2017-11-01
Population-based BRCA1/BRCA2 testing has been found to be cost-effective compared with family history-based testing in Ashkenazi-Jewish women >30 years old with 4 Ashkenazi-Jewish grandparents. However, individuals may have 1, 2, or 3 Ashkenazi-Jewish grandparents, and cost-effectiveness data are lacking at these lower BRCA prevalence estimates. We present an updated cost-effectiveness analysis of population BRCA1/BRCA2 testing for women with 1, 2, and 3 Ashkenazi-Jewish grandparents. Lifetime costs and effects of population and family history-based testing were compared with the use of a decision analysis model; 56% of BRCA carriers are missed by family history criteria alone. Analyses were conducted for United Kingdom and United States populations. Model parameters were obtained from the Genetic Cancer Prediction through Population Screening trial and published literature. Model parameters and BRCA population prevalence for individuals with 3, 2, or 1 Ashkenazi-Jewish grandparent were adjusted for the relative frequency of BRCA mutations in the Ashkenazi-Jewish and general populations. Incremental cost-effectiveness ratios were calculated for all Ashkenazi-Jewish grandparent scenarios. Costs, along with outcomes, were discounted at 3.5%. The time horizon of the analysis is "lifetime," and the perspective is "payer." Probabilistic sensitivity analysis evaluated model uncertainty. Population testing for BRCA mutations is cost-saving in Ashkenazi-Jewish women with 2, 3, or 4 grandparents (22-33 days of life gained) in the United Kingdom and with 1, 2, 3, or 4 grandparents (12-26 days of life gained) in the United States, respectively. It is also extremely cost-effective in women in the United Kingdom with just 1 Ashkenazi-Jewish grandparent, with an incremental cost-effectiveness ratio of £863 per quality-adjusted life-year and 15 days of life gained.
Results show that population testing remains cost-effective at the £20,000-30,000 per quality-adjusted life-year and $100,000 per quality-adjusted life-year willingness-to-pay thresholds for all 4 Ashkenazi-Jewish grandparent scenarios, with ≥95% of simulations found to be cost-effective on probabilistic sensitivity analysis. Population testing remains cost-effective in the absence of reduction in breast cancer risk from oophorectomy and at lower risk-reducing mastectomy (13%) or risk-reducing salpingo-oophorectomy (20%) rates. Population testing for BRCA mutations with varying levels of Ashkenazi-Jewish ancestry is cost-effective in the United Kingdom and the United States. These results support population testing in Ashkenazi-Jewish women with 1-4 Ashkenazi-Jewish grandparent ancestry. Copyright © 2017 Elsevier Inc. All rights reserved.
Confounder summary scores when comparing the effects of multiple drug exposures.
Cadarette, Suzanne M; Gagne, Joshua J; Solomon, Daniel H; Katz, Jeffrey N; Stürmer, Til
2010-01-01
Little information is available comparing methods to adjust for confounding when considering multiple drug exposures. We compared three analytic strategies to control for confounding based on measured variables: conventional multivariable modeling, exposure propensity scores (EPS), and disease risk scores (DRS). Each method was applied to a dataset (2000-2006) recently used to examine the comparative effectiveness of four drugs. The relative effectiveness of risedronate, nasal calcitonin, and raloxifene in preventing non-vertebral fracture was each compared to alendronate. EPSs were derived both by using multinomial logistic regression (single-model EPS) and by three separate logistic regression models (separate-model EPS). DRSs were derived and event rates compared using Cox proportional hazards models. DRSs derived among the entire cohort (full-cohort DRS) were compared to DRSs derived only among the referent alendronate users (unexposed-cohort DRS). Less than 8% deviation from the base estimate (conventional multivariable) was observed when applying the single-model EPS, separate-model EPS or full-cohort DRS. Applying the unexposed-cohort DRS when background risk for fracture differed between comparison drug exposure cohorts resulted in -7% to +13% deviation from our base estimate. With sufficient numbers of exposed subjects and outcomes, conventional multivariable modeling, EPS or full-cohort DRS may be used to adjust for confounding when comparing the effects of multiple drug exposures. However, our data also suggest that the unexposed-cohort DRS may be problematic when background risks differ between referent and exposed groups. Further empirical and simulation studies will help to clarify the generalizability of our findings.
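A single-model EPS of the kind described assigns every subject a predicted probability of each drug exposure from one multinomial logistic regression. A self-contained sketch with synthetic covariates (a hand-rolled gradient-descent fit for illustration; the covariate names and data are invented, not the study's dataset):

```python
import numpy as np

def multinomial_eps(X, exposure, n_classes, lr=0.1, steps=3000):
    """Single-model exposure propensity scores: fit one multinomial
    logistic regression by gradient descent and return, for every
    subject, the predicted probability of each exposure."""
    X = (X - X.mean(axis=0)) / X.std(axis=0)    # standardize covariates
    X1 = np.column_stack([np.ones(len(X)), X])  # add intercept column
    W = np.zeros((X1.shape[1], n_classes))
    Y = np.eye(n_classes)[exposure]             # one-hot exposure labels
    for _ in range(steps):
        Z = X1 @ W
        P = np.exp(Z - Z.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)       # softmax probabilities
        W -= lr * X1.T @ (P - Y) / len(X)       # mean cross-entropy gradient
    Z = X1 @ W
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    return P / P.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
age = rng.normal(70, 8, 1000)                       # invented covariates
prior_fracture = rng.integers(0, 2, 1000).astype(float)
drug = rng.integers(0, 4, 1000)                     # 0 = alendronate referent
eps = multinomial_eps(np.column_stack([age, prior_fracture]), drug, 4)
```

Each row of `eps` sums to 1 across the four exposures, and these scores can then be used jointly for matching, stratification or covariate adjustment.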
Chen, Chien-Min; Yang, Yao-Hsu; Chang, Chia-Hao; Chen, Pau-Chung
2017-12-01
To assess the long-term health outcomes of acute stroke survivors transferred to the rehabilitation ward. Long-term mortality rates of first-time stroke survivors during hospitalization were compared among the following sets of patients: patients transferred to the rehabilitation ward, patients receiving rehabilitation without being transferred to the rehabilitation ward, and patients receiving no rehabilitation. Retrospective cohort study. Patients (N = 11,419) with stroke from 2005 to 2008 were initially assessed for eligibility. After propensity score matching, 390 first-time stroke survivors were included. None. Cox proportional hazards regression model was used to assess differences in 5-year poststroke mortality rates. Based on adjusted hazard ratios (HRs), the patients receiving rehabilitation without being transferred to the rehabilitation ward (adjusted HR, 2.20; 95% confidence interval [CI], 1.36-3.57) and patients receiving no rehabilitation (adjusted HR, 4.00; 95% CI, 2.55-6.27) had significantly higher mortality risk than the patients transferred to the rehabilitation ward. Mortality rate of the stroke survivors was affected by age ≥65 years (compared with age <45y; adjusted HR, 3.62), being a man (adjusted HR, 1.49), having ischemic stroke (adjusted HR, 1.55), stroke severity (Stroke Severity Index [SSI] score≥20, compared with SSI score<10; adjusted HR, 2.68), and comorbidity (Charlson-Deyo Comorbidity Index [CCI] score≥3, compared with CCI score=0; adjusted HR, 4.23). First-time stroke survivors transferred to the rehabilitation ward had a 5-year mortality rate 2.2 times lower than those who received rehabilitation without transfer to the rehabilitation ward and 4 times lower than those who received no rehabilitation. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Bernard, Lori L.; Guarnaccia, Charles A.
2003-01-01
Purpose: Caregiver bereavement adjustment literature suggests opposite models of impact of role strain on bereavement adjustment after care-recipient death--a Complicated Grief Model and a Relief Model. This study tests these competing models for husband and adult-daughter caregivers of breast cancer hospice patients. Design and Methods: This…
Reconciling GRACE and GPS estimates of long-term load deformation in southern Greenland
NASA Astrophysics Data System (ADS)
Wang, Song-Yun; Chen, J. L.; Wilson, Clark R.; Li, Jin; Hu, Xiaogong
2018-02-01
We examine vertical load deformation at four continuous Global Positioning System (GPS) sites in southern Greenland relative to Gravity Recovery and Climate Experiment (GRACE) predictions of vertical deformation over the period 2002-2016. With limited spatial resolution, GRACE predictions require adjustment before they can be compared with GPS height time series. Without adjustment, both GRACE spherical harmonic (SH) and mascon solutions predict significant vertical displacement rate differences relative to GPS. We use a scaling factor method to adjust GRACE results, based on a long-term mass rate model derived from GRACE measurements, glacial geography, and ice flow data. Adjusted GRACE estimates show significantly improved agreement with GPS, both in terms of long-term rates and interannual variations. A deceleration of mass loss is observed in southern Greenland since early 2013. The success at reconciling GPS and GRACE observations with a more detailed mass rate model demonstrates the high sensitivity to load distribution in regions surrounding GPS stations. Conversely, the value of GPS observations in constraining mass changes in surrounding regions is also demonstrated. In addition, our results are consistent with recent estimates of GIA uplift (~4.4 mm yr⁻¹) at the KULU site.
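A basic scaling-factor adjustment of the kind referred to can be sketched as a one-parameter least-squares fit between a model series and the resolution-limited GRACE series (a generic sketch; the study's actual factors come from its long-term mass rate model):

```python
import numpy as np

def scaling_factor(model_series, filtered_series):
    """Least-squares gain k minimizing ||model - k * filtered||^2, used to
    restore amplitude attenuated by GRACE's limited spatial resolution."""
    m = np.asarray(model_series, float)
    f = np.asarray(filtered_series, float)
    return float((m @ f) / (f @ f))

# if filtering simply halved the true signal, the factor recovers 2.0
true_signal = np.array([1.0, -2.0, 3.0, -1.5])
k = scaling_factor(true_signal, true_signal * 0.5)
```

Multiplying the filtered GRACE series by `k` then rescales it toward the model's amplitude before comparison with GPS heights.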
Shakiba, Maryam; Mansournia, Mohammad Ali; Salari, Arsalan; Soori, Hamid; Mansournia, Nasrin; Kaufman, Jay S
2018-06-01
In longitudinal studies, standard analysis may yield biased estimates of exposure effect in the presence of time-varying confounders that are also intermediate variables. We aimed to quantify the relationship between obesity and coronary heart disease (CHD) by appropriately adjusting for time-varying confounders. This study was performed in a subset of participants from the Atherosclerosis Risk in Communities (ARIC) Study (1987-2010), a US study designed to investigate risk factors for atherosclerosis. General obesity was defined as body mass index (weight (kg)/height (m)²) ≥30, and abdominal obesity (AOB) was defined according to either waist circumference (≥102 cm in men and ≥88 cm in women) or waist:hip ratio (≥0.9 in men and ≥0.85 in women). The association of obesity with CHD was estimated by G-estimation and compared with results from accelerated failure-time models using 3 specifications. The first model, which adjusted for baseline covariates, excluding metabolic mediators of obesity, showed increased risk of CHD for all obesity measures. Further adjustment for metabolic mediators in the second model and time-varying variables in the third model produced negligible changes in the hazard ratios. The hazard ratios estimated by G-estimation were 1.15 (95% confidence interval (CI): 0.83, 1.47) for general obesity, 1.65 (95% CI: 1.35, 1.92) for AOB based on waist circumference, and 1.38 (95% CI: 1.13, 1.99) for AOB based on waist:hip ratio, suggesting that AOB increased the risk of CHD. The G-estimated hazard ratios for both measures were further from the null than those derived from standard models.
THE IMPACT OF MEASURES OF SOCIOECONOMIC STATUS ON HOSPITAL PROFILING IN NEW YORK CITY
Blum, Alexander B.; Egorova, Natalia N.; Sosunov, Eugene A.; Gelijns, Annetine C.; DuPree, Erin; Moskowitz, Alan J.; Federman, Alex D.; Ascheim, Deborah D.; Keyhani, Salomeh
2014-01-01
Background Current 30-day readmission models used by the Centers for Medicare & Medicaid Services for the purpose of hospital-level comparisons lack measures of socioeconomic status (SES). We examined whether the inclusion of an SES measure in 30-day congestive heart failure (CHF) readmission models changed hospital risk-standardized readmission rates (RSRR) in New York City (NYC) hospitals. Methods and Results Using a Centers for Medicare & Medicaid Services (CMS)-like model we estimated 30-day hospital-level RSRR by adjusting for age, gender and comorbid conditions. Next, we examined how hospital RSRRs changed relative to the New York City mean with inclusion of the Agency for Healthcare Research and Quality (AHRQ) validated SES index score. In a secondary analysis, we examined whether inclusion of the AHRQ SES Index score in 30-day readmission models disproportionately impacted the RSRR of minority-serving hospitals. Higher AHRQ SES scores, indicators of higher socioeconomic status, were associated with lower odds, 0.99, of 30-day readmission (p < 0.019). The addition of the AHRQ SES index did not change the model’s C statistic (0.63). After adjustment for the AHRQ SES index, one hospital changed status from “worse than the NYC average” to “no different than the NYC average”. After adjustment for the AHRQ SES index, one NYC minority-serving hospital was re-classified from “worse” to “no different than average”. Conclusions While patients with higher SES were less likely to be readmitted, the impact of SES on readmission was very small. In NYC, inclusion of the AHRQ SES score in a CMS-based model did not impact hospital-level profiling based on 30-day readmission. PMID:24823956
Pietz, Kenneth; Petersen, Laura A
2007-04-01
To compare the ability of two diagnosis-based risk adjustment systems and health self-report to predict short- and long-term mortality. Data were obtained from the Department of Veterans Affairs (VA) administrative databases. The study population was 78,164 VA beneficiaries at eight medical centers during fiscal year (FY) 1998, 35,337 of whom completed a 36-Item Short Form Health Survey for veterans (SF-36V) survey. We tested the ability of Diagnostic Cost Groups (DCGs), Adjusted Clinical Groups (ACGs), SF-36V Physical Component Score (PCS) and Mental Component Score (MCS), and eight SF-36V scales to predict 1- and 2-5-year all-cause mortality. The additional predictive value of adding PCS and MCS to ACGs and DCGs was also evaluated. Logistic regression models were compared using Akaike's information criterion, the c-statistic, and the Hosmer-Lemeshow test. The c-statistics for the eight scales combined with age and gender were 0.766 for 1-year mortality and 0.771 for 2-5-year mortality. For DCGs with age and gender the c-statistics for 1- and 2-5-year mortality were 0.778 and 0.771, respectively. Adding PCS and MCS to the DCG model increased the c-statistics to 0.798 for 1-year and 0.784 for 2-5-year mortality. The DCG model showed slightly better performance than the eight-scale model in predicting 1-year mortality, but the two models showed similar performance for 2-5-year mortality. Health self-report may add health risk information in addition to age, gender, and diagnosis for predicting longer-term mortality.
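The model comparisons above turn on the c-statistic, which equals the Mann-Whitney rank probability that a randomly chosen death outranks a randomly chosen survivor on the risk score. A minimal sketch (illustrative data only, no tie handling):

```python
import numpy as np

def c_statistic(y_true, y_score):
    """Concordance (c-statistic / ROC area) via the Mann-Whitney rank formula."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    ranks = np.argsort(np.argsort(y_score)) + 1  # 1-based ranks; ties not handled
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    # Rank-sum of events, shifted and scaled to a probability in [0, 1]
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# A risk score that perfectly separates deaths (1) from survivors (0) gives c = 1.0
print(c_statistic([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # 1.0
```

A c-statistic of 0.5 corresponds to a score no better than chance, which is why differences such as 0.778 vs. 0.798 above represent a meaningful gain in discrimination.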
Katsube, Takayuki; Wajima, Toshihiro; Ishibashi, Toru; Arjona Ferreira, Juan Camilo; Echols, Roger
2017-01-01
Cefiderocol, a novel parenteral siderophore cephalosporin, exhibits potent efficacy against most Gram-negative bacteria, including carbapenem-resistant strains. Since cefiderocol is excreted primarily via the kidneys, this study was conducted to develop a population pharmacokinetics (PK) model to determine dose adjustment based on renal function. Population PK models were developed based on data for cefiderocol concentrations in plasma, urine, and dialysate with a nonlinear mixed-effects model approach. Monte-Carlo simulations were conducted to calculate the probability of target attainment (PTA) for the fraction of time during the dosing interval when the free drug concentration in plasma exceeds the MIC (fT>MIC), for an MIC range of 0.25 to 16 μg/ml. For the simulations, dose regimens were selected to compare cefiderocol exposure among groups with different levels of renal function. The developed models well described the PK of cefiderocol for each renal function group. A dose of 2 g every 8 h with 3-h infusions provided >90% PTA for 75% fT>MIC for an MIC of ≤4 μg/ml for patients with normal renal function, while a more frequent dose (every 6 h) could be used for patients with augmented renal function. A reduced dose and/or extended dosing interval was selected for patients with impaired renal function. A supplemental dose immediately after intermittent hemodialysis was proposed for patients requiring intermittent hemodialysis. The PK of cefiderocol could be adequately modeled, and the modeling-and-simulation approach suggested dose regimens based on renal function, ensuring drug exposure with adequate bactericidal effect. Copyright © 2016 American Society for Microbiology.
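The target-attainment logic can be illustrated with a generic one-compartment, steady-state infusion model. Every parameter value, constant, and function name below is an assumed placeholder for illustration, not the published population PK model:

```python
import numpy as np

def pct_time_above_mic(cl, v, dose, tau, tinf, mic, fu=1.0, n_doses=20, dt=0.01):
    """Fraction of a steady-state dosing interval with free concentration > MIC,
    for a one-compartment model with zero-order infusion (illustrative only)."""
    ke = cl / v                                   # elimination rate constant (1/h)
    t = np.arange(0.0, n_doses * tau, dt)
    conc = np.zeros_like(t)
    rate = dose / tinf                            # infusion rate (mg/h)
    for k in range(n_doses):                      # superpose each dose's profile
        td = t - k * tau                          # time since start of k-th dose
        during = (td >= 0) & (td < tinf)
        after = td >= tinf
        conc[during] += rate / cl * (1 - np.exp(-ke * td[during]))
        conc[after] += (rate / cl * (1 - np.exp(-ke * tinf))
                        * np.exp(-ke * (td[after] - tinf)))
    last = t >= (n_doses - 1) * tau               # final (steady-state) interval
    return float(np.mean(fu * conc[last] > mic))

def pta(ft_samples, target=0.75):
    """Probability of target attainment across Monte-Carlo patients."""
    return float(np.mean(np.asarray(ft_samples) >= target))

# 2 g every 8 h as a 3-h infusion, with assumed (not published) CL and V values
ft = pct_time_above_mic(cl=5.0, v=18.0, dose=2000.0, tau=8.0, tinf=3.0, mic=4.0)
```

In a full simulation, CL would be sampled per virtual patient (e.g., scaled by renal function), `pct_time_above_mic` evaluated for each, and `pta` computed per MIC to reproduce the kind of regimen comparison the abstract describes.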
A web-based normative calculator for the uniform data set (UDS) neuropsychological test battery.
Shirk, Steven D; Mitchell, Meghan B; Shaughnessy, Lynn W; Sherman, Janet C; Locascio, Joseph J; Weintraub, Sandra; Atri, Alireza
2011-11-11
With the recent publication of new criteria for the diagnosis of preclinical Alzheimer's disease (AD), there is a need for neuropsychological tools that take premorbid functioning into account in order to detect subtle cognitive decline. Using demographic adjustments is one method for increasing the sensitivity of commonly used measures. We sought to provide a useful online z-score calculator that yields estimates of percentile ranges and adjusts individual performance based on sex, age and/or education for each of the neuropsychological tests of the National Alzheimer's Coordinating Center Uniform Data Set (NACC, UDS). In addition, we aimed to provide an easily accessible method of creating norms for other clinical researchers for their own, unique data sets. Data from 3,268 clinically cognitively-normal older UDS subjects from a cohort reported by Weintraub and colleagues (2009) were included. For all neuropsychological tests, z-scores were estimated by subtracting the raw score from the predicted mean and then dividing this difference score by the root mean squared error term (RMSE) for a given linear regression model. For each neuropsychological test, an estimated z-score was calculated for any raw score based on five different models that adjust for the demographic predictors of SEX, AGE and EDUCATION, either concurrently, individually or without covariates. The interactive online calculator allows the entry of a raw score and provides five corresponding estimated z-scores based on predictions from each corresponding linear regression model. The calculator produces percentile ranks and graphical output. 
An interactive, regression-based, normative score online calculator was created to serve as an additional resource for UDS clinical researchers, especially in guiding interpretation of individual performances that appear to fall in borderline realms and may be of particular utility for operationalizing subtle cognitive impairment present according to the newly proposed criteria for Stage 3 preclinical Alzheimer's disease.
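The calculator's core arithmetic, as described above, is a regression-based z-score: the raw score minus the demographically predicted mean, divided by the model's RMSE. A minimal sketch in which every coefficient is a made-up placeholder rather than a published UDS norm:

```python
from math import erf, sqrt

# Made-up regression coefficients and RMSE -- stand-ins for a fitted linear
# model of one neuropsychological test score on SEX, AGE and EDUCATION.
INTERCEPT, B_SEX, B_AGE, B_EDU = 30.0, 1.2, -0.15, 0.8
RMSE = 4.5

def adjusted_z(raw, sex, age, edu):
    """z = (raw score - regression-predicted mean) / RMSE of the model."""
    predicted = INTERCEPT + B_SEX * sex + B_AGE * age + B_EDU * edu
    return (raw - predicted) / RMSE

def percentile(z):
    """Normal-theory percentile rank implied by the z-score."""
    return 50.0 * (1.0 + erf(z / sqrt(2.0)))

# A 70-year-old woman (sex coded 0) with 16 years of education scoring 25
z = adjusted_z(25, sex=0, age=70, edu=16)   # roughly -1.6, a borderline score
```

The five models mentioned in the abstract would simply correspond to different coefficient sets (all covariates, single covariates, or intercept-only).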
Ly, John; Sathananthan, Vidiya; Griffiths, Thomas; Kanjee, Zahir; Kenny, Avi; Gordon, Nicholas; Basu, Gaurab; Battistoli, Dale; Dorr, Lorenzo; Lorenzen, Breeanna; Thomson, Dana R; Waters, Ami; Moore, Uriah G; Roberts, Ruth; Smith, Wilmot L; Siedner, Mark J; Kraemer, John D
2016-08-01
The Ebola virus disease (EVD) epidemic has threatened access to basic health services through facility closures, resource diversion, and decreased demand due to community fear and distrust. While modeling studies have attempted to estimate the impact of these disruptions, no studies have yet utilized population-based survey data. We conducted a two-stage, cluster-sample household survey in Rivercess County, Liberia, in March-April 2015, which included a maternal and reproductive health module. We constructed a retrospective cohort of births beginning 4 y before the first day of survey administration (beginning March 24, 2011). We then fit logistic regression models to estimate associations between our primary outcome, facility-based delivery (FBD), and time period, defined as the pre-EVD period (March 24, 2011-June 14, 2014) or EVD period (June 15, 2014-April 13, 2015). We fit both univariable and multivariable models, adjusted for known predictors of facility delivery, accounting for clustering using linearized standard errors. To strengthen causal inference, we also conducted stratified analyses to assess changes in FBD by whether respondents believed that health facility attendance was an EVD risk factor. A total of 1,298 women from 941 households completed the survey. Median age at the time of survey was 29 y, and over 80% had a primary education or less. There were 686 births reported in the pre-EVD period and 212 in the EVD period. The unadjusted odds ratio of facility-based delivery in the EVD period was 0.66 (95% confidence interval [CI] 0.48-0.90, p-value = 0.010). Adjustment for potential confounders did not change the observed association, either in the principal model (adjusted odds ratio [AOR] = 0.70, 95%CI 0.50-0.98, p = 0.037) or a fully adjusted model (AOR = 0.69, 95%CI 0.50-0.97, p = 0.033). The association was robust in sensitivity analyses. 
The reduction in FBD during the EVD period was observed among those reporting a belief that health facilities are or may be a source of Ebola transmission (AOR = 0.59, 95%CI 0.36-0.97, p = 0.038), but not those without such a belief (AOR = 0.90, 95%CI 0.59-1.37, p = 0.612). Limitations include the possibility of FBD secular trends coincident with the EVD period, recall errors, and social desirability bias. We detected a 30% decreased odds of FBD after the start of EVD in a rural Liberian county with relatively few cases. Because health facilities never closed in Rivercess County, this estimate may under-approximate the effect seen in the most heavily affected areas. These are the first population-based survey data to show collateral disruptions to facility-based delivery caused by the West African EVD epidemic, and they reinforce the need to consider the full spectrum of implications caused by public health emergencies.
NASA Astrophysics Data System (ADS)
Zhang, Sijin; Austin, Geoff; Sutherland-Stacey, Luke
2014-05-01
Reverse Kessler warm rain processes were implemented within the Weather Research and Forecasting Model (WRF) and coupled with a Newtonian relaxation, or nudging, technique designed to improve quantitative precipitation forecasting (QPF) in New Zealand by making use of observed radar reflectivity and modest computing facilities. One of the reasons for developing such a scheme, rather than using 4D-Var for example, is that radar variational assimilation schemes in general, and 4D-Var in particular, require computational resources beyond the capability of most university groups and indeed some national forecasting centres of small countries like New Zealand. The new scheme adjusts the model water vapor mixing ratio profiles based on observed reflectivity at each time step within an assimilation time window. The scheme can be divided into the following steps: (i) the radar reflectivity is first converted to rain water; (ii) the rain water is used to derive cloud water content according to the reverse Kessler scheme; (iii) the water vapor mixing ratio associated with the cloud water content is then calculated based on the saturation adjustment process; (iv) finally, the adjusted water vapor is nudged into the model and the model background is updated. Thirteen rainfall cases from the summer of 2011/2012 in New Zealand were used to evaluate the new scheme; several forecast scores were calculated and showed that the new scheme was able to improve precipitation forecasts on average up to around 7 hours ahead, depending on the verification threshold.
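The four-step adjustment can be caricatured in a few lines; the Z-R constants, Kessler coefficients, mass-proxy conversion, and nudging weight below are all illustrative placeholders, not the values or exact mappings used in the paper:

```python
import numpy as np

# Illustrative constants -- the operational WRF/Kessler coefficients differ,
# and the paper's reverse mapping is not reproduced here.
A_ZR, B_ZR = 200.0, 1.6          # Marshall-Palmer style Z = a * R**b
K_AUTO, QC0 = 1e-3, 0.5e-3       # assumed autoconversion rate (1/s) and threshold (kg/kg)
DT = 60.0                        # assimilation time step (s)

def nudge_vapor(dbz, qv_model, qv_sat, weight=0.3):
    """Sketch of the four-step reflectivity-to-vapor adjustment."""
    # (i) reflectivity (dBZ) -> rain rate -> crude rain water proxy
    z_lin = 10.0 ** (dbz / 10.0)
    rain_rate = (z_lin / A_ZR) ** (1.0 / B_ZR)   # mm/h
    qr = 1e-4 * rain_rate                        # assumed mass proxy (kg/kg)
    # (ii) invert the Kessler autoconversion to estimate cloud water
    qc = QC0 + qr / (K_AUTO * DT)
    # (iii) saturation adjustment: where cloud is diagnosed, vapor is saturated
    qv_target = np.where(qc > 0, qv_sat, qv_model)
    # (iv) Newtonian nudging of model vapor toward the target
    return qv_model + weight * (qv_target - qv_model)
```

A real implementation would apply this per grid column and time step within the assimilation window, with the nudging weight tapered in time.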
Odonkor, Charles A; Schonberger, Robert B; Dai, Feng; Shelley, Kirk H; Silverman, David G; Barash, Paul G
2013-10-01
The primary aims of this study were to design prediction models based on a functional marker (preoperative gait speed) to predict readiness for home discharge time of 90 mins or less and to identify those at risk for unplanned admissions after elective ambulatory surgery. This prospective observational cohort study evaluated all patients scheduled for elective ambulatory surgery. Home discharge readiness and unplanned admissions were the primary outcomes. Independent variables included preoperative gait speed, heart rate, and total anesthesia time. The relationship between all predictors and each primary outcome was determined in separate multivariable logistic regression models. After adjustment for covariates, gait speed with adjusted odds ratio of 3.71 (95% confidence interval, 1.21-11.26), P = 0.02, was independently associated with early home discharge readiness of 90 mins or less. Importantly, gait speed dichotomized as greater or less than 1 m/sec predicted unplanned admissions, with odds ratio of 0.35 (95% confidence interval, 0.16-0.76, P = 0.008) for those with speeds 1 m/sec or greater in comparison with those with speeds less than 1 m/sec. In a separate model, history of cardiac surgery with adjusted odds ratio of 7.5 (95% confidence interval, 2.34-24.41; P = 0.001) was independently associated with unplanned admissions after elective ambulatory surgery, when other covariates were held constant. This study demonstrates the use of novel prediction models based on gait speed testing to predict early home discharge and to identify those patients at risk for unplanned admissions after elective ambulatory surgery.
NASA Astrophysics Data System (ADS)
Koziel, Slawomir; Bekasiewicz, Adrian
2016-10-01
Multi-objective optimization of antenna structures is a challenging task owing to the high computational cost of evaluating the design objectives as well as the large number of adjustable parameters. Design speed-up can be achieved by means of surrogate-based optimization techniques. In particular, a combination of variable-fidelity electromagnetic (EM) simulations, design space reduction techniques, response surface approximation models and design refinement methods permits identification of the Pareto-optimal set of designs within a reasonable timeframe. Here, a study concerning the scalability of surrogate-assisted multi-objective antenna design is carried out based on a set of benchmark problems, with the dimensionality of the design space ranging from six to 24 and a CPU cost of the EM antenna model from 10 to 20 min per simulation. Numerical results indicate that the computational overhead of the design process increases more or less quadratically with the number of adjustable geometric parameters of the antenna structure at hand, which is a promising result from the point of view of handling even more complex problems.
Creating pharmacy staffing-to-demand models: predictive tools used at two institutions.
Krogh, Paul; Ernster, Jason; Knoer, Scott
2012-09-15
The creation and implementation of data-driven staffing-to-demand models at two institutions are described. Predictive workload tools provide a guideline for pharmacy managers to adjust staffing needs based on hospital volume metrics. At Abbott Northwestern Hospital, management worked with the department's staff and labor management committee to clearly outline the productivity monitoring system and the process for reducing hours. Reference charts describing the process for reducing hours and a form to track the hours of involuntary reductions for each employee were created to further enhance communication, explain the rationale behind the new process, and promote transparency. The University of Minnesota Medical Center-Fairview found a strong correlation between measured pharmacy workload and an adjusted census formula. If the daily census and admission report indicate that the adjusted census will provide enough workload for the fully staffed department, no further action is needed. If the census report indicates the adjusted census is less than the breakeven point, staff members are asked to leave work, either voluntarily or involuntarily. The opposite holds true for days when the adjusted census is higher than the breakeven point, at which time additional staff are required to synchronize worked hours with predicted workload. Successful staffing-to-demand models were implemented in two hospital pharmacies. Financial savings, as indicated by decreased labor costs secondary to reduction of staffed shifts, were approximately $42,000 and $45,500 over a three-month period for Abbott Northwestern Hospital and the University of Minnesota Medical Center-Fairview, respectively. Maintenance of 100% productivity allowed the departments to continue to replace vacant positions and avoid permanent staff reductions.
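The adjusted-census decision rule described above reduces to a simple comparison against a breakeven point. A hedged sketch, in which the breakeven value and hours-per-patient factor are illustrative assumptions rather than either institution's actual figures:

```python
def staffing_action(adjusted_census, breakeven, hours_per_patient=0.5):
    """Sketch of the adjusted-census staffing rule (values illustrative)."""
    gap = adjusted_census - breakeven
    if gap == 0:
        return "no action"
    hours = abs(gap) * hours_per_patient
    if gap < 0:
        # Below breakeven: reduce staffed hours, voluntary reductions first
        return f"reduce staffed hours by ~{hours:.1f} (voluntary first)"
    # Above breakeven: add hours to match predicted workload
    return f"add ~{hours:.1f} staffed hours"

# Adjusted census of 90 against a breakeven of 100 triggers a reduction
print(staffing_action(90, 100))
```

The savings reported in the abstract come from applying such a rule daily, so that staffed hours track the workload forecast rather than a fixed schedule.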
NASA Astrophysics Data System (ADS)
Bora, S. S.; Cotton, F.; Scherbaum, F.; Kuehn, N. M.
2016-12-01
Adjustment of median ground motion prediction equations (GMPEs) from data-rich (host) regions to data-poor (target) regions is one of the major challenges that remains with the current practice of engineering seismology and seismic hazard analysis. Fourier spectral representation of ground motion provides a solution to the problem of adjustment that is physically transparent and consistent with the concepts of linear system theory. It also provides a direct interface for appreciating the physically expected behavior of seismological parameters on ground motion. In the present study, we derive an empirical Fourier model for computing regionally adjustable response spectral ordinates based on random vibration theory (RVT) from shallow crustal earthquakes in active tectonic regions, following the approach of Bora et al. (2014, 2015). For this purpose, we use an expanded NGA-West2 database with M 3.2-7.9 earthquakes at distances ranging from 0 to 300 km. A mixed-effects regression technique is employed to further explore various components of variability. The NGA-West2 database, expanded over a wide magnitude range, provides a better understanding (and constraint) of the source scaling of ground motion. The large global volume of the database also allows investigating regional patterns in the distance-dependent attenuation (i.e., geometrical spreading and inelastic attenuation) of ground motion as well as in the source parameters (e.g., magnitude and stress drop). Furthermore, event-wise variability and its correlation with the stress parameter are investigated. Finally, application of the derived Fourier model in generating adjustable response spectra will be shown.
Rajmokan, M; Morton, A; Marquess, J; Playford, E G; Jones, M
2013-10-01
Making valid comparisons of antimicrobial utilization between hospitals requires risk adjustment for each hospital's case mix. Data on individual patients may be unavailable or difficult to process. Therefore, risk adjustment for antimicrobial usage frequently needs to be based on a hospital's services. This study evaluated such a strategy for hospital antimicrobial utilization. Data were obtained on five broad subclasses of antibiotics [carbapenems, β-lactam/β-lactamase inhibitor combinations (BLBLIs), fluoroquinolones, glycopeptides and third-generation cephalosporins] from the Queensland pharmacy database (MedTrx) for 21 acute public hospitals (2006-11). Eleven clinical services and a variable for hospitals from the tropical region were employed for risk adjustment. Multivariable regression models were used to identify risk and protective services for these antibiotics. Funnel plots were used to display hospitals' antimicrobial utilization. Total inpatient antibiotic utilization for these antibiotics increased from 130.6 defined daily doses (DDDs)/1000 patient-days in 2006 to 155.8 DDDs/1000 patient-days in 2011 (P < 0.0001). Except for third-generation cephalosporins, the average utilization rate was higher for intensive care, renal/nephrology, cardiac, burns/plastic surgery, neurosurgery, transplant and acute spinal services than for the respective reference group (no service). In addition, oncology, high-activity infectious disease and coronary care services were associated with higher utilization of carbapenems, BLBLIs and glycopeptides. Our model predicted antimicrobial utilization rates by hospital services. The funnel plots displayed hospital utilization data after adjustment for variation among the hospitals. However, the methodology needs to be validated in other populations, ideally using a larger group of hospitals.
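A funnel-plot display of the kind used above places control limits around each hospital's model-expected utilization rate, so that hospitals are flagged only when they fall outside the expected random variation for their volume. A hedged sketch assuming simple Poisson variation, which is one common construction but not necessarily the study's exact method:

```python
import math

def funnel_limits(expected_rate, volume_thousand_pd, z=1.96):
    """Approximate 95% control limits around a risk-adjusted utilization rate,
    assuming Poisson variation (illustrative; not the study's exact method).
    Rates are in DDDs per 1,000 patient-days; volume in thousands of patient-days."""
    se = math.sqrt(expected_rate / volume_thousand_pd)
    return expected_rate - z * se, expected_rate + z * se

# A hospital with 5,000 patient-days and a model-expected 150 DDDs/1,000 patient-days
lo, hi = funnel_limits(150.0, 5.0)
```

As volume grows, the limits narrow around the expected rate, which is the characteristic funnel shape: small hospitals need larger deviations before being flagged.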
On-demand Reporting of Risk-adjusted and Smoothed Rates for Quality Profiling in ACS NSQIP.
Cohen, Mark E; Liu, Yaoming; Huffman, Kristopher M; Ko, Clifford Y; Hall, Bruce L
2016-12-01
Surgical quality improvement depends on hospitals having accurate and timely information about comparative performance. Profiling accuracy is improved by risk adjustment and shrinkage adjustment to stabilize estimates. These adjustments are included in ACS NSQIP reports, where hospital odds ratios (OR) are estimated using hierarchical models built on contemporaneous data. However, the timeliness of feedback remains an issue. We describe an alternative, nonhierarchical approach, which yields risk- and shrinkage-adjusted rates. In contrast to our "Traditional" NSQIP method, this approach uses preexisting equations, built on historical data, which permits hospitals to have near immediate access to profiling results. We compared our traditional method to this new "on-demand" approach with respect to outlier determinations, kappa statistics, and correlations between logged OR and standardized rates, for 12 models (4 surgical groups by 3 outcomes). When both methods used the same contemporaneous data, there were similar numbers of hospital outliers and correlations between logged OR and standardized rates were high. However, larger differences were observed when the effect of contemporaneous versus historical data was added to differences in statistical methodology. The on-demand, nonhierarchical approach provides results similar to the traditional hierarchical method and offers immediacy, an "over-time" perspective, application to a broader range of models and data subsets, and reporting of more easily understood rates. Although the nonhierarchical method results are now available "on-demand" in a web-based application, the hierarchical approach has advantages, which support its continued periodic publication as the gold standard for hospital profiling in the program.
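The shrinkage adjustment mentioned above can be caricatured as an empirical-Bayes weighted average of a hospital's observed rate and the overall rate; the constant `k` below is an assumed stand-in for the variance components a hierarchical model would actually estimate, not the ACS NSQIP equations:

```python
def shrunk_rate(events, n, overall_rate, k=50.0):
    """Shrinkage-adjusted rate: weighted average of the hospital's observed
    rate and the overall rate, with weight w = n / (n + k). Here k plays the
    role of a prior sample size (illustrative placeholder)."""
    w = n / (n + k)
    return w * (events / n) + (1 - w) * overall_rate

# A 50-case hospital with 5 events is pulled halfway toward the overall 5% rate
r_small = shrunk_rate(5, 50, 0.05)        # 0.075 rather than the raw 0.10
r_large = shrunk_rate(1000, 10000, 0.05)  # large n: barely shrunk from 0.10
```

This is what stabilizes estimates for low-volume hospitals: their noisy raw rates are pulled toward the group mean, while high-volume hospitals keep rates close to what was observed.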
SfM with MRFs: discrete-continuous optimization for large-scale structure from motion.
Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P
2013-12-01
Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point (VP) estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time.
Plastic deformation treated as material flow through adjustable crystal lattice
NASA Astrophysics Data System (ADS)
Minakowski, P.; Hron, J.; Kratochvíl, J.; Kružík, M.; Málek, J.
2014-08-01
Looking at severe plastic deformation experiments, it seems that crystalline materials at yield behave as a special kind of anisotropic, highly viscous fluid flowing through an adjustable crystal lattice space. High viscosity provides a possibility to describe the flow as a quasi-static process, where inertial and other body forces can be neglected. The flow through the lattice space is restricted to preferred crystallographic planes and directions, causing anisotropy. In the deformation process the lattice is strained and rotated. The proposed model is based on the rate form of the decomposition rule: the velocity gradient consists of the lattice velocity gradient and the sum of the velocity gradients corresponding to the slip rates of individual slip systems. The proposed crystal plasticity model, allowing for large deformations, is treated as a flow-adjusted boundary value problem. As a test example we analyze the plastic flow of a single crystal compressed in a channel die. We propose a three-step finite element discretization algorithm for the numerical solution in the Arbitrary Lagrangian-Eulerian (ALE) configuration.
Capacitance-Based Frequency Adjustment of Micro Piezoelectric Vibration Generator
Mao, Xinhua; He, Qing; Li, Hong; Chu, Dongliang
2014-01-01
Micro piezoelectric vibration generators have wide application in the field of microelectronics. Their natural frequency is fixed once manufactured. However, resonance cannot occur when the natural frequency of a piezoelectric generator does not match the frequency of the vibration source; the output voltage of the generator then declines sharply, and it cannot reliably supply power to electronic devices. In order to make the natural frequency of the generator approach the frequency of the vibration source, capacitance FM technology is adopted in this paper. Different capacitance FM schemes are designed according to the location of the adjustment layer, and the corresponding capacitance FM models have been established. The characteristics and effect of capacitance FM have been simulated with these models. Experimental results show that the natural frequency of the generator varies from 46.5 Hz to 42.4 Hz as the bypass capacitance increases from 0 nF to 30 nF. The natural frequency of a piezoelectric vibration generator can therefore be continuously adjusted by this method. PMID:25133237
Sakhnini, Ali; Saliba, Walid; Schwartz, Naama; Bisharat, Naiel
2017-06-01
Limited information is available about clinical predictors of in-hospital mortality in acute unselected medical admissions. Such information could assist medical decision-making. To develop a clinical model for predicting in-hospital mortality in unselected acute medical admissions and to test the impact of secondary conditions on hospital mortality. This is an analysis of the medical records of patients admitted to internal medicine wards at one university-affiliated hospital. Data obtained from the years 2013 to 2014 were used as a derivation dataset for creating a prediction model, while data from 2015 were used as a validation dataset to test the performance of the model. For each admission, a set of clinical and epidemiological variables was obtained. The main diagnosis at hospitalization was recorded, and all additional or secondary conditions that coexisted at hospital admission or that developed during hospital stay were considered secondary conditions. The derivation and validation datasets included 7268 and 7843 patients, respectively. The in-hospital mortality rate averaged 7.2%. The following variables entered the final model: age, body mass index, mean arterial pressure on admission, prior admission within 3 months, background morbidity of heart failure and active malignancy, and chronic use of statins and antiplatelet agents. The c-statistic (ROC-AUC) of the prediction model was 80.5% without adjustment for main or secondary conditions, 84.5% with adjustment for the main diagnosis, and 89.5% with adjustment for the main diagnosis and secondary conditions. The accuracy of the predictive model reached 81% on the validation dataset. A prediction model based on clinical data with adjustment for secondary conditions exhibited a high degree of prediction accuracy. We provide a proof of concept that there is an added value for incorporating secondary conditions while predicting probabilities of in-hospital mortality.
Further improvement of the model performance and validation in other cohorts are needed to aid hospitalists in predicting health outcomes.
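The c-statistic (ROC-AUC) this abstract reports is the probability that a randomly chosen in-hospital death received a higher predicted risk than a randomly chosen survivor. A minimal pure-Python sketch on made-up risk scores (illustrative only; not the study's model or data):

```python
def c_statistic(y_true, y_score):
    """Concordance (ROC-AUC): fraction of (event, non-event) pairs in
    which the event has the higher predicted risk; ties count as 0.5."""
    events = [s for y, s in zip(y_true, y_score) if y == 1]
    nonevents = [s for y, s in zip(y_true, y_score) if y == 0]
    concordant = sum(1.0 if e > n else 0.5 if e == n else 0.0
                     for e in events for n in nonevents)
    return concordant / (len(events) * len(nonevents))

# Hypothetical admissions: 1 = died in hospital; scores = predicted risk.
deaths = [1, 0, 1, 0, 0]
risk = [0.9, 0.3, 0.6, 0.6, 0.1]
auc = c_statistic(deaths, risk)   # 5.5 concordant of 6 pairs
```

A c-statistic of 0.5 is chance discrimination; the 80.5%-89.5% values above correspond to 0.805-0.895 on this scale.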
Computational modeling of cardiovascular response to orthostatic stress
NASA Technical Reports Server (NTRS)
Heldt, Thomas; Shim, Eun B.; Kamm, Roger D.; Mark, Roger G.
2002-01-01
The objective of this study is to develop a model of the cardiovascular system capable of simulating the short-term (< or = 5 min) transient and steady-state hemodynamic responses to head-up tilt and lower body negative pressure. The model consists of a closed-loop lumped-parameter representation of the circulation connected to set-point models of the arterial and cardiopulmonary baroreflexes. Model parameters are largely based on literature values. Model verification was performed by comparing the simulation output under baseline conditions and at different levels of orthostatic stress to sets of population-averaged hemodynamic data reported in the literature. On the basis of experimental evidence, we adjusted some model parameters to simulate experimental data. Orthostatic stress simulations are not statistically different from experimental data (two-sided test of significance with Bonferroni adjustment for multiple comparisons). Transient response characteristics of heart rate to tilt also compare well with reported data. A case study is presented on how the model is intended to be used in the future to investigate the effects of post-spaceflight orthostatic intolerance.
An Improved Dynamic Model for the Respiratory Response to Exercise
Serna, Leidy Y.; Mañanas, Miguel A.; Hernández, Alher M.; Rabinovich, Roberto A.
2018-01-01
Respiratory system modeling has been extensively studied in steady-state conditions to simulate sleep disorders, to predict its behavior under ventilatory diseases or stimuli, and to simulate its interaction with mechanical ventilation. Nevertheless, studies focused on the instantaneous response are limited, which restricts their application in clinical practice. The aim of this study is twofold: first, to analyze both the dynamic and static responses of two known respiratory models under exercise stimuli, using an incremental exercise stimulus sequence (to analyze the model responses when step inputs are applied) and experimental data (to assess the prediction capability of each model); second, to propose changes in the models' structures to improve their transient and stationary responses. The versatility of the resulting model relative to the other two is shown by its ability to simulate ventilatory stimuli, like exercise, with proper regulation of the arterial blood gases, suitable time constants, and a better fit to experimental data. The proposed model adjusts the breathing pattern every respiratory cycle using an optimization criterion based on minimization of the work of breathing through regulation of respiratory frequency. PMID:29467674
Domiciano, D S; Figueiredo, C P; Lopes, J B; Caparbo, V F; Takayama, L; Menezes, P R; Bonfa, E; Pereira, R M R
2013-02-01
The criteria most used for the definition of sarcopenia, based on the ratio of appendicular skeletal muscle mass (ASM) to the square of height (h²), underestimate prevalence in overweight/obese people, whereas other criteria consider ASM adjusted for total fat mass. We have shown that ASM adjusted for fat seems to be more appropriate for sarcopenia diagnosis. Since the prevalence of overweight and obesity is a growing public health issue, the aim of this study was to evaluate the prevalence and risk factors associated with sarcopenia, based on these two criteria, among older women. Six hundred eleven community-dwelling women were evaluated by a specific questionnaire including clinical data. Body composition and bone mineral density were evaluated by dual X-ray absorptiometry. Logistic regression models were used to identify factors independently related to sarcopenia by the ASM/h² and ASM adjusted for total fat mass criteria. The prevalence of overweight/obesity was high (74.3%). The frequency of sarcopenia was lower using the criteria of ASM/h² (3.7%) than ASM adjusted for fat (19.9%) (P < 0.0001). We also note that less than 5% (1/23) of sarcopenic women, according to ASM/h², had overweight/obesity, whereas 60% (74/122) of sarcopenic women by ASM adjusted for fat had this complication. Using ASM/h², the associated factors observed in regression models were femoral neck T-score (OR = 1.90; 95% CI 1.06-3.41; P = 0.03) and current alcohol intake (OR = 4.13; 95% CI 1.18-14.45; P = 0.03). In contrast, we identified that creatinine (OR = 0.21; 95% CI 0.07-0.63; P = 0.005) and White race (OR = 1.81; 95% CI 1.15-2.84; P = 0.01) showed a significant association with sarcopenia using ASM adjusted for fat. In women with overweight/obesity, ASM adjusted for fat seems to be the more appropriate criterion for sarcopenia diagnosis. 
This finding has relevant public health implications, considering the high prevalence of overweight/obesity in older women.
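The odds ratios in this abstract come from logistic regression: an OR and its Wald 95% confidence interval are exponentiated coefficients. A stdlib-only sketch; the beta and standard error below are hypothetical values chosen to roughly reproduce the reported OR of 1.81 (95% CI 1.15-2.84) for White race, not numbers from the paper:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Exponentiate a logistic-regression coefficient and its Wald
    interval into an odds ratio with a 95% confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Assumed coefficient and SE (hypothetical, for illustration):
or_, lo, hi = odds_ratio_ci(0.593, 0.23)
```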
Melting of genomic DNA: Predictive modeling by nonlinear lattice dynamics
NASA Astrophysics Data System (ADS)
Theodorakopoulos, Nikos
2010-08-01
The melting behavior of long, heterogeneous DNA chains is examined within the framework of the nonlinear lattice dynamics based Peyrard-Bishop-Dauxois (PBD) model. Data for the pBR322 plasmid and the complete T7 phage have been used to obtain model fits and determine parameter dependence on salt content. Melting curves predicted for the complete fd phage and the Y1 and Y2 fragments of the ϕX174 phage without any adjustable parameters are in good agreement with experiment. The calculated probabilities for single base-pair opening are consistent with values obtained from imino proton exchange experiments.
Modeling and simulating industrial land-use evolution in Shanghai, China
NASA Astrophysics Data System (ADS)
Qiu, Rongxu; Xu, Wei; Zhang, John; Staenz, Karl
2018-01-01
This study proposes a cellular automata-based Industrial and Residential Land Use Competition Model to simulate the dynamic spatial transformation of industrial land use in Shanghai, China. In the proposed model, land development activities in a city are delineated as competitions among different land-use types. The Hedonic Land Pricing Model is adopted to implement the competition framework. To improve simulation results, the Land Price Agglomeration Model was devised to simulate and adjust classic land price theory. A new evolutionary algorithm-based parameter estimation method was devised in place of traditional methods. Simulation results show that the proposed model closely resembles actual land transformation patterns and the model can not only simulate land development, but also redevelopment processes in metropolitan areas.
An evaluation of bias in propensity score-adjusted non-linear regression models.
Wan, Fei; Mitra, Nandita
2018-03-01
Propensity score methods are commonly used to adjust for observed confounding when estimating the conditional treatment effect in observational studies. One popular method, covariate adjustment of the propensity score in a regression model, has been empirically shown to be biased in non-linear models. However, no compelling underlying theoretical reason has been presented. We propose a new framework to investigate bias and consistency of propensity score-adjusted treatment effects in non-linear models that uses a simple geometric approach to forge a link between the consistency of the propensity score estimator and the collapsibility of non-linear models. Under this framework, we demonstrate that adjustment of the propensity score in an outcome model results in the decomposition of observed covariates into the propensity score and a remainder term. Omission of this remainder term from a non-collapsible regression model leads to biased estimates of the conditional odds ratio and conditional hazard ratio, but not for the conditional rate ratio. We further show, via simulation studies, that the bias in these propensity score-adjusted estimators increases with larger treatment effect size, larger covariate effects, and increasing dissimilarity between the coefficients of the covariates in the treatment model versus the outcome model.
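The non-collapsibility the authors analyze can be seen in a toy calculation: with a balanced binary covariate that is independent of treatment (so there is no confounding), the marginal odds ratio is still attenuated toward 1 relative to the conditional one. A stdlib-only sketch under assumed coefficients:

```python
import math

def expit(z):
    return 1.0 / (1.0 + math.exp(-z))

# Assumed conditional model: logit P(Y=1 | T, X) = -1 + 1.0*T + 2.0*X,
# with X ~ Bernoulli(0.5) independent of treatment T (not a confounder).
b0, bT, bX, pX = -1.0, 1.0, 2.0, 0.5

def marginal_risk(t):
    # Average the conditional risk over the distribution of X.
    return (1 - pX) * expit(b0 + bT * t) + pX * expit(b0 + bT * t + bX)

def odds(p):
    return p / (1 - p)

conditional_or = math.exp(bT)                                  # e ~ 2.718
marginal_or = odds(marginal_risk(1)) / odds(marginal_risk(0))  # ~ 2.230
```

The gap between the two is not bias from confounding; it is the non-collapsibility of the odds ratio itself, which is why omitting the remainder term from a non-collapsible model distorts the conditional estimate.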
Collibee, Charlene; Furman, Wyndol
2015-01-01
The present study assessed a developmental task theory of romantic relationships by examining associations between romantic relationship qualities and adjustment across 9 years, using a community-based sample of 100 male and 100 female participants (M age at Wave 1 = 15.83) in a Western U.S. city. Using multilevel modeling, the study examined the moderating effect of age on links between romantic relationship qualities and adjustment. Consistent with developmental task theory, high romantic quality was more strongly associated with internalizing symptoms and dating satisfaction during young adulthood than during adolescence. Romantic relationship qualities were also associated with externalizing symptoms and substance use, but the degree of association was consistent across ages. The findings underscore the significance of romantic relationship qualities across development. PMID:26283151
Boiger, Michael; Mesquita, Batja; Tsai, Annie Y; Markus, Hazel
2012-01-01
Emotions are for action, but action styles in emotional episodes may vary across cultural contexts. Based on culturally different models of agency, we expected that those who engage in European-American contexts will use more influence in emotional situations, while those who engage in East-Asian contexts will use more adjustment. European-American (N=60) and Asian-American (N=44) college students reported their action style during emotional episodes four times a day during a week. Asian Americans adjusted more than European Americans, whereas both used influence to a similar extent. These cultural differences in action style varied across types of emotion experienced. Moreover, influencing was associated with life satisfaction for European Americans, but not for Asian Americans.
Low-level Environmental Metals and Metalloids and Incident Pregnancy Loss
Buck Louis, Germaine M.; Smarr, Melissa M.; Sundaram, Rajeshwari; Steuerwald, Amy J.; Sapra, Katherine J.; Lu, Zhaohui; Parsons, Patrick J.
2017-01-01
Environmental exposure to metals and metalloids is associated with pregnancy loss in some but not all studies. We assessed arsenic, cadmium, mercury, and lead concentrations in 501 couples upon trying for pregnancy and followed them throughout pregnancy to estimate the risk of incident pregnancy loss. Using Cox proportional hazard models, we estimated hazard ratios (HR) and 95% confidence intervals (CIs) for pregnancy loss after covariate adjustment for each partner modeled individually then we jointly modeled both partners’ concentrations. Incidence of pregnancy loss was 28%. In individual partner models, the highest adjusted HRs were observed for female and male blood cadmium (HR=1.08; CI 0.81, 1.44; HR=1.09; 95% CI 0.84, 1.41, respectively). In couple based models, neither partner’s blood cadmium concentrations were associated with loss (HR=1.01; 95% CI 0.75, 1.37; HR=0.92; CI 0.68, 1.25, respectively). We observed no evidence of a significant relation between metal(loids) at environmentally relevant concentrations and pregnancy loss. PMID:28163209
Stimulating Scientific Reasoning with Drawing-Based Modeling
NASA Astrophysics Data System (ADS)
Heijnes, Dewi; van Joolingen, Wouter; Leenaars, Frank
2018-02-01
We investigate the way students' reasoning about evolution can be supported by drawing-based modeling. We modified the drawing-based modeling tool SimSketch to allow for modeling evolutionary processes. In three iterations of development and testing, students in lower secondary education worked on creating an evolutionary model. After each iteration, the user interface and instructions were adjusted based on students' remarks and the teacher's observations. Students' conversations were analyzed on reasoning complexity as a measurement of the efficacy of the modeling tool and the instructions. These findings were also used to compose a set of recommendations for teachers and curriculum designers for using and constructing models in the classroom. Our findings suggest that to stimulate scientific reasoning in students working with a drawing-based modeling tool, instruction about the tool and the domain should be integrated. In creating models, a sufficient level of scaffolding is necessary. Without appropriate scaffolds, students are not able to create the model. With scaffolding that is too high, students may show reasoning that incorrectly assigns external causes to behavior in the model.
Use of Prolonged Travel to Improve Pediatric Risk-Adjustment Models
Lorch, Scott A; Silber, Jeffrey H; Even-Shoshan, Orit; Millman, Andrea
2009-01-01
Objective To determine whether travel variables could explain previously reported differences in lengths of stay (LOS), readmission, or death at children's hospitals versus other hospital types. Data Source Hospital discharge data from Pennsylvania between 1996 and 1998. Study Design A population cohort of children aged 1–17 years with one of 19 common pediatric conditions was created (N=51,855). Regression models were constructed to determine differences in LOS, readmission, or death between children's hospitals and other types of hospitals after adding five types of additional illness-severity variables to a traditional risk-adjustment model. Principal Findings With the traditional risk-adjustment model, children traveling longer to children's or rural hospitals had longer adjusted LOS and higher readmission rates. Inclusion of either a geocoded travel-time variable or a nongeocoded travel-distance variable provided the largest reduction in adjusted LOS, adjusted readmission rates, and adjusted mortality rates for children's hospitals and rural hospitals compared with other types of hospitals. Conclusions Adding a travel variable to traditional severity-adjustment models may improve the assessment of an individual hospital's pediatric care by reducing systematic differences between different types of hospitals. PMID:19207591
Satellite-based Flood Modeling Using TRMM-based Rainfall Products.
Harris, Amanda; Rahman, Sayma; Hossain, Faisal; Yarborough, Lance; Bagtzoglou, Amvrossios C; Easson, Greg
2007-12-20
Increasingly available and a virtually uninterrupted supply of satellite-estimated rainfall data is gradually becoming a cost-effective source of input for flood prediction under a variety of circumstances. However, most real-time and quasi-global satellite rainfall products are currently available at spatial scales ranging from 0.25° to 0.50° and, hence, are considered somewhat coarse for dynamic hydrologic modeling of basin-scale flood events. This study assesses the question: what are the hydrologic implications of uncertainty of satellite rainfall data at the coarse scale? We investigated this question on the 970 km² Upper Cumberland river basin of Kentucky. The satellite rainfall product assessed was NASA's Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) product called 3B41RT, which is available in pseudo real time with a latency of 6-10 hours. We observed that bias adjustment of satellite rainfall data can improve application in flood prediction to some extent, with the trade-off of more false alarms in peak flow. However, a more rational and regime-based adjustment procedure needs to be identified before the use of satellite data can be institutionalized among flood modelers.
The Impact of a Telephone-Based Chronic Disease Management Program on Medical Expenditures.
Avery, George; Cook, David; Talens, Sheila
2016-06-01
The impact of a payer-provided telephone-based chronic disease management program on medical expenditures was evaluated using claims data from 126,245 members in employer self-insured health plans (16,224 with a chronic disease in a group enrolled in the self-management program, 13,509 with a chronic disease in a group not participating in the program). A random effects regression model controlling for retrospective risk, age, sex, and diagnosis with a chronic disease was used to determine the impact of program participation on market-adjusted health care expenditures. Further confirmation of results was obtained by an ordinary least squares model comparing market- and risk-adjusted costs to the length of participation in the program. Participation in the program is associated with an average annual savings of $1157.91 per enrolled member in health care expenditures. Savings increase with the length of participation in the program. The results support the use of telephone-based patient self-management of chronic disease as a cost-effective means to reduce health care expenditures in the working-age population. (Population Health Management 2016;19:156-162).
NASA Astrophysics Data System (ADS)
Sun, Chao; Zhang, Chunran; Gu, Xinfeng; Liu, Bin
2017-10-01
Constraints of the optimization objective often cannot be met when predictive control is applied to industrial production processes; the online predictive controller then fails to find a feasible, let alone globally optimal, solution. To solve this problem, based on a Back Propagation-Auto Regressive with exogenous inputs (BP-ARX) combined control model, nonlinear programming is used to analyze the feasibility of constrained predictive control, a feasibility decision theorem for the optimization objective is proposed, and a method for solving the soft-constraint slack variables is given for the case where the optimization objective is infeasible. On this basis, for interval control requirements on the controlled variables, the solved slack variables are introduced and an adaptive weighted interval predictive control algorithm is proposed, achieving adaptive regulation of the optimization objective, automatic adjustment of the infeasible interval range, expansion of the feasible region, and feasibility of the interval optimization objective. Finally, the feasibility and effectiveness of the algorithm are validated through comparative simulation experiments.
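The slack-variable idea in this abstract can be illustrated with a toy plant: when the output target is unreachable, a penalized slack eps relaxes the bound just enough to restore feasibility. A stdlib-only sketch in which a grid search stands in for the QP solver; the plant, bounds, and weights are all assumed for illustration:

```python
# Hard constraint y = 2u in [5, 6] with actuator limit u in [0, 2] is
# infeasible (reachable y is only [0, 4]).  Softening the bound with a
# slack variable eps >= 0 restores feasibility:
#   minimize  rho*eps**2 + (u - u_ref)**2
#   s.t.      5 - eps <= 2*u <= 6 + eps,   0 <= u <= 2
def soft_constrained(u_ref=1.0, rho=100.0, n=2001):
    best = None
    for i in range(n):
        u = 2.0 * i / (n - 1)                 # scan the actuator range
        y = 2.0 * u
        eps = max(0.0, 5.0 - y, y - 6.0)      # smallest feasible slack
        cost = rho * eps**2 + (u - u_ref)**2
        if best is None or cost < best[0]:
            best = (cost, u, eps)
    return best

cost, u, eps = soft_constrained()   # u pushed to its limit, eps = 1
```

With a large rho the controller violates the softened bound as little as possible (here eps = 1, since y can reach at most 4 against the lower bound of 5), which is the role the solved slack variables play in the interval algorithm above.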
Epidemic spreading on random surfer networks with optimal interaction radius
NASA Astrophysics Data System (ADS)
Feng, Yun; Ding, Li; Hu, Ping
2018-03-01
In this paper, the optimal control problem of epidemic spreading on random surfer heterogeneous networks is considered. An epidemic spreading model is established according to the classification of individuals' initial interaction radii. Then, a control strategy is proposed based on adjusting individuals' interaction radii. The global stability of the disease-free and endemic equilibria of the model is investigated. We prove that an optimal solution exists for the optimal control problem, and its explicit form is presented. Numerical simulations are conducted to verify the correctness of the theoretical results. The optimal control strategy is shown to be effective in minimizing the density of infected individuals and the cost associated with the adjustment of interaction radii.
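The disease-free vs. endemic dichotomy described above already appears in the mean-field SIS equation, where the contact rate beta stands in for the effect of the interaction radius (a smaller radius lowers beta). A forward-Euler sketch with assumed parameters, not the paper's network model:

```python
# Mean-field SIS dynamics: di/dt = beta*i*(1 - i) - gamma*i.
# For beta > gamma the infection settles at the endemic level
# i* = 1 - gamma/beta; for beta <= gamma it dies out (i* = 0).
def sis_steady_state(beta, gamma, i0=0.01, dt=0.01, steps=200_000):
    i = i0
    for _ in range(steps):
        i += dt * (beta * i * (1 - i) - gamma * i)
    return i

endemic = sis_steady_state(0.5, 0.2)   # 1 - 0.2/0.5 = 0.6
extinct = sis_steady_state(0.1, 0.2)   # below threshold: dies out
```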
Near Real-Time Event Detection & Prediction Using Intelligent Software Agents
2006-03-01
value was 0.06743. Multiple autoregressive integrated moving average (ARIMA) models were then built to see if the raw data, differenced data, or...slight improvement. The best adjusted r^2 value was found to be 0.1814. Successful results were not expected from linear or ARIMA-based modelling...appear, 2005. [63] Mora-Lopez, L., Mora, J., Morales-Bueno, R., et al. Modelling time series of climatic parameters with probabilistic finite
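The adjusted r^2 values quoted in this fragment penalize r^2 for the number of regressors. As a minimal stdlib illustration of that statistic (an AR(1) least-squares fit on a synthetic series, not the report's ARIMA models):

```python
def ar1_adjusted_r2(y):
    """OLS fit of y[t] = a + b*y[t-1] + e; returns the adjusted R^2,
    i.e. R^2 penalized for the single lag regressor."""
    x, z = y[:-1], y[1:]
    n = len(x)
    mx, mz = sum(x) / n, sum(z) / n
    b = (sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z))
         / sum((xi - mx) ** 2 for xi in x))
    a = mz - b * mx
    ss_res = sum((zi - (a + b * xi)) ** 2 for xi, zi in zip(x, z))
    ss_tot = sum((zi - mz) ** 2 for zi in z)
    r2 = 1.0 - ss_res / ss_tot
    k = 1                          # one regressor: the lagged value
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

# A perfectly autoregressive series gives adjusted r^2 = 1.0:
perfect = ar1_adjusted_r2([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
```

Low values such as the 0.1814 quoted above indicate that the lagged structure explains little of the variance after the penalty.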
When product designers use perceptually based color tools
NASA Astrophysics Data System (ADS)
Bender, Walter R.
1998-07-01
Palette synthesis and analysis tools have been built based upon a model of color experience. This model adjusts formal compositional elements such as hue, value, chroma, and their contrasts, as well as size and proportion. Clothing and household product designers were given these tools to give guidance to their selection of seasonal palettes for use in production of the private-label merchandise of a large retail chain. The designers chose base palettes. Accents to these palettes were generated with and without the aid of the color tools. These palettes are compared by using perceptual metrics and interviews. The results are presented.
When product designers use perceptually based color tools
NASA Astrophysics Data System (ADS)
Bender, Walter R.
2001-01-01
Palette synthesis and analysis tools have been built based upon a model of color experience. This model adjusts formal compositional elements such as hue, value, chroma, and their contrasts, as well as size and proportion. Clothing and household product designers were given these tools to guide their selection of seasonal palettes in the production of the private-label merchandise in a large retail chain. The designers chose base palettes. Accents to these palettes were generated with and without the aid of the color tools. These palettes are compared by using perceptual metrics and interviews. The results are presented.
Vrabel, Joseph; Teeple, Andrew; Kress, Wade H.
2009-01-01
With increasing demands for reliable water supplies and availability estimates, groundwater flow models often are developed to enhance understanding of surface-water and groundwater systems. Specific hydraulic variables must be known or calibrated for the groundwater-flow model to accurately simulate current or future conditions. Surface geophysical surveys, along with selected test-hole information, can provide an integrated framework for quantifying hydrogeologic conditions within a defined area. In 2004, the U.S. Geological Survey, in cooperation with the North Platte Natural Resources District, performed a surface geophysical survey using a capacitively coupled resistivity technique to map the lithology within the top 8 meters of the near-surface for 110 kilometers of the Interstate and Tri-State Canals in western Nebraska and eastern Wyoming. Assuming that leakage between the surface-water and groundwater systems is affected primarily by the sediment directly underlying the canal bed, leakage potential was estimated from the simple vertical mean of inverse-model resistivity values for depth levels whose layer thickness increases geometrically with depth, an approach that biased the mean-resistivity values towards the surface. This method generally produced reliable results, but an improved analysis method was needed to account for situations where confining units, composed of less permeable material, underlie units with greater permeability. In this report, prepared by the U.S. Geological Survey in cooperation with the North Platte Natural Resources District, the authors use geostatistical analysis to develop the minimum-unadjusted method to compute a relative leakage potential based on the minimum resistivity value in a vertical column of the resistivity model. The minimum-unadjusted method considers the effects of homogeneous confining units. 
The minimum-adjusted method also is developed to incorporate the effect of local lithologic heterogeneity on water transmission. Seven sites with differing geologic contexts were selected following review of the capacitively coupled resistivity data collected in 2004. A reevaluation of these sites using the mean, minimum-unadjusted, and minimum-adjusted methods was performed to compare the different approaches for estimating leakage potential. Five of the seven sites contained underlying confining units, for which the minimum-unadjusted and minimum-adjusted methods accounted for the confining-unit effect. Estimates of overall leakage potential were lower for the minimum-unadjusted and minimum-adjusted methods than those estimated by the mean method. For most sites, the local heterogeneity adjustment procedure of the minimum-adjusted method resulted in slightly larger overall leakage-potential estimates. In contrast to the mean method, the two minimum-based methods allowed the least permeable areas to control the overall vertical permeability of the subsurface. The minimum-adjusted method refined leakage-potential estimation by additionally including local lithologic heterogeneity effects.
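The contrast between the mean and minimum-based methods reduces to how one vertical column of inverse-model resistivity values is summarized, with low resistivity serving as a proxy for low-permeability confining material. A sketch with hypothetical layer values; the report's local-heterogeneity adjustment (the minimum-adjusted method) is omitted here:

```python
def leakage_potential(column):
    """Summarize one vertical column of resistivity values (ohm-m)
    two ways: the simple vertical mean, and the minimum value, which
    lets the least permeable layer control the estimate."""
    mean_method = sum(column) / len(column)
    minimum_method = min(column)
    return mean_method, minimum_method

# Sandy layers with one thin low-resistivity confining unit (made up):
mean_lp, min_lp = leakage_potential([120.0, 110.0, 15.0, 130.0])
```

The mean (93.75) largely hides the confining unit, while the minimum (15.0) lets it control the relative leakage-potential estimate, mirroring why the minimum-based methods yielded lower estimates at the five sites with underlying confining units.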
Quantitative Precipitation Nowcasting: A Lagrangian Pixel-Based Approach
2012-01-01
Sorooshian, T. Bellerby, and G. Huffman, 2010: REFAME: Rain Estimation Using Forward-Adjusted Advection of Microwave Estimates. J. of Hydromet., 11...precipitation forecasting using information from radar and Numerical Weather Prediction models. J. of Hydromet., 4(6):1168-1180. Germann, U., and I
Physician Requirements-1990. For Cardiology.
ERIC Educational Resources Information Center
Tracy, Octavious; Birchette-Pierce, Cheryl
Professional requirements for physicians specializing in cardiology were estimated to assist policymakers in developing guidelines for graduate medical education. The determination of physician requirements was based on an adjusted needs rather than a demand or utilization model. For each illness, manpower requirements were modified by the…
Yang, Fan; Chen, Xinyin; Wang, Li
2014-01-01
The primary purpose of the study was to examine the moderating effects of academic achievement on relations between aggressive behavior and social and psychological adjustment in Chinese children. A sample of children (N = 1,171; 591 boys, 580 girls; initial M age = 9 years) in China participated in the study. Two waves of longitudinal data were collected in Grades 3 and 4 from multiple sources including peer nominations, teacher ratings, self-reports, and school records. The results indicated that the main effects of aggression on adjustment were more evident than those of adjustment on aggression. Moreover, aggression was negatively associated with later leadership status and positively associated with later peer victimization, mainly for high-achieving children. The results suggested that consistent with the resource-potentiating model, academic achievement served to enhance the positive development of children with low aggression. On the other hand, although the findings indicated fewer main effects of adjustment on aggression, loneliness, depression, and perceived social incompetence positively predicted later aggression for low-achieving, but not high-achieving, children, which suggested that consistent with the stress-buffering model, academic achievement protected children with psychological difficulties from developing aggressive behavior. The results indicate that academic achievement is involved in behavioral and socioemotional development in different manners in Chinese children. Researchers should consider an integrative approach based on children's behavioral, psychological, and academic functions in designing prevention and intervention programs.
A risk adjustment approach to estimating the burden of skin disease in the United States.
Lim, Henry W; Collins, Scott A B; Resneck, Jack S; Bolognia, Jean; Hodge, Julie A; Rohrer, Thomas A; Van Beek, Marta J; Margolis, David J; Sober, Arthur J; Weinstock, Martin A; Nerenz, David R; Begolka, Wendy Smith; Moyano, Jose V
2018-01-01
Direct insurance claims tabulation and risk adjustment statistical methods can be used to estimate health care costs associated with various diseases. In this third manuscript derived from the new national Burden of Skin Disease Report from the American Academy of Dermatology, a risk adjustment method that was based on modeling the average annual costs of individuals with or without specific diseases, and specifically tailored for 24 skin disease categories, was used to estimate the economic burden of skin disease. The results were compared with the claims tabulation method used in the first 2 parts of this project. The risk adjustment method estimated the direct health care costs of skin diseases to be $46 billion in 2013, approximately $15 billion less than estimates using claims tabulation. For individual skin diseases, the risk adjustment cost estimates ranged from 11% to 297% of those obtained using claims tabulation for the 10 most costly skin disease categories. Although either method may be used for purposes of estimating the costs of skin disease, the choice of method will affect the end result. These findings serve as an important reference for future discussions about the method chosen in health care payment models to estimate both the cost of skin disease and the potential cost impact of care changes. Copyright © 2017 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Denham, Susanne A.; Bassett, Hideko H.; Zinsser, Katherine; Wyatt, Todd M.
2014-01-01
Starting on positive trajectories at school entry is important for children's later academic success. Using partial least squares, we sought to specify interrelations among all theory-based components of social-emotional learning (SEL), and their ability to predict later classroom adjustment and academic readiness in a modelling context.…
Impact of obesity and knee osteoarthritis on morbidity and mortality in older Americans.
Losina, Elena; Walensky, Rochelle P; Reichmann, William M; Holt, Holly L; Gerlovin, Hanna; Solomon, Daniel H; Jordan, Joanne M; Hunter, David J; Suter, Lisa G; Weinstein, Alexander M; Paltiel, A David; Katz, Jeffrey N
2011-02-15
Obesity and knee osteoarthritis are among the most frequent chronic conditions affecting Americans aged 50 to 84 years. To estimate quality-adjusted life-years lost due to obesity and knee osteoarthritis and health benefits of reducing obesity prevalence to levels observed a decade ago. The U.S. Census and obesity data from national data sources were combined with estimated prevalence of symptomatic knee osteoarthritis to assign persons aged 50 to 84 years to 4 subpopulations: nonobese without knee osteoarthritis (reference group), nonobese with knee osteoarthritis, obese without knee osteoarthritis, and obese with knee osteoarthritis. The Osteoarthritis Policy Model, a computer simulation model of knee osteoarthritis and obesity, was used to estimate quality-adjusted life-year losses due to knee osteoarthritis and obesity in comparison with the reference group. United States. U.S. population aged 50 to 84 years. Quality-adjusted life-years lost owing to knee osteoarthritis and obesity. Estimated total losses of per-person quality-adjusted life-years ranged from 1.857 in nonobese persons with knee osteoarthritis to 3.501 for persons affected by both conditions, resulting in a total of 86.0 million quality-adjusted life-years lost due to obesity, knee osteoarthritis, or both. Quality-adjusted life-years lost due to knee osteoarthritis and/or obesity represent 10% to 25% of the remaining quality-adjusted survival of persons aged 50 to 84 years. Hispanic and black women had disproportionately high losses. Model findings suggested that reversing obesity prevalence to levels seen 10 years ago would avert 178,071 cases of coronary heart disease, 889,872 cases of diabetes, and 111,206 total knee replacements. Such a reduction in obesity would increase the quantity of life by 6,318,030 years and improve life expectancy by 7,812,120 quality-adjusted years in U.S. adults aged 50 to 84 years. 
Comorbidity incidences were derived from prevalence estimates on the basis of life expectancy of the general population, potentially resulting in conservative underestimates. Calibration analyses were conducted to ensure comparability of model-based projections and data from external sources. The number of quality-adjusted life-years lost owing to knee osteoarthritis and obesity seems to be substantial, with black and Hispanic women experiencing disproportionate losses. Reducing mean body mass index to the levels observed a decade ago in this population would yield substantial health benefits. The National Institutes of Health and the Arthritis Foundation.
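The per-person losses reported above scale to population totals by simple weighting. A minimal sketch of that arithmetic, with hypothetical subpopulation counts (the study's actual counts come from U.S. Census data and are not given in the abstract):

```python
# Hypothetical subpopulation sizes (millions); per-person QALY losses are the
# values reported in the abstract.
subpops = {
    "nonobese with knee OA": {"count_millions": 8.0, "qaly_loss_per_person": 1.857},
    "obese with knee OA":    {"count_millions": 6.0, "qaly_loss_per_person": 3.501},
}

# Total QALYs lost = sum over subpopulations of (count x per-person loss).
total_qalys_lost = sum(
    g["count_millions"] * g["qaly_loss_per_person"] for g in subpops.values()
)
print(f"Total QALYs lost (millions): {total_qalys_lost:.3f}")
```

With the real subpopulation counts, the same weighting yields the 86.0 million total reported in the study.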
Ramirez, Adriana G; Tracci, Margaret C; Stukenborg, George J; Turrentine, Florence E; Kozower, Benjamin D; Jones, R Scott
2016-01-01
Background The Hospital Value-Based Purchasing Program measures value of care provided by participating Medicare hospitals while creating financial incentives for quality improvement and fostering increased transparency. Limited information is available comparing hospital performance across healthcare business models. Study Design 2015 hospital Value-Based Purchasing Program results were used to examine hospital performance by business model. General linear modeling assessed differences in mean total performance score and hospital case mix index, as well as differences after adjustment for differences in hospital case mix index. Results Of 3089 hospitals with Total Performance Scores (TPS), categories of representative healthcare business models included 104 Physician-owned Surgical Hospitals (POSH), 111 University HealthSystem Consortium (UHC), 14 US News & World Report Honor Roll (USNWR) hospitals, 33 Kaiser Permanente, and 124 Pioneer Accountable Care Organization affiliated hospitals. Estimated mean TPS for POSH (64.4; 95% CI 61.83-66.38) and Kaiser (60.79; 95% CI 56.56-65.03) were significantly higher compared to all remaining hospitals, while UHC members (36.8; 95% CI 34.51-39.17) performed below the mean (p < 0.0001). Significant differences in mean hospital case mix index included POSH (mean = 2.32, p < 0.0001), USNWR honorees (mean = 2.24, p = 0.0140) and UHC members (mean = 1.99, p < 0.0001), while Kaiser Permanente hospitals had a lower case mix value (mean = 1.54, p < 0.0001). Re-estimation of TPS did not change the original results after adjustment for differences in hospital case mix index. Conclusions The Hospital Value-Based Purchasing Program revealed superior hospital performance associated with business model. Closer inspection of high-value hospitals may guide value improvement and policy-making decisions for all Medicare Value-Based Purchasing Program hospitals. PMID:27502368
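The analysis above fits a general linear model of TPS on business-model group with adjustment for case mix index. A minimal sketch of that model on simulated data (group labels, effect sizes and sample size are illustrative assumptions, not the study's 3089 hospitals):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: total performance score (TPS), business-model group, and
# case mix index (CMI) for simulated hospitals.
n = 200
group = rng.integers(0, 3, size=n)      # 0 = reference, 1 = POSH-like, 2 = UHC-like
cmi = rng.normal(1.8, 0.3, size=n)
tps = 45 + 18 * (group == 1) - 9 * (group == 2) - 2 * cmi + rng.normal(0, 5, size=n)

# General linear model: TPS ~ group dummies + CMI, fit by least squares.
X = np.column_stack([
    np.ones(n),                  # intercept (reference group)
    (group == 1).astype(float),  # POSH-like indicator
    (group == 2).astype(float),  # UHC-like indicator
    cmi,                         # adjustment for case mix index
])
beta, *_ = np.linalg.lstsq(X, tps, rcond=None)
print("case-mix-adjusted group effects:", beta[1], beta[2])
```

Re-estimating with and without the `cmi` column is the analogue of the study's check that case-mix adjustment did not change the group comparisons.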
QCD Sum Rules and Models for Generalized Parton Distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anatoly Radyushkin
2004-10-01
I use QCD sum rule ideas to construct models for generalized parton distributions. To this end, the perturbative parts of QCD sum rules for the pion and nucleon electromagnetic form factors are interpreted in terms of GPDs, and two models are discussed. One of them takes the double Borel transform at an adjusted value of the Borel parameter as a model for nonforward parton densities, and the other is based on the local duality relation. Possible ways of improving these Ansätze are briefly discussed.
Moghavem, Nuriel; McDonald, Kathryn; Ratliff, John K; Hernandez-Boussard, Tina
2016-04-01
Patient Safety Indicators (PSIs) are administratively coded identifiers of potentially preventable adverse events. These indicators are used for multiple purposes, including benchmarking and quality improvement efforts. Baseline PSI evaluation in high-risk surgeries is fundamental to both purposes. Determine PSI rates and their impact on other outcomes in patients undergoing cranial neurosurgery compared with other surgeries. The Agency for Healthcare Research and Quality (AHRQ) PSI software was used to flag adverse events and determine risk-adjusted rates (RAR). Regression models were built to assess the association between PSIs and important patient outcomes. We identified cranial neurosurgeries based on International Classification of Diseases, Ninth Revision, Clinical Modification codes in California, Florida, New York, Arkansas, and Mississippi State Inpatient Databases, AHRQ, 2010-2011. PSI development, 30-day all-cause readmission, length of stay, hospital costs, and inpatient mortality. A total of 48,424 neurosurgical patients were identified. Procedure indication was strongly associated with PSI development. The neurosurgical population had significantly higher RAR of most PSIs evaluated compared with other surgical patients. Development of a PSI was strongly associated with increased length of stay and hospital cost and, in certain PSIs, increased inpatient mortality and 30-day readmission. In this population-based study, certain accountability measures proposed for use as value-based payment modifiers show higher RAR in neurosurgery patients compared with other surgical patients and were subsequently associated with poor outcomes. Our results indicate that for quality improvement efforts, the current AHRQ risk-adjustment models should be viewed in clinically meaningful stratified subgroups: for profiling and pay-for-performance applications, additional factors should be included in the risk-adjustment models. 
Further evaluation of PSIs in additional high-risk surgeries is needed to better inform the use of these metrics.
Ademi, Zanfina; Pfeil, Alena M; Hancock, Elizabeth; Trueman, David; Haroun, Rola Haroun; Deschaseaux, Celine; Schwenkglenks, Matthias
2017-11-29
We aimed to assess the cost effectiveness of sacubitril/valsartan compared to angiotensin-converting enzyme inhibitors (ACEIs) for the treatment of individuals with chronic heart failure and reduced ejection fraction (HFrEF) from the perspective of the Swiss health care system. The cost-effectiveness analysis was implemented as a lifelong regression-based cohort model. We compared sacubitril/valsartan with enalapril in chronic heart failure patients with HFrEF and New York Heart Association Functional Classification II-IV symptoms. Regression models based on the randomised phase III PARADIGM-HF clinical trial were used to predict events (all-cause mortality, hospitalisations, adverse events and quality of life) for each treatment strategy modelled over the lifetime horizon, with adjustments for patient characteristics. Unit costs were obtained from Swiss public sources for the year 2014, and costs and effects were discounted by 3%. The main outcome of interest was the incremental cost-effectiveness ratio (ICER), expressed as cost per quality-adjusted life year (QALY) gained. Deterministic sensitivity analysis (DSA) and scenario and probabilistic sensitivity analysis (PSA) were performed. In the base-case analysis, the sacubitril/valsartan strategy showed a decrease in the number of hospitalisations (6.0% per year absolute reduction) and in lifetime hospital costs by 8.0% (discounted) when compared with enalapril. Sacubitril/valsartan was predicted to improve overall and quality-adjusted survival by 0.50 years and 0.42 QALYs, respectively. Additional net total costs were CHF 10 926. This led to an ICER of CHF 25 684. In the PSA, the probability of sacubitril/valsartan being cost-effective at a threshold of CHF 50 000 was 99.0%. The treatment of HFrEF patients with sacubitril/valsartan versus enalapril is cost effective if a willingness-to-pay threshold of CHF 50 000 per QALY gained is assumed.
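The ICER in an analysis like this is simply the incremental cost divided by the incremental QALYs. A sketch using the rounded deltas from the abstract (the ratio of the rounded values, about CHF 26 014, differs slightly from the reported CHF 25 684, which is computed from unrounded model outputs):

```python
# Incremental values as reported in the abstract (rounded).
delta_cost_chf = 10926.0   # additional net lifetime cost, sacubitril/valsartan vs enalapril
delta_qalys = 0.42         # additional quality-adjusted life years

# ICER = incremental cost per incremental QALY gained.
icer = delta_cost_chf / delta_qalys
willingness_to_pay = 50000.0
cost_effective = icer <= willingness_to_pay
print(f"ICER: CHF {icer:,.0f} per QALY gained; cost-effective at CHF 50 000: {cost_effective}")
```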
Helping military families through the deployment process: Strategies to support parenting
Gewirtz, Abigail H.; Erbes, Christopher R.; Polusny, Melissa A.; Forgatch, Marion S.; DeGarmo, David S.
2011-01-01
Recent studies have highlighted the impact of deployment on military families and children and the corresponding need for interventions to support them. Historically, however, little emphasis has been placed on family-based interventions in general, and parenting interventions in particular, with returning service members. This paper provides an overview of research on the associations between combat deployment, parental adjustment of service members and spouses, parenting impairments, and children’s adjustment problems, and provides a social interaction learning framework for research and practice to support parenting among military families affected by a parent’s deployment. We then describe the Parent Management Training-Oregon model (PMTO™), a family of interventions that improves parenting practices and child adjustment in highly stressed families, and briefly present work on an adaptation of PMTO for use in military families (After Deployment: Adaptive Parenting Tools, or ADAPT). The article concludes with PMTO-based recommendations for clinicians providing parenting support to military families. PMID:21841889
An integrated epidemiological and neural net model of the warfarin effect in managed care patients.
Jacobs, David M; Stefanovic, Filip; Wilton, Greg; Gomez-Caminero, Andres; Schentag, Jerome J
2017-01-01
Risk assessment tools are utilized to estimate the risk for stroke and the need for anticoagulation therapy in patients with atrial fibrillation (AF). These risk stratification scores are limited by the information inputted into them and a reliance on time-independent variables. The objective of this study was to develop a time-dependent neural net model to identify AF populations at high risk of poor clinical outcomes and to evaluate the discriminatory ability of the model in a managed care population. We performed a longitudinal cohort study within a health-maintenance organization from 1997 to 2008. Participants were identified with incident AF irrespective of warfarin status and followed through their duration within the database. Three clinical outcome measures were evaluated: stroke, myocardial infarction, and hemorrhage. A neural net model was developed to identify patients at high risk of clinical events; such patients were defined as "enriched". The model defines the enrichment based on the top 10 minimum mean square error output parameters that describe the three clinical outcomes. Cox proportional hazard models were utilized to evaluate the outcome measures. Among 285 patients, the mean age was 74 ± 12 years with a mean follow-up of 4.3 ± 2.6 years, and 154 (54%) were treated with warfarin. After propensity score adjustment, warfarin use was associated with a slightly increased risk of adverse outcomes (including stroke, myocardial infarction, and hemorrhage), though it did not attain statistical significance (adjusted hazard ratio [aHR] = 1.22; 95% confidence interval [CI] 0.75-1.97; p = 0.42). Within the neural net model, subjects at high risk of adverse outcomes were identified and labeled as "enriched". Following propensity score adjustment, enriched subjects were associated with an 81% higher risk of adverse outcomes as compared to nonenriched subjects (aHR = 1.81; 95% CI, 1.15-2.88; p = 0.01).
Enrichment methodology improves the statistical discrimination of meaningful endpoints when used in a health records-based analysis.
NASA Astrophysics Data System (ADS)
Yu, C. W.; Hodges, B. R.; Liu, F.
2017-12-01
Development of continental-scale river network models creates challenges where the massive amount of boundary condition data encounters the sensitivity of a dynamic numerical model. The topographic data sets used to define the river channel characteristics may include either corrupt data or complex configurations that cause instabilities in a numerical solution of the Saint-Venant equations. For local-scale river models (e.g. HEC-RAS), modelers typically rely on past experience to make ad hoc boundary condition adjustments that ensure a stable solution; the proof of the adjustment is merely the stability of the solution. To date, there do not exist any formal methodologies or automated procedures for a priori detecting/fixing boundary conditions that cause instabilities in a dynamic model. Formal methodologies for data screening and adjustment are a critical need for simulations with a large number of river reaches that draw their boundary condition data from a wide variety of sources. At the continental scale, we simply cannot assume that we will have access to river-channel cross-section data that has been adequately analyzed and processed. Herein, we argue that problematic boundary condition data for unsteady dynamic modeling can be identified through numerical modeling with the steady-state Saint-Venant equations. The fragility of numerical stability increases with the complexity of branching in the river network, and instabilities (even in an unsteady solution) are typically triggered by the nonlinear advection term in the Saint-Venant equations. It follows that the behavior of the simpler steady-state equations (which retain the nonlinear term) can be used to screen the boundary condition data for problematic regions. In this research, we propose a graph-theory based method to isolate the location of corrupted boundary condition data in a continental-scale river network and demonstrate its utility with a network of O(10^4) elements.
Acknowledgement: This research is supported by the National Science Foundation under grant number CCF-1331610.
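The screening idea above, representing the network as a directed graph and flagging reaches whose data break a steady-state solve, can be sketched as follows. This is an illustrative toy, not the authors' implementation: the network, the corrupt-data criterion (non-positive channel width), and the placeholder depth relation are all assumptions.

```python
# Hypothetical river network: reach -> downstream reach (None = outlet).
# Reach "C" carries corrupt cross-section data (zero channel width).
downstream = {"A": "C", "B": "C", "C": "D", "D": None}
width = {"A": 30.0, "B": 22.0, "C": 0.0, "D": 55.0}
headwater_inflow = {"A": 100.0, "B": 80.0}   # m^3/s

def steady_depth(q, w):
    """Toy steady-state stand-in: fails (None) for non-physical channel data."""
    if w <= 0:
        return None
    return (q / w) ** 0.5   # placeholder relation, not the Saint-Venant equations

# Traverse the DAG in topological order, accumulating discharge downstream and
# screening each reach before any unsteady simulation is attempted.
inflow = dict(headwater_inflow)
flagged = []
for reach in ["A", "B", "C", "D"]:        # a valid topological order
    q = inflow.get(reach, 0.0)
    if steady_depth(q, width[reach]) is None:
        flagged.append(reach)
    nxt = downstream[reach]
    if nxt is not None:
        inflow[nxt] = inflow.get(nxt, 0.0) + q

print("reaches flagged for corrupt boundary data:", flagged)
```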
NASA Astrophysics Data System (ADS)
Rong, J. H.; Yi, J. H.
2010-10-01
In density-based topological design, one expects that the final result consists of elements that are either black (solid material) or white (void), without any grey areas. Moreover, one also expects that the optimal topology can be obtained by starting from any initial topology configuration. An improved structural topological optimization method for multi-displacement constraints is proposed in this paper. In the proposed method, the whole optimization process is divided into two optimization adjustment phases and a phase transferring step. Firstly, an optimization model is built to deal with the varied displacement limits, design space adjustments, and the relations between the element stiffness matrix, element mass, and element topology variable. Secondly, a procedure is proposed to solve the optimization problem formulated in the first optimization adjustment phase, starting with a small design space and advancing to a larger design space. The design space adjustments are automatic when the design domain needs expansion, and the convergence of the proposed method is not affected by them. The topology obtained by the proposed procedure in the first optimization phase can approach the vicinity of the optimum topology. Then, a heuristic algorithm is given to improve the efficiency and to make the designed structural topology black/white in both the phase transferring step and the second optimization adjustment phase. The optimum topology is finally obtained by the second-phase optimization adjustments. Two examples are presented to show that the topologies obtained by the proposed method have a very good 0/1 design distribution property, and that the computational efficiency is enhanced by reducing the element number of the structural finite element model during the two optimization adjustment phases. The examples also show that this method is robust and practicable.
Cost and schedule estimation study report
NASA Technical Reports Server (NTRS)
Condon, Steve; Regardie, Myrna; Stark, Mike; Waligora, Sharon
1993-01-01
This report describes the analysis performed and the findings of a study of the software development cost and schedule estimation models used by the Flight Dynamics Division (FDD), Goddard Space Flight Center. The study analyzes typical FDD projects, focusing primarily on those developed since 1982. The study reconfirms the standard SEL effort estimation model that is based on size adjusted for reuse; however, guidelines for the productivity and growth parameters in the baseline effort model have been updated. The study also produced a schedule prediction model based on empirical data that varies depending on application type. Models for the distribution of effort and schedule by life-cycle phase are also presented. Finally, this report explains how to use these models to plan SEL projects.
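A size-adjusted-for-reuse effort model of the SEL style described above can be sketched in a few lines. The 20% reuse weight and the productivity figure below are illustrative assumptions, not the calibrated FDD guidelines from the report:

```python
# Sketch of an SEL-style effort model: effort is driven by size adjusted for
# reuse, i.e. reused code counts at a fraction of the cost of new code.
def adjusted_size(new_sloc, reused_sloc, reuse_weight=0.20):
    """Equivalent new size in source lines of code (SLOC)."""
    return new_sloc + reuse_weight * reused_sloc

def effort_staff_months(new_sloc, reused_sloc, productivity_sloc_per_month=3000):
    """Effort estimate from adjusted size and an assumed productivity rate."""
    return adjusted_size(new_sloc, reused_sloc) / productivity_sloc_per_month

# Example: a project with 60 KSLOC of new code and 40 KSLOC of reused code.
print(f"{effort_staff_months(60_000, 40_000):.1f} staff-months")
```

The report's contribution is precisely the updated guidelines for the weight and productivity parameters in a model of this shape, plus schedule and phase-distribution models on top of it.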
Hao, Yanni; Lin, Peggy L; Xie, Jipan; Li, Nanxin; Koo, Valerie; Ohashi, Erika; Wu, Eric Q; Rogerio, Jaqueline
2015-08-01
Assessing real-world effectiveness of everolimus-based therapy (EVE) versus fulvestrant monotherapy (FUL) among postmenopausal women with hormone receptor-positive (HR+)/HER2-negative metastatic breast cancer (mBC) after progression on a nonsteroidal aromatase inhibitor (NSAI). Medical charts of community-based patients who received EVE or FUL for mBC after NSAI were examined. Progression-free survival (PFS), time on treatment and time to chemotherapy were compared using Kaplan-Meier curves and Cox proportional hazards models adjusting for line of therapy and patient characteristics. 192 patients received EVE and 156 FUL. After adjusting for patient characteristics, EVE was associated with significantly longer PFS than FUL (hazard ratio: 0.71; p = 0.045). EVE was associated with better PFS than FUL among NSAI-refractory postmenopausal HR+/HER2- mBC patients.
NASA Astrophysics Data System (ADS)
Wu, Feng
This dissertation contains three essays. In the first essay I use a volatility spillover model to find evidence of significant spillovers from crude oil prices to corn cash and futures prices, and that these spillover effects are time-varying. Results reveal that corn markets have become much more connected to crude oil markets after the introduction of the Energy Policy Act of 2005. Furthermore, crude oil prices transmit positive volatility spillovers into corn prices and movements in corn prices become more energy-driven as the ethanol gasoline consumption ratio increases. Based on this strong volatility link between crude oil and corn prices, a new cross hedging strategy for managing corn price risk using oil futures is examined and its performance studied. Results show that this cross hedging strategy provides only slightly better hedging performance compared to traditional hedging in corn futures markets alone. The implication is that hedging corn price risk in corn futures markets alone can still provide relatively satisfactory performance in the biofuel era. The second essay studies the spillover effect of biofuel policy on participation in the Conservation Reserve Program. Landowners' participation decisions are modeled using a real options framework. A novel aspect of the model is that it captures the structural change in agriculture caused by rising biofuel production. The resulting model is used to simulate the spillover effect under various conditions. In particular, I simulate how increased growth in agricultural returns, persistence of the biofuel production boom, and the volatility surrounding agricultural returns, affect conservation program participation decisions. Policy implications of these results are also discussed. The third essay proposes a methodology to construct a risk-adjusted implied volatility measure that removes the forecasting bias of the model-free implied volatility measure. 
The risk adjustment is based on a closed-form relationship between the expectation of future volatility and the model-free implied volatility assuming a jump-diffusion model. I use a GMM estimation framework to identify the key model parameters needed to apply the model. An empirical application to corn futures implied volatility is used to illustrate the methodology and demonstrate differences between my approach and the model-free implied volatility using observed corn option prices. I compare the risk-adjusted forecast with the unadjusted forecast as well as other alternatives; and results suggest that the risk-adjusted volatility is unbiased, informationally more efficient, and has superior predictive power over the alternatives considered.
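The cross-hedging analysis in the first essay rests on the standard minimum-variance hedge ratio, h* = Cov(Δspot, Δfutures) / Var(Δfutures). A sketch of the estimator on simulated corn-cash vs crude-futures returns (the data below are synthetic, purely to illustrate the computation, not the dissertation's series):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated daily returns: futures drive part of the spot movement.
futures_ret = rng.normal(0.0, 0.02, size=1000)
spot_ret = 0.4 * futures_ret + rng.normal(0.0, 0.015, size=1000)

# Minimum-variance hedge ratio: h* = Cov(s, f) / Var(f).
h_star = np.cov(spot_ret, futures_ret)[0, 1] / np.var(futures_ret, ddof=1)
print(f"optimal cross-hedge ratio: {h_star:.3f}")
```

Hedging effectiveness is then judged by how much h* futures positions reduce the variance of the hedged portfolio, which is the comparison the essay makes between oil-futures cross hedging and direct corn-futures hedging.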
Persistent opioid use following Cesarean delivery: patterns and predictors among opioid naïve women
Bateman, Brian T.; Franklin, Jessica M.; Bykov, Katsiaryna; Avorn, Jerry; Shrank, William H.; Brennan, Troyen A.; Landon, Joan E.; Rathmell, James P.; Huybrechts, Krista F.; Fischer, Michael A.; Choudhry, Niteesh K.
2016-01-01
Background The incidence of opioid-related death in women has increased five-fold over the past decade. For many women, their initial opioid exposure will occur in the setting of routine medical care. Approximately 1 in 3 deliveries in the U.S. is by Cesarean, and opioids are commonly prescribed for post-surgical pain management. Objective The objective of this study was to determine the risk that opioid-naïve women prescribed opioids after Cesarean delivery will subsequently become consistent prescription opioid users in the year following delivery, and to identify predictors of this behavior. Study Design We identified women in a database of commercial insurance beneficiaries who underwent Cesarean delivery and who were opioid-naïve in the year prior to delivery. To identify persistent users of opioids, we used trajectory models, which group together patients with similar patterns of medication filling during follow-up, based on patterns of opioid dispensing in the year following Cesarean delivery. We then constructed a multivariable logistic regression model to identify independent risk factors for membership in the persistent user group. Results 285 of 80,127 (0.36%; 95% confidence interval 0.32 to 0.40) opioid-naïve women became persistent opioid users (identified using trajectory models based on monthly patterns of opioid dispensing) following Cesarean delivery. Demographics and baseline comorbidity predicted such use with moderate discrimination (c statistic = 0.73).
Significant predictors included a history of cocaine abuse (risk 7.41%; adjusted odds ratio 6.11, 95% confidence interval 1.03 to 36.31) and other illicit substance abuse (2.36%; adjusted odds ratio 2.78, 95% confidence interval 1.12 to 6.91), tobacco use (1.45%; adjusted odds ratio 3.04, 95% confidence interval 2.03 to 4.55), back pain (0.69%; adjusted odds ratio 1.74, 95% confidence interval 1.33 to 2.29), migraines (0.91%; adjusted odds ratio 2.14, 95% confidence interval 1.58 to 2.90), antidepressant use (1.34%; adjusted odds ratio 3.19, 95% confidence interval 2.41 to 4.23) and benzodiazepine use (1.99%; adjusted odds ratio 3.72, 95% confidence interval 2.64 to 5.26) in the year prior to Cesarean delivery. Conclusions A very small proportion of opioid-naïve women (approximately 1 in 300) become persistent prescription opioid users following Cesarean delivery. Pre-existing psychiatric comorbidity, certain pain conditions, and substance use/abuse conditions identifiable at the time of initial opioid prescribing were predictors of persistent use. PMID:26996986
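The c statistic reported above is the area under the ROC curve: the probability that a randomly chosen persistent user receives a higher predicted risk than a randomly chosen non-user. A self-contained sketch of its computation on toy predictions (not the study's data):

```python
def c_statistic(scores, labels):
    """Concordance (AUC): pairwise comparison of predicted risks, with ties
    counted as half-concordant."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need both outcome classes")
    concordant = sum(
        1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg
    )
    return concordant / (len(pos) * len(neg))

# Toy predicted risks and outcomes (1 = persistent opioid user).
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2, 0.1]
labels = [1,   0,   1,   0,   1,    0,   0,   0,   0,   0]
print(f"c = {c_statistic(scores, labels):.3f}")
```

A value of 0.5 means no discrimination and 1.0 perfect discrimination, which is why 0.73 is described as moderate.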
A hybrid double-observer sightability model for aerial surveys
Griffin, Paul C.; Lubow, Bruce C.; Jenkins, Kurt J.; Vales, David J.; Moeller, Barbara J.; Reid, Mason; Happe, Patricia J.; Mccorquodale, Scott M.; Tirhi, Michelle J.; Schaberi, Jim P.; Beirne, Katherine
2013-01-01
Raw counts from aerial surveys make no correction for undetected animals and provide no estimate of precision with which to judge the utility of the counts. Sightability modeling and double-observer (DO) modeling are 2 commonly used approaches to account for detection bias and to estimate precision in aerial surveys. We developed a hybrid DO sightability model (model MH) that uses the strength of each approach to overcome the weakness in the other, for aerial surveys of elk (Cervus elaphus). The hybrid approach uses detection patterns of 2 independent observer pairs in a helicopter and telemetry-based detections of collared elk groups. Candidate MH models reflected hypotheses about effects of recorded covariates and unmodeled heterogeneity on the separate front-seat observer pair and back-seat observer pair detection probabilities. Group size and concealing vegetation cover strongly influenced detection probabilities. The pilot's previous experience participating in aerial surveys influenced detection by the front pair of observers if the elk group was on the pilot's side of the helicopter flight path. In 9 surveys in Mount Rainier National Park, the raw number of elk counted was approximately 80–93% of the abundance estimated by model MH. Uncorrected ratios of bulls per 100 cows generally were low compared to estimates adjusted for detection bias, but ratios of calves per 100 cows were comparable whether based on raw survey counts or adjusted estimates. The hybrid method was an improvement over commonly used alternatives, with improved precision compared to sightability modeling and reduced bias compared to DO modeling.
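For independent observer pairs, the double-observer logic above combines the two detection probabilities as p = 1 - (1 - p1)(1 - p2), and an abundance estimate divides the raw count by p. A minimal sketch with illustrative values (the paper's model MH estimates covariate-dependent probabilities and uses telemetry to handle unmodeled heterogeneity, which this toy omits):

```python
def combined_detection(p1, p2):
    """Probability that at least one of two independent observer pairs
    detects a group, given per-pair detection probabilities p1 and p2."""
    return 1.0 - (1.0 - p1) * (1.0 - p2)

def corrected_count(raw_count, p1, p2):
    """Detection-corrected abundance estimate from a raw survey count."""
    return raw_count / combined_detection(p1, p2)

# Illustrative front-pair and back-pair detection probabilities.
p = combined_detection(0.75, 0.70)
print(f"combined detection probability: {p:.3f}")
print(f"corrected abundance for a raw count of 80: {corrected_count(80, 0.75, 0.70):.1f}")
```

This is consistent with the abstract's observation that raw counts were roughly 80-93% of model-estimated abundance: the raw count understates abundance by the factor p.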
Wentzel, Kathryn R
2002-01-01
This study examined the utility of parent socialization models for understanding teachers' influence on student adjustment in middle school. Teachers were assessed with respect to their modeling of motivation and to Baumrind's parenting dimensions of control, maturity demands, democratic communication, and nurturance. Student adjustment was defined in terms of their social and academic goals and interest in class, classroom behavior, and academic performance. Based on information from 452 sixth graders from two suburban middle schools, results of multiple regressions indicated that the five teaching dimensions explained significant amounts of variance in student motivation, social behavior, and achievement. High expectations (maturity demands) was a consistent positive predictor of students' goals and interests, and negative feedback (lack of nurturance) was the most consistent negative predictor of academic performance and social behavior. The role of motivation in mediating relations between teaching dimensions and social behavior and academic achievement also was examined; evidence for mediation was not found. Relations of teaching dimensions to student outcomes were the same for African American and European American students, and for boys and girls. The implications of parent socialization models for understanding effective teaching are discussed.
Beck, Andrea M; Robinson, John W; Carlson, Linda E
2013-11-01
Sexual dysfunction is the most significant long-lasting effect of prostate cancer (PrCa) treatment. Despite the many medical treatments for erectile dysfunction, many couples report that they are dissatisfied with their sexual relationship and eventually cease sexual relations altogether. We sought to understand what distinguishes successful couples from those who are not successful in adjusting to changes in sexual function subsequent to PrCa treatment. Ten couples who maintained satisfying sexual intimacy after PrCa treatment and seven couples that did not were interviewed conjointly and individually. Interviews were transcribed and analyzed using grounded theory methodology. The resulting theory suggests that individuals are motivated to engage in sex primarily because of physical pleasure and relational intimacy. The couples who valued sex primarily for relational intimacy were more likely to successfully adjust to changes in sexual function than those who primarily valued sex for physical pleasure. The attributes of acceptance, flexibility, and persistence helped sustain couples through the process of adjustment. Based on these findings, a new theory, the Physical Pleasure-Relational Intimacy Model of Sexual Motivation (PRISM), is presented. The results elucidate the main motives for engaging in sexual activity (physical pleasure and/or relational intimacy) as a determining factor in the successful maintenance of satisfying sexual intimacy after PrCa treatment. The PRISM model predicts that couples who place a greater value on sex for relational intimacy will adjust better to the sexual challenges after PrCa treatment than couples who place a lower value on sex for relational intimacy. Implications of the model for counselling are discussed. This model remains to be tested in future research.
Use of iDXA spine scans to evaluate total and visceral abdominal fat.
Bea, J W; Hsu, C-H; Blew, R M; Irving, A P; Caan, B J; Kwan, M L; Abraham, I; Going, S B
2018-01-01
Abdominal fat may be a better predictor than body mass index (BMI) for risk of metabolically related diseases, such as diabetes, cardiovascular disease, and some cancers. We sought to validate the percent fat reported on dual energy X-ray absorptiometry (DXA) regional spine scans (spine fat fraction, SFF) against abdominal fat obtained from total body scans using the iDXA machine (General Electric, Madison, WI), as previously done on the Prodigy model. Total body scans and regional spine scans were completed on the same day (N = 50). In alignment with the Prodigy-based study, the following regions of interest (ROI) were assessed from total body scans and compared to the SFF from regional spine scans: total abdominal fat at (1) lumbar vertebrae L2-L4 and (2) L2-Iliac Crest (L2-IC); (3) total trunk fat; and (4) visceral fat in the android region. Separate linear regression models were used to predict each total body scan ROI from SFF; models were validated by bootstrapping. The sample was 84% female, with a mean age of 38.5 ± 17.4 years and a mean BMI of 23.0 ± 3.8 kg/m2. The SFF, adjusted for BMI, predicted L2-L4 and L2-IC total abdominal fat (%; adjusted R2 = 0.90) and total trunk fat (%; adjusted R2 = 0.88) well; for visceral fat (%), the adjusted R2 was 0.83. Linear regression models adjusted for additional participant characteristics resulted in similar adjusted R2 values. This replication of the strong correlation between SFF and abdominal fat measures on the iDXA in a new population confirms the previous Prodigy model findings and improves generalizability.
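The validation regressions above predict a total-body-scan fat measure from SFF with BMI as a covariate, then report adjusted R2. A sketch of that fit on simulated data (coefficients, noise level and sample are assumptions chosen to mimic an adjusted R2 near 0.9, not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated sample of 50 participants, echoing the study's N.
n = 50
sff = rng.normal(30, 8, size=n)     # spine fat fraction, %
bmi = rng.normal(23, 3.8, size=n)   # body mass index, kg/m2
abd_fat = 2 + 0.9 * sff + 0.4 * bmi + rng.normal(0, 2.5, size=n)  # ROI fat, %

# Linear regression: abdominal fat ~ SFF + BMI, fit by least squares.
X = np.column_stack([np.ones(n), sff, bmi])
beta, *_ = np.linalg.lstsq(X, abd_fat, rcond=None)
pred = X @ beta

# Adjusted R2 penalizes the R2 for the number of predictors.
ss_res = np.sum((abd_fat - pred) ** 2)
ss_tot = np.sum((abd_fat - abd_fat.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - X.shape[1])
print(f"adjusted R2: {adj_r2:.2f}")
```

The study additionally validated each such model by bootstrapping, i.e. refitting on resampled data to check the stability of the coefficients and fit.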
NASA Astrophysics Data System (ADS)
Schmidt, P.; Lund, B.; Näslund, J.-O.; Fastook, J.
2014-05-01
In this study we compare a recent reconstruction of the Weichselian Ice Sheet as simulated by the University of Maine ice sheet model (UMISM) to two reconstructions commonly used in glacial isostatic adjustment (GIA) modelling: ICE-5G and ANU (Australian National University, also known as RSES). The UMISM reconstruction is carried out on a regional scale based on thermo-mechanical modelling, whereas ANU and ICE-5G are global models based on the sea level equation. The three models of the Weichselian Ice Sheet are compared directly in terms of ice volume, extent and thickness, as well as in terms of predicted glacial isostatic adjustment in Fennoscandia. The three reconstructions display significant differences. Whereas UMISM and ANU include phases of pronounced advance and retreat prior to the last glacial maximum (LGM), the thickness and areal extent of the ICE-5G ice sheet are more or less constant up until the LGM. During the post-LGM deglaciation phase, ANU and ICE-5G melt relatively uniformly over the entire ice sheet, in contrast to UMISM, which melts preferentially from the edges, thus reflecting the fundamental difference in the reconstruction schemes. We find that all three reconstructions fit the present-day uplift rates over Fennoscandia equally well, albeit with different optimal earth model parameters. Given identical earth models, ICE-5G predicts the fastest present-day uplift rates, and ANU the slowest. Moreover, only for ANU can a unique best-fit model be determined. For UMISM and ICE-5G there is a range of earth models that can reproduce the present-day uplift rates equally well. This is understood from the higher present-day uplift rates predicted by ICE-5G and UMISM, which result in bifurcations in the best-fit upper- and lower-mantle viscosities.
We study the areal distributions of present-day residual surface velocities in Fennoscandia and show that all three reconstructions generally over-predict velocities in southwestern Fennoscandia and that there are large differences in the fit to the observational data in Finland and northernmost Sweden and Norway. These differences may provide input to further enhancements of the ice sheet reconstructions.
Bibliography of In-House and Contract Reports, Supplement 18
1992-10-01
Transparent Conforming Overlays. Listed reports (TITLE, REPORT NO., YEAR) include: Development, Service Tests, and Production Model Tests, Autofocusing Rectifier... (1307-TR, 1953); Development, Test, Preparation, Delivery, and Installation of Algorithms for Optimal Adjustment of Inertial Survey Data (ETL-1307, 1982); Developmental Optical...; Knowledge-Based Vision Techniques, Task B: Terrain and Object Modeling Recognition, March 13, 1985 - March 13, 1986 (ETL-0428, 1986); Knowledge-Based Vision Techniques, Task B: Terrain ETL
Utility-based early modulation of processing distracting stimulus information.
Wendt, Mike; Luna-Rodriguez, Aquiles; Jacobsen, Thomas
2014-12-10
Humans are selective information processors who efficiently filter out goal-inappropriate stimulus information in order to gain control over their actions. Nonetheless, stimuli which are both unnecessary for solving a current task and liable to cue an incorrect response (i.e., "distractors") frequently modulate task performance, even when consistently paired with a physical feature that makes them easily discernible from target stimuli. Current models of cognitive control assume adjustment of the processing of distractor information based on the overall distractor utility (e.g., predictive value regarding the appropriate response, likelihood to elicit conflict with target processing). Although studies on distractor interference have supported the notion of utility-based processing adjustment, previous evidence is inconclusive regarding the specificity of this adjustment for distractor information and the stage(s) of processing affected. To assess the processing of distractors during sensory-perceptual phases we applied EEG recording in a stimulus identification task, involving successive distractor-target presentation, and manipulated the overall distractor utility. Behavioral measures replicated previously found utility modulations of distractor interference. Crucially, distractor-evoked visual potentials (i.e., posterior N1) were more pronounced in high-utility than low-utility conditions. This effect generalized to distractors unrelated to the utility manipulation, providing evidence for item-unspecific adjustment of early distractor processing to the experienced utility of distractor information. Copyright © 2014 the authors 0270-6474/14/3416720-06$15.00/0.
Kinematic synthesis of adjustable robotic mechanisms
NASA Astrophysics Data System (ADS)
Chuenchom, Thatchai
1993-01-01
Conventional hard automation, such as a linkage-based or a cam-driven system, provides high speed capability and repeatability but not the flexibility required in many industrial applications. The conventional mechanisms, which are typically single-degree-of-freedom systems, are increasingly being replaced by multi-degree-of-freedom, multi-actuator systems driven by logic controllers. Although this new trend in sophistication provides greatly enhanced flexibility, there are many instances where the flexibility needs are overstated and the associated complexity is unnecessary. Traditional mechanism-based hard automation, on the other hand, can neither fulfill multi-task requirements nor be cost-effective, mainly owing to a lack of methods and tools to design-in flexibility. This dissertation attempts to bridge this technological gap by developing Adjustable Robotic Mechanisms (ARM's) or 'programmable mechanisms' as a middle ground between high speed hard automation and expensive serial jointed-arm robots. This research introduces the concept of adjustable robotic mechanisms towards cost-effective manufacturing automation. A generalized analytical synthesis technique has been developed to support the computational design of ARM's that lays the theoretical foundation for synthesis of adjustable mechanisms. The synthesis method developed in this dissertation, called generalized adjustable dyad and triad synthesis, advances the well-known Burmester theory in kinematics to a new level. While this method provides planar solutions, a novel patented scheme is utilized for converting prescribed three-dimensional motion specifications into sets of planar projections. This provides an analytical and a computational tool for designing adjustable mechanisms that satisfy multiple sets of three-dimensional motion specifications. Several design issues were addressed, including adjustable parameter identification, branching defect, and mechanical errors.
An efficient mathematical scheme for identification of adjustable members was also developed. The analytical synthesis techniques developed in this dissertation were successfully implemented in a graphic-intensive, user-friendly computer program. A physical prototype of a general purpose adjustable robotic mechanism has been constructed to serve as a proof-of-concept model.
Gray, Wayne D; Sims, Chris R; Fu, Wai-Tat; Schoelles, Michael J
2006-07-01
The soft constraints hypothesis (SCH) is a rational analysis approach that holds that the mixture of perceptual-motor and cognitive resources allocated for interactive behavior is adjusted based on temporal cost-benefit tradeoffs. Alternative approaches maintain that cognitive resources are in some sense protected or conserved in that greater amounts of perceptual-motor effort will be expended to conserve lesser amounts of cognitive effort. One alternative, the minimum memory hypothesis (MMH), holds that people favor strategies that minimize the use of memory. SCH is compared with MMH across 3 experiments and with predictions of an Ideal Performer Model that uses ACT-R's memory system in a reinforcement learning approach that maximizes expected utility by minimizing time. Model and data support the SCH view of resource allocation; at a sub-1000-ms level of analysis, mixtures of cognitive and perceptual-motor resources are adjusted based on their cost-benefit tradeoffs for interactive behavior. ((c) 2006 APA, all rights reserved).
Automatic orientation and 3D modelling from markerless rock art imagery
NASA Astrophysics Data System (ADS)
Lerma, J. L.; Navarro, S.; Cabrelles, M.; Seguí, A. E.; Hernández, D.
2013-02-01
This paper investigates the use of two detectors and descriptors on image pyramids for automatic image orientation and generation of 3D models. The detectors and descriptors replace manual measurements and are used to detect, extract and match features across multiple imagery. The Scale-Invariant Feature Transform (SIFT) and the Speeded Up Robust Features (SURF) will be assessed based on speed, number of features, matched features, and precision in image and object space depending on the adopted hierarchical matching scheme. The influence of additionally applying Area-Based Matching (ABM) with normalised cross-correlation (NCC) and least squares matching (LSM) is also investigated. The pipeline makes use of photogrammetric and computer vision algorithms, aiming for minimum interaction and maximum accuracy from a calibrated camera. Both the exterior orientation parameters and the 3D coordinates in object space are sequentially estimated combining relative orientation, single space resection and bundle adjustment. The fully automatic image-based pipeline presented herein to automate the image orientation step of a sequence of terrestrial markerless imagery is compared with manual bundle block adjustment and terrestrial laser scanning (TLS), which serves as ground truth. The benefits of applying ABM after feature-based matching (FBM) will be assessed both in image and object space for the 3D modelling of a complex rock art shelter.
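The hierarchical matching scheme described above rests on nearest-neighbour descriptor matching, typically filtered with Lowe's ratio test. A minimal pure-Python sketch of that test, using toy 4-D descriptors in place of real 128-D SIFT vectors (all values invented for illustration):

```python
import math

def euclidean(a, b):
    # Euclidean distance between two descriptor vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """For each descriptor in image A, find its two nearest neighbours in
    image B and accept the match only when the closest is clearly better
    than the second closest (Lowe's ratio test)."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((euclidean(da, db), j) for j, db in enumerate(desc_b))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:
            matches.append((i, best[1]))
    return matches

# Toy descriptors: A[0] resembles B[0], A[1] resembles B[2]
A = [(0.0, 1.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0)]
B = [(0.0, 0.9, 0.1, 0.0), (0.5, 0.5, 0.5, 0.5), (0.9, 0.1, 0.0, 0.0)]
print(ratio_test_match(A, B))  # [(0, 0), (1, 2)]
```

In a real pipeline the same filter is applied to SIFT or SURF descriptors extracted at each pyramid level, and the surviving matches feed relative orientation and bundle adjustment.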
NASA Astrophysics Data System (ADS)
Nord, Mark; Cafiero, Carlo; Viviani, Sara
2016-11-01
Statistical methods based on item response theory are applied to experiential food insecurity survey data from 147 countries, areas, and territories to assess data quality and develop methods to estimate national prevalence rates of moderate and severe food insecurity at equal levels of severity across countries. Data were collected from nationally representative samples of 1,000 adults in each country. A Rasch-model-based scale was estimated for each country, and data were assessed for consistency with model assumptions. A global reference scale was calculated based on item parameters from all countries. Each country's scale was adjusted to the global standard, allowing for up to 3 of the 8 scale items to be considered unique in that country if their deviance from the global standard exceeded a set tolerance. With very few exceptions, data from all countries were sufficiently consistent with model assumptions to constitute reasonably reliable measures of food insecurity and were adjustable to the global standard with fair confidence. National prevalence rates of moderate-or-severe food insecurity assessed over a 12-month recall period ranged from 3 percent to 92 percent. The correlations of national prevalence rates with national income, health, and well-being indicators provide external validation of the food security measure.
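The Rasch model underlying the food insecurity scale, and the mean-based equating of a country scale onto a global reference, can be sketched as follows. The item severities are invented for illustration, and the item-level deviance screening described in the abstract is omitted:

```python
import math

def rasch_prob(theta, b):
    """Rasch model: probability that a respondent with severity theta
    affirms an item with severity parameter b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical item severities on a country scale and on the global reference
country_items = [-1.2, -0.4, 0.3, 1.1]
global_items = [-1.0, -0.2, 0.5, 1.3]

# Equating by matching means: shift the country parameters onto the global metric
shift = (sum(global_items) - sum(country_items)) / len(country_items)
adjusted = [b + shift for b in country_items]

print(rasch_prob(0.0, 0.0))  # 0.5: a respondent at an item's severity affirms it half the time
print(adjusted)
```

With comparable metrics established this way, a severity threshold fixed on the global scale corresponds to the same level of food insecurity in every country, which is what makes cross-country prevalence rates comparable.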
Adjustment and validation of a simulation tool for CSP plants based on parabolic trough technology
NASA Astrophysics Data System (ADS)
García-Barberena, Javier; Ubani, Nora
2016-05-01
This work presents the validation process carried out for a simulation tool especially designed for the energy yield assessment of concentrating solar plants based on parabolic trough (PT) technology. The validation has been carried out by comparing the model estimations with real data collected from a commercial CSP plant. In order to adjust the model parameters used for the simulation, 12 different days were selected from one year of operational data measured at the real plant. The 12 days were simulated and the estimations compared with the measured data, focusing on the most important variables from the simulation point of view: temperatures, pressures and mass flow of the solar field, gross power, parasitic power, and net power delivered by the plant. Based on these 12 days, the key parameters for simulating the model were properly fixed and the simulation of a whole year performed. The results obtained for a complete year simulation showed very good agreement for the gross and net electric total production. The estimations for these magnitudes show a bias of 1.47% and 2.02%, respectively. The results proved that the simulation software describes with great accuracy the real operation of the power plant and correctly reproduces its transient behavior.
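The percentage bias figures quoted above reduce to a relative difference of totals. A minimal sketch with hypothetical daily production data, not the plant's actual measurements:

```python
def percent_bias(simulated, measured):
    """Percentage bias of modelled totals against measurements:
    100 * (sum(sim) - sum(obs)) / sum(obs)."""
    return 100.0 * (sum(simulated) - sum(measured)) / sum(measured)

# Hypothetical daily gross production (MWh) for three calibration days
sim = [410.0, 385.0, 402.0]
obs = [405.0, 380.0, 395.0]
print(round(percent_bias(sim, obs), 2))  # 1.44 (model slightly over-predicts)
```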
Raffield, Laura M; Cox, Amanda J; Criqui, Michael H; Hsu, Fang-Chi; Terry, James G; Xu, Jianzhao; Freedman, Barry I; Carr, J Jeffrey; Bowden, Donald W
2018-05-11
Coronary artery calcified plaque (CAC) is strongly predictive of cardiovascular disease (CVD) events and mortality, both in general populations and in individuals with type 2 diabetes at high risk for CVD. CAC is typically reported as an Agatston score, which is weighted for increased plaque density. However, the role of CAC density in CVD risk prediction, independently and with CAC volume, remains unclear. We examined the role of CAC density in individuals with type 2 diabetes from the family-based Diabetes Heart Study and the African American-Diabetes Heart Study. CAC density was calculated as mass divided by volume, and associations with incident all-cause and CVD mortality [median follow-up 10.2 years in European Americans (n = 902, n = 286 deceased), 5.2 years in African Americans (n = 552, n = 93 deceased)] were examined using Cox proportional hazards models, independently and in models adjusted for CAC volume. In European Americans, CAC density, like Agatston score and volume, was consistently associated with increased risk of all-cause and CVD mortality (p ≤ 0.002) in models adjusted for age, sex, statin use, total cholesterol, HDL, systolic blood pressure, high blood pressure medication use, and current smoking. However, these associations were no longer significant when models were additionally adjusted for CAC volume. CAC density was not significantly associated with mortality, either alone or adjusted for CAC volume, in African Americans. CAC density is not associated with mortality independently of CAC volume in European Americans and African Americans with type 2 diabetes.
Patel, Rena C; Onono, Maricianah; Gandhi, Monica; Blat, Cinthia; Hagey, Jill; Shade, Starley B; Vittinghoff, Eric; Bukusi, Elizabeth A; Newmann, Sara J; Cohen, Craig R
2015-11-01
Concerns have been raised about efavirenz reducing the effectiveness of contraceptive implants. We aimed to establish whether pregnancy rates differ between HIV-positive women who use various contraceptive methods and either efavirenz-based or nevirapine-based antiretroviral therapy (ART) regimens. We did this retrospective cohort study of HIV-positive women aged 15-45 years enrolled in 19 HIV care facilities supported by Family AIDS Care and Education Services in western Kenya between Jan 1, 2011, and Dec 31, 2013. Our primary outcome was incident pregnancy diagnosed clinically. The primary exposure was a combination of contraceptive method and efavirenz-based or nevirapine-based ART regimen. We used Poisson models, adjusting for repeated measures, and demographic, behavioural, and clinical factors, to compare pregnancy rates among women receiving different contraceptive and ART combinations. 24,560 women contributed 37,635 years of follow-up with 3337 incident pregnancies. In women using implants, adjusted pregnancy incidence was 1.1 per 100 person-years (95% CI 0.72-1.5) for nevirapine-based ART users and 3.3 per 100 person-years (1.8-4.8) for efavirenz-based ART users (adjusted incidence rate ratio [IRR] 3.0, 95% CI 1.3-4.6). In women using depot medroxyprogesterone acetate, adjusted pregnancy incidence was 4.5 per 100 person-years (95% CI 3.7-5.2) for nevirapine-based ART users and 5.4 per 100 person-years (4.0-6.8) for efavirenz-based ART users (adjusted IRR 1.2, 95% CI 0.91-1.5). Women using other contraceptive methods, except for intrauterine devices and permanent methods, had 3.1-4.1 times higher rates of pregnancy than did those using implants, with 1.6-2.8 times higher rates in women using efavirenz-based ART.
Although HIV-positive women using implants and efavirenz-based ART had a three-times higher risk of contraceptive failure than did those using nevirapine-based ART, these women still had lower contraceptive failure rates than did those receiving all other contraceptive methods except for intrauterine devices and permanent methods. Guidelines for contraceptive and ART combinations should balance the failure rates for each contraceptive method and ART regimen combination against the high effectiveness of implants. Funding: None. Copyright © 2015 Elsevier Ltd. All rights reserved.
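The incidence rates and rate ratio reported for implant users follow from events divided by person-years; the counts below are invented to mirror the reported rates, not the study's actual cell counts:

```python
def incidence_per_100py(events, person_years):
    """Crude incidence rate per 100 person-years of follow-up."""
    return 100.0 * events / person_years

# Invented counts chosen to reproduce the reported adjusted rates
nvp_rate = incidence_per_100py(11, 1000)   # implant + nevirapine-based ART
efv_rate = incidence_per_100py(33, 1000)   # implant + efavirenz-based ART

crude_irr = efv_rate / nvp_rate
print(nvp_rate, efv_rate, crude_irr)  # crude rate ratio of 3, matching the adjusted IRR above
```

The study's actual IRRs come from Poisson regression with covariate adjustment and clustering for repeated measures, but the crude ratio above is the quantity those models adjust.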
Women's Work Conditions and Marital Adjustment in Two-Earner Couples: A Structural Model.
ERIC Educational Resources Information Center
Sears, Heather A.; Galambos, Nancy L.
1992-01-01
Evaluated structural model of women's work conditions, women's stress, and marital adjustment using path analysis. Findings from 86 2-earner couples with adolescents indicated support for spillover model in which women's work stress and global stress mediated link between their work conditions and their perceptions of marital adjustment.…
Latimer, Nicholas R; Abrams, K R; Lambert, P C; Crowther, M J; Wailoo, A J; Morden, J P; Akehurst, R L; Campbell, M J
2017-04-01
Estimates of the overall survival benefit of new cancer treatments are often confounded by treatment switching in randomised controlled trials (RCTs), whereby patients randomised to the control group are permitted to switch onto the experimental treatment upon disease progression. In health technology assessment, estimates of the unconfounded overall survival benefit associated with the new treatment are needed. Several switching adjustment methods have been advocated in the literature, some of which have been used in health technology assessment. However, it is unclear which methods are likely to produce least bias in realistic RCT-based scenarios. We simulated RCTs in which switching, associated with patient prognosis, was permitted. Treatment effect size and time dependency, switching proportions and disease severity were varied across scenarios. We assessed the performance of alternative adjustment methods based upon bias, coverage and mean squared error, related to the estimation of true restricted mean survival in the absence of switching in the control group. We found that when the treatment effect was not time-dependent, rank preserving structural failure time models (RPSFTM) and iterative parameter estimation methods produced low levels of bias. However, in the presence of a time-dependent treatment effect, these methods produced higher levels of bias, similar to those produced by an inverse probability of censoring weights method. The inverse probability of censoring weights and structural nested models produced high levels of bias when switching proportions exceeded 85%. A simplified two-stage Weibull method produced low bias across all scenarios and, provided the treatment switching mechanism is suitable, represents an appropriate adjustment method.
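The core of the RPSFTM named above is a counterfactual survival time in which a patient's time on the experimental treatment is rescaled by an acceleration factor exp(psi). A minimal sketch with an assumed, not estimated, psi (the full method finds psi by g-estimation across the trial):

```python
import math

def counterfactual_time(t_off, t_on, psi):
    """RPSFTM counterfactual: observed survival is split into time off and
    time on the experimental treatment; time on treatment is rescaled by
    exp(psi) to recover the survival the patient would have had untreated.
    psi < 0 means the treatment extends survival."""
    return t_off + math.exp(psi) * t_on

# A control patient switches at 12 months and dies at 30 months;
# psi = -0.5 is illustrative only
t_off, t_on = 12.0, 18.0
u = counterfactual_time(t_off, t_on, -0.5)
print(round(u, 2))  # 22.92: untreated survival shorter than the observed 30 months
```

Comparing these reconstructed untreated times in the control arm against the experimental arm gives the unconfounded treatment effect the abstract refers to.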
Zijlstra, Agnes; Zijlstra, Wiebren
2013-09-01
Inverted pendulum (IP) models of human walking allow for wearable motion-sensor based estimations of spatio-temporal gait parameters during unconstrained walking in daily-life conditions. At present it is unclear to what extent different IP-based estimations yield different results, and reliability and validity have not been investigated in older persons without a specific medical condition. The aim of this study was to compare reliability and validity of four different IP-based estimations of mean step length in independent-living older persons. Participants were assessed twice and walked at different speeds while wearing a tri-axial accelerometer at the lower back. For all step-length estimators, test-retest intra-class correlations approached or were above 0.90. Intra-class correlations with reference step length were above 0.92 with a mean error of 0.0 cm when (1) multiplying the estimated center-of-mass displacement during a step by an individual correction factor in a simple IP model, or (2) adding an individual constant for bipedal stance displacement to the estimated displacement during single stance in a 2-phase IP model. When applying generic corrections or constants in all subjects (i.e. multiplication by 1.25, or adding 75% of foot length), correlations were above 0.75 with a mean error of respectively 2.0 and 1.2 cm. Although the results indicate that an individual adjustment of the IP models provides better estimations of mean step length, a generic adjustment may be preferred for its simplicity when merely evaluating intra-individual differences. Further studies should determine the validity of these IP-based estimations for assessing gait in daily life. Copyright © 2013 Elsevier B.V. All rights reserved.
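The simple IP model referred to above recovers step length from the vertical centre-of-mass excursion during a step. A minimal sketch using the standard IP geometry with the generic 1.25 correction factor named in the abstract; the leg length and excursion values are illustrative:

```python
import math

def ip_step_length(leg_length, com_drop, correction=1.25):
    """Inverted-pendulum estimate of step length from the vertical
    centre-of-mass excursion h during one step: the pendulum geometry
    gives raw length 2*sqrt(2*l*h - h^2), which is then multiplied by a
    correction factor (the abstract's generic factor is 1.25)."""
    raw = 2.0 * math.sqrt(2.0 * leg_length * com_drop - com_drop ** 2)
    return correction * raw

# Illustrative values: 0.9 m pendulum (leg) length, 2 cm CoM excursion
print(round(ip_step_length(0.9, 0.02), 3))  # 0.472 m
```

In practice the excursion h is obtained by double-integrating the vertical acceleration from the trunk-worn accelerometer, and the individual correction factor replaces the generic 1.25 when it can be calibrated against a reference walk.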
Modeling nearshore morphological evolution at seasonal scale
Walstra, D.-J.R.; Ruggiero, P.; Lesser, G.; Gelfenbaum, G.
2006-01-01
A process-based model is compared with field measurements to test and improve our ability to predict nearshore morphological change at seasonal time scales. The field experiment, along the dissipative beaches adjacent to Grays Harbor, Washington USA, successfully captured the transition between the high-energy erosive conditions of winter and the low-energy beach-building conditions typical of summer. The experiment documented shoreline progradation on the order of 20 m and as much as 175 m of onshore bar migration. Significant alongshore variability was observed in the morphological response of the sandbars over a 4 km reach of coast. A detailed sensitivity analysis suggests that the model results are more sensitive to adjusting the sediment transport associated with asymmetric oscillatory wave motions than to adjusting the transport due to mean currents. Initial results suggest that alongshore variations in the initial bathymetry are partially responsible for the observed alongshore variable morphological response during the experiment. Copyright ASCE 2006.
NASA Astrophysics Data System (ADS)
Pachauri, Rupendra Kumar; Chauhan, Yogesh K.
2017-02-01
This paper is a novel attempt to combine two important aspects of fuel cell (FC) research. First, it presents investigations on FC technology and its applications. A description of FC operating principles is followed by a comparative analysis of the present FC technologies, together with the issues concerning various fuels. Second, this paper also proposes a model for the simulation and performance evaluation of a proton exchange membrane fuel cell (PEMFC) generation system. Furthermore, a MATLAB/Simulink-based dynamic model of the PEMFC is developed, and the parameters of the FC are adjusted to emulate a commercially available PEMFC. The system results are obtained for the PEMFC-driven adjustable-speed induction motor drive (ASIMD) system, normally used in electric vehicles, and analysis is carried out for different operating conditions of the FC and ASIMD system. The results validate the system concept and the model.
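A static polarization curve is the usual starting point for a PEMFC model of this kind: open-circuit voltage minus activation, ohmic, and concentration losses. The sketch below uses that standard decomposition with placeholder parameters, not the paper's fitted values:

```python
import math

def cell_voltage(i, e0=1.05, a=0.06, i0=0.001, r=0.15, m=3e-5, n=8.0):
    """Static PEMFC polarization sketch (current density i in A/cm^2):
    open-circuit voltage e0 minus the Tafel activation loss, the ohmic
    loss, and an exponential concentration loss. All parameter values
    are illustrative placeholders."""
    return e0 - a * math.log(i / i0) - r * i - m * math.exp(n * i)

# Cell voltage falls monotonically with current density over the working range
v_low, v_high = cell_voltage(0.1), cell_voltage(0.8)
print(round(v_low, 3), round(v_high, 3))
```

In a dynamic Simulink model this static curve is augmented with the charge double-layer capacitance and fuel-flow dynamics; fitting e0, a, r and the concentration terms to a commercial stack's data sheet is the "parameter adjustment" the abstract describes.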
Lampis, Jessica; Cataudella, Stefania; Agus, Mirian; Busonera, Alessandra; Skowron, Elizabeth A
2018-06-10
Bowen's multigenerational theory provides an account of how the internalization of experiences within the family of origin promotes development of the ability to maintain a distinct self whilst also making intimate connections with others. Differentiated people can maintain their I-position in intimate relationships. They can remain calm in conflictual relationships, resolve relational problems effectively, and reach compromises. Fusion with others, emotional cut-off, and emotional reactivity instead are common reactions to relational stress in undifferentiated people. Emotional reactivity is the tendency to react to stressors with irrational and intense emotional arousal. Fusion with others is an excessive emotional involvement in significant relationships, whilst emotional cut-off is the tendency to manage relationship anxiety through physical and emotional distance. This study is based on Bowen's theory, starting from the assumption that dyadic adjustment can be affected both by a member's differentiation of self (actor effect) and by his or her partner's differentiation of self (partner effect). We used the Actor-Partner Interdependence Model to study the relationship between differentiation of self and dyadic adjustment in a convenience sample of 137 heterosexual Italian couples (nonindependent, dyadic data). The couples completed the Differentiation of Self Inventory and the Dyadic Adjustment Scale. Men's dyadic adjustment depended only on their personal I-position, whereas women's dyadic adjustment was affected by their personal I-position and emotional cut-off as well as by their partner's I-position and emotional cut-off. The empirical and clinical implications of the results are discussed. © 2018 Family Process Institute.
Predictors of adjustment and growth in women with recurrent ovarian cancer.
Ponto, Julie Ann; Ellington, Lee; Mellon, Suzanne; Beck, Susan L
2010-05-01
To analyze predictors of adjustment and growth in women who had experienced recurrent ovarian cancer using components of the Resiliency Model of Family Stress, Adjustment, and Adaptation as a conceptual framework. Cross-sectional. Participants were recruited from national cancer advocacy groups. 60 married or partnered women with recurrent ovarian cancer. Participants completed an online or paper survey. Independent variables included demographic and illness variables and meaning of illness. Outcome variables were psychological adjustment and post-traumatic growth. A model of five predictor variables (younger age, fewer years in the relationship, poorer performance status, greater symptom distress, and more negative meaning) accounted for 64% of the variance in adjustment but did not predict post-traumatic growth. This study supports the use of a model of adjustment that includes demographic, illness, and appraisal variables for women with recurrent ovarian cancer. Symptom distress and poorer performance status were the most significant predictors of adjustment. Younger age and fewer years in the relationship also predicted poorer adjustment. Nurses have the knowledge and skills to influence the predictors of adjustment to recurrent ovarian cancer, particularly symptom distress and poor performance status. Nurses who recognize the predictors of poorer adjustment can anticipate problems and intervene to improve adjustment for women.
Curran, Eileen A; Dalman, Christina; Kearney, Patricia M; Kenny, Louise C; Cryan, John F; Dinan, Timothy G; Khashan, Ali S
2015-09-01
Because the rates of cesarean section (CS) are increasing worldwide, it is becoming increasingly important to understand the long-term effects that mode of delivery may have on child development. To investigate the association between obstetric mode of delivery and autism spectrum disorder (ASD). Perinatal factors and ASD diagnoses based on the International Classification of Diseases, Ninth Revision (ICD-9), and the International Statistical Classification of Diseases, 10th Revision (ICD-10), were identified from the Swedish Medical Birth Register and the Swedish National Patient Register. We conducted stratified Cox proportional hazards regression analysis to examine the effect of mode of delivery on ASD. We then used conditional logistic regression to perform a sibling design study, which consisted of sibling pairs discordant on ASD status. Analyses were adjusted for year of birth (ie, partially adjusted) and then fully adjusted for various perinatal and sociodemographic factors. The population-based cohort study consisted of all singleton live births in Sweden from January 1, 1982, through December 31, 2010. Children were followed up until first diagnosis of ASD, death, migration, or December 31, 2011 (end of study period), whichever came first. The full cohort consisted of 2,697,315 children and 28,290 cases of ASD. Sibling control analysis consisted of 13,411 sibling pairs. Obstetric mode of delivery was defined as unassisted vaginal delivery (VD), assisted VD, elective CS, and emergency CS (defined by before or after onset of labor). ASD status was defined using codes from the ICD-9 (code 299) and the ICD-10 (code F84). In adjusted Cox proportional hazards regression analysis, elective CS (hazard ratio, 1.21; 95% CI, 1.15-1.27) and emergency CS (hazard ratio, 1.15; 95% CI, 1.10-1.20) were associated with ASD when compared with unassisted VD.
In the sibling control analysis, elective CS was not associated with ASD in partially (odds ratio [OR], 0.97; 95% CI, 0.85-1.11) or fully adjusted (OR, 0.89; 95% CI, 0.76-1.04) models. Emergency CS was significantly associated with ASD in partially adjusted analysis (OR, 1.20; 95% CI, 1.06-1.36), but this effect disappeared in the fully adjusted model (OR, 0.97; 95% CI, 0.85-1.11). This study confirms previous findings that children born by CS are approximately 20% more likely to be diagnosed as having ASD. However, the association did not persist when using sibling controls, implying that this association is due to familial confounding by genetic and/or environmental factors.
Bergh, Daniel
2015-01-01
Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handle large samples in test of fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and to compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample sizes down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit underestimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
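The adjusted sample size function discussed above exploits the fact that, for a fixed amount of misfit, the chi-square statistic grows roughly linearly with sample size, so the adjustment is a simple proportion. A minimal sketch with hypothetical numbers:

```python
def adjusted_chi_square(chi2_observed, n_observed, n_target):
    """Rescale a chi-square fit statistic as if it had been computed on a
    smaller sample: chi-square is approximately proportional to n for a
    fixed degree of model misfit."""
    return chi2_observed * n_target / n_observed

# Hypothetical: chi-square of 840 observed at n = 21,000, rescaled to n = 5,000
print(adjusted_chi_square(840.0, 21000, 5000))  # 200.0
```

The alternative strategy in the abstract, drawing an actual random subsample of 5,000 and recomputing chi-square, also carries the subsample's sampling error, which is why the two approaches diverge at small target sizes.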
Origami-inspired building block and parametric design for mechanical metamaterials
NASA Astrophysics Data System (ADS)
Jiang, Wei; Ma, Hua; Feng, Mingde; Yan, Leilei; Wang, Jiafu; Wang, Jun; Qu, Shaobo
2016-08-01
An origami-based building block of mechanical metamaterials is proposed and explained by introducing a mechanism model based on its geometry. According to our model, this origami mechanism exhibits a response to uniaxial tension that depends on its structural parameters. Hence, its mechanical properties can be tuned by adjusting those parameters. Experiments on poly lactic acid (PLA) samples were carried out, and the results are in good agreement with those of finite element analysis (FEA). This work may be useful for designing building blocks of mechanical metamaterials or other complex mechanical structures.
Norton, Giulia; McDonough, Christine M; Cabral, Howard; Shwartz, Michael; Burgess, James F
2015-05-15
Markov cost-utility model. To evaluate the cost-utility of cognitive behavioral therapy (CBT) for the treatment of persistent nonspecific low back pain (LBP) from the perspective of US commercial payers. CBT is widely deemed clinically effective for LBP treatment. The evidence is suggestive of cost-effectiveness. We constructed and validated a Markov intention-to-treat model to estimate the cost-utility of CBT, with 1-year and 10-year time horizons. We applied likelihood of improvement and utilities from a randomized controlled trial assessing CBT to treat LBP. The trial randomized subjects to treatment but subjects freely sought health care services. We derived the cost of equivalent rates and types of services from US commercial claims for LBP for a similar population. For the 10-year estimates, we derived recurrence rates from the literature. The base case included medical and pharmaceutical services and assumed gradual loss of skill in applying CBT techniques. Sensitivity analyses assessed the distribution of service utilization, utility values, and rate of LBP recurrence. We compared health plan designs. Results are based on 5000 iterations of each model and expressed as an incremental cost per quality-adjusted life-year. The incremental cost-utility of CBT was $7197 per quality-adjusted life-year in the first year and $5855 per quality-adjusted life-year over 10 years. The results are robust across numerous sensitivity analyses. No change of parameter estimate resulted in a difference of more than 7% from the base case for either time horizon. Including chiropractic and/or acupuncture care did not substantively affect cost-effectiveness. The model with medical but no pharmaceutical costs was more cost-effective ($5238 for 1 yr and $3849 for 10 yr). CBT is a cost-effective approach to manage chronic LBP among commercial health plan members. Cost-effectiveness is demonstrated for multiple plan designs. Level of Evidence: 2.
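A Markov cohort model of the kind described reduces to accumulating discounted costs and QALYs cycle by cycle for each strategy and taking the ratio of the increments. A two-state sketch with invented transition, cost, and utility inputs, not the paper's calibrated values:

```python
def run_markov(p_improve, cycles, cost_per_cycle, u_well=0.85, u_pain=0.60, rate=0.03):
    """Two-state Markov cohort (persistent LBP vs improved): each cycle a
    fraction p_improve of the symptomatic cohort improves; costs accrue
    only while in pain, utilities accrue in both states, and both are
    discounted at the given annual rate. All inputs are illustrative."""
    in_pain, cost, qalys = 1.0, 0.0, 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + rate) ** t          # discount factor for cycle t
        cost += d * in_pain * cost_per_cycle
        qalys += d * (in_pain * u_pain + (1.0 - in_pain) * u_well)
        in_pain *= (1.0 - p_improve)          # transition at end of cycle
    return cost, qalys

usual_cost, usual_qaly = run_markov(p_improve=0.10, cycles=10, cost_per_cycle=1200.0)
cbt_cost, cbt_qaly = run_markov(p_improve=0.20, cycles=10, cost_per_cycle=1200.0)
cbt_cost += 3000.0  # one-off cost of the CBT programme (assumed)

icer = (cbt_cost - usual_cost) / (cbt_qaly - usual_qaly)
print(round(icer))  # incremental cost per QALY gained
```

The paper's probabilistic version repeats this calculation 5000 times with inputs drawn from distributions; the point estimate above shows only the deterministic skeleton.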
Specialty Payment Model Opportunities and Assessment
Mulcahy, Andrew W.; Chan, Chris; Hirshman, Samuel; Huckfeldt, Peter J.; Kofner, Aaron; Liu, Jodi L.; Lovejoy, Susan L.; Popescu, Ioana; Timbie, Justin W.; Hussey, Peter S.
2015-01-01
Gastroenterology and cardiology services are common and costly among Medicare beneficiaries. Episode-based payment, which aims to create incentives for high-quality, low-cost care, has been identified as a promising alternative payment model. This article describes research related to the design of episode-based payment models for ambulatory gastroenterology and cardiology services for possible testing by the Center for Medicare and Medicaid Innovation at the Centers for Medicare and Medicaid Services (CMS). The authors analyzed Medicare claims data to describe the frequency and characteristics of gastroenterology and cardiology index procedures, the practices that delivered index procedures, and the patients that received index procedures. The results of these analyses can help inform CMS decisions about the definition of episodes in an episode-based payment model; payment adjustments for service setting, multiple procedures, or other factors; and eligibility for the payment model. PMID:28083363
Cost-effectiveness of a vaccine to prevent herpes zoster and postherpetic neuralgia in older adults.
Rothberg, Michael B; Virapongse, Anunta; Smith, Kenneth J
2007-05-15
A vaccine to prevent herpes zoster was recently approved by the United States Food and Drug Administration. We sought to determine the cost-effectiveness of this vaccine for different age groups. We constructed a cost-effectiveness model, based on the Shingles Prevention Study, to compare varicella zoster vaccination with usual care for healthy adults aged >60 years. Outcomes included cost in 2005 US dollars and quality-adjusted life expectancy. Costs and natural history data were drawn from the published literature; vaccine efficacy was assumed to persist for 10 years. For the base case analysis, compared with usual care, vaccination increased quality-adjusted life expectancy by 0.0007-0.0024 quality-adjusted life years per person, depending on age at vaccination and sex. These increases came almost exclusively as a result of prevention of acute pain associated with herpes zoster and postherpetic neuralgia. Vaccination also increased costs by $94-$135 per person, compared with no vaccination. The incremental cost-effectiveness ranged from $44,000 per quality-adjusted life year saved for a 70-year-old woman to $191,000 per quality-adjusted life year saved for an 80-year-old man. For the sensitivity analysis, the decision was most sensitive to vaccine cost. At a cost of $46 per dose, vaccination cost <$50,000 per quality-adjusted life year saved for all adults >60 years of age. Other variables related to the vaccine (duration, efficacy, and adverse effects), postherpetic neuralgia (incidence, duration, and utility), herpes zoster (incidence and severity), and the discount rate all affected the cost-effectiveness ratio by >20%. The cost-effectiveness of the varicella zoster vaccine varies substantially with patient age and often exceeds $100,000 per quality-adjusted life year saved. Age should be considered in vaccine recommendations.
Harrison, Sean; Tilling, Kate; Turner, Emma L; Lane, J Athene; Simpkin, Andrew; Davis, Michael; Donovan, Jenny; Hamdy, Freddie C; Neal, David E; Martin, Richard M
2016-12-01
Previous studies indicate a possible inverse relationship between prostate-specific antigen (PSA) and body mass index (BMI), and a positive relationship between PSA and age. We investigated the associations between age, BMI, PSA, and screen-detected prostate cancer to determine whether an age-BMI-adjusted PSA model would be clinically useful for detecting prostate cancer. Cross-sectional analysis nested within the UK ProtecT trial of treatments for localized cancer. Of 18,238 men aged 50-69 years, 9,457 men without screen-detected prostate cancer (controls) and 1,836 men with prostate cancer (cases) met inclusion criteria: no history of prostate cancer or diabetes; PSA < 10 ng/ml; BMI between 15 and 50 kg/m². Multivariable linear regression models were used to investigate the relationship between log-PSA, age, and BMI in all men, controlling for prostate cancer status. In the 11,293 included men, the median PSA was 1.2 ng/ml (IQR: 0.7-2.6), mean age was 61.7 years (SD 4.9), and mean BMI was 26.8 kg/m² (SD 3.7). There was a 5.1% decrease in PSA per 5 kg/m² increase in BMI (95% CI 3.4-6.8) and a 13.6% increase in PSA per 5-year increase in age (95% CI 12.0-15.1). Interaction tests showed no evidence for different associations between age, BMI, and PSA in men with PSA above and below 3.0 ng/ml (all p for interaction >0.2). The age-BMI-adjusted PSA model performed as well as an age-adjusted model based on National Institute for Health and Care Excellence (NICE) guidelines at detecting prostate cancer. Age and BMI were associated with small changes in PSA. An age-BMI-adjusted PSA model is no more clinically useful for detecting prostate cancer than current NICE guidelines. Future studies looking at the effect of different variables on PSA, independent of their effect on prostate cancer, may improve the discrimination of PSA for prostate cancer.
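The percentage changes reported above follow from the log-PSA regression: a coefficient b for a one-unit predictor change implies a (exp(b) - 1) x 100% change in PSA. A minimal sketch, with coefficients back-derived from the reported percentages rather than taken from the paper:

```python
import math

def pct_change(beta):
    """Percent change in the outcome implied by a log-linear coefficient."""
    return (math.exp(beta) - 1.0) * 100.0

# Hypothetical coefficients per 5 kg/m² of BMI and per 5 years of age,
# chosen to reproduce the reported -5.1% and +13.6% changes in PSA.
b_bmi = math.log(1 - 0.051)   # ≈ -0.0523 per 5 kg/m²
b_age = math.log(1 + 0.136)   # ≈ +0.1275 per 5 years

print(round(pct_change(b_bmi), 1))  # -5.1
print(round(pct_change(b_age), 1))  # 13.6
```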
Glacial isostatic adjustment using GNSS permanent stations and GIA modelling tools
NASA Astrophysics Data System (ADS)
Kollo, Karin; Spada, Giorgio; Vermeer, Martin
2013-04-01
Glacial Isostatic Adjustment (GIA) affects the Earth's mantle in areas that were once ice covered, and the process is still ongoing. In this contribution we focus on GIA processes in the Fennoscandian and North American uplift regions, using horizontal and vertical uplift rates from Global Navigation Satellite System (GNSS) permanent stations: the BIFROST dataset (Lidberg et al., 2010) for Fennoscandia and the dataset of Sella et al. (2007) for North America. We perform GIA modelling with the SELEN program (Spada and Stocchi, 2007), varying ice model parameters in space in order to find the ice model that best fits the uplift values obtained from GNSS time series analysis. In the GIA modelling, the ice models ICE-5G (Peltier, 2004) and ANU05 (Fleming and Lambeck, 2004, and references therein) were used. As reference, the velocity field from GNSS permanent station time series was used for both target areas. First, the sensitivity to the maximum harmonic degree was tested in order to reduce the computation time. In this test, nominal viscosity values and pre-defined lithosphere thickness models were used while the maximum harmonic degree was varied. The main criterion for choosing a suitable harmonic degree was the chi-square fit: if the error measure differs by less than 10%, the lower harmonic degree can be used instead. From this test, a maximum harmonic degree of 72 was chosen, as a larger value did not significantly modify the results while the computational time remained reasonable. Second, the GIA computations were performed to find the model most likely to fit the GNSS-based velocity field in the target areas. To find the best-fitting Earth viscosity parameters, different viscosity profiles for the Earth models were tested and their impact on the horizontal and vertical velocity rates from GIA modelling was studied. 
For every tested model the chi-square misfit for horizontal, vertical and three-dimensional velocity rates from the reference model was found (Milne, 2001). Finally, the best fitting models from GIA modelling were compared with rates obtained from GNSS data. Keywords: Fennoscandia, North America, land uplift, glacial isostatic adjustment, visco-elastic modelling, BIFROST. References Lidberg, M., Johannson, J., Scherneck, H.-G. and Milne, G. (2010). Recent results based on continuous GPS observations of the GIA process in Fennoscandia from BIFROST. Journal of Geodynamics, 50. pp. 8-18. Sella, G. F., Stein, S., Dixon, T. H., Craymer, M., James, T. S., Mazotti, S. and Dokka, R. K. (2007). Observations of glacial isostatic adjustment in "stable" North America with GPS. Geophysical Research Letters, 34, L02306. Spada, G., Stocchi, P. (2007). SELEN: A Fortran 90 program for solving the "sea-level equation". Computers & Geosciences, 33:538-562, 2007. Peltier, W. R. (2004). Global glacial isostasy and the surface of the ice-age Earth: The Ice-5G (VM2) model and GRACE. Annu. Rev. Earth Planet. Sci., 32:111-149, 2004. Fleming, K. and Lambeck, K. (2004). Constraints on the Greenland Ice Sheet since the Last Glacial Maximum from sea-level observations and glacial-rebound models. Quaternary Science Reviews 23 (2004), pp. 1053-1077. Milne, G. A. and Davis, J. L. and Mitrovica, J. X. and Scherneck, H.-G. and Johansson, J. M. and Vermeer, M. and Koivula, H. (2001). Space-geodetic constraints on glacial isostatic adjustment in Fennoscandia. Science 291 (2001), pp. 2381-2385.
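The chi-square criterion used above to rank candidate Earth and ice models can be sketched as below; the station values and uncertainties are illustrative placeholders, not the BIFROST or North American data.

```python
import numpy as np

def chi_square_misfit(v_obs, v_mod, sigma):
    """Reduced chi-square between observed and modelled velocity rates."""
    r = (np.asarray(v_obs) - np.asarray(v_mod)) / np.asarray(sigma)
    return float(np.sum(r**2) / len(r))

# Hypothetical vertical uplift rates (mm/yr) at four GNSS stations,
# with 1-sigma observation errors.
v_obs = [8.1, 6.4, 3.2, 0.9]   # GNSS-derived rates
v_mod = [7.8, 6.9, 3.0, 1.1]   # rates predicted by one candidate GIA model
sigma = [0.5, 0.5, 0.4, 0.3]

misfit = chi_square_misfit(v_obs, v_mod, sigma)   # lower is better
```

Candidate models are then ranked by this misfit, computed separately or jointly for horizontal, vertical, and three-dimensional rates.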
Revisiting the Table 2 fallacy: A motivating example examining preeclampsia and preterm birth.
Bandoli, Gretchen; Palmsten, Kristin; Chambers, Christina D; Jelliffe-Pawlowski, Laura L; Baer, Rebecca J; Thompson, Caroline A
2018-05-21
A "Table 2 Fallacy," as coined by Westreich and Greenland, reports multiple adjusted effect estimates from a single model. This practice, which remains common in the published literature, can be problematic when different types of effect estimates are presented together in a single table. The purpose of this paper is to quantitatively illustrate this potential for misinterpretation with an example estimating the effects of preeclampsia on preterm birth (PTB). We analysed a retrospective population-based cohort of 2 963 888 singleton births in California between 2007 and 2012. We performed a modified Poisson regression to calculate the total effect of preeclampsia on the risk of PTB, adjusting for previous preterm birth, pregnancy alcohol abuse, maternal education, and maternal socio-demographic factors (Model 1). In subsequent models, we report the total effects of previous preterm birth, alcohol abuse, and education on the risk of PTB, comparing and contrasting the controlled direct effects, total effects, and confounded effect estimates resulting from Model 1. The effect estimate for previous preterm birth (a controlled direct effect in Model 1) increased 10% when estimated as a total effect. The risk ratio for alcohol abuse, biased due to an uncontrolled confounder in Model 1, was reduced by 23% when adjusted for drug abuse. The risk ratio for maternal education, solely a predictor of the outcome, was essentially unchanged. Reporting multiple effect estimates from a single model may lead to misinterpretation and lack of reproducibility. This example highlights the need for careful consideration of the types of effects estimated in statistical models. © 2018 John Wiley & Sons Ltd.
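The gap between a confounded and an adjusted risk ratio that underlies this kind of misinterpretation can be shown with a minimal synthetic example. The probabilities below are hypothetical, not the California cohort:

```python
# Illustrative joint distribution: confounder C raises both exposure A and
# outcome Y, so the crude risk ratio for A is biased away from the true 2.0.
p_c = 0.3                               # P(C=1)
p_a = {0: 0.2, 1: 0.6}                  # P(A=1 | C)
p_y = {(0, 0): 0.05, (0, 1): 0.10,      # P(Y=1 | A, C); true RR for A is 2.0
       (1, 0): 0.10, (1, 1): 0.20}

def p_joint(a, c):
    """Joint P(A=a, C=c)."""
    pc = p_c if c else 1 - p_c
    pa = p_a[c] if a else 1 - p_a[c]
    return pc * pa

def risk(a):
    """Crude P(Y=1 | A=a), marginalizing the confounder within A strata."""
    num = sum(p_joint(a, c) * p_y[(a, c)] for c in (0, 1))
    den = sum(p_joint(a, c) for c in (0, 1))
    return num / den

crude_rr = risk(1) / risk(0)            # confounded estimate, ~2.66

# Standardizing to the marginal C distribution recovers the true RR of 2.0.
std = {a: sum((p_c if c else 1 - p_c) * p_y[(a, c)] for c in (0, 1))
       for a in (0, 1)}
adjusted_rr = std[1] / std[0]
```

Reporting `crude_rr` and `adjusted_rr` side by side in one table, without labelling which kind of effect each is, is exactly the trap the paper describes.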
Ray, Joshua A; Boye, Kristina S; Yurgin, Nicole; Valentine, William J; Roze, Stéphane; McKendrick, Jan; Tucker, Daniel M D; Foos, Volker; Palmer, Andrew J
2007-03-01
The aim of this study was to evaluate the long-term clinical and economic outcomes associated with exenatide or insulin glargine, added to oral therapy in individuals with type 2 diabetes inadequately controlled with combination oral agents in the UK setting. A published and validated computer simulation model of diabetes was used to project long-term complications, life expectancy, quality-adjusted life expectancy and direct medical costs. Probabilities of diabetes-related complications were derived from published sources. Treatment effects and patient characteristics were extracted from a recent randomised controlled trial comparing exenatide with insulin glargine. Simulations incorporated published quality of life utilities and UK-specific costs from 2004. Pharmacy costs for exenatide were based on 20, 40, 60, 80 and 100% of the US value (as no price for the UK was available at the time of analysis). Future costs and clinical benefits were discounted at 3.5% annually. Sensitivity analyses were performed. In the base-case analysis exenatide was associated with improvements in life expectancy of 0.057 years and in quality-adjusted life expectancy of 0.442 quality-adjusted life years (QALYs) versus insulin glargine. Long-term projections demonstrated that exenatide was associated with a lower cumulative incidence of most cardiovascular disease (CVD) complications and CVD-related death than insulin glargine. Using the range of cost values, evaluation results showed that exenatide is likely to fall in a range between dominant (cost and life saving) at 20% of the US price and cost-effective (with an ICER of 22,420 pounds per QALY gained) at 100% of the US price, versus insulin glargine. Based on the findings of a recent clinical trial, long-term projections indicated that exenatide is likely to be associated with improvement in life expectancy and quality-adjusted life expectancy compared to insulin glargine. 
The results from this modelling analysis suggest that exenatide is likely to represent good value for money by generally accepted standards in the UK setting in individuals with type 2 diabetes inadequately controlled on oral therapy.
NASA Astrophysics Data System (ADS)
Chulichkov, Alexey I.; Nikitin, Stanislav V.; Emilenko, Alexander S.; Medvedev, Andrey P.; Postylyakov, Oleg V.
2017-10-01
Earlier, we developed a method for estimating the height and speed of clouds from cloud images obtained by a pair of digital cameras. The shift of a fragment of the cloud in the right frame relative to its position in the left frame is used to estimate the height of the cloud and its velocity. This shift is estimated by the method of morphological image analysis. However, this method requires that the axes of the cameras be parallel. Instead of physically adjusting the axes, we use virtual camera adjustment, namely, a transformation of a real frame whose result is what would be obtained if all the axes were perfectly adjusted. For this adjustment, images of stars as infinitely distant objects were used: with perfectly aligned cameras, the images on the right and left frames should be identical. In this paper, we investigate in more detail possible mathematical models of cloud image deformations caused by the misalignment of the axes of the two cameras, as well as by their lens aberrations. The simplest model follows the paraxial approximation of the lens (without lens aberrations) and reduces to an affine transformation of the coordinates of one of the frames. The other two models take into account lens distortion of the 3rd order and of the 3rd and 5th orders, respectively. It is shown that the models differ significantly when converting coordinates near the edges of the frame. Strict statistical criteria allow choosing the most reliable model, the one most consistent with the measurement data. Further, each of these three models was used to determine the parameters of the image deformations. These parameters are used to transform the cloud images into what they would be if measured with an ideally adjusted setup, and then the distance to the cloud is calculated. The results were compared with data from a laser range finder.
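The simplest (paraxial, distortion-free) model reduces virtual alignment to an affine map fitted from matched star coordinates. A sketch under that assumption, with hypothetical star positions rather than real frames:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src (N,2) onto dst (N,2)."""
    n = len(src)
    A = np.hstack([src, np.ones((n, 1))])        # homogeneous [x, y, 1]
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return params                                 # (3,2) affine parameters

# Hypothetical star positions on the left frame, and their positions on a
# misaligned right frame: a small rotation plus a shift.
theta, tx, ty = 0.01, 3.0, -2.0
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
src = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 25.0]])
dst = src @ R.T + [tx, ty]

M = fit_affine(src, dst)
# Residual of the fitted virtual alignment over the star coordinates
residual = np.abs(np.hstack([src, np.ones((5, 1))]) @ M - dst).max()
```

For distortion-free data the affine fit is exact; the paper's point is that near the frame edges real lenses need the 3rd- or 3rd-plus-5th-order distortion terms as well.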
Network congestion control algorithm based on Actor-Critic reinforcement learning model
NASA Astrophysics Data System (ADS)
Xu, Tao; Gong, Lina; Zhang, Wei; Li, Xuhong; Wang, Xia; Pan, Wenwen
2018-04-01
To address the network congestion control problem, a congestion control algorithm based on an Actor-Critic reinforcement learning model is designed. By incorporating a genetic algorithm into the congestion control strategy, network congestion can be detected and prevented more effectively. A simulation experiment of the network congestion control algorithm is designed according to the Actor-Critic reinforcement learning framework. The simulation experiments verify that the AQM controller can predict the dynamic characteristics of the network system. Moreover, the learning strategy is adopted to optimize network performance, and the packet-dropping probability is adaptively adjusted so as to improve network performance and avoid congestion. Based on these findings, it is concluded that the network congestion control algorithm based on the Actor-Critic reinforcement learning model can effectively avoid TCP network congestion.
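The adaptive adjustment of the drop probability can be illustrated with a toy one-state actor-critic, which is a sketch of the general idea, not the paper's algorithm: the critic maintains a reward baseline and the actor's softmax policy shifts toward drop rates that avoid both overflow and excessive loss.

```python
import math
import random

random.seed(0)
actions = [0.0, 0.05, 0.1, 0.2]          # candidate drop probabilities
prefs = [0.0] * len(actions)             # actor preferences (softmax policy)
v = 0.0                                  # critic: running reward baseline
alpha, beta = 0.05, 0.01                 # critic / actor learning rates
load, capacity = 60.0, 55.0              # offered load vs link capacity

def softmax(ps):
    e = [math.exp(p) for p in ps]
    return [x / sum(e) for x in e]

def reward(drop):
    # Penalize both queue overflow (congestion) and dropped packets.
    overflow = max(0.0, load * (1 - drop) - capacity)
    return -(2.0 * overflow + 10.0 * drop)

for step in range(5000):
    probs = softmax(prefs)
    a = random.choices(range(len(actions)), weights=probs)[0]
    td_error = reward(actions[a]) - v     # one-state TD(0) advantage
    v += alpha * td_error                 # critic update
    for i in range(len(actions)):         # actor: policy-gradient update
        grad = (1.0 if i == a else 0.0) - probs[i]
        prefs[i] += beta * td_error * grad

best = actions[max(range(len(actions)), key=prefs.__getitem__)]
```

With these illustrative numbers the policy concentrates on the smallest drop rate that just clears the overflow, mirroring the adaptive drop-probability adjustment described above.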
Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdel-Khalik, Hany S.; Turinsky, Paul J.
2005-07-15
Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. A meaningful adaption will result in high-fidelity and robust adapted core simulator models. To perform adaption, we propose an inverse theory approach in which the multitudes of input data to core simulators, i.e., reactor physics and thermal-hydraulic data, are to be adjusted to improve agreement with measured observables while keeping core simulator models unadapted. At first glance, devising such adaption for typical core simulators with millions of input and observables data would spawn not only several prohibitive challenges but also numerous disparaging concerns. The challenges include the computational burdens of the sensitivity-type calculations required to construct Jacobian operators for the core simulator models. Also, the computational burdens of the uncertainty-type calculations required to estimate the uncertainty information of core simulator input data present a demanding challenge. The concerns however are mainly related to the reliability of the adjusted input data. The methodologies of adaptive simulation are well established in the literature of data adjustment. We adopt the same general framework for data adjustment; however, we refrain from solving the fundamental adjustment equations in a conventional manner. We demonstrate the use of our so-called Efficient Subspace Methods (ESMs) to overcome the computational and storage burdens associated with the core adaption problem. We illustrate the successful use of ESM-based adaptive techniques for a typical boiling water reactor core simulator adaption problem.
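The generic data-adjustment update this framework builds on can be sketched as a generalized least-squares step: inputs p move toward agreement with measurements m, weighted by the input covariance C and measurement covariance R via the Jacobian J. The matrices below are tiny illustrative stand-ins, not a real core simulator.

```python
import numpy as np

def adjust(p, m, f, J, C, R):
    """One GLS adjustment step: p' = p + C Jᵀ (J C Jᵀ + R)⁻¹ (m − f(p))."""
    K = C @ J.T @ np.linalg.inv(J @ C @ J.T + R)
    return p + K @ (m - f(p))

# Hypothetical linear 'simulator': observables = J p
J = np.array([[1.0, 0.5],
              [0.2, 1.0]])
f = lambda p: J @ p

p0 = np.array([1.0, 1.0])        # prior input data
m = np.array([2.0, 1.5])         # measured observables
C = 0.5 * np.eye(2)              # input-data uncertainty
R = 0.01 * np.eye(2)             # measurement uncertainty (small: trusted)

p1 = adjust(p0, m, f, J, C, R)   # adjusted inputs; f(p1) ≈ m
```

The paper's contribution is avoiding the explicit Jacobian and covariance operations above, which are prohibitive at the scale of millions of inputs, via subspace methods.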
Shaw, Souradet Y; Blanchard, James F; Bernstein, Charles N
2015-04-01
Early childhood vaccinations have been hypothesized to contribute to the emergence of paediatric inflammatory bowel disease [IBD] in developed countries. Using linked population-based administrative databases, we aimed to explore the association between vaccination with measles-containing vaccines and the risk for IBD. This was a case-control study using the University of Manitoba IBD Epidemiology Database [UMIBDED]. The UMIBDED was linked to the Manitoba Immunization Monitoring System [MIMS], a population-based database of immunizations administered in Manitoba. All paediatric IBD cases in Manitoba, born after 1989 and diagnosed before March 31, 2008, were included. Controls were matched to cases on the basis of age, sex, and region of residence at time of diagnosis. Measles-containing vaccinations received in the first 2 years of life were documented, with vaccinations categorized as 'None' or 'Complete', with completeness defined according to Manitoba's vaccination schedule. Conditional logistic regression models were fitted to the data, with models adjusted for physician visits in the first 2 years of life and area-level socioeconomic status at case date. A total of 951 individuals [117 cases and 834 controls] met eligibility criteria, with an average age at diagnosis among cases of 11 years. The proportion of IBD cases with completed vaccinations was 97%, compared with 94% of controls. In models adjusted for physician visits and area-level socioeconomic status, no statistically significant association was detected between completed measles vaccinations and the risk of IBD [adjusted odds ratio [AOR]: 1.5; 95% confidence interval [CI]: 0.5-4.4; p = 0.419]. No significant association between completed measles-containing vaccination in the first 2 years of life and paediatric IBD could be demonstrated in this population-based study. Copyright © 2015 European Crohn’s and Colitis Organisation (ECCO). Published by Oxford University Press. All rights reserved.
Hangartner, T N; Short, D F; Eldar-Geva, T; Hirsch, H J; Tiomkin, M; Zimran, A; Gross-Tsur, V
2016-12-01
Anthropometric adjustments of bone measurements are necessary in Prader-Willi syndrome patients to correctly assess the bone status of these patients. This enables physicians to make a more accurate diagnosis of normal versus abnormal bone, allows for early and effective intervention, and achieves better therapeutic results. Bone mineral density (BMD) is decreased in patients with Prader-Willi syndrome (PWS). Because of largely abnormal body height and weight, traditional BMD Z-scores may not provide accurate information in this patient group. The goal of the study was to assess a cohort of individuals with PWS and characterize the development of low bone density based on two adjustment models applied to a dataset of BMD and bone mineral content (BMC) from dual-energy X-ray absorptiometry (DXA) measurements. Fifty-four individuals, aged 5-20 years, with genetically confirmed PWS underwent DXA scans of spine and hip. Thirty-one of them also underwent total-body scans. Standard Z-scores were calculated for BMD and BMC of spine and total hip based on race, sex, and age for all patients, as well as of whole body and whole-body less head for those patients with total-body scans. Additional Z-scores were generated based on anthropometric adjustments using weight, height, and percentage body fat, and a second model using only weight and height in addition to race, sex, and age. As many PWS patients have abnormal anthropometrics, addition of the explanatory variables weight, height, and fat resulted in different bone classifications for many patients. Thus, 25-70% of overweight patients previously diagnosed as normal were subsequently diagnosed as below normal, and 40-60% of patients with below-normal body height changed from below normal to normal, depending on the bone parameter. This is the first study to include anthropometric adjustments in the interpretation of BMD and BMC in children and adolescents with PWS. 
This enables physicians to get a more accurate diagnosis of normal versus abnormal BMD and BMC and allows for early and effective intervention.
Blohmer, J U; Rezai, M; Kümmel, S; Kühn, T; Warm, M; Friedrichs, K; Benkow, A; Valentine, W J; Eiermann, W
2013-01-01
The 21-gene assay (Oncotype DX Breast Cancer Test (Genomic Health Inc., Redwood City, CA)) is a well validated test that predicts the likelihood of adjuvant chemotherapy benefit and the 10-year risk of distant recurrence in patients with ER+, HER2- early-stage breast cancer. The aim of this analysis was to evaluate the cost-effectiveness of using the assay to inform adjuvant chemotherapy decisions in Germany. A Markov model was developed to make long-term projections of distant recurrence, survival, quality-adjusted life expectancy, and direct costs for patients with ER+, HER2-, node-negative, or up to 3 node-positive early-stage breast cancer. Scenarios using conventional diagnostic procedures or the 21-gene assay to inform treatment recommendations for adjuvant chemotherapy were modeled based on a prospective, multi-center trial in 366 patients. Transition probabilities and risk adjustment were based on published landmark trials. Costs (2011 Euros (€)) were estimated from a sick fund perspective based on resource use in patients receiving chemotherapy. Future costs and clinical benefits were discounted at 3% annually. The 21-gene assay was projected to increase mean life expectancy by 0.06 years and quality-adjusted life expectancy by 0.06 quality-adjusted life years (QALYs) compared with current clinical practice over a 30-year time horizon. Clinical benefits were driven by optimized allocation of adjuvant chemotherapy. Costs from a healthcare payer perspective were lower with the 21-gene assay by ∼€561 vs standard of care. Probabilistic sensitivity analysis indicated that there was an 87% probability that the 21-gene assay would be dominant (cost and life saving) to standard of care. Country-specific data on the risk of distant recurrence and quality-of-life were not available. 
Guiding decision-making on adjuvant chemotherapy using the 21-gene assay was projected to improve survival and quality-adjusted life expectancy, and to be cost saving vs the current standard of care in women with ER+, HER2- early-stage breast cancer.
Assessment of simulated aerosol effective radiative forcings in the terrestrial spectrum
NASA Astrophysics Data System (ADS)
Heyn, Irene; Block, Karoline; Mülmenstädt, Johannes; Gryspeerdt, Edward; Kühne, Philipp; Salzmann, Marc; Quaas, Johannes
2017-01-01
In its fifth assessment report (AR5), the Intergovernmental Panel on Climate Change provides a best estimate of the effective radiative forcing (ERF) due to anthropogenic aerosol at -0.9 W m-2. This value is considerably weaker than the estimate of -1.2 W m-2 in AR4. A part of the difference can be explained by an offset of +0.2 W m-2 which AR5 added to all published estimates that only considered the solar spectrum, in order to account for adjustments in the terrestrial spectrum. We find that, in the CMIP5 multimodel median, the ERF in the terrestrial spectrum is small, unless microphysical effects on ice- and mixed-phase clouds are parameterized. In the latter case it is large but accompanied by a very strong ERF in the solar spectrum. The total adjustments can be separated into microphysical adjustments (aerosol "effects") and thermodynamic adjustments. Using a kernel technique, we quantify the latter and find that the rapid thermodynamic adjustments of water vapor and temperature profiles are small. Observation-based constraints on these model results are urgently needed.
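The kernel technique mentioned above quantifies a rapid adjustment's radiative effect as the kernel (radiative sensitivity per unit change of a state variable) multiplied by the modelled change, summed over levels. A minimal sketch with illustrative values, not CMIP5 output:

```python
import numpy as np

# Hypothetical three-layer kernels and adjustment profiles.
kernel_T = np.array([-0.30, -0.25, -0.20])  # W m-2 per K of warming
kernel_q = np.array([0.15, 0.10, 0.05])     # W m-2 per unit change in ln(q)
dT = np.array([0.05, 0.04, 0.02])           # K, temperature adjustment
dlnq = np.array([0.02, 0.01, 0.00])         # water-vapour adjustment

# Thermodynamic adjustment to the forcing: sum of kernel × change per layer
adjustment = float(kernel_T @ dT + kernel_q @ dlnq)   # small, here -0.025 W m-2
```

With changes of this magnitude the resulting term is small relative to the ERF itself, consistent with the finding that the rapid thermodynamic adjustments are minor.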
Lozano, Oscar M; Rojas, Antonio J; Pérez, Cristino; González-Sáiz, Francisco; Ballesta, Rosario; Izaskun, Bilbao
2008-05-01
The aim of this work is to show evidence of the validity of the Health-Related Quality of Life for Drug Abusers Test (HRQoLDA Test). This test was developed to measure specific HRQoL for drugs abusers, within the theoretical addiction framework of the biaxial model. The sample comprised 138 patients diagnosed with opiate drug dependence. In this study, the following constructs and variables of the biaxial model were measured: severity of dependence, physical health status, psychological adjustment and substance consumption. Results indicate that the HRQoLDA Test scores are related to dependency and consumption-related problems. Multiple regression analysis reveals that HRQoL can be predicted from drug dependence, physical health status and psychological adjustment. These results contribute empirical evidence of the theoretical relationships established between HRQoL and the biaxial model, and they support the interpretation of the HRQoLDA Test to measure HRQoL in drug abusers, thus providing a test to measure this specific construct in this population.
Price, A.; Peterson, James T.
2010-01-01
Stream fish managers often use fish sample data to inform management decisions affecting fish populations. Fish sample data, however, can be biased by the same factors affecting fish populations. To minimize the effect of sample biases on decision making, biologists need information on the effectiveness of fish sampling methods. We evaluated single-pass backpack electrofishing and seining combined with electrofishing by following a dual-gear, mark–recapture approach in 61 blocknetted sample units within first- to third-order streams. We also estimated fish movement out of unblocked units during sampling. Capture efficiency and fish abundances were modeled for 50 fish species by use of conditional multinomial capture–recapture models. The best-approximating models indicated that capture efficiencies were generally low and differed among species groups based on family or genus. Efficiencies of single-pass electrofishing and seining combined with electrofishing were greatest for Catostomidae and lowest for Ictaluridae. Fish body length and stream habitat characteristics (mean cross-sectional area, wood density, mean current velocity, and turbidity) also were related to capture efficiency of both methods, but the effects differed among species groups. We estimated that, on average, 23% of fish left the unblocked sample units, but net movement varied among species. Our results suggest that (1) common warmwater stream fish sampling methods have low capture efficiency and (2) failure to adjust for incomplete capture may bias estimates of fish abundance. We suggest that managers minimize bias from incomplete capture by adjusting data for site- and species-specific capture efficiency and by choosing sampling gear that provide estimates with minimal bias and variance. Furthermore, if block nets are not used, we recommend that managers adjust the data based on unconditional capture efficiency.
Adjusting Satellite Rainfall Error in Mountainous Areas for Flood Modeling Applications
NASA Astrophysics Data System (ADS)
Zhang, X.; Anagnostou, E. N.; Astitha, M.; Vergara, H. J.; Gourley, J. J.; Hong, Y.
2014-12-01
This study aims to investigate the use of high-resolution Numerical Weather Prediction (NWP) for evaluating biases of satellite rainfall estimates of flood-inducing storms in mountainous areas and the associated improvements in flood modeling. Satellite-retrieved precipitation has been considered a feasible data source for global-scale flood modeling, given that satellites have a spatial coverage advantage over in situ (rain gauge and radar) observations, particularly over mountainous areas. However, orographically induced heavy precipitation events tend to be underestimated and spatially smoothed by satellite products, and this error propagates non-linearly in flood simulations. We apply a recently developed retrieval error and resolution effect correction method (Zhang et al. 2013*) to the NOAA Climate Prediction Center morphing technique (CMORPH) product based on NWP analysis (or forecasting in the case of real-time satellite products). The NWP rainfall is derived from the Weather Research and Forecasting Model (WRF) set up with high spatial resolution (1-2 km) and explicit treatment of precipitation microphysics. In this study we will show results on NWP-adjusted CMORPH rain rates based on tropical cyclones and a convective precipitation event measured during NASA's IPHEX experiment in the South Appalachian region. We will use hydrologic simulations over different basins in the region to evaluate propagation of the bias correction in flood simulations. We show that the adjustment reduced the underestimation of high rain rates, thus moderating the strong rainfall-magnitude dependence of CMORPH rainfall bias, which results in significant improvement in flood peak simulations. A further study over the Blue Nile Basin (western Ethiopia) will also be included in the presentation. *Zhang, X. et al. 2013: Using NWP Simulations in Satellite Rainfall Estimation of Heavy Precipitation Events over Mountainous Areas. J. Hydrometeor, 14, 1844-1858.
ERIC Educational Resources Information Center
Siman-Tov, Ayelet; Kaniel, Shlomo
2011-01-01
The research validates a multivariate model that predicts parental adjustment to coping successfully with an autistic child. The model comprises four elements: parental stress, parental resources, parental adjustment and the child's autism symptoms. 176 parents of children aged 6 to 16 diagnosed with PDD answered several questionnaires…
Redistribution by insurance market regulation: Analyzing a ban on gender-based retirement annuities.
Finkelstein, Amy; Poterba, James; Rothschild, Casey
2009-01-01
We illustrate how equilibrium screening models can be used to evaluate the economic consequences of insurance market regulation. We calibrate and solve a model of the United Kingdom's compulsory annuity market and examine the impact of gender-based pricing restrictions. We find that the endogenous adjustment of annuity contract menus in response to such restrictions can undo up to half of the redistribution from men to women that would occur with exogenous Social Security-like annuity contracts. Our findings indicate the importance of endogenous contract responses and illustrate the feasibility of employing theoretical insurance market equilibrium models for quantitative policy analysis.
Using enterprise architecture to analyse how organisational structure impacts motivation and learning
NASA Astrophysics Data System (ADS)
Närman, Pia; Johnson, Pontus; Gingnell, Liv
2016-06-01
When technology, environment, or strategy changes, organisations need to adjust their structures accordingly. These structural changes do not always enhance organisational performance as intended, partly because organisational developers do not understand the consequences of structural changes for performance. This article presents a model-based framework for quantitative analysis of the effect of organisational structure on organisational performance, in terms of employee motivation and learning. The model is based on Mintzberg's work on organisational structure. The quantitative analysis is formalised using the Object Constraint Language (OCL) and the Unified Modelling Language (UML), and implemented in an enterprise architecture tool.
ERIC Educational Resources Information Center
Pakenham, Kenneth I.; Samios, Christina; Sofronoff, Kate
2005-01-01
The present study examined the applicability of the double ABCX model of family adjustment in explaining maternal adjustment to caring for a child diagnosed with Asperger syndrome. Forty-seven mothers completed questionnaires at a university clinic while their children were participating in an anxiety intervention. The children were aged between…
Problem behaviour and traumatic dental injuries in adolescents.
Ramchandani, Damini; Marcenes, Wagner; Stansfeld, Stephen A; Bernabé, Eduardo
2016-02-01
To explore the relationship between problem behaviour and traumatic dental injuries (TDI) among 15- to 16-year-old schoolchildren from East London. This cross-sectional study used data from 794 adolescents who participated in phase III of the Research with East London Adolescents Community Health Survey (RELACHS), a school-based prospective study of a representative sample of adolescents. Participants completed a questionnaire and were clinically examined for TDI, overjet and lip coverage. The Strengths and Difficulties Questionnaire (SDQ) was used to assess problem behaviour, which provided a total score and five domain scores (emotional symptoms, conduct problems, hyperactivity, peer problems and pro-social behaviour). The association between problem behaviour and TDI was assessed in unadjusted and adjusted logistic regression models. Adjusted models controlled for demographic (sex, age and ethnicity), socio-economic (parental employment) and clinical factors (overjet and lip coverage). The prevalence of TDI was 17% and the prevalence of problem behaviour, according to the SDQ, was 10%. In the adjusted model, adolescents with problem behaviour were 1.87 (95% confidence interval: 1.03-3.37) times more likely to have TDI than those without problem behaviour. In subsequent analysis by SDQ domains, it was found that only peer problems were associated with TDI (OR = 1.78, 95% CI: 1.01-3.14), even after adjustment for confounders. This study found evidence for a relationship between problem behaviour and TDI among adolescents, which was mainly due to peer relationship problems. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
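The odds-ratio-with-confidence-interval reporting used above can be sketched for the unadjusted case from a 2x2 table using the Woolf (log) method. The counts below are hypothetical, chosen only to be of the same order as the reported prevalences, and the adjusted estimates in the study additionally require a multivariable logistic model:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: TDI vs. no TDI, by problem-behaviour status.
or_, lo, hi = odds_ratio_ci(21, 58, 114, 601)
```

A CI whose lower bound exceeds 1 corresponds to a statistically significant positive association at the 5% level, as in the study's adjusted estimate of 1.87 (1.03-3.37).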
Reliability evaluation of microgrid considering incentive-based demand response
NASA Astrophysics Data System (ADS)
Huang, Ting-Cheng; Zhang, Yong-Jun
2017-07-01
Incentive-based demand response (IBDR) can guide customers to adjust their electricity consumption behaviour and actively curtail load. Meanwhile, distributed generation (DG) and energy storage systems (ESS) can provide time for the implementation of IBDR. The paper focuses on the reliability evaluation of a microgrid considering IBDR. First, the mechanism of IBDR and its impact on power supply reliability are analysed. Second, an IBDR dispatch model incorporating customers' comprehensive assessment, together with a customer response model, is developed. Third, a reliability evaluation method considering IBDR, based on Monte Carlo simulation, is proposed. Finally, the validity of the above models and method is studied through numerical tests on a modified RBTS Bus 6 test system. Simulation results demonstrate that IBDR can improve the reliability of the microgrid.
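The Monte Carlo reliability evaluation can be sketched as follows: sample unit outages hour by hour and count hours in which IBDR curtailment cannot cover the capacity shortfall. The two-unit system, capacities, and outage probabilities below are illustrative assumptions, not the paper's RBTS Bus 6 data:

```python
import random

def loss_of_load_hours(hours, units, load, outage_p, ibdr_curtail, seed=1):
    """Monte Carlo estimate of loss-of-load hours: each unit is
    independently unavailable with probability outage_p; IBDR can
    curtail up to ibdr_curtail of load before supply fails."""
    rng = random.Random(seed)
    lost = 0
    for _ in range(hours):
        cap = sum(c for c in units if rng.random() >= outage_p)
        if load - cap > ibdr_curtail:  # IBDR cannot cover the gap
            lost += 1
    return lost

units = [5.0, 5.0]  # MW, two hypothetical DG units
no_dr = loss_of_load_hours(10000, units, load=8.0, outage_p=0.05,
                           ibdr_curtail=0.0)
with_dr = loss_of_load_hours(10000, units, load=8.0, outage_p=0.05,
                             ibdr_curtail=3.0)
```

With a 3 MW curtailment, a single-unit outage (2 MW shortfall) no longer causes loss of load, so the simulated unreliability drops sharply, mirroring the qualitative finding that IBDR improves microgrid reliability.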
Ljoså, Cathrine Haugene; Tyssen, Reidar; Lau, Bjørn
2011-11-01
This study aimed to investigate the association between individual and psychosocial work factors and mental distress among offshore shift workers in the Norwegian petroleum industry. All 2406 employees of a large Norwegian oil and gas company, who worked offshore during a two-week period in August 2006, were invited to participate in the web-based survey. Completed questionnaires were received from 1336 employees (56% response rate). The outcome variable was mental distress, assessed with a shortened version of the Hopkins Symptom Checklist (HSCL-5). The following individual factors were adjusted for: age, gender, marital status, and shift work locus of control. Psychosocial work factors included: night work, demands, control and support, and shift work-home interference. The level of mental distress was higher among men than women. In the adjusted regression model, the following were associated with mental distress: (i) high scores on quantitative demands, (ii) low level of support, and (iii) high level of shift work-home interference. Psychosocial work factors explained 76% of the total explained variance (adjusted R² = 0.21) in the final adjusted model. Psychosocial work factors, such as quantitative demands, support, and shift work-home interference were independently associated with mental distress. Shift schedules were only univariately associated with mental distress.
Chen, San-Ni; Lian, Iebin; Chen, Yi-Chiao; Ho, Jau-Der
2015-02-01
To investigate peptic ulcer disease and other possible risk factors in patients with central serous chorioretinopathy (CSR) using a population-based database. In this population-based retrospective cohort study, longitudinal data from the Taiwan National Health Insurance Research Database were analyzed. The study cohort comprised 835 patients with CSR and the control cohort comprised 4175 patients without CSR from January 2000 to December 2009. Conditional logistic regression was applied to examine the association of peptic ulcer disease and other possible risk factors for CSR, and stratified Cox regression models were applied to examine whether patients with CSR have an increased chance of peptic ulcer disease and hypertension development. The identifiable risk factors for CSR included peptic ulcer disease (adjusted odds ratio: 1.39, P = 0.001) and higher monthly income (adjusted odds ratio: 1.30, P = 0.006). Patients with CSR also had a significantly higher chance of developing peptic ulcer disease after the diagnosis of CSR (adjusted odds ratio: 1.43, P = 0.009). Peptic ulcer disease and higher monthly income are independent risk factors for CSR. Moreover, patients with CSR also had an increased risk of peptic ulcer development.
Jung, Yihwan; Jung, Moonki; Ryu, Jiseon; Yoon, Sukhoon; Park, Sang-Kyoon; Koo, Seungbum
2016-03-01
Human dynamic models have been used to estimate joint kinetics during various activities. Kinetics estimation is in demand in sports and clinical applications where data on external forces, such as the ground reaction force (GRF), are not available. The purpose of this study was to estimate the GRF during gait by utilizing distance- and velocity-dependent force models between the foot and ground in an inverse-dynamics-based optimization. Ten males were tested as they walked at four different speeds on a force plate-embedded treadmill system. The full-GRF model, whose foot-ground reaction elements were dynamically adjusted according to the vertical displacement and anterior-posterior speed between the foot and ground, was implemented in a full-body skeletal model and estimated the vertical and shear forces of the GRF from body kinematics. The shear-GRF model, with shear reaction elements dynamically adjustable according to the input vertical force, was also implemented in the foot of a full-body skeletal model; shear forces of the GRF were then estimated from body kinematics, the vertical GRF, and the center of pressure (COP). The estimated full GRF had the lowest root mean square (RMS) errors at the slow walking speed (1.0 m/s), with 4.2, 1.3, and 5.7% BW for the anterior-posterior, medial-lateral, and vertical forces, respectively. The estimated shear forces were not significantly different between the full-GRF and shear-GRF models, but the RMS errors of the estimated knee joint kinetics were significantly lower for the shear-GRF model. Providing the COP and vertical GRF from sensors, such as an insole-type pressure mat, can help estimate the shear forces of the GRF and increase the accuracy of joint kinetics estimation. Copyright © 2016 Elsevier B.V. All rights reserved.
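A distance- and velocity-dependent foot-ground reaction element of the kind described can be sketched as a one-sided spring-damper with contact clipping. The stiffness and damping values below are hypothetical placeholders, not the study's calibrated parameters:

```python
def vertical_grf(penetration_m, velocity_mps, k=3.0e4, c=500.0):
    """Vertical contact force from a spring-damper element:
    force grows with penetration depth (spring term) and with
    downward velocity (damper term); zero when off the ground.
    k [N/m] and c [N*s/m] are illustrative values."""
    if penetration_m <= 0.0:  # foot above ground: no contact
        return 0.0
    spring = k * penetration_m
    damper = c * max(-velocity_mps, 0.0)  # damping only while moving down
    return max(spring + damper, 0.0)

# 1 cm penetration, moving downward at 0.1 m/s
f = vertical_grf(0.01, -0.1)  # 300 N spring + 50 N damper = 350 N
```

Summing such elements over contact points on the foot, and letting their parameters vary with foot-ground kinematics, yields the kind of dynamically adjusted reaction model the study embeds in its skeletal model.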
Research on the thickness control method of workbench oil film based on theoretical model
NASA Astrophysics Data System (ADS)
Pei, Tang; Lin, Lin; Liu, Ge; Yu, Liping; Xu, Zhen; Zhao, Di
2018-06-01
To improve the adjustability of workbench oil film thickness, we designed a software system to control the thickness of the oil film based on the Siemens 840Dsl CNC system and set up an experimental platform. A regulation scheme for oil film thickness based on a theoretical model is proposed, and its accuracy and feasibility are confirmed by experimental results. It is verified that the method can meet the demands of workbench oil film thickness control; the experiment is simple and efficient, with high control precision. This provides reliable theoretical support for the development of an active control system for workbench oil film.
The Nottingham Adjustment Scale: a validation study.
Dodds, A G; Flannigan, H; Ng, L
1993-09-01
The concept of adjustment to acquired sight loss is examined in the context of existing loss models. An alternative conceptual framework is presented which addresses the 'blindness experience' and suggests that the depression so frequently encountered in those losing their sight can be better understood by recourse to cognitive factors than to psychoanalytically based theories of grieving. A scale to measure psychological status before and after rehabilitation is described; its factorial validity is demonstrated, as is its validity for measuring change. Practitioners are encouraged to adopt a similar perspective in other areas of acquired disability.
Punamäki, R L; Qouta, S; el Sarraj, E
1997-08-01
The relations between traumatic events, perceived parenting styles, children's resources, political activity, and psychological adjustment were examined among 108 Palestinian boys and girls aged 11 to 12 years. The results showed that exposure to traumatic events increased psychological adjustment problems both directly and via two mediating paths. First, the more traumatic events children had experienced, the more negative parenting they experienced; and the poorer they perceived parenting to be, the more they suffered from high neuroticism and low self-esteem. Second, the more traumatic events children had experienced, the more political activity they showed; and the more active they were, the more they suffered from psychological adjustment problems. Good perceived parenting protected children's psychological adjustment by making them less vulnerable in two ways. First, traumatic events decreased children's intellectual, creative, and cognitive resources, and a lack of resources predicted many psychological adjustment problems in a model excluding perceived parenting. Second, political activity increased psychological adjustment problems in the same model, but not in the model including good parenting.
Griffiths, Thomas; Kanjee, Zahir; Battistoli, Dale; Dorr, Lorenzo; Lorenzen, Breeanna; Thomson, Dana R.; Waters, Ami; Roberts, Ruth; Smith, Wilmot L.; Kraemer, John D.
2016-01-01
Background: The Ebola virus disease (EVD) epidemic has threatened access to basic health services through facility closures, resource diversion, and decreased demand due to community fear and distrust. While modeling studies have attempted to estimate the impact of these disruptions, no studies have yet utilized population-based survey data. Methods and Findings: We conducted a two-stage, cluster-sample household survey in Rivercess County, Liberia, in March–April 2015, which included a maternal and reproductive health module. We constructed a retrospective cohort of births beginning 4 y before the first day of survey administration (beginning March 24, 2011). We then fit logistic regression models to estimate associations between our primary outcome, facility-based delivery (FBD), and time period, defined as the pre-EVD period (March 24, 2011–June 14, 2014) or EVD period (June 15, 2014–April 13, 2015). We fit both univariable and multivariable models, adjusted for known predictors of facility delivery, accounting for clustering using linearized standard errors. To strengthen causal inference, we also conducted stratified analyses to assess changes in FBD by whether respondents believed that health facility attendance was an EVD risk factor. A total of 1,298 women from 941 households completed the survey. Median age at the time of survey was 29 y, and over 80% had a primary education or less. There were 686 births reported in the pre-EVD period and 212 in the EVD period. The unadjusted odds ratio of facility-based delivery in the EVD period was 0.66 (95% confidence interval [CI] 0.48–0.90, p-value = 0.010). Adjustment for potential confounders did not change the observed association, either in the principal model (adjusted odds ratio [AOR] = 0.70, 95%CI 0.50–0.98, p = 0.037) or a fully adjusted model (AOR = 0.69, 95%CI 0.50–0.97, p = 0.033). The association was robust in sensitivity analyses.
The reduction in FBD during the EVD period was observed among those reporting a belief that health facilities are or may be a source of Ebola transmission (AOR = 0.59, 95%CI 0.36–0.97, p = 0.038), but not those without such a belief (AOR = 0.90, 95%CI 0.59–1.37, p = 0.612). Limitations include the possibility of FBD secular trends coincident with the EVD period, recall errors, and social desirability bias. Conclusions: We detected a 30% decreased odds of FBD after the start of EVD in a rural Liberian county with relatively few cases. Because health facilities never closed in Rivercess County, this estimate may under-approximate the effect seen in the most heavily affected areas. These are the first population-based survey data to show collateral disruptions to facility-based delivery caused by the West African EVD epidemic, and they reinforce the need to consider the full spectrum of implications caused by public health emergencies. PMID:27482706
Automating Rule Strengths in Expert Systems.
1987-05-01
systems were designed in an incremental, iterative way. One of the most easily identifiable phases in this process, sometimes called tuning, consists...attenuators. The designer of the knowledge-based system must determine (synthesize) or adjust (refine, if estimates of the values are given) these...values. We consider two ways in which the designer can learn the values. We call the first model of learning the complete case and the second model the