Nonlinearity analysis of measurement model for vision-based optical navigation system
NASA Astrophysics Data System (ADS)
Li, Jianguo; Cui, Hutao; Tian, Yang
2015-02-01
In autonomous optical navigation systems based on line-of-sight vector observations, the nonlinearity of the measurement model is strongly correlated with navigation performance. By quantitatively computing the degree of nonlinearity of the focal-plane model and the unit-vector model, this paper determines which optical measurement model performs better. First, the measurement equations and measurement noise statistics of the two line-of-sight measurement models are established from the perspective-projection collinearity equation. The nonlinear effects of the measurement model on filter performance are then analyzed within the framework of the extended Kalman filter, and the degrees of nonlinearity of the two measurement models are compared using curvature measures from differential geometry. Finally, a simulation of star-tracker-based attitude determination is presented to confirm the superiority of the unit-vector measurement model. Simulation results show that the magnitude of the curvature nonlinearity measure is consistent with filter performance, and that the unit-vector measurement model yields higher estimation precision and faster convergence.
Wagner, Maximilian E H; Gellrich, Nils-Claudius; Friese, Karl-Ingo; Becker, Matthias; Wolter, Franz-Erich; Lichtenstein, Juergen T; Stoetzer, Marcus; Rana, Majeed; Essig, Harald
2016-01-01
Objective determination of the orbital volume is important in the diagnostic process and in evaluating the efficacy of medical and/or surgical treatment of orbital diseases. Tools designed to measure orbital volume with computed tomography (CT) often cannot be used with cone beam CT (CBCT) because of inferior tissue representation, although CBCT has the benefit of greater availability and lower patient radiation exposure. Therefore, a model-based segmentation technique is presented as a new method for measuring orbital volume and compared to alternative techniques. Both eyes from thirty subjects with no known orbital pathology who had undergone CBCT as a part of routine care were evaluated (n = 60 eyes). Orbital volume was measured with manual, atlas-based, and model-based segmentation methods. Volume measurements, volume determination time, and usability were compared between the three methods. Differences in means were tested for statistical significance using two-tailed Student's t tests. Neither atlas-based (26.63 ± 3.15 mm³) nor model-based (26.87 ± 2.99 mm³) measurements were significantly different from manual volume measurements (26.65 ± 4.0 mm³). However, the time required to determine orbital volume was significantly longer for manual measurements (10.24 ± 1.21 min) than for atlas-based (6.96 ± 2.62 min, p < 0.001) or model-based (5.73 ± 1.12 min, p < 0.001) measurements. All three orbital volume measurement methods examined can accurately measure orbital volume, although atlas-based and model-based methods seem to be more user-friendly and less time-consuming. The new model-based technique achieves fully automated segmentation results, whereas all atlas-based segmentations at least required manipulations to the anterior closing. Additionally, model-based segmentation can provide reliable orbital volume measurements when CT image quality is poor.
Fischer, Kenneth J; Johnson, Joshua E; Waller, Alexander J; McIff, Terence E; Toby, E Bruce; Bilgen, Mehmet
2011-10-01
The objective of this study was to validate the MRI-based joint contact modeling methodology in the radiocarpal joints by comparison of model results with invasive specimen-specific radiocarpal contact measurements from four cadaver experiments. We used a single validation criterion for multiple outcome measures to characterize the utility and overall validity of the modeling approach. For each experiment, a Pressurex film and a Tekscan sensor were sequentially placed into the radiocarpal joints during simulated grasp. Computer models were constructed based on MRI visualization of the cadaver specimens without load. Images were also acquired during the loaded configuration used with the direct experimental measurements. Geometric surface models of the radius, scaphoid and lunate (including cartilage) were constructed from the images acquired without the load. The carpal bone motions from the unloaded state to the loaded state were determined using a series of 3D image registrations. Cartilage thickness was assumed uniform at 1.0 mm with an effective compressive modulus of 4 MPa. Validation was based on experimental versus model contact area, contact force, average contact pressure and peak contact pressure for the radioscaphoid and radiolunate articulations. Contact area was also measured directly from images acquired under load and compared to the experimental and model data. Qualitatively, there was good correspondence between the MRI-based model data and experimental data, with consistent relative size, shape and location of radioscaphoid and radiolunate contact regions. Quantitative data from the model generally compared well with the experimental data for all specimens. Contact area from the MRI-based model was very similar to the contact area measured directly from the images. For all outcome measures except average and peak pressures, at least two specimen models met the validation criteria with respect to experimental measurements for both articulations. 
Only the model for one specimen met the validation criteria for average and peak pressure of both articulations; however, the experimental measures of peak pressure also exhibited high variability. MRI-based modeling can reliably be used for evaluating contact area and contact force with confidence similar to that of currently available experimental techniques. Average contact pressure and peak contact pressure were more variable across all measurement techniques, and these measures from MRI-based modeling should be used with some caution.
NASA Astrophysics Data System (ADS)
Wang, Huan-huan; Wang, Jian; Liu, Feng; Cao, Hai-juan; Wang, Xiang-jun
2014-12-01
A test environment is established to obtain experimental data for verifying the positioning model derived previously from the pinhole imaging model and the theory of binocular stereo vision measurement. The model requires that the optical axes of the two cameras meet at one point, which is defined as the origin of the world coordinate system, thus simplifying and optimizing the positioning model. The experimental data are processed, and tables and charts are given comparing object positions measured with DGPS (measurement accuracy of 10 cm), used as the reference, against those obtained from the positioning model. Sources of error in the visual measurement model are analyzed, and the effects of errors in camera and system parameters on the accuracy of the positioning model are examined based on error transfer and synthesis rules. It is concluded that the measurement accuracy of surface surveillance based on binocular stereo vision measurement is better than that of surface movement radars, ADS-B (Automatic Dependent Surveillance-Broadcast), and MLAT (Multilateration).
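The core binocular positioning step can be illustrated by ray triangulation. Below is a minimal sketch using the midpoint method; the camera positions, ray directions, and the triangulation method itself are illustrative assumptions for exposition, not the paper's exact positioning model:

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Return the point closest to both rays p = c + t*d (midpoint method)."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve for t1, t2 minimizing |(c1 + t1*d1) - (c2 + t2*d2)|^2.
    b = c2 - c1
    a11, a12, a22 = d1 @ d1, -(d1 @ d2), d2 @ d2
    t1, t2 = np.linalg.solve([[a11, a12], [a12, a22]], [d1 @ b, -(d2 @ b)])
    p1, p2 = c1 + t1 * d1, c2 + t2 * d2
    return (p1 + p2) / 2          # midpoint of the shortest connecting segment

# Two cameras on a 10 m baseline, both aimed at a surface target 50 m away.
c1, c2 = np.array([-5.0, 0.0, 0.0]), np.array([5.0, 0.0, 0.0])
target = np.array([0.0, 50.0, 0.0])
p = triangulate_midpoint(c1, target - c1, c2, target - c2)
print(p)   # → close to [0, 50, 0]
```

With noisy image coordinates the two rays become skew, and the midpoint of their shortest connecting segment gives the position estimate; the error-transfer analysis in the abstract concerns how camera-parameter errors perturb this geometry.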
Comparison of measurement- and proxy-based Vs30 values in California
Yong, Alan K.
2016-01-01
This study was prompted by the recent availability of a significant number of openly accessible measured VS30 values and the desire to investigate the trend of using proxy-based models to predict VS30 in the absence of measurements. Comparisons between measured and model-based values were performed. The measured data included 503 VS30 values collected from various projects for 482 seismographic station sites in California. Six proxy-based models—employing geologic mapping, topographic slope, and terrain classification—were also considered. Included was a new terrain class model based on the Yong et al. (2012) approach but recalibrated with updated measured VS30 values. Using the measured VS30 data as the metric for performance, the predictive capabilities of the six models were determined to be statistically indistinguishable. This study also found that three models tend to underpredict VS30 at lower velocities (NEHRP Site Classes D–E) and overpredict at higher velocities (Site Classes B–C).
Dhana, Klodian; Ikram, M Arfan; Hofman, Albert; Franco, Oscar H; Kavousi, Maryam
2015-03-01
Body mass index (BMI) has been used to simplify cardiovascular risk prediction models by substituting for total cholesterol and high-density lipoprotein cholesterol. In the elderly, the ability of BMI to predict cardiovascular disease (CVD) declines. We aimed to find the most predictive anthropometric measure for CVD risk, to construct a non-laboratory-based model, and to compare it with a model including laboratory measurements. The study included 2675 women and 1902 men aged 55-79 years from the prospective population-based Rotterdam Study. We used Cox proportional hazard regression analysis to evaluate the association of BMI, waist circumference, waist-to-hip ratio, and a body shape index (ABSI) with CVD, including coronary heart disease and stroke. The performance of the laboratory-based and non-laboratory-based models was evaluated by studying discrimination, calibration, correlation, and risk agreement. Among men, ABSI was the most informative measure associated with CVD; therefore, ABSI was used to construct the non-laboratory-based model. Discrimination of the non-laboratory-based model did not differ from that of the laboratory-based model (c-statistic: 0.680-vs-0.683, p=0.71); both models were well calibrated (15.3% observed CVD risk vs 16.9% and 17.0% predicted CVD risks by the non-laboratory-based and laboratory-based models, respectively), and the Spearman rank correlation and agreement between the non-laboratory-based and laboratory-based models were 0.89 and 91.7%, respectively. Among women, none of the anthropometric measures were independently associated with CVD. Among middle-aged and elderly men, where the ability of BMI to predict CVD declines, the non-laboratory-based model based on ABSI could predict CVD risk as accurately as the laboratory-based model.
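The abstract does not restate the ABSI formula; for reference, a minimal sketch using the Krakauer and Krakauer (2012) definition commonly used in this line of work (the example measurements are hypothetical):

```python
def absi(waist_m, height_m, weight_kg):
    """A Body Shape Index: WC / (BMI^(2/3) * height^(1/2)), all in SI units.

    ABSI rescales waist circumference (WC) so that it is approximately
    independent of height and BMI; higher values indicate more central
    (abdominal) concentration of body volume.
    """
    bmi = weight_kg / height_m ** 2
    return waist_m / (bmi ** (2.0 / 3.0) * height_m ** 0.5)

# Hypothetical subject: 94 cm waist, 1.75 m tall, 80 kg.
print(round(absi(0.94, 1.75, 80.0), 4))
```

Typical adult ABSI values cluster around 0.08 m^(11/6) kg^(-2/3); in the study's Cox models the index enters as a continuous covariate like any other anthropometric measure.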
Kerckhoffs, Jules; Hoek, Gerard; Vlaanderen, Jelle; van Nunen, Erik; Messier, Kyle; Brunekreef, Bert; Gulliver, John; Vermeulen, Roel
2017-11-01
Land-use regression (LUR) models for ultrafine particles (UFP) and black carbon (BC) in urban areas have been developed using short-term stationary monitoring or mobile platforms in order to capture the high variability of these pollutants. However, little is known about the comparability of predictions of mobile and short-term stationary models, and especially about the validity of these models for assessing residential exposures and the robustness of model predictions developed in different campaigns. We used an electric car to collect mobile measurements (n = 5236 unique road segments) and short-term stationary measurements (3 × 30 min, n = 240) of UFP and BC in three Dutch cities (Amsterdam, Utrecht, Maastricht) in 2014-2015. Predictions of LUR models based on mobile measurements were compared to (i) measured concentrations at the short-term stationary sites, (ii) LUR model predictions based on short-term stationary measurements at 1500 random addresses in the three cities, (iii) externally obtained home outdoor measurements (3 × 24 h samples; n = 42), and (iv) predictions of a LUR model developed from a 2013 mobile campaign in two cities (Amsterdam, Rotterdam). Despite the poor model R² of 15%, the ability of mobile UFP models to predict measurements with longer averaging times increased substantially, from 36% for short-term stationary measurements to 57% for home outdoor measurements. In contrast, the mobile BC model predicted only 14% of the variation at the short-term stationary sites and likewise 14% at the home outdoor sites. Models based on mobile and short-term stationary monitoring provided fairly highly correlated predictions of UFP concentrations at the 1500 randomly selected addresses in the three Dutch cities (R² = 0.64). We found higher UFP predictions (of about 30%) based on mobile models as opposed to short-term model predictions and home outdoor measurements, with no clear geospatial patterns.
The mobile model for UFP was stable across settings: at the 1500 random addresses, it predicted concentration levels highly correlated with predictions made by a previously developed LUR model with a different spatial extent and from a different year (R² = 0.80). In conclusion, mobile monitoring provided robust LUR models for UFP, valid for use in epidemiological studies.
Copula based prediction models: an application to an aortic regurgitation study
Kumar, Pranesh; Shoukri, Mohamed M
2007-01-01
Background: An important issue in prediction modeling of multivariate data is the measure of dependence structure. The use of Pearson's correlation as a dependence measure has several pitfalls and hence application of regression prediction models based on this correlation may not be an appropriate methodology. As an alternative, a copula based methodology for prediction modeling and an algorithm to simulate data are proposed. Methods: The method consists of introducing copulas as an alternative to the correlation coefficient commonly used as a measure of dependence. An algorithm based on the marginal distributions of random variables is applied to construct the Archimedean copulas. Monte Carlo simulations are carried out to replicate datasets, estimate prediction model parameters and validate them using Lin's concordance measure. Results: We have carried out a correlation-based regression analysis on data from 20 patients aged 17–82 years on pre-operative and post-operative ejection fractions after surgery and estimated the prediction model: Post-operative ejection fraction = - 0.0658 + 0.8403 (Pre-operative ejection fraction); p = 0.0008; 95% confidence interval of the slope coefficient (0.3998, 1.2808). From the exploratory data analysis, it is noted that both the pre-operative and post-operative ejection fractions measurements have slight departures from symmetry and are skewed to the left. It is also noted that the measurements tend to be widely spread and have shorter tails compared to normal distribution. Therefore predictions made from the correlation-based model corresponding to the pre-operative ejection fraction measurements in the lower range may not be accurate. Further it is found that the best approximated marginal distributions of pre-operative and post-operative ejection fractions (using q-q plots) are gamma distributions. 
The copula based prediction model is estimated as: Post-operative ejection fraction = - 0.0933 + 0.8907 × (Pre-operative ejection fraction); p = 0.00008; 95% confidence interval for slope coefficient (0.4810, 1.3003). Between the two models, the predicted post-operative ejection fractions in the lower range of pre-operative ejection measurements differ considerably, and prediction errors under the copula model are smaller. To validate the copula methodology we re-sampled with replacement fifty independent bootstrap samples and estimated concordance statistics of 0.7722 (p = 0.0224) for the copula model and 0.7237 (p = 0.0604) for the correlation model. The predicted and observed measurements are concordant for both models. The estimates of the accuracy components are 0.9233 and 0.8654 for the copula and correlation models, respectively. Conclusion: Copula-based prediction modeling is demonstrated to be an appropriate alternative to conventional correlation-based prediction modeling, since correlation-based prediction models are not appropriate for modeling dependence in populations with asymmetrical tails. The proposed copula-based prediction model has been validated using the independent bootstrap samples. PMID:17573974
Measurement-based reliability/performability models
NASA Technical Reports Server (NTRS)
Hsueh, Mei-Chen
1987-01-01
Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
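The practical difference between exponential (Markov) and general (semi-Markov) holding times can be shown with a small availability simulation. This is a hedged sketch: the alternating two-state structure and the Weibull parameters are illustrative assumptions, not the measured IBM 3081 error and recovery data:

```python
import random

random.seed(1)

def simulate(horizon, up_shape=0.7, up_scale=100.0, down_shape=1.5, down_scale=1.0):
    """Estimate availability of a system alternating between an operational
    state and an error/recovery state with Weibull (non-exponential) holding
    times, as a semi-Markov process allows."""
    t, up_time, state = 0.0, 0.0, "up"
    while t < horizon:
        if state == "up":
            hold = random.weibullvariate(up_scale, up_shape)   # scale, shape
            up_time += min(hold, horizon - t)                  # clip final interval
            state = "down"
        else:
            hold = random.weibullvariate(down_scale, down_shape)
            state = "up"
        t += hold
    return up_time / horizon

print(round(simulate(1_000_000.0), 3))
```

A shape parameter below 1 for the operational state gives the long-tailed holding times a simple exponential cannot capture; replacing both Weibulls with exponentials of the same means recovers the plain Markov special case.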
Measurement-based quantum communication with resource states generated by entanglement purification
NASA Astrophysics Data System (ADS)
Wallnöfer, J.; Dür, W.
2017-01-01
We investigate measurement-based quantum communication with noisy resource states that are generated by entanglement purification. We consider the transmission of encoded information via noisy quantum channels using a measurement-based implementation of encoding, error correction, and decoding. We show that such an approach offers advantages over direct transmission, gate-based error correction, and measurement-based schemes with direct generation of resource states. We analyze the noise structure of resource states generated by entanglement purification and show that a local error model, i.e., noise acting independently on all qubits of the resource state, is a good approximation in general, and provides an exact description for Greenberger-Horne-Zeilinger states. The latter are resources for a measurement-based implementation of error-correction codes for bit-flip or phase-flip errors. This provides an approach to link the recently found very high thresholds for fault-tolerant measurement-based quantum information processing based on local error models for resource states with error thresholds for gate-based computational models.
ERIC Educational Resources Information Center
Fulmer, Gavin W.; Liang, Ling L.
2013-01-01
This study tested a student survey to detect differences in instruction between teachers in a modeling-based science program and comparison group teachers. The Instructional Activities Survey measured teachers' frequency of modeling, inquiry, and lecture instruction. Factor analysis and Rasch modeling identified three subscales, Modeling and…
NASA Astrophysics Data System (ADS)
El Akbar, R. Reza; Anshary, Muhammad Adi Khairul; Hariadi, Dennis
2018-02-01
Model MACP for HE ver. 1 is a model describing how to measure and monitor performance in higher education. A review of research related to the model identified several components to develop in further research, so this study has four main objectives. The first is to differentiate the CSF (critical success factor) components of the previous model; the second is to explore the KPIs (key performance indicators) of the previous model; the third, building on the first two, is to design a new and more detailed model; and the fourth is to design a prototype application for performance measurement in higher education based on the new model. The methods used are exploratory research and application design using prototyping. The results of this study are, first, a new and more detailed model for measuring and monitoring performance in higher education, obtained by differentiating and exploring Model MACP for HE Ver. 1; second, a dictionary of college performance measurement compiled by re-evaluating the existing indicators; and third, the design of a prototype application for performance measurement in higher education.
Determinants of perceived sleep quality in normal sleepers.
Goelema, M S; Regis, M; Haakma, R; van den Heuvel, E R; Markopoulos, P; Overeem, S
2017-09-20
This study aimed to establish the determinants of perceived sleep quality over a longer period of time, taking into account the separate contributions of actigraphy-based sleep measures and self-reported sleep indices. Fifty participants (52 ± 6.6 years; 27 females) completed two consecutive weeks of home monitoring, during which they kept a sleep-wake diary while their sleep was monitored using a wrist-worn actigraph. The diary included questions on perceived sleep quality, sleep-wake information, and additional factors such as well-being and stress. The data were analyzed using multilevel models to compare a model that included only actigraphy-based sleep measures (model Acti) to a model that included only self-reported sleep measures to explain perceived sleep quality (model Self). In addition, a model based on the self-reported sleep measures and extended with nonsleep-related factors was analyzed to find the most significant determinants of perceived sleep quality (model Extended). Self-reported sleep measures (model Self) explained 61% of the total variance, while actigraphy-based sleep measures (model Acti) only accounted for 41% of the perceived sleep quality. The main predictors in the self-reported model were number of awakenings during the night, sleep onset latency, and wake time after sleep onset. In the extended model, the number of awakenings during the night and total sleep time of the previous night were the strongest determinants of perceived sleep quality, with 64% of the variance explained. In our cohort, perceived sleep quality was mainly determined by self-reported sleep measures and less by actigraphy-based sleep indices. These data further stress the importance of taking multiple nights into account when trying to understand perceived sleep quality.
Model based design introduction: modeling game controllers to microprocessor architectures
NASA Astrophysics Data System (ADS)
Jungwirth, Patrick; Badawy, Abdel-Hameed
2017-04-01
We present an introduction to model based design. Model based design is a visual representation, generally a block diagram, used to model and incrementally develop a complex system. It is a commonly used design methodology for digital signal processing, control systems, and embedded systems. Model based design's philosophy is to solve a problem one step at a time; the approach can be viewed as a series of steps that converge to a solution. A block diagram simulation tool allows a design to be simulated with real world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded. The digital control algorithm can then be simulated with the real world sensor data, and the output from the simulated digital control system compared to the old analog control system. Model based design can be compared to Agile software development. The Agile goal is to develop working software in incremental steps, with progress measured in completed and tested code units; in model based design, progress is measured in completed and tested blocks. We present a concept for a video game controller and then use model based design to iterate the design towards a working system. We also describe a model based design effort to develop an OS Friendly Microprocessor Architecture based on the RISC-V.
Matias, Carla; O'Connor, Thomas G; Futh, Annabel; Scott, Stephen
2014-01-01
Conceptually and methodologically distinct models exist for assessing quality of parent-child relationships, but few studies contrast competing models or assess their overlap in predicting developmental outcomes. Using observational methodology, the current study examined the distinctiveness of attachment theory-based and social learning theory-based measures of parenting in predicting two key measures of child adjustment: security of attachment narratives and social acceptance in peer nominations. A total of 113 5-6-year-old children from ethnically diverse families participated. Parent-child relationships were rated using standard paradigms. Measures derived from attachment theory included sensitive responding and mutuality; measures derived from social learning theory included positive attending, directives, and criticism. Child outcomes were independently rated attachment narrative representations and peer nominations. Results indicated that attachment theory-based and social learning theory-based measures were modestly correlated; nonetheless, parent-child mutuality predicted secure child attachment narratives independently of social learning theory-based measures, whereas criticism predicted peer-nominated fighting independently of attachment theory-based measures. In young children, there is some evidence that attachment theory-based measures may be particularly predictive of attachment narratives; however, no single model of measuring parent-child relationships is likely to best predict multiple developmental outcomes. Assessment in research and applied settings may benefit from integration of different theoretical and methodological paradigms.
An Approach to the Evaluation of Hypermedia.
ERIC Educational Resources Information Center
Knussen, Christina; And Others
1991-01-01
Discusses methods that may be applied to the evaluation of hypermedia, based on six models described by Lawton. Techniques described include observation, self-report measures, interviews, automated measures, psychometric tests, checklists and criterion-based techniques, process models, Experimentally Measuring Usability (EMU), and a naturalistic…
NASA Astrophysics Data System (ADS)
Shrivastava, Akash; Mohanty, A. R.
2018-03-01
This paper proposes a model-based method to estimate single-plane unbalance parameters (amplitude and phase angle) in a rotor using a Kalman filter and a recursive least squares based input force estimation technique. The Kalman filter based input force estimation technique requires a state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets, including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented, and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.
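The recursive least squares step can be sketched on a synthetic synchronous force signal. This is an illustrative reduction of the problem: the rotor model, SEREP reduction, and Kalman state estimation are omitted, and the spin speed, noise level, and true unbalance values are assumed for the example:

```python
import math
import numpy as np

rng = np.random.default_rng(42)

# Synchronous unbalance force f(t) = a*cos(wt) + b*sin(wt); estimating (a, b)
# by RLS with a forgetting factor recovers amplitude sqrt(a^2+b^2) and phase.
w = 2 * math.pi * 25.0                 # 25 Hz spin speed (assumed)
amp_true, phase_true = 3.0, 0.6        # N, rad (assumed)
a_true = amp_true * math.cos(phase_true)
b_true = amp_true * math.sin(phase_true)

theta = np.zeros(2)                    # [a, b] estimate
P = np.eye(2) * 1e3                    # parameter covariance
lam = 0.99                             # forgetting factor

for k in range(2000):                  # 2 s of data at 1 kHz
    t = k * 1e-3
    phi = np.array([math.cos(w * t), math.sin(w * t)])      # regressor
    y = a_true * phi[0] + b_true * phi[1] + rng.normal(0, 0.1)
    K = P @ phi / (lam + phi @ P @ phi)                     # RLS gain
    theta = theta + K * (y - phi @ theta)                   # parameter update
    P = (P - np.outer(K, phi @ P)) / lam                    # covariance update

amp = math.hypot(theta[0], theta[1])
phase = math.atan2(theta[1], theta[0])
print(round(amp, 2), round(phase, 2))
```

The forgetting factor trades tracking speed against noise rejection, which is why the abstract lists it alongside the process noise covariance as a filter parameter whose effect must be studied.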
Accuracy assessment for a multi-parameter optical calliper in on line automotive applications
NASA Astrophysics Data System (ADS)
D'Emilia, G.; Di Gasbarro, D.; Gaspari, A.; Natale, E.
2017-08-01
In this work, a methodological approach based on the evaluation of measurement uncertainty is applied to an experimental test case from the automotive sector. The uncertainty model for different measurement procedures of a high-accuracy optical gauge is discussed in order to identify the best measurement performance of the system for on-line applications, as measurement requirements become more stringent. In particular, with reference to the industrial production and control strategies of high-performing turbochargers, two uncertainty models for the optical calliper are proposed, discussed, and compared. The models are based on an integrated approach between measurement methods and production best practices, emphasizing their mutual coherence. The paper shows the advantages that measurement uncertainty modelling can provide in controlling uncertainty propagation across all the indirect measurements used for statistical production control, on which further improvements can be based.
ERIC Educational Resources Information Center
Levy, Roy; Xu, Yuning; Yel, Nedim; Svetina, Dubravka
2015-01-01
The standardized generalized dimensionality discrepancy measure and the standardized model-based covariance are introduced as tools to critique dimensionality assumptions in multidimensional item response models. These tools are grounded in a covariance theory perspective and associated connections between dimensionality and local independence.…
Chao, Anne; Jost, Lou; Hsieh, T C; Ma, K H; Sherwin, William B; Rollins, Lee Ann
2015-01-01
Shannon entropy H and related measures are increasingly used in molecular ecology and population genetics because (1) unlike measures based on heterozygosity or allele number, these measures weigh alleles in proportion to their population fraction, thus capturing a previously ignored aspect of allele frequency distributions that may be important in many applications; (2) these measures connect directly to the rich predictive mathematics of information theory; (3) Shannon entropy is completely additive and has an explicitly hierarchical nature; and (4) Shannon entropy-based differentiation measures obey strong monotonicity properties that heterozygosity-based measures lack. We derive simple new expressions for the expected values of the Shannon entropy of the equilibrium allele distribution at a neutral locus in a single isolated population under two models of mutation: the infinite allele model and the stepwise mutation model. Surprisingly, this complex stochastic system for each model has an entropy expressible as a simple combination of well-known mathematical functions. Moreover, entropy- and heterozygosity-based measures for each model are linked by simple relationships that are shown by simulations to be approximately valid even far from equilibrium. We also identify a bridge between the two models of mutation. We apply our approach to subdivided populations which follow the finite island model, obtaining the Shannon entropy of the equilibrium allele distributions of the subpopulations and of the total population. We also derive the expected mutual information and normalized mutual information ("Shannon differentiation") between subpopulations at equilibrium, and identify the model parameters that determine them. We apply our measures to data from the common starling (Sturnus vulgaris) in Australia. Our measures provide a test for neutrality that is robust to violations of equilibrium assumptions, as verified on real-world data from starlings.
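The entropy and mutual-information quantities involved can be computed directly from allele frequency tables. A minimal sketch with two hypothetical subpopulations (illustrative frequencies, not the starling data):

```python
import math

def shannon(freqs):
    """Shannon entropy H = -sum p*ln(p) of an allele frequency distribution."""
    return -sum(p * math.log(p) for p in freqs if p > 0)

def mutual_information(subpops, weights):
    """I = H(pooled) - sum_k w_k * H(subpop_k): the between-subpopulation
    information, the numerator of Shannon differentiation."""
    alleles = set().union(*subpops)
    pooled = {a: sum(w * sp.get(a, 0.0) for w, sp in zip(weights, subpops))
              for a in alleles}
    h_total = shannon(pooled.values())
    h_within = sum(w * shannon(sp.values()) for w, sp in zip(weights, subpops))
    return h_total - h_within

# Two hypothetical subpopulations of equal size, three alleles at one locus.
p1 = {"A": 0.7, "B": 0.2, "C": 0.1}
p2 = {"A": 0.2, "B": 0.3, "C": 0.5}
mi = mutual_information([p1, p2], [0.5, 0.5])
print(round(mi, 3))
```

Additivity is what makes this decomposition exact: total entropy splits cleanly into a within-subpopulation average plus the mutual information, which heterozygosity-based partitions only approximate.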
A measurement-based performability model for a multiprocessor system
NASA Technical Reports Server (NTRS)
Hsueh, M. C.; Iyer, Ravi K.; Trivedi, K. S.
1987-01-01
A measurement-based performability model based on real error-data collected on a multiprocessor system is described. Model development from the raw error-data to the estimation of cumulative reward is described. Both normal and failure behavior of the system are characterized. The measured data show that the holding times in key operational and failure states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different failure types and recovery procedures.
An Entropy-Based Measure for Assessing Fuzziness in Logistic Regression
Weiss, Brandi A.; Dardick, William
2015-01-01
This article introduces an entropy-based measure of data–model fit that can be used to assess the quality of logistic regression models. Entropy has previously been used in mixture-modeling to quantify how well individuals are classified into latent classes. The current study proposes the use of entropy for logistic regression models to quantify the quality of classification and separation of group membership. Entropy complements preexisting measures of data–model fit and provides unique information not contained in other measures. Hypothetical data scenarios, an applied example, and Monte Carlo simulation results are used to demonstrate the application of entropy in logistic regression. Entropy should be used in conjunction with other measures of data–model fit to assess how well logistic regression models classify cases into observed categories. PMID:29795897
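A normalized entropy statistic of this kind can be sketched from the model's predicted probabilities; the form below is the mixture-modeling-style summary, which may differ in detail from the statistic proposed in the article:

```python
import math

def entropy_fit(probs):
    """Normalized entropy-based fit for binary predicted probabilities.

    Returns 1 when all cases are classified with certainty (p near 0 or 1)
    and 0 when every predicted probability is 0.5 (maximal fuzziness).
    """
    def h(p):  # binary entropy in bits, with 0 * log(0) treated as 0
        return -sum(q * math.log2(q) for q in (p, 1.0 - p) if q > 0.0)
    return 1.0 - sum(h(p) for p in probs) / len(probs)

E_sharp = entropy_fit([0.99, 0.01, 0.95])   # close to 1: clear separation
E_fuzzy = entropy_fit([0.5, 0.5, 0.5])      # 0.0: no separation
```

This captures the article's point that entropy measures separation of predicted group membership, information not contained in likelihood-based fit statistics.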
ERIC Educational Resources Information Center
Skinner, Ellen A.; Chi, Una
2012-01-01
Building on self-determination theory, this study presents a model of intrinsic motivation and engagement as "active ingredients" in garden-based education. The model was used to create reliable and valid measures of key constructs, and to guide the empirical exploration of motivational processes in garden-based learning. Teacher- and…
SU-E-T-226: Correction of a Standard Model-Based Dose Calculator Using Measurement Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, M; Jiang, S; Lu, W
Purpose: To propose a hybrid method that combines advantages of the model-based and measurement-based methods for independent dose calculation. Model-based dose calculation, such as collapsed-cone convolution/superposition (CCCS) or the Monte Carlo method, models dose deposition in the patient body accurately; however, due to lack of detailed knowledge about the linear accelerator (LINAC) head, commissioning for an arbitrary machine is tedious and challenging in case of hardware changes. On the contrary, the measurement-based method characterizes the beam properties accurately but lacks the capability of dose deposition modeling in heterogeneous media. Methods: We used a standard CCCS calculator, commissioned with published data, as the standard model calculator. For a given machine, water phantom measurements were acquired. A set of dose distributions was also calculated using the CCCS for the same setup. The differences between the measurements and the CCCS results were tabulated and used as the commissioning data for a measurement-based calculator, here a direct-ray-tracing calculator (ΔDRT). The proposed independent dose calculation consists of the following steps: 1. calculate D-model using CCCS; 2. calculate D-ΔDRT using ΔDRT; 3. combine them: D = D-model + D-ΔDRT. Results: The hybrid dose calculation was tested on digital phantoms and patient CT data for standard fields and IMRT plans. The results were compared to dose calculated by the treatment planning system (TPS). The agreement between the hybrid calculation and the TPS was within 3%, 3 mm for over 98% of the volume for phantom studies and lung patients. Conclusion: The proposed hybrid method uses the same commissioning data as the measurement-based method and can be easily extended to any non-standard LINAC. The results met the accuracy, independence, and simple-commissioning criteria for an independent dose calculator.
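The three-step combination can be sketched on a 1-D toy depth profile; the dose values and the interpolation stand-in for the ΔDRT step are illustrative assumptions, not the commissioned data:

```python
import numpy as np

def hybrid_dose(d_model, delta_depths, delta_values, depths):
    """Hybrid dose = model-based dose + measurement-derived correction.

    d_model: CCCS-like model dose sampled at `depths` (hypothetical values).
    The delta table holds (measurement - model) differences from
    commissioning scans; linear interpolation stands in for the ΔDRT
    ray-tracing evaluation of the correction term.
    """
    d_delta = np.interp(depths, delta_depths, delta_values)
    return d_model + d_delta

depths = np.array([0.0, 5.0, 10.0])
d_model = np.array([1.00, 0.80, 0.60])   # step 1: model calculation
meas_depths = np.array([0.0, 10.0])
delta = np.array([0.02, -0.01])          # commissioning: measurement - model
d = hybrid_dose(d_model, meas_depths, delta, depths)  # steps 2-3 combined
```

The point of the construction is that machine-specific detail lives entirely in the tabulated differences, so recommissioning after a hardware change only requires new water-phantom scans.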
Performance Modeling of an Airborne Raman Water Vapor Lidar
NASA Technical Reports Server (NTRS)
Whiteman, D. N.; Schwemmer, G.; Berkoff, T.; Plotkin, H.; Ramos-Izquierdo, L.; Pappalardo, G.
2000-01-01
A sophisticated Raman lidar numerical model has been developed. The model has been used to simulate the performance of two ground-based Raman water vapor lidar systems. After tuning the model using these ground-based measurements, the model is used to simulate the water vapor measurement capability of an airborne Raman lidar under both daytime and nighttime conditions for a wide range of water vapor conditions. The results indicate that, under many circumstances, the daytime measurements possess resolution comparable to an existing airborne differential absorption water vapor lidar, while the nighttime measurements have higher resolution. In addition, a Raman lidar is capable of measurements not possible with a differential absorption system.
NASA Astrophysics Data System (ADS)
Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua
2015-08-01
The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. 
The average passing rates using the reoptimized beam model increased substantially from 92.1% to 99.3% with 3%/3 mm and from 79.2% to 95.2% with 2%/2 mm when compared with the CC13 beam model. These results show the effectiveness of the proposed method. Less inter-user variability can be expected of the final beam model. It is also found that the method can be easily integrated into model-based TPS.
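The core idea above, convolving the calculated profile with the same detector response before comparing with measurement, can be sketched with a Gaussian stand-in for the chamber response (the kernel width and step profile are illustrative, not CC13 data):

```python
import numpy as np

def detector_kernel(x, sigma):
    """Gaussian stand-in for the ionization-chamber response function."""
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def convolve_profile(profile, kernel):
    """Apply the volume averaging effect to a calculated beam profile."""
    return np.convolve(profile, kernel, mode="same")

# Toy penumbra: an idealized field edge blurred by the detector.
x = np.linspace(-5, 5, 201)                 # off-axis distance (mm)
true_profile = (x < 0).astype(float)        # sharp field edge
kernel = detector_kernel(np.linspace(-3, 3, 61), sigma=1.0)
measured_like = convolve_profile(true_profile, kernel)
```

During reoptimization, the TPS-calculated profile would be passed through the same `convolve_profile` before the comparison with measurement, so both sides carry the identical averaging and the converged beam model reproduces the unblurred profile.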
Multivariate prediction of odor from pig production based on in-situ measurement of odorants
NASA Astrophysics Data System (ADS)
Hansen, Michael J.; Jonassen, Kristoffer E. N.; Løkke, Mette Marie; Adamsen, Anders Peter S.; Feilberg, Anders
2016-06-01
The aim of the present study was to estimate a prediction model for odor from pig production facilities based on measurements of odorants by proton-transfer-reaction mass spectrometry (PTR-MS). Odor measurements were performed at four different pig production facilities, with and without odor abatement technologies, using a newly developed mobile odor laboratory equipped with a PTR-MS instrument for measuring odorants and an olfactometer for measuring the odor concentration by human panelists. A total of 115 odor measurements were carried out in the mobile laboratory; simultaneously, air samples were collected in Nalophan bags and analyzed at accredited laboratories after 24 h. The dataset was divided into a calibration dataset containing 94 samples and a validation dataset containing 21 samples. The prediction model based on the measurements in the mobile laboratory was able to explain 74% of the variation in the odor concentration based on odorants, whereas the prediction models based on odor measurements with bag samples explained only 46-57%. This study is the first application of direct field olfactometry to livestock odor and emphasizes the importance of avoiding any bias from sample storage in studies of odor-odorant relationships. Application of the model to the validation dataset gave a high correlation between predicted and measured odor concentration (R2 = 0.77). Significant odorants in the prediction models include phenols and indoles. In conclusion, on-site measurement of odorants in pig production facilities is an alternative to dynamic olfactometry that can be applied for measuring odor from pig houses and the effects of odor abatement technologies.
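The odor-odorant regression can be illustrated with an ordinary least-squares stand-in for the PLS model on synthetic data (the coefficients, noise level, and three-odorant setup are invented, not the study's PTR-MS data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "odorant" concentrations (stand-ins for e.g. phenols and
# indoles) and an odor concentration generated from them plus panel noise.
X = rng.uniform(0.1, 10.0, size=(94, 3))      # 94 calibration samples
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 1.0, 94)

A = np.column_stack([X, np.ones(len(X))])     # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

y_hat = A @ coef
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                    # fraction of variation explained
```

PLS would be preferred over plain least squares when the odorant concentrations are strongly collinear, which is typical of PTR-MS data; the explained-variation summary (the 74% vs. 46-57% comparison above) is computed the same way in either case.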
Skrzyński, Witold
2014-11-01
The aim of this work was to create a model of a wide-bore Siemens Somatom Sensation Open CT scanner for use with GMCTdospp, an EGSnrc-based software tool dedicated to Monte Carlo calculations of dose in CT examinations. The method was based on matching the spectrum and filtration to the half-value layer (HVL) and dose profile, and thus was similar to the method of Turner et al. (Med. Phys. 36, pp. 2154-2164). Input data on unfiltered beam spectra were taken from two sources: the TASMIP model and IPEM Report 78. Two sources of HVL data were also used, namely measurements and documentation. The dose profile along the fan beam was measured with Gafchromic RTQA-1010 (QA+) film. A two-component model of filtration was assumed: a bow-tie filter made of aluminum with 0.5 mm thickness on the central axis, and a flat filter made of one of four materials: aluminum, graphite, lead, or titanium. Good agreement between calculations and measurements was obtained for models based on the measured values of HVL. Doses calculated with GMCTdospp differed from the doses measured with a pencil ion chamber placed in a PMMA phantom by less than 5%, and the root-mean-square difference for four tube potentials and three positions in the phantom did not exceed 2.5%. The differences for models based on HVL values from documentation exceeded 10%. Models based on TASMIP spectra and IPEM 78 spectra performed equally well. Copyright © 2014 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Assessing alternative measures of wealth in health research.
Cubbin, Catherine; Pollack, Craig; Flaherty, Brian; Hayward, Mark; Sania, Ayesha; Vallone, Donna; Braveman, Paula
2011-05-01
We assessed whether it would be feasible to replace the standard measure of net worth with simpler measures of wealth in population-based studies examining associations between wealth and health. We used data from the 2004 Survey of Consumer Finances (respondents aged 25-64 years) and the 2004 Health and Retirement Survey (respondents aged 50 years or older) to construct logistic regression models relating wealth to health status and smoking. For our wealth measure, we used the standard measure of net worth as well as 9 simpler measures of wealth, and we compared results among the 10 models. In both data sets and for both health indicators, models using simpler wealth measures generated conclusions about the association between wealth and health that were similar to the conclusions generated by models using net worth. The magnitude and significance of the odds ratios were similar for the covariates in multivariate models, and the model-fit statistics for models using these simpler measures were similar to those for models using net worth. Our findings suggest that simpler measures of wealth may be acceptable in population-based studies of health.
Liu, Chuan-Fen; Sales, Anne E; Sharp, Nancy D; Fishman, Paul; Sloan, Kevin L; Todd-Stenberg, Jeff; Nichol, W Paul; Rosen, Amy K; Loveland, Susan
2003-01-01
Objective To compare the rankings for health care utilization performance measures at the facility level in a Veterans Health Administration (VHA) health care delivery network using pharmacy- and diagnosis-based case-mix adjustment measures. Data Sources/Study Setting The study included veterans who used inpatient or outpatient services in Veterans Integrated Service Network (VISN) 20 during fiscal year 1998 (October 1997 to September 1998; N=126,076). Utilization and pharmacy data were extracted from VHA national databases and the VISN 20 data warehouse. Study Design We estimated concurrent regression models using pharmacy or diagnosis information in the base year (FY1998) to predict health service utilization in the same year. Utilization measures included bed days of care for inpatient care and provider visits for outpatient care. Principal Findings Rankings of predicted utilization measures across facilities vary by case-mix adjustment measure. There is greater consistency within the diagnosis-based models than between the diagnosis- and pharmacy-based models. The eight facilities were ranked differently by the diagnosis- and pharmacy-based models. Conclusions Choice of case-mix adjustment measure affects rankings of facilities on performance measures, raising concerns about the validity of profiling practices. Differences in rankings may reflect differences in comparability of data capture across facilities between pharmacy and diagnosis data sources, and unstable estimates due to small numbers of patients in a facility. PMID:14596393
Model-based pH monitor for sensor assessment.
van Schagen, Kim; Rietveld, Luuk; Veersma, Alex; Babuska, Robert
2009-01-01
Owing to the nature of the treatment processes, monitoring them based on individual online measurements is difficult or even impossible. However, the measurements (online and laboratory) can be combined with a priori process knowledge, using mathematical models, to objectively monitor both the treatment processes and the measurement devices. pH is commonly measured at different stages of a drinking water treatment plant, although the pH sensor is an unreliable instrument requiring significant maintenance. It is shown that, using a grey-box model, it is possible to assess the measurement devices effectively, even when detailed information about the specific processes is unavailable.
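The grey-box monitoring idea, flagging a sensor when its residual against the model prediction drifts persistently, can be sketched as follows; the threshold, window length, and pH series are illustrative assumptions:

```python
import statistics

def assess_sensor(measured, predicted, limit=0.3, window=5):
    """Flag a pH sensor when the model residual drifts persistently.

    measured/predicted: aligned pH series; the grey-box model supplying
    `predicted` (e.g. from dosing rates and water quality) is assumed.
    Returns True if the mean absolute residual over any sliding window
    exceeds `limit`, suggesting the sensor needs maintenance.
    """
    residuals = [abs(m - p) for m, p in zip(measured, predicted)]
    for i in range(len(residuals) - window + 1):
        if statistics.fmean(residuals[i:i + window]) > limit:
            return True
    return False

ok = assess_sensor([7.2, 7.1, 7.3, 7.2, 7.2, 7.1],
                   [7.2, 7.2, 7.2, 7.2, 7.2, 7.2])      # healthy sensor
drift = assess_sensor([7.2, 7.4, 7.6, 7.8, 8.0, 8.1],
                      [7.2, 7.2, 7.2, 7.2, 7.2, 7.2])   # drifting sensor
```

Averaging over a window rather than alarming on single samples is what lets the monitor separate sensor drift from ordinary measurement noise.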
Low-energy proton induced M X-ray production cross sections for 70Yb, 81Tl and 82Pb
NASA Astrophysics Data System (ADS)
Shehla; Mandal, A.; Kumar, Ajay; Roy Chowdhury, M.; Puri, Sanjiv; Tribedi, L. C.
2018-07-01
The cross sections for production of the Mk (k = ξ, αβ, γ, m1) X-rays of 70Yb, 81Tl and 82Pb induced by 50-250 keV protons have been measured in the present work. The experimental cross sections have been compared with the earlier reported values and with those calculated using the ionization cross sections based on the ECPSSR model (incorporating incident-ion energy (E) loss, Coulomb (C) deflection, perturbed (P) stationary-state (SS) and relativistic (R) corrections), the X-ray emission rates based on the Dirac-Fock model, and the fluorescence and Coster-Kronig yields based on the Dirac-Hartree-Slater (DHS) model. In addition, the present measured proton-induced X-ray production cross sections have also been compared with those calculated using the DHS-model-based ionization cross sections and those based on the plane-wave Born approximation (PWBA). The measured M X-ray production cross sections are, in general, found to be higher than the ECPSSR and DHS model based values and lower than the PWBA model based cross sections.
Gruber-Baldini, Ann L.; Hicks, Gregory; Ostir, Glen; Klinedinst, N. Jennifer; Orwig, Denise; Magaziner, Jay
2015-01-01
Background: Measurement of physical function post hip fracture has been conceptualized using multiple different measures. Purpose: This study tested a comprehensive measurement model of physical function. Design: This was a descriptive secondary data analysis including 168 men and 171 women post hip fracture. Methods: Using structural equation modeling, a measurement model of physical function, which included grip strength, activities of daily living, instrumental activities of daily living, and performance, was tested for fit at 2 and 12 months post hip fracture and among male and female participants; validity of the measurement model was evaluated based on how well the model explained physical activity, exercise, and social activities post hip fracture. Findings: The measurement model of physical function fit the data. The amount of variance the model or individual factors of the model explained varied depending on the activity. Conclusion: Decisions about the ideal way in which to measure physical function should be based on the outcomes considered and the participants. Clinical Implications: The measurement model of physical function is a reliable and valid method to comprehensively measure physical function across the hip fracture recovery trajectory. Practical yet useful assessment of function should be considered and monitored over the recovery trajectory post hip fracture. PMID:26492866
Andrew D. Richardson; Mathew Williams; David Y. Hollinger; David J.P. Moore; D. Bryan Dail; Eric A. Davidson; Neal A. Scott; Robert S. Evans; Holly. Hughes
2010-01-01
We conducted an inverse modeling analysis, using a variety of data streams (tower-based eddy covariance measurements of net ecosystem exchange, NEE, of CO2, chamber-based measurements of soil respiration, and ancillary ecological measurements of leaf area index, litterfall, and woody biomass increment) to estimate parameters and initial carbon (C...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armstrong, Robert C.; Ray, Jaideep; Malony, A.
2003-11-01
We present a case study of performance measurement and modeling of a CCA (Common Component Architecture) component-based application in a high performance computing environment. We explore issues peculiar to component-based HPC applications and propose a performance measurement infrastructure for HPC based loosely on recent work done for Grid environments. A prototypical implementation of the infrastructure is used to collect data for three components in a scientific application and to construct performance models for two of them. Both computational and message-passing performance are addressed.
NASA Astrophysics Data System (ADS)
Park, K.-R.; Kim, K.-h.; Kwak, S.; Svensson, J.; Lee, J.; Ghim, Y.-c.
2017-11-01
A feasibility study of direct spectral measurements of Thomson-scattered photons for fusion-grade plasmas is performed based on a forward model of the KSTAR Thomson scattering system. Expected spectra in the forward model are calculated based on the Selden function, including the relativistic polarization correction. Noise in the signal is modeled with photon noise and Gaussian electrical noise. Electron temperature and density are inferred using Bayesian probability theory. Based on the bias error, the full width at half maximum, and the entropy of the posterior distributions, spectral measurements are found to be feasible. Comparisons between spectrometer-based and polychromator-based Thomson scattering systems are performed with varying quantum efficiency and electrical noise levels.
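The Bayesian inference step can be sketched with a grid posterior for a single scalar under an invented linear forward model with Gaussian noise (much simpler than the Selden-function forward model of the paper, but the posterior-mean and posterior-entropy summaries are the same kind of quantity):

```python
import numpy as np

# Toy forward model: each channel k observes s_k = a_k * T plus noise,
# where T stands in for the electron temperature. Gains are hypothetical.
a = np.array([0.5, 1.0, 1.5])             # hypothetical channel gains
T_true, sigma = 2.0, 0.1
rng = np.random.default_rng(1)
s_obs = a * T_true + rng.normal(0.0, sigma, a.size)

T_grid = np.linspace(0.0, 4.0, 4001)
# Log-likelihood of every candidate T given all channels at once.
ll = -0.5 * np.sum((s_obs[None, :] - T_grid[:, None] * a[None, :]) ** 2,
                   axis=1) / sigma**2
post = np.exp(ll - ll.max())
post /= post.sum()                        # normalized posterior on the grid

T_mean = float(T_grid @ post)             # posterior-mean estimate of T
entropy = float(-(post[post > 0] * np.log(post[post > 0])).sum())
```

A sharply peaked posterior (low entropy, narrow full width at half maximum) is what the feasibility criteria above quantify: it means the measured spectrum constrains the plasma parameters well.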
NASA Astrophysics Data System (ADS)
Cleary, P. A.; Fuhrman, N.; Schulz, L.; Schafer, J.; Fillingham, J.; Bootsma, H.; McQueen, J.; Tang, Y.; Langel, T.; McKeen, S.; Williams, E. J.; Brown, S. S.
2015-05-01
Air quality forecast models typically predict large summertime ozone abundances over water relative to land in the Great Lakes region. While each state bordering Lake Michigan has dedicated monitoring systems, offshore measurements have been sparse, mainly executed through specific short-term campaigns. This study examines ozone abundances over Lake Michigan as measured on the Lake Express ferry, by shoreline differential optical absorption spectroscopy (DOAS) observations in southeastern Wisconsin, and as predicted by the Community Multiscale Air Quality (CMAQ) model. From 2008 to 2009, measurements of O3, SO2, NO2 and formaldehyde were made in the summertime by DOAS at a shoreline site in Kenosha, WI. From 2008 to 2010, measurements of ambient ozone were conducted on the Lake Express, a high-speed ferry that travels between Milwaukee, WI, and Muskegon, MI, up to six times daily from spring to fall. Ferry ozone observations over Lake Michigan were an average of 3.8 ppb higher than those measured at the shoreline in Kenosha, with little dependence on the position of the ferry or temperature, and with the greatest differences during evening and night. Concurrent 1-48 h forecasts from the CMAQ model in the upper Midwestern region surrounding Lake Michigan were compared to ferry ozone measurements, shoreline DOAS measurements and Environmental Protection Agency (EPA) station measurements. The bias of the model O3 forecast was computed and evaluated with respect to ferry-based measurements. Trends in the bias with respect to location and time of day were explored, showing non-uniformity in model bias over the lake. Model ozone bias was consistently high over the lake in comparison to land-based measurements, with the highest biases at 25-48 h after initialization.
Measurable realistic image-based 3D mapping
NASA Astrophysics Data System (ADS)
Liu, W.; Wang, J.; Wang, J. J.; Ding, W.; Almagbile, A.
2011-12-01
Maps with 3D visual models are becoming a remarkable feature of 3D map services. High-resolution image data are obtained for the construction of 3D visualized models. The 3D map not only provides the capabilities of 3D measurements and knowledge mining, but also provides the virtual experience of places of interest, as demonstrated in Google Earth. Applications of 3D maps are expanding into the areas of architecture, property management, and urban environment monitoring. However, the reconstruction of high-quality 3D models is time consuming, and requires robust hardware and powerful software to handle the enormous amount of data. This is especially true for the automatic generation of 3D models and the representation of complicated surfaces, which still need improvements in visualisation techniques. The shortcoming of 3D model-based maps is the limited coverage of detail, since a user can only view and measure objects that are already modelled in the virtual environment. This paper proposes and demonstrates a 3D map concept that is realistic and image-based, and that enables geometric measurements and geo-location services. Additionally, image-based 3D maps provide more detailed information about the real world than 3D model-based maps. Image-based 3D maps use geo-referenced stereo images or panoramic images. The geometric relationships between objects in the images can be resolved from the geometric model of the stereo images. The panoramic function makes 3D maps more interactive with users and also creates an immersive viewing experience. Unmeasurable image-based 3D maps already exist, such as Google Street View, but they only provide virtual experiences in the form of photos; topographic and terrain attributes, such as shapes and heights, are omitted.
This paper also discusses the potential for using a low-cost land Mobile Mapping System (MMS) to implement realistic image-based 3D mapping, and evaluates the positioning accuracy that a measurable realistic image-based (MRI) system can produce. The major contribution here is the implementation of measurable images on 3D maps to obtain various measurements from real scenes.
Indrehus, Oddny; Aralt, Tor Tybring
2005-04-01
Aerosol, NO and CO concentrations, temperature, air humidity, air flow and the number of running ventilation fans were measured by continuous analysers every minute for a whole week, for six different one-week periods spread over ten months in 2001 and 2002, at measuring stations in the 7860 m long tunnel. The ventilation control system was mainly based on aerosol measurements taken by optical scatter sensors. The ventilation turned out to be satisfactory according to Norwegian air quality standards for road tunnels; however, there was some uncertainty concerning the NO2 levels. The air humidity and temperature inside the tunnel were highly influenced by the outside meteorological conditions. Statistical models for NO concentration were developed and tested; correlations between predicted and measured NO were 0.81 for a partial least squares regression (PLS1) model based on CO and aerosol, and 0.77 for a linear regression model based only on aerosol. Hence, the ventilation control system should not be based solely on aerosol measurements. Since NO2 is the hazardous pollutant, modelling NO2 concentration rather than NO should be preferred in any further optimisation of the ventilation control.
Vehicle-specific emissions modeling based upon on-road measurements.
Frey, H Christopher; Zhang, Kaishan; Rouphail, Nagui M
2010-05-01
Vehicle-specific microscale fuel use and emissions rate models are developed based upon real-world hot-stabilized tailpipe measurements made using a portable emissions measurement system. Consecutive averaging periods of one to three multiples of the response time are used to compare two semiempirical physically based modeling schemes. One scheme is based on internally observable variables (IOVs), such as engine speed and manifold absolute pressure, while the other is based on externally observable variables (EOVs), such as speed, acceleration, and road grade. For NO, HC, and CO emission rates, the average R2 ranged from 0.41 to 0.66 for the former and from 0.17 to 0.30 for the latter. The EOV models have R2 for CO2 of 0.43 to 0.79, versus 0.99 for the IOV models. The models are sensitive to episodic events in driving cycles, such as high acceleration. Intervehicle and fleet-average modeling approaches are compared; the former account for microscale variations that might be useful for some types of assessments. EOV-based models have practical value for traffic management or simulation applications, since IOVs usually are not available or not used for emission estimation.
ERIC Educational Resources Information Center
Fazio, C.; Guastella, I.; Tarantino, G.
2007-01-01
In this paper, we describe a pedagogical approach to elastic body movement based on measurements of the contact times between a metallic rod and small bodies colliding with it and on modelling of the experimental results by using a microcomputer-based laboratory and simulation tools. The experiments and modelling activities have been built in the…
Error Modelling for Multi-Sensor Measurements in Infrastructure-Free Indoor Navigation
Ruotsalainen, Laura; Kirkko-Jaakkola, Martti; Rantanen, Jesperi; Mäkelä, Maija
2018-01-01
The long-term objective of our research is to develop a method for infrastructure-free simultaneous localization and mapping (SLAM) and context recognition for tactical situational awareness. Localization will be realized by propagating motion measurements obtained using a monocular camera, a foot-mounted Inertial Measurement Unit (IMU), sonar, and a barometer. Due to the size and weight requirements set by tactical applications, Micro-Electro-Mechanical (MEMS) sensors will be used. However, MEMS sensors suffer from biases and drift errors that may substantially decrease the position accuracy; sophisticated error modelling and well-implemented integration algorithms are therefore key to a viable result. Algorithms used for multi-sensor fusion have traditionally been different versions of Kalman filters. However, Kalman filters are based on the assumptions that the state propagation and measurement models are linear with additive Gaussian noise. Neither assumption is correct for tactical applications, especially for dismounted soldiers or rescue personnel. Our approach is therefore to use particle filtering (PF), a sophisticated option for integrating measurements emerging from pedestrian motion that have non-Gaussian error characteristics. This paper discusses the statistical modelling of the measurement errors from inertial sensors and vision-based heading and translation measurements to include the correct error probability density functions (pdfs) in the particle filter implementation. Model fitting is then used to verify the pdfs of the measurement errors. Based on the deduced error models of the measurements, a particle filtering method is developed to fuse all this information, where the weight of each particle is computed based on the specific models derived.
The performance of the developed method is tested via two experiments, one at a university’s premises and another in realistic tactical conditions. The results show significant improvement on the horizontal localization when the measurement errors are carefully modelled and their inclusion into the particle filtering implementation correctly realized. PMID:29443918
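A single predict/update/resample cycle with a heavy-tailed measurement pdf, the key departure from Kalman-filter assumptions, can be sketched as follows; the Student-t error model, noise scales, and 1-D motion are illustrative assumptions, not the fitted error models of the paper:

```python
import math
import random

def t_logpdf(x, df, scale):
    """Log pdf of a scaled Student-t distribution: a heavy-tailed
    measurement error model standing in for the fitted non-Gaussian pdfs."""
    c = (math.lgamma((df + 1) / 2) - math.lgamma(df / 2)
         - 0.5 * math.log(df * math.pi) - math.log(scale))
    return c - (df + 1) / 2 * math.log1p((x / scale) ** 2 / df)

def pf_step(particles, step_meas, obs_pos, obs_scale, df=3.0):
    """One predict/update/resample cycle of a 1-D position particle filter."""
    rng = random.Random(42)
    # Predict: propagate each particle with the measured step plus jitter.
    particles = [p + step_meas + rng.gauss(0.0, 0.05) for p in particles]
    # Update: weight by the heavy-tailed likelihood of the position fix.
    w = [math.exp(t_logpdf(obs_pos - p, df, obs_scale)) for p in particles]
    total = sum(w)
    w = [x / total for x in w]
    # Resample (multinomial) to concentrate particles where weight is high.
    return rng.choices(particles, weights=w, k=len(particles))

particles = [0.0] * 500
true_pos = 0.0
for _ in range(10):
    true_pos += 1.0                  # pedestrian advances 1 m per step
    particles = pf_step(particles, 1.0, true_pos, obs_scale=0.2)
est = sum(particles) / len(particles)
```

Because the weights come directly from the assumed error pdf, swapping the Student-t for any fitted non-Gaussian density changes only `t_logpdf`, which is exactly the flexibility the paper exploits.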
Judging Alignment of Curriculum-Based Measures in Mathematics and Common Core Standards
ERIC Educational Resources Information Center
Morton, Christopher
2013-01-01
Measurement literature supports the utility of alignment models for application with state standards and large-scale assessments. However, the literature is lacking in the application of these models to curriculum-based measures (CBMs) and common core standards. In this study, I investigate the alignment of CBMs and standards, with specific…
Outcome Measures for Early Childhood Intervention Services.
ERIC Educational Resources Information Center
Accreditation Council on Services for People with Disabilities, Landover, MD.
This collection of 21 suggested outcome measures for early childhood intervention services is intended to apply to all types of service and support program models for children (birth to age 5) with various developmental delays and/or disabilities. The measures are appropriate for either home-based or center-based service delivery models. Section 1…
ISS Plasma Interaction: Measurements and Modeling
NASA Technical Reports Server (NTRS)
Barsamian, H.; Mikatarian, R.; Alred, J.; Minow, J.; Koontz, S.
2004-01-01
Ionospheric plasma interaction effects on the International Space Station (ISS) are discussed in this paper. The large structure and high-voltage arrays of the ISS represent a complex system interacting with the LEO plasma. Discharge current measurements made by the Plasma Contactor Units and potential measurements made by the Floating Potential Probe delineate charging and magnetic induction effects on the ISS. Based on theoretical and physical understanding of these interaction phenomena, the Plasma Interaction Model has been developed. The model includes magnetic induction effects and the interaction of the high-voltage solar arrays with ionospheric plasma, and accounts for other conductive areas on the ISS. Limited verification of the model has been performed by comparison of Floating Potential Probe measurement data to simulations. The ISS plasma interaction model will be further tested and verified as measurements from the Floating Potential Measurement Unit become available and construction of the ISS continues.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Childs, Andrew M.; Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139; Leung, Debbie W.
We present unified, systematic derivations of schemes in the two known measurement-based models of quantum computation. The first model (introduced by Raussendorf and Briegel [Phys. Rev. Lett. 86, 5188 (2001)]) uses a fixed entangled state, adaptive measurements on single qubits, and feedforward of the measurement results. The second model (proposed by Nielsen [Phys. Lett. A 308, 96 (2003)] and further simplified by Leung [Int. J. Quant. Inf. 2, 33 (2004)]) uses adaptive two-qubit measurements that can be applied to arbitrary pairs of qubits, and feedforward of the measurement results. The underlying principle of our derivations is a variant of teleportation introduced by Zhou, Leung, and Chuang [Phys. Rev. A 62, 052316 (2000)]. Our derivations unify these two measurement-based models of quantum computation and provide significantly simpler schemes.
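Both models build gates out of teleportation-like steps. A minimal numpy sketch of the basic one-qubit cluster step (CZ entangling with |+>, a rotated single-qubit measurement, and an X byproduct correction), which implements H·Rz(θ) regardless of the measurement outcome; this follows the standard cluster-state rule, not any particular scheme from the paper:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])

def measure_step(psi, theta, outcome):
    """One cluster-state step: entangle |psi>|+> with CZ, measure
    qubit 1 in the basis (|0> + (-1)^s e^{-i theta}|1>)/sqrt(2),
    and undo the X^s byproduct.  Output is H Rz(theta) |psi>."""
    plus = np.array([1, 1]) / np.sqrt(2)
    state = np.kron(psi, plus)                 # qubit 1 holds the input
    state = np.diag([1, 1, 1, -1]) @ state     # CZ
    m = np.array([1, (-1) ** outcome * np.exp(-1j * theta)]) / np.sqrt(2)
    out = m.conj() @ state.reshape(2, 2)       # project qubit 1 onto <m|
    out = out / np.linalg.norm(out)
    return np.linalg.matrix_power(X, outcome) @ out

rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
theta = 0.7
expected = H @ np.diag([1, np.exp(1j * theta)]) @ psi
for s in (0, 1):  # both measurement outcomes yield the same gate
    print(abs(np.vdot(expected, measure_step(psi, theta, s))))  # fidelity ~ 1
```

Chaining such steps, with the measurement angle adapted to earlier outcomes, is the feedforward mechanism both models rely on.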
A Multinomial Model of Event-Based Prospective Memory
ERIC Educational Resources Information Center
Smith, Rebekah E.; Bayen, Ute J.
2004-01-01
Prospective memory is remembering to perform an action in the future. The authors introduce the 1st formal model of event-based prospective memory, namely, a multinomial model that includes 2 separate parameters related to prospective memory processes. The 1st measures preparatory attentional processes, and the 2nd measures retrospective memory…
van Ijsseldijk, E A; Valstar, E R; Stoel, B C; Nelissen, R G H H; Reiber, J H C; Kaptein, B L
2011-10-13
Accurate in vivo measurement methods for wear in total knee arthroplasty are required for timely detection of excessive wear and to assess new implant designs. Component separation measurements based on model-based Roentgen stereophotogrammetric analysis (RSA), in which 3-dimensional reconstruction methods are used, have shown promising results, yet the robustness of these measurements is unknown. In this study, the accuracy and robustness of this measurement for clinical usage was assessed. The validation experiments were conducted in an RSA setup with a phantom of a knee in a vertical orientation. 72 RSA images were created using different variables for knee orientation, two prosthesis types (fixed-bearing Duracon knee and fixed-bearing Triathlon knee), and accuracies of the reconstruction models. The measurement error was determined for absolute and relative measurements, and the effects of knee positioning and true separation distance were determined. The measurement method overestimated the separation distance by 0.1 mm on average. The precision of the method was 0.10 mm (2*SD) for the Duracon prosthesis and 0.20 mm for the Triathlon prosthesis. A slight difference in error was found between measurements with 0° and 10° anterior tilt (difference = 0.08 mm, p = 0.04). An accuracy of 0.1 mm and precision of 0.2 mm can be achieved for linear wear measurements based on model-based RSA, which is more than adequate for clinical applications. The measurement is robust in clinical settings. Although anterior tilt seems to influence the measurement, the size of this influence is low and clinically irrelevant. Copyright © 2011 Elsevier Ltd. All rights reserved.
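The accuracy and precision figures quoted above can be reproduced from repeated phantom measurements as the mean signed error and twice the standard deviation of the errors; the numbers below are hypothetical, not the study's data:

```python
import numpy as np

def bias_and_precision(measured, true_value):
    """Bias = mean signed error; precision = 2 * SD of the errors,
    following the 2*SD convention common in RSA validation studies."""
    errors = np.asarray(measured) - true_value
    return errors.mean(), 2.0 * errors.std(ddof=1)

# Hypothetical repeated separation measurements of a 1.00 mm gap (mm):
measured = [1.12, 1.05, 1.08, 1.15, 1.10, 1.09]
bias, precision = bias_and_precision(measured, 1.00)
print(f"bias = {bias:.2f} mm, precision (2SD) = {precision:.2f} mm")
```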
Tree-Based Global Model Tests for Polytomous Rasch Models
ERIC Educational Resources Information Center
Komboz, Basil; Strobl, Carolin; Zeileis, Achim
2018-01-01
Psychometric measurement models are only valid if measurement invariance holds between test takers of different groups. Global model tests, such as the well-established likelihood ratio (LR) test, are sensitive to violations of measurement invariance, such as differential item functioning and differential step functioning. However, these…
Liu, Hesen; Zhu, Lin; Pan, Zhuohong; ...
2015-09-14
One of the main drawbacks of existing oscillation damping controllers designed from offline dynamic models is their lack of adaptivity to the power system operating condition. With the increasing availability of wide-area measurements and the rapid development of system identification techniques, it is possible to identify a measurement-based transfer function model online that can be used to tune the oscillation damping controller. Such a model could capture all dominant oscillation modes for adaptive and coordinated oscillation damping control. This paper describes a comprehensive approach to identify a low-order transfer function model of a power system using a multi-input multi-output (MIMO) autoregressive moving average exogenous (ARMAX) model. The methodology consists of five steps: 1) input selection; 2) output selection; 3) identification trigger; 4) model estimation; and 5) model validation. The proposed method is validated using ambient data and ring-down data in the 16-machine 68-bus Northeast Power Coordinating Council system. Our results demonstrate that the measurement-based model using MIMO ARMAX can capture all the dominant oscillation modes. Compared with the MIMO subspace state-space model, the MIMO ARMAX model has equivalent accuracy but lower order and improved computational efficiency. The proposed model can be applied for adaptive and coordinated oscillation damping control.
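Full MIMO ARMAX estimation requires an iterative solver for the moving-average noise term. The core measurement-based identification idea can be sketched with the simplest related case, a first-order SISO ARX model fitted by ordinary least squares; the system parameters here are made up:

```python
import numpy as np

def identify_arx(u, y):
    """Fit y[k] = a*y[k-1] + b*u[k-1] by least squares (a first-order
    ARX model -- the ARMAX moving-average noise term is omitted here,
    since estimating it needs an iterative method)."""
    Phi = np.column_stack([y[:-1], u[:-1]])   # regressor matrix
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    return theta  # (a, b)

# Simulate a known system from measured input/output data, then
# recover its parameters.
rng = np.random.default_rng(1)
u = rng.normal(size=200)
y = np.zeros(200)
for k in range(1, 200):
    y[k] = 0.7 * y[k - 1] + 0.5 * u[k - 1]
a_hat, b_hat = identify_arx(u, y)
print(a_hat, b_hat)  # ~0.7, ~0.5
```

In the noise-free case the estimates are exact; with ambient noise the same regression yields consistent estimates, which is what makes online tuning from wide-area measurements feasible.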
An Experimental Study on the Iso-Content-Based Angle Similarity Measure.
ERIC Educational Resources Information Center
Zhang, Jin; Rasmussen, Edie M.
2002-01-01
Retrieval performance of the iso-content-based angle similarity measure within the angle, distance, conjunction, disjunction, and ellipse retrieval models is compared with retrieval performance of the distance similarity measure and the angle similarity measure. Results show the iso-content-based angle similarity measure achieves satisfactory…
Huang, Zan; Li, Yanlin; Hu, Meng; Li, Jian; You, Zhimin; Wang, Guoliang; He, Chuan
2015-02-01
This study compared femoral condylar twist angle (CTA) measurements in three-dimensional (3-D) reconstructed digital models of the human knee joint based on two-dimensional (2-D) MRI and CT images, so as to provide a reference for selecting the best method of CTA measurement in preoperative planning of the femoral prosthesis rotational position. The CTA of 10 human cadaveric knee joints was measured in 3-D digital models based on MRI (group A), in 3-D digital models based on CT (group B), in the cadaveric knee joint with cartilage (group C), and in the cadaveric knee joint without cartilage (group D). The differences among the CTA measurements were analyzed statistically. The CTA values measured in the 3-D digital models were (6.43 ± 0.53) degrees in group A and (3.31 ± 1.07) degrees in group B, showing significant difference (t = 10.235, P = 0.000). The CTA values measured in the cadaveric knee joints were (5.21 ± 1.28) degrees in group C and (3.33 ± 1.12) degrees in group D, showing significant difference (t = 5.770, P = 0.000). There was significant difference in the CTA values between group B and group C (t = 5.779, P = 0.000), but no significant difference between group A and group C (t = 3.219, P = 0.110). The CTA values measured in the 3-D digital models based on MRI are closer to the actual values measured in the knee joint with cartilage, and are beneficial for preoperative planning.
Experimental demonstration of a measurement-based realisation of a quantum channel
NASA Astrophysics Data System (ADS)
McCutcheon, W.; McMillan, A.; Rarity, J. G.; Tame, M. S.
2018-03-01
We introduce and experimentally demonstrate a method for realising a quantum channel using the measurement-based model. Using a photonic setup and modifying the basis of single-qubit measurements on a four-qubit entangled cluster state, representative channels are realised for the case of a single qubit in the form of amplitude and phase damping channels. The experimental results match the theoretical model well, demonstrating the successful performance of the channels. We also show how other types of quantum channels can be realised using our approach. This work highlights the potential of the measurement-based model for realising quantum channels which may serve as building blocks for simulations of realistic open quantum systems.
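The amplitude damping channel realised in the experiment has a standard Kraus-operator description; a small numpy sketch of the channel itself (not the photonic, measurement-based implementation):

```python
import numpy as np

def amplitude_damping(rho, gamma):
    """Apply the single-qubit amplitude damping channel with damping
    probability gamma via its Kraus operators K0, K1."""
    K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
    K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

rho_excited = np.array([[0, 0], [0, 1]], dtype=complex)  # |1><1|
out = amplitude_damping(rho_excited, 0.3)
print(out.real)  # 30% of the excited population has decayed to |0>
```

In the measurement-based realisation, the choice of single-qubit measurement bases on the cluster state effectively selects which Kraus operator branch is applied.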
NASA Astrophysics Data System (ADS)
Fulmer, Gavin W.; Liang, Ling L.
2013-02-01
This study tested a student survey to detect differences in instruction between teachers in a modeling-based science program and comparison group teachers. The Instructional Activities Survey measured teachers' frequency of modeling, inquiry, and lecture instruction. Factor analysis and Rasch modeling identified three subscales, Modeling and Reflecting, Communicating and Relating, and Investigative Inquiry. As predicted, treatment group teachers engaged in modeling and inquiry instruction more than comparison teachers, with effect sizes between 0.55 and 1.25. This study demonstrates the utility of student report data in measuring teachers' classroom practices and in evaluating outcomes of a professional development program.
Creating High Reliability in Health Care Organizations
Pronovost, Peter J; Berenholtz, Sean M; Goeschel, Christine A; Needham, Dale M; Sexton, J Bryan; Thompson, David A; Lubomski, Lisa H; Marsteller, Jill A; Makary, Martin A; Hunt, Elizabeth
2006-01-01
Objective The objective of this paper was to present a comprehensive approach to help health care organizations reliably deliver effective interventions. Context Reliability in healthcare translates into using valid rate-based measures. Yet high reliability organizations have proven that the context in which care is delivered, called organizational culture, also has important influences on patient safety. Model for Improvement Our model to improve reliability, which also includes interventions to improve culture, focuses on valid rate-based measures. This model includes (1) identifying evidence-based interventions that improve the outcome, (2) selecting interventions with the most impact on outcomes and converting to behaviors, (3) developing measures to evaluate reliability, (4) measuring baseline performance, and (5) ensuring patients receive the evidence-based interventions. The comprehensive unit-based safety program (CUSP) is used to improve culture and guide organizations in learning from mistakes that are important, but cannot be measured as rates. Conclusions We present how this model was used in over 100 intensive care units in Michigan to improve culture and eliminate catheter-related blood stream infections—both were accomplished. Our model differs from existing models in that it incorporates efforts to improve a vital component for system redesign—culture, it targets 3 important groups—senior leaders, team leaders, and front line staff, and facilitates change management—engage, educate, execute, and evaluate for planned interventions. PMID:16898981
Multiagent intelligent systems
NASA Astrophysics Data System (ADS)
Krause, Lee S.; Dean, Christopher; Lehman, Lynn A.
2003-09-01
This paper discusses a simulation approach based upon a family of agent-based models. As the demands placed upon simulation technology by applications such as Effects Based Operations (EBO), evaluation of indicators and warnings surrounding homeland defense, and commercial needs such as financial risk management continue to grow, current single-thread simulations will continue to show serious deficiencies. The types of "what if" analysis required to support these applications demand rapidly re-configurable approaches capable of aggregating large models incorporating multiple viewpoints. The use of agent technology promises to provide a broad spectrum of models incorporating differing viewpoints through a synthesis of a collection of models. Each model would provide estimates to the overall scenario based upon its particular measure or aspect. An agent framework, denoted the "family," would provide a common ontology in support of differing aspects of the scenario. This approach permits modeling to move from a single-thread simulation toward one that takes into account multiple viewpoints from different models. Even as models are updated or replaced, the agent approach permits their rapid inclusion in new or modified simulations. In this approach, the synthesis of a variety of low- and high-resolution information requires a family of models. Each agent "publishes" its support for a given measure, and each model provides its own estimates on the scenario based upon its particular measure or aspect. If more than one agent provides the same measure (e.g., cognitive), then the results from these agents are combined to form an aggregate measure response. The objective would be to inform and help calibrate a qualitative model, rather than merely to present highly aggregated statistical information. As each result is processed, the next action can then be determined.
This is done by a top-level decision system that communicates with the family at the ontology level, without any specific understanding of the processes (or model) behind each agent. The increasingly complex demands upon simulation, and the necessity to incorporate the breadth and depth of influencing factors, make a family of agent-based models a promising solution. This paper discusses that solution, along with the syntax and semantics necessary to support the approach.
NASA Astrophysics Data System (ADS)
Branger, E.; Grape, S.; Jansson, P.; Jacobsson Svärd, S.
2018-02-01
The Digital Cherenkov Viewing Device (DCVD) is a tool used by nuclear safeguards inspectors to verify irradiated nuclear fuel assemblies in wet storage based on the recording of Cherenkov light produced by the assemblies. One type of verification involves comparing the measured light intensity from an assembly with a predicted intensity, based on assembly declarations. Crucial for such analyses is the performance of the prediction model used, and recently new modelling methods have been introduced to allow for enhanced prediction capabilities by taking the irradiation history into account, and by including the cross-talk radiation from neighbouring assemblies in the predictions. In this work, the performance of three models for Cherenkov-light intensity prediction is evaluated by applying them to a set of short-cooled PWR 17x17 assemblies for which experimental DCVD measurements and operator-declared irradiation data were available: (1) a two-parameter model, based on total burnup and cooling time, previously used by safeguards inspectors; (2) a newly introduced gamma-spectrum-based model, which incorporates cycle-wise burnup histories; and (3) the latter gamma-spectrum-based model with the addition of contributions from neighbouring assemblies. The results show that the two gamma-spectrum-based models provide significantly higher precision for the measured inventory compared to the two-parameter model, lowering the standard deviation between relative measured and predicted intensities from 15.2% to 8.1% and 7.8%, respectively. The results show some systematic differences between assemblies of different designs (produced by different manufacturers) in spite of their similar PWR 17x17 geometries, and possible ways to address such differences are discussed, which may allow for even higher prediction capabilities.
Still, it is concluded that the gamma-spectrum-based models enable confident verification of the fuel assembly inventory at the currently used detection limit for partial defects, being a 30 % discrepancy between measured and predicted intensities, while some false detection occurs with the two-parameter model. The results also indicate that the gamma-spectrum-based prediction methods are accurate enough that the 30 % discrepancy limit could potentially be lowered.
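The 30% detection-limit check described above amounts to flagging assemblies whose relative discrepancy between measured and predicted intensity exceeds the limit; a sketch with hypothetical intensities:

```python
import numpy as np

def flag_partial_defects(measured, predicted, limit=0.30):
    """Flag assemblies whose measured Cherenkov intensity deviates
    from the predicted intensity by more than the detection limit
    (relative discrepancy), per the 30% criterion in the text."""
    rel = np.abs(np.asarray(measured) / np.asarray(predicted) - 1.0)
    return rel > limit

# Hypothetical intensities (arbitrary units): the third assembly is low.
measured = [980.0, 1050.0, 620.0, 1010.0]
predicted = [1000.0, 1000.0, 1000.0, 1000.0]
print(flag_partial_defects(measured, predicted))  # [False False  True False]
```

Tighter prediction models shrink the spread of the relative discrepancy for intact assemblies, which is what would permit lowering the 30% limit.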
Zhan, Xiaobin; Jiang, Shulan; Yang, Yili; Liang, Jian; Shi, Tielin; Li, Xiwen
2015-09-18
This paper proposes an ultrasonic measurement system based on least squares support vector machines (LS-SVM) for inline measurement of particle concentrations in multicomponent suspensions. Firstly, the ultrasonic signals are analyzed and processed, and the optimal feature subset that contributes to the best model performance is selected based on the importance of features. Secondly, the LS-SVM model is tuned, trained and tested with different feature subsets to obtain the optimal model. In addition, a comparison is made between the partial least squares (PLS) model and the LS-SVM model. Finally, the optimal LS-SVM model with the optimal feature subset is applied to inline measurement of particle concentrations in the mixing process. The results show that the proposed method is reliable and accurate for inline measurement of particle concentrations in multicomponent suspensions, and the measurement accuracy is sufficiently high for industrial application. Furthermore, the proposed method is applicable to dynamic modeling of nonlinear systems and provides a feasible way to monitor industrial processes.
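LS-SVM regression reduces model training to a single linear solve of the KKT system. The sketch below shows the standard formulation with an RBF kernel; the hyperparameters are illustrative, not those tuned in the paper:

```python
import numpy as np

def lssvm_fit(X, y, gamma=1e4, sigma=0.5):
    """Train an LS-SVM regressor: solve the linear KKT system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    with an RBF kernel K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    K = np.exp(-(X[:, None] - X[None, :]) ** 2 / (2 * sigma ** 2))
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    b, alpha = sol[0], sol[1:]

    def predict(Xnew):
        Knew = np.exp(-(Xnew[:, None] - X[None, :]) ** 2 / (2 * sigma ** 2))
        return Knew @ alpha + b

    return predict

X = np.linspace(0, 2 * np.pi, 30)
y = np.sin(X)       # stand-in for concentration vs. ultrasonic feature
predict = lssvm_fit(X, y)
print(np.max(np.abs(predict(X) - y)))  # small training error
```

Unlike a standard SVM, there is no quadratic program: equality constraints turn training into one dense linear system, which is part of why LS-SVM suits inline use.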
ERIC Educational Resources Information Center
Livingstone, Holly A.; Day, Arla L.
2005-01-01
Despite the popularity of the concept of emotional intelligence(EI), there is much controversy around its definition, measurement, and validity. Therefore, the authors examined the construct and criterion-related validity of an ability-based EI measure (Mayer Salovey Caruso Emotional Intelligence Test [MSCEIT]) and a mixed-model EI measure…
Assessing the Reliability of Curriculum-Based Measurement: An Application of Latent Growth Modeling
ERIC Educational Resources Information Center
Yeo, Seungsoo; Kim, Dong-Il; Branum-Martin, Lee; Wayman, Miya Miura; Espin, Christine A.
2012-01-01
The purpose of this study was to demonstrate the use of Latent Growth Modeling (LGM) as a method for estimating reliability of Curriculum-Based Measurement (CBM) progress-monitoring data. The LGM approach permits the error associated with each measure to differ at each time point, thus providing an alternative method for examining of the…
Gonioreflectometric properties of metal surfaces
NASA Astrophysics Data System (ADS)
Jaanson, P.; Manoocheri, F.; Mäntynen, H.; Gergely, M.; Widlowski, J.-L.; Ikonen, E.
2014-12-01
Angularly resolved measurements of scattered light from surfaces can provide useful information in various fields of research and industry, such as computer graphics and satellite-based Earth observation. In practice, empirical or physics-based models are needed to interpolate the measurement results, because a thorough characterization of the surfaces under all relevant conditions may not be feasible. In this work, plain and anodized metal samples were prepared and measured optically for bidirectional reflectance distribution function (BRDF) and mechanically for surface roughness. Two models for BRDF (the Torrance-Sparrow model and a polarimetric BRDF model) were fitted to the measured values. A better fit was obtained for plain metal surfaces than for anodized surfaces.
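The Torrance-Sparrow model fitted above combines a microfacet distribution, geometric attenuation, and a Fresnel term. A sketch using a Beckmann distribution and Schlick's Fresnel approximation (the roughness and reflectance values are hypothetical, not fitted values from the paper):

```python
import numpy as np

def torrance_sparrow(n, l, v, m=0.3, f0=0.9):
    """Torrance-Sparrow BRDF with a Beckmann facet distribution, the
    usual V-groove geometric attenuation, and a Schlick Fresnel term.
    n, l, v are unit surface-normal, light, and view vectors."""
    h = l + v
    h = h / np.linalg.norm(h)                 # half vector
    nl, nv, nh, vh = n @ l, n @ v, n @ h, v @ h
    tan2 = (1.0 - nh ** 2) / nh ** 2
    D = np.exp(-tan2 / m ** 2) / (np.pi * m ** 2 * nh ** 4)  # Beckmann
    G = min(1.0, 2 * nh * nv / vh, 2 * nh * nl / vh)         # shadow/mask
    F = f0 + (1 - f0) * (1 - vh) ** 5                        # Schlick
    return F * D * G / (4 * nl * nv)

n = np.array([0.0, 0.0, 1.0])
l = np.array([np.sin(np.pi / 4), 0, np.cos(np.pi / 4)])      # 45 deg in
mirror = np.array([-np.sin(np.pi / 4), 0, np.cos(np.pi / 4)])
off = np.array([-np.sin(np.pi / 9), 0, np.cos(np.pi / 9)])
print(torrance_sparrow(n, l, mirror) > torrance_sparrow(n, l, off))  # True
```

Fitting means adjusting m and f0 so this function matches the measured BRDF over the sampled angles; smoother (plain) metals concentrate energy near the mirror direction, which such specular-lobe models capture well.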
Payment models to support population health management.
Huerta, Timothy R; Hefner, Jennifer L; McAlearney, Ann Scheck
2014-01-01
To survey the policy-driven financial controls currently being used to drive physician change in the care of populations. This paper offers a review of current health care payment models and discusses the impact of each on the potential success of PHM initiatives. We present the benefits of a multi-part model, combining visit-based fee-for-service reimbursement with a monthly "care coordination payment" and a performance-based payment system. A multi-part model removes volume-based incentives and promotes efficiency. However, it is predicated on a pay-for-performance framework that requires standardized measurement. Application of this model is limited due to the current lack of standardized measurement of quality goals that are linked to payment incentives. Financial models dictated by health system payers are inextricably linked to the organization and management of health care. There is a need for better measurements and realistic targets as part of a comprehensive system of measurement assessment that focuses on practice redesign, with the goal of standardizing measurement of the structure and process of redesign. Payment reform is a necessary component of an accurate measure of the associations between practice transformation and outcomes important to both patients and society.
NASA Astrophysics Data System (ADS)
Jiang, Wei; Zhou, Jianzhong; Zheng, Yang; Liu, Han
2017-11-01
Accurate degradation tendency measurement is vital for the secure operation of mechanical equipment. However, the existing techniques and methodologies for degradation measurement still face challenges, such as lack of appropriate degradation indicator, insufficient accuracy, and poor capability to track the data fluctuation. To solve these problems, a hybrid degradation tendency measurement method for mechanical equipment based on a moving window and Grey-Markov model is proposed in this paper. In the proposed method, a 1D normalized degradation index based on multi-feature fusion is designed to assess the extent of degradation. Subsequently, the moving window algorithm is integrated with the Grey-Markov model for the dynamic update of the model. Two key parameters, namely the step size and the number of states, contribute to the adaptive modeling and multi-step prediction. Finally, three types of combination prediction models are established to measure the degradation trend of equipment. The effectiveness of the proposed method is validated with a case study on the health monitoring of turbine engines. Experimental results show that the proposed method has better performance, in terms of both measuring accuracy and data fluctuation tracing, in comparison with other conventional methods.
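The Grey part of a Grey-Markov model is typically a GM(1,1) predictor. A sketch of that building block (the Markov state correction and the moving-window update are omitted, and the degradation series is synthetic):

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """GM(1,1) grey model: accumulate the series (AGO), fit the grey
    differential equation dx1/dt + a*x1 = b by least squares on the
    background values, then forecast and de-accumulate."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                       # accumulated series
    z = 0.5 * (x1[1:] + x1[:-1])             # background values
    B = np.column_stack([-z, np.ones_like(z)])
    (a, b), *_ = np.linalg.lstsq(B, x0[1:], rcond=None)
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])  # fit + forecast

# A smoothly growing (hypothetical) degradation index:
series = 10.0 * 1.05 ** np.arange(8)
pred = gm11_forecast(series, steps=1)
print(pred[-1])  # next-step forecast, close to 10 * 1.05**8
```

In a moving-window scheme, the model is refitted on the most recent window at each step, which is what gives the method its ability to track fluctuating data.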
Measurement-based reliability prediction methodology. M.S. Thesis
NASA Technical Reports Server (NTRS)
Linn, Linda Shen
1991-01-01
In the past, analytical and measurement based models were developed to characterize computer system behavior. An open issue is how these models can be used, if at all, for system design improvement. The issue is addressed here. A combined statistical/analytical approach to use measurements from one environment to model the system failure behavior in a new environment is proposed. A comparison of the predicted results with the actual data from the new environment shows a close correspondence.
Temperature Measurement and Numerical Prediction in Machining Inconel 718.
Díaz-Álvarez, José; Tapetado, Alberto; Vázquez, Carmen; Miguélez, Henar
2017-06-30
Thermal issues are critical when machining Ni-based superalloy components designed for high temperature applications. The low thermal conductivity and extreme strain hardening of this family of materials result in elevated temperatures around the cutting area. This elevated temperature could lead to machining-induced damage such as phase changes and residual stresses, resulting in reduced service life of the component. Measurement of temperature during machining is crucial in order to control the cutting process, avoiding workpiece damage. On the other hand, the development of predictive tools based on numerical models helps in defining machining processes and obtaining difficult-to-measure parameters such as the penetration of the heated layer. However, the validation of numerical models strongly depends on the accurate measurement of physical parameters such as temperature, ensuring the calibration of the model. This paper focuses on the measurement and prediction of temperature during the machining of Ni-based superalloys. The temperature sensor was based on a fiber-optic two-color pyrometer developed for localized temperature measurements in turning of Inconel 718. The sensor is capable of measuring temperature in the range of 250 to 1200 °C. Temperature evolution is recorded in a lathe at different feed rates and cutting speeds. Measurements were used to calibrate a simplified numerical model for prediction of temperature fields during turning.
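Two-color (ratio) pyrometry infers temperature from the ratio of spectral intensities at two wavelengths: under the Wien approximation the emissivity cancels for a greybody. A sketch with hypothetical wavelength bands (not the sensor's actual calibration):

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_intensity(lam, T):
    """Greybody spectral intensity (arbitrary scale, Wien limit)."""
    return lam ** -5 * math.exp(-C2 / (lam * T))

def ratio_temperature(i1, i2, lam1, lam2):
    """Temperature from the two-color intensity ratio; equal
    emissivities at the two wavelengths cancel out."""
    r = i1 / i2
    return C2 * (1 / lam2 - 1 / lam1) / math.log(r * (lam1 / lam2) ** 5)

lam1, lam2, T = 1.3e-6, 1.55e-6, 1000.0      # hypothetical bands, K
i1, i2 = wien_intensity(lam1, T), wien_intensity(lam2, T)
print(ratio_temperature(i1, i2, lam1, lam2))  # recovers ~1000.0 K
```

Because only the ratio matters, the reading is insensitive to signal attenuation that affects both channels equally, which is valuable in the harsh optical environment of a cutting zone.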
A game theory-based trust measurement model for social networks.
Wang, Yingjie; Cai, Zhipeng; Yin, Guisheng; Gao, Yang; Tong, Xiangrong; Han, Qilong
2016-01-01
In online social networks, trust is a complex concept: participants want to share information and experiences with as many reliable users as possible, yet the modeling of trust is complicated and application dependent. Modeling trust needs to consider interaction history, recommendations, user behaviors and so on, and is therefore an important focus for online social networks. We propose a game theory-based trust measurement model for social networks. The trust degree is calculated from three aspects, service reliability, feedback effectiveness, and recommendation credibility, to obtain a more accurate result. In addition, to alleviate the free-riding problem, we propose a game theory-based punishment mechanism for specific trust and global trust, respectively. We prove that the proposed trust measurement model is effective, and that the free-riding problem can be resolved effectively by adding the proposed punishment mechanism.
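A toy sketch of the aggregation and punishment ideas (the weights and penalty factor are placeholders; the paper derives these quantities game-theoretically):

```python
def trust_degree(service, feedback, recommendation, weights=(0.4, 0.3, 0.3)):
    """Combine the three trust aspects named in the abstract --
    service reliability, feedback effectiveness, recommendation
    credibility -- into a single score.  Weights are hypothetical."""
    ws, wf, wr = weights
    return ws * service + wf * feedback + wr * recommendation

def punish(trust, defected, penalty=0.5):
    """Simple free-rider punishment: a detected defection scales the
    participant's trust down by a penalty factor."""
    return trust * penalty if defected else trust

t = trust_degree(0.9, 0.8, 0.7)
print(round(t, 3))                         # 0.81
print(round(punish(t, defected=True), 3))  # 0.405
```

The punishment step is what removes the incentive to free-ride: consuming reliable information without contributing lowers one's own trust, and hence future access.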
NASA Astrophysics Data System (ADS)
Mu, Nan; Wang, Kun; Xie, Zexiao; Ren, Ping
2017-05-01
To realize online rapid measurement for complex workpieces, a flexible measurement system based on an articulated industrial robot with a structured light sensor mounted on the end-effector is developed. A method for calibrating the system parameters is proposed in which the hand-eye transformation parameters and the robot kinematic parameters are synthesized in the calibration process. An initial hand-eye calibration is first performed using a standard sphere as the calibration target. By applying the modified complete and parametrically continuous method, we establish a synthesized kinematic model that combines the initial hand-eye transformation and distal link parameters as a whole with the sensor coordinate system as the tool frame. According to the synthesized kinematic model, an error model is constructed based on spheres' center-to-center distance errors. Consequently, the error model parameters can be identified in a calibration experiment using a three-standard-sphere target. Furthermore, the redundancy of error model parameters is eliminated to ensure the accuracy and robustness of the parameter identification. Calibration and measurement experiments are carried out based on an ER3A-C60 robot. The experimental results show that the proposed calibration method enjoys high measurement accuracy, and this efficient and flexible system is suitable for online measurement in industrial scenes.
van IJsseldijk, E A; Valstar, E R; Stoel, B C; Nelissen, R G H H; Baka, N; Van't Klooster, R; Kaptein, B L
2016-08-01
An important measure for the diagnosis and monitoring of knee osteoarthritis is the minimum joint space width (mJSW). This requires accurate alignment of the x-ray beam with the tibial plateau, which may not be accomplished in practice. We investigate the feasibility of a new mJSW measurement method from stereo radiographs using 3D statistical shape models (SSM) and evaluate its sensitivity to changes in the mJSW and its robustness to variations in patient positioning and bone geometry. A validation study was performed using five cadaver specimens. The actual mJSW was varied and images were acquired with variation in the cadaver positioning. For comparison purposes, the mJSW was also assessed from plain radiographs. To study the influence of SSM model accuracy, the 3D mJSW measurement was repeated with models from the actual bones, obtained from CT scans. The SSM-based measurement method was more robust than the conventional 2D method, giving consistent output under varying measurement circumstances and showing that the 3D reconstruction indeed reduces the influence of patient positioning. However, the SSM-based method showed comparable sensitivity to changes in the mJSW with respect to the conventional method. The CT-based measurement was more accurate than the SSM-based measurement (smallest detectable differences 0.55 mm versus 0.82 mm, respectively). The proposed measurement method is not a substitute for the conventional 2D measurement due to limitations in the SSM model accuracy. However, further improvements to the model accuracy and the optimisation technique are possible. Combined with the promising options for applications using quantitative information on bone morphology, SSM-based 3D reconstructions of natural knees are attractive for further development. Cite this article: E. A. van IJsseldijk, E. R. Valstar, B. C. Stoel, R. G. H. H. Nelissen, N. Baka, R. van't Klooster, B. L. Kaptein.
Three dimensional measurement of minimum joint space width in the knee from stereo radiographs using statistical shape models. Bone Joint Res 2016;320-327. DOI: 10.1302/2046-3758.58.2000626. © 2016 van IJsseldijk et al.
Measuring the Perceived Quality of an AR-Based Learning Application: A Multidimensional Model
ERIC Educational Resources Information Center
Pribeanu, Costin; Balog, Alexandru; Iordache, Dragos Daniel
2017-01-01
Augmented reality (AR) technologies could enhance learning in several ways. The quality of an AR-based educational platform is a combination of key features that manifests in usability, usefulness, and enjoyment for the learner. In this paper, we present a multidimensional model to measure the quality of an AR-based application as perceived by…
Olszewski, Raphael; Szymor, Piotr; Kozakiewicz, Marcin
2014-12-01
Our study aimed to determine the accuracy of a low-cost, paper-based 3D printer by comparing a dry human mandible to its corresponding three-dimensional (3D) model using a 3D measuring arm. One dry human mandible and its corresponding printed model were evaluated. The model was produced using DICOM data from cone beam computed tomography. The data were imported into Maxilim software, wherein automatic segmentation was performed, and the STL file was saved. These data were subsequently analysed, repaired, cut and prepared for printing with netfabb software. These prepared data were used to create a paper-based model of a mandible with an MCor Matrix 300 printer. Seventy-six anatomical landmarks were chosen and measured 20 times on the mandible and the model using a MicroScribe G2X 3D measuring arm. The distances between all the selected landmarks were measured and compared. Only landmarks with a point inaccuracy less than 30% were used in further analyses. The mean absolute difference for the selected 2016 measurements was 0.36 ± 0.29 mm. The mean relative difference was 1.87 ± 3.14%; however, the measurement length significantly influenced the relative difference. The accuracy of the 3D model printed using the paper-based, low-cost 3D Matrix 300 printer was acceptable. The average error was no greater than that measured with other types of 3D printers. The mean relative difference should not be considered the best way to compare studies. The point inaccuracy methodology proposed in this study may be helpful in future studies concerned with evaluating the accuracy of 3D rapid prototyping models. Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan W.
2014-01-01
This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steadystate information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
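The residual-monitoring idea can be sketched with a simple k-sigma threshold on the residual between sensed and model-predicted outputs; the threshold rule and fault magnitude below are common illustrative choices, not necessarily the paper's:

```python
import numpy as np

def detect_anomalies(sensed, predicted, k=3.0, n_baseline=50):
    """Monitor residuals between sensed and model-predicted outputs;
    flag samples whose residual deviates from the nominal (baseline)
    residual mean by more than k baseline standard deviations."""
    r = np.asarray(sensed) - np.asarray(predicted)
    mu = r[:n_baseline].mean()
    sigma = r[:n_baseline].std(ddof=1)
    return np.abs(r - mu) > k * sigma

rng = np.random.default_rng(2)
predicted = np.zeros(200)                    # nominal model output
sensed = rng.normal(0, 0.1, size=200)        # sensor noise around nominal
sensed[120:] += 1.0                          # seeded fault: a bias step
flags = detect_anomalies(sensed, predicted)
print(flags[:120].sum(), flags[120:].sum())  # few false alarms, many hits
```

As the abstract notes, detection quality hinges on the model fidelity: a better trim-point model shrinks the nominal residuals, letting the same threshold catch smaller faults.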
A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan Walker
2015-01-01
This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
Luminance-model-based DCT quantization for color image compression
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Peterson, Heidi A.
1992-01-01
A model is developed to approximate visibility thresholds for discrete cosine transform (DCT) coefficient quantization error based on the peak-to-peak luminance of the error image. Experimentally measured visibility thresholds for R, G, and B DCT basis functions can be predicted by a simple luminance-based detection model. This model allows DCT coefficient quantization matrices to be designed for display conditions other than those of the experimental measurements: other display luminances, other veiling luminances, and other spatial frequencies (different pixel spacings, viewing distances, and aspect ratios).
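The design rule implied above (quantization step set to twice the model-predicted visibility threshold, so quantization error stays just below threshold) can be sketched as follows. The parabola-in-log-frequency threshold model and every parameter value here are illustrative assumptions, not the paper's fitted luminance model:

```python
import numpy as np

def dct_frequencies(n, pixels_per_degree):
    """Radial spatial frequency (cycles/degree) of each DCT basis function."""
    f = np.arange(n) / (2.0 * n) * pixels_per_degree
    fi, fj = np.meshgrid(f, f, indexing="ij")
    return np.sqrt(fi**2 + fj**2)

def quant_matrix(n=8, pixels_per_degree=32.0, t_min=0.01, f_peak=4.0, k=1.0):
    """Quantization step = 2 * predicted visibility threshold.
    Threshold model: parabola in log frequency (assumed, illustrative)."""
    f = dct_frequencies(n, pixels_per_degree)
    log_t = np.log10(t_min) + k * (np.log10(np.maximum(f, 1e-3))
                                   - np.log10(f_peak)) ** 2
    return 2.0 * 10.0 ** log_t
```

Changing `pixels_per_degree` reproduces the abstract's point that one model can generate matrices for different viewing distances and pixel spacings.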
NASA Astrophysics Data System (ADS)
Zhong, Xuemin; Liu, Hongqi; Mao, Xinyong; Li, Bin; He, Songping; Peng, Fangyu
2018-05-01
Large multi-axis propeller-measuring machines have two types of geometric error: position-independent geometric errors (PIGEs) and position-dependent geometric errors (PDGEs), both of which have significant effects on the volumetric error of the measuring tool relative to the worktable. This paper focuses on modeling, identifying and compensating for the volumetric error of the measuring machine. A volumetric error model in the base coordinate system is established based on screw theory considering all the geometric errors. In order to fully identify all the geometric error parameters, a new method for systematic measurement and identification is proposed. All the PIGEs of adjacent axes and the six PDGEs of the linear axes are identified with a laser tracker using the proposed model. Finally, a volumetric error compensation strategy is presented and an inverse kinematic solution for compensation is proposed. The final measuring and compensation experiments have further verified the efficiency and effectiveness of the measuring and identification method, indicating that the method can be used in volumetric error compensation for large machine tools.
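A minimal sketch of the underlying idea, under a small-angle assumption: each axis contributes a small error motion (three translational, three rotational terms), and composing these along the kinematic chain gives the volumetric error at the tool point. The specific error values below are hypothetical:

```python
import numpy as np

def error_transform(dx, dy, dz, ax, ay, az):
    """Homogeneous transform for one axis's six geometric errors
    (three translational offsets, three small-angle rotations)."""
    T = np.eye(4)
    T[:3, 3] = [dx, dy, dz]
    T[:3, :3] += np.array([[0.0, -az,  ay],
                           [ az, 0.0, -ax],
                           [-ay,  ax, 0.0]])  # small-angle skew matrix
    return T

def volumetric_error(error_transforms, tool_point):
    """Compose per-axis error transforms along the chain and return the
    tool point's deviation from its nominal position."""
    T = np.eye(4)
    for E in error_transforms:
        T = T @ E
    p = np.append(np.asarray(tool_point, float), 1.0)
    return (T @ p)[:3] - np.asarray(tool_point, float)
```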
O'Connell, Dylan; Shaverdian, Narek; Kishan, Amar U; Thomas, David H; Dou, Tai H; Lewis, John H; Lamb, James M; Cao, Minsong; Tenn, Stephen; Percy, Lee P; Low, Daniel A
To compare lung tumor motion measured with a model-based technique to commercial 4-dimensional computed tomography (4DCT) scans and describe a workflow for using model-based 4DCT as a clinical simulation protocol. Twenty patients were imaged using a model-based technique and commercial 4DCT. Tumor motion was measured on each commercial 4DCT dataset and was calculated on model-based datasets for 3 breathing amplitude percentile intervals: 5th to 85th, 5th to 95th, and 0th to 100th. Internal target volumes (ITVs) were defined on the 4DCT and 5th to 85th interval datasets and compared using Dice similarity. Images were evaluated for noise and rated by 2 radiation oncologists for artifacts. Mean differences in tumor motion magnitude between commercial and model-based images were 0.47 ± 3.0, 1.63 ± 3.17, and 5.16 ± 4.90 mm for the 5th to 85th, 5th to 95th, and 0th to 100th amplitude intervals, respectively. Dice coefficients between ITVs defined on commercial and 5th to 85th model-based images had a mean value of 0.77 ± 0.09. Single standard deviation image noise was 11.6 ± 9.6 HU in the liver and 6.8 ± 4.7 HU in the aorta for the model-based images compared with 57.7 ± 30 and 33.7 ± 15.4 for commercial 4DCT. Mean model error within the ITV regions was 1.71 ± 0.81 mm. Model-based images exhibited reduced presence of artifacts at the tumor compared with commercial images. Tumor motion measured with the model-based technique using the 5th to 85th percentile breathing amplitude interval corresponded more closely to commercial 4DCT than the 5th to 95th or 0th to 100th intervals, which showed greater motion on average. The model-based technique tended to display increased tumor motion when breathing amplitude intervals wider than 5th to 85th were used because of the influence of unusually deep inhalations. These results suggest that care must be taken in selecting the appropriate interval during image generation when using model-based 4DCT methods. 
Copyright © 2017 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
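The Dice similarity used above to compare ITVs is straightforward to compute from boolean segmentation masks; this sketch uses a toy 2-D example in place of real contour volumes:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity 2|A∩B| / (|A| + |B|) for boolean segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# toy "volumes": 8 voxels each, 4 voxels of overlap -> Dice = 0.5
a = np.zeros((4, 4), dtype=bool); a[:, :2] = True
b = np.zeros((4, 4), dtype=bool); b[:, 1:3] = True
```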
Intrathoracic airway measurement: ex-vivo validation
NASA Astrophysics Data System (ADS)
Reinhardt, Joseph M.; Raab, Stephen A.; D'Souza, Neil D.; Hoffman, Eric A.
1997-05-01
High-resolution x-ray CT (HRCT) provides detailed images of the lungs and bronchial tree. HRCT-based imaging and quantitation of peripheral bronchial airway geometry provides a valuable tool for assessing regional airway physiology. Such measurements have been used to address physiological questions related to the mechanics of airway collapse in sleep apnea, the measurement of airway response to broncho-constriction agents, and to evaluate and track the progression of disease affecting the airways, such as asthma and cystic fibrosis. Significant attention has been paid to the measurement of extra- and intra-thoracic airways in 2D sections from volumetric x-ray CT. A variety of manual and semi-automatic techniques have been proposed for airway geometry measurement, including the use of standardized display window and level settings for caliper measurements, methods based on manual or semi-automatic border tracing, and more objective, quantitative approaches such as the use of the 'half-max' criteria. A recently proposed measurement technique uses a model-based deconvolution to estimate the location of the inner and outer airway walls. Validation using a plexiglass phantom indicates that the model-based method is more accurate than the half-max approach for thin-walled structures. In vivo validation of these airway measurement techniques is difficult because of the problems in identifying a reliable measurement 'gold standard.' In this paper we report on ex vivo validation of the half-max and model-based methods using an excised pig lung. The lung is sliced into thin sections of tissue and scanned using an electron beam CT scanner. Airways of interest are measured from the CT images, and also measured using a microscope and micrometer to obtain a measurement gold standard.
The results show no significant difference between the model-based measurements and the gold standard, while the half-max estimates exhibited a measurement bias and were significantly different from the gold standard.
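The 'half-max' criterion compared above can be illustrated on a 1-D intensity profile across an airway wall: the wall edges are placed where intensity crosses halfway between background and peak. The sub-voxel interpolation details here are an assumed implementation, not the paper's exact procedure:

```python
import numpy as np

def half_max_crossings(profile):
    """Locate inner/outer wall edges where intensity crosses halfway between
    the background (minimum) and the wall peak, with linear sub-voxel
    interpolation on each side."""
    profile = np.asarray(profile, dtype=float)
    half = profile.min() + 0.5 * (profile.max() - profile.min())
    idx = np.flatnonzero(profile >= half)
    i0, i1 = idx[0], idx[-1]
    left = (i0 - (profile[i0] - half) / (profile[i0] - profile[i0 - 1])
            if i0 > 0 else float(i0))
    right = (i1 + (profile[i1] - half) / (profile[i1] - profile[i1 + 1])
             if i1 + 1 < len(profile) else float(i1))
    return left, right
```

The distance `right - left` estimates the wall thickness along the sampled ray; the abstract's point is that this estimate is biased for thin walls, which the model-based deconvolution avoids.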
NASA Astrophysics Data System (ADS)
Hashimoto, Osamu; Mizokami, Osamu
A method for measuring radar cross section (RCS) based on range-Doppler imaging is discussed. In this method, the measured targets are rotated and the Doppler frequencies caused by each scattering element along the targets are analyzed by FFT. Using this method, each scattered power peak along the flying model is measured. It is found that each part of the RCS of a flying model can be measured and that the RCS of its main wing (about 46 dB/sq cm) is greater than that of its body (about 20-30 dB/sq cm).
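The Doppler-to-cross-range mapping behind the method can be sketched as follows, assuming a constant rotation rate and a single range gate; the signal and parameter values are simulated for illustration:

```python
import numpy as np

def cross_range_profile(echo, prf, wavelength, omega):
    """FFT of the slow-time echo gives scattered power versus Doppler shift;
    a scatterer at cross-range x on a target rotating at omega (rad/s)
    appears at fd = 2*omega*x/wavelength, so Doppler maps to cross-range."""
    power = np.abs(np.fft.fftshift(np.fft.fft(echo))) ** 2
    fd = np.fft.fftshift(np.fft.fftfreq(len(echo), d=1.0 / prf))
    cross_range = fd * wavelength / (2.0 * omega)
    return cross_range, power

# one simulated scatterer: 100 Hz Doppler -> x = 100 * 0.03 / (2 * 10) = 0.15 m
t = np.arange(1000) / 1000.0
x, p = cross_range_profile(np.exp(2j * np.pi * 100.0 * t), 1000.0, 0.03, 10.0)
```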
ERIC Educational Resources Information Center
Lee, Il-Sun; Byeon, Jung-Ho; Kim, Young-shin; Kwon, Yong-Ju
2014-01-01
The purpose of this study was to develop a model for measuring experimental design ability based on functional magnetic resonance imaging (fMRI) during biological inquiry. More specifically, the researchers developed an experimental design task that measures experimental design ability. Using the developed experimental design task, they measured…
Model-Based, Noninvasive Monitoring of Intracranial Pressure
2013-07-01
patients. A physiologically based model relates ICP to simultaneously measured waveforms of arterial blood pressure (ABP), obtained via radial... ABP and CBFV are currently measured as the clinical standard of care. The project's major accomplishments include: assembling a suitable system for...synchronized arterial blood pressure (ABP) and cerebral blood flow velocity (CBFV) waveform measurements that can be obtained quite routinely. Our processing
Computer Models of Personality: Implications for Measurement
ERIC Educational Resources Information Center
Cranton, P. A.
1976-01-01
Current research on computer models of personality is reviewed and categorized under five headings: (1) models of belief systems; (2) models of interpersonal behavior; (3) models of decision-making processes; (4) prediction models; and (5) theory-based simulations of specific processes. The use of computer models in personality measurement is…
NASA Astrophysics Data System (ADS)
Odman, M. T.; Hu, Y.; Russell, A.; Chai, T.; Lee, P.; Shankar, U.; Boylan, J.
2012-12-01
Regulatory air quality modeling, such as State Implementation Plan (SIP) modeling, requires that model performance meets recommended criteria in the base-year simulations using period-specific, estimated emissions. The goal of the performance evaluation is to assure that the base-year modeling accurately captures the observed chemical reality of the lower troposphere. Any significant deficiencies found in the performance evaluation must be corrected before any base-case (with typical emissions) and future-year modeling is conducted. Corrections are usually made to model inputs such as emission-rate estimates or meteorology and/or to the air quality model itself, in modules that describe specific processes. Use of ground-level measurements that follow approved protocols is recommended for evaluating model performance. However, ground-level monitoring networks are spatially sparse, especially for particulate matter. Satellite retrievals of atmospheric chemical properties such as aerosol optical depth (AOD) provide spatial coverage that can compensate for the sparseness of ground-level measurements. Satellite retrievals can also help diagnose potential model or data problems in the upper troposphere. It is possible to achieve good model performance near the ground, but have, for example, erroneous sources or sinks in the upper troposphere that may result in misleading and unrealistic responses to emission reductions. Despite these advantages, satellite retrievals are rarely used in model performance evaluation, especially for regulatory modeling purposes, due to the high uncertainty in retrievals associated with various contaminations, for example by clouds. In this study, 2007 was selected as the base year for SIP modeling in the southeastern U.S. 
Performance of the Community Multiscale Air Quality (CMAQ) model, at a 12-km horizontal resolution, for this annual simulation is evaluated using both recommended ground-level measurements and non-traditional satellite retrievals. Evaluation results are assessed against recommended criteria and peer studies in the literature. Further analysis is conducted, based upon these assessments, to discover likely errors in model inputs and potential deficiencies in the model itself. Correlations as well as differences in input errors and model deficiencies revealed by ground-level measurements versus satellite observations are discussed. Additionally, sensitivity analyses are employed to investigate errors in emission-rate estimates using either ground-level measurements or satellite retrievals, and the results are compared against each other considering observational uncertainties. Recommendations are made for how to effectively utilize satellite retrievals in regulatory air quality modeling.
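The performance metrics typically assessed against recommended criteria in such evaluations (mean bias, RMSE, normalized mean bias and error) can be computed from paired model/observation values as, for example:

```python
import numpy as np

def performance_stats(model, obs):
    """Common model-performance metrics against paired ground-level observations."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    diff = model - obs
    return {
        "mean_bias": diff.mean(),
        "rmse": np.sqrt((diff ** 2).mean()),
        "nmb": diff.sum() / obs.sum(),          # normalized mean bias
        "nme": np.abs(diff).sum() / obs.sum(),  # normalized mean error
    }
```

The same function applies whether the paired values come from monitors or from gridded satellite retrievals matched to model columns.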
Weil, Joyce; Hutchinson, Susan R; Traxler, Karen
2014-11-01
Data from the Women's Health and Aging Study were used to test a model of factors explaining depressive symptomology. The primary purpose of the study was to explore the association between performance-based measures of functional ability and depression and to examine the role of self-rated physical difficulties and perceived instrumental support in mediating the relationship between performance-based functioning and depression. The inclusion of performance-based measures allows for the testing of functional ability as a clinical precursor to disability and depression: a critical, but rarely examined, association in the disablement process. Structural equation modeling supported the overall fit of the model and found an indirect relationship between performance-based functioning and depression, with perceived physical difficulties serving as a significant mediator. Our results highlight the complementary nature of performance-based and self-rated measures and the importance of including perception of self-rated physical difficulties when examining depression in older persons. © The Author(s) 2014.
Gifford, Wendy; Graham, Ian D; Ehrhart, Mark G; Davies, Barbara L; Aarons, Gregory A
2017-01-01
Leadership in health care is instrumental to creating a supportive organizational environment and positive staff attitudes for implementing evidence-based practices to improve patient care and outcomes. The purpose of this study is to demonstrate the alignment of the Ottawa Model of Implementation Leadership (O-MILe), a theoretical model for developing implementation leadership, with the Implementation Leadership Scale (ILS), an empirically validated tool for measuring implementation leadership. A secondary objective is to describe the methodological process for aligning concepts of a theoretical model with an independently established measurement tool for evaluating theory-based interventions. Modified template analysis was conducted to deductively map items of the ILS onto concepts of the O-MILe. An iterative process was used in which the model and scale developers (n=5) appraised the relevance, conceptual clarity, and fit of each ILS item with the O-MILe concepts through individual feedback and group discussions until consensus was reached. All 12 items of the ILS correspond to at least one O-MILe concept, demonstrating compatibility of the ILS as a measurement tool for the O-MILe theoretical constructs. The O-MILe provides a theoretical basis for developing implementation leadership, and the ILS is a compatible tool for measuring leadership based on the O-MILe. Used together, the O-MILe and ILS provide an evidence- and theory-based approach for developing and measuring leadership for implementing evidence-based practices in health care. Template analysis offers a convenient approach for determining the compatibility of independently developed evaluation tools to test theoretical models.
Gifford, Wendy; Graham, Ian D; Ehrhart, Mark G; Davies, Barbara L; Aarons, Gregory A
2017-01-01
Purpose Leadership in health care is instrumental to creating a supportive organizational environment and positive staff attitudes for implementing evidence-based practices to improve patient care and outcomes. The purpose of this study is to demonstrate the alignment of the Ottawa Model of Implementation Leadership (O-MILe), a theoretical model for developing implementation leadership, with the Implementation Leadership Scale (ILS), an empirically validated tool for measuring implementation leadership. A secondary objective is to describe the methodological process for aligning concepts of a theoretical model with an independently established measurement tool for evaluating theory-based interventions. Methods Modified template analysis was conducted to deductively map items of the ILS onto concepts of the O-MILe. An iterative process was used in which the model and scale developers (n=5) appraised the relevance, conceptual clarity, and fit of each ILS item with the O-MILe concepts through individual feedback and group discussions until consensus was reached. Results All 12 items of the ILS correspond to at least one O-MILe concept, demonstrating compatibility of the ILS as a measurement tool for the O-MILe theoretical constructs. Conclusion The O-MILe provides a theoretical basis for developing implementation leadership, and the ILS is a compatible tool for measuring leadership based on the O-MILe. Used together, the O-MILe and ILS provide an evidence- and theory-based approach for developing and measuring leadership for implementing evidence-based practices in health care. Template analysis offers a convenient approach for determining the compatibility of independently developed evaluation tools to test theoretical models. PMID:29355212
Xuan, Ziming; Chaloupka, Frank J; Blanchette, Jason G; Nguyen, Thien H; Heeren, Timothy C; Nelson, Toben F; Naimi, Timothy S
2015-03-01
U.S. studies contribute heavily to the literature about the tax elasticity of demand for alcohol, and most U.S. studies have relied upon specific excise (volume-based) taxes for beer as a proxy for alcohol taxes. The purpose of this paper was to compare this conventional alcohol tax measure with more comprehensive tax measures (incorporating multiple tax and beverage types) in analyses of the relationship between alcohol taxes and adult binge drinking prevalence in U.S. states. Data on U.S. state excise, ad valorem and sales taxes from 2001 to 2010 were obtained from the Alcohol Policy Information System and other sources. For 510 state-year strata, we developed a series of weighted tax-per-drink measures that incorporated various combinations of tax and beverage types, and related these measures to state-level adult binge drinking prevalence data from the Behavioral Risk Factor Surveillance System surveys. In analyses pooled across all years, models using the combined tax measure explained approximately 20% of state binge drinking prevalence, and documented more negative tax elasticity (-0.09, P = 0.02 versus -0.005, P = 0.63) and price elasticity (-1.40, P < 0.01 versus -0.76, P = 0.15) compared with models using only the volume-based tax. In analyses stratified by year, the R-squares for models using the beer combined tax measure were stable across the study period (P = 0.11), while the R-squares for models relying only on the volume-based tax declined (P < 0.01). Compared with volume-based tax measures, combined tax measures (i.e. those incorporating volume-based tax and value-based taxes) yield substantial improvement in model fit and find more negative tax elasticity and price elasticity predicting adult binge drinking prevalence in U.S. states. © 2014 Society for the Study of Addiction.
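One way such a combined, consumption-weighted tax-per-drink measure might be constructed is sketched below; the construction and all rates and shares are illustrative assumptions, not the study's actual weighting scheme:

```python
def tax_per_drink(volume_tax, ad_valorem_rate, sales_tax_rate, price):
    """Combined tax on one drink: specific excise plus value-based taxes
    (ad valorem and sales) applied to the drink price."""
    return volume_tax + (ad_valorem_rate + sales_tax_rate) * price

def combined_state_measure(beverages):
    """Consumption-share-weighted average across beverage types.
    `beverages` maps type -> (share, volume_tax, ad_valorem, sales_tax, price)."""
    return sum(share * tax_per_drink(v, a, s, p)
               for share, v, a, s, p in beverages.values())

# hypothetical state with equal beer/spirits consumption shares
state = {"beer":    (0.5, 0.05, 0.00, 0.06, 1.00),
         "spirits": (0.5, 0.10, 0.05, 0.06, 2.00)}
```

The abstract's comparison amounts to regressing binge drinking prevalence on this combined measure versus on the beer volume tax alone.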
Xuan, Ziming; Chaloupka, Frank J.; Blanchette, Jason G.; Nguyen, Thien H.; Heeren, Timothy C.; Nelson, Toben F.; Naimi, Timothy S.
2015-01-01
Aims U.S. studies contribute heavily to the literature about the tax elasticity of demand for alcohol, and most U.S. studies have relied upon specific excise (volume-based) taxes for beer as a proxy for alcohol taxes. The purpose of this paper was to compare this conventional alcohol tax measure with more comprehensive tax measures (incorporating multiple tax and beverage types) in analyses of the relationship between alcohol taxes and adult binge drinking prevalence in U.S. states. Design Data on U.S. state excise, ad valorem and sales taxes from 2001 to 2010 were obtained from the Alcohol Policy Information System and other sources. For 510 state-year strata, we developed a series of weighted tax-per-drink measures that incorporated various combinations of tax and beverage types, and related these measures to state-level adult binge drinking prevalence data from the Behavioral Risk Factor Surveillance System surveys. Findings In analyses pooled across all years, models using the combined tax measure explained approximately 20% of state binge drinking prevalence, and documented more negative tax elasticity (−0.09, P=0.02 versus −0.005, P=0.63) and price elasticity (−1.40, P<0.01 versus −0.76, P=0.15) compared with models using only the volume-based tax. In analyses stratified by year, the R-squares for models using the beer combined tax measure were stable across the study period (P=0.11), while the R-squares for models relying only on the volume-based tax declined (P<0.01). Conclusions Compared with volume-based tax measures, combined tax measures (i.e. those incorporating volume-based tax and value-based taxes) yield substantial improvement in model fit and find more negative tax elasticity and price elasticity predicting adult binge drinking prevalence in U.S. states. PMID:25428795
Magneto-mechanical modeling of electrical steel sheets
NASA Astrophysics Data System (ADS)
Aydin, U.; Rasilo, P.; Martin, F.; Singh, D.; Daniel, L.; Belahcen, A.; Rekik, M.; Hubert, O.; Kouhia, R.; Arkkio, A.
2017-10-01
A simplified multiscale approach and a Helmholtz free energy based approach for modeling the magneto-mechanical behavior of electrical steel sheets are compared. The models are identified from uniaxial magneto-mechanical measurements of two different electrical steel sheets which show different magneto-elastic behavior. Comparison with the available measurement data of the materials shows that both models successfully model the magneto-mechanical behavior of one of the studied materials, whereas for the second material only the Helmholtz free energy based approach is successful.
Models of filter-based particle light absorption measurements
NASA Astrophysics Data System (ADS)
Hamasha, Khadeejeh M.
Light absorption by aerosol is very important in the visible, near-UV, and near-IR regions of the electromagnetic spectrum. Aerosol particles in the atmosphere have a great influence on the flux of solar energy, and also impact health in a negative sense when they are breathed into the lungs. Aerosol absorption measurements are usually performed by filter-based methods that are derived from the change in light transmission through a filter where particles have been deposited. These methods suffer from interference between light-absorbing and light-scattering aerosol components. The Aethalometer is the most commonly used filter-based instrument for aerosol light absorption measurement. This dissertation describes new understanding of aerosol light absorption obtained by the filter method. The theory uses a multiple scattering model for the combination of filter and particle optics. The theory is evaluated using Aethalometer data from laboratory and ambient measurements in comparison with photoacoustic measurements of aerosol light absorption. Two models were developed to calculate aerosol light absorption coefficients from the Aethalometer data, and were compared to the in-situ aerosol light absorption coefficients. The first is an approximate model and the second is a "full" model. In the approximate model, two extreme cases of aerosol optics were used to develop a model-based calibration scheme for the 7-wavelength Aethalometer. These cases include those of very strongly scattering aerosols (ammonium sulfate sample) and very strongly absorbing aerosols (kerosene soot sample). In the strong multiple scattering limit, light attenuation is shown to grow as the square root of the total absorption optical depth, rather than linearly with optical depth as is commonly assumed with Beer's law. Two-stream radiative transfer theory was used to develop the full model to calculate the aerosol light absorption coefficients from the Aethalometer data.
This comprehensive model allows for studying very general cases of particles of various sizes embedded in arbitrary filter media. Application of this model to the Reno Aerosol Optics Study (laboratory data) shows that the aerosol light absorption coefficients are about half of the Aethalometer attenuation coefficients, and there is a reasonable agreement between the model-calculated absorption coefficients at 521 nm and the measured photoacoustic absorption coefficients at 532 nm. For ambient data obtained during the Las Vegas study, the model absorption coefficients at 521 nm are larger than the photoacoustic coefficients at 532 nm. Use of the 2-stream model shows that particle penetration depth into the filter has a strong influence on the interpretation of filter-based aerosol light absorption measurements. This is a likely explanation for the difference found between model results for filter-based aerosol light absorption and those from photoacoustic measurements for ambient and laboratory aerosol.
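The contrast between the naive Beer's-law reading of filter attenuation and the square-root behavior described above can be sketched as follows; the calibration constant `c` is an assumed filter-dependent parameter, not a value from the dissertation:

```python
import numpy as np

def attenuation(i0, i):
    """Aethalometer raw signal: ATN = 100 * ln(I0 / I) through the filter."""
    return 100.0 * np.log(i0 / i)

def absorption_beer_lambert(atn):
    """Naive Beer's-law reading: optical depth taken as linear in attenuation."""
    return atn / 100.0

def absorption_multiple_scattering(atn, c):
    """Strong multiple-scattering limit described above: attenuation grows like
    the square root of absorption optical depth, so tau = (ATN / c)**2;
    c is an assumed filter-dependent calibration constant."""
    return (atn / c) ** 2
```

For the same measured ATN, the two readings diverge increasingly as the filter loads, which is the calibration problem the models address.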
Research of autonomous celestial navigation based on new measurement model of stellar refraction
NASA Astrophysics Data System (ADS)
Yu, Cong; Tian, Hong; Zhang, Hui; Xu, Bo
2014-09-01
Autonomous celestial navigation based on stellar refraction has attracted widespread attention for its high accuracy and full autonomy. In this navigation method, establishing an accurate stellar refraction measurement model is the foundation and key issue for achieving high-accuracy navigation. However, the existing measurement models are limited due to the uncertainty of atmospheric parameters. Temperature, pressure and other factors which affect stellar refraction within the height of the earth's stratosphere are researched, and a model of atmospheric variation with altitude is derived on the basis of standard atmospheric data. Furthermore, a novel measurement model of stellar refraction over a continuous range of altitudes from 20 km to 50 km is produced by modifying the fixed-altitude (25 km) measurement model, the state equation including orbital perturbations is established, and a simulation is performed using the improved Extended Kalman Filter. The results show that the new model improves navigation accuracy and has practical application value.
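The filtering framework referred to above is an EKF; its measurement-update step, with the refraction model entering as the nonlinear function h and its Jacobian H, can be sketched generically as follows (the 1-D example values are purely illustrative):

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One EKF measurement update; h is the (nonlinear) refraction measurement
    model and H its Jacobian evaluated at the current state estimate x."""
    y = z - h(x)                      # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

In the paper's setting, x would be the orbital state propagated with perturbations, and z the measured refraction angle.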
Augment clinical measurement using a constraint-based esophageal model
NASA Astrophysics Data System (ADS)
Kou, Wenjun; Acharya, Shashank; Kahrilas, Peter; Patankar, Neelesh; Pandolfino, John
2017-11-01
Quantifying the mechanical properties of the esophageal wall is crucial to understanding impairments of trans-esophageal flow characteristic of several esophageal diseases. However, these data are unavailable owing to technological limitations of current clinical diagnostic instruments, which instead display esophageal luminal cross-sectional area based on intraluminal impedance change. In this work, we developed an esophageal model to predict bolus flow and wall properties based on clinical measurements. The model uses the constraint-based immersed-boundary method developed previously by our group. Specifically, we first approximate the time-dependent wall geometry based on impedance planimetry data on luminal cross-sectional area. We then feed these, along with pressure data, into the model and compute wall tension from the simulated pressure and flow fields, and the material property from the stress-strain relationship. As examples, we applied this model to augment FLIP (Functional Luminal Imaging Probe) measurements in three clinical cases: a normal subject, achalasia, and eosinophilic esophagitis (EoE). Our findings suggest that wall stiffness was greatest in the EoE case, followed by the achalasia case, and then the normal subject. This work is supported by NIH Grants R01 DK56033 and R01 DK079902.
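As a much-simplified stand-in for the constraint-based model, circumferential wall tension can be estimated from intraluminal pressure and luminal cross-sectional area via Laplace's law for a thin-walled tube; this is an illustrative assumption, not the paper's immersed-boundary computation:

```python
import numpy as np

def wall_tension_laplace(pressure, csa):
    """Circumferential wall tension from intraluminal pressure and luminal
    cross-sectional area, T = P * r with r = sqrt(CSA / pi).
    A thin-wall simplification for illustration only."""
    radius = np.sqrt(np.asarray(csa, float) / np.pi)
    return np.asarray(pressure, float) * radius
```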
Performability modeling based on real data: A case study
NASA Technical Reports Server (NTRS)
Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.
1988-01-01
Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.
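The expected-reward computation for such a semi-Markov reward model can be sketched as follows: the embedded chain's stationary distribution, weighted by mean state holding times, gives long-run time fractions, which are then dotted with per-state reward rates. The transition matrix, holding times, and rewards below are hypothetical inputs, not the measured system's:

```python
import numpy as np

def expected_reward_rate(P, mean_holding, reward):
    """Semi-Markov performability sketch: stationary distribution of the
    embedded chain (left eigenvector of P for eigenvalue 1), weighted by
    mean holding times, dotted with per-state reward rates."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = pi / pi.sum()
    time_frac = pi * np.asarray(mean_holding, float)
    time_frac = time_frac / time_frac.sum()
    return float(time_frac @ np.asarray(reward, float))
```

Holding-time distributions enter here only through their means; the abstract's point is that those distributions are not exponential, which rules out a plain Markov model.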
Performability modeling based on real data: A case study
NASA Technical Reports Server (NTRS)
Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.
1987-01-01
Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.
Measures and limits of models of fixation selection.
Wilming, Niklas; Betz, Torsten; Kietzmann, Tim C; König, Peter
2011-01-01
Models of fixation selection are a central tool in the quest to understand how the human mind selects relevant information. Using this tool in the evaluation of competing claims often requires comparing different models' relative performance in predicting eye movements. However, studies use a wide variety of performance measures with markedly different properties, which makes a comparison difficult. We make three main contributions to this line of research: First, we argue for a set of desirable properties, review commonly used measures, and conclude that no single measure unites all desirable properties. However, the area under the ROC curve (a classification measure) and the KL-divergence (a distance measure of probability distributions) combine many desirable properties and allow a meaningful comparison of critical model performance. We give an analytical proof of the linearity of the ROC measure with respect to averaging over subjects and demonstrate an appropriate correction of entropy-based measures like KL-divergence for small sample sizes in the context of eye-tracking data. Second, we provide a lower bound and an upper bound of these measures, based on image-independent properties of fixation data and between-subject consistency, respectively. Based on these bounds it is possible to give a reference frame to judge the predictive power of a model of fixation selection. We provide open-source python code to compute the reference frame. Third, we show that the upper, between-subject consistency bound holds only for models that predict averages of subject populations. Departing from this, we show that incorporating subject-specific viewing behavior can generate predictions which surpass that upper bound. Taken together, these findings lay out the required information that allow a well-founded judgment of the quality of any model of fixation selection and should therefore be reported when a new model is introduced.
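The ROC measure discussed above reduces, for a saliency map, to the probability that the model scores a fixated location above a non-fixated one (ties counting half). A minimal sketch, in which the negative set is every image location, one common convention but an assumption here:

```python
import numpy as np

def fixation_auc(saliency, fixated_xy):
    """Area under the ROC curve: probability that the model scores a fixated
    location above a non-fixated one, with ties counting half."""
    pos = np.array([saliency[y, x] for x, y in fixated_xy])
    neg = saliency.ravel()
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties
```

An analogous sketch for KL-divergence would additionally need the small-sample entropy correction the abstract mentions, since fixation counts per image are low.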
Hocalar, A; Türker, M; Karakuzu, C; Yüzgeç, U
2011-04-01
In this study, five previously developed state estimation methods are examined and compared for the estimation of biomass concentrations in a production-scale fed-batch bioprocess. These methods are: (i) estimation based on a kinetic model of overflow metabolism; (ii) estimation based on a metabolic black-box model; (iii) observer-based estimation; (iv) estimation based on an artificial neural network; and (v) estimation based on differential evolution. Biomass concentrations are estimated from available measurements and compared with experimental data obtained from large-scale fermentations. The advantages and disadvantages of the presented techniques are discussed with regard to accuracy, reproducibility, the number of primary measurements required, and adaptation to different working conditions. Among the various techniques, the metabolic black-box method appears advantageous, although it requires more measurements than the other methods. However, the extra measurements come from instruments commonly employed in industrial environments. This method is used for developing model-based control of fed-batch yeast fermentations. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
Evaluating model accuracy for model-based reasoning
NASA Technical Reports Server (NTRS)
Chien, Steve; Roden, Joseph
1992-01-01
Described here is an approach to automatically assessing the accuracy of various components of a model. In this approach, actual data from the operation of a target system is used to drive statistical measures to evaluate the prediction accuracy of various portions of the model. We describe how these statistical measures of model accuracy can be used in model-based reasoning for monitoring and design. We then describe the application of these techniques to the monitoring and design of the water recovery system of the Environmental Control and Life Support System (ECLSS) of Space Station Freedom.
Error model of geomagnetic-field measurement and extended Kalman-filter based compensation method
Ge, Zhilei; Liu, Suyun; Li, Guopeng; Huang, Yan; Wang, Yanni
2017-01-01
Real-time, accurate measurement of the geomagnetic field is the foundation of high-precision geomagnetic navigation. Existing geomagnetic-field measurement models are essentially simplified models that cannot accurately describe the sources of measurement error. On the basis of a systematic analysis of the sources of geomagnetic-field measurement error, this paper builds a complete measurement model that introduces the previously unconsidered geomagnetic daily-variation field. The paper then proposes an extended Kalman-filter based compensation method, which allows a large amount of measurement data to be used in estimating the parameters, yielding the statistically optimal solution. Experimental results showed that the compensated geomagnetic-field strength remained close to the true value and the measurement error was essentially controlled within 5 nT. In addition, this compensation method has broad applicability owing to its easy data collection and its independence from high-precision measurement instruments. PMID:28445508
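A hedged sketch of the recursive-estimation idea behind such a compensation method. This is a deliberately simplified stand-in, not the paper's full error model: it assumes a scalar measurement m = k*B + b + noise, with B a known reference field strength (e.g. from a geomagnetic model), and estimates the scale factor k and bias b with a Kalman filter over the measurement stream:

```python
import numpy as np

def kf_estimate_scale_bias(field, measured, r=1.0):
    """Recursive (Kalman) estimation of scale k and bias b in m = k*B + b + noise.

    field:    known reference field strengths B_i (e.g. model values, nT)
    measured: corresponding instrument readings m_i
    r:        measurement noise variance (assumed known)
    Returns the estimated state [k, b].
    """
    x = np.zeros(2)              # state [k, b], initially unknown
    P = np.eye(2) * 1e3          # large initial uncertainty
    for B, m in zip(field, measured):
        H = np.array([[B, 1.0]])             # measurement matrix for [k, b]
        S = H @ P @ H.T + r                  # innovation covariance
        K = (P @ H.T) / S                    # Kalman gain
        x = x + (K * (m - H @ x)).ravel()    # state update
        P = (np.eye(2) - K @ H) @ P          # covariance update
    return x
```

With enough measurements the estimates converge to the true scale and bias, after which the compensated field is (m - b) / k.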
Study of indoor radon distribution using measurements and CFD modeling.
Chauhan, Neetika; Chauhan, R P; Joshi, M; Agarwal, T K; Aggarwal, Praveen; Sahoo, B K
2014-10-01
Measurement and/or prediction of indoor radon ((222)Rn) concentration is important due to the impact of radon on indoor air quality and the consequent inhalation hazard. In recent times, computational fluid dynamics (CFD) based modeling has become a cost-effective replacement for experimental methods for the prediction and visualization of indoor pollutant distribution. The aim of this study is to implement CFD-based modeling for studying indoor radon gas distribution. The study focuses on comparing the experimentally measured and CFD-predicted spatial distributions of radon concentration for a model test room. The key inputs for simulation, viz. the radon exhalation rate and the ventilation rate, were measured as part of this study. Validation experiments were performed by measuring radon concentration at different locations in the test room using active (continuous radon monitor) and passive (pin-hole dosimeter) techniques. The modeling predictions were found to match the measurement results reasonably well. The validated model can be used to understand and study the factors affecting indoor radon distribution in more realistic indoor environments. Copyright © 2014 Elsevier Ltd. All rights reserved.
Temperature Measurement and Numerical Prediction in Machining Inconel 718
Tapetado, Alberto; Vázquez, Carmen; Miguélez, Henar
2017-01-01
Thermal issues are critical when machining Ni-based superalloy components designed for high temperature applications. The low thermal conductivity and extreme strain hardening of this family of materials results in elevated temperatures around the cutting area. This elevated temperature could lead to machining-induced damage such as phase changes and residual stresses, resulting in reduced service life of the component. Measurement of temperature during machining is crucial in order to control the cutting process, avoiding workpiece damage. On the other hand, the development of predictive tools based on numerical models helps in the definition of machining processes and the obtainment of difficult to measure parameters such as the penetration of the heated layer. However, the validation of numerical models strongly depends on the accurate measurement of physical parameters such as temperature, ensuring the calibration of the model. This paper focuses on the measurement and prediction of temperature during the machining of Ni-based superalloys. The temperature sensor was based on a fiber-optic two-color pyrometer developed for localized temperature measurements in turning of Inconel 718. The sensor is capable of measuring temperature in the range of 250 to 1200 °C. Temperature evolution is recorded in a lathe at different feed rates and cutting speeds. Measurements were used to calibrate a simplified numerical model for prediction of temperature fields during turning. PMID:28665312
Measuring Equity in Access to Pharmaceutical Services Using Concentration Curve; Model Development.
Davari, Majid; Khorasani, Elahe; Bakhshizade, Zahra; Jafarian Jazi, Marzie; Ghaffari Darab, Mohsen; Maracy, Mohammad Reza
2015-01-01
This paper has two objectives. First, it establishes a model for scoring access to pharmaceutical services. Second, it develops a model for measuring socioeconomic indicators independent of the time and place of study. These two measures are used to measure equity in access to pharmaceutical services using the concentration curve. We prepared an open-ended questionnaire and distributed it to academic experts to elicit their ideas for forming access indicators and assigning a score to each indicator based on the pharmaceutical system. An extensive literature review was undertaken to select indicators for determining the socioeconomic status (SES) of individuals. Experts' opinions were also considered in scoring these indicators. The indicators were weighted by the Stepwise Adoption of Weights and used to develop a model for measuring SES independent of the time and place of study. Nine factors were introduced for assessing access to pharmaceutical services, based on the pharmaceutical systems of middle-income countries. Five indicators were selected for determining the SES of individuals. A model for income classification based on the poverty line was established. Likewise, a model for scoring home status based on the national minimum wage was introduced. In summary, five important findings emerged from this study. These findings may assist researchers in measuring equity in access to pharmaceutical services and in applying a model for determining SES independent of the time and place of study. They could also provide a good opportunity for researchers to compare the results of various studies in a reasonable way, particularly in middle-income countries.
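The concentration curve used above plots the cumulative share of access scores against the cumulative share of the population ranked by SES. A minimal sketch of that computation (illustrative only; the paper's own scoring model is not reproduced, and equal population weights are assumed):

```python
def concentration_curve(ses, access):
    """Cumulative share of access scores, ordering individuals from
    poorest to richest by socioeconomic status (SES).

    Returns the curve ordinates, starting at 0; the abscissae are the
    cumulative population shares 0, 1/n, 2/n, ..., 1. A curve below the
    45-degree diagonal indicates access concentrated among the better-off.
    """
    order = sorted(range(len(ses)), key=lambda i: ses[i])  # poorest first
    total = sum(access)
    cum, curve = 0.0, [0.0]
    for i in order:
        cum += access[i]
        curve.append(cum / total)
    return curve
```

With perfectly equal access the curve coincides with the diagonal, so the associated concentration index (twice the area between curve and diagonal) is zero.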
A COMBINED SPECTROSCOPIC AND PHOTOMETRIC STELLAR ACTIVITY STUDY OF EPSILON ERIDANI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giguere, Matthew J.; Fischer, Debra A.; Zhang, Cyril X. Y.
2016-06-20
We present simultaneous ground-based radial velocity (RV) measurements and space-based photometric measurements of the young and active K dwarf Epsilon Eridani. These measurements provide a data set for exploring methods of identifying and ultimately distinguishing stellar photospheric velocities from Keplerian motion. We compare three methods we have used in exploring this data set: Dalmatian, an MCMC spot modeling code that fits photometric and RV measurements simultaneously; the FF′ method, which uses photometric measurements to predict the stellar activity signal in simultaneous RV measurements; and H α analysis. We show that our H α measurements are strongly correlated with the Microvariability and Oscillations of STars (MOST) telescope photometry, which led to a promising new method based solely on the spectroscopic observations. This new method, which we refer to as the HH′ method, uses H α measurements as input into the FF′ model. While the Dalmatian spot modeling analysis and the FF′ method with MOST space-based photometry are currently more robust, the HH′ method only makes use of one of the thousands of stellar lines in the visible spectrum. By leveraging additional spectral activity indicators, we believe the HH′ method may prove quite useful in disentangling stellar signals.
Tests for detecting overdispersion in models with measurement error in covariates.
Yang, Yingsi; Wong, Man Yu
2015-11-30
Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.
Nicholson, Patricia; Griffin, Patrick; Gillis, Shelley; Wu, Margaret; Dunning, Trisha
2013-09-01
Concern about the process of identifying the underlying competencies that contribute to effective nursing performance has been debated, with a lack of consensus on an approved measurement instrument for assessing clinical performance. Although a number of methodologies are noted in the development of competency-based assessment measures, these studies are not without criticism. The primary aim of the study was to develop and validate a Performance Based Scoring Rubric, which included both analytical and holistic scales. The aim included examining the validity and reliability of the rubric, which was designed to measure clinical competencies in the operating theatre. The fieldwork observations of 32 nurse educators and preceptors assessing the performance of 95 instrument nurses in the operating theatre were used in the calibration of the rubric. The Rasch model, a particular model among item response models, was used to calibrate each item in the rubric in an attempt to improve the measurement properties of the scale. This is done by establishing the 'fit' of the data to the conditions demanded by the Rasch model. Acceptable reliability estimates, specifically a high Cronbach's alpha reliability coefficient (0.940), as well as empirical support for the construct and criterion validity of the rubric, were achieved. Calibration of the Performance Based Scoring Rubric using the Rasch model revealed that the fit statistics for most items were acceptable. The Rasch model offers a number of features for developing and refining healthcare competency-based assessments, improving confidence in measuring clinical performance. It was shown to be useful in developing and validating a competency-based assessment for measuring the competence of the instrument nurse in the operating theatre, with implications for use in other areas of nursing practice. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
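The Rasch model underlying the calibration above has a simple closed form; a minimal sketch for dichotomously scored items (illustrative only; the rubric's actual items, difficulties, and polytomous scales are not reproduced):

```python
import math

def rasch_prob(theta, difficulty):
    """Rasch (1PL) model: probability that a person with ability theta
    succeeds on an item with the given difficulty, both on the logit scale."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

def expected_rubric_score(theta, difficulties):
    """Expected total score over a set of dichotomously scored rubric items."""
    return sum(rasch_prob(theta, d) for d in difficulties)
```

Item fit is then judged by comparing observed responses against these model probabilities, which is the 'fit' assessment the abstract refers to.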
A rough set-based measurement model study on high-speed railway safety operation.
Hu, Qizhou; Tan, Minjia; Lu, Huapu; Zhu, Yun
2018-01-01
To address the safety problems of high-speed railway operation and management, a new method is urgently needed, built on rough set theory and uncertainty measurement theory. The method should carefully consider every factor of high-speed railway operation and realize measurement indexes of safe operation. After analyzing in detail the factors that influence high-speed railway operational safety, a rough measurement model is constructed to describe the operation process. On this basis, this paper redistricts the safety influence factors of high-speed railway operation into 16 measurement indexes, covering staff, vehicle, equipment, and environment. The paper thereby provides a reasonable and effective theoretical method for the multiple-attribute measurement problems of high-speed railway operational safety. Analyzing the operation data of 10 pivotal railway lines in China, the paper uses both the rough set-based measurement model and a value function model (a model for calculating the safety value) to compute the operational safety value. The results show that the safety-value curve obtained with the proposed method has smaller error and greater stability than that of the value function method, which verifies its feasibility and effectiveness.
Evaluation of a watershed model for estimating daily flow using limited flow measurements
USDA-ARS?s Scientific Manuscript database
The Soil and Water Assessment Tool (SWAT) model was evaluated for estimation of continuous daily flow based on limited flow measurements in the Upper Oyster Creek (UOC) watershed. SWAT was calibrated against limited measured flow data and then validated. The Nash-Sutcliffe model Efficiency (NSE) and...
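The Nash-Sutcliffe Efficiency used to evaluate SWAT above has a simple closed form; a minimal sketch (illustrative, not SWAT code):

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe Efficiency: 1 - SSE / variance of the observations.

    NSE = 1 indicates a perfect fit; NSE <= 0 means the simulation is no
    better a predictor than the mean of the observed flows.
    """
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / ss_tot
```

In flow calibration, NSE is typically computed between gauge-measured and model-simulated daily flows over the calibration period.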
Gupta, Nidhi; Heiden, Marina; Mathiassen, Svend Erik; Holtermann, Andreas
2016-05-01
We aimed to develop and evaluate statistical models predicting objectively measured occupational time spent sedentary or in physical activity from self-reported information available in large epidemiological studies and surveys. Two hundred and fourteen blue-collar workers responded to a questionnaire containing personal and work-related variables available in most large epidemiological studies and surveys. The workers also wore accelerometers for 1-4 days, measuring time spent sedentary and in physical activity, defined as non-sedentary time. Least-squares linear regression models were developed, predicting the objectively measured exposures from selected predictors in the questionnaire. A full prediction model based on age, gender, body mass index, job group, self-reported occupational physical activity (OPA), and self-reported occupational sedentary time (OST) explained 63% (adjusted R^2) of the variance of both objectively measured time spent sedentary and in physical activity, since these two exposures are complementary. Single-predictor models based only on self-reported OPA or OST explained 21% and 38%, respectively, of the variance of the objectively measured exposures. Internal validation using bootstrapping suggested that the full and single-predictor models would show almost the same performance in new datasets as in the dataset used for modelling. Both full and single-predictor models based on self-reported information typically available in large epidemiological studies and surveys were able to predict objectively measured occupational time spent sedentary or in physical activity, with explained variances ranging from 21% to 63%.
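The least-squares modelling with adjusted R^2 described above can be sketched as follows (a generic OLS sketch; the predictor names and data are hypothetical, not the study's dataset):

```python
import numpy as np

def fit_and_adjusted_r2(X, y):
    """Ordinary least squares with adjusted R^2.

    X: (n, k) array of predictors (e.g. age, BMI, self-reported OPA/OST).
    y: (n,) array of the objectively measured exposure (e.g. sedentary time).
    Returns (coefficients including intercept, adjusted R^2).
    """
    X1 = np.column_stack([np.ones(len(y)), X])       # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)    # least-squares fit
    resid = y - X1 @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    n, p = X1.shape
    r2 = 1.0 - ss_res / ss_tot
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - p)    # penalize model size
    return beta, r2_adj
```

The adjusted R^2 penalizes adding predictors, which matters when comparing the full model against the single-predictor OPA/OST models.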
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mondy, Lisa Ann; Rao, Rekha Ranjana; Shelden, Bion
We are developing computational models to elucidate the expansion and dynamic filling process of a polyurethane foam, PMDI. The polyurethane of interest is chemically blown, where carbon dioxide is produced via the reaction of water, the blowing agent, and isocyanate. The isocyanate also reacts with polyol in a competing reaction, which produces the polymer. Here we detail the experiments needed to populate a processing model and provide parameters for the model based on these experiments. The model entails solving the conservation equations, including the equations of motion, an energy balance, and two rate equations for the polymerization and foaming reactions, following a simplified mathematical formalism that decouples these two reactions. Parameters for the polymerization kinetics model are reported based on infrared spectrophotometry. Parameters describing the gas generating reaction are reported based on measurements of volume, temperature and pressure evolution with time. A foam rheology model is proposed and parameters determined through steady-shear and oscillatory tests. Heat of reaction and heat capacity are determined through differential scanning calorimetry. Thermal conductivity of the foam as a function of density is measured using a transient method based on the theory of the transient plane source technique. Finally, density variations of the resulting solid foam in several simple geometries are directly measured by sectioning and sampling mass, as well as through x-ray computed tomography. These density measurements will be useful for model validation once the complete model is implemented in an engineering code.
NASA Technical Reports Server (NTRS)
Drzewiecki, R. F.; Foust, J. W.
1976-01-01
A model test program was conducted to determine heat transfer and pressure distributions in the base region of the space shuttle vehicle during simulated launch trajectory conditions of Mach 4.5 and pressure altitudes between 90,000 and 210,000 feet. Model configurations with and without the solid propellant booster rockets were examined to duplicate pre- and post-staging vehicle geometries. Using short duration flow techniques, a tube wind tunnel provided supersonic flow over the model. Simultaneously, combustion generated exhaust products reproduced the gasdynamic and thermochemical structure of the main vehicle engine plumes. Heat transfer and pressure measurements were made at numerous locations on the base surfaces of the 19-OTS space shuttle model with high response instrumentation. In addition, measurements of base recovery temperature were made indirectly by using dual fine wire and resistance thermometers and by extrapolating heat transfer measurements.
Liu, Xuejin; Persson, Mats; Bornefalk, Hans; Karlsson, Staffan; Xu, Cheng; Danielsson, Mats; Huber, Ben
2015-07-01
Variations among detector channels in computed tomography can lead to ring artifacts in the reconstructed images and biased estimates in projection-based material decomposition. Typically, the ring artifacts are corrected by compensation methods based on flat fielding, where transmission measurements are required for a number of material-thickness combinations. Phantoms used in these methods can be rather complex and require an extensive number of transmission measurements. Moreover, material decomposition needs knowledge of the individual response of each detector channel to account for the detector inhomogeneities. For this purpose, we have developed a spectral response model that binwise predicts the response of a multibin photon-counting detector individually for each detector channel. The spectral response model is performed in two steps. The first step employs a forward model to predict the expected numbers of photon counts, taking into account parameters such as the incident x-ray spectrum, absorption efficiency, and energy response of the detector. The second step utilizes a limited number of transmission measurements with a set of flat slabs of two absorber materials to fine-tune the model predictions, resulting in a good correspondence with the physical measurements. To verify the response model, we apply the model in two cases. First, the model is used in combination with a compensation method which requires an extensive number of transmission measurements to determine the necessary parameters. Our spectral response model successfully replaces these measurements by simulations, saving a significant amount of measurement time. Second, the spectral response model is used as the basis of the maximum likelihood approach for projection-based material decomposition. The reconstructed basis images show a good separation between the calcium-like material and the contrast agents, iodine and gadolinium. 
The contrast agent concentrations are reconstructed with more than 94% accuracy. PMID:26839904
A Deep Neural Network Model for Rainfall Estimation Using Polarimetric WSR-88DP Radar Observations
NASA Astrophysics Data System (ADS)
Tan, H.; Chandra, C. V.; Chen, H.
2016-12-01
Rainfall estimation based on radar measurements has been an important topic for decades. Generally, radar rainfall estimation is conducted through parametric algorithms such as the reflectivity-rainfall (Z-R) relation. Alternatively, neural networks have been developed for ground rainfall estimation based on radar measurements. This nonparametric approach, which takes into account both radar observations and rainfall measurements from ground rain gauges, has been demonstrated successfully for rainfall-rate estimation. However, neural network-based rainfall estimation is limited in practice by model complexity and structure, data quality, and differing rainfall microphysics. Recently, the deep learning approach has been introduced in pattern recognition and machine learning. Compared to traditional neural networks, deep learning methodologies have a larger number of hidden layers and a more complex structure for data representation. Through a hierarchical learning process, high-level structured information and knowledge can be extracted automatically from low-level features of the data. In this paper, we introduce a novel deep neural network model for rainfall estimation based on ground polarimetric radar measurements. The model is designed to capture the complex abstractions of radar measurements at different levels using multiple layers of feature identification and extraction. The abstractions at different levels can be used independently or fused with other data sources such as satellite-based rainfall products and/or topographic data to represent the rain characteristics at a given location. In particular, the WSR-88DP radar and rain gauge data collected in the Dallas-Fort Worth Metroplex and Florida are used extensively to train the model, and for demonstration purposes.
Quantitative evaluation of the deep neural network based rainfall products will also be presented, which is based on an independent rain gauge network.
NASA Astrophysics Data System (ADS)
Saturnino, Diana; Langlais, Benoit; Civet, François; Thébault, Erwan; Mandea, Mioara
2015-06-01
We describe the main field and secular variation candidate models for the 12th generation of the International Geomagnetic Reference Field model. These two models are derived from the same parent model, in which the main field is extrapolated to epoch 2015.0 using its associated secular variation. The parent model is exclusively based on measurements acquired by the European Space Agency Swarm mission between its launch on 11/22/2013 and 09/18/2014. It is computed up to spherical harmonic degree and order 25 for the main field, 13 for the secular variation, and 2 for the external field. A selection on local time rather than on true illumination of the spacecraft was chosen in order to keep more measurements. Data selection based on geomagnetic indices was used to minimize the external field contributions. Measurements were screened and outliers were carefully removed. The model uses magnetic field intensity measurements at all latitudes and magnetic field vector measurements equatorward of 50° absolute quasi-dipole magnetic latitude. A second model using only the vertical component of the measured magnetic field and the total intensity was computed. This companion model offers a slightly better fit to the measurements. These two models are compared and discussed. We discuss in particular the quality of the model which does not use the full vector measurements and underline that this approach may be used when only partial directional information is known. The candidate models and their associated companion models are retrospectively compared to the adopted IGRF, which allows us to critically assess our own choices.
Performance model for grid-connected photovoltaic inverters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyson, William Earl; Galbraith, Gary M.; King, David L.
2007-09-01
This document provides an empirically based performance model for grid-connected photovoltaic inverters, used for system performance (energy) modeling and for continuous monitoring of inverter performance during system operation. The versatility and accuracy of the model were validated for a variety of both residential and commercial size inverters. Default parameters for the model can be obtained from manufacturers' specification sheets, and the accuracy of the model can be further refined using either well-instrumented field measurements from operational systems or detailed measurements from a recognized testing laboratory. An initial database of inverter performance parameters was developed based on measurements conducted at Sandia National Laboratories and at laboratories supporting the solar programs of the California Energy Commission.
Differential item functioning magnitude and impact measures from item response theory models.
Kleinman, Marjorie; Teresi, Jeanne A
2016-01-01
Measures of the magnitude and impact of differential item functioning (DIF), at the item and scale level respectively, are presented and reviewed in this paper. Most measures are based on item response theory models. Magnitude refers to item-level effect sizes, whereas impact refers to differences between groups at the scale-score level. Reviewed are magnitude measures based on group differences in the expected item scores and impact measures based on differences in the expected scale scores. The similarities among these indices are demonstrated. Various software packages that provide magnitude and impact measures are described, and new software is presented that computes all of the available statistics conveniently in one program, with explanations of their relationships to one another.
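A hedged sketch of an expected-item-score DIF magnitude measure of the kind reviewed above, assuming a 2PL item response model. The function names and the uniform averaging over a grid of ability points are illustrative choices, not the paper's exact indices:

```python
import math

def expected_score_2pl(theta, a, b):
    """Expected score on a dichotomous item under a 2PL IRT model,
    with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def dif_magnitude(a_ref, b_ref, a_foc, b_foc, thetas):
    """Unsigned DIF magnitude: average absolute difference between the
    reference- and focal-group expected item scores over ability points."""
    return sum(abs(expected_score_2pl(t, a_ref, b_ref) -
                   expected_score_2pl(t, a_foc, b_foc))
               for t in thetas) / len(thetas)
```

Summing such expected-score differences over all items of a scale gives an impact-style measure at the scale-score level.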
Reflection and emission models for deserts derived from Nimbus-7 ERB scanner measurements
NASA Technical Reports Server (NTRS)
Staylor, W. F.; Suttles, J. T.
1986-01-01
Broadband shortwave and longwave radiance measurements obtained from the Nimbus-7 Earth Radiation Budget scanner were used to develop reflectance and emittance models for the Sahara-Arabian, Gibson, and Saudi Deserts. The models were established by fitting the satellite measurements to analytic functions. For the shortwave, the model function is based on an approximate solution to the radiative transfer equation. The bidirectional-reflectance function was obtained from a single-scattering approximation with a Rayleigh-like phase function. The directional-reflectance model followed from integration of the bidirectional model and is a function of the sum and product of cosine solar and viewing zenith angles, thus satisfying reciprocity between these angles. The emittance model was based on a simple power-law of cosine viewing zenith angle.
ERIC Educational Resources Information Center
Kaya, Yasemin; Leite, Walter L.
2017-01-01
Cognitive diagnosis models are diagnostic models used to classify respondents into homogenous groups based on multiple categorical latent variables representing the measured cognitive attributes. This study aims to present longitudinal models for cognitive diagnosis modeling, which can be applied to repeated measurements in order to monitor…
NASA Astrophysics Data System (ADS)
Abisset-Chavanne, Emmanuelle; Duval, Jean Louis; Cueto, Elias; Chinesta, Francisco
2018-05-01
Traditionally, Simulation-Based Engineering Sciences (SBES) has relied on the use of static data inputs (model parameters, initial or boundary conditions, … obtained from adequate experiments) to perform simulations. A new paradigm in the field of Applied Sciences and Engineering has emerged in the last decade. Dynamic Data-Driven Application Systems [9, 10, 11, 12, 22] allow the linkage of simulation tools with measurement devices for real-time control of simulations and applications, entailing the ability to dynamically incorporate additional data into an executing application and, in reverse, the ability of an application to dynamically steer the measurement process. It is in this context that traditional "digital twins" are giving rise to a new generation of goal-oriented data-driven application systems, also known as "hybrid twins", embracing models based on physics and models based exclusively on data adequately collected and assimilated for filling the gap between usual model predictions and measurements. Within this framework, new methodologies based on model learners, machine learning and kinetic goal-oriented design are defining a new paradigm in materials, processes and systems engineering.
Patterns of correlation between vehicle occupant seat pressure and anthropometry.
Paul, Gunther; Daniell, Nathan; Fraysse, François
2012-01-01
Seat pressure is known to be a major factor in vehicle seat comfort, yet research into the seat comfort of rear seat occupants in passenger vehicles is lacking. As accurate seat pressure measurement requires significant effort, simulation of seat pressure is evolving as a preferred method; however, analytic methods are based on complex finite element modeling and are therefore time consuming and involve high investment. Based on accurate anthropometric measurements of 64 male subjects and outboard rear seat pressure measurements in three different passenger vehicles, this study investigates whether a set of parameters derived from seat pressure mapping is sensitive enough to differentiate between different seats and whether the parameters correlate with anthropometry in linear models. In addition to the pressure map analysis, H-Points were measured with a coordinate measurement system based on palpated body landmarks, and the range of H-Point locations in the three seats is provided. It was found that for the cushion, cushion contact area and cushion front area/force could be modeled by subject anthropometry, while only seatback contact area could be modeled from anthropometry for all three vehicles. Major differences were found between the vehicles for other parameters.
NASA Technical Reports Server (NTRS)
Young, Sun-Woo; Carmichael, Gregory R.
1994-01-01
Tropospheric ozone production and transport in mid-latitude eastern Asia is studied. Data analysis of surface-based ozone measurements in Japan and satellite-based tropospheric column measurements of the entire western Pacific Rim are combined with results from three-dimensional model simulations to investigate the diurnal, seasonal and long-term variations of ozone in this region. Surface ozone measurements from Japan show distinct seasonal variation with a spring peak and summer minimum. Satellite studies of the entire tropospheric column of ozone show high concentrations in both the spring and summer seasons. Finally, preliminary model simulation studies show good agreement with observed values.
Uncertainty evaluation of dead zone of diagnostic ultrasound equipment
NASA Astrophysics Data System (ADS)
Souza, R. M.; Alvarenga, A. V.; Braz, D. S.; Petrella, L. I.; Costa-Felix, R. P. B.
2016-07-01
This paper presents a model for evaluating the measurement uncertainty of a feature used in the assessment of ultrasound images: the dead zone. The dead zone was measured by two technicians of INMETRO's Laboratory of Ultrasound using a phantom and following the IEC/TS 61390 standard. The uncertainty model was proposed based on the Guide to the Expression of Uncertainty in Measurement. For the tested equipment, results indicate a dead zone of 1.01 mm and, based on the proposed model, an expanded uncertainty of 0.17 mm. The proposed uncertainty model offers a novel approach to the metrological evaluation of diagnostic ultrasound imaging.
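The GUM-style budget behind such a result can be sketched as follows. The readings, the resolution term and the coverage factor below are illustrative assumptions, not the paper's actual uncertainty budget:

```python
import math

def type_a_uncertainty(readings):
    """Mean and standard uncertainty of the mean from repeated readings (GUM Type A)."""
    n = len(readings)
    mean = sum(readings) / n
    var = sum((x - mean) ** 2 for x in readings) / (n - 1)
    return mean, math.sqrt(var / n)

def expanded_uncertainty(u_components, k=2.0):
    """Combine independent standard uncertainties in quadrature, then expand (k=2 ~ 95%)."""
    return k * math.sqrt(sum(u ** 2 for u in u_components))

# Hypothetical repeated dead-zone readings (mm); the real budget is in the paper
readings = [1.00, 1.02, 1.01, 0.99, 1.03, 1.01]
mean, u_a = type_a_uncertainty(readings)
u_res = (0.1 / 2) / math.sqrt(3)   # Type B: assumed 0.1 mm resolution, rectangular PDF
U = expanded_uncertainty([u_a, u_res])
print(f"dead zone = {mean:.2f} mm, U(k=2) = {U:.2f} mm")
```

In a full budget, further Type B components (phantom calibration, sound-speed mismatch, operator effects) would enter the same quadrature sum.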
Acid–base chemical reaction model for nucleation rates in the polluted atmospheric boundary layer
Chen, Modi; Titcombe, Mari; Jiang, Jingkun; Jen, Coty; Kuang, Chongai; Fischer, Marc L.; Eisele, Fred L.; Siepmann, J. Ilja; Hanson, David R.; Zhao, Jun; McMurry, Peter H.
2012-01-01
Climate models show that particles formed by nucleation can affect cloud cover and, therefore, the earth's radiation budget. Measurements worldwide show that nucleation rates in the atmospheric boundary layer are positively correlated with concentrations of sulfuric acid vapor. However, current nucleation theories do not correctly predict either the observed nucleation rates or their functional dependence on sulfuric acid concentrations. This paper develops an alternative approach for modeling nucleation rates, based on a sequence of acid–base reactions. The model uses empirical estimates of sulfuric acid evaporation rates obtained from new measurements of neutral molecular clusters. The model predicts that nucleation rates equal the sulfuric acid vapor collision rate times a prefactor that is less than unity and that depends on the concentrations of basic gaseous compounds and preexisting particles. Predicted nucleation rates and their dependence on sulfuric acid vapor concentrations are in reasonable agreement with measurements from Mexico City and Atlanta. PMID:23091030
Hein, Misty J.; Waters, Martha A.; Ruder, Avima M.; Stenzel, Mark R.; Blair, Aaron; Stewart, Patricia A.
2010-01-01
Objectives: Occupational exposure assessment for population-based case–control studies is challenging due to the wide variety of industries and occupations encountered by study participants. We developed and evaluated statistical models to estimate the intensity of exposure to three chlorinated solvents—methylene chloride, 1,1,1-trichloroethane, and trichloroethylene—using a database of air measurement data and associated exposure determinants. Methods: A measurement database was developed after an extensive review of the published industrial hygiene literature. The database of nearly 3000 measurements or summary measurements included sample size, measurement characteristics (year, duration, and type), and several potential exposure determinants associated with the measurements: mechanism of release (e.g. evaporation), process condition, temperature, usage rate, type of ventilation, location, presence of a confined space, and proximity to the source. The natural log-transformed measurement levels in the exposure database were modeled as a function of the measurement characteristics and exposure determinants using maximum likelihood methods. Assuming a single lognormal distribution of the measurements, an arithmetic mean exposure intensity level was estimated for each unique combination of exposure determinants and decade. Results: The proportions of variability in the measurement data explained by the modeled measurement characteristics and exposure determinants were 36, 38, and 54% for methylene chloride, 1,1,1-trichloroethane, and trichloroethylene, respectively. Model parameter estimates for the exposure determinants were in the anticipated direction. Exposure intensity estimates were plausible and exhibited internal consistency, but the ability to evaluate validity was limited. 
Conclusions: These prediction models can be used to estimate chlorinated solvent exposure intensity for jobs reported by population-based case–control study participants that have sufficiently detailed information regarding the exposure determinants. PMID:20418277
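The step from a fitted log-scale model to an arithmetic mean intensity uses the standard lognormal identity AM = exp(mu + sigma^2/2). The numbers below are assumptions for illustration, not values from the study:

```python
import math

def lognormal_arithmetic_mean(mu_log, sigma_log):
    """Arithmetic mean of a lognormal distribution: AM = exp(mu + sigma^2 / 2)."""
    return math.exp(mu_log + sigma_log ** 2 / 2.0)

# Hypothetical model output for one determinant combination and decade:
# a log-scale mean from the fitted determinants plus a residual log-scale SD.
mu_log = math.log(10.0)   # assumed geometric mean of 10 ppm
sigma_log = 1.2           # assumed residual SD on the log scale
am = lognormal_arithmetic_mean(mu_log, sigma_log)
print(f"estimated arithmetic mean intensity: {am:.1f} ppm")
```

Because exposure data are right-skewed, the arithmetic mean always exceeds the geometric mean exp(mu); the gap grows with the residual variance.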
A Stratified Acoustic Model Accounting for Phase Shifts for Underwater Acoustic Networks
Wang, Ping; Zhang, Lin; Li, Victor O. K.
2013-01-01
Accurate acoustic channel models are critical for the study of underwater acoustic networks. Existing models include physics-based models and empirical approximation models. The former enjoy good accuracy, but incur heavy computational load, rendering them impractical in large networks. On the other hand, the latter are computationally inexpensive but inaccurate since they do not account for the complex effects of boundary reflection losses, the multi-path phenomenon and ray bending in the stratified ocean medium. In this paper, we propose a Stratified Acoustic Model (SAM) based on frequency-independent geometrical ray tracing, accounting for each ray's phase shift during the propagation. It is a feasible channel model for large scale underwater acoustic network simulation, allowing us to predict the transmission loss with much lower computational complexity than the traditional physics-based models. The accuracy of the model is validated via comparisons with the experimental measurements in two different oceans. Satisfactory agreements with the measurements and with other computationally intensive classical physics-based models are demonstrated. PMID:23669708
Degradation data analysis based on a generalized Wiener process subject to measurement error
NASA Astrophysics Data System (ADS)
Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar
2017-09-01
Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that simultaneously takes unit-to-unit variation, time-correlated structure and measurement error into consideration. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Model parameters can then be estimated by maximum likelihood estimation (MLE). The cumulative distribution function (CDF) and the probability density function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study demonstrates the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach yields reasonable results with enhanced inference precision.
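The structure of such a model can be sketched by simulating one observed path: a Wiener process with drift on a transformed time scale, observed through additive measurement error. The parameter values and the power-law time transform below are assumptions for illustration, not the paper's fitted model:

```python
import math
import random

def simulate_degradation_path(t_grid, lam, sigma_b, sigma_eps, b=1.0, rng=None):
    """Simulate one unit's observed degradation Y(t) = X(t) + eps, where
    X(t) = lam * t^b + sigma_b * B(t^b) is a Wiener process on a transformed
    time scale t^b (a sketch of one limiting case of the generalized model)."""
    rng = rng or random.Random()
    x, prev_tau, path = 0.0, 0.0, []
    for t in t_grid:
        tau = t ** b                       # transformed time scale
        dtau = tau - prev_tau
        x += lam * dtau + sigma_b * math.sqrt(dtau) * rng.gauss(0, 1)
        path.append(x + rng.gauss(0, sigma_eps))   # add measurement error
        prev_tau = tau
    return path

rng = random.Random(42)
t = [0.5 * i for i in range(1, 41)]
path = simulate_degradation_path(t, lam=1.0, sigma_b=0.3, sigma_eps=0.2, rng=rng)
print(f"observed degradation at t = {t[-1]}: {path[-1]:.2f}")
```

Unit-to-unit variation would enter by drawing `lam` from a population distribution per unit; MLE then works on the increments of many such paths.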
Improved characterisation of measurement errors in electrical resistivity tomography (ERT) surveys
NASA Astrophysics Data System (ADS)
Tso, C. H. M.; Binley, A. M.; Kuras, O.; Graham, J.
2016-12-01
Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe a statistical model of data errors before inversion. Wrongly prescribed error levels can lead to over- or under-fitting of data, yet commonly used models of measurement error are relatively simplistic. With heightened interest in uncertainty estimation across hydrogeophysics, better characterisation and treatment of measurement errors is needed to provide more reliable estimates of uncertainty. We have analysed two time-lapse electrical resistivity tomography (ERT) datasets: one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe, while the other is a year-long cross-borehole survey at a UK nuclear site with over 50,000 daily measurements. Our study included the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and covariance analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used. This agrees with speculation in previous literature that ERT errors could be somewhat correlated. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to transfer resistance. The new model fits the observed measurement errors better and shows superior inversion and uncertainty estimates in synthetic examples. It is robust, because it groups errors based on the numbers of the four electrodes used to make each measurement. The new model can be readily applied to the diagonal data weighting matrix commonly used in classical inversion methods, as well as to the data covariance matrix in the Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
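The core of such an error model, a linear fit of error magnitude against transfer resistance carried out separately per electrode group, can be sketched on synthetic data. The two groups, their error levels and the noise level below are assumptions, not values from the surveys:

```python
import random

def fit_line(xs, ys):
    """Ordinary least squares for |error| = a + b * |R| (closed form, 2 parameters)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def fit_grouped_error_model(resistance, abs_error, groups):
    """Fit a separate linear error model per electrode group (a sketch of the idea)."""
    params = {}
    for g in set(groups):
        xs = [abs(r) for r, gi in zip(resistance, groups) if gi == g]
        ys = [e for e, gi in zip(abs_error, groups) if gi == g]
        params[g] = fit_line(xs, ys)   # (intercept a_g, slope b_g)
    return params

# Synthetic reciprocal-error data for two hypothetical electrode groups
rng = random.Random(0)
true = {"low": (0.5, 0.01), "high": (2.0, 0.05)}
R, groups, err = [], [], []
for g in ["low"] * 100 + ["high"] * 100:
    r = rng.uniform(1, 100)
    a, b = true[g]
    R.append(r)
    groups.append(g)
    err.append(abs(a + b * r + rng.gauss(0, 0.1)))
params = fit_grouped_error_model(R, err, groups)
print(params)
```

The fitted per-group standard deviations would then populate the diagonal data weighting matrix (or the data covariance matrix in a Bayesian inversion).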
NASA Astrophysics Data System (ADS)
Peltola, Olli; Raivonen, Maarit; Li, Xuefei; Vesala, Timo
2018-02-01
Emission via bubbling, i.e. ebullition, is one of the main methane (CH4) emission pathways from wetlands to the atmosphere. Direct measurement of gas bubble formation, growth and release in the peat-water matrix is challenging; consequently, these processes remain poorly understood and are coarsely represented in current wetland CH4 emission models. In this study we aimed to evaluate three ebullition modelling approaches and their effect on model performance. This was achieved by implementing the three approaches in one process-based CH4 emission model. All the approaches are based on a threshold: either a CH4 pore water concentration (ECT), pressure (EPT) or free-phase gas volume (EBG) threshold. The model was run using 4 years of data from a boreal sedge fen and the results were compared with eddy covariance measurements of CH4 fluxes.
Modelled annual CH4 emissions were largely unaffected by the different ebullition modelling approaches; however, temporal variability in CH4 emissions varied by an order of magnitude between the approaches. Hence the ebullition modelling approach drives the temporal variability in modelled CH4 emissions and therefore significantly impacts, for instance, high-frequency (daily scale) model comparison and calibration against measurements. The modelling approach based on the most recent knowledge of the ebullition process (volume threshold, EBG) agreed best with the measured fluxes (R2 = 0.63) and hence produced the most reasonable results, although there was a scale mismatch between the measurements (ecosystem scale with heterogeneous ebullition locations) and model results (single horizontally homogeneous peat column). The approach should be favoured over the two other, more widely used ebullition modelling approaches, and researchers are encouraged to implement it in their CH4 emission models.
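The common logic of all three approaches, release whatever exceeds a threshold, can be sketched for the concentration-threshold (ECT) case. The threshold, the initial store and the per-step production below are made-up numbers, not values from the fen model:

```python
def ebullition_flux_ect(conc, threshold):
    """Concentration-threshold (ECT-style) ebullition: pore-water CH4 above the
    threshold is released as a bubble flux and the layer is reset to the threshold."""
    flux = max(0.0, conc - threshold)
    return flux, conc - flux

# Hypothetical pore-water CH4 store (mol m^-3) driven by assumed per-step production
threshold = 0.5
conc = 0.2
fluxes = []
for production in [0.1, 0.2, 0.3, 0.1]:
    conc += production
    flux, conc = ebullition_flux_ect(conc, threshold)
    fluxes.append(flux)
print(fluxes)
```

The EPT and EBG variants replace the concentration test with a pressure or free-phase gas volume test, which changes when, and in how large bursts, the stored CH4 escapes, hence the order-of-magnitude differences in temporal variability.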
Measurement error in epidemiologic studies of air pollution based on land-use regression models.
Basagaña, Xavier; Aguilera, Inmaculada; Rivera, Marcela; Agis, David; Foraster, Maria; Marrugat, Jaume; Elosua, Roberto; Künzli, Nino
2013-10-15
Land-use regression (LUR) models are increasingly used to estimate air pollution exposure in epidemiologic studies. These models use air pollution measurements taken at a small set of locations and modeling based on geographical covariates for which data are available at all study participant locations. The process of LUR model development commonly includes a variable selection procedure. When LUR model predictions are used as explanatory variables in a model for a health outcome, measurement error can lead to bias of the regression coefficients and to inflation of their variance. In previous studies dealing with spatial predictions of air pollution, bias was shown to be small while most of the effect of measurement error was on the variance. In this study, we show that in realistic cases where LUR models are applied to health data, bias in health-effect estimates can be substantial. This bias depends on the number of air pollution measurement sites, the number of available predictors for model selection, and the amount of explainable variability in the true exposure. These results should be taken into account when interpreting health effects from studies that used LUR models.
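The simplest mechanism at play can be illustrated with a classical-measurement-error simulation (the paper's LUR setting adds model selection effects on top of this, and shows the resulting bias can be substantial). All numbers below are assumptions for illustration:

```python
import random

def slope(xs, ys):
    """OLS slope of y on x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

rng = random.Random(1)
n, beta_true = 20_000, 0.5
x_true = [rng.gauss(0, 1) for _ in range(n)]           # true exposure
x_pred = [x + rng.gauss(0, 1) for x in x_true]         # LUR-like prediction, assumed error SD 1
y = [beta_true * x + rng.gauss(0, 1) for x in x_true]  # health outcome
b_full = slope(x_true, y)
b_err = slope(x_pred, y)
print(f"slope with true exposure: {b_full:.3f}; with error-prone exposure: {b_err:.3f}")
```

With equal exposure and error variances, the health-effect slope is attenuated by the reliability ratio var(x)/(var(x)+var(error)) = 0.5, i.e. from 0.5 towards 0.25.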
An internet graph model based on trade-off optimization
NASA Astrophysics Data System (ADS)
Alvarez-Hamelin, J. I.; Schabanel, N.
2004-03-01
This paper presents a new model for the Internet graph (AS graph) based on the concept of heuristic trade-off optimization, introduced by Fabrikant, Koutsoupias and Papadimitriou in [CITE] to grow a random tree with a heavily tailed degree distribution. We propose here a generalization of this approach to generate a general graph, as a candidate for modeling the Internet. We present the results of our simulations and an analysis of the standard parameters measured in our model, compared with measurements from the physical Internet graph.
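The underlying tree-growth heuristic can be sketched in a few lines: each arriving node attaches to the existing node that minimizes a weighted sum of geometric distance and hop distance to the root. The unit-square placement, the value of alpha and the node count are illustrative assumptions:

```python
import math
import random

def grow_fkp_tree(n, alpha, rng=None):
    """Grow a heuristic trade-off tree (in the spirit of Fabrikant, Koutsoupias and
    Papadimitriou): each new node at a random point in the unit square attaches to
    the existing node j minimizing alpha * distance(new, j) + hops(j, root)."""
    rng = rng or random.Random()
    pos, parent, hops = [(rng.random(), rng.random())], [None], [0]
    for i in range(1, n):
        p = (rng.random(), rng.random())
        cost, best = min((alpha * math.dist(p, pos[j]) + hops[j], j) for j in range(i))
        pos.append(p)
        parent.append(best)
        hops.append(hops[best] + 1)
    return parent, hops

parent, hops = grow_fkp_tree(200, alpha=10.0, rng=random.Random(7))
degree = [sum(1 for q in parent if q == j) for j in range(len(parent))]
print(f"max child count: {max(degree)}, tree depth: {max(hops)}")
```

For intermediate alpha, the distance/hop trade-off concentrates attachments on a few well-placed nodes, which is what produces the heavy-tailed degree distribution; the paper's generalization grows a general graph rather than a tree.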
Evaluating Curriculum-Based Measurement from a Behavioral Assessment Perspective
ERIC Educational Resources Information Center
Ardoin, Scott P.; Roof, Claire M.; Klubnick, Cynthia; Carfolite, Jessica
2008-01-01
Curriculum-based measurement Reading (CBM-R) is an assessment procedure used to evaluate students' relative performance compared to peers and to evaluate their growth in reading. Within the response to intervention (RtI) model, CBM-R data are plotted in time series fashion as a means modeling individual students' response to varying levels of…
Interpolative modeling of GaAs FET S-parameter data bases for use in Monte Carlo simulations
NASA Technical Reports Server (NTRS)
Campbell, L.; Purviance, J.
1992-01-01
A statistical interpolation technique is presented for modeling GaAs FET S-parameter measurements for use in the statistical analysis and design of circuits. This is accomplished by interpolating among the measurements in a GaAs FET S-parameter data base in a statistically valid manner.
Modelling of induced electric fields based on incompletely known magnetic fields
NASA Astrophysics Data System (ADS)
Laakso, Ilkka; De Santis, Valerio; Cruciani, Silvano; Campi, Tommaso; Feliziani, Mauro
2017-08-01
Determining the induced electric fields in the human body is a fundamental problem in bioelectromagnetics that is important for both evaluation of safety of electromagnetic fields and medical applications. However, existing techniques for numerical modelling of induced electric fields require detailed information about the sources of the magnetic field, which may be unknown or difficult to model in realistic scenarios. Here, we show how induced electric fields can accurately be determined in the case where the magnetic fields are known only approximately, e.g. based on field measurements. The robustness of our approach is shown in numerical simulations for both idealized and realistic scenarios featuring a personalized MRI-based head model. The approach allows for modelling of the induced electric fields in biological bodies directly based on real-world magnetic field measurements.
NASA Technical Reports Server (NTRS)
Kibler, J. F.; Suttles, J. T.
1977-01-01
One way to obtain estimates of the unknown parameters in a pollution dispersion model is to compare the model predictions with remotely sensed air quality data. A ground-based LIDAR sensor provides relative pollution concentration measurements as a function of space and time. The measured sensor data are compared with the dispersion model output through a numerical estimation procedure to yield parameter estimates which best fit the data. This overall process is tested in a computer simulation to study the effects of various measurement strategies. Such a simulation is useful prior to a field measurement exercise to maximize the information content of the collected data. Parametric studies of simulated data matched to a Gaussian plume dispersion model indicate the trade-offs available between estimation accuracy and data acquisition strategy.
Information and complexity measures for hydrologic model evaluation
USDA-ARS?s Scientific Manuscript database
Hydrological models are commonly evaluated through the residual-based performance measures such as the root-mean square error or efficiency criteria. Such measures, however, do not evaluate the degree of similarity of patterns in simulated and measured time series. The objective of this study was to...
Ranking streamflow model performance based on Information theory metrics
NASA Astrophysics Data System (ADS)
Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas
2016-04-01
Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to determine whether information theory-based metrics can serve as a complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight models of different complexity were applied. The information theory-based metrics were obtained after representing the time series as strings of symbols, where different symbols corresponded to different quantiles of the probability distribution of streamflow and together formed the symbol alphabet. Three metrics were computed for those strings: mean information gain, which measures the randomness of the signal; effective measure complexity, which characterizes predictability; and fluctuation complexity, which characterizes the presence of a pattern in the signal. The observed streamflow time series had smaller information content and larger complexity metrics than the precipitation time series: streamflow was less random and more complex than precipitation, reflecting the fact that the watershed acts as an information filter in the hydrologic conversion from precipitation to streamflow. The Nash-Sutcliffe efficiency increased with model complexity, but in many cases several models had efficiency values that were not statistically different from each other. In such cases, ranking models by the closeness of the information theory-based metrics of simulated and measured streamflow time series can provide an additional criterion for the evaluation of hydrologic model performance.
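The symbolization-plus-entropy pipeline can be sketched as follows. The alphabet size, the block length L = 1 and the sine test series are assumptions for illustration; the paper's effective measure and fluctuation complexity metrics use longer blocks of the same symbol strings:

```python
import math
import random
from collections import Counter

def symbolize(series, n_symbols=4):
    """Map values to quantile-based symbols 0..n_symbols-1."""
    sorted_vals = sorted(series)
    n = len(series)
    edges = [sorted_vals[int(n * k / n_symbols)] for k in range(1, n_symbols)]
    return [sum(v > e for e in edges) for v in series]

def shannon_entropy(items):
    """Entropy (bits) of the empirical distribution of items."""
    counts = Counter(items)
    n = len(items)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def mean_information_gain(symbols, L=1):
    """H(next symbol | previous L symbols) = H of (L+1)-blocks minus H of L-blocks."""
    def block_entropy(k):
        return shannon_entropy([tuple(symbols[i:i + k]) for i in range(len(symbols) - k + 1)])
    return block_entropy(L + 1) - block_entropy(L)

# A smooth (predictable) series vs. its shuffled (random) counterpart
rng = random.Random(3)
smooth = [math.sin(i / 5.0) for i in range(500)]
noisy = smooth[:]
rng.shuffle(noisy)
mig_smooth = mean_information_gain(symbolize(smooth))
mig_noisy = mean_information_gain(symbolize(noisy))
print(f"mean information gain: smooth={mig_smooth:.2f}, shuffled={mig_noisy:.2f} bits")
```

A streamflow-like (smooth, filtered) series scores a low mean information gain; a precipitation-like (random) series scores near the maximum of log2(alphabet size) bits, which is the contrast the paper exploits.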
ERIC Educational Resources Information Center
Carman, Carol A.
2011-01-01
One of the underutilized tools in gifted identification is personality-based measures. A multiple confirmatory factor analysis was utilized to examine the relationships between traditional identification methods and personality-based measures. The pattern of correlations indicated this model could be measuring two constructs, one related to…
Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J
2013-01-01
Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently, it has become possible to assess the spatial gradients caused by diffusion in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties of the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile likelihood based method provides more rigorous uncertainty bounds than local approximation methods.
Effect of quality chronic disease management for alcohol and drug dependence on addiction outcomes.
Kim, Theresa W; Saitz, Richard; Cheng, Debbie M; Winter, Michael R; Witas, Julie; Samet, Jeffrey H
2012-12-01
We examined the effect of the quality of primary care-based chronic disease management (CDM) for alcohol and/or other drug (AOD) dependence on addiction outcomes. We assessed quality using (1) a visit frequency based measure and (2) a self-reported assessment measuring alignment with the chronic care model. The visit frequency based measure had no significant association with addiction outcomes. The self-reported measure of care, when care was at a CDM clinic, was associated with lower drug addiction severity. The self-reported assessment of care from any healthcare source (CDM clinic or elsewhere) was associated with lower alcohol addiction severity and abstinence. These findings suggest that high quality CDM for AOD dependence may improve addiction outcomes. Quality measures based upon alignment with the chronic care model may better capture features of effective CDM care than a visit frequency measure.
Multilevel Modeling of Social Segregation
ERIC Educational Resources Information Center
Leckie, George; Pillinger, Rebecca; Jones, Kelvyn; Goldstein, Harvey
2012-01-01
The traditional approach to measuring segregation is based upon descriptive, non-model-based indices. A recently proposed alternative is multilevel modeling. The authors further develop the argument for a multilevel modeling approach by first describing and expanding upon its notable advantages, which include an ability to model segregation at a…
NASA Astrophysics Data System (ADS)
Han, Yingying; Gong, Pu; Zhou, Xiang
2016-02-01
In this paper, we first apply time-varying Gaussian and SJC copula models to study the correlations and risk contagion between mixed assets in China: financial (stock), real estate, and commodity (gold) assets. We then study dynamic mixed-asset portfolio risk through VaR measurement based on the correlations computed by the time-varying copulas. This dynamic VaR-copula measurement analysis has not previously been applied to mixed-asset portfolios. The results show that the time-varying estimates fit much better than the static models, both for the correlations and risk contagion based on time-varying copulas and for the VaR-copula measurement. The time-varying VaR-SJC copula models are more accurate than VaR-Gaussian copula models when measuring riskier portfolios at higher confidence levels. The major findings suggest that real estate and gold play a role in portfolio risk diversification, and that risk contagion and flight-to-quality effects occur between mixed assets in extreme cases, but portfolio risk can be reduced by adapting mixed-asset portfolio strategies as time and market environment vary.
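The VaR-copula measurement can be sketched in its simplest, static form: sample correlated returns through a Gaussian copula and read off a tail quantile of the portfolio return. The equicorrelation, the normal marginals and their parameters below are assumptions; the paper's models are time-varying and also use the SJC copula for tail asymmetry:

```python
import math
import random

def gaussian_copula_var(rho, n_sims, weights, mus, sigmas, alpha=0.99, rng=None):
    """Monte-Carlo portfolio VaR under a static Gaussian copula with normal marginals.
    Equicorrelated standard normals come from a one-factor construction:
    z_j = sqrt(rho)*g + sqrt(1-rho)*e_j. For general marginals one would map
    z -> Phi(z) -> F_j^{-1}(u) instead of scaling directly."""
    rng = rng or random.Random()
    losses = []
    for _ in range(n_sims):
        g = rng.gauss(0, 1)
        port = 0.0
        for w, mu, sd in zip(weights, mus, sigmas):
            z = math.sqrt(rho) * g + math.sqrt(1 - rho) * rng.gauss(0, 1)
            port += w * (mu + sd * z)
        losses.append(-port)
    losses.sort()
    return losses[int(alpha * n_sims)]   # empirical tail-loss quantile

# Hypothetical daily-return marginals for stock, real estate and gold holdings
var99 = gaussian_copula_var(rho=0.3, n_sims=50_000, weights=[1/3, 1/3, 1/3],
                            mus=[0.0005, 0.0003, 0.0002], sigmas=[0.02, 0.015, 0.01],
                            rng=random.Random(11))
print(f"99% one-day VaR: {var99:.3%}")
```

Making `rho` (and, for the SJC copula, the tail-dependence parameters) evolve over time is exactly what turns this into the paper's dynamic VaR-copula measurement.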
Application of cognitive diagnosis models to competency-based situational judgment tests.
García, Pablo Eduardo; Olea, Julio; De la Torre, Jimmy
2014-01-01
Profiling of jobs in terms of competency requirements has increasingly been applied in many organizational settings. Testing these competencies through situational judgment tests (SJTs) leads to validity problems because it is not usually clear which constructs SJTs measure. The primary purpose of this paper is to evaluate whether the application of cognitive diagnosis models (CDM) to competency-based SJTs can ascertain the underlying competencies measured by the items, and whether these competencies can be estimated precisely. The generalized deterministic inputs, noisy "and" gate (G-DINA) model was applied to 26 situational judgment items measuring professional competencies based on the great eight model. These items were administered to 485 employees of a Spanish financial company. The fit of the model to the data and the convergent validity between the estimated competencies and personality dimensions were examined. The G-DINA showed a good fit to the data, and the estimated competency factors "adapting and coping" and "interacting and presenting" were positively related to emotional stability and extraversion, respectively. This work indicates that CDM can be a useful tool when measuring professional competencies through SJTs. CDM can clarify the competencies being measured and provide precise estimates of these competencies.
Calster, Ben Van; Vickers, Andrew J; Pencina, Michael J; Baker, Stuart G; Timmerman, Dirk; Steyerberg, Ewout W
2014-01-01
BACKGROUND: For the evaluation and comparison of markers and risk prediction models, various novel measures have recently been introduced as alternatives to the commonly used difference in the area under the ROC curve (ΔAUC). The Net Reclassification Improvement (NRI) is increasingly popular for comparing predictions with one or more risk thresholds, but decision-analytic approaches have also been proposed. OBJECTIVE: We aimed to identify the mathematical relationships between novel performance measures for the situation where a single risk threshold T is used to classify patients as having the outcome or not. METHODS: We considered the NRI and three utility-based measures that take misclassification costs into account: difference in Net Benefit (ΔNB), difference in Relative Utility (ΔRU), and weighted NRI (wNRI). We illustrate the behavior of these measures in 1938 women suspected of ovarian cancer (prevalence 28%). RESULTS: The three utility-based measures appear to be transformations of each other, and hence always lead to consistent conclusions. On the other hand, conclusions may differ when using the standard NRI, depending on the adopted risk threshold T, the prevalence P and the obtained differences in sensitivity and specificity of the two models that are compared. In the case study, adding the CA-125 tumor marker to a baseline set of covariates yielded a negative NRI yet a positive value for the utility-based measures. CONCLUSIONS: The decision-analytic measures are each appropriate to indicate the clinical usefulness of an added marker or to compare prediction models, since these measures each reflect misclassification costs. This is of practical importance as these measures may thus adjust conclusions based on purely statistical measures. A range of risk thresholds should be considered in applying these measures. PMID:23313931
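The Net Benefit at a risk threshold t has the standard closed form NB = (TP - FP * t/(1-t)) / n, so ΔNB between two models is a direct subtraction. The toy cohort and risk scores below are made-up numbers, not the ovarian cancer data:

```python
def net_benefit(y_true, risk, threshold):
    """Net Benefit at risk threshold t: (TP - FP * t/(1-t)) / n."""
    n = len(y_true)
    tp = sum(1 for y, r in zip(y_true, risk) if r >= threshold and y == 1)
    fp = sum(1 for y, r in zip(y_true, risk) if r >= threshold and y == 0)
    return (tp - fp * threshold / (1 - threshold)) / n

# Toy cohort: outcomes with risks from a baseline and an extended model (assumed)
y =        [1,   1,   1,   0,   0,   0,   0,   0,   0,   0]
baseline = [0.6, 0.4, 0.2, 0.5, 0.3, 0.2, 0.1, 0.1, 0.1, 0.1]
extended = [0.7, 0.6, 0.4, 0.3, 0.2, 0.2, 0.1, 0.1, 0.1, 0.1]
t = 0.35
nb_base = net_benefit(y, baseline, t)
nb_ext = net_benefit(y, extended, t)
delta_nb = nb_ext - nb_base
print(f"dNB at t={t}: {delta_nb:.3f}")
```

The odds weight t/(1-t) is what encodes misclassification costs: at t = 0.35 one false positive costs 0.35/0.65 of a true positive, which is why ΔNB (unlike the standard NRI) cannot flip sign for reasons unrelated to clinical utility at that threshold.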
Chowdhury, Amor; Sarjaš, Andrej
2016-01-01
The presented paper describes accurate distance measurement for a field-sensed magnetic suspension system. The proximity measurement is based on a Hall effect sensor. The proximity sensor is installed directly on the lower surface of the electro-magnet, which means that it is very sensitive to external magnetic influences and disturbances. External disturbances interfere with the information signal and reduce the usability and reliability of the proximity measurements and, consequently, the whole application operation. A sensor fusion algorithm is deployed for the aforementioned reasons. The sensor fusion algorithm is based on the Unscented Kalman Filter, where a nonlinear dynamic model was derived with the Finite Element Modelling approach. The advantage of such modelling is a more accurate dynamic model parameter estimation, especially in the case when the real structure, materials and dimensions of the real-time application are known. The novelty of the paper is the design of a compact electro-magnetic actuator with a built-in low cost proximity sensor for accurate proximity measurement of the magnetic object. The paper successively presents a modelling procedure with the finite element method, design and parameter settings of a sensor fusion algorithm with Unscented Kalman Filter and, finally, the implementation procedure and results of real-time operation. PMID:27649197
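The core of the Unscented Kalman Filter used in the sensor fusion above is the unscented transform, which propagates a Gaussian state through a nonlinearity via deterministic sigma points. The following is a minimal sketch with the standard scaling parameters; it is a generic illustration, not the authors' implementation:

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear function f
    using the standard sigma-point rule of the Unscented Kalman Filter."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)          # matrix square root
    sigma = np.vstack([mean, mean + S.T, mean - S.T])  # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    y = np.array([f(p) for p in sigma])              # propagate each point
    mean_y = wm @ y
    cov_y = (y - mean_y).T @ np.diag(wc) @ (y - mean_y)
    return mean_y, cov_y
```

In a full UKF, this transform is applied once to the process model (prediction) and once to the measurement model (update), avoiding the Jacobians an Extended Kalman Filter would require.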
A two-stage DEA approach for environmental efficiency measurement.
Song, Malin; Wang, Shuhong; Liu, Wei
2014-05-01
The slacks-based measure (SBM) model based on constant returns to scale has achieved good results in addressing undesirable outputs, such as waste water and waste gas, in measuring environmental efficiency. However, the traditional SBM model cannot deal with the scenario in which desirable outputs are constant. Based on the axiomatic theory of productivity, this paper carries out a systematic study of the SBM model considering undesirable outputs, and further expands the SBM model from the perspective of network analysis. The new model can not only perform efficiency evaluation considering undesirable outputs, but also calculate desirable and undesirable outputs separately. The latter advantage successfully resolves the "dependence" problem of outputs, namely that we cannot increase the desirable outputs without producing any undesirable outputs. The illustration shows that the efficiency values obtained by the two-stage approach are smaller than those obtained by the traditional SBM model. Our approach provides a more profound analysis of how to improve the environmental efficiency of decision making units.
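Although the paper's exact network formulation is not reproduced here, the conventional SBM efficiency score with undesirable outputs (following Tone's formulation, on which such models build) takes the form:

```latex
\rho^{*}=\min\;
\frac{1-\dfrac{1}{m}\sum_{i=1}^{m}\dfrac{s_{i}^{-}}{x_{i0}}}
     {1+\dfrac{1}{s_{1}+s_{2}}\left(\sum_{r=1}^{s_{1}}\dfrac{s_{r}^{g}}{y_{r0}^{g}}
       +\sum_{r=1}^{s_{2}}\dfrac{s_{r}^{b}}{y_{r0}^{b}}\right)}
\quad\text{s.t.}\quad
x_{0}=X\lambda+s^{-},\;
y_{0}^{g}=Y^{g}\lambda-s^{g},\;
y_{0}^{b}=Y^{b}\lambda+s^{b},\;
\lambda,\,s^{-},\,s^{g},\,s^{b}\ge 0,
```

where $s^{-}$, $s^{g}$ and $s^{b}$ are the slacks in inputs, desirable outputs and undesirable outputs, respectively, for the decision making unit under evaluation.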
Gao, Yue-Ming; Wu, Zhu-Mei; Pun, Sio-Hang; Mak, Peng-Un; Vai, Mang-I; Du, Min
2016-04-02
Existing research on human channel modeling of galvanic coupling intra-body communication (IBC) is primarily focused on the human body itself. Although galvanic coupling IBC is less disturbed by external influences during signal transmission, there are inevitable factors in real measurement scenarios such as the parasitic impedance of electrodes, impedance matching of the transceiver, etc. which might lead to deviations between the human model and the in vivo measurements. This paper proposes a field-circuit finite element method (FEM) model of galvanic coupling IBC in a real measurement environment to estimate the human channel gain. First an anisotropic concentric cylinder model of the electric field intra-body communication for human limbs was developed based on the galvanic method. Then the electric field model was combined with several impedance elements, which were equivalent in terms of parasitic impedance of the electrodes, input and output impedance of the transceiver, establishing a field-circuit FEM model. The results indicated that a circuit module equivalent to external factors can be added to the field-circuit model, which makes this model more complete, and the estimations based on the proposed field-circuit are in better agreement with the corresponding measurement results.
Density-driven transport of gas phase chemicals in unsaturated soils
NASA Astrophysics Data System (ADS)
Fen, Chiu-Shia; Sun, Yong-tai; Cheng, Yuen; Chen, Yuanchin; Yang, Whaiwan; Pan, Changtai
2018-01-01
Variations of gas phase density are responsible for advective and diffusive transport of organic vapors in unsaturated soils. Laboratory experiments were conducted to explore dense gas transport (sulfur hexafluoride, SF6) from different source densities through a nitrogen gas-dry soil column. Gas pressures and SF6 densities at transient state were measured along the soil column for three transport configurations (horizontal, vertically upward and vertically downward transport). These measurements and others reported in the literature were compared with simulation results obtained from two models based on different diffusion approaches: the dusty gas model (DGM) equations and a Fickian-type molar fraction-based diffusion expression. The results show that the DGM- and Fickian-based models predicted similar dense gas density profiles, which matched the measured data well for horizontal transport of dense gas at low to high source densities, although the pressure variations predicted in the soil column were opposite to the measurements. The pressure evolutions predicted by both models were similar in trend to the measured ones for vertical transport of dense gas. However, differences between the dense gas densities predicted by the DGM- and Fickian-based models were discernible for vertically upward transport of dense gas even at low source densities, as the DGM-based predictions matched the measured data better than the Fickian results did. For vertically downward transport, the dense gas densities predicted by both models were not greatly different from our experimental measurements, but were substantially greater than the observations obtained from the literature, especially at high source densities. Further research will be necessary to explore the factors affecting downward transport of dense gas in soil columns.
Use of the measured data to compute flux components of SF6 showed that the magnitudes of the diffusive flux component based on the Fickian-type diffusion expressions in terms of molar concentration, molar fraction and mass density fraction gradients were almost the same. However, they were more than 24% greater than the result computed with the mass fraction gradient, and more than double the DGM-based result. As a consequence, the DGM-based total flux of SF6 was much smaller in magnitude than the Fickian result, not only for horizontal transport (diffusion-dominated) but also for vertical transport (advection and diffusion) of dense gas. In particular, the Fickian-based total flux was more than twice the magnitude of the DGM result for vertically upward transport of dense gas.
Citardi, Martin J.; Herrmann, Brian; Hollenbeak, Chris S.; Stack, Brendan C.; Cooper, Margaret; Bucholz, Richard D.
2001-01-01
Traditionally, cadaveric studies and plain-film cephalometrics provided information about craniomaxillofacial proportions and measurements; however, advances in computer technology now permit software-based review of computed tomography (CT)-based models. Distances between standardized anatomic points were measured on five dried human skulls with standard scientific calipers (Geneva Gauge, Albany, NY) and through computer workstation (StealthStation 2.6.4, Medtronic Surgical Navigation Technology, Louisville, CO) review of corresponding CT scans. Differences in measurements between the caliper and CT model were not statistically significant for each parameter. Measurements obtained by computer workstation CT review of the cranial skull base are an accurate representation of actual bony anatomy. Such information has important implications for surgical planning and clinical research. PMID:17167599
Evaluation of Clear Sky Models for Satellite-Based Irradiance Estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Manajit; Gotseff, Peter
2013-12-01
This report describes an intercomparison of three popular broadband clear sky solar irradiance model results with measured data, as well as satellite-based model clear sky results compared to measured clear sky data. The authors conclude that one of the popular clear sky models (the Bird clear sky model developed by Richard Bird and Roland Hulstrom) could serve as a more accurate replacement for current satellite-model clear sky estimations. Additionally, the analysis of the model results with respect to model input parameters indicates that rather than climatological, annual, or monthly mean input data, higher-time-resolution input parameters improve general clear sky model performance.
A physiologically based toxicokinetic model for lake trout (Salvelinus namaycush).
Lien, G J; McKim, J M; Hoffman, A D; Jenson, C T
2001-01-01
A physiologically based toxicokinetic (PB-TK) model for fish, incorporating chemical exchange at the gill and accumulation in five tissue compartments, was parameterized and evaluated for lake trout (Salvelinus namaycush). Individual-based model parameterization was used to examine the effect of natural variability in physiological, morphological, and physico-chemical parameters on model predictions. The PB-TK model was used to predict uptake of organic chemicals across the gill and accumulation in blood and tissues in lake trout. To evaluate the accuracy of the model, a total of 13 adult lake trout were exposed to waterborne 1,1,2,2-tetrachloroethane (TCE), pentachloroethane (PCE), and hexachloroethane (HCE), concurrently, for periods of 6, 12, 24 or 48 h. The measured and predicted concentrations of TCE, PCE and HCE in expired water, dorsal aortic blood and tissues were generally within a factor of two, and in most instances much closer. Variability noted in model predictions, based on the individual-based model parameterization used in this study, reproduced variability observed in measured concentrations. The inference is made that parameters influencing variability in measured blood and tissue concentrations of xenobiotics are included and accurately represented in the model. This model contributes to a better understanding of the fundamental processes that regulate the uptake and disposition of xenobiotic chemicals in the lake trout. This information is crucial to developing a better understanding of the dynamic relationships between contaminant exposure and hazard to the lake trout.
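As a rough illustration of the flow-limited gill exchange underlying a PB-TK model, the sketch below integrates a single well-mixed blood compartment toward equilibrium with the exposure water; all parameter names and values are hypothetical placeholders, not those of the lake trout model:

```python
def simulate_pbtk(c_water, hours, dt=0.01, q_water=10.0, q_blood=5.0,
                  p_blood_water=50.0, v_blood=0.05):
    """Minimal flow-limited gill-uptake sketch (hypothetical parameters):
    chemical exchange at the gill is limited by the smaller of the water
    flow and the blood flow times the blood:water partition coefficient;
    blood is treated as one well-mixed volume (L)."""
    c_blood = 0.0
    t = 0.0
    while t < hours:
        k_ex = min(q_water, q_blood * p_blood_water)   # exchange capacity, L/h
        flux = k_ex * (c_water - c_blood / p_blood_water)  # mg/h into blood
        c_blood += flux / v_blood * dt                 # forward Euler step
        t += dt
    return c_blood
```

At long exposure times the blood concentration approaches `c_water * p_blood_water`, the thermodynamic equilibrium implied by the partition coefficient; a full PB-TK model adds the five tissue compartments on top of this gill boundary condition.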
NASA Astrophysics Data System (ADS)
Putov, A. V.; Kopichev, M. M.; Ignatiev, K. V.; Putov, V. V.; Stotckaia, A. D.
2017-01-01
This paper discusses a technique that implements a new method of runway friction coefficient measurement, based on the proposed principle of measuring-wheel braking control, which imitates antilock braking modes close to the real braking modes realized by aircraft anti-skid systems on the aircraft chassis during landing. A model of the towed measuring device that implements this technique is also described. To increase the repeatability of the electromechanical braking imitation system, an adaptive sideslip (brake) control system is proposed. Based on the Burkhard model and additive random processes, several mathematical models were created that describe the friction coefficient distribution along the airstrip with different qualitative properties. Computer models of friction coefficient measurement were designed, and the correlation between the friction coefficient measurement results and the shape, intensity and cycle frequency of the measuring wheel's antilock braking modes was investigated for the first time. Draft engineering documentation was prepared, and a prototype of the latest-generation measuring device is ready for use. The measuring device was tested on an autonomous electromechanical laboratory treadmill bench. The experiments confirmed the effectiveness of imitating antilock braking modes for solving the problem of runway friction coefficient measurement.
Schell, Greggory J; Lavieri, Mariel S; Stein, Joshua D; Musch, David C
2013-12-21
Open-angle glaucoma (OAG) is a prevalent, degenerative ocular disease which can lead to blindness without proper clinical management. The tests used to assess disease progression are susceptible to process and measurement noise. The aim of this study was to develop a methodology which accounts for the inherent noise in the data and improves the identification of significant disease progression. Longitudinal observations from the Collaborative Initial Glaucoma Treatment Study (CIGTS) were used to parameterize and validate a Kalman filter model and logistic regression function. The Kalman filter estimates the true value of biomarkers associated with OAG and forecasts future values of these variables. We develop two logistic regression models via generalized estimating equations (GEE) for calculating the probability of experiencing significant OAG progression: one model based on the raw measurements from CIGTS and another model based on the Kalman filter estimates of the CIGTS data. Receiver operating characteristic (ROC) curves and associated area under the ROC curve (AUC) estimates are calculated using cross-fold validation. The logistic regression model developed using Kalman filter estimates as data input achieves higher sensitivity and specificity than the model developed using raw measurements. The mean AUC for the Kalman filter-based model is 0.961, while the mean AUC for the raw measurements model is 0.889. Hence, using the probability function generated via Kalman filter estimates and GEE for logistic regression, we are able to more accurately classify patients and instances as experiencing significant OAG progression. A Kalman filter approach for estimating the true value of OAG biomarkers resulted in data input which improved the accuracy of a logistic regression classification model compared to a model using raw measurements as input.
This methodology accounts for process and measurement noise to enable improved discrimination between progression and nonprogression in chronic diseases.
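The biomarker-smoothing idea described above can be illustrated with a scalar random-walk Kalman filter; the noise variances below are illustrative placeholders, not the CIGTS-fitted values:

```python
def kalman_1d(measurements, q=0.01, r=1.0, x0=0.0, p0=1.0):
    """Scalar random-walk Kalman filter: returns filtered estimates of the
    'true' biomarker value underlying noisy longitudinal measurements.
    q = process-noise variance, r = measurement-noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q                 # predict (random-walk state model)
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # update with the new measurement
        p *= (1.0 - k)
        estimates.append(x)
    return estimates
```

In the study's pipeline, the filtered (denoised) estimates, rather than the raw test values, become the inputs to the GEE logistic regression that classifies progression.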
4D computerized ionospheric tomography by using GPS measurements and IRI-Plas model
NASA Astrophysics Data System (ADS)
Tuna, Hakan; Arikan, Feza; Arikan, Orhan
2016-07-01
Ionospheric imaging is an important subject in ionospheric studies. GPS-based TEC measurements provide very accurate information about the electron density values in the ionosphere. However, since the measurements are generally very sparse and non-uniformly distributed, computation of a 3D electron density estimate from measurements alone is an ill-defined problem. Model-based 3D electron density estimations provide physically feasible distributions. However, they are not generally compliant with the TEC measurements obtained from GPS receivers. In this study, GPS-based TEC measurements and an ionosphere model known as the International Reference Ionosphere Extended to Plasmasphere (IRI-Plas) are employed together in order to obtain a physically accurate 3D electron density distribution which is compliant with the real measurements obtained from a GPS satellite-receiver network. Ionospheric parameters input to the IRI-Plas model are perturbed in the region of interest by using parametric perturbation models such that the synthetic TEC measurements calculated from the resultant 3D electron density distribution fit the real TEC measurements. The problem is considered as an optimization problem where the optimization parameters are the parameters of the parametric perturbation models. The proposed technique is applied over Turkey, on both calm and storm days of the ionosphere. Results show that the proposed technique produces 3D electron density distributions which are compliant with the IRI-Plas model, GPS TEC measurements and ionosonde measurements. The effect of the number of GPS receiver stations on the performance of the proposed technique is investigated. Results showed that 7 GPS receiver stations in a region as large as Turkey are sufficient for both calm and storm days of the ionosphere.
Since the ionization levels in the ionosphere are highly correlated in time, the proposed technique is extended to the time domain by applying Kalman-based tracking and smoothing approaches to the obtained results. Combining Kalman methods with the proposed 3D CIT technique creates a robust 4D ionospheric electron density estimation model and has the advantage of decreasing the computational cost of the proposed method. Results on both calm and storm days of the ionosphere show that the new technique produces more robust solutions, especially when the number of GPS receiver stations in the region is small. This study is supported by TUBITAK 114E541, 115E915 and Joint TUBITAK 114E092 and AS CR 14/001 projects.
Empirical Measurement and Model Validation of Infrared Spectra of Contaminated Surfaces
NASA Astrophysics Data System (ADS)
Archer, Sean
The goal of this thesis was to validate predicted infrared spectra of liquid contaminated surfaces from a micro-scale bi-directional reflectance distribution function (BRDF) model through the use of empirical measurement. Liquid contaminated surfaces generally require more sophisticated radiometric modeling to numerically describe surface properties. The Digital Image and Remote Sensing Image Generation (DIRSIG) model utilizes radiative transfer modeling to generate synthetic imagery for a variety of applications. Aside from DIRSIG, a micro-scale model known as microDIRSIG has been developed as a rigorous ray tracing physics-based model that can predict the BRDF of geometric surfaces defined as micron- to millimeter-resolution facets. The model extends conventional BRDF models by allowing contaminants to be added as geometric objects to a micro-facet surface. This model was validated through the use of Fourier transform infrared spectrometer measurements. A total of 18 different substrate and contaminant combinations were measured and compared against modeled outputs. The substrates used in this experiment were wood and aluminum, each with three different paint finishes: no paint, Krylon ultra-flat black, and Krylon glossy black. A silicone-based oil (SF96) was measured out and applied to each surface to create three different contamination cases per surface. Radiance in the longwave infrared region of the electromagnetic spectrum was measured by a Design and Prototypes (D&P) Fourier transform infrared spectrometer and a Physical Sciences Inc. Adaptive Infrared Imaging Spectroradiometer (AIRIS). The model outputs were compared against the measurements quantitatively in both the emissivity and radiance domains. A temperature emissivity separation (TES) algorithm had to be applied to the measured radiance spectra for comparison with the microDIRSIG-predicted emissivity spectra.
The model-predicted emissivity spectra were also forward modeled through a DIRSIG simulation for comparison with the radiance measurements. The results showed promising agreement for homogeneous surfaces with liquid contamination that could be well characterized geometrically. Limitations arose for substrates that were modeled as homogeneous surfaces but had spatially varying artifacts, due to uncertainties in contaminant and surface interactions. There is strong demand for accurate physics-based modeling of liquid contaminated surfaces, and this validation framework may be extended to include a wider array of samples for more realistic natural surfaces that are often found in real-world scenarios.
Bolandzadeh, Niousha; Kording, Konrad; Salowitz, Nicole; Davis, Jennifer C; Hsu, Liang; Chan, Alison; Sharma, Devika; Blohm, Gunnar; Liu-Ambrose, Teresa
2015-01-01
Current research suggests that the neuropathology of dementia-including brain changes leading to memory impairment and cognitive decline-is evident years before the onset of this disease. Older adults with cognitive decline have reduced functional independence and quality of life, and are at greater risk for developing dementia. Therefore, identifying biomarkers that can be easily assessed within the clinical setting and predict cognitive decline is important. Early recognition of cognitive decline could promote timely implementation of preventive strategies. We included 89 community-dwelling adults aged 70 years and older in our study, and collected 32 measures of physical function, health status and cognitive function at baseline. We utilized an L1-L2 regularized regression model (elastic net) to identify which of the 32 baseline measures were strongly predictive of cognitive function after one year. We built three linear regression models: 1) based on baseline cognitive function, 2) based on variables consistently selected in every cross-validation loop, and 3) a full model based on all the 32 variables. Each of these models was carefully tested with nested cross-validation. Our model with the six variables consistently selected in every cross-validation loop had a mean squared prediction error of 7.47. This number was smaller than that of the full model (115.33) and the model with baseline cognitive function (7.98). Our model explained 47% of the variance in cognitive function after one year. We built a parsimonious model based on a selected set of six physical function and health status measures strongly predictive of cognitive function after one year. In addition to reducing the complexity of the model without changing the model significantly, our model with the top variables improved the mean prediction error and R-squared. These six physical function and health status measures can be easily implemented in a clinical setting.
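The L1-L2 regularized regression (elastic net) used above for variable selection can be sketched with a simple proximal gradient (ISTA) fit; the regularization weights and data are arbitrary for illustration, and this is not the study's fitting code:

```python
import numpy as np

def elastic_net(X, y, l1=0.1, l2=0.1, lr=None, n_iter=5000):
    """Elastic-net regression fitted by proximal gradient descent (ISTA):
    minimize 0.5/n * ||y - X b||^2 + l1 * ||b||_1 + 0.5 * l2 * ||b||^2.
    The L1 term zeroes out weak predictors; the L2 term stabilizes the fit."""
    n, p = X.shape
    if lr is None:
        lr = 1.0 / (np.linalg.norm(X, 2) ** 2 / n + l2)  # step <= 1/Lipschitz
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n + l2 * b            # smooth-part gradient
        z = b - lr * grad
        b = np.sign(z) * np.maximum(np.abs(z) - lr * l1, 0.0)  # soft-threshold
    return b
```

Predictors whose coefficients survive the soft-thresholding across every cross-validation fold play the role of the "consistently selected" variables described in the abstract.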
Modeling of the metallic port in breast tissue expanders for photon radiotherapy.
Yoon, Jihyung; Xie, Yibo; Heins, David; Zhang, Rui
2018-03-30
The purpose of this study was to model the metallic port in breast tissue expanders and to improve the accuracy of dose calculations in a commercial photon treatment planning system (TPS). The density of the model was determined by comparing TPS calculations and ion chamber (IC) measurements. The model was further validated and compared with two widely used clinical models by using a simplified anthropomorphic phantom and thermoluminescent dosimeter (TLD) measurements. Dose perturbations and target coverage for a single postmastectomy radiotherapy (PMRT) patient were also evaluated. The dimensions of the metallic port model were determined to be 1.75 cm in diameter and 5 mm in thickness. The density of the port was adjusted to 7.5 g/cm³, which minimized the differences between IC measurements and TPS calculations. Using the simplified anthropomorphic phantom, we found that the TPS-calculated point doses based on the new model were in agreement with TLD measurements within 5.0% and were more accurate than doses calculated based on the clinical models. Based on the photon treatment plans for a real patient, we found that the metallic port has a negligible dosimetric impact on the chest wall, while the port introduced a significant dose shadow in the skin area. The current clinical port models either overestimate or underestimate the attenuation from the metallic port, and the dose perturbation depends on the plan and the model in a complex way. TPS calculations based on our model of the metallic port showed good agreement with measurements for all cases. This new model could improve the accuracy of dose calculations for PMRT patients who have temporary tissue expanders implanted during radiotherapy and could potentially reduce the risk of complications after treatment. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Underwater 3D Surface Measurement Using Fringe Projection Based Scanning Devices
Bräuer-Burchardt, Christian; Heinze, Matthias; Schmidt, Ingo; Kühmstedt, Peter; Notni, Gunther
2015-01-01
In this work we show the principle of optical 3D surface measurements based on the fringe projection technique for underwater applications. The challenges of underwater use of this technique are shown and discussed in comparison with the classical application. We describe an extended camera model which takes refraction effects into account as well as a proposal of an effective, low-effort calibration procedure for underwater optical stereo scanners. This calibration technique combines a classical air calibration based on the pinhole model with ray-based modeling and requires only a few underwater recordings of an object of known length and a planar surface. We demonstrate a new underwater 3D scanning device based on the fringe projection technique. It has a weight of about 10 kg and the maximal water depth for application of the scanner is 40 m. It covers an underwater measurement volume of 250 mm × 200 mm × 120 mm. The surface of the measurement objects is captured with a lateral resolution of 150 μm in a third of a second. Calibration evaluation results are presented and examples of first underwater measurements are given. PMID:26703624
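The refraction effects that the extended camera model must account for reduce, at each ray-surface intersection, to the vector form of Snell's law. A minimal sketch of that single step (a generic illustration, not the authors' calibration code):

```python
import math

def refract(d, n, n1, n2):
    """Refract a unit direction vector d at an interface with unit normal n
    (pointing toward the incoming medium), using the vector form of
    Snell's law; n1 and n2 are the refractive indices of the two media.
    Returns None on total internal reflection."""
    cos_i = -sum(di * ni for di, ni in zip(d, n))
    r = n1 / n2
    sin_t2 = r * r * (1.0 - cos_i * cos_i)
    if sin_t2 > 1.0:
        return None  # total internal reflection
    cos_t = math.sqrt(1.0 - sin_t2)
    return tuple(r * di + (r * cos_i - cos_t) * ni for di, ni in zip(d, n))
```

A ray-based underwater camera model applies this bending at every housing interface (e.g. air to glass to water) instead of assuming the straight rays of the pinhole model used in air calibration.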
Song, Zirui; Rose, Sherri; Chernew, Michael E.; Safran, Dana Gelb
2018-01-01
As population-based payment models become increasingly common, it is crucial to understand how such payment models affect health disparities. We evaluated health care quality and spending among enrollees in areas with lower versus higher socioeconomic status in Massachusetts before and after providers entered into the Alternative Quality Contract, a two-sided population-based payment model with substantial incentives tied to quality. We compared changes in process measures, outcome measures, and spending between enrollees in areas with lower and higher socioeconomic status from 2006 to 2012 (outcome measures were measured after the intervention only). Quality improved for all enrollees in the Alternative Quality Contract after their provider organizations entered the contract. Process measures improved 1.2 percentage points per year more among enrollees in areas with lower socioeconomic status than among those in areas with higher socioeconomic status. Outcome measure improvement was no different between the subgroups; neither were changes in spending. Larger or comparable improvements in quality among enrollees in areas with lower socioeconomic status suggest a potential narrowing of disparities. Strong pay-for-performance incentives within a population-based payment model could encourage providers to focus on improving quality for more disadvantaged populations. PMID:28069849
Onega, Tracy; Beaber, Elisabeth F; Sprague, Brian L; Barlow, William E; Haas, Jennifer S; Tosteson, Anna N A; D Schnall, Mitchell; Armstrong, Katrina; Schapira, Marilyn M; Geller, Berta; Weaver, Donald L; Conant, Emily F
2014-10-01
Breast cancer screening holds a prominent place in public health, health care delivery, policy, and women's health care decisions. Several factors are driving shifts in how population-based breast cancer screening is approached, including advanced imaging technologies, health system performance measures, health care reform, concern for "overdiagnosis," and improved understanding of risk. Maximizing benefits while minimizing the harms of screening requires moving from a "1-size-fits-all" guideline paradigm to more personalized strategies. A refined conceptual model for breast cancer screening is needed to align women's risks and preferences with screening regimens. A conceptual model of personalized breast cancer screening is presented herein that emphasizes key domains and transitions throughout the screening process, as well as multilevel perspectives. The key domains of screening awareness, detection, diagnosis, and treatment and survivorship are conceptualized to function at the level of the patient, provider, facility, health care system, and population/policy arena. Personalized breast cancer screening can be assessed across these domains with both process and outcome measures. Identifying, evaluating, and monitoring process measures in screening is a focus of a National Cancer Institute initiative entitled PROSPR (Population-based Research Optimizing Screening through Personalized Regimens), which will provide generalizable evidence for a risk-based model of breast cancer screening. The model presented builds on prior breast cancer screening models and may serve to identify new measures to optimize benefits-to-harms tradeoffs in population-based screening, which is a timely goal in the era of health care reform. © 2014 American Cancer Society.
Interpreting Variance Components as Evidence for Reliability and Validity.
ERIC Educational Resources Information Center
Kane, Michael T.
The reliability and validity of measurement is analyzed by a sampling model based on generalizability theory. A model for the relationship between a measurement procedure and an attribute is developed from an analysis of how measurements are used and interpreted in science. The model provides a basis for analyzing the concept of an error of…
NASA Astrophysics Data System (ADS)
Hirtl, Marcus; Mantovani, Simone; Krüger, Bernd C.; Triebnig, Gerhard; Flandorfer, Claudia
2013-04-01
Air quality is a key element for the well-being and quality of life of European citizens. Air pollution measurements and modeling tools are essential for the assessment of air quality according to EU legislation. The responsibilities of ZAMG as the national weather service of Austria include the support of the federal states and the public in questions connected to the protection of the environment in the frame of advisory and counseling services as well as expert opinions. The Air Quality model for Austria (AQA) has been operated at ZAMG in cooperation with the University of Natural Resources and Life Sciences in Vienna (BOKU) by order of the regional governments since 2005. AQA conducts daily forecasts of gaseous and particulate (PM10) air pollutants over Austria. In the frame of the project AQA-PM (funded by FFG), satellite measurements of the Aerosol Optical Thickness (AOT) and ground-based PM10 measurements are combined into highly resolved initial fields using regression and assimilation techniques. For the model simulations, WRF/Chem is used with a resolution of 3 km over the alpine region. Interfaces have been developed to account for the different measurements as input data. The available local emission inventories provided by the different Austrian regional governments were harmonized and used for the model simulations. An episode in February 2010 was chosen for the model evaluation; during that month, exceedances of PM10 thresholds occurred at many measurement stations of the Austrian network. Different model runs (model only/only ground stations assimilated/satellite and ground stations assimilated) are compared to the respective measurements. The goal of this project is to improve the PM10 forecasts for Austria through the integration of satellite-based measurements and to provide a comprehensive product platform.
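The regression step that relates satellite AOT to ground PM10 before assimilation can, in its simplest form, be an ordinary least-squares fit. The sketch below is a hypothetical stand-in for the project's actual regression technique, with made-up coefficients in the test:

```python
import numpy as np

def fit_aot_pm10(aot, pm10):
    """Ordinary least-squares linear regression PM10 ~ a * AOT + b,
    a minimal stand-in for the step that maps satellite-derived AOT
    to ground-level PM10 concentrations."""
    A = np.column_stack([aot, np.ones(len(aot))])
    coef, *_ = np.linalg.lstsq(A, pm10, rcond=None)
    return coef  # (slope a, intercept b)
```

In practice such regressions are usually conditioned on meteorology (boundary-layer height, humidity), but the fitted field can then be blended with station data by the assimilation scheme in the same way.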
Real-time inverse kinematics for the upper limb: a model-based algorithm using segment orientations.
Borbély, Bence J; Szolgay, Péter
2017-01-17
Model-based analysis of human upper limb movements has key importance in understanding the motor control processes of our nervous system. Various simulation software packages have been developed over the years to perform model-based analysis. These packages provide computationally intensive (and therefore offline) solutions to calculate the anatomical joint angles from motion-captured raw measurement data (also referred to as inverse kinematics). In addition, recent developments in inertial motion sensing technology show that it may replace large, immobile and expensive optical systems with small, mobile and cheaper solutions in cases when a laboratory-free measurement setup is needed. The objective of the presented work is to extend the workflow of measurement and analysis of human arm movements with an algorithm that allows accurate and real-time estimation of anatomical joint angles for a widely used OpenSim upper limb kinematic model when inertial sensors are used for movement recording. The internal structure of the selected upper limb model is analyzed and used as the underlying platform for the development of the proposed algorithm. Based on this structure, a prototype marker set is constructed that facilitates the reconstruction of model-based joint angles using orientation data directly available from inertial measurement systems. The mathematical formulation of the reconstruction algorithm is presented along with the validation of the algorithm on various platforms, including embedded environments. Execution performance tables of the proposed algorithm show significant improvement on all tested platforms. Compared to OpenSim's Inverse Kinematics tool, a 50-15,000x speedup is achieved while maintaining numerical accuracy. 
The proposed algorithm is capable of real-time reconstruction of standardized anatomical joint angles even in embedded environments, establishing a new way for complex applications to take advantage of accurate and fast model-based inverse kinematics calculations.
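The core of orientation-based joint angle reconstruction can be illustrated with a minimal sketch: given the absolute orientations of two adjacent segments from inertial sensors, the joint angle is read off the relative rotation between them. This is not the authors' OpenSim-specific algorithm; the quaternion convention ([w, x, y, z]) and the single-axis angle extraction are simplifying assumptions for illustration.

```python
import numpy as np

def quat_conj(q):
    # Conjugate of a unit quaternion [w, x, y, z].
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_mul(a, b):
    # Hamilton product of two quaternions.
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def relative_joint_angle(q_parent, q_child, axis=0):
    # Orientation of the child segment expressed in the parent frame,
    # reduced to a signed rotation angle about one anatomical axis.
    q_rel = quat_mul(quat_conj(q_parent), q_child)
    angle = 2.0 * np.arccos(np.clip(q_rel[0], -1.0, 1.0))
    return np.sign(q_rel[1 + axis]) * angle

# 90-degree "elbow flexion" about x: parent at identity, child rotated.
q_p = np.array([1.0, 0.0, 0.0, 0.0])
q_c = np.array([np.cos(np.pi/4), np.sin(np.pi/4), 0.0, 0.0])
print(round(np.degrees(relative_joint_angle(q_p, q_c)), 6))  # → 90.0
```

A real pipeline would also calibrate sensor-to-segment alignment and decompose the relative rotation into the model's full joint angle sequence.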
Imputation and Model-Based Updating Technique for Annual Forest Inventories
Ronald E. McRoberts
2001-01-01
The USDA Forest Service is developing an annual inventory system to establish the capability of producing annual estimates of timber volume and related variables. The inventory system features measurement of an annual sample of field plots with options for updating data for plots measured in previous years. One imputation and two model-based updating techniques are...
What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?
ERIC Educational Resources Information Center
Schochet, Peter Z.; Chiang, Hanley S.
2013-01-01
This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…
Modeling reliability measurement of interface on information system: Towards the forensic of rules
NASA Astrophysics Data System (ADS)
Nasution, M. K. M.; Sitompul, Darwin; Harahap, Marwan
2018-02-01
Today almost all machines depend on software, and a software and hardware system in turn depends on rules, that is, the procedures for its use. If a procedure or program can be reliably characterized using the concepts of graphs, logic, and probability, then the strength of the rules can be measured accordingly. This paper therefore proposes an enumeration model for measuring the reliability of interfaces, based on the case of information systems whose use is governed by the rules of the relevant agencies. The enumeration model is obtained from a software reliability calculation.
Malinowski, Kathleen; McAvoy, Thomas J; George, Rohini; Dieterich, Sonja; D'Souza, Warren D
2013-07-01
The purpose of this study was to determine how best to time respiratory surrogate-based tumor motion model updates by comparing a novel technique based on external measurements alone to three direct measurement methods. Concurrently measured tumor and respiratory surrogate positions from 166 treatment fractions for lung or pancreas lesions were analyzed. Partial-least-squares regression models of tumor position from marker motion were created from the first six measurements in each dataset. Successive tumor localizations were obtained at a rate of once per minute on average. Model updates were timed according to four methods: never, respiratory surrogate-based (when metrics based on respiratory surrogate measurements exceeded confidence limits), error-based (when localization error ≥ 3 mm), and always (approximately once per minute). Radial tumor displacement prediction errors (mean ± standard deviation) for the four schemes described above were 2.4 ± 1.2, 1.9 ± 0.9, 1.9 ± 0.8, and 1.7 ± 0.8 mm, respectively. The never-update error was significantly larger than errors of the other methods. Mean update counts over 20 min were 0, 4, 9, and 24, respectively. The same improvement in tumor localization accuracy could be achieved through any of the three update methods, but significantly fewer updates were required when the respiratory surrogate method was utilized. This study establishes the feasibility of timing image acquisitions for updating respiratory surrogate models without direct tumor localization.
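The error-based update rule can be sketched as follows, with ordinary least squares standing in for the paper's partial-least-squares model and a synthetic 1-D surrogate signal with baseline drift; all names and numbers here are illustrative, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit(X, y):
    # Least-squares stand-in for the paper's partial-least-squares model.
    A = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, x):
    return np.append(x, 1.0) @ coef

# Surrogate marker motion (1-D for brevity) mapping to tumor position,
# with a slow drift that eventually invalidates the initial model.
n = 60
X = rng.normal(size=(n, 1))
drift = np.linspace(0.0, 4.0, n)           # mm of baseline shift
y = 5.0 * X[:, 0] + drift + rng.normal(scale=0.2, size=n)

coef = fit(X[:6], y[:6])                   # model from the first six samples
updates = 0
for i in range(6, n):
    err = abs(predict(coef, X[i]) - y[i])  # localization error in mm
    if err >= 3.0:                         # error-based update rule
        coef = fit(X[:i + 1], y[:i + 1])   # refit on all data so far
        updates += 1
print("updates:", updates)
```

The "always" and "never" schemes correspond to refitting at every step or skipping the refit branch entirely.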
Operational Space Weather Models: Trials, Tribulations and Rewards
NASA Astrophysics Data System (ADS)
Schunk, R. W.; Scherliess, L.; Sojka, J. J.; Thompson, D. C.; Zhu, L.
2009-12-01
There are many empirical, physics-based, and data assimilation models that can probably be used for space weather applications and the models cover the entire domain from the surface of the Sun to the Earth’s surface. At Utah State University we developed two physics-based data assimilation models of the terrestrial ionosphere as part of a program called Global Assimilation of Ionospheric Measurements (GAIM). One of the data assimilation models is now in operational use at the Air Force Weather Agency (AFWA) in Omaha, Nebraska. This model is a Gauss-Markov Kalman Filter (GAIM-GM) model, and it uses a physics-based model of the ionosphere and a Kalman filter as a basis for assimilating a diverse set of real-time (or near real-time) measurements. The physics-based model is the Ionosphere Forecast Model (IFM), which is global and covers the E-region, F-region, and topside ionosphere from 90 to 1400 km. It takes account of five ion species (NO+, O2+, N2+, O+, H+), but the main output of the model is a 3-dimensional electron density distribution at user specified times. The second data assimilation model uses a physics-based Ionosphere-Plasmasphere Model (IPM) and an ensemble Kalman filter technique as a basis for assimilating a diverse set of real-time (or near real-time) measurements. This Full Physics model (GAIM-FP) is global, covers the altitude range from 90 to 30,000 km, includes six ions (NO+, O2+, N2+, O+, H+, He+), and calculates the self-consistent ionospheric drivers (electric fields and neutral winds). The GAIM-FP model is scheduled for delivery in 2012. Both of these GAIM models assimilate bottom-side Ne profiles from a variable number of ionosondes, slant TEC from a variable number of ground GPS/TEC stations, in situ Ne from four DMSP satellites, line-of-sight UV emissions measured by satellites, and occultation data. 
Quality control algorithms for all of the data types are provided as an integral part of the GAIM models and these models take account of latent data (up to 3 hours). The trials, tribulations and rewards of constructing and maintaining operational data assimilation models will be discussed.
Posterior Predictive Bayesian Phylogenetic Model Selection
Lewis, Paul O.; Xie, Wangang; Chen, Ming-Hui; Fan, Yu; Kuo, Lynn
2014-01-01
We present two distinctly different posterior predictive approaches to Bayesian phylogenetic model selection and illustrate these methods using examples from green algal protein-coding cpDNA sequences and flowering plant rDNA sequences. The Gelfand–Ghosh (GG) approach allows dissection of an overall measure of model fit into components due to posterior predictive variance (GGp) and goodness-of-fit (GGg), which distinguishes this method from the posterior predictive P-value approach. The conditional predictive ordinate (CPO) method provides a site-specific measure of model fit useful for exploratory analyses and can be combined over sites yielding the log pseudomarginal likelihood (LPML) which is useful as an overall measure of model fit. CPO provides a useful cross-validation approach that is computationally efficient, requiring only a sample from the posterior distribution (no additional simulation is required). Both GG and CPO add new perspectives to Bayesian phylogenetic model selection based on the predictive abilities of models and complement the perspective provided by the marginal likelihood (including Bayes Factor comparisons) based solely on the fit of competing models to observed data. [Bayesian; conditional predictive ordinate; CPO; L-measure; LPML; model selection; phylogenetics; posterior predictive.] PMID:24193892
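The CPO computation described here reduces to a harmonic mean of per-site likelihoods across posterior samples, and LPML is the sum of the log CPOs. A numerically stable log-space sketch (with an illustrative sanity check; the array shapes are assumptions, not the authors' code):

```python
import numpy as np

def lpml(loglik):
    # loglik: (n_samples, n_sites) per-site log-likelihoods evaluated at
    # each posterior sample. CPO_i is the harmonic mean of site-i
    # likelihoods across samples; in log space:
    #   log CPO_i = log(n) - logsumexp(-loglik[:, i]).
    n = loglik.shape[0]
    m = (-loglik).max(axis=0)
    log_cpo = np.log(n) - (m + np.log(np.exp(-loglik - m).sum(axis=0)))
    return log_cpo.sum(), log_cpo

# Sanity check: if every sample gives likelihood 0.5 at every site,
# CPO is exactly 0.5 per site.
ll = np.full((1000, 4), np.log(0.5))
total, per_site = lpml(ll)
print(np.allclose(np.exp(per_site), 0.5), np.isclose(total, 4 * np.log(0.5)))
# → True True
```

Because only posterior samples are needed, this is the "no additional simulation" property the abstract highlights.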
Diffraction-based overlay measurement on dedicated mark using rigorous modeling method
NASA Astrophysics Data System (ADS)
Lu, Hailiang; Wang, Fan; Zhang, Qingyun; Chen, Yonghui; Zhou, Chang
2012-03-01
Diffraction Based Overlay (DBO) has been widely evaluated by numerous authors, and results show that DBO can provide better performance than Imaging Based Overlay (IBO). However, DBO has its own problems. As is well known, modeling-based DBO (mDBO) faces challenges of low measurement sensitivity and crosstalk between various structure parameters, which may result in poor accuracy and precision. Meanwhile, the main obstacle encountered by empirical DBO (eDBO) is that a few pads must be employed to gain sufficient information on overlay-induced diffraction signature variations, which consumes more wafer space and more measuring time. Also, eDBO may suffer from mark profile asymmetry caused by processes. In this paper, we propose an alternative DBO technology that employs a dedicated overlay mark and takes a rigorous modeling approach. This technology needs only two or three pads for each direction, which is economical and time-saving. While reducing the overlay measurement error induced by mark profile asymmetry, this technology is expected to be as accurate and precise as existing scatterometry technologies.
Athens, Jessica K.; Remington, Patrick L.; Gangnon, Ronald E.
2015-01-01
Objectives The University of Wisconsin Population Health Institute has published the County Health Rankings since 2010. These rankings use population-based data to highlight health outcomes and the multiple determinants of these outcomes and to encourage in-depth health assessment for all United States counties. A significant methodological limitation, however, is the uncertainty of rank estimates, particularly for small counties. To address this challenge, we explore the use of longitudinal and pooled outcome data in hierarchical Bayesian models to generate county ranks with greater precision. Methods In our models we used pooled outcome data for three measure groups: (1) Poor physical and poor mental health days; (2) percent of births with low birth weight and fair or poor health prevalence; and (3) age-specific mortality rates for nine age groups. We used the fixed and random effects components of these models to generate posterior samples of rates for each measure. We also used time-series data in longitudinal random effects models for age-specific mortality. Based on the posterior samples from these models, we estimate ranks and rank quartiles for each measure, as well as the probability of a county ranking in its assigned quartile. Rank quartile probabilities for univariate, joint outcome, and/or longitudinal models were compared to assess improvements in rank precision. Results The joint outcome model for poor physical and poor mental health days resulted in improved rank precision, as did the longitudinal model for age-specific mortality rates. Rank precision for low birth weight births and fair/poor health prevalence based on the univariate and joint outcome models were equivalent. Conclusion Incorporating longitudinal or pooled outcome data may improve rank certainty, depending on characteristics of the measures selected. For measures with different determinants, joint modeling neither improved nor degraded rank precision. 
This approach suggests a simple way to use existing information to improve the precision of small-area measures of population health. PMID:26098858
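The rank and rank-quartile probability machinery described above can be sketched from posterior samples alone; the county rates, noise scale, and quartile construction below are synthetic stand-ins, not County Health Rankings data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Posterior samples of an outcome rate for each county
# (rows: posterior draws, columns: counties).
n_draws, n_counties = 2000, 20
true_rates = np.linspace(0.05, 0.25, n_counties)
samples = true_rates + rng.normal(scale=0.01, size=(n_draws, n_counties))

# Rank counties within every posterior draw (rank 1 = lowest rate).
ranks = samples.argsort(axis=1).argsort(axis=1) + 1

# Point-estimate ranks and their quartile assignment.
assigned_rank = np.median(ranks, axis=0)
quartile_edges = np.quantile(np.arange(1, n_counties + 1), [0.25, 0.5, 0.75])
assigned_q = np.digitize(assigned_rank, quartile_edges)

# Probability that each county's draw-level rank falls in its
# assigned quartile: a direct measure of rank precision.
draw_q = np.digitize(ranks, quartile_edges)
prob_in_quartile = (draw_q == assigned_q).mean(axis=0)
print(prob_in_quartile.round(2))
```

Counties near quartile boundaries get low probabilities; pooling outcomes or adding longitudinal data, as the abstract describes, tightens the posteriors and raises these probabilities.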
Oliveira-Maia, Albino J.; Mendonça, Carina; Pessoa, Maria J.; Camacho, Marta; Gago, Joaquim
2016-01-01
Within clinical psychiatry, recovery from severe mental illness (SMI) has classically been defined according to symptoms and function (service-based recovery). However, service-users have argued that recovery should be defined as the process of overcoming mental illness, regaining self-control and establishing a meaningful life (customer-based recovery). Here, we aimed to compare customer-based and service-based recovery and clarify their differential relationship with other constructs, namely needs and quality of life. The study was conducted in 101 patients suffering from SMI, recruited from a rural community mental health setting in Portugal. Customer-based recovery and function-related service-based recovery were assessed, respectively, using a shortened version of the Mental Health Recovery Measure (MHRM-20) and the Global Assessment of Functioning score. The Camberwell Assessment of Need scale was used to objectively assess needs, while subjective quality of life was measured with the TL-30s scale. Using multiple linear regression models, we found that the Global Assessment of Functioning score was incrementally predictive of the MHRM-20 score, when added to a model including only clinical and demographic factors, and that this model was further incremented by the score for quality of life. However, in an alternate model using the Global Assessment of Functioning score as the dependent variable, while the MHRM-20 score contributed significantly to the model when added to clinical and demographic factors, the model was not incremented by the score for quality of life. These results suggest that, while a more global concept of recovery from SMI may be assessed using measures for service-based and customer-based recovery, the latter, namely the MHRM-20, also provides information about subjective well-being. 
Pending confirmation of these findings in other populations, this instrument could thus be useful for comprehensive assessment of recovery and subjective well-being in patients suffering from SMI. PMID:27857698
NASA Astrophysics Data System (ADS)
Liu, Yi; Zhang, He; Liu, Siwei; Lin, Fuchang
2018-05-01
The J-A (Jiles-Atherton) model is widely used to describe the magnetization characteristics of magnetic cores in a low-frequency alternating field. However, this model is deficient in the quantitative analysis of the eddy current loss and residual loss in a high-frequency magnetic field. Based on the decomposition of magnetization intensity, an inverse J-A model is established which uses magnetic flux density B as an input variable. Static and dynamic core losses under high frequency excitation are separated based on the inverse J-A model. Optimized parameters of the inverse J-A model are obtained based on particle swarm optimization. The platform for the pulsed magnetization characteristic test is designed and constructed. The hysteresis curves of ferrite and Fe-based nanocrystalline cores at high magnetization rates are measured. The simulated and measured hysteresis curves are presented and compared. It is found that the inverse J-A model can be used to describe the magnetization characteristics at high magnetization rates and to separate the static loss and dynamic loss accurately.
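The separation of static and dynamic core losses can be illustrated with the standard two-term loss-separation idea (hysteresis energy per cycle constant with frequency, eddy-current energy growing linearly with it), rather than the authors' inverse J-A model itself; the coefficients below are invented for the example.

```python
import numpy as np

# Two-term loss separation: total core loss per cycle is
#   P/f = W_hyst + c_eddy * f
# so a linear fit of energy-per-cycle against frequency splits the
# static (hysteresis) and dynamic (eddy-current) parts.
f = np.array([1e3, 2e3, 5e3, 1e4, 2e4])   # Hz
W_h, c_e = 0.02, 1.5e-7                   # assumed "true" values
P = f * (W_h + c_e * f)                   # synthetic loss data, W/kg

coef = np.polyfit(f, P / f, 1)            # [slope, intercept]
print(coef[1], coef[0])                   # → ~0.02 and ~1.5e-07
```

In the paper this separation is done through the inverse J-A model with parameters found by particle swarm optimization; the linear fit here only conveys the static/dynamic split.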
DOE Office of Scientific and Technical Information (OSTI.GOV)
Addair, Travis; Barno, Justin; Dodge, Doug
CCT is a Java-based application for calibrating 10 shear wave coda measurement models to observed data using a much smaller set of reference moment magnitudes (MWs) calculated by other means (waveform modeling, etc.). These calibrated measurement models can then be used in other tools to generate coda moment magnitude measurements, source spectra, estimated stress drop, and other useful measurements for any additional events and any new data collected in the calibrated region.
A method for modelling GP practice level deprivation scores using GIS
Strong, Mark; Maheswaran, Ravi; Pearson, Tim; Fryers, Paul
2007-01-01
Background A measure of general practice level socioeconomic deprivation can be used to explore the association between deprivation and other practice characteristics. An area-based categorisation is commonly chosen as the basis for such a deprivation measure. Ideally a practice population-weighted area-based deprivation score would be calculated using individual level spatially referenced data. However, these data are often unavailable. One approach is to link the practice postcode to an area-based deprivation score, but this method has limitations. This study aimed to develop a Geographical Information Systems (GIS) based model that could better predict a practice population-weighted deprivation score in the absence of patient level data than simple practice postcode linkage. Results We calculated predicted practice level Index of Multiple Deprivation (IMD) 2004 deprivation scores using two methods that did not require patient level data. Firstly we linked the practice postcode to an IMD 2004 score, and secondly we used a GIS model derived using data from Rotherham, UK. We compared our two sets of predicted scores to "gold standard" practice population-weighted scores for practices in Doncaster, Havering and Warrington. Overall, the practice postcode linkage method overestimated "gold standard" IMD scores by 2.54 points (95% CI 0.94, 4.14), whereas our modelling method showed no such bias (mean difference 0.36, 95% CI -0.30, 1.02). The postcode-linked method systematically underestimated the gold standard score in less deprived areas, and overestimated it in more deprived areas. Our modelling method showed a small underestimation in scores at higher levels of deprivation in Havering, but showed no bias in Doncaster or Warrington. The postcode-linked method showed more variability when predicting scores than did the GIS modelling method. 
Conclusion A GIS based model can be used to predict a practice population-weighted area-based deprivation measure in the absence of patient level data. Our modelled measure generally had better agreement with the population-weighted measure than did a postcode-linked measure. Our model may also avoid an underestimation of IMD scores in less deprived areas, and overestimation of scores in more deprived areas, seen when using postcode linked scores. The proposed method may be of use to researchers who do not have access to patient level spatially referenced data. PMID:17822545
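The "gold standard" practice population-weighted score is simply a patient-count-weighted average of small-area deprivation scores over a practice's list. A sketch with invented area names and IMD values (not real IMD 2004 data):

```python
# Each key is a small area contributing patients to one GP practice;
# values are registered patient counts and that area's IMD score.
patients_per_area = {"area_A": 1200, "area_B": 300, "area_C": 500}
imd_score = {"area_A": 35.0, "area_B": 12.0, "area_C": 22.0}

total = sum(patients_per_area.values())
weighted = sum(n * imd_score[a] for a, n in patients_per_area.items()) / total
print(round(weighted, 2))  # → 28.3
```

The postcode-linked alternative would instead assign the practice the single IMD score of the area containing its surgery, which is what introduces the biases the study reports.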
NASA Astrophysics Data System (ADS)
Tso, Chak-Hau Michael; Kuras, Oliver; Wilkinson, Paul B.; Uhlemann, Sebastian; Chambers, Jonathan E.; Meldrum, Philip I.; Graham, James; Sherlock, Emma F.; Binley, Andrew
2017-11-01
Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe or assume a statistical model of data errors before inversion. Wrongly prescribed errors can lead to over- or under-fitting of data; however, the derivation of models of data errors is often neglected. With the heightening interest in uncertainty estimation within hydrogeophysics, better characterisation and treatment of measurement errors is needed to provide improved image appraisal. Here we focus on the role of measurement errors in electrical resistivity tomography (ERT). We have analysed two time-lapse ERT datasets: one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe; the other is a two-year-long cross-borehole survey at a UK nuclear site with 246 sets of over 50,000 measurements. Our study includes the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and correlation coefficient analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used, i.e. errors may not be uncorrelated as often assumed. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to transfer resistance. The new model explains the observed measurement errors better and shows superior inversion results and uncertainty estimates in synthetic examples. It is robust, because it groups errors together based on the electrodes used to make the measurements. The new model can be readily applied to the diagonal data weighting matrix widely used in common inversion methods, as well as to the data covariance matrix in a Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
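The proposed error model, a linear fit of error against transfer resistance within groups defined by the electrodes used, can be sketched on synthetic data; the grouping scheme and coefficients below are illustrative, not the survey's values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Reciprocal-error model for ERT: fit |error| = a + b * |R| separately
# for measurement groups defined by the electrodes used.
n = 500
R = rng.uniform(1.0, 100.0, n)                # transfer resistances, ohms
group = rng.integers(0, 4, n)                 # which electrode group
a_true = np.array([0.05, 0.10, 0.20, 0.40])   # per-group error offsets
err = a_true[group] + 0.01 * R + rng.normal(scale=0.01, size=n)

params = {}
for g in range(4):
    m = group == g
    b, a = np.polyfit(R[m], err[m], 1)        # slope, intercept per group
    params[g] = (a, b)
print({g: (round(a, 2), round(b, 3)) for g, (a, b) in params.items()})
```

The fitted per-group parameters would then populate the diagonal data-weighting matrix (or the data covariance matrix in a Bayesian inversion), as the abstract notes.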
Physiome-model-based state-space framework for cardiac deformation recovery.
Wong, Ken C L; Zhang, Heye; Liu, Huafeng; Shi, Pengcheng
2007-11-01
To more reliably recover cardiac information from noise-corrupted, patient-specific measurements, it is essential to employ meaningful constraining models and adopt appropriate optimization criteria to couple the models with the measurements. Although biomechanical models have been extensively used for myocardial motion recovery with encouraging results, the passive nature of such constraints limits their ability to fully account for the deformation caused by active forces of the myocytes. To overcome such limitations, we propose to adopt a cardiac physiome model as the prior constraint for cardiac motion analysis. The cardiac physiome model comprises an electric wave propagation model, an electromechanical coupling model, and a biomechanical model, which are connected through a cardiac system dynamics for a more complete description of the macroscopic cardiac physiology. Embedded within a multiframe state-space framework, the uncertainties of the model and the patient's measurements are systematically dealt with to arrive at optimal cardiac kinematic estimates and possibly beyond. Experiments have been conducted to compare our proposed cardiac-physiome-model-based framework with the solely biomechanical model-based framework. The results show that our proposed framework recovers more accurate cardiac deformation from synthetic data and obtains more sensible estimates from real magnetic resonance image sequences. With the active components introduced by the cardiac physiome model, cardiac deformations recovered from patient's medical images are more physiologically plausible.
[A new model for the evaluation of measurements of the neurocranium].
Seidler, H; Wilfing, H; Weber, G; Traindl-Prohazka, M; zur Nedden, D; Platzer, W
1993-12-01
A simple and user-friendly model for trigonometric description of the neurocranium based on newly defined points of measurement is presented. This model not only provides individual description, but also allows for an evaluation of developmental and phylogenetic aspects.
Assessment of initial soil moisture conditions for event-based rainfall-runoff modelling
NASA Astrophysics Data System (ADS)
Tramblay, Yves; Bouvier, Christophe; Martin, Claude; Didon-Lescot, Jean-François; Todorovik, Dragana; Domergue, Jean-Marc
2010-06-01
Flash floods are the most destructive natural hazards that occur in the Mediterranean region. Rainfall-runoff models can be very useful for flash flood forecasting and prediction. Event-based models are very popular for operational purposes, but there is a need to reduce the uncertainties related to the initial moisture conditions estimation prior to a flood event. This paper aims to compare several soil moisture indicators: local Time Domain Reflectometry (TDR) measurements of soil moisture, modelled soil moisture through the Interaction-Sol-Biosphère-Atmosphère (ISBA) component of the SIM model (Météo-France), antecedent precipitation and base flow. A modelling approach based on the Soil Conservation Service-Curve Number method (SCS-CN) is used to simulate the flood events in a small headwater catchment in the Cevennes region (France). The model involves two parameters: one for the runoff production, S, and one for the routing component, K. The S parameter can be interpreted as the maximal water retention capacity, and acts as the initial condition of the model, depending on the antecedent moisture conditions. The model was calibrated from a 20-flood sample, and led to a median Nash value of 0.9. The local TDR measurements in the deepest layers of soil (80-140 cm) were found to be the best predictors for the S parameter. TDR measurements averaged over the whole soil profile, outputs of the SIM model, and the logarithm of base flow also proved to be good predictors, whereas antecedent precipitations were found to be less efficient. The good correlations observed between the TDR predictors and the S calibrated values indicate that monitoring soil moisture could help setting the initial conditions for simplified event-based models in small basins.
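The SCS-CN production function referred to here has a closed form: with initial abstraction Ia = 0.2 S, direct runoff is Q = (P - Ia)^2 / (P - Ia + S). A sketch showing how a smaller S (a wetter initial state) increases runoff for the same storm:

```python
def scs_cn_runoff(P, S, lam=0.2):
    """SCS Curve Number direct runoff Q (same units as rainfall P).

    S is the maximal retention capacity; the initial abstraction is
    Ia = lam * S (0.2 is the conventional value).
    """
    Ia = lam * S
    if P <= Ia:
        return 0.0  # all rainfall absorbed before runoff begins
    return (P - Ia) ** 2 / (P - Ia + S)

# 100 mm storm under two antecedent moisture states.
print(round(scs_cn_runoff(100.0, 50.0), 1))   # drier, S = 50 mm → 57.9
print(round(scs_cn_runoff(100.0, 20.0), 1))   # wetter, S = 20 mm → 79.4
```

This is why the study's predictors (deep-layer TDR, SIM soil moisture, base flow) matter: they set S before each event.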
Wallace, C.S.A.; Marsh, S.E.
2005-01-01
Our study used geostatistics to extract measures that characterize the spatial structure of vegetated landscapes from satellite imagery for mapping endangered Sonoran pronghorn habitat. Fine spatial resolution IKONOS data provided information at the scale of individual trees or shrubs that permitted analysis of vegetation structure and pattern. We derived images of landscape structure by calculating local estimates of the nugget, sill, and range variogram parameters within 25 × 25-m image windows. These variogram parameters, which describe the spatial autocorrelation of the 1-m image pixels, are shown in previous studies to discriminate between different species-specific vegetation associations. We constructed two independent models of pronghorn landscape preference by coupling the derived measures with Sonoran pronghorn sighting data: a distribution-based model and a cluster-based model. The distribution-based model used the descriptive statistics for variogram measures at pronghorn sightings, whereas the cluster-based model used the distribution of pronghorn sightings within clusters of an unsupervised classification of derived images. Both models define similar landscapes, and validation results confirm they effectively predict the locations of an independent set of pronghorn sightings. Such information, although not a substitute for field-based knowledge of the landscape and associated ecological processes, can provide valuable reconnaissance information to guide natural resource management efforts. © 2005 Taylor & Francis Group Ltd.
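A 1-D sketch of the empirical variogram computation behind the nugget/sill/range measures (real usage would average 2-D pixel pairs within each 25 × 25-m window; the "vegetation" transect here is synthetic):

```python
import numpy as np

def empirical_variogram(z, max_lag):
    # Semivariance gamma(h) = 0.5 * mean((z[i] - z[i+h])^2) for a 1-D
    # transect of pixel values.
    z = np.asarray(z, dtype=float)
    return np.array([0.5 * np.mean((z[:-h] - z[h:]) ** 2)
                     for h in range(1, max_lag + 1)])

# Patchy "vegetation": one reflectance value per 5-pixel patch, so
# autocorrelation decays over roughly 5 pixels.
rng = np.random.default_rng(3)
z = np.repeat(rng.normal(size=50), 5)
gamma = empirical_variogram(z, 10)
print(gamma[0] < gamma[7])  # semivariance grows with lag up to the range → True
```

The nugget is read off near lag zero, the sill is the plateau value, and the range is the lag at which the plateau is reached; fitting a variogram model per window yields the three parameter images the study derives.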
Probabilistic estimation of residential air exchange rates for ...
Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER measurements. An algorithm for probabilistically estimating AER was developed based on the Lawrence Berkeley National Laboratory Infiltration model utilizing housing characteristics and meteorological data with adjustment for window opening behavior. The algorithm was evaluated by comparing modeled and measured AERs in four US cities (Los Angeles, CA; Detroit, MI; Elizabeth, NJ; and Houston, TX) inputting study-specific data. The impact on the modeled AER of using publicly available housing data representative of the region for each city was also assessed. Finally, modeled AER based on region-specific inputs was compared with those estimated using literature-based distributions. While modeled AERs were similar in magnitude to the measured AERs, they were consistently lower for all cities except Houston. AERs estimated using region-specific inputs were lower than those using study-specific inputs due to differences in window opening probabilities. The algorithm produced more spatially and temporally variable AERs compared with literature-based distributions reflecting within- and between-city differences, helping reduce error in estimates of air pollutant exposure. Published in the Journal of
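The LBL infiltration model referenced here has the general form Q = A_L * sqrt(f_s^2 |ΔT| + f_w^2 U^2), with AER = Q / V; the stack and wind coefficients below are illustrative placeholders, not the calibrated, house-geometry- and shielding-dependent values the actual model uses.

```python
import math

def lbl_aer(leakage_area_m2, volume_m3, delta_t_K, wind_ms,
            f_stack=0.12, f_wind=0.132):
    """Air exchange rate (1/h) from the LBL infiltration model form:

        Q = A_L * sqrt(f_stack^2 * |dT| + f_wind^2 * U^2)
        AER = 3600 * Q / V

    The default coefficients are illustrative placeholders only.
    """
    q = leakage_area_m2 * math.sqrt(f_stack**2 * abs(delta_t_K)
                                    + f_wind**2 * wind_ms**2)
    return 3600.0 * q / volume_m3

# A leakier house, a larger indoor-outdoor temperature difference,
# or more wind all raise the modeled AER.
print(round(lbl_aer(0.05, 400.0, 20.0, 4.0), 3))  # → 0.339
```

A probabilistic version, as in the abstract, would draw leakage area, meteorology, and window-opening behavior from distributions and propagate them through this calculation.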
Model-based reasoning in the physics laboratory: Framework and initial results
NASA Astrophysics Data System (ADS)
Zwickl, Benjamin M.; Hu, Dehui; Finkelstein, Noah; Lewandowski, H. J.
2015-12-01
[This paper is part of the Focused Collection on Upper Division Physics Courses.] We review and extend existing frameworks on modeling to develop a new framework that describes model-based reasoning in introductory and upper-division physics laboratories. Constructing and using models are core scientific practices that have gained significant attention within K-12 and higher education. Although modeling is a broadly applicable process, within physics education, it has been preferentially applied to the iterative development of broadly applicable principles (e.g., Newton's laws of motion in introductory mechanics). A significant feature of the new framework is that measurement tools (in addition to the physical system being studied) are subjected to the process of modeling. Think-aloud interviews were used to refine the framework and demonstrate its utility by documenting examples of model-based reasoning in the laboratory. When applied to the think-aloud interviews, the framework captures and differentiates students' model-based reasoning and helps identify areas of future research. The interviews showed how students productively applied similar facets of modeling to the physical system and measurement tools: construction, prediction, interpretation of data, identification of model limitations, and revision. Finally, we document students' challenges in explicitly articulating assumptions when constructing models of experimental systems and further challenges in model construction due to students' insufficient prior conceptual understanding. A modeling perspective reframes many of the seemingly arbitrary technical details of measurement tools and apparatus as an opportunity for authentic and engaging scientific sense making.
Wavelet-based multiscale performance analysis: An approach to assess and improve hydrological models
NASA Astrophysics Data System (ADS)
Rathinasamy, Maheswaran; Khosa, Rakesh; Adamowski, Jan; ch, Sudheer; Partheepan, G.; Anand, Jatin; Narsimlu, Boini
2014-12-01
The temporal dynamics of hydrological processes are spread across different time scales and, as such, the performance of hydrological models cannot be estimated reliably from global performance measures that assign a single number to the fit of a simulated time series to an observed reference series. Accordingly, it is important to analyze model performance at different time scales. Wavelets have been used extensively in the area of hydrological modeling for multiscale analysis, and have been shown to be very reliable and useful in understanding dynamics across time scales and as these evolve in time. In this paper, a wavelet-based multiscale performance measure for hydrological models is proposed and tested (i.e., Multiscale Nash-Sutcliffe Criteria and Multiscale Normalized Root Mean Square Error). The main advantage of this method is that it provides a quantitative measure of model performance across different time scales. In the proposed approach, model and observed time series are decomposed using the Discrete Wavelet Transform (known as the à trous wavelet transform), and performance measures of the model are obtained at each time scale. The applicability of the proposed method was explored using various case studies, both real and synthetic. The synthetic case studies included various kinds of errors (e.g., timing error, under- and over-prediction of high and low flows) in outputs from a hydrologic model. The real-world case studies included simulation results of both the process-based Soil and Water Assessment Tool (SWAT) model and statistical models, namely the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods. For the SWAT model, data from the Wainganga and Sind Basins (India) were used, while for the Wavelet-Volterra, ANN, and ARMA models, data from the Cauvery River Basin (India) and Fraser River (Canada) were used.
The study also explored the effect of the choice of wavelet on multiscale model evaluation. It was found that the proposed wavelet-based performance measures, namely the MNSC (Multiscale Nash-Sutcliffe Criteria) and MNRMSE (Multiscale Normalized Root Mean Square Error), are more reliable measures than traditional performance measures such as the Nash-Sutcliffe Criteria (NSC), Root Mean Square Error (RMSE), and Normalized Root Mean Square Error (NRMSE). Further, the proposed methodology can be used to: i) compare different hydrological models (both physical and statistical), and ii) help in model calibration.
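The multiscale idea can be sketched compactly: decompose both series scale by scale, then score each scale separately. The Haar-style à trous filter below is an illustrative stand-in for the B3-spline filter typically used with the à trous transform, and `multiscale_nse` is a simplified reading of the proposed MNSC, not the authors' exact formulation.

```python
def atrous_details(x, levels=3):
    """A trous decomposition with a simple Haar-style dilated low-pass filter
    (circular boundary); returns detail series per scale plus the final
    approximation. Illustrative stand-in for the usual B3-spline filter."""
    c = [float(v) for v in x]
    n = len(c)
    out = []
    for j in range(levels):
        step = 2 ** j                                     # filter dilation
        smooth = [0.5 * (c[t] + c[(t - step) % n]) for t in range(n)]
        out.append([a - b for a, b in zip(c, smooth)])    # detail at scale j
        c = smooth
    out.append(c)                                         # final approximation
    return out

def nse(obs, sim):
    """Nash-Sutcliffe efficiency for one series."""
    mean_o = sum(obs) / len(obs)
    denom = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - sum((o - s) ** 2 for o, s in zip(obs, sim)) / denom

def multiscale_nse(obs, sim, levels=3):
    """NSE evaluated scale by scale (a simplified MNSC-style measure)."""
    return [nse(o, s) for o, s in zip(atrous_details(obs, levels),
                                      atrous_details(sim, levels))]
```

A perfect simulation scores 1 at every scale; timing or low-flow errors degrade specific scales rather than the single global score.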
NASA Astrophysics Data System (ADS)
Kong, Changduk; Lim, Semyeong; Kim, Keunwoo
2013-03-01
Neural networks are widely used in engine fault diagnostic systems because of their good learning performance, but they suffer from low accuracy and the long learning time needed to build a learning database. This work inversely builds a base performance model of a turboprop engine for a high-altitude-operation UAV from measured performance data, and proposes a fault diagnostic system that combines the base performance model with artificial intelligence methods such as fuzzy logic and neural networks. A performance model for each real engine, named the base performance model because it can simulate that engine's as-new performance, is built inversely from its performance test data; condition monitoring of each engine can therefore be carried out more precisely through comparison with measured performance data. The proposed diagnostic system first identifies the faulted components using fuzzy logic, and then quantifies the faults of the identified components using neural networks trained on a fault-learning database generated from the developed base performance model. The feed-forward back-propagation (FFBP) algorithm is used to learn the measured performance data of the faulted components. For ease of use, the proposed diagnostic program is implemented as a MATLAB GUI.
NASA Astrophysics Data System (ADS)
Slater, L. D.; Robinson, J.; Weller, A.; Keating, K.; Robinson, T.; Parker, B. L.
2017-12-01
Geophysical length scales determined from complex conductivity (CC) measurements can be used to estimate permeability k when the electrical formation factor F describing the ratio between tortuosity and porosity is known. Two geophysical length scales have been proposed: [1] the imaginary conductivity σ" normalized by the specific polarizability cp; [2] the time constant τ multiplied by a diffusion coefficient D+. The parameters cp and D+ account for the control of fluid chemistry and/or varying mineralogy on the geophysical length scale. We evaluated the predictive capability of two recently presented CC permeability models: [1] an empirical formulation based on σ"; [2] a mechanistic formulation based on τ. The performance of the CC models was evaluated against measured permeability; this performance was also compared against that of well-established k estimation equations that use geometric length scales to represent the pore scale properties controlling fluid flow. Both CC models predict permeability within one order of magnitude for a database of 58 sandstone samples, with the exception of those samples characterized by high pore volume normalized surface area Spor and more complex mineralogy including significant dolomite. Variations in cp and D+ likely contribute to the poor performance of the models for these high Spor samples. The ultimate value of such geophysical models for permeability prediction lies in their application to field scale geophysical datasets. Two observations favor the implementation of the σ" based model over the τ based model for field-scale estimation: [1] the limited range of variation in cp relative to D+; [2] σ" is readily measured using field geophysical instrumentation (at a single frequency) whereas τ requires broadband spectral measurements that are extremely challenging and time consuming to accurately measure in the field.
However, the need for a reliable estimate of F remains a major obstacle to the field-scale implementation of either of the CC permeability models for k estimation.
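The shape of an empirical σ"-based permeability law can be sketched as a power-law in the formation factor and imaginary conductivity. The form and the constants `a`, `b`, `c` below are illustrative placeholders, not the fitted values from the study.

```python
def k_from_sigma_imag(F, sigma_imag, a=1.0e-4, b=1.1, c=2.3):
    """Hedged sketch of an empirical complex-conductivity permeability law of
    the form k = a / (F**b * sigma_imag**c). The constants a, b, c are
    illustrative placeholders, not the study's fitted parameters."""
    return a / (F ** b * sigma_imag ** c)
```

The qualitative behavior matches the physics: permeability estimates fall as either the formation factor or the imaginary conductivity (i.e., the surface-area-related polarization) rises.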
The Application of FIA-based Data to Wildlife Habitat Modeling: A Comparative Study
Thomas C. Edwards, Jr.; Gretchen G. Moisen; Tracey S. Frescino; Randall J. Schultz
2005-01-01
We evaluated the capability of two types of models, one based on spatially explicit variables derived from FIA data and one using so-called traditional habitat evaluation methods, for predicting the presence of cavity-nesting bird habitat in Fishlake National Forest, Utah. Both models performed equally well in measures of predictive accuracy, with the FIA-based model...
Method for Real-Time Model Based Structural Anomaly Detection
NASA Technical Reports Server (NTRS)
Urnes, James M., Sr. (Inventor); Smith, Timothy A. (Inventor); Reichenbach, Eric Y. (Inventor)
2015-01-01
A system and methods for real-time model based vehicle structural anomaly detection are disclosed. A real-time measurement corresponding to a location on a vehicle structure during an operation of the vehicle is received, and the real-time measurement is compared to expected operation data for the location to provide a modeling error signal. The statistical significance of the modeling error signal is calculated to provide an error significance, and the persistence of the error significance is determined. A structural anomaly is indicated if the persistence exceeds a persistence threshold value.
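The described chain (modeling error, significance test, persistence check) can be sketched as follows. The z-score significance test and both threshold defaults are illustrative choices, not the patented implementation.

```python
from collections import deque

class AnomalyDetector:
    """Sketch of the described flow: modeling error -> statistical
    significance -> persistence check. Thresholds are illustrative."""

    def __init__(self, sigma, z_thresh=3.0, persist_n=5):
        self.sigma = sigma                   # expected std of the modeling error
        self.z_thresh = z_thresh             # significance threshold (z-score)
        self.window = deque(maxlen=persist_n) # persistence window

    def update(self, measured, expected):
        error = measured - expected                       # modeling error signal
        significant = abs(error) / self.sigma > self.z_thresh
        self.window.append(significant)
        # indicate a structural anomaly only if significance persists across
        # the entire window
        return len(self.window) == self.window.maxlen and all(self.window)
```

A single large error does not trip the detector; only sustained significance does, which is the point of the persistence threshold.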
Jiao, Y; Chen, R; Ke, X; Cheng, L; Chu, K; Lu, Z; Herskovits, E H
2011-01-01
Autism spectrum disorder (ASD) is a neurodevelopmental disorder, of which Asperger syndrome and high-functioning autism are subtypes. Our goals are: 1) to determine whether a diagnostic model based on single-nucleotide polymorphisms (SNPs), brain regional thickness measurements, or brain regional volume measurements can distinguish Asperger syndrome from high-functioning autism; and 2) to compare the SNP, thickness, and volume-based diagnostic models. Our study included 18 children with ASD: 13 subjects with high-functioning autism and 5 subjects with Asperger syndrome. For each child, we obtained 25 SNPs for 8 ASD-related genes; we also computed regional cortical thicknesses and volumes for 66 brain structures, based on structural magnetic resonance (MR) examination. To generate diagnostic models, we employed five machine-learning techniques: decision stump, alternating decision trees, multi-class alternating decision trees, logistic model trees, and support vector machines. For SNP-based classification, three decision-tree-based models performed better than the other two machine-learning models. The performance metrics for the three decision-tree-based models were similar: decision stump was modestly better than the other two methods, with accuracy = 90%, sensitivity = 0.95 and specificity = 0.75. All thickness- and volume-based diagnostic models performed poorly. The SNP-based diagnostic models were superior to those based on thickness and volume. For SNP-based classification, rs878960 in GABRB3 (gamma-aminobutyric acid A receptor, beta 3) was selected by all tree-based models. Our analysis demonstrated that SNP-based classification was more accurate than morphometry-based classification in ASD subtype classification. Also, we found that one SNP--rs878960 in GABRB3--distinguishes Asperger syndrome from high-functioning autism.
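The reported accuracy, sensitivity, and specificity follow from standard confusion-matrix definitions; a minimal helper is sketched below (the counts in the usage are hypothetical, not the study's).

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts:
    tp/fp = true/false positives, tn/fn = true/false negatives."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
    }
```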
Araki, Tadashi; Kumar, P Krishna; Suri, Harman S; Ikeda, Nobutaka; Gupta, Ajay; Saba, Luca; Rajan, Jeny; Lavra, Francesco; Sharma, Aditya M; Shafique, Shoaib; Nicolaides, Andrew; Laird, John R; Suri, Jasjit S
2016-07-01
The degree of stenosis in the carotid artery can be predicted using automated carotid lumen diameter (LD) measured from B-mode ultrasound images. Systolic velocity-based methods for measurement of LD are subjective. With the advancement of high resolution imaging, image-based methods have started to emerge. However, they require robust image analysis for accurate LD measurement. This paper presents two different algorithms for automated segmentation of the lumen borders in carotid ultrasound images. Both algorithms are modeled as a two stage process. Stage one consists of a global-based model using a scale-space framework for the extraction of the region of interest. This stage is common to both algorithms. Stage two is modeled using a local-based strategy that extracts the lumen interfaces. At this stage, algorithm-1 is modeled as a region-based strategy using a classification framework, whereas algorithm-2 is modeled as a boundary-based approach that uses the level set framework. Two sets of databases (DB), a Japan DB (JDB) (202 patients, 404 images) and a Hong Kong DB (HKDB) (50 patients, 300 images), were used in this study. Two trained neuroradiologists performed manual LD tracings. The mean automated LD measured was 6.35 ± 0.95 mm for JDB and 6.20 ± 1.35 mm for HKDB. The precision-of-merit was 97.4% and 98.0% w.r.t. the two manual tracings for JDB, and 99.7% and 97.9% w.r.t. the two manual tracings for HKDB. Statistical tests such as ANOVA, Chi-Squared, T-test, and Mann-Whitney test were conducted to show the stability and reliability of the automated techniques.
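One common way to define a precision-of-merit against a manual tracing is sketched below; whether the paper uses exactly this formula is an assumption, and the diameters in the usage are illustrative.

```python
def precision_of_merit(auto_mm, manual_mm):
    """Agreement (%) between an automated and a manual diameter measurement:
    PoM = 100 * (1 - |auto - manual| / manual). One common definition; the
    paper's exact formula may differ."""
    return 100.0 * (1.0 - abs(auto_mm - manual_mm) / manual_mm)
```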
Study of helicopter roll control effectiveness criteria
NASA Technical Reports Server (NTRS)
Heffley, Robert K.; Bourne, Simon M.; Curtiss, Howard C., Jr.; Hindson, William S.; Hess, Ronald A.
1986-01-01
A study of helicopter roll control effectiveness based on closed-loop task performance measurement and modeling is presented. Roll control criteria are based on task margin, the excess of vehicle task performance capability over the pilot's task performance demand. Appropriate helicopter roll axis dynamic models are defined for use with analytic models for task performance. Both near-earth and up-and-away large-amplitude maneuvering phases are considered. The results of in-flight and moving-base simulation measurements are presented to support the roll control effectiveness criteria offered. This volume contains the theoretical analysis, simulation results, and criteria development.
Wang, Mian; Chen, Ronald C; Usinger, Deborah S; Reeve, Bryce B
2017-11-01
To evaluate measurement invariance (phone interview vs computer self-administered survey) of 15 PROMIS measures completed by a population-based cohort of localized prostate cancer survivors. Participants were part of the North Carolina Prostate Cancer Comparative Effectiveness and Survivorship Study. Of the 952 men who took the phone interview at 24 months post-treatment, 401 also completed the same survey online using a home computer. Unidimensionality of the PROMIS measures was examined using single-factor confirmatory factor analysis (CFA) models. Measurement invariance testing was conducted using longitudinal CFA via a model comparison approach. For strongly or partially strongly invariant measures, changes in the latent factors and factor autocorrelations were also estimated and tested. Six measures (sleep disturbance, sleep-related impairment, diarrhea, illness impact-negative, illness impact-positive, and global satisfaction with sex life) had locally dependent items, and therefore model modifications had to be made on these domains prior to measurement invariance testing. Overall, seven measures achieved strong invariance (all items had equal loadings and thresholds), and four measures achieved partial strong invariance (each measure had one item with unequal loadings and thresholds). Three measures (pain interference, interest in sexual activity, and global satisfaction with sex life) failed to establish configural invariance due to between-mode differences in factor patterns. This study supports the use of phone-based live interviewers in lieu of PC-based assessment (when needed) for many of the PROMIS measures.
Linking Quality and Spending to Measure Value for People with Serious Illness.
Ryan, Andrew M; Rodgers, Phillip E
2018-03-01
Healthcare payment is rapidly evolving to reward value by measuring and paying for quality and spending performance. Rewarding value for the care of seriously ill patients presents unique challenges. To evaluate the state of current efforts to measure and reward value for the care of seriously ill patients, we performed a PubMed search of articles related to (1) measures of spending for people with serious illness and (2) linking spending and quality measures and rewarding performance for the care of people with serious illness. We limited our search to U.S.-based studies published in English between January 1, 1960, and March 31, 2017. We supplemented this search by identifying public programs and other known initiatives that linked quality and spending for the seriously ill and extracted key program elements. Our search related to linking spending and quality measures and rewarding performance for the care of people with serious illness yielded 277 articles. We identified three public programs that currently link measures of quality and spending, or are likely to within the next few years: the Oncology Care Model; the Comprehensive End-Stage Renal Disease Model; and Home Health Value-Based Purchasing. Models that link quality and spending consist of four core components: (1) measuring quality, (2) measuring spending, (3) the payment adjustment model, and (4) the linking/incentive model. We found that current efforts to reward value for seriously ill patients are targeted for specific patient populations, do not broadly encourage the use of palliative care, and have not closely aligned quality and spending measures related to palliative care. We develop recommendations for policymakers and stakeholders about how measures of spending and quality can be balanced in value-based payment programs.
Measurement-based auralization methodology for the assessment of noise mitigation measures
NASA Astrophysics Data System (ADS)
Thomas, Pieter; Wei, Weigang; Van Renterghem, Timothy; Botteldooren, Dick
2016-09-01
The effect of noise mitigation measures is generally expressed by noise levels only, neglecting the listener's perception. In this study, an auralization methodology is proposed that enables an auditive preview of noise abatement measures for road traffic noise, based on the direction dependent attenuation of a priori recordings made with a dedicated 32-channel spherical microphone array. This measurement-based auralization has the advantage that all non-road traffic sounds that create the listening context are present. The potential of this auralization methodology is evaluated through the assessment of the effect of an L-shaped mound. The angular insertion loss of the mound is estimated by using the ISO 9613-2 propagation model, the Pierce barrier diffraction model and the Harmonoise point-to-point model. The realism of the auralization technique is evaluated by listening tests, indicating that listeners had great difficulty in differentiating between a posteriori recordings and auralized samples, which shows the validity of the followed approaches.
Ecological footprint model using the support vector machine technique.
Ma, Haibo; Chang, Wenjuan; Cui, Guangbai
2012-01-01
The per capita ecological footprint (EF) is one of the most widely recognized measures of environmental sustainability. It aims to quantify the Earth's biological resources required to support human activity. In this paper, we summarize relevant previous literature and present five factors that influence per capita EF. These factors are: national gross domestic product (GDP), urbanization (independent of economic development), distribution of income (measured by the Gini coefficient), export dependence (measured by the percentage of exports to total GDP), and service intensity (measured by the percentage of services to total GDP). A new ecological footprint model based on a support vector machine (SVM), a machine-learning method based on the structural risk minimization principle from statistical learning theory, was developed to calculate the per capita EF of 24 nations using data from 123 nations. The calculation accuracy was measured by average absolute error and average relative error; these were 0.004883 and 0.351078%, respectively. Our results demonstrate that the EF model based on SVM has good calculation performance.
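The two reported accuracy measures can be computed as sketched below; the exact normalization (relative error taken against the actual EF) is an assumption consistent with the usual definitions.

```python
def error_metrics(actual, predicted):
    """Average absolute error and average relative error (in percent), the
    two accuracy measures reported for the SVM-based EF model."""
    n = len(actual)
    aae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    are = 100.0 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / n
    return aae, are
```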
Statistical analysis of target acquisition sensor modeling experiments
NASA Astrophysics Data System (ADS)
Deaver, Dawne M.; Moyer, Steve
2015-05-01
The U.S. Army RDECOM CERDEC NVESD Modeling and Simulation Division is charged with the development and advancement of military target acquisition models to estimate expected soldier performance when using all types of imaging sensors. Two elements of sensor modeling are (1) laboratory-based psychophysical experiments used to measure task performance and calibrate the various models and (2) field-based experiments used to verify the model estimates for specific sensors. In both types of experiments, it is common practice to control or measure environmental, sensor, and target physical parameters in order to minimize uncertainty of the physics based modeling. Predicting the minimum number of test subjects required to calibrate or validate the model should be, but is not always, done during test planning. The objective of this analysis is to develop guidelines for test planners which recommend the number and types of test samples required to yield a statistically significant result.
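For predicting the minimum number of test subjects during test planning, a classical two-sample size formula is a reasonable starting point. This is an illustrative planning aid under a normality assumption, not NVESD's actual procedure; the default z-values correspond to a two-sided test at alpha = 0.05 with 80% power.

```python
import math

def n_per_group(sigma, delta, z_alpha=1.96, z_beta=0.84):
    """Subjects per group needed to detect a mean difference delta given
    response standard deviation sigma: n = 2 * ((z_alpha + z_beta) * sigma
    / delta)^2, rounded up. Classical formula; illustrative only."""
    return math.ceil(2.0 * ((z_alpha + z_beta) * sigma / delta) ** 2)
```

Smaller effects (delta) relative to the subject-to-subject variability (sigma) drive the required sample size up quadratically, which is why field validations often need more observers than planned.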
What Is Measured in Mathematics Tests? Construct Validity of Curriculum-Based Mathematics Measures.
ERIC Educational Resources Information Center
Thurber, Robin Schul; Shinn, Mark R.; Smolkowski, Keith
2002-01-01
Mathematics curriculum-based measurement (M-CBM) is one tool that has been developed for formative evaluation in mathematics. This study examines what constructs M-CBM actually measures in the context of a range of other mathematics measures. Results indicated that a two-factor model of mathematics where Computation and Applications were distinct…
Experimental Evaluation of Equivalent-Fluid Models for Melamine Foam
NASA Technical Reports Server (NTRS)
Allen, Albert R.; Schiller, Noah H.
2016-01-01
Melamine foam is a soft porous material commonly used in noise control applications. Many models exist to represent porous materials at various levels of fidelity. This work focuses on rigid frame equivalent fluid models, which represent the foam as a fluid with a complex speed of sound and density. There are several empirical models available to determine these frequency dependent parameters based on an estimate of the material flow resistivity. Alternatively, these properties can be experimentally educed using an impedance tube setup. Since vibroacoustic models are generally sensitive to these properties, this paper assesses the accuracy of several empirical models relative to impedance tube measurements collected with melamine foam samples. Diffuse field sound absorption measurements collected using large test articles in a laboratory are also compared with absorption predictions determined using model-based and measured foam properties. Melamine foam slabs of various thicknesses are considered.
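A representative empirical equivalent-fluid model of the kind assessed is the classic Delany-Bazley relation, which gives the complex characteristic impedance and wavenumber from the flow resistivity alone. Whether the paper evaluates exactly this variant is an assumption; it is one of the standard flow-resistivity-based models.

```python
import cmath

def delany_bazley(f_hz, sigma_flow, rho0=1.21, c0=343.0):
    """Delany-Bazley equivalent-fluid model: complex characteristic impedance
    Zc (Pa*s/m) and wavenumber kc (1/m) of a porous material from its static
    flow resistivity sigma_flow (Pa*s/m^2), at frequency f_hz."""
    X = rho0 * f_hz / sigma_flow          # dimensionless frequency parameter
    Zc = rho0 * c0 * (1 + 0.0571 * X ** -0.754 - 1j * 0.087 * X ** -0.732)
    kc = (2 * cmath.pi * f_hz / c0) * (
        1 + 0.0978 * X ** -0.700 - 1j * 0.189 * X ** -0.595)
    return Zc, kc
```

The negative imaginary parts encode the dissipation that makes the foam absorb sound; an impedance-tube eduction replaces these empirical fits with measured values.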
A High-Resolution Integrated Model of the National Ignition Campaign Cryogenic Layered Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, O. S.; Callahan, D. A.; Cerjan, C. J.
A detailed simulation-based model of the June 2011 National Ignition Campaign (NIC) cryogenic DT experiments is presented. The model is based on integrated hohlraum-capsule simulations that utilize the best available models for the hohlraum wall, ablator, and DT equations of state and opacities. The calculated radiation drive was adjusted by changing the input laser power to match the experimentally measured shock speeds, shock merger times, peak implosion velocity, and bangtime. The crossbeam energy transfer model was tuned to match the measured time-dependent symmetry. Mid-mode mix was included by directly modeling the ablator and ice surface perturbations up to mode 60. Simulated experimental values were extracted from the simulation and compared against the experiment. The model adjustments brought much of the simulated data into closer agreement with the experiment, with the notable exception of the measured yields, which were 15-40% of the calculated yields.
Towards an Integrated Model of the NIC Layered Implosions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, O S; Callahan, D A; Cerjan, C J
A detailed simulation-based model of the June 2011 National Ignition Campaign (NIC) cryogenic DT experiments is presented. The model is based on integrated hohlraum-capsule simulations that utilize the best available models for the hohlraum wall, ablator, and DT equations of state and opacities. The calculated radiation drive was adjusted by changing the input laser power to match the experimentally measured shock speeds, shock merger times, peak implosion velocity, and bangtime. The crossbeam energy transfer model was tuned to match the measured time-dependent symmetry. Mid-mode mix was included by directly modeling the ablator and ice surface perturbations up to mode 60. Simulated experimental values were extracted from the simulation and compared against the experiment. The model adjustments brought much of the simulated data into closer agreement with the experiment, with the notable exception of the measured yields, which were 15-45% of the calculated yields.
Development of an irrigation scheduling software based on model predicted crop water stress
USDA-ARS?s Scientific Manuscript database
Modern irrigation scheduling methods are generally based on sensor-monitored soil moisture regimes rather than crop water stress which is difficult to measure in real-time, but can be computed using agricultural system models. In this study, an irrigation scheduling software based on RZWQM2 model pr...
Application of an IRT Polytomous Model for Measuring Health Related Quality of Life
ERIC Educational Resources Information Center
Tejada, Antonio J. Rojas; Rojas, Oscar M. Lozano
2005-01-01
Background: The Item Response Theory (IRT) has advantages for measuring Health Related Quality of Life (HRQOL) as opposed to the Classical Tests Theory (CTT). Objectives: To present the results of the application of a polytomous model based on IRT, specifically, the Rating Scale Model (RSM), to measure HRQOL with the EORTC QLQ-C30. Methods: 103…
Diagnostic, pharmacy-based, and self-reported health measures in risk equalization models.
Stam, Pieter J A; van Vliet, René C J A; van de Ven, Wynand P M M
2010-05-01
Current research on the added value of self-reported health measures for risk equalization modeling does not include all types of self-reported health measures, compares them with only a limited set of medically diagnosed or pharmacy-based diseases, and/or is limited to specific populations of high-risk individuals. The objective of our study is to determine the predictive power of all types of self-reported health measures for prospective modeling of health care expenditures in a general population of adult Dutch sickness fund enrollees, given that pharmacy and diagnostic data from administrative records are already included in the risk equalization formula. We used 4 models of 2002 total, inpatient, and outpatient expenditures to evaluate the separate and combined predictive ability of 2 kinds of data: (1) Pharmacy-based (PCGs) and Diagnosis-based (DCGs) Cost Groups and (2) summarized self-reported health information. Model performance is measured at the total population level using R2 and mean absolute prediction error, and also by examining mean discrepancies between model-predicted and actual expenditures (i.e., expected over- or undercompensation) for members of potentially "mispriced" subgroups. These subgroups are identified by self-reports from prior-year health surveys and by utilization and expenditure data from 5 preceding years. Subjects were 18,617 respondents to a health survey held among a stratified sample of adult members of the largest Dutch sickness fund in 2002, with an overrepresentation of people in poor health. The data were extracted from a claims database and a health survey. The claims-based data are the outcomes of total, inpatient, and outpatient annualized expenditures in 2002; age, gender, PCGs, and DCGs in 2001; and health care expenditures and hospitalizations during the years 1997 to 2001.
The SF-36, Organization for Economic Cooperation and Development items, and long-term diseases and conditions were collected by a special purpose health survey conducted in the last quarter of 2001. Out-of-sample R2 equals 17.2%, 2.6%, and 32.4% for the models of total, inpatient and outpatient expenditures including PCGs, DCGs, and self-reported health measures. Self-reported health measures contribute less to predictive power than PCGs and DCGs. PCGs and DCGs also predict better than self-reported health measures for people with top 25% total expenditures or hospitalizations in each year during a 5-year period. On the other hand, self-reported health measures are better predictors than PCGs and DCGs for people without any top 25% expenditures during the 5-year period, for switchers, and for most subgroups of relatively unhealthy people defined by self-reported health measures. Among the set of self-reported health measures, the SF-36 adds most to predictive power in terms of R2, mean absolute prediction error, and for almost all studied subgroups. It is concluded that the self-reported health measures make an independent contribution to forecasting health care expenditures, even if the prediction model already includes diagnostic and pharmacy-based information currently used in Dutch risk equalization models.
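The two population-level performance measures used (out-of-sample R2 and mean absolute prediction error) can be computed as below; this is the standard definition of each, applied to model-predicted versus actual expenditures.

```python
def r2_and_mae(actual, predicted):
    """Out-of-sample R^2 and mean absolute prediction error for a set of
    predicted vs actual expenditures."""
    n = len(actual)
    mean_a = sum(actual) / n
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    return 1.0 - ss_res / ss_tot, mae
```

Subgroup-level over- or undercompensation is then simply the mean of `predicted - actual` within each subgroup.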
NASA Astrophysics Data System (ADS)
Liu, Jianzhong; Kern, Petra S.; Gerberick, G. Frank; Santos-Filho, Osvaldo A.; Esposito, Emilio X.; Hopfinger, Anton J.; Tseng, Yufeng J.
2008-06-01
In previous studies we have developed categorical QSAR models for predicting skin-sensitization potency based on 4D-fingerprint (4D-FP) descriptors and in vivo murine local lymph node assay (LLNA) measures. Only 4D-FP derived from the ground state (GMAX) structures of the molecules were used to build the QSAR models. In this study we have generated 4D-FP descriptors from the first excited state (EMAX) structures of the molecules. The GMAX, EMAX, and the combined ground and excited state 4D-FP descriptors (GEMAX) were employed in building categorical QSAR models. Logistic regression (LR) and partial least squares coupled logistic regression (PLS-CLR), found to be effective model-building methods for the LLNA skin-sensitization measures in our previous studies, were used again in this study. This also permitted comparison of the prior ground state models to those involving first excited state 4D-FP descriptors. Three types of categorical QSAR models were constructed for each of the GMAX, EMAX and GEMAX datasets: a binary model (2-state), an ordinal model (3-state) and a binary-binary model (two-2-state). No significant differences exist among the LR 2-state models constructed for each of the three datasets. However, the PLS-CLR 3-state and 2-state models based on the EMAX and GEMAX datasets have higher predictivity than those constructed using only the GMAX dataset. These EMAX and GEMAX categorical models are also more significant and predictive than corresponding models built in our previous QSAR studies of LLNA skin-sensitization measures.
Physics-Based Modeling and Measurement of High-Flux Condensation Heat Transfer
2011-09-01
Final report (01-10-2008 to 30-09-2011) for ONR Contract No. N00014-08-1-1139, "Physics-Based Modeling and Measurement of High-Flux Condensation Heat Transfer," by Prof. Issam Mudawar, Sung-Min Kim, and Joseph Kim, Boiling and Two-Phase Flow Laboratory. Keywords: phase change, condensation, electronics cooling, micro-channel, high-flux.
ERIC Educational Resources Information Center
Stockdale, Susan L.; Brockett, Ralph G.
2011-01-01
The purpose of this study was to develop a reliable and valid instrument to measure self-directedness in learning among college students based on an operationalization of the personal responsibility orientation (PRO) model of self-direction in learning. The resultant 25-item Personal Responsibility Orientation to Self-Direction in Learning Scale…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candy, J V; Chambers, D H; Breitfeller, E F
2010-03-02
The detection of radioactive contraband is a critical problem in maintaining national security for any country. Photon emissions from threat materials challenge both detection and measurement technologies, especially when the materials are concealed by various types of shielding, which complicates the transport physics significantly. This problem becomes especially important when ships are intercepted by U.S. Coast Guard harbor patrols searching for contraband. The development of a sequential model-based processor that captures both the underlying transport physics of gamma-ray emissions, including Compton scattering, and the measurement of photon energies offers a physics-based approach to this challenging problem. The inclusion of a basic radionuclide representation of absorbed/scattered photons at a given energy, along with interarrival times, is used to extract the physics information available from the noisy measurements of the portable radiation detection systems used to interdict contraband. It is shown that this physics representation can incorporate scattering physics, leading to an 'extended' model-based structure that can be used to develop an effective sequential detection technique. The resulting model-based processor is shown to perform quite well based on data obtained from a controlled experiment.
Measurement system and model for simultaneously measuring 6DOF geometric errors.
Zhao, Yuqiong; Zhang, Bin; Feng, Qibo
2017-09-04
A measurement system to simultaneously measure six degree-of-freedom (6DOF) geometric errors is proposed. The measurement method is based on a combination of mono-frequency laser interferometry and laser fiber collimation. A simpler and more integrated optical configuration is designed. To compensate for the measurement errors introduced by error crosstalk, element fabrication error, laser beam drift, and nonparallelism of the two measurement beams, a unified measurement model, which can improve the measurement accuracy, is deduced and established using the ray-tracing method. A numerical simulation using the optical design software Zemax is conducted, and the results verify the correctness of the model. Several experiments are performed to demonstrate the feasibility and effectiveness of the proposed system and measurement model.
Conceptual astronomy: A novel model for teaching postsecondary science courses
NASA Astrophysics Data System (ADS)
Zeilik, Michael; Schau, Candace; Mattern, Nancy; Hall, Shannon; Teague, Kathleen W.; Bisard, Walter
1997-10-01
An innovative, conceptually based instructional model for teaching large undergraduate astronomy courses was designed, implemented, and evaluated in the Fall 1995 semester. This model was based on cognitive and educational theories of knowledge and, we believe, is applicable to other large postsecondary science courses. Major components were: (a) identification of the basic important concepts and their interrelationships that are necessary for connected understanding of astronomy in novice students; (b) use of these concepts and their interrelationships throughout the design, implementation, and evaluation stages of the model; (c) identification of students' prior knowledge and misconceptions; and (d) implementation of varied instructional strategies targeted toward encouraging conceptual understanding in students (i.e., instructional concept maps, cooperative small group work, homework assignments stressing concept application, and a conceptually based student assessment system). Evaluation included the development and use of three measures of conceptual understanding and one of attitudes toward studying astronomy. Over the semester, students showed very large increases in their understanding as assessed by a conceptually based multiple-choice measure of misconceptions, a select-and-fill-in concept map measure, and a relatedness-ratings measure. Attitudes, which were slightly positive before the course, changed slightly in a less favorable direction.
Moving Model Test of High-Speed Train Aerodynamic Drag Based on Stagnation Pressure Measurements
Yang, Mingzhi; Du, Juntao; Huang, Sha; Zhou, Dan
2017-01-01
A moving model test method based on stagnation pressure measurements is proposed to measure the train aerodynamic drag coefficient. Because the front tip of a high-speed train carries a high-pressure region with a stagnation point at its center, the pressure measured by a sensor tube at the stagnation point equals the dynamic pressure, from which the train velocity is obtained. The first derivative of the train velocity is taken to calculate the acceleration of the train model, which is ejected by the moving model system and coasts without additional power. According to Newton's second law, the aerodynamic drag coefficient can then be resolved through many tests at different train speeds selected within a relatively narrow range. Comparisons are conducted with wind tunnel tests and numerical simulations, and good agreement is obtained, with differences of less than 6.1%. Therefore, the moving model test method proposed in this paper is feasible and reliable. PMID:28095441
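The chain of reasoning in the abstract (stagnation pressure to velocity, deceleration to drag coefficient via Newton's second law) can be sketched as follows; all numerical values are hypothetical, not from the paper:

```python
import math

RHO = 1.225  # air density, kg/m^3 (sea-level standard)

def speed_from_stagnation(q_dynamic):
    """Train speed from the measured dynamic pressure q = 0.5*rho*v^2."""
    return math.sqrt(2.0 * q_dynamic / RHO)

def drag_coefficient(mass, decel, speed, area):
    """Newton's second law for the coasting model:
    m*a = 0.5*rho*v^2*A*Cd  =>  Cd = 2*m*a / (rho*v^2*A)."""
    return 2.0 * mass * decel / (RHO * speed ** 2 * area)

# Hypothetical model-scale values
v = speed_from_stagnation(6125.0)  # Pa of dynamic pressure -> ~100 m/s
cd = drag_coefficient(mass=10.0, decel=7.5, speed=v, area=0.01)
```

In the actual method the deceleration comes from differentiating the velocity history over many runs, rather than from a single pair of values as here.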
Lee, Karl K.; Risley, John C.
2002-03-19
Precipitation-runoff models, base-flow-separation techniques, and stream gain-loss measurements were used to study recharge and ground-water surface-water interaction as part of a study of the ground-water resources of the Willamette River Basin. The study was a cooperative effort between the U.S. Geological Survey and the State of Oregon Water Resources Department. Precipitation-runoff models were used to estimate the water budget of 216 subbasins in the Willamette River Basin. The models were also used to compute long-term average recharge and base flow. Recharge and base-flow estimates will be used as input to a regional ground-water flow model, within the same study. Recharge and base-flow estimates were made using daily streamflow records. Recharge estimates were made at 16 streamflow-gaging-station locations and were compared to recharge estimates from the precipitation-runoff models. Base-flow separation methods were used to identify the base-flow component of streamflow at 52 currently operated and discontinued streamflow-gaging-station locations. Stream gain-loss measurements were made on the Middle Fork Willamette, Willamette, South Yamhill, Pudding, and South Santiam Rivers, and were used to identify and quantify gaining and losing stream reaches both spatially and temporally. These measurements provide further understanding of ground-water/surface-water interactions.
Displacement monitoring and modelling of a high-speed railway bridge using C-band Sentinel-1 data
NASA Astrophysics Data System (ADS)
Huang, Qihuan; Crosetto, Michele; Monserrat, Oriol; Crippa, Bruno
2017-06-01
Bridge displacement monitoring is one of the key components of bridge structural health monitoring. Traditional methods, usually based on limited sets of sensors mounted on a given bridge, collect point-like deformation information and have the disadvantage of providing incomplete displacement information. In this paper, a Persistent Scatterer Interferometry (PSI) approach is used to monitor the displacements of the Nanjing Dashengguan Yangtze River high-speed railway bridge. Twenty-nine (29) European Space Agency Sentinel-1A images, acquired from April 25, 2015 to August 5, 2016, were used in the PSI analysis. A total of 1828 measurement points were selected on the bridge. The results show a maximum longitudinal displacement of about 150 mm on each side of the bridge. The measured displacements showed a strong correlation with the environmental temperature at the time the images used were acquired, indicating that they were due to thermal expansion of the bridge. At each pier, a regression model based on the PSI-measured displacements was compared with a model based on in-situ measurements. The good agreement of these models demonstrates the capability of the PSI technique to monitor long-span railway bridge displacements. By comparing the modelled displacements and dozens of PSI measurements, we show how the performance of movable bearings can be evaluated. The high density of the PSI measurement points is advantageous for the health monitoring of the entire bridge.
Borozan, Ivan; Watt, Stuart; Ferretti, Vincent
2015-05-01
Alignment-based sequence similarity searches, while accurate for some types of sequences, can produce incorrect results when used on more divergent but functionally related sequences that have undergone the sequence rearrangements observed in many bacterial and viral genomes. Here, we propose a classification model that exploits the complementary nature of alignment-based and alignment-free similarity measures with the aim of improving the accuracy with which DNA and protein sequences are characterized. Our model classifies sequences using a combined sequence similarity score calculated by adaptively weighting the contribution of different sequence similarity measures. Weights are determined independently for each sequence in the test set and reflect the discriminatory ability of individual similarity measures in the training set. Because the similarity between some sequences is determined more accurately with one type of measure rather than another, our classifier allows different sets of weights to be associated with different sequences. Using five different similarity measures, we show that our model significantly improves the classification accuracy over the current composition- and alignment-based models, when predicting the taxonomic lineage for both short viral sequence fragments and complete viral sequences. We also show that our model can be used effectively for the classification of reads from a real metagenome dataset as well as protein sequences. All the datasets and the code used in this study are freely available at https://collaborators.oicr.on.ca/vferretti/borozan_csss/csss.html. Contact: ivan.borozan@gmail.com. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
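The adaptive weighting idea can be illustrated with a minimal sketch; the measures, discriminability scores, and normalization below are hypothetical placeholders, not the authors' actual formulation:

```python
def normalize(raw):
    """Turn raw per-measure discriminability scores (estimated on a
    training set) into weights that sum to one."""
    total = sum(raw)
    return [r / total for r in raw]

def combined_score(similarities, weights):
    """Weighted combination of normalized similarity measures for one
    test sequence; weights are per-sequence."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * s for w, s in zip(weights, similarities))

# Hypothetical: three measures (alignment score, k-mer composition,
# an information-based distance) with training-set discriminability
weights = normalize([0.9, 0.6, 0.3])
score = combined_score([0.8, 0.5, 0.4], weights)
```

Because the weights are recomputed per test sequence, a sequence whose taxonomy is better resolved by composition than by alignment simply receives a different weight vector.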
A modeling approach for aerosol optical depth analysis during forest fire events
NASA Astrophysics Data System (ADS)
Aube, Martin P.; O'Neill, Normand T.; Royer, Alain; Lavoue, David
2004-10-01
Measurements of aerosol optical depth (AOD) are important indicators of aerosol particle behavior. Up to now, the two standard techniques used for retrieving AOD are: (i) sun photometry, which provides measurements of high temporal frequency but sparse spatial coverage, and (ii) satellite-based approaches such as DDV (Dense Dark Vegetation) inversion algorithms, which yield AOD over dark targets in remotely sensed imagery. Although the latter techniques allow AOD retrieval over appreciable spatial domains, the irregular spatial pattern of dark targets and the typically low repeat frequencies of imaging satellites exclude the acquisition of AOD databases on a continuous spatio-temporal basis. We attempt to fill gaps in spatio-temporal AOD measurements using a new assimilation methodology that links AOD measurements to the predictions of a particulate matter transport model. This modelling package (AODSEM V2.0, for Aerosol Optical Depth Spatio-temporal Evolution Model) uses a size- and aerosol-type-segregated semi-Lagrangian trajectory algorithm driven by analysed meteorological data. Its novelty resides in the fact that the model evolution may be tied to both ground-based and satellite-level AOD measurements, and all physical processes have been optimized to track this important and robust parameter. We applied this methodology to a significant smoke event that occurred over the eastern part of North America in July 2002.
NASA Astrophysics Data System (ADS)
Lingren, Joe; Vanstone, Leon; Hashemi, Kelley; Gogineni, Sivaram; Donbar, Jeffrey; Akella, Maruthi; Clemens, Noel
2016-11-01
This study develops an analytical model for predicting the leading shock of a shock-train in the constant-area isolator section of a Mach 2.2 direct-connect scramjet simulation tunnel. The effective geometry of the isolator is assumed to be a weakly converging duct owing to boundary-layer growth. For a given pressure rise across the isolator, quasi-1D equations for isentropic and normal-shock flows can be used to predict the normal shock location in the isolator. The surface pressure distribution through the isolator was measured during experiments, so both the actual and predicted shock locations can be determined. Three methods of finding the shock-train location are examined: one based on the measured pressure rise, one using a non-physics-based control model, and one using the physics-based analytical model. It is shown that the analytical model performs better than the non-physics-based model in all cases. The analytical model is less accurate than the pressure threshold method but requires significantly less information to compute. In contrast to other methods for predicting shock-train location, this method is relatively accurate and requires as little as a single pressure measurement, which makes it potentially useful for unstart control applications.
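One quasi-1D ingredient of such a model is the normal-shock static pressure ratio, which fixes the pressure rise a normal shock at a given Mach number can support; a minimal sketch of this standard gas-dynamics relation (not the authors' full isolator model):

```python
GAMMA = 1.4  # ratio of specific heats for air

def normal_shock_pressure_ratio(mach):
    """Static pressure ratio p2/p1 across a normal shock:
    p2/p1 = 1 + 2*gamma/(gamma+1) * (M^2 - 1)."""
    return 1.0 + 2.0 * GAMMA / (GAMMA + 1.0) * (mach ** 2 - 1.0)

# At the Mach 2.2 isolator entrance
ratio = normal_shock_pressure_ratio(2.2)
```

Comparing a measured pressure rise against this upper bound, together with the effective converging-duct geometry, is what lets the shock location be backed out from very few pressure measurements.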
Planetary Probe Entry Atmosphere Estimation Using Synthetic Air Data System
NASA Technical Reports Server (NTRS)
Karlgaard, Chris; Schoenenberger, Mark
2017-01-01
This paper develops an atmospheric state estimator based on inertial acceleration and angular rate measurements combined with an assumed vehicle aerodynamic model. The approach utilizes the full navigation state of the vehicle (position, velocity, and attitude) to recast the vehicle aerodynamic model to be a function solely of the atmospheric state (density, pressure, and winds). Force and moment measurements are based on vehicle sensed accelerations and angular rates. These measurements are combined with an aerodynamic model and a Kalman-Schmidt filter to estimate the atmospheric conditions. The new method is applied to data from the Mars Science Laboratory mission, which landed the Curiosity rover on the surface of Mars in August 2012. The results of the new estimation algorithm are compared with results from a Flush Air Data Sensing algorithm based on onboard pressure measurements on the vehicle forebody. The comparison indicates that the new proposed estimation method provides estimates consistent with the air data measurements, without the use of pressure measurements. Implications for future missions such as the Mars 2020 entry capsule are described.
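At the core of any such estimator is the inversion of the drag equation for atmospheric density given the sensed deceleration and an assumed aerodynamic model; a minimal sketch with hypothetical capsule values (the actual Kalman-Schmidt filter estimates pressure and winds as well and handles uncertainty, which this omits):

```python
def density_from_drag(mass, a_drag, speed, cd, area):
    """Invert a = 0.5*rho*v^2*Cd*A/m for the atmospheric density rho,
    given sensed drag acceleration and an assumed aero model (Cd, A)."""
    return 2.0 * mass * a_drag / (cd * area * speed ** 2)

# Hypothetical entry-capsule values (SI units)
rho = density_from_drag(mass=900.0, a_drag=10.0, speed=3000.0,
                        cd=1.5, area=5.0)
```

The navigation state supplies the speed, so the only remaining unknowns in the drag equation are atmospheric, which is exactly the recasting the abstract describes.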
Liu, Xinjie; Liu, Liangyun; Hu, Jiaochan; Du, Shanshan
2017-01-01
The measurement of solar-induced chlorophyll fluorescence (SIF) is a new tool for estimating gross primary production (GPP). Continuous tower-based spectral observations together with flux measurements are an efficient way of linking the SIF to the GPP. Compared to conical observations, hemispherical observations made with a cosine-corrected foreoptic have a much larger field of view and can better match the footprint of the tower-based flux measurements. However, estimating the equivalent radiation transfer path length (ERTPL) for hemispherical observations is more complex than for conical observations, and this is a key problem that needs to be addressed before accurate retrieval of SIF can be made. In this paper, we first modeled the footprint of hemispherical spectral measurements and found that, under convective conditions with light winds, 90% of the total radiation came from an FOV of width 72°, which in turn covered 75.68% of the source area of the flux measurements. In contrast, conical spectral observations covered only 1.93% of the flux footprint. Secondly, using theoretical considerations, we modeled the ERTPL of hemispherical spectral observations made with a cosine-corrected foreoptic and found that the ERTPL was approximately equal to twice the sensor height above the canopy. Finally, the modeled ERTPL was evaluated using a simulated dataset. The ERTPL calculated using the simulated data was about 1.89 times the sensor's height above the target surface, which is quite close to the modeled value. Furthermore, the SIF retrieved from atmospherically corrected spectra using the modeled ERTPL fitted well with the reference values, giving a relative root mean square error of 18.22%. These results show that the modeled ERTPL was reasonable and that this method is applicable to tower-based hemispherical observations of SIF. PMID:28509843
Fricke, Moritz B; Rolfes, Raimund
2015-03-01
An approach for the prediction of underwater noise caused by impact pile driving is described and validated based on in situ measurements. The model is divided into three sub-models. The first sub-model, based on the finite element method, is used to describe the vibration of the pile and the resulting acoustic radiation into the surrounding water and soil column. The mechanical excitation of the pile by the piling hammer is estimated by the second sub-model using an analytical approach which takes the large vertical dimension of the ram into account. The third sub-model is based on the split-step Padé solution of the parabolic equation and targets the long-range propagation up to 20 km. In order to presume realistic environmental properties for the validation, a geoacoustic model is derived from spatially averaged geological information about the investigation area. Although it can be concluded from the validation that the model and the underlying assumptions are appropriate, there are some deviations between modeled and measured results. Possible explanations for the observed errors are discussed.
Method for Pre-Conditioning a Measured Surface Height Map for Model Validation
NASA Technical Reports Server (NTRS)
Sidick, Erkin
2012-01-01
This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also eliminating the existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on the analytical expressions of Zernike polynomials and a power spectral density (PSD) model, such re-sampling does not introduce the aliasing and interpolation errors produced by the conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering methods. Also, this new method automatically eliminates the measurement noise and other measurement errors such as artificial discontinuity. The developmental cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specs on the optical quality of individual optics before they are fabricated, through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match it with the gridded data format of a model validation tool, and (2) eliminating the surface measurement noise or measurement errors such that the resulting surface height map is continuous or smoothly varying. So far, the preferred method for re-sampling a surface map has been two-dimensional interpolation. The main problem with this method is that the same pixel can take different values depending on which interpolation scheme is chosen, such as the "nearest," "linear," "cubic," or "spline" fitting in Matlab.
The conventional, FFT-based spatial filtering method used to eliminate the surface measurement noise or measurement errors can also suffer from aliasing effects. During re-sampling of a surface map, this software preserves the low spatial-frequency characteristic of a given surface map through the use of Zernike-polynomial fit coefficients, and maintains mid- and high-spatial-frequency characteristics of the given surface map by the use of a PSD model derived from the two-dimensional PSD data of the mid- and high-spatial-frequency components of the original surface map. Because this new method creates the new surface map in the desired sampling format from analytical expressions only, it does not encounter any aliasing effects and does not cause any discontinuity in the resultant surface map.
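The underlying decomposition idea, fit an analytic low-order trend, resample it exactly on any grid, and treat the remainder separately, can be sketched in one dimension; here a straight-line fit stands in for the Zernike-polynomial fit used on 2-D surface maps, and all data are hypothetical:

```python
def fit_line(xs, ys):
    """Closed-form least-squares line y = a + b*x; a 1-D stand-in for
    the low-order Zernike-polynomial fit applied to surface maps."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

xs = [0.0, 1.0, 2.0, 3.0]        # sample positions (hypothetical)
heights = [0.0, 1.1, 1.9, 3.2]   # measured profile (hypothetical)
a, b = fit_line(xs, heights)

# Because the trend is analytic, it can be evaluated on any new grid
# without interpolation or aliasing error:
new_grid = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
resampled_trend = [a + b * x for x in new_grid]

# The residual carries the mid/high-spatial-frequency content, which
# the actual method regenerates from a PSD model rather than keeping:
residual = [h - (a + b * x) for x, h in zip(xs, heights)]
```

The 2-D case replaces the line with a Zernike basis and the residual with a PSD-model realization, but the split into an exactly resampleable analytic part plus a statistically characterized remainder is the same.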
Normal and hemiparetic walking
NASA Astrophysics Data System (ADS)
Pfeiffer, Friedrich; König, Eberhard
2013-01-01
The idea of a model-based control of rehabilitation for hemiparetic patients requires efficient models of human walking, healthy as well as hemiparetic. Such models are presented in this paper. They include 42 degrees of freedom and, in particular, allow the evaluation of kinetic magnitudes, with the goal of deriving measures for the severity of hemiparesis. As far as feasible, the simulations have been compared successfully with measurements, thus improving the confidence level for an application in clinical practice. The paper is mainly based on the dissertation [19].
León-Roque, Noemí; Abderrahim, Mohamed; Nuñez-Alejos, Luis; Arribas, Silvia M; Condezo-Hoyos, Luis
2016-12-01
Several procedures are currently used to assess the fermentation index (FI) of cocoa beans (Theobroma cacao L.) for quality control. However, all of them present several drawbacks. The aim of the present work was to develop and validate a simple image-based quantitative procedure using color measurement and artificial neural networks (ANNs). ANN models based on color measurements were tested to predict the FI of fermented cocoa beans. The RGB values were measured from the surface and center regions of fermented beans in images obtained by camera and desktop scanner. The FI was defined as the ratio of total free amino acids in fermented versus non-fermented samples. The ANN model that included RGB color measurements of the fermented cocoa surface and the R/G ratio in alkaline extracts of cocoa beans was able to predict FI with no statistical difference compared with the experimental values. Performance of the ANN model was evaluated by the coefficient of determination, Bland-Altman plot, and Passing-Bablok regression analyses. Moreover, in fermented beans, total sugar content and titratable acidity showed a pattern similar to that of the total free amino acids predicted through the color-based ANN model. The results of the present work demonstrate that the proposed ANN model can be adopted as a low-cost, in situ procedure to predict FI in fermented cocoa beans through apps developed for mobile devices. Copyright © 2016 Elsevier B.V. All rights reserved.
Monte-Carlo-based uncertainty propagation with hierarchical models—a case study in dynamic torque
NASA Astrophysics Data System (ADS)
Klaus, Leonard; Eichstädt, Sascha
2018-04-01
For a dynamic calibration, a torque transducer is described by a mechanical model, and the corresponding model parameters are to be identified from measurement data. A measuring device for the primary calibration of dynamic torque, and a corresponding model-based calibration approach, have recently been developed at PTB. The complete mechanical model of the calibration set-up is very complex, and involves several calibration steps—making a straightforward implementation of a Monte Carlo uncertainty evaluation tedious. With this in mind, we here propose to separate the complete model into sub-models, with each sub-model being treated with individual experiments and analysis. The uncertainty evaluation for the overall model then has to combine the information from the sub-models in line with Supplement 2 of the Guide to the Expression of Uncertainty in Measurement. In this contribution, we demonstrate how to carry this out using the Monte Carlo method. The uncertainty evaluation involves various input quantities of different origin and the solution of a numerical optimisation problem.
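The Monte Carlo treatment of a single sub-model can be sketched as follows; the sub-model, its inputs, and their distributions are hypothetical stand-ins, not PTB's actual calibration model:

```python
import math
import random
import statistics

random.seed(1)

def sub_model_stiffness(f, J):
    """Hypothetical sub-model: torsional stiffness from a measured
    resonance frequency f and mass moment of inertia J:
    c = (2*pi*f)^2 * J."""
    return (2.0 * math.pi * f) ** 2 * J

def run_monte_carlo(n=5000):
    """Propagate the input distributions through the sub-model and
    summarize the output distribution, as in a Supplement 2 style
    Monte Carlo evaluation."""
    draws = [
        sub_model_stiffness(random.gauss(100.0, 0.5),      # f in Hz
                            random.gauss(2.0e-3, 1.0e-5))  # J in kg m^2
        for _ in range(n)
    ]
    return statistics.mean(draws), statistics.stdev(draws)

mean_c, u_c = run_monte_carlo()  # estimate and standard uncertainty
```

In the hierarchical setting each sub-model is evaluated like this on its own experiments, and the resulting output samples (or their summaries) become the input distributions of the next sub-model up.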
Data Assimilation Into Physics-Based Models Via Kalman Filters
NASA Astrophysics Data System (ADS)
Schunk, R. W.; Scherliess, L.; Sojka, J. J.
2002-12-01
The magnetosphere-ionosphere-thermosphere (M-I-T) system is a highly dynamic, coupled, and nonlinear system that can vary significantly from hour to hour at any location. The coupling is particularly strong during geomagnetic storms and substorms, but there are appreciable time delays associated with the transfer of mass, momentum, and energy between the domains. Therefore, both global physics-based models and vast observational data sets are needed to elucidate the dynamics, energetics, and coupling in the M-I-T system. Fortunately, during the coming decade, tens of millions of measurements of the global M-I-T system could become available from a variety of in situ and remote sensing instruments. Some of the measurements will provide direct information about the state variables (densities, drift velocities, and temperatures), while others will provide indirect information, such as optical emissions and magnetic perturbations. The data sources available could include: thousands of ground-based GPS Total Electron Content (TEC) receivers; a world-wide network of ionosondes; hundreds of magnetometers both on the ground and in space; occultations from the COSMIC Satellites, numerous ground-based tomography chains; auroral images from the POLAR Satellite; images of the magnetosphere and plasmasphere from the IMAGE Satellite; SuperDARN radar measurements in the polar regions; the Living With a Star (LWS) Solar Dynamics Observatory and the LWS Radiation Belt and Ionosphere-Thermosphere Storm Probes; and the world-wide network of incoherent scatter radars. To optimize the scientific return and to provide specifications and forecasts for societal applications, the global models and data must be combined in an optimum way. A powerful way of assimilating multiple data types into a time-dependent, physics-based, numerical model is via a Kalman filter. 
The basic principle of this approach is to combine measurements from multiple instrument types with the information obtained from a physics-based model, taking into account the uncertainties in both the model and measurements. The advantages of this technique and the data sources that might be available will be discussed.
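That principle can be sketched with a scalar Kalman update, which blends the model forecast and a measurement, each weighted by its uncertainty; the quantities and numbers below are hypothetical:

```python
def kalman_update(x_model, var_model, z_obs, var_obs):
    """One scalar Kalman update: the gain weights the measurement
    innovation by the relative uncertainty of model vs. observation."""
    gain = var_model / (var_model + var_obs)
    x_new = x_model + gain * (z_obs - x_model)
    var_new = (1.0 - gain) * var_model
    return x_new, var_new

# Hypothetical: a model-forecast state variable vs. an observed value
x_an, var_an = kalman_update(x_model=1.0e11, var_model=4.0e20,
                             z_obs=1.2e11, var_obs=1.0e20)
```

The analysis lands between forecast and observation, closer to whichever is more certain, and its variance is smaller than either input's, which is what makes sequential assimilation of many instrument types tractable.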
NASA Astrophysics Data System (ADS)
Skataric, Maja; Bose, Sandip; Zeroug, Smaine; Tilke, Peter
2017-02-01
It is not uncommon in the field of non-destructive evaluation that multiple measurements encompassing a variety of modalities are available for analysis and interpretation when determining the underlying states of nature of the materials or parts being tested. Despite, and sometimes due to, the richness of the data, significant challenges arise in the interpretation, manifested as ambiguities and inconsistencies due to various uncertain factors in the physical properties (inputs), environment, measurement device properties, human errors, and the measurement data (outputs). Most of these uncertainties cannot be described by any rigorous mathematical means, and modeling of all possibilities is usually infeasible for many real-time applications. In this work, we discuss an approach based on Hierarchical Bayesian Graphical Models (HBGM) for the improved interpretation of complex (multi-dimensional) problems with parametric uncertainties that lack usable physical models. In this setting, the input space of the physical properties is specified through prior distributions based on domain knowledge and expertise, which are represented as Gaussian mixtures to model the various possible scenarios of interest for non-destructive testing applications. Forward models are then used offline to generate the expected distribution of the proposed measurements, which is used to train a hierarchical Bayesian network. In Bayesian analysis, all model parameters are treated as random variables, and inference of the parameters is made on the basis of the posterior distribution given the observed data. The learned parameters of the posterior distribution obtained after training can therefore be used to build an efficient classifier for differentiating new observed data in real time on the basis of the pre-trained models. We illustrate the implementation of the HBGM approach to ultrasonic measurements used for cement evaluation of cased wells in the oil industry.
NASA Astrophysics Data System (ADS)
Li, Z.; Hudson, M. K.; Chen, Y.
2013-12-01
The outer boundary energetic electron flux is used as a driver in radial diffusion calculations, and its precise determination is critical to the solution. A new model was proposed recently, based on THEMIS measurements, to express the boundary flux as three fit functions of solar wind parameters in a response window, depending on energy and on which solar wind parameter is used: speed, density, or both (Shin and Lee, 2013). The Dartmouth radial diffusion model has been run using LANL geosynchronous satellite measurements as the outer boundary for a one-month interval in July to August 2004, and the calculated phase space density (PSD) is compared with GPS measurements at the GPS orbit (L=4.16), at magnetic equatorial plane crossings, as a test of the model. We also used the outer boundary generated from the Shin and Lee model and examined this boundary condition by computing the error relative to the simulation using a LANL geosynchronous spacecraft data-driven outer boundary. The calculation shows that there is overestimation and underestimation at different times; however, the new boundary condition can generally be used to drive the radial diffusion model, producing the phase space density increase and dropout during a storm with relatively small error. Having this new method based on a solar-wind-parametrized data set, we can run the radial diffusion model for storms when particle measurements are not available at the outer boundary. We chose the Whole Heliosphere Interval (WHI) as an example and compared the result with MHD/test-particle simulations (Hudson et al., 2012), obtaining much better agreement with PSD based on GPS measurements at L=4.16 using the diffusion model, which incorporates atmospheric losses.
Efficient physics-based tracking of heart surface motion for beating heart surgery robotic systems.
Bogatyrenko, Evgeniya; Pompey, Pascal; Hanebeck, Uwe D
2011-05-01
Tracking of beating heart motion in a robotic surgery system is required for complex cardiovascular interventions. A heart surface motion tracking method is developed, including a stochastic physics-based heart surface model and an efficient reconstruction algorithm. The algorithm uses the constraints provided by the model, which exploits the physical characteristics of the heart. The main advantage of the model is that it is more realistic than most standard heart models. Additionally, no explicit matching between the measurements and the model is required. The application of meshless methods significantly reduces the complexity of physics-based tracking. Based on the stochastic physical model of the heart surface, this approach considers the motion of the intervention area and is robust to occlusions and reflections. The tracking algorithm is evaluated in simulations and in experiments on an artificial heart. Providing higher accuracy than standard model-based methods, it successfully copes with occlusions and maintains high performance even when not all measurements are available. Combining the physical and stochastic descriptions of the heart surface motion ensures physically correct and accurate prediction. Automatic initialization of the physics-based cardiac motion tracking enables system evaluation in a clinical environment.
Magnetic space-based field measurements
NASA Technical Reports Server (NTRS)
Langel, R. A.
1981-01-01
Satellite measurements of the geomagnetic field began with the launch of Sputnik 3 in May 1958 and have continued sporadically in the intervening years. A list of spacecraft that have made significant contributions to an understanding of the near-earth geomagnetic field is presented. A new era in near-earth magnetic field measurements began with NASA's launch of Magsat in October 1979. Attention is given to geomagnetic field modeling, crustal magnetic anomaly studies, and investigations of the inner earth. It is concluded that satellite-based magnetic field measurements make global surveys practical for both field modeling and for the mapping of large-scale crustal anomalies. They are the only practical method of accurately modeling the global secular variation. Magsat is providing a significant contribution, both because of the timeliness of the survey and because its vector measurement capability represents an advance in the technology of such measurements.
Full-field 3D shape measurement of specular object having discontinuous surfaces
NASA Astrophysics Data System (ADS)
Zhang, Zonghua; Huang, Shujun; Gao, Nan; Gao, Feng; Jiang, Xiangqian
2017-06-01
This paper presents a novel Phase Measuring Deflectometry (PMD) method to measure specular objects having discontinuous surfaces. A mathematical model is established to directly relate the absolute phase and depth, instead of the phase and gradient. Based on the model, a hardware measuring system has been set up, which consists of a precise translating stage, a projector, a diffuser and a camera. The stage moves the projector and the diffuser together to a known position during measurement. Using model-based and machine vision methods, system calibration is accomplished to provide the required parameters and conditions. Verification tests are presented to evaluate the effectiveness of the developed system. 3D (three-dimensional) shapes of a concave mirror and a monolithic multi-mirror array having multiple specular surfaces have been measured. Experimental results show that the proposed method can effectively obtain the 3D shape of specular objects having discontinuous surfaces.
Rengasamy, Samy; Eimer, Benjamin C
2012-01-01
National Institute for Occupational Safety and Health (NIOSH) certification test methods employ charge neutralized NaCl or dioctyl phthalate (DOP) aerosols to measure filter penetration levels of air-purifying particulate respirators photometrically using a TSI 8130 automated filter tester at 85 L/min. A previous study in our laboratory found that widely different filter penetration levels were measured for nanoparticles depending on whether a particle number (count)-based detector or a photometric detector was used. The purpose of this study was to better understand the influence of key test parameters, including filter media type, challenge aerosol size range, and detector system. Initial penetration levels for 17 models of NIOSH-approved N-, R-, and P-series filtering facepiece respirators were measured using the TSI 8130 photometric method and compared with the particle number-based penetration (obtained using two ultrafine condensation particle counters) for the same challenge aerosols generated by the TSI 8130. In general, the penetration obtained by the photometric method was less than the penetration obtained with the number-based method. Filter penetration was also measured for ambient room aerosols. Penetration measured by the TSI 8130 photometric method was lower than the number-based ambient aerosol penetration values. Number-based monodisperse NaCl aerosol penetration measurements showed that the most penetrating particle size was in the 50 nm range for all respirator models tested, with the exception of one model at ~200 nm size. Respirator models containing electrostatic filter media also showed lower penetration values with the TSI 8130 photometric method than the number-based penetration obtained for the most penetrating monodisperse particles. 
Results suggest that to provide a more challenging respirator filter test method than what is currently used for respirators containing electrostatic media, the test method should utilize a sufficient number of particles <100 nm and a count (particle number)-based detector.
Comparing deflection measurements of a magnetically steerable catheter using optical imaging and MRI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lillaney, Prasheel, E-mail: Prasheel.Lillaney@ucsf.edu; Caton, Curtis; Martin, Alastair J.
2014-02-15
Purpose: Magnetic resonance imaging (MRI) is an emerging modality for interventional radiology, giving clinicians another tool for minimally invasive image-guided interventional procedures. Difficulties associated with endovascular catheter navigation under MRI guidance led to the development of a magnetically steerable catheter. The focus of this study was to mechanically characterize deflections of two different prototypes of the magnetically steerable catheter in vitro to better understand their efficacy. Methods: A mathematical model for deflection of the magnetically steerable catheter is formulated based on the principle that at equilibrium the mechanical and magnetic torques are equal. Furthermore, two different image-based methods for empirically measuring the catheter deflection angle are presented. The first, referred to as the absolute tip method, measures the angle of the line that is tangential to the catheter tip. The second, referred to as the base to tip method, is an approximation used when it is not possible to measure the angle of the tangent line. Optical images of the catheter deflection are analyzed using the absolute tip method to quantitatively validate the predicted deflections from the mathematical model. Optical images of the catheter deflection are also analyzed using the base to tip method to quantitatively determine the differences between the absolute tip and base to tip methods. Finally, the optical images are compared to MR images using the base to tip method to determine the accuracy of measuring the catheter deflection with MR. Results: The optical catheter deflection angles measured for both catheter prototypes using the absolute tip method fit the mathematical model very well (R² = 0.91 and 0.86 for the two prototypes, respectively). The angles measured using the base to tip method were consistently smaller than those measured using the absolute tip method.
The deflection angles measured using optical data did not demonstrate a significant difference from the angles measured using MR image data when compared using the base to tip method. Conclusions: This study validates the theoretical description of the magnetically steerable catheter, while also giving insight into different methods and modalities for measuring the deflection angles of the prototype catheters. These results can be used to mechanically model future iterations of the design. Quantifying the difference between the different methods for measuring catheter deflection will be important when making deflection measurements in future studies. Finally, MR images can be used to reliably measure deflection angles since there is no significant difference between the MR and optical measurements.
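The torque-balance principle underlying the model can be illustrated with a toy equilibrium solver: the magnetic torque mB·sin(α − θ) is set against a linear elastic restoring torque kθ. The linear restoring assumption and all parameter values are ours for illustration, not the paper's.

```python
import math

def deflection_angle(mB, k, alpha):
    """Solve mB*sin(alpha - theta) = k*theta for theta by bisection.

    mB: magnetic moment times field strength (N*m), k: elastic stiffness
    (N*m/rad), alpha: field angle relative to the undeflected tip (rad).
    The residual is monotonically decreasing on [0, alpha], with a sign
    change between the endpoints, so bisection converges.
    """
    resid = lambda th: mB * math.sin(alpha - th) - k * th
    lo, hi = 0.0, alpha
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if resid(mid) > 0:
            lo = mid
        else:
            hi = mid
    return lo

# Strong field relative to stiffness: the tip nearly aligns with the field.
theta = deflection_angle(mB=5e-3, k=1e-4, alpha=math.radians(60))
```

Sweeping `mB` (i.e., the applied field) traces out a deflection curve analogous to the fitted model in the study.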
A measurement-based generalized source model for Monte Carlo dose simulations of CT scans
Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun
2018-01-01
The goal of this study is to develop a generalized source model (GSM) for accurate Monte Carlo dose simulations of CT scans based solely on measurement data, without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at the x-ray target level, with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution, respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model were tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and the dose profiles along the lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated better than 5% agreement between the Monte Carlo-simulated and ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients' CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology. PMID:28079526
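The spectrum-unfolding step can be sketched as a tiny Levenberg-Marquardt iteration that recovers energy-bin weights from a PDD curve built from mono-energetic basis functions. The two-bin exponential-attenuation basis and all numerical values are illustrative assumptions, not the scanners' measured data.

```python
import numpy as np

# Two-bin spectral unfolding sketch: PDD(d) = sum_i w_i * exp(-mu_i * d).
depths = np.linspace(0.0, 20.0, 50)        # cm
mus = np.array([0.20, 0.05])               # cm^-1, per illustrative energy bin
basis = np.exp(-np.outer(depths, mus))     # mono-energetic depth-dose basis

true_w = np.array([0.3, 0.7])              # "unknown" spectral weights
pdd = basis @ true_w                       # the "measured" central-axis PDD

def lm_fit(w, lam=1e-3, iters=50):
    """Minimal Levenberg-Marquardt loop (the model is linear in w)."""
    for _ in range(iters):
        r = basis @ w - pdd                # residual vector
        J = basis                          # Jacobian of residuals w.r.t. w
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(w)), J.T @ r)
        w = w - step                       # damped Gauss-Newton step
    return w

w_hat = lm_fit(np.array([0.5, 0.5]))
```

Because the model is linear in the weights, the damped iteration converges essentially in one step; the real unfolding problem adds nonnegativity and many more bins.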
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kao, Kuo-Hsing; Meyer, Kristin De; Department of Electrical Engineering, KU Leuven, Leuven
Band-to-band tunneling parameters of strained indirect-bandgap materials are not well known, hampering the reliability of performance predictions for tunneling devices based on these materials. The nonlocal band-to-band tunneling model for compressively strained SiGe is calibrated based on a comparison of strained SiGe p-i-n tunneling diode measurements and doping-profile-based diode simulations. Dopant and Ge profiles of the diodes are determined by secondary ion mass spectrometry and capacitance-voltage measurements. Theoretical parameters of the band-to-band tunneling model are calculated from strain-dependent properties such as bandgap, phonon energy, deformation-potential-based electron-phonon coupling, and hole effective masses of strained SiGe. The latter is determined with a 6-band k·p model. The calibration indicates an underestimation of the theoretical electron-phonon coupling by nearly an order of magnitude. Prospects for compressively strained SiGe tunneling transistors are assessed by simulations with the calibrated model.
Verifiable fault tolerance in measurement-based quantum computation
NASA Astrophysics Data System (ADS)
Fujii, Keisuke; Hayashi, Masahito
2017-09-01
Quantum systems, in general, cannot be simulated efficiently by a classical computer, and hence are useful for solving certain mathematical problems and simulating quantum many-body systems. Unfortunately, this also implies that verifying the output of a quantum system is nontrivial, since predicting the output is exponentially hard. As another problem, quantum systems are very sensitive to noise and thus require error correction. Here, we propose a framework for verification of the output of fault-tolerant quantum computation in a measurement-based model. In contrast to existing analyses of fault tolerance, we do not assume any noise model on the resource state; instead, an arbitrary resource state is tested using only single-qubit measurements to verify whether or not the output of measurement-based quantum computation on it is correct. Verifiability is provided by a constant-time repetition of the original measurement-based quantum computation in appropriate measurement bases. Since full characterization of quantum noise is exponentially hard for large-scale quantum computing systems, our framework provides an efficient way to practically verify experimental quantum error correction.
Khodabandeloo, Babak; Melvin, Dyan; Jo, Hongki
2017-01-01
Direct measurements of external forces acting on a structure are infeasible in many cases. The Augmented Kalman Filter (AKF) has several attractive features that can be utilized to solve the inverse problem of identifying applied forces, as it requires only the dynamic model and the measured responses of the structure at a few locations. However, the AKF intrinsically suffers from numerical instabilities when accelerations, which are the most common response measurements in structural dynamics, are the only measured responses. Although displacement measurements can be used to overcome the instability issue, absolute displacement measurements are challenging and expensive for full-scale dynamic structures. In this paper, a reliable model-based data fusion approach that reconstructs dynamic forces applied to structures using heterogeneous structural measurements (i.e., strains and accelerations) in combination with the AKF is investigated. The incorporation of multi-sensor measurements into the AKF is formulated, and the formulation is implemented and validated through numerical examples considering possible uncertainties in numerical modeling and sensor measurement. A planar truss example was chosen to clearly explain the formulation, while the method and formulation are applicable to other structures as well. PMID:29149088
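The data-fusion idea can be sketched for a single-degree-of-freedom oscillator: the state is augmented with the unknown force (modelled as a random walk), and a displacement channel (a stand-in for the paper's strain-derived measurements) is fused with an acceleration channel. System parameters, noise levels, and the force profile are illustrative assumptions, and the truss structure of the paper is reduced to one mass.

```python
import numpy as np

m, c, k = 1.0, 0.4, 100.0          # illustrative mass, damping, stiffness
dt, n = 0.01, 600
rng = np.random.default_rng(1)

t = np.arange(n) * dt
p_true = 2.0 * np.sin(2 * np.pi * 1.5 * t)     # true applied force

# Simulate noisy "measured" displacement and acceleration (forward Euler).
x = v = 0.0
disp, acc = [], []
for p in p_true:
    a = (p - c * v - k * x) / m
    disp.append(x)
    acc.append(a)
    x, v = x + v * dt, v + a * dt
disp = np.array(disp) + rng.normal(0, 1e-5, n)
acc = np.array(acc) + rng.normal(0, 1e-3, n)

# Augmented state z = [x, v, p]; the force evolves as a random walk.
A = np.array([[1.0, dt, 0.0],
              [-k * dt / m, 1.0 - c * dt / m, dt / m],
              [0.0, 0.0, 1.0]])
H = np.array([[1.0, 0.0, 0.0],                 # displacement channel
              [-k / m, -c / m, 1.0 / m]])      # acceleration channel
Q = np.diag([1e-12, 1e-12, 1e-2])              # force random-walk variance
R = np.diag([1e-10, 1e-6])                     # measurement noise variances

z, P = np.zeros(3), np.eye(3)
p_est = []
for i in range(n):
    z, P = A @ z, A @ P @ A.T + Q              # predict
    y = np.array([disp[i], acc[i]]) - H @ z    # innovation
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    z, P = z + K @ y, (np.eye(3) - K @ H) @ P  # update
    p_est.append(z[2])
p_est = np.array(p_est)
```

Dropping the displacement row of `H` reproduces the acceleration-only setting in which the AKF's instability appears.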
Antioxidant Capacity: Experimental Determination by EPR Spectroscopy and Mathematical Modeling.
Polak, Justyna; Bartoszek, Mariola; Chorążewski, Mirosław
2015-07-22
A new method of determining antioxidant capacity based on a mathematical model is presented in this paper. The model was fitted to 1000 data points of electron paramagnetic resonance (EPR) spectroscopy measurements of various food product samples such as tea, wine, juice, and herbs, with Trolox equivalent antioxidant capacity (TEAC) values from 20 to 2000 μmol TE/100 mL. The proposed mathematical equation allows the determination of the TEAC of food products from a single EPR spectroscopy measurement. The model was tested on the basis of 80 EPR spectroscopy measurements of herb, tea, coffee, and juice samples. The proposed model works for both strong and weak antioxidants (TEAC values from 21 to 2347 μmol TE/100 mL). The coefficient of determination between TEAC values obtained experimentally and TEAC values calculated with the proposed mathematical equation was found to be R² = 0.98. Therefore, the proposed new method of TEAC determination based on a mathematical model is a good alternative to the standard EPR method, being fast, accurate, inexpensive, and simple to perform.
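The single-measurement calibration idea can be sketched with synthetic data: a log-linear model maps one EPR signal amplitude to a TEAC value, and the coefficient of determination is computed as in the paper. The exponential signal-TEAC relationship, the noise level, and all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic calibration set: EPR signal decays with antioxidant capacity.
teac = rng.uniform(20, 2000, 80)                  # umol TE / 100 mL
signal = 100 * np.exp(-teac / 900) + rng.normal(0, 1.0, 80)

# Fit a log-linear calibration and predict TEAC back from the signal.
coef = np.polyfit(np.log(signal), teac, 1)
pred = np.polyval(coef, np.log(signal))

# Coefficient of determination between measured and predicted TEAC.
ss_res = np.sum((teac - pred) ** 2)
ss_tot = np.sum((teac - teac.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

Once fitted, a single new EPR amplitude yields a TEAC estimate via `np.polyval(coef, np.log(new_signal))`.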
Rahman, Mizanur; Hewitt, Jennifer E; Van-Bussel, Frank; Edwards, Hunter; Blawzdziewicz, Jerzy; Szewczyk, Nathaniel J; Driscoll, Monica; Vanapalli, Siva A
2018-06-12
Muscle strength is a functional measure of quality of life in humans. Declines in muscle strength are manifested in disease as well as during inactivity, aging, and space travel. With conserved muscle biology, the simple genetic model C. elegans is a high-throughput platform in which to identify molecular mechanisms causing muscle strength loss and to develop interventions based on diet, exercise, and drugs. In the clinic, standardized strength measures are essential to quantitate changes in patients; however, analogous standards have not been recapitulated in the C. elegans model, since force generation fluctuates with animal behavior and locomotion. Here, we report a microfluidics-based system for strength measurement that we call 'NemaFlex', based on pillar deflection as the nematode crawls through a forest of pillars. We have optimized the micropillar forest design and identified robust measurement conditions that yield a measure of strength independent of behavior and gait. Validation studies using a muscle-contracting agent and mutants confirm that NemaFlex can reliably score muscular strength in C. elegans. Additionally, we report a scaling factor to account for animal size that is consistent with a biomechanics model and enables comparative strength studies of mutants. Taken together, our findings anchor NemaFlex for applications in genetic and drug screens, for defining molecular and cellular circuits of neuromuscular function, and for dissection of degenerative processes in disuse, aging, and disease.
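The pillar-deflection readout rests on cantilever beam mechanics: a point load at the contact height produces a deflection proportional to the force. The PDMS modulus and pillar geometry below are assumed for illustration, not NemaFlex's calibrated values.

```python
import math

# Illustrative pillar properties (assumed, not from the paper).
E = 2.0e6            # Pa, PDMS Young's modulus
r = 22e-6            # m, pillar radius
h = 87e-6            # m, pillar height
I = math.pi * r**4 / 4   # second moment of area of a circular cross-section

def force_from_deflection(delta, contact=h):
    """Point-load cantilever model: F = 3*E*I*delta / contact^3."""
    return 3 * E * I * delta / contact**3

# A 10-micron measured deflection maps to a micronewton-scale force.
f = force_from_deflection(10e-6)
```

Summing such per-pillar forces over a crawl, and applying the paper's size-scaling factor, gives a behavior-independent strength score.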
NASA Astrophysics Data System (ADS)
Vaidyanathan, A.; Yip, F.
2017-12-01
Context: Studies that have explored the impacts of environmental exposure on human health have mostly relied on data from weather stations, which can be limited in geographic scope. For this assessment, we: (1) evaluated the performance of meteorological data from the North American Land Data Assimilation System Phase 2 (NLDAS) model against measurements from weather stations for public health use, specifically for CDC's Environmental Public Health Tracking Program; and (2) conducted a health assessment to explore the relationship between heat exposure and mortality, and examined region-specific differences in heat-mortality (H-M) relationships when using model-based estimates in place of measurements from weather stations. Methods: Meteorological data from the NLDAS Phase 2 model were evaluated against measurements from weather stations. A time-series analysis was conducted, using both station- and model-based data, to generate H-M relationships for counties in the U.S. The county-specific risk information was pooled to characterize regional relationships for both station- and model-based data, which were then compared to identify degrees of overlap and discrepancies between results generated using the two data sources. Results: NLDAS-based heat metrics were in agreement with those generated using weather station data. In general, the H-M relationship tended to be non-linear and varied by region, particularly in the heat index value at which the health risks become significantly positive. However, there was a high degree of overlap between region-specific H-M relationships generated from weather stations and the NLDAS model. Interpretation: Heat metrics from the NLDAS model are available for all counties in the coterminous U.S. from 1979-2015. These data can facilitate health research and surveillance activities exploring health impacts associated with long-term heat exposures at finer geographic scales. Conclusion: High spatiotemporal coverage of environmental health data is an important attribute in understanding potential public health impacts. Given the limited geographic scope of station-based measurements, adopting NLDAS-based modeled estimates in CDC's Tracking Network would provide a more comprehensive understanding of specific meteorological exposures on human health.
Modelling rating curves using remotely sensed LiDAR data
Nathanson, Marcus; Kean, Jason W.; Grabs, Thomas J.; Seibert, Jan; Laudon, Hjalmar; Lyon, Steve W.
2012-01-01
Accurate stream discharge measurements are important for many hydrological studies. In remote locations, however, it is often difficult to obtain stream flow information because of the difficulty in making the discharge measurements necessary to define stage-discharge relationships (rating curves). This study investigates the feasibility of defining rating curves by using a fluid mechanics-based model constrained with topographic data from an airborne LiDAR scan. The study was carried out for an 8 m wide channel in the boreal landscape of northern Sweden. LiDAR data were used to define channel geometry above a low-flow water surface along the 90 m surveyed reach. The channel topography below the water surface was estimated using the simple assumption of a flat streambed. The roughness for the modelled reach was back-calculated from a single measurement of discharge. The topographic and roughness information was then used to model a rating curve. To isolate the potential influence of the flat-bed assumption, a 'hybrid model' rating curve was developed on the basis of data combined from the LiDAR scan and a detailed ground survey. The hybrid-model rating curve agreed with the direct measurements of discharge, and the LiDAR-model rating curve likewise agreed with the medium- and high-flow measurements, based on confidence intervals calculated from the direct measurements. The discrepancy between the LiDAR-model rating curve and the low-flow measurements was likely due to reduced roughness associated with unresolved submerged bed topography. Scanning during periods of low flow can help minimize this deficiency. These results suggest that combined ground surveys and LiDAR scans, or multifrequency LiDAR scans that see 'below' the water surface (bathymetric LiDAR), could be useful in generating the data needed to run such a fluid mechanics-based model.
This opens a realm of possibility to remotely sense and monitor stream flows in channels in remote locations.
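As a stand-in for the fluid-mechanics model, a Manning-equation sketch for a rectangular channel shows how a single discharge measurement back-calculates the roughness and then defines the full rating curve. The width, bed slope, rectangular geometry, and calibration point are illustrative assumptions, not the Swedish reach's surveyed values.

```python
import numpy as np

width, slope = 8.0, 0.005            # m, bed slope (illustrative values)

def manning_q(h, n):
    """Manning's equation for a rectangular channel: Q = A*R^(2/3)*S^(1/2)/n."""
    area = width * h
    r_h = area / (width + 2 * h)     # hydraulic radius = area / wetted perimeter
    return area * r_h ** (2 / 3) * np.sqrt(slope) / n

# Back-calculate roughness from one measured discharge (Q scales as 1/n).
h_obs, q_obs = 0.40, 1.2             # m, m^3/s (illustrative calibration point)
n_fit = manning_q(h_obs, 1.0) / q_obs

# The calibrated model then yields the full stage-discharge rating curve.
stages = np.linspace(0.1, 1.5, 8)
rating = manning_q(stages, n_fit)
```

Because discharge is inversely proportional to roughness, the back-calculation needs no iterative solver here; the paper's model plays the same role with surveyed, non-rectangular geometry.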
NASA Astrophysics Data System (ADS)
Feng, Yefeng; Wu, Qin; Hu, Jianbing; Xu, Zhichao; Peng, Cheng; Xia, Zexu
2018-03-01
Interface-induced polarization has a significant impact on the permittivity of 0–3 type polymer composites with Si-based semi-conducting fillers. The polarity of the Si-based filler, the polarity of the polymer matrix, and the grain size of the filler are closely connected with the induced polarization and the permittivity of the composites. However, unlike in 2–2 type composites, the real permittivity of Si-based fillers in 0–3 type composites cannot be directly measured. Therefore, deriving the theoretical permittivity of fillers in 0–3 composites through effective medium approximation (EMA) models is necessary. In this work, the real permittivity of Si-based semi-conducting fillers in ten different 0–3 polymer composite systems was calculated by linear fitting of simplified EMA models, based on the particularities of the reported parameters in those composites. The results further confirmed the proposed interface-induced polarization and verified the significant influence of filler polarity, polymer polarity, and filler size on the induced polarization and permittivity of the composites. High self-consistency was obtained between the present modelling and prior measurements. This work may offer a facile and effective route to obtain the difficult-to-measure dielectric properties of the discrete filler phase in some special polymer-based composite systems.
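One common EMA choice, the Maxwell-Garnett model for spherical inclusions, can be inverted in closed form to recover the filler permittivity from a measured composite value. Whether the paper's simplified EMA models coincide with Maxwell-Garnett is our assumption for illustration, and the permittivities and volume fraction below are invented.

```python
def mg_effective(eps_f, eps_m, f):
    """Maxwell-Garnett effective permittivity of spherical fillers
    (volume fraction f) in a matrix of permittivity eps_m."""
    t = f * (eps_f - eps_m) / (eps_f + 2 * eps_m)
    return eps_m * (1 + 2 * t) / (1 - t)

def mg_invert(eps_eff, eps_m, f):
    """Closed-form recovery of the filler permittivity from a measured
    composite permittivity, inverting the same mixing rule."""
    t = (eps_eff - eps_m) / (eps_eff + 2 * eps_m) / f
    return eps_m * (1 + 2 * t) / (1 - t)

# Round trip with illustrative values (silicon-like filler in a polymer).
eps_eff = mg_effective(11.7, 3.0, 0.3)
eps_f_recovered = mg_invert(eps_eff, 3.0, 0.3)
```

Fitting `mg_invert` outputs across several composites at different volume fractions mirrors the paper's linear-fitting strategy.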
Multivariate Non-Symmetric Stochastic Models for Spatial Dependence Models
NASA Astrophysics Data System (ADS)
Haslauer, C. P.; Bárdossy, A.
2017-12-01
A copula-based multivariate framework allows more flexibility in describing different kinds of dependence than models relying on the confining assumption of symmetric Gaussian dependence: different quantiles can be modelled with different degrees of dependence, and it will be demonstrated how this can be expected given process understanding. Maximum-likelihood-based multivariate parameter estimation yields stable and reliable results; not only are improved results obtained in cross-validation-based measures of uncertainty, but also a more realistic spatial structure of uncertainty compared to second-order models of dependence. As much information as is available is included in the parameter estimation: incorporation of censored measurements (e.g., below the detection limit, or above the sensitive range of the measurement device) yields more realistic spatial models; the proportion of true zeros can be jointly estimated with, and distinguished from, censored measurements, which allows estimates of the age of a contaminant in the system; and secondary information (categorical and on the ratio scale) has been used to improve the estimation of the primary variable. These copula-based multivariate statistical techniques are demonstrated on hydraulic conductivity observations at the Borden (Canada) site, the MADE site (USA), and a large regional groundwater quality data set in south-west Germany. Fields of spatially distributed K were simulated with identical marginal distributions and identical second-order spatial moments, yet substantially differing solute transport characteristics when numerical tracer tests were performed. A statistical methodology is shown that allows the delineation of a boundary layer separating homogeneous parts of a spatial data set. The effects of this boundary layer (macro structure) and the spatial dependence of K (micro structure) on solute transport behaviour are shown.
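The motivation for going beyond Gaussian dependence can be checked numerically: a Gaussian copula forces the lower- and upper-quantile dependence to be symmetric, which is exactly the restriction the copula framework relaxes. The sample size and correlation value below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sample a bivariate normal (the dependence of a Gaussian copula).
n = 200_000
cov = [[1.0, 0.7], [0.7, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)

def exceedance_corr(x, y, upper):
    """Correlation restricted to the joint lower or upper quadrant."""
    m = (x > 0) & (y > 0) if upper else (x < 0) & (y < 0)
    return np.corrcoef(x[m], y[m])[0, 1]

lo_corr = exceedance_corr(z[:, 0], z[:, 1], upper=False)
hi_corr = exceedance_corr(z[:, 0], z[:, 1], upper=True)
```

For a symmetric Gaussian model the two quadrant correlations coincide up to sampling noise; an asymmetric copula would let them differ, matching the process-based expectation described above.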
Malinowski, Kathleen; McAvoy, Thomas J.; George, Rohini; Dieterich, Sonja; D’Souza, Warren D.
2013-01-01
Purpose: To determine how best to time respiratory surrogate-based tumor motion model updates by comparing a novel technique based on external measurements alone to three direct measurement methods. Methods: Concurrently measured tumor and respiratory surrogate positions from 166 treatment fractions for lung or pancreas lesions were analyzed. Partial-least-squares regression models of tumor position from marker motion were created from the first six measurements in each dataset. Successive tumor localizations were obtained at a rate of once per minute on average. Model updates were timed according to four methods: never, respiratory surrogate-based (when metrics based on respiratory surrogate measurements exceeded confidence limits), error-based (when localization error ≥3 mm), and always (approximately once per minute). Results: Radial tumor displacement prediction errors (mean ± standard deviation) for the four schemes described above were 2.4 ± 1.2, 1.9 ± 0.9, 1.9 ± 0.8, and 1.7 ± 0.8 mm, respectively. The never-update error was significantly larger than the errors of the other methods. Mean update counts over 20 min were 0, 4, 9, and 24, respectively. Conclusions: The same improvement in tumor localization accuracy could be achieved through any of the three update methods, but significantly fewer updates were required when the respiratory surrogate method was used. This study establishes the feasibility of timing image acquisitions for updating respiratory surrogate models without direct tumor localization. PMID:23822413
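The update-timing logic can be sketched with ordinary least squares standing in for the paper's partial-least-squares regression, synthetic marker and tumor traces, an initial model from the first six measurements, and the 3 mm error-based trigger. All signals and noise levels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic surrogate markers and tumor motion (mm); the tumor trace is a
# noisy linear combination of the two marker signals.
n = 60
phase = np.linspace(0, 12 * np.pi, n)
markers = np.column_stack([np.sin(phase), np.sin(phase + 0.3)])
tumor = 8.0 * np.sin(phase + 0.1) + rng.normal(0, 0.1, n)

X = np.column_stack([markers, np.ones(n)])          # design with intercept
coef = np.linalg.lstsq(X[:6], tumor[:6], rcond=None)[0]   # initial model

errors, updates = [], 0
for i in range(6, n):
    err = abs(X[i] @ coef - tumor[i])               # localization error
    errors.append(err)
    if err >= 3.0:                                  # error-based trigger
        coef = np.linalg.lstsq(X[:i + 1], tumor[:i + 1], rcond=None)[0]
        updates += 1
```

Swapping the trigger for a statistic computed from the markers alone gives the respiratory surrogate-based scheme, which needs no direct tumor localization.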
2015-01-01
An immersion Raman probe was used in emulsion copolymerization reactions to measure monomer concentrations and particle sizes. Quantitative determination of monomer concentrations is feasible in two-monomer copolymerizations, but only the overall conversion could be measured by Raman spectroscopy in a four-monomer copolymerization. The feasibility of measuring monomer conversion and particle size was established using partial least-squares (PLS) calibration models. A simplified theoretical framework for the measurement of particle sizes from photon scattering is presented, built on the elastic-sphere-vibration and surface-tension models. PMID:26900256
Scenario Analysis of Soil and Water Conservation in Xiejia Watershed Based on Improved CSLE Model
NASA Astrophysics Data System (ADS)
Liu, Jieying; Yu, Ming; Wu, Yong; Huang, Yao; Nie, Yawen
2018-01-01
Based on existing research results and related data, a scenario analysis method is used to evaluate the effects of different soil and water conservation measures on soil erosion in a small watershed. From the analysis of soil erosion scenarios and model-simulated budgets in the study area, the simulated soil erosion rates under all scenarios are lower than the 2013 baseline soil erosion in the watershed. Soil and water conservation engineering measures are more effective in reducing soil erosion than biological measures and conservation tillage measures.
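A CSLE-style scenario comparison reduces to a product of factors, A = R·K·L·S·B·E·T, with each conservation measure lowering one factor. The factor values and scenario modifiers below are invented for illustration, not the watershed's calibrated inputs.

```python
# Illustrative CSLE-style factors: rainfall erosivity R, soil erodibility K,
# combined slope length/steepness LS, and the biological (B), engineering (E),
# and tillage (T) conservation factors (1.0 = no measure applied).
base = dict(R=3500.0, K=0.004, LS=4.2, B=1.0, E=1.0, T=1.0)

scenarios = {
    "present":       dict(base),
    "terracing":     dict(base, E=0.2),   # engineering measure
    "afforestation": dict(base, B=0.3),   # biological measure
    "no-till":       dict(base, T=0.5),   # tillage measure
}

def erosion_rate(factors):
    """Multiplicative CSLE-style soil loss estimate."""
    out = 1.0
    for v in factors.values():
        out *= v
    return out

rates = {name: erosion_rate(f) for name, f in scenarios.items()}
```

Ranking `rates` across scenarios reproduces the kind of comparison reported above: every measure lowers the simulated rate below the present-condition baseline, by an amount set by its factor.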
Improving software maintenance through measurement
NASA Technical Reports Server (NTRS)
Rombach, H. Dieter; Ulery, Bradford T.
1989-01-01
A practical approach to improving software maintenance through measurements is presented. This approach is based on general models for measurement and improvement. Both models, their integration, and practical guidelines for transferring them into industrial maintenance settings are presented. Several examples of applications of the approach to real-world maintenance environments are discussed.
ERIC Educational Resources Information Center
Maksl, Adam; Ashley, Seth; Craft, Stephanie
2015-01-01
News media literacy refers to the knowledge and motivations needed to identify and engage with journalism. This study measured levels of news media literacy among 500 teenagers using a new scale measure based on Potter's model of media literacy and adapted to news media specifically. The adapted model posits that news media literate individuals…
USDA-ARS?s Scientific Manuscript database
Passive capillary lysimeters (PCLs) are uniquely suited for measuring water fluxes in variably-saturated soils. The objective of this work was to compare PCL flux measurements with simulated fluxes obtained with a calibrated unsaturated flow model. The Richards equation-based model was calibrated us...
Song, Zirui; Rose, Sherri; Chernew, Michael E; Safran, Dana Gelb
2017-01-01
As population-based payment models become increasingly common, it is crucial to understand how such payment models affect health disparities. We evaluated health care quality and spending among enrollees in areas with lower versus higher socioeconomic status in Massachusetts before and after providers entered into the Alternative Quality Contract, a two-sided population-based payment model with substantial incentives tied to quality. We compared changes in process measures, outcome measures, and spending between enrollees in areas with lower and higher socioeconomic status from 2006 to 2012 (outcome measures were measured after the intervention only). Quality improved for all enrollees in the Alternative Quality Contract after their provider organizations entered the contract. Process measures improved 1.2 percentage points per year more among enrollees in areas with lower socioeconomic status than among those in areas with higher socioeconomic status. Outcome measure improvement was no different between the subgroups; neither were changes in spending. Larger or comparable improvements in quality among enrollees in areas with lower socioeconomic status suggest a potential narrowing of disparities. Strong pay-for-performance incentives within a population-based payment model could encourage providers to focus on improving quality for more disadvantaged populations. Project HOPE—The People-to-People Health Foundation, Inc.
Tooze, Janet A; Troiano, Richard P; Carroll, Raymond J; Moshfegh, Alanna J; Freedman, Laurence S
2013-06-01
Systematic investigations into the structure of measurement error of physical activity questionnaires are lacking. We propose a measurement error model for a physical activity questionnaire that uses physical activity level (the ratio of total energy expenditure to basal energy expenditure) to relate questionnaire-based reports of physical activity level to true physical activity levels. The 1999-2006 National Health and Nutrition Examination Survey physical activity questionnaire was administered to 433 participants aged 40-69 years in the Observing Protein and Energy Nutrition (OPEN) Study (Maryland, 1999-2000). Valid estimates of participants' total energy expenditure were also available from doubly labeled water, and basal energy expenditure was estimated from an equation; the ratio of those measures estimated true physical activity level ("truth"). We present a measurement error model that accommodates the mixture of errors that arise from assuming a classical measurement error model for doubly labeled water and a Berkson error model for the equation used to estimate basal energy expenditure. The method was then applied to the OPEN Study. Correlations between the questionnaire-based physical activity level and truth were modest (r = 0.32-0.41); attenuation factors (0.43-0.73) indicate that the use of questionnaire-based physical activity level would lead to attenuated estimates of effect size. Results suggest that sample sizes for estimating relationships between physical activity level and disease should be inflated, and that regression calibration can be used to provide measurement error-adjusted estimates of relationships between physical activity and disease.
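The attenuation effect described here can be reproduced in a few lines under a classical measurement error model. The variances below are illustrative choices, not OPEN Study values, picked so the attenuation factor lands near the reported 0.43-0.73 range.

```python
# Simulated illustration of how classical measurement error in a
# questionnaire-based exposure attenuates regression slopes.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
truth = rng.normal(1.75, 0.15, n)                # true physical activity level
questionnaire = truth + rng.normal(0, 0.12, n)   # classical error model

# Outcome generated from truth with true slope 1.0
outcome = 2.0 + 1.0 * truth + rng.normal(0, 0.1, n)

# Attenuation factor: slope of regressing truth on the error-prone measure
lam = np.cov(truth, questionnaire)[0, 1] / np.var(questionnaire)

# Naive regression of outcome on the questionnaire recovers ~ lam * true slope
naive_slope = np.cov(outcome, questionnaire)[0, 1] / np.var(questionnaire)
print(f"attenuation factor = {lam:.3f}, naive slope = {naive_slope:.3f}")
```

Regression calibration divides the naive slope by the attenuation factor to recover a measurement-error-adjusted estimate, which is the correction the abstract recommends.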
Royston, Thomas J.; Dai, Zoujun; Chaunsali, Rajesh; Liu, Yifei; Peng, Ying; Magin, Richard L.
2011-01-01
Previous studies of the first author and others have focused on low audible frequency (<1 kHz) shear and surface wave motion in and on a viscoelastic material comprised of or representative of soft biological tissue. A specific case considered has been surface (Rayleigh) wave motion caused by a circular disk located on the surface and oscillating normal to it. Different approaches to identifying the type and coefficients of a viscoelastic model of the material based on these measurements have been proposed. One approach has been to optimize coefficients in an assumed viscoelastic model type to match measurements of the frequency-dependent Rayleigh wave speed. Another approach has been to optimize coefficients in an assumed viscoelastic model type to match the complex-valued frequency response function (FRF) between the excitation location and points at known radial distances from it. In the present article, the relative merits of these approaches are explored theoretically, computationally, and experimentally. It is concluded that matching the complex-valued FRF may provide a better estimate of the viscoelastic model type and parameter values; though, as the studies herein show, there are inherent limitations to identifying viscoelastic properties based on surface wave measurements. PMID:22225067
Aerodynamic force measurement on a large-scale model in a short duration test facility
NASA Astrophysics Data System (ADS)
Tanno, H.; Kodera, M.; Komuro, T.; Sato, K.; Takahasi, M.; Itoh, K.
2005-03-01
A force measurement technique has been developed for large-scale aerodynamic models tested over short durations. The technique is based on direct acceleration measurements, with miniature accelerometers mounted on a test model suspended by wires. By measuring acceleration at two different locations, the technique can eliminate oscillations arising from the natural vibration of the model. The technique was used for drag force measurements on a 3-m-long supersonic combustor model in the HIEST free-piston-driven shock tunnel. A time resolution of 350 μs is achieved, which is sufficient for the millisecond-order test times in HIEST. To evaluate measurement reliability and accuracy, measured values were compared with results from a three-dimensional Navier-Stokes numerical simulation; the difference was less than 5%. We conclude that this measurement technique is sufficiently reliable for measuring aerodynamic forces within test durations of 1 ms.
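One reading of the two-accelerometer idea is that each sensor records the rigid-body acceleration plus a natural-vibration term weighted by the mode shape at its location, so two sensors yield a solvable 2x2 system per sample. The mode-shape values, frequency, and amplitudes below are invented for illustration and are not from the HIEST experiment.

```python
# Sketch: separate rigid-body (drag) acceleration from a structural vibration
# using two accelerometers with assumed known mode-shape values phi1, phi2.
import numpy as np

t = np.linspace(0, 0.001, 1000)                   # 1 ms test window
a_rigid = 50.0 * np.ones_like(t)                  # drag acceleration, m/s^2
vibration = 30.0 * np.sin(2 * np.pi * 5000 * t)   # 5 kHz natural mode

phi1, phi2 = 1.0, -0.6                            # assumed mode-shape values
sensor1 = a_rigid + phi1 * vibration
sensor2 = a_rigid + phi2 * vibration

# Solve [[1, phi1], [1, phi2]] @ [a_rigid, q] = [sensor1, sensor2] per sample
A = np.array([[1.0, phi1], [1.0, phi2]])
recovered = np.linalg.solve(A, np.vstack([sensor1, sensor2]))
a_est = recovered[0]
print(f"max rigid-body recovery error = {np.abs(a_est - a_rigid).max():.2e}")
```

The drag force then follows from the recovered rigid-body acceleration and the model mass, F = m * a.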
Local Difference Measures between Complex Networks for Dynamical System Model Evaluation
Lange, Stefan; Donges, Jonathan F.; Volkholz, Jan; Kurths, Jürgen
2015-01-01
A faithful modeling of real-world dynamical systems necessitates model evaluation. A recent promising methodological approach to this problem has been based on complex networks, which in turn have proven useful for the characterization of dynamical systems. In this context, we introduce three local network difference measures and demonstrate their capabilities in the field of climate modeling, where these measures facilitate a spatially explicit model evaluation. Building on a recent study by Feldhoff et al. [1] we comparatively analyze statistical and dynamical regional climate simulations of the South American monsoon system. Three types of climate networks representing different aspects of rainfall dynamics are constructed from the modeled precipitation space-time series. Specifically, we define simple graphs based on positive as well as negative rank correlations between rainfall anomaly time series at different locations, and such based on spatial synchronizations of extreme rain events. An evaluation against respective networks built from daily satellite data provided by the Tropical Rainfall Measuring Mission 3B42 V7 reveals far greater differences in model performance between network types for a fixed but arbitrary climate model than between climate models for a fixed but arbitrary network type. We identify two sources of uncertainty in this respect. Firstly, climate variability limits fidelity, particularly in the case of the extreme event network; and secondly, larger geographical link lengths render link misplacements more likely, most notably in the case of the anticorrelation network; both contributions are quantified using suitable ensembles of surrogate networks. Our model evaluation approach is applicable to any multidimensional dynamical system and especially our simple graph difference measures are highly versatile as the graphs to be compared may be constructed in whatever way required. 
Generalizations to directed as well as edge- and node-weighted graphs are discussed. PMID:25856374
A high-resolution integrated model of the National Ignition Campaign cryogenic layered experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, O. S.; Cerjan, C. J.; Marinak, M. M.
A detailed simulation-based model of the June 2011 National Ignition Campaign cryogenic DT experiments is presented. The model is based on integrated hohlraum-capsule simulations that utilize the best available models for the hohlraum wall, ablator, and DT equations of state and opacities. The calculated radiation drive was adjusted by changing the input laser power to match the experimentally measured shock speeds, shock merger times, peak implosion velocity, and bangtime. The crossbeam energy transfer model was tuned to match the measured time-dependent symmetry. Mid-mode mix was included by directly modeling the ablator and ice surface perturbations up to mode 60. Simulated experimental values were extracted from the simulation and compared against the experiment. Although by design the model is able to reproduce the 1D in-flight implosion parameters and low-mode asymmetries, it is not able to accurately predict the measured and inferred stagnation properties and levels of mix. In particular, the measured yields were 15%-40% of the calculated yields, and the inferred stagnation pressure is about 3 times lower than simulated.
Measurement of a model of implementation for health care: toward a testable theory
2012-01-01
Background Greenhalgh et al. used a considerable evidence base to develop a comprehensive model of implementation of innovations in healthcare organizations [1]. However, these authors did not fully operationalize their model, making it difficult to test formally. The present paper represents a first step in operationalizing Greenhalgh et al.'s model by providing background, rationale, working definitions, and measurement of key constructs. Methods A systematic review of the literature was conducted for key words representing 53 separate sub-constructs from six of the model's broad constructs. Using an iterative process, we reviewed existing measures and utilized or adapted items. Where no one measure was deemed appropriate, we developed other items to measure the constructs through consensus. Results The review and iterative process of team consensus identified three types of data that can be used to operationalize the constructs in the model: survey items, interview questions, and administrative data. Specific examples of each are reported. Conclusion Despite limitations, the mixed-methods approach to measurement using the survey, interview measure, and administrative data can facilitate research on implementation by providing investigators with a measurement tool that captures most of the constructs identified by the Greenhalgh model. These measures are currently being used to collect data concerning the implementation of two evidence-based psychotherapies disseminated nationally within the Department of Veterans Affairs. Testing of psychometric properties and subsequent refinement should enhance the utility of the measures. PMID:22759451
Operational Risk Measurement of Chinese Commercial Banks Based on Extreme Value Theory
NASA Astrophysics Data System (ADS)
Song, Jiashan; Li, Yong; Ji, Feng; Peng, Cheng
Financial institutions and supervisory bodies agree on the need to strengthen the measurement and management of operational risk. This paper builds a model of operational risk losses based on the Peaks Over Threshold model, emphasizing a weighted least squares refinement of Hill's estimation method; it also discusses the small-sample situation and fixes the sample threshold more objectively, based on media-published data on the operational risk losses of major banks from 1994 to 2007.
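The quantity being refined is the Hill estimate of the tail index. A minimal classical Hill estimator, without the paper's weighted least squares improvement, can be sketched on simulated Pareto-distributed losses:

```python
# Classical Hill estimator of the tail index for heavy-tailed loss data.
# Losses are simulated from a Pareto distribution; real bank loss data differ.
import numpy as np

rng = np.random.default_rng(2)
alpha = 2.0                                  # true Pareto tail exponent
losses = rng.pareto(alpha, 5000) + 1.0       # classical Pareto with x_min = 1

def hill(sample, k):
    # Hill estimate of gamma = 1/alpha from the k largest order statistics.
    x = np.sort(sample)[::-1]
    return np.mean(np.log(x[:k])) - np.log(x[k])

gamma_hat = hill(losses, k=500)
print(f"Hill estimate of 1/alpha = {gamma_hat:.3f} (true {1 / alpha:.3f})")
```

The estimate is sensitive to the threshold choice k, which is exactly the selection problem the abstract addresses.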
Hybrid architecture for encoded measurement-based quantum computation
Zwerger, M.; Briegel, H. J.; Dür, W.
2014-01-01
We present a hybrid scheme for quantum computation that combines the modular structure of elementary building blocks used in the circuit model with the advantages of a measurement-based approach to quantum computation. We show how to construct optimal resource states of minimal size to implement elementary building blocks for encoded quantum computation in a measurement-based way, including states for error correction and encoded gates. The performance of the scheme is determined by the quality of the resource states; within the considered error model, we find a threshold on the order of 10% local noise per particle for fault-tolerant quantum computation and quantum communication. PMID:24946906
A Survey of Model Evaluation Approaches with a Tutorial on Hierarchical Bayesian Methods
ERIC Educational Resources Information Center
Shiffrin, Richard M.; Lee, Michael D.; Kim, Woojae; Wagenmakers, Eric-Jan
2008-01-01
This article reviews current methods for evaluating models in the cognitive sciences, including theoretically based approaches, such as Bayes factors and minimum description length measures; simulation approaches, including model mimicry evaluations; and practical approaches, such as validation and generalization measures. This article argues…
ERIC Educational Resources Information Center
Yeo, Seungsoo; Park, Sohee
2014-01-01
The purpose of this study was to examine the developmental difference in curriculum-based measurement (CBM) reading aloud performance between Grade 8 English-speaking students and English language learners (ELLs) using two theories of reading development: compensatory model and cumulative model. Fifty non-ELLs and 133 ELLs were administered the…
Mitigating Errors in External Respiratory Surrogate-Based Models of Tumor Position
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malinowski, Kathleen T.; Fischell Department of Bioengineering, University of Maryland, College Park, MD; McAvoy, Thomas J.
2012-04-01
Purpose: To investigate the effect of tumor site, measurement precision, tumor-surrogate correlation, training data selection, model design, and interpatient and interfraction variations on the accuracy of external marker-based models of tumor position. Methods and Materials: Cyberknife Synchrony system log files comprising synchronously acquired positions of external markers and the tumor from 167 treatment fractions were analyzed. The accuracy of Synchrony, ordinary-least-squares regression, and partial-least-squares regression models for predicting the tumor position from the external markers was evaluated. The quantity and timing of the data used to build the predictive model were varied. The effects of tumor-surrogate correlation and the precision in both the tumor and the external surrogate position measurements were explored by adding noise to the data. Results: The tumor position prediction errors increased during the duration of a fraction. Increasing the training data quantities did not always lead to more accurate models. Adding uncorrelated noise to the external marker-based inputs degraded the tumor-surrogate correlation models by 16% for partial-least-squares and 57% for ordinary-least-squares. External marker and tumor position measurement errors led to tumor position prediction changes 0.3-3.6 times the magnitude of the measurement errors, varying widely with model algorithm. The tumor position prediction errors were significantly associated with the patient index but not with the fraction index or tumor site. Partial-least-squares was as accurate as Synchrony and more accurate than ordinary-least-squares. Conclusions: The accuracy of surrogate-based inferential models of tumor position was affected by all the investigated factors, except for the tumor site and fraction index.
Data envelopment analysis in service quality evaluation: an empirical study
NASA Astrophysics Data System (ADS)
Najafi, Seyedvahid; Saati, Saber; Tavana, Madjid
2015-09-01
Service quality is often conceptualized as the comparison between service expectations and the actual performance perceptions. It enhances customer satisfaction, decreases customer defection, and promotes customer loyalty. Substantial literature has examined the concept of service quality, its dimensions, and measurement methods. We introduce the perceived service quality index (PSQI) as a single measure for evaluating the multiple-item service quality construct based on the SERVQUAL model. A slack-based measure (SBM) of efficiency with constant inputs is used to calculate the PSQI. In addition, a non-linear programming model based on the SBM is proposed to delineate an improvement guideline and improve service quality. An empirical study is conducted to assess the applicability of the proposed method. A large number of studies have used DEA as a benchmarking tool to measure service quality, but these models do not propose a coherent performance evaluation construct and consequently fail to deliver guidelines for improving service quality. The DEA models proposed in this study are designed to evaluate and improve service quality within a comprehensive framework and without any dependency on external data.
Apostol, Izydor; Kelner, Drew; Jiang, Xinzhao Grace; Huang, Gang; Wypych, Jette; Zhang, Xin; Gastwirt, Jessica; Chen, Kenneth; Fodor, Szilan; Hapuarachchi, Suminda; Meriage, Dave; Ye, Frank; Poppe, Leszek; Szpankowski, Wojciech
2012-12-01
To predict precision and other performance characteristics of chromatographic purity methods, which represent the most widely used form of analysis in the biopharmaceutical industry. We have conducted a comprehensive survey of purity methods, and show that all performance characteristics fall within narrow measurement ranges. This observation was used to develop a model called Uncertainty Based on Current Information (UBCI), which expresses these performance characteristics as a function of the signal and noise levels, hardware specifications, and software settings. We applied the UBCI model to assess the uncertainty of purity measurements, and compared the results to those from conventional qualification. We demonstrated that the UBCI model is suitable to dynamically assess method performance characteristics, based on information extracted from individual chromatograms. The model provides an opportunity for streamlining qualification and validation studies by implementing a "live validation" of test results utilizing UBCI as a concurrent assessment of measurement uncertainty. Therefore, UBCI can potentially mitigate the challenges associated with laborious conventional method validation and facilitates the introduction of more advanced analytical technologies during the method lifecycle.
Comparison of Nurse Staffing Measurements in Staffing-Outcomes Research.
Park, Shin Hye; Blegen, Mary A; Spetz, Joanne; Chapman, Susan A; De Groot, Holly A
2015-01-01
Investigators have used a variety of operational definitions of nursing hours of care in measuring nurse staffing for health services research. However, little is known about which approach is best for nurse staffing measurement. To examine whether various nursing hours measures yield different model estimations when predicting patient outcomes and to determine the best method to measure nurse staffing based on the model estimations. We analyzed data from the University HealthSystem Consortium for 2005. The sample comprised 208 hospital-quarter observations from 54 hospitals, representing information on 971 adult-care units and about 1 million inpatient discharges. We compared regression models using different combinations of staffing measures based on productive/nonproductive and direct-care/indirect-care hours. Akaike Information Criterion and Bayesian Information Criterion were used in the assessment of staffing measure performance. The models that included the staffing measure calculated from productive hours by direct-care providers were best, in general. However, the Akaike Information Criterion and Bayesian Information Criterion differences between models were small, indicating that distinguishing nonproductive and indirect-care hours from productive direct-care hours does not substantially affect the approximation of the relationship between nurse staffing and patient outcomes. This study is the first to explicitly evaluate various measures of nurse staffing. Productive hours by direct-care providers are the strongest measure related to patient outcomes and thus should be preferred in research on nurse staffing and patient outcomes.
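The AIC/BIC comparison logic can be sketched on simulated data. The staffing variables, effect sizes, and Gaussian-likelihood AIC/BIC formulas below are illustrative assumptions, not the UHC analysis.

```python
# Compare two staffing-style regression models with AIC and BIC under a
# Gaussian likelihood. Data are simulated: the outcome depends on direct-care
# hours but not on indirect hours, mirroring the abstract's conclusion.
import numpy as np

rng = np.random.default_rng(3)
n = 200
direct_hours = rng.normal(8, 1, n)
indirect_hours = rng.normal(2, 0.5, n)          # unrelated to the outcome here
outcome = 5.0 - 0.4 * direct_hours + rng.normal(0, 0.3, n)

def aic_bic(X, y):
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1] + 1                           # coefficients + error variance
    aic = len(y) * np.log(rss / len(y)) + 2 * k
    bic = len(y) * np.log(rss / len(y)) + k * np.log(len(y))
    return aic, bic

aic1, bic1 = aic_bic(direct_hours, outcome)      # direct-care hours model
aic2, bic2 = aic_bic(indirect_hours, outcome)    # indirect hours model
print(f"direct: AIC={aic1:.1f} BIC={bic1:.1f}; indirect: AIC={aic2:.1f} BIC={bic2:.1f}")
```

Lower AIC/BIC indicates the better-approximating model, which is how the study ranked its staffing measures.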
Using entropy measures to characterize human locomotion.
Leverick, Graham; Szturm, Tony; Wu, Christine Q
2014-12-01
Entropy measures have been widely used to quantify the complexity of theoretical and experimental dynamical systems. In this paper, the value of using entropy measures to characterize human locomotion is demonstrated based on their construct validity, predictive validity in a simple model of human walking and convergent validity in an experimental study. Results show that four of the five considered entropy measures increase meaningfully with the increased probability of falling in a simple passive bipedal walker model. The same four entropy measures also experienced statistically significant increases in response to increasing age and gait impairment caused by cognitive interference in an experimental study. Of the considered entropy measures, the proposed quantized dynamical entropy (QDE) and quantization-based approximation of sample entropy (QASE) offered the best combination of sensitivity to changes in gait dynamics and computational efficiency. Based on these results, entropy appears to be a viable candidate for assessing the stability of human locomotion.
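The proposed QDE and QASE measures are not reproduced here, but the standard sample entropy they approximate can be sketched. As expected for this family of measures, irregular noise scores higher than a regular sine.

```python
# Minimal sample entropy implementation (not the paper's QDE/QASE variants).
import numpy as np

def sample_entropy(x, m=2, r=None):
    # Standard sample entropy: -log(A/B), where B counts template matches of
    # length m and A of length m+1 (Chebyshev distance < r, no self-matches).
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    n = len(x)

    def count(mm):
        templates = np.array([x[i:i + mm] for i in range(n - mm)])
        c = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            c += np.sum(d < r)
        return c

    b, a = count(m), count(m + 1)
    return -np.log(a / b)

rng = np.random.default_rng(4)
noise = rng.normal(size=1000)
sine = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 1000))
print(f"SampEn noise={sample_entropy(noise):.2f}, sine={sample_entropy(sine):.2f}")
```

In the gait context, a step-timing or center-of-pressure series would replace these toy signals, and higher entropy would flag more irregular locomotion.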
Pargett, Michael; Umulis, David M
2013-07-15
Mathematical modeling of transcription factor and signaling networks is widely used to understand if and how a mechanism works, and to infer regulatory interactions that produce a model consistent with the observed data. Both of these approaches to modeling are informed by experimental data, however, much of the data available or even acquirable are not quantitative. Data that is not strictly quantitative cannot be used by classical, quantitative, model-based analyses that measure a difference between the measured observation and the model prediction for that observation. To bridge the model-to-data gap, a variety of techniques have been developed to measure model "fitness" and provide numerical values that can subsequently be used in model optimization or model inference studies. Here, we discuss a selection of traditional and novel techniques to transform data of varied quality and enable quantitative comparison with mathematical models. This review is intended to both inform the use of these model analysis methods, focused on parameter estimation, and to help guide the choice of method to use for a given study based on the type of data available. Applying techniques such as normalization or optimal scaling may significantly improve the utility of current biological data in model-based study and allow greater integration between disparate types of data. Copyright © 2013 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Schochet, Peter Z.; Chiang, Hanley S.
2010-01-01
This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…
Modeling and validation of spectral BRDF on material surface of space target
NASA Astrophysics Data System (ADS)
Hou, Qingyu; Zhi, Xiyang; Zhang, Huili; Zhang, Wei
2014-11-01
Methods for modeling and validating the spectral BRDF of space-target surface materials are presented. First, the microscopic characteristics of the materials were analyzed: a fiber-optic spectrometer was used to measure the directional reflectivity of typical material surfaces, and, to determine whether the material surface of a space target is isotropic, atomic force microscopy was used to measure the surface structure and obtain a Gaussian distribution model of microscopic surface-element heights. A spectral BRDF model was then constructed from the isotropy of the material surface and the Gaussian micro-facet distribution; the model characterizes both smooth and rough surfaces, describing space-target materials appropriately. Finally, a laboratory spectral BRDF measurement platform was set up, comprising a tungsten-halogen lamp illumination system, a fiber-optic spectrometer detection system, and a mechanical measurement system, with experimental control and data collection automated by computer. Yellow thermal-control material and a solar cell were measured, yielding the relationship between reflection angle and BRDF values at three wavelengths (380 nm, 550 nm, and 780 nm), and the difference between the theoretical model and the measured data was evaluated by relative RMS error. Data analysis shows that the relative RMS error is less than 6%, verifying the correctness of the spectral BRDF model.
SSDA code to apply data assimilation in soil water flow modeling: Documentation and user manual
USDA-ARS?s Scientific Manuscript database
Soil water flow models are based on simplified assumptions about the mechanisms, processes, and parameters of water retention and flow. That causes errors in soil water flow model predictions. Data assimilation (DA) with the ensemble Kalman filter (EnKF) corrects modeling results based on measured s...
Lee, K R; Dipaolo, B; Ji, X
2000-06-01
Calibration is the process of fitting a model based on reference data points (x, y), then using the model to estimate an unknown x based on a new measured response, y. In DNA assays, x is the concentration, and y is the measured signal volume. A four-parameter logistic model is frequently used for calibration of immunoassays when the response is optical density for enzyme-linked immunosorbent assay (ELISA) or adjusted radioactivity count for radioimmunoassay (RIA). Here, it is shown that the same model, or a linearized version of the curve, is equally useful for the calibration of a chemiluminescent hybridization assay for residual DNA in recombinant protein drugs and for calculation of performance measures of the assay.
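A minimal version of this calibration workflow, fitting a four-parameter logistic (4PL) curve to simulated reference points and inverting it to estimate an unknown concentration, might look as follows. The parameter values, concentrations, and noise level are invented for the example.

```python
# 4PL calibration sketch: fit reference (concentration, signal) points,
# then invert the fitted curve to estimate an unknown concentration.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: lower asymptote, b: slope factor, c: EC50, d: upper asymptote
    return d + (a - d) / (1.0 + (x / c) ** b)

rng = np.random.default_rng(5)
conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100], dtype=float)
signal = four_pl(conc, 0.05, 1.2, 5.0, 2.0) + rng.normal(0, 0.02, conc.size)

popt, _ = curve_fit(four_pl, conc, signal, p0=[0.0, 1.0, 1.0, 2.0],
                    bounds=([-1, 0.1, 0.01, 0], [5, 10, 1000, 10]))
a, b, c, d = popt

def invert(y):
    # Solve the fitted 4PL for x given a response y between the asymptotes.
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

y_new = four_pl(8.0, 0.05, 1.2, 5.0, 2.0)    # response at true conc = 8
est = invert(y_new)
print(f"estimated concentration = {est:.2f}")
```

The linearized alternative mentioned in the abstract would instead fit a logit-log transform of the same curve with ordinary linear regression.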
Estimating Biases for Regional Methane Fluxes using Co-emitted Tracers
NASA Astrophysics Data System (ADS)
Bambha, R.; Safta, C.; Michelsen, H. A.; Cui, X.; Jeong, S.; Fischer, M. L.
2017-12-01
Methane is a powerful greenhouse gas, and the development and improvement of emissions models rely on understanding the flux of methane released from anthropogenic sources relative to releases from other sources. Increasing production of shale oil and gas in the mid-latitudes and associated fugitive emissions are suspected to be a dominant contributor to the global methane increase. Landfills, sewage treatment, and other sources may be dominant sources in some parts of the U.S. Large discrepancies between emissions models present a great challenge to reconciling atmospheric measurements with inventory-based estimates for various emissions sectors. Current approaches for measuring regional emissions yield highly uncertain estimates because of the sparsity of measurement sites and the presence of multiple simultaneous sources. Satellites can provide wide spatial coverage at the expense of much lower measurement precision compared to ground-based instruments. Methods for effective assimilation of data from a variety of sources are critically needed to perform regional GHG attribution with existing measurements and to determine how to structure future measurement systems including satellites. We present a hierarchical Bayesian framework to estimate surface methane fluxes based on atmospheric concentration measurements and a Lagrangian transport model (Weather Research and Forecasting and Stochastic Time-Inverted Lagrangian Transport). Structural errors in the transport model are estimated with the help of co-emitted tracer species with well-defined decay rates. We conduct the analyses at regional scales that are based on similar geographical and meteorological conditions. For regions where data are informative, we further refine flux estimates by emissions sector and infer spatially and temporally varying biases parameterized as spectral random field representations.
Solution algorithm of dwell time in slope-based figuring model
NASA Astrophysics Data System (ADS)
Li, Yong; Zhou, Lin
2017-10-01
Surface slope profile is commonly used to evaluate X-ray reflective optics for synchrotron radiation beamlines. Moreover, measuring instruments for X-ray reflective optics usually output the surface slope profile rather than the surface height profile. To avoid conversion error, a slope-based figuring model is introduced for processing X-ray reflective optics, in place of the traditional surface height-based model. However, the pulse iteration method, which can quickly obtain the dwell time solution for the traditional height-based figuring model, cannot be applied to the slope-based figuring model, because the slope removal function takes both positive and negative values and has a complex asymmetric structure. To overcome this problem, we established an optimal mathematical model for the dwell time solution by introducing upper and lower limits on the dwell time together with a time-gradient constraint. We then used a constrained least-squares algorithm to solve for the dwell time in the slope-based figuring model. To validate the proposed algorithm, simulations and experiments were conducted. A flat mirror with an effective aperture of 80 mm was polished on an ion beam machine. After three polishing iterations, the surface slope profile error of the workpiece converged from RMS 5.65 μrad to RMS 1.12 μrad.
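A minimal one-dimensional sketch of the bounded least-squares dwell-time solve can be written with SciPy. The removal function, target profile, and bounds below are invented toy data, and the time-gradient constraint mentioned above is omitted; only the box constraints on dwell time are kept.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Toy 1-D version of the dwell-time problem: the desired slope-error removal
# s equals the slope removal function r convolved with the dwell time t.
n = 80
x = np.linspace(-1.0, 1.0, n)
# Antisymmetric "slope removal function" with positive and negative lobes,
# which is what defeats simple pulse-iteration solvers.
r = x * np.exp(-x**2 / 0.02)

# Circular convolution matrix A so that A @ t approximates the removal.
A = np.array([[r[(i - j) % n] for j in range(n)] for i in range(n)])

s = x * np.exp(-x**2 / 0.1)            # target slope error profile (toy)

# Constrained least squares: dwell time bounded between 0 and t_max.
t_max = 50.0
res = lsq_linear(A, s, bounds=(0.0, t_max))
t = res.x
```

Because t = 0 is always feasible, the bounded solve can only reduce the residual slope error relative to doing nothing.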
Wang, Gang; Zhao, Zhikai; Ning, Yongjie
2018-05-28
With the application of the coal mine Internet of Things (IoT), mobile measurement devices such as intelligent mine lamps have greatly increased the volume of moving measurement data. How to transmit these large amounts of mobile measurement data effectively has become an urgent problem. This paper presents a compressed sensing algorithm for the large amount of coal mine IoT moving measurement data based on a multi-hop network and total variation. Taking gas data in mobile measurement data as an example, two network models for the transmission of gas data flow, namely single-hop and multi-hop transmission modes, are investigated in depth, and a gas data compressed sensing collection model is built based on a multi-hop network. To utilize the sparse characteristics of gas data, the concept of total variation is introduced and a high-efficiency gas data compression and reconstruction method based on Total Variation Sparsity based on Multi-Hop (TVS-MH) is proposed. According to the simulation results, by using the proposed method, the moving measurement data flow from an underground distributed mobile network can be acquired and transmitted efficiently.
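The idea of total-variation-regularized compressed sensing can be illustrated with a small synthetic example: a piecewise-constant signal is recovered from fewer random projections than samples by penalizing its total variation. This is a generic TV-CS sketch, not the paper's TVS-MH algorithm, and all data are invented.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Toy piecewise-constant "gas concentration" signal along a line of nodes.
n = 60
x_true = np.zeros(n)
x_true[10:25] = 1.0
x_true[35:50] = 0.6

# Compressed measurements: m < n random projections.
m = 30
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x_true + 0.01 * rng.standard_normal(m)

# Smoothed TV objective: ||y - Phi x||^2 + lam * sum sqrt(dx^2 + eps).
lam, eps = 0.05, 1e-6

def objective(x):
    dx = np.diff(x)
    return np.sum((y - Phi @ x) ** 2) + lam * np.sum(np.sqrt(dx**2 + eps))

def grad(x):
    dx = np.diff(x)
    g = 2.0 * Phi.T @ (Phi @ x - y)
    t = dx / np.sqrt(dx**2 + eps)
    g[:-1] -= lam * t          # d/dx[i] of sqrt term with dx = x[i+1]-x[i]
    g[1:] += lam * t
    return g

res = minimize(objective, np.zeros(n), jac=grad, method="L-BFGS-B")
x_rec = res.x
rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```

Since the smoothed objective is convex, the optimizer reaches the global minimum, and the gradient-sparse signal is recovered well from only half as many measurements as samples.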
Strauss, Daniel J; Delb, Wolfgang; D'Amelio, Roberto; Low, Yin Fen; Falkai, Peter
2008-02-01
Large-scale neural correlates of the tinnitus decompensation might be used for an objective evaluation of therapies and neurofeedback based therapeutic approaches. In this study, we try to identify large-scale neural correlates of the tinnitus decompensation using wavelet phase stability criteria of single sweep sequences of late auditory evoked potentials as synchronization stability measure. The extracted measure provided an objective quantification of the tinnitus decompensation and allowed for a reliable discrimination between a group of compensated and decompensated tinnitus patients. We provide an interpretation for our results by a neural model of top-down projections based on the Jastreboff tinnitus model combined with the adaptive resonance theory which has not been applied to model tinnitus so far. Using this model, our stability measure of evoked potentials can be linked to the focus of attention on the tinnitus signal. It is concluded that the wavelet phase stability of late auditory evoked potential single sweeps might be used as objective tinnitus decompensation measure and can be interpreted in the framework of the Jastreboff tinnitus model and adaptive resonance theory.
NASA Astrophysics Data System (ADS)
Baidillah, Marlin R.; Takei, Masahiro
2017-06-01
A nonlinear normalization model, called the exponential model, has been developed for electrical capacitance tomography (ECT) with external electrodes under gap permittivity conditions. The exponential normalization model is proposed based on the inherently nonlinear relationship between the mixture permittivity and the measured capacitance due to the gap permittivity of the inner wall. The parameters of the exponential equation are derived from an exponential fitting curve based on simulation, and a scaling function is added to adjust for the experimental system condition. The exponential normalization model was applied to two-dimensional low- and high-contrast dielectric distribution phantoms in both simulation and experimental studies. The proposed normalization model has been compared with other normalization models, i.e., the Parallel, Series, Maxwell and Böttcher models. Based on the comparison of image reconstruction results, the exponential model reliably predicts the nonlinear normalization of measured capacitance for both low- and high-contrast dielectric distributions.
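The fitting step can be sketched as follows: an assumed exponential capacitance-permittivity relation is fitted to calibration data, then inverted to give a normalized capacitance in [0, 1]. The functional form, parameter values, and calibration data below are invented for illustration and are not the paper's calibrated model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical calibration data: measured capacitance vs. mixture
# permittivity, nonlinear because of the inner-wall gap permittivity.
eps_mix = np.linspace(1.0, 4.0, 9)                    # relative permittivity
C_meas = 0.8 * (1.0 - np.exp(-0.9 * (eps_mix - 1.0))) + 0.05  # pF, synthetic

def exp_model(eps, a, b, c):
    # Assumed exponential form: C = a * (1 - exp(-b * (eps - 1))) + c
    return a * (1.0 - np.exp(-b * (eps - 1.0))) + c

popt, _ = curve_fit(exp_model, eps_mix, C_meas, p0=(1.0, 1.0, 0.0))

# Normalized capacitance maps raw C onto [0, 1] between the low- and
# high-permittivity calibration points by inverting the exponential.
C_low, C_high = C_meas[0], C_meas[-1]

def normalize(C):
    return (np.log1p(-(C - popt[2]) / popt[0]) /
            np.log1p(-(C_high - popt[2]) / popt[0]))
```

By construction the normalization returns 0 at the low-permittivity calibration point and 1 at the high-permittivity one, while correcting the nonlinear response in between.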
Novel model of an AlGaN/GaN high electron mobility transistor based on an artificial neural network
NASA Astrophysics Data System (ADS)
Cheng, Zhi-Qun; Hu, Sha; Liu, Jun; Zhang, Qi-Jun
2011-03-01
In this paper we present a novel approach to modeling AlGaN/GaN high electron mobility transistor (HEMT) with an artificial neural network (ANN). The AlGaN/GaN HEMT device structure and its fabrication process are described. The circuit-based Neuro-space mapping (neuro-SM) technique is studied in detail. The EEHEMT model is implemented according to the measurement results of the designed device, which serves as a coarse model. An ANN is proposed to model AlGaN/GaN HEMT based on the coarse model. Its optimization is performed. The simulation results from the model are compared with the measurement results. It is shown that the simulation results obtained from the ANN model of AlGaN/GaN HEMT are more accurate than those obtained from the EEHEMT model. Project supported by the National Natural Science Foundation of China (Grant No. 60776052).
Model-based aberration correction in a closed-loop wavefront-sensor-less adaptive optics system.
Song, H; Fraanje, R; Schitter, G; Kroese, H; Vdovin, G; Verhaegen, M
2010-11-08
In many scientific and medical applications, such as laser systems and microscopes, wavefront-sensor-less (WFSless) adaptive optics (AO) systems are used to improve the laser beam quality or the image resolution by correcting the wavefront aberration in the optical path. The lack of direct wavefront measurement in WFSless AO systems makes efficient aberration correction challenging. This paper presents an aberration correction approach for WFSless AO systems based on a model of the WFSless AO system and a small number of intensity measurements, where the model is identified from the input-output data of the WFSless AO system by black-box identification. This approach is validated in an experimental setup with 20 static aberrations having Kolmogorov spatial distributions. By correcting N=9 Zernike modes (N is the number of aberration modes), an intensity improvement from 49% of the maximum value to 89% has been achieved on average based on N+5=14 intensity measurements. With the worst initial intensity, an improvement from 17% of the maximum value to 86% has been achieved based on N+4=13 intensity measurements.
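The principle of model-based WFSless correction can be shown with a toy quadratic intensity model: near the optimum the focal intensity drops quadratically with the residual aberration, so a handful of probe measurements suffice to solve for the aberration coefficients. The quadratic model and all numbers below are an illustrative assumption, not the paper's identified black-box model.

```python
import numpy as np

# Toy quadratic-metric model: I(u) = I_max - ||u - a||^2, where u is the
# applied corrector command and a the unknown aberration (invented units).
rng = np.random.default_rng(2)
N = 9                                   # number of Zernike modes corrected
a = rng.uniform(-0.5, 0.5, N)           # unknown aberration coefficients
I_max = 1.0

def intensity(u):
    return I_max - np.sum((u - a) ** 2)

# N+1 measurements: one at zero command, then one poke per mode.
delta = 0.1
I0 = intensity(np.zeros(N))
a_hat = np.empty(N)
for i in range(N):
    e = np.zeros(N); e[i] = delta
    # I(e) - I0 = -delta^2 + 2*delta*a_i  =>  solve for a_i
    a_hat[i] = (intensity(e) - I0 + delta**2) / (2.0 * delta)

# Applying u = a_hat restores the intensity to (near) its maximum.
I_corrected = intensity(a_hat)
```

In this noise-free sketch the recovery is exact; real systems need the few extra measurements reported above to average out noise and model error.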
An acoustic glottal source for vocal tract physical models
NASA Astrophysics Data System (ADS)
Hannukainen, Antti; Kuortti, Juha; Malinen, Jarmo; Ojalammi, Antti
2017-11-01
A sound source is proposed for the acoustic measurement of physical models of the human vocal tract. The physical models are produced by fast prototyping, based on magnetic resonance imaging during prolonged vowel production. The sound source, accompanied by custom signal processing algorithms, is used for two kinds of measurements from physical models of the vocal tract: (i) amplitude frequency response and resonant frequency measurements, and (ii) signal reconstructions at the source output according to a target pressure waveform with measurements at the mouth position. The proposed source and the software are validated by computational acoustics experiments and measurements on a physical model of the vocal tract corresponding to the vowels [] of a male speaker.
A new leakage measurement method for damaged seal material
NASA Astrophysics Data System (ADS)
Wang, Shen; Yao, Xue Feng; Yang, Heng; Yuan, Li; Dong, Yi Feng
2018-07-01
In this paper, a new leakage measurement method based on the temperature field and temperature gradient field is proposed for detecting the leakage location and measuring the leakage rate in damaged seal material. First, a heat transfer leakage model is established, which can calculate the leakage rate based on the temperature gradient field near the damaged zone. Second, a finite element model of an infinite plate with a damaged zone is built to calculate the leakage rate, which fits the simulated leakage rate well. Finally, specimens in a tubular rubber seal with different damage shapes are used to conduct the leakage experiment, validating the correctness of this new measurement principle for the leakage rate and the leakage position. The results indicate the feasibility of the leakage measurement method for damaged seal material based on the temperature gradient field from infrared thermography.
Entropy Measurement for Biometric Verification Systems.
Lim, Meng-Hui; Yuen, Pong C
2016-05-01
Biometric verification systems are designed to accept multiple similar biometric measurements per user due to inherent intrauser variations in the biometric data. This is important to preserve reasonable acceptance rate of genuine queries and the overall feasibility of the recognition system. However, such acceptance of multiple similar measurements decreases the imposter's difficulty of obtaining a system-acceptable measurement, thus resulting in a degraded security level. This deteriorated security needs to be measurable to provide truthful security assurance to the users. Entropy is a standard measure of security. However, the entropy formula is applicable only when there is a single acceptable possibility. In this paper, we develop an entropy-measuring model for biometric systems that accepts multiple similar measurements per user. Based on the idea of guessing entropy, the proposed model quantifies biometric system security in terms of adversarial guessing effort for two practical attacks. Excellent agreement between analytic and experimental simulation-based measurement results on a synthetic and a benchmark face dataset justifies the correctness of our model and thus the feasibility of the proposed entropy-measuring approach.
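The guessing-entropy idea underlying the model can be sketched in a few lines: it is the expected number of guesses an optimal adversary needs when trying the most probable system-acceptable measurements first. The probability values below are invented for illustration.

```python
# Guessing entropy of a discrete distribution over acceptable measurements.
def guessing_entropy(probs):
    ordered = sorted(probs, reverse=True)           # optimal guessing order
    return sum((i + 1) * p for i, p in enumerate(ordered))

# Uniform case: with M equally likely acceptable measurements the expected
# guess count is (M + 1) / 2, which shrinks as the acceptance region grows.
uniform = [1.0 / 8] * 8
g_uniform = guessing_entropy(uniform)               # (8 + 1) / 2 = 4.5

# A skewed distribution is easier to guess than a uniform one of equal size.
skewed = [0.5, 0.2, 0.1, 0.1, 0.05, 0.05]
g_skewed = guessing_entropy(skewed)
```

This captures the paper's point: widening the set of acceptable measurements (or skewing their likelihoods) lowers the adversary's expected guessing effort and hence the effective security level.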
Wang, Hongyuan; Zhang, Wei; Dong, Aotuo
2012-11-10
A modeling and validation method of photometric characteristics of the space target was presented in order to track and identify different satellites effectively. The background radiation characteristics models of the target were built based on blackbody radiation theory. The geometry characteristics of the target were illustrated by the surface equations based on its body coordinate system. The material characteristics of the target surface were described by a bidirectional reflectance distribution function model, which considers the character of surface Gauss statistics and microscale self-shadow and is obtained by measurement and modeling in advance. The contributing surfaces of the target to observation system were determined by coordinate transformation according to the relative position of the space-based target, the background radiation sources, and the observation platform. Then a mathematical model on photometric characteristics of the space target was built by summing reflection components of all the surfaces. Photometric characteristics simulation of the space-based target was achieved according to its given geometrical dimensions, physical parameters, and orbital parameters. Experimental validation was made based on the scale model of the satellite. The calculated results fit well with the measured results, which indicates the modeling method of photometric characteristics of the space target is correct.
Gagné, Mathieu; Moore, Lynne; Beaudoin, Claudia; Batomen Kuimi, Brice Lionel; Sirois, Marie-Josée
2016-03-01
The International Classification of Diseases (ICD) is the main classification system used for population-based injury surveillance activities but does not contain information on injury severity. ICD-based injury severity measures can be empirically derived or mapped, but no single approach has been formally recommended. This study aimed to compare the performance of ICD-based injury severity measures to predict in-hospital mortality among injury-related admissions. A systematic review and a meta-analysis were conducted. MEDLINE, EMBASE, and Global Health databases were searched from their inception through September 2014. Observational studies that assessed the performance of ICD-based injury severity measures to predict in-hospital mortality and reported discriminative ability using the area under a receiver operating characteristic curve (AUC) were included. Metrics of model performance were extracted. Pooled AUC were estimated under random-effects models. Twenty-two eligible studies reported 72 assessments of discrimination on ICD-based injury severity measures. Reported AUC ranged from 0.681 to 0.958. Of the 72 assessments, 46 showed excellent (0.80 ≤ AUC < 0.90) and 6 outstanding (AUC ≥ 0.90) discriminative ability. Pooled AUC for ICD-based Injury Severity Score (ICISS) based on the product of traditional survival proportions was significantly higher than measures based on ICD mapped to Abbreviated Injury Scale (AIS) scores (0.863 vs. 0.825 for ICDMAP-ISS [p = 0.005] and ICDMAP-NISS [p = 0.016]). Similar results were observed when studies were stratified by the type of data used (trauma registry or hospital discharge) or the provenance of survival proportions (internally or externally derived). However, among studies published after 2003 the Trauma Mortality Prediction Model based on ICD-9 codes (TMPM-9) demonstrated discriminative ability superior to that of ICISS using the product of traditional survival proportions (0.850 vs. 0.802, p = 0.002).
Models generally showed poor calibration. ICISS using the product of traditional survival proportions and TMPM-9 predict mortality more accurately than those mapped to AIS codes and should be preferred for describing injury severity when ICD is used to record injury diagnoses. Systematic review and meta-analysis, level III.
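The product-of-survival-proportions ICISS referred to above is straightforward to compute: each ICD injury code carries an empirically derived survival risk ratio (SRR), and the patient's score is their product over all injury diagnoses. The SRR values and ICD-10 codes below are invented for illustration, not taken from any published SRR table.

```python
# Hypothetical survival risk ratios keyed by ICD-10 injury code.
srr_table = {
    "S06.5": 0.85,   # traumatic subdural haemorrhage (invented SRR)
    "S27.3": 0.92,   # other injury of lung (invented SRR)
    "S72.0": 0.98,   # fracture of neck of femur (invented SRR)
}

def iciss(icd_codes, srrs, default_srr=1.0):
    """Product-of-SRRs ICISS; lower values indicate more severe injury."""
    score = 1.0
    for code in icd_codes:
        score *= srrs.get(code, default_srr)
    return score

patient = ["S06.5", "S27.3", "S72.0"]
severity = iciss(patient, srr_table)   # 0.85 * 0.92 * 0.98
```

Each additional injury multiplies the score downward, so multiply-injured patients receive lower predicted survival, which is what gives ICISS its discriminative ability for in-hospital mortality.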
Population-based estimates of pesticide intake are needed to characterize exposure for particular demographic groups based on their dietary behaviors. Regression modeling performed on measurements of selected pesticides in composited duplicate diet samples allowed (1) estimation ...
NASA Astrophysics Data System (ADS)
Liu, Q.
2011-09-01
First, research advances in radiation transfer modeling of multi-scale remote sensing data are presented: after a general overview of remote sensing radiation transfer modeling, several recent research advances are described, including a leaf spectrum model (dPROSPECT), vegetation canopy BRDF models, directional thermal infrared emission models (TRGM, SLEC), rugged mountain area radiation models, and kernel-driven models. Then, new methodologies for land surface parameter inversion based on multi-source remote sensing data are proposed, taking land surface albedo, leaf area index, temperature/emissivity, and surface net radiation as examples. A new synthetic land surface parameter quantitative remote sensing product generation system is designed, and the software system prototype will be demonstrated. Finally, multi-scale field experiment campaigns, such as the field campaigns in Gansu and Beijing, China, are introduced briefly. Ground-based, tower-based, and airborne multi-angular measurement systems have been built to measure directional reflectance, emission and scattering characteristics from visible, near-infrared, thermal infrared and microwave bands for model validation and calibration. A remote sensing pixel-scale "true value" measurement strategy has been designed to obtain ground "true values" of LST, ALBEDO, LAI, soil moisture, ET, etc., at the 1-km2 scale for remote sensing product validation.
Contact Versus Non-Contact Measurement of a Helicopter Main Rotor Composite Blade
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luczak, Marcin; Dziedziech, Kajetan; Peeters, Bart
2010-05-28
The dynamic characterization of lightweight structures is particularly complex as the impact of the weight of sensors and instrumentation (cables, mounting of exciters...) can distort the results. Varying mass loading or constraint effects between partial measurements may determine several errors on the final conclusions. Frequency shifts can lead to erroneous interpretations of the dynamics parameters. Typically these errors remain limited to a few percent. Inconsistent data sets however can result in major processing errors, with all related consequences towards applications based on the consistency assumption, such as global modal parameter identification, model-based damage detection and FRF-based matrix inversion in substructuring, load identification and transfer path analysis [1]. This paper addresses the subject of accuracy in the context of the measurement of the dynamic properties of a particular lightweight structure. It presents a comprehensive comparative study between the use of accelerometer, laser vibrometer (scanning LDV) and PU-probe (acoustic particle velocity and pressure) measurements to measure the structural responses, with as final aim the comparison of modal model quality assessment. The object of the investigation is a composite material blade from the main rotor of a helicopter. The presented results are part of an extensive test campaign performed with application of SIMO, MIMO, random and harmonic excitation, and the use of the mentioned contact and non-contact measurement techniques. The advantages and disadvantages of the applied instrumentation are discussed. Presented are real-life measurement problems related to the different set up conditions. Finally an analysis of estimated models is made in view of assessing the applicability of the various measurement approaches for successful fault detection based on modal parameters observation as well as in uncertain non-deterministic numerical model updating.
Contact Versus Non-Contact Measurement of a Helicopter Main Rotor Composite Blade
NASA Astrophysics Data System (ADS)
Luczak, Marcin; Dziedziech, Kajetan; Vivolo, Marianna; Desmet, Wim; Peeters, Bart; Van der Auweraer, Herman
2010-05-01
The dynamic characterization of lightweight structures is particularly complex as the impact of the weight of sensors and instrumentation (cables, mounting of exciters…) can distort the results. Varying mass loading or constraint effects between partial measurements may determine several errors on the final conclusions. Frequency shifts can lead to erroneous interpretations of the dynamics parameters. Typically these errors remain limited to a few percent. Inconsistent data sets however can result in major processing errors, with all related consequences towards applications based on the consistency assumption, such as global modal parameter identification, model-based damage detection and FRF-based matrix inversion in substructuring, load identification and transfer path analysis [1]. This paper addresses the subject of accuracy in the context of the measurement of the dynamic properties of a particular lightweight structure. It presents a comprehensive comparative study between the use of accelerometer, laser vibrometer (scanning LDV) and PU-probe (acoustic particle velocity and pressure) measurements to measure the structural responses, with as final aim the comparison of modal model quality assessment. The object of the investigation is a composite material blade from the main rotor of a helicopter. The presented results are part of an extensive test campaign performed with application of SIMO, MIMO, random and harmonic excitation, and the use of the mentioned contact and non-contact measurement techniques. The advantages and disadvantages of the applied instrumentation are discussed. Presented are real-life measurement problems related to the different set up conditions. Finally an analysis of estimated models is made in view of assessing the applicability of the various measurement approaches for successful fault detection based on modal parameters observation as well as in uncertain non-deterministic numerical model updating.
Validation of radiocarpal joint contact models based on images from a clinical MRI scanner.
Johnson, Joshua E; McIff, Terence E; Lee, Phil; Toby, E Bruce; Fischer, Kenneth J
2014-01-01
This study was undertaken to assess magnetic resonance imaging (MRI)-based radiocarpal surface contact models of functional loading in a clinical MRI scanner for future in vivo studies, by comparison with experimental measures from three cadaver forearm specimens. Experimental data were acquired using a Tekscan sensor during simulated light grasp. Magnetic resonance (MR) images were used to obtain model geometry and kinematics (image registration). Peak contact pressures (PPs) and average contact pressures (APs), contact forces and contact areas were determined in the radiolunate and radioscaphoid joints. Contact area was also measured directly from MR images acquired with load and compared with model data. Based on the validation criteria (within 25% of experimental data), out of the six articulations (three specimens with two articulations each), two met the criterion for AP (0%, 14%); one for peak pressure (20%); one for contact force (5%); four for contact area with respect to experiment (8%, 13%, 19% and 23%), and three contact areas met the criterion with respect to direct measurements (14%, 21% and 21%). Absolute differences between model and experimental PPs were reasonably low (within 2.5 MPa). Overall, the results indicate that MRI-based models generated from a 3T clinical MR scanner appear sufficient to obtain clinically relevant data.
Commentary on New Metrics, Measures, and Uses for Fluency Data
ERIC Educational Resources Information Center
Christ, Theodore J.; Ardoin, Scott P.
2015-01-01
Fluency and rate-based assessments, such as curriculum-based measurement, are frequently used to screen and evaluate student progress. The application of such measures is especially prevalent within special education and response-to-intervention models of prevention and early intervention. Although there is extensive research and professional…
Friston, Karl J.; Bastos, André M.; Oswal, Ashwini; van Wijk, Bernadette; Richter, Craig; Litvak, Vladimir
2014-01-01
This technical paper offers a critical re-evaluation of (spectral) Granger causality measures in the analysis of biological time series. Using realistic (neural mass) models of coupled neuronal dynamics, we evaluate the robustness of parametric and nonparametric Granger causality. Starting from a broad class of generative (state-space) models of neuronal dynamics, we show how their Volterra kernels prescribe the second-order statistics of their response to random fluctuations; characterised in terms of cross-spectral density, cross-covariance, autoregressive coefficients and directed transfer functions. These quantities in turn specify Granger causality — providing a direct (analytic) link between the parameters of a generative model and the expected Granger causality. We use this link to show that Granger causality measures based upon autoregressive models can become unreliable when the underlying dynamics is dominated by slow (unstable) modes — as quantified by the principal Lyapunov exponent. However, nonparametric measures based on causal spectral factors are robust to dynamical instability. We then demonstrate how both parametric and nonparametric spectral causality measures can become unreliable in the presence of measurement noise. Finally, we show that this problem can be finessed by deriving spectral causality measures from Volterra kernels, estimated using dynamic causal modelling. PMID:25003817
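The parametric (autoregressive) Granger causality evaluated above reduces to comparing residual variances of nested AR models: if adding the candidate cause's past shrinks the prediction error of the effect, causality is inferred. Below is a minimal bivariate sketch with synthetic data in which x1 drives x2; it is a generic order-1 AR illustration, not the paper's neural mass simulations.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic bivariate system where x1 drives x2 with a one-step lag.
T = 2000
x1 = np.zeros(T); x2 = np.zeros(T)
for t in range(1, T):
    x1[t] = 0.5 * x1[t - 1] + rng.standard_normal()
    x2[t] = 0.5 * x2[t - 1] + 0.8 * x1[t - 1] + rng.standard_normal()

def granger_stat(cause, effect, lag=1):
    """Log ratio of restricted to full AR residual variances (order 1)."""
    y = effect[lag:]
    X_r = np.column_stack([effect[:-lag], np.ones(len(y))])
    X_f = np.column_stack([effect[:-lag], cause[:-lag], np.ones(len(y))])
    r_r = y - X_r @ np.linalg.lstsq(X_r, y, rcond=None)[0]
    r_f = y - X_f @ np.linalg.lstsq(X_f, y, rcond=None)[0]
    return np.log(np.var(r_r) / np.var(r_f))

g_12 = granger_stat(x1, x2)   # clearly positive: x1 Granger-causes x2
g_21 = granger_stat(x2, x1)   # near zero: no reverse influence
```

The paper's caution applies directly to this estimator: when the AR fit is poor (slow unstable modes, measurement noise), these residual-variance ratios can mislead.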
Assessing the accuracy of predictive models for numerical data: Not r nor r2, why not? Then what?
2017-01-01
Assessing the accuracy of predictive models is critical because predictive models have been increasingly used across various disciplines and predictive accuracy determines the quality of the resultant predictions. The Pearson product-moment correlation coefficient (r) and the coefficient of determination (r2) are among the most widely used measures for assessing predictive models for numerical data, although they are argued to be biased, insufficient and misleading. In this study, geometrical graphs were used to illustrate what is used in the calculation of r and r2, and simulations were used to demonstrate the behaviour of r and r2 and to compare three accuracy measures under various scenarios. Relevant confusions about r and r2 have been clarified. The calculation of r and r2 is not based on the differences between the predicted and observed values. The existing error measures suffer various limitations and are unable to indicate the accuracy. Variance explained by predictive models based on cross-validation (VEcv) is free of these limitations and is a reliable accuracy measure. Legates and McCabe's efficiency (E1) is also an alternative accuracy measure. The r and r2 do not measure accuracy and are incorrect accuracy measures. The existing error measures suffer limitations. VEcv and E1 are recommended for assessing the accuracy. The application of these accuracy measures would encourage accuracy-improved predictive models to be developed to generate predictions for evidence-informed decision-making. PMID:28837692
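The two recommended measures are simple to compute from observed values and cross-validation predictions. The sketch below also shows why r2 alone misleads: a prediction that is perfectly correlated with the observations but systematically offset still has r2 = 1, yet its VEcv is penalised. The data are invented.

```python
import numpy as np

def vecv(obs, pred):
    """Variance explained by cross-validation predictions, in percent."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return (1.0 - np.sum((obs - pred) ** 2) /
            np.sum((obs - obs.mean()) ** 2)) * 100.0

def legates_mccabe_e1(obs, pred):
    """Legates and McCabe's efficiency, based on absolute errors."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 1.0 - np.sum(np.abs(obs - pred)) / np.sum(np.abs(obs - obs.mean()))

obs = [3.0, 5.0, 7.0, 9.0, 11.0]
perfect = obs
biased = [x + 2.0 for x in obs]        # perfectly correlated, but offset by 2

v_perfect = vecv(obs, perfect)         # 100.0: errors are all zero
v_biased = vecv(obs, biased)           # penalised despite r2 = 1
e1_biased = legates_mccabe_e1(obs, biased)
```

Because both measures are built from prediction errors rather than covariation, any constant bias or scale distortion lowers them, which is exactly the property r and r2 lack.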
Accurate prediction of energy expenditure using a shoe-based activity monitor.
Sazonova, Nadezhda; Browning, Raymond C; Sazonov, Edward
2011-07-01
The aim of this study was to develop and validate a method for predicting energy expenditure (EE) using a footwear-based system with integrated accelerometer and pressure sensors. We developed a footwear-based device with an embedded accelerometer and insole pressure sensors for the prediction of EE. The data from the device can be used to perform accurate recognition of major postures and activities and to estimate EE using the acceleration, pressure, and posture/activity classification information in a branched algorithm without the need for individual calibration. We measured EE via indirect calorimetry as 16 adults (body mass index=19-39 kg·m) performed various low- to moderate-intensity activities and compared measured versus predicted EE using several models based on the acceleration and pressure signals. Inclusion of pressure data resulted in better accuracy of EE prediction during static postures such as sitting and standing. The activity-based branched model that included predictors from accelerometer and pressure sensors (BACC-PS) achieved the lowest error (e.g., root mean squared error (RMSE)=0.69 METs) compared with the accelerometer-only-based branched model BACC (RMSE=0.77 METs) and nonbranched model (RMSE=0.94-0.99 METs). Comparison of EE prediction models using data from both legs versus models using data from a single leg indicates that only one shoe needs to be equipped with sensors. These results suggest that foot acceleration combined with insole pressure measurement, when used in an activity-specific branched model, can accurately estimate the EE associated with common daily postures and activities. The accuracy and unobtrusiveness of a footwear-based device may make it an effective physical activity monitoring tool.
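The structure of a branched EE model can be sketched as a two-stage predictor: sensor features first select a posture/activity branch, then a branch-specific regression maps activity counts to METs. The thresholds and coefficients below are invented placeholders, not the fitted BACC-PS parameters.

```python
# Branch selection from insole pressure and accelerometer features.
def classify_activity(mean_insole_pressure, accel_variance):
    # Hypothetical thresholds: high motion => walking; otherwise pressure
    # separates standing (weight on feet) from sitting.
    if accel_variance > 0.5:
        return "walking"
    return "standing" if mean_insole_pressure > 30.0 else "sitting"

# Branch-specific linear models: METs = intercept + slope * accel_counts.
BRANCH_MODELS = {
    "sitting":  (1.0, 0.001),
    "standing": (1.3, 0.002),
    "walking":  (2.0, 0.010),
}

def predict_mets(mean_insole_pressure, accel_variance, accel_counts):
    branch = classify_activity(mean_insole_pressure, accel_variance)
    intercept, slope = BRANCH_MODELS[branch]
    return branch, intercept + slope * accel_counts

branch, mets = predict_mets(mean_insole_pressure=45.0,
                            accel_variance=0.9, accel_counts=300.0)
```

Branching is what lets pressure data help during static postures: sitting and standing produce nearly identical accelerations but very different insole pressures, so they land in different branches with different intercepts.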
Prioritizing Measures of Digital Patient Engagement: A Delphi Expert Panel Study
2017-01-01
Background Establishing a validated scale of patient engagement through use of information technology (ie, digital patient engagement) is the first step to understanding its role in health and health care quality, outcomes, and efficient implementation by health care providers and systems. Objective The aim of this study was to develop and prioritize measures of digital patient engagement based on patients’ use of the US Department of Veterans Affairs (VA)’s MyHealtheVet (MHV) portal, focusing on the MHV/Blue Button and Secure Messaging functions. Methods We aligned two models from the information systems and organizational behavior literatures to create a theory-based model of digital patient engagement. On the basis of this model, we conducted ten key informant interviews to identify potential measures from existing VA studies and consolidated the measures. We then conducted three rounds of modified Delphi rating by 12 national eHealth experts via Web-based surveys to prioritize the measures. Results All 12 experts completed the study’s three rounds of modified Delphi ratings, resulting in two sets of final candidate measures representing digital patient engagement for Secure Messaging (58 measures) and MHV/Blue Button (71 measures). These measure sets map to Donabedian’s three types of quality measures: (1) antecedents (eg, patient demographics); (2) processes (eg, a novel measure of Web-based care quality); and (3) outcomes (eg, patient engagement). Conclusions This national expert panel study using a modified Delphi technique prioritized candidate measures to assess digital patient engagement through patients’ use of VA’s My HealtheVet portal. The process yielded two robust measures sets prepared for future piloting and validation in surveys among Veterans. PMID:28550008
Prioritizing Measures of Digital Patient Engagement: A Delphi Expert Panel Study.
Garvin, Lynn A; Simon, Steven R
2017-05-26
Establishing a validated scale of patient engagement through use of information technology (ie, digital patient engagement) is the first step to understanding its role in health and health care quality, outcomes, and efficient implementation by health care providers and systems. The aim of this study was to develop and prioritize measures of digital patient engagement based on patients' use of the US Department of Veterans Affairs (VA)'s MyHealtheVet (MHV) portal, focusing on the MHV/Blue Button and Secure Messaging functions. We aligned two models from the information systems and organizational behavior literatures to create a theory-based model of digital patient engagement. On the basis of this model, we conducted ten key informant interviews to identify potential measures from existing VA studies and consolidated the measures. We then conducted three rounds of modified Delphi rating by 12 national eHealth experts via Web-based surveys to prioritize the measures. All 12 experts completed the study's three rounds of modified Delphi ratings, resulting in two sets of final candidate measures representing digital patient engagement for Secure Messaging (58 measures) and MHV/Blue Button (71 measures). These measure sets map to Donabedian's three types of quality measures: (1) antecedents (eg, patient demographics); (2) processes (eg, a novel measure of Web-based care quality); and (3) outcomes (eg, patient engagement). This national expert panel study using a modified Delphi technique prioritized candidate measures to assess digital patient engagement through patients' use of VA's My HealtheVet portal. The process yielded two robust measures sets prepared for future piloting and validation in surveys among Veterans. ©Lynn A Garvin, Steven R Simon. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 26.05.2017.
Context-based virtual metrology
NASA Astrophysics Data System (ADS)
Ebersbach, Peter; Urbanowicz, Adam M.; Likhachev, Dmitriy; Hartig, Carsten; Shifrin, Michael
2018-03-01
Hybrid and data feed-forward methodologies are well established for advanced optical process control solutions in high-volume semiconductor manufacturing. Appropriate information from previous measurements, transferred into advanced optical model(s) at following step(s), provides enhanced accuracy of the measured topographic (thicknesses, critical dimensions, etc.) and material parameters. In some cases, hybrid or feed-forward data are missing or invalid for individual dies or for a whole wafer. We focus on virtual metrology approaches to re-create hybrid or feed-forward data inputs in high-volume manufacturing. We discuss reconstruction of missing data inputs based on various interpolation and extrapolation schemes and on information about the wafer's process history. Moreover, we demonstrate a data reconstruction approach based on machine learning techniques utilizing the optical model and measured spectra. Finally, we investigate metrics that allow one to assess the error margin of virtual data inputs.
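The interpolation-based reconstruction of missing feed-forward inputs described above can be sketched in a few lines. The data layout (one feed-forward value per wafer, indexed by position in the lot's process history) and the function name are illustrative assumptions, not details from the paper:

```python
import numpy as np

def reconstruct_missing(history, missing_idx):
    """Fill missing per-wafer feed-forward values by linear interpolation
    over the lot's process-history index (hypothetical data layout).
    Edge gaps are clamped to the nearest measured value by np.interp."""
    history = np.asarray(history, dtype=float)
    idx = np.arange(len(history))
    good = ~np.isnan(history)
    filled = np.interp(idx, idx[good], history[good])
    return filled[missing_idx]

# lot of wafers with a measured film thickness (nm); wafer 3 was not measured
thickness = [101.2, 101.0, 100.7, np.nan, 100.1, 99.8]
print(reconstruct_missing(thickness, 3))  # midpoint of its neighbours, ~100.4
```

The same skeleton extends to extrapolation at lot edges or to 2-D across-wafer maps; the paper's machine-learning reconstruction from measured spectra would replace the interpolator.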
Modeling and experimental study on near-field acoustic levitation by flexural mode.
Liu, Pinkuan; Li, Jin; Ding, Han; Cao, Wenwu
2009-12-01
Near-field acoustic levitation (NFAL) has been used in noncontact handling and transportation of small objects to avoid contamination. We have performed a theoretical analysis based on a nonuniform vibrating surface to quantify the levitation force produced by the air film and have also conducted experimental tests to verify our model. Modal analysis of the flexural plate radiator was performed using ANSYS to obtain the natural frequency of the desired mode, which was used to design the measurement system. Then, the levitation force was calculated as a function of levitation distance based on squeeze gas film theory, using the measured amplitude and phase distributions on the vibrator surface. Compared with previous fluid-structural analyses using a uniform piston motion, our model based on the nonuniform radiating surface of the vibrator is more realistic and fits the experimentally measured levitation force better.
Evaluation of theoretical and empirical water vapor sorption isotherm models for soils
NASA Astrophysics Data System (ADS)
Arthur, Emmanuel; Tuller, Markus; Moldrup, Per; de Jonge, Lis W.
2016-01-01
The mathematical characterization of water vapor sorption isotherms of soils is crucial for modeling processes such as volatilization of pesticides and diffusive and convective water vapor transport. Although numerous physically based and empirical models have previously been proposed to describe sorption isotherms of building materials, food, and other industrial products, knowledge about the applicability of these functions to soils is noticeably lacking. We present an evaluation of nine models for characterizing adsorption/desorption isotherms over a water activity range from 0.03 to 0.93, based on measured data of 207 soils with widely varying textures, organic carbon contents, and clay mineralogy. In addition, the potential applicability of the models for predicting sorption isotherms from known clay content was investigated. While, in general, all investigated models described the measured adsorption and desorption isotherms reasonably well, distinct differences were observed between physical and empirical models and due to the different degrees of freedom of the model equations. There were also considerable differences in model performance for adsorption and desorption data. While regression analysis relating model parameters to clay content and subsequent model application for prediction of measured isotherms showed promise for the majority of investigated soils, for soils with distinct kaolinitic and smectitic clay mineralogy the predicted isotherms did not closely match the measurements.
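As a minimal illustration of fitting an empirical isotherm model to sorption data, here is a sketch using a Freundlich-type power law. The study's nine models and 207-soil data set are not reproduced; the data below are synthetic and the parameter values are invented:

```python
import numpy as np

# Synthetic adsorption isotherm: gravimetric water content w (kg/kg) vs
# water activity a_w, following a Freundlich-type law w = A * a_w**b.
a_w = np.linspace(0.05, 0.9, 12)
w = 0.04 * a_w**0.6

# The Freundlich form is linear in log-log space:
#   log w = log A + b * log a_w,
# so both parameters follow from an ordinary least-squares line fit.
b, logA = np.polyfit(np.log(a_w), np.log(w), 1)
A = np.exp(logA)
print(A, b)  # recovers ~0.04 and ~0.6 on this noise-free data
```

Models with more degrees of freedom (e.g., three-parameter forms) need a nonlinear solver instead of this log-linearization, which is one source of the performance differences the abstract notes.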
Analytically based photon scatter modeling for a multipinhole cardiac SPECT camera.
Pourmoghaddas, Amir; Wells, R Glenn
2016-11-01
Dedicated cardiac SPECT scanners have improved performance over standard gamma cameras, allowing reductions in acquisition times and/or injected activity. One approach to improving performance has been to use pinhole collimators, but this can cause position-dependent variations in attenuation, sensitivity, and spatial resolution. CT attenuation correction (AC) and an accurate system model can compensate for many of these effects; however, scatter correction (SC) remains an outstanding issue. In addition, in cameras using cadmium-zinc-telluride-based detectors, a large portion of unscattered photons is detected with reduced energy (the low-energy tail). Consequently, application of energy-based SC approaches in these cameras leads to a higher increase in noise than with standard cameras due to the subtraction of true counts detected in the low-energy tail. Model-based approaches with parallel-hole collimator systems accurately calculate scatter based on the physics of photon interactions in the patient and camera and generate lower-noise estimates of scatter than energy-based SC. In this study, the accuracy of a model-based SC method was assessed using physical phantom studies on the GE Discovery NM530c, and its performance was compared to a dual-energy-window (DEW) SC method. The analytical photon distribution (APD) method was used to calculate the distribution of probabilities that emitted photons will scatter in the surrounding scattering medium and be subsequently detected. APD scatter calculations for 99mTc SPECT (140 ± 14 keV) were validated with point-source measurements and 15 anthropomorphic cardiac-torso phantom experiments with varying levels of extra-cardiac activity causing scatter inside the heart. The activity inserted into the myocardial compartment of the phantom was first measured using a dose calibrator. CT images were acquired on an Infinia Hawkeye (GE Healthcare) SPECT/CT and coregistered with emission data for AC.
For comparison, DEW scatter projections (120 ± 6 keV) were also extracted from the acquired list-mode SPECT data. Either APD or DEW scatter projections were subtracted from the corresponding 140 keV measured projections and then reconstructed with AC (APD-SC and DEW-SC). The quantitative accuracy of the activity measured in the heart for the APD-SC and DEW-SC images was assessed against the dose calibrator measurements. The difference between modeled and acquired projections was measured as the root-mean-squared error (RMSE). APD-modeled projections for a clinical cardiac study were also evaluated. APD-modeled projections showed good agreement with SPECT measurements and had reduced noise compared to DEW scatter estimates. APD-SC reduced the mean error in activity measurement compared to DEW-SC, and the reduction was statistically significant where the scatter fraction (SF) was large (mean SF = 28.5%, t-test p = 0.007). APD-SC reduced measurement uncertainties as well; however, the difference was not found to be statistically significant (F-test p > 0.5). RMSE comparisons showed that elevated levels of scatter did not significantly contribute to a change in RMSE (p > 0.2). Model-based APD scatter estimation is feasible for dedicated cardiac SPECT scanners with pinhole collimators. APD-SC images performed better than DEW-SC images and improved the accuracy of activity measurement in high-scatter scenarios.
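The DEW estimate used here as the comparison method amounts to scaling counts from a lower energy window to the photopeak window width. A minimal sketch follows; the scaling factor k = 0.5 is the conventional choice for this technique, not a value given in the abstract, and the window widths simply match the 140 ± 14 keV photopeak and 120 ± 6 keV scatter window quoted above:

```python
import numpy as np

def dew_scatter_estimate(scatter_window_counts, w_scatter=12.0, w_main=28.0, k=0.5):
    """Dual-energy-window scatter estimate: counts acquired in a lower
    'scatter' window are scaled by the ratio of window widths and a
    conventional factor k (assumed 0.5 here) to approximate the scatter
    component inside the photopeak window."""
    return k * (w_main / w_scatter) * np.asarray(scatter_window_counts, float)

# projection pixel with 30 counts in the 120 +/- 6 keV window
est = dew_scatter_estimate([30.0])
print(est)  # 0.5 * (28/12) * 30, i.e. ~35 estimated scatter counts
```

The noise penalty the abstract describes comes from subtracting this (statistically noisy, and in CZT cameras partly true-count-contaminated) estimate from the photopeak projection, which the model-based APD estimate avoids.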
Interpretation of the results of statistical measurements. [Search for basic probability model]
NASA Technical Reports Server (NTRS)
Olshevskiy, V. V.
1973-01-01
For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional that defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters of a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.
Modeling of Aerosol Optical Depth Variability during the 1998 Canadian Forest Fire Smoke Event
NASA Astrophysics Data System (ADS)
Aubé, M.; O`Neill, N. T.; Royer, A.; Lavoué, D.
2003-04-01
Monitoring of aerosol optical depth (AOD) is of particular importance due to the significant role of aerosols in the atmospheric radiative budget. Up to now, the two standard techniques used for retrieving AOD are (i) sun photometry, which provides measurements of high temporal frequency but sparse spatial coverage, and (ii) satellite-based approaches such as DDV (Dense Dark Vegetation) inversion algorithms, which extract AOD over dark targets in remotely sensed imagery. Although the latter techniques allow AOD retrieval over appreciable spatial domains, the irregular spatial pattern of dark targets and the typically low repeat frequencies of imaging satellites exclude the acquisition of AOD databases on a continuous spatio-temporal basis. We attempt to fill gaps in spatio-temporal AOD measurements using a new methodology that links AOD measurements and a particulate-matter transport model through a data assimilation approach. This modelling package (AODSEM, for Aerosol Optical Depth Spatio-temporal Evolution Model) uses a size- and aerosol-type-segregated semi-Lagrangian-Eulerian trajectory algorithm driven by analysed meteorological data. Its novelty resides in the fact that the model evolution is tied to both ground-based and satellite-level AOD measurements, and all physical processes have been optimized to track this important but crude parameter. We applied this methodology to a significant smoke event that occurred over Canada in August 1998. The results show the potential of this approach inasmuch as residuals between the AODSEM assimilated analysis and measurements are smaller than typical errors associated with remotely sensed AOD (satellite or ground based). The AODSEM assimilation approach also gives better results than classical interpolation techniques. This improvement is especially evident when the available number of AOD measurements is small.
NASA Astrophysics Data System (ADS)
Ebrahimian, Hamed; Astroza, Rodrigo; Conte, Joel P.; de Callafon, Raymond A.
2017-02-01
This paper presents a framework for structural health monitoring (SHM) and damage identification of civil structures. This framework integrates advanced mechanics-based nonlinear finite element (FE) modeling and analysis techniques with a batch Bayesian estimation approach to estimate the time-invariant parameters used in the FE model of the structure of interest. The framework uses the input excitation and dynamic response of the structure and updates a nonlinear FE model of the structure to minimize the discrepancies between predicted and measured response time histories. The updated FE model can then be interrogated to detect, localize, classify, and quantify the state of damage and to predict the remaining useful life of the structure. As opposed to recursive estimation methods, in the batch Bayesian estimation approach the entire time history of the input excitation and output response of the structure is used as a batch of data to estimate the FE model parameters through a number of iterations. In the case of a non-informative prior, the batch Bayesian method leads to an extended maximum likelihood (ML) estimation method that jointly estimates the time-invariant model parameters and the measurement noise amplitude. The extended ML estimation problem is solved efficiently using a gradient-based interior-point optimization algorithm. Gradient-based optimization algorithms require the sensitivities of the FE response with respect to the model parameters to be identified. The FE response sensitivities are computed accurately and efficiently using the direct differentiation method (DDM). The estimation uncertainties are evaluated based on the Cramer-Rao lower bound (CRLB) theorem by computing the exact Fisher information matrix using the FE response sensitivities with respect to the model parameters. The accuracy of the proposed uncertainty quantification approach is verified using a sampling approach based on the unscented transformation.
Two validation studies, based on realistic structural FE models of a bridge pier and a moment-resisting steel frame, are performed to validate the performance and accuracy of the presented nonlinear FE model updating approach and to demonstrate its application to SHM. These validation studies show the excellent performance of the proposed framework for SHM and damage identification even in the presence of high measurement noise and/or far-off initial estimates of the model parameters. Furthermore, the detrimental effects of input measurement noise on the performance of the proposed framework are illustrated and quantified through one of the validation studies.
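The concentrated (extended) ML idea, jointly estimating a model parameter and the Gaussian noise amplitude, can be sketched on a toy response model. The exponential-decay "structure" below and the crude 1-D grid search standing in for the paper's gradient-based interior-point optimizer are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the FE response: a damped oscillation whose decay rate
# theta is the "model parameter" to be identified (hypothetical model).
t = np.linspace(0.0, 5.0, 200)
def response(theta):
    return np.exp(-theta * t) * np.cos(2 * np.pi * t)

theta_true, sigma_true = 0.8, 0.05
y = response(theta_true) + sigma_true * rng.standard_normal(t.size)

# Extended ML with Gaussian noise: the likelihood can be concentrated, so
# the parameter estimate minimizes the residual sum of squares and the
# noise amplitude then follows in closed form as sqrt(RSS/N).
grid = np.linspace(0.1, 2.0, 2000)                    # crude search in place of
rss = [np.sum((y - response(th))**2) for th in grid]  # an interior-point solver
theta_hat = grid[int(np.argmin(rss))]
sigma_hat = np.sqrt(min(rss) / t.size)
print(theta_hat, sigma_hat)
```

In the paper the same logic runs over many FE parameters at once, with DDM-computed response sensitivities supplying the gradients the grid search avoids here.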
NASA Astrophysics Data System (ADS)
Liu, Q.; Li, J.; Du, Y.; Wen, J.; Zhong, B.; Wang, K.
2011-12-01
As remote sensing data accumulate, generating highly accurate and consistent land surface parameter products from multi-source remote observations is a significant challenge, and radiative transfer modeling and inversion methodology form its theoretical basis. In this paper, recent research advances and unresolved issues are presented. First, after a general overview, recent research advances in multi-scale remote sensing radiative transfer modeling are presented, including leaf spectrum models, vegetation canopy BRDF models, directional thermal infrared emission models, radiation models for rugged mountain areas, and kernel-driven models. Then, new methodologies for land surface parameter inversion based on multi-source remote sensing data are proposed, taking land surface albedo, leaf area index, temperature/emissivity, and surface net radiation as examples. A new synthetic land surface parameter quantitative remote sensing product generation system is suggested, and the software system prototype will be demonstrated. Finally, multi-scale field experiment campaigns, such as those in Gansu and Beijing, China, are introduced briefly. Ground-based, tower-based, and airborne multi-angular measurement systems have been built to measure directional reflectance, emission, and scattering characteristics in the visible, near-infrared, thermal infrared, and microwave bands for model validation and calibration. A remote sensing pixel-scale "true value" measurement strategy has been designed to obtain ground "true values" of LST, albedo, LAI, soil moisture, and ET at the 1-km2 scale for remote sensing product validation.
Lobach, Iryna; Fan, Ruzong; Carroll, Raymond J.
2011-01-01
With the advent of dense single nucleotide polymorphism genotyping, population-based association studies have become the major tools for identifying human disease genes and for fine gene mapping of complex traits. We develop a genotype-based approach for association analysis of case-control studies of gene-environment interactions in the case when environmental factors are measured with error and genotype data are available on multiple genetic markers. To directly use the observed genotype data, we propose two genotype-based models: genotype effect and additive effect models. Our approach offers several advantages. First, the proposed risk functions can directly incorporate the observed genotype data while modeling the linkage disequilibrium information in the regression coefficients, thus eliminating the need to infer haplotype phase. Compared with the haplotype-based approach, an estimating procedure based on the proposed methods can be much simpler and significantly faster. In addition, there is no potential risk due to haplotype phase estimation. Further, by fitting the proposed models, it is possible to analyze the risk alleles/variants of complex diseases, including their dominant or additive effects. To model measurement error, we adopt the pseudo-likelihood method by Lobach et al. [2008]. Performance of the proposed method is examined using simulation experiments. An application of our method is illustrated using a population-based case-control study of association between calcium intake and the risk of colorectal adenoma development. PMID:21031455
Sea spray aerosol fluxes in the Baltic Sea region: Comparison of the WAM model with measurements
NASA Astrophysics Data System (ADS)
Markuszewski, Piotr; Kosecki, Szymon; Petelski, Tomasz
2017-08-01
Sea spray aerosol flux is an important element of sub-regional climate modeling. The majority of works related to this topic concentrate on open ocean research rather than on smaller, inland seas, e.g., the Baltic Sea. The Baltic Sea is one of the largest brackish inland seas by area, where major inflows of oceanic waters are rare. Furthermore, surface waves in the Baltic Sea have a relatively shorter lifespan in comparison with oceanic waves. Therefore, emission of sea spray aerosol may differ greatly from what is known from oceanic research and should be investigated. This article presents a comparison of sea spray aerosol measurements carried out on board the s/y Oceania research ship with data calculated in accordance with the WAM model. The measurements were conducted in the southern region of the Baltic Sea during four scientific cruises. The gradient method was used to determine aerosol fluxes. The fluxes were calculated for particles with diameters in the range of 0.5-47 μm. Measured and simulated wind speeds agree well (correlation of about 0.8). The comparison encompasses three different sea spray generation models: the first, proposed by Massel (2006), is based only on wave parameters such as significant wave height and peak frequency; the second, Callaghan (2013), is based on the Gong (2003) wind-speed relation and a thorough experimental analysis of whitecaps; the third, Petelski et al. (2014), is based on in-situ gradient measurements with a function dependent on wind speed. The first two models, which are based on whitecap analysis, proved insufficient. Moreover, the research shows a strong relation between aerosol emission and wind speed history.
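The gradient method mentioned above infers the vertical flux from concentration differences between measurement heights. A minimal sketch under neutral stratification follows; the cruise data, stability corrections, and per-size-bin treatment of the actual study are omitted, and all numbers are invented:

```python
import math

def gradient_flux(c_low, c_high, z_low, z_high, u_star, kappa=0.4):
    """Flux-gradient sketch for a neutral surface layer: concentrations at
    two heights define a log-profile gradient, giving
        F = kappa * u_star * (C_low - C_high) / ln(z_high / z_low).
    Positive F means an upward (emission) flux; Monin-Obukhov stability
    corrections are deliberately left out of this sketch."""
    return kappa * u_star * (c_low - c_high) / math.log(z_high / z_low)

# aerosol number concentrations (particles/cm^3) at 8 m and 20 m, u* = 0.3 m/s
f = gradient_flux(55.0, 50.0, 8.0, 20.0, 0.3)
print(f)  # positive, i.e. an upward sea spray emission flux
```

In practice this is evaluated per particle-size bin (here 0.5-47 μm) and the friction velocity u* comes from simultaneous turbulence or wind-profile measurements.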
NASA Astrophysics Data System (ADS)
Han, Y.; Misra, S.
2018-04-01
Multi-frequency measurement of a dispersive electromagnetic (EM) property, such as electrical conductivity, dielectric permittivity, or magnetic permeability, is commonly analyzed for purposes of material characterization. Such an analysis requires inversion of the multi-frequency measurement based on a specific relaxation model, such as the Cole-Cole model or Pelton's model. We develop a unified inversion scheme that can be coupled to various types of relaxation models to independently process multi-frequency measurements of varied EM properties for purposes of improved EM-based geomaterial characterization. The proposed inversion scheme is first tested on a few synthetic cases, in which different relaxation models are coupled into the inversion scheme, and then applied to multi-frequency complex conductivity, complex resistivity, complex permittivity, and complex impedance measurements. The method estimates up to seven relaxation-model parameters, exhibiting convergence and accuracy for random initializations of the relaxation-model parameters within up to 3 orders of magnitude of variation around the true parameter values. The proposed inversion method implements a bounded Levenberg algorithm with tuned initial values of the damping parameter and its iterative adjustment factor, which are fixed in all the cases shown in this paper irrespective of the type of measured EM property and the type of relaxation model. Notably, a jump-out step and a jump-back-in step are implemented as automated methods in the inversion scheme to prevent the inversion from getting trapped around local minima and to honor physical bounds of the model parameters. The proposed inversion scheme can be easily used to process various types of EM measurements without major changes to the scheme.
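A bare-bones Levenberg-type inversion coupled to the Cole-Cole model can be sketched as follows. The bounds, jump-out/jump-back-in steps, and tuned damping schedule of the paper are omitted; the parameter values, frequency range, and numeric-Jacobian details are synthetic assumptions:

```python
import numpy as np

def cole_cole(p, w):
    """Cole-Cole complex permittivity: eps_inf + d_eps / (1 + (i w tau)^(1-alpha)),
    with tau parameterized as 10**log_tau to keep it positive."""
    eps_inf, d_eps, log_tau, alpha = p
    return eps_inf + d_eps / (1.0 + (1j * w * 10.0**log_tau)**(1.0 - alpha))

def residual(p, w, data):
    m = cole_cole(p, w)
    return np.concatenate([(m - data).real, (m - data).imag])

def levenberg_marquardt(p0, w, data, lam=1e-2, n_iter=100):
    """Minimal Levenberg(-Marquardt) loop with a numeric Jacobian and the
    usual adaptive damping; no bounds or restart heuristics."""
    p = np.array(p0, float)
    r = residual(p, w, data)
    for _ in range(n_iter):
        J = np.empty((r.size, p.size))
        for k in range(p.size):
            dp = np.zeros_like(p)
            dp[k] = 1e-6
            J[:, k] = (residual(p + dp, w, data) - r) / 1e-6
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
        r_new = residual(p + step, w, data)
        if r_new @ r_new < r @ r:          # accept the step: relax damping
            p, r, lam = p + step, r_new, lam / 3.0
        else:                              # reject the step: increase damping
            lam *= 3.0
    return p

w = 2 * np.pi * np.logspace(1, 7, 40)   # angular frequencies, 10 Hz - 10 MHz
p_true = [5.0, 70.0, -4.0, 0.2]         # eps_inf, d_eps, log10(tau), alpha
data = cole_cole(p_true, w)
p_hat = levenberg_marquardt([3.0, 40.0, -5.0, 0.1], w, data)
print(p_hat)
```

Swapping `cole_cole` for another relaxation model (e.g., a Pelton-type resistivity form) leaves the loop untouched, which is the unification the abstract describes.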
NASA Astrophysics Data System (ADS)
Penenko, Alexey; Penenko, Vladimir
2014-05-01
Contact concentration measurement data assimilation is considered for convection-diffusion-reaction models originating from atmospheric chemistry studies. The high dimensionality of the models imposes strict requirements on the computational efficiency of the algorithms. Data assimilation is carried out within the variational approach on a single time step of the approximated model. A control function is introduced into the source term of the model to provide flexibility for data assimilation. This function is evaluated as the minimizer of a target functional that connects its norm to the misfit between measured and model-simulated data. In this case the mathematical model acts as a natural Tikhonov regularizer for the ill-posed measurement data inversion problem. This provides a flow-dependent and physically plausible structure of the resulting analysis and reduces the need to calculate the model error covariance matrices that are sought within the conventional approach to data assimilation. The advantage comes at the cost of the adjoint problem solution. This issue is addressed within the framework of a splitting-based realization of the basic convection-diffusion-reaction model. The model is split with respect to physical processes and spatial variables. Contact measurement data are assimilated on each one-dimensional convection-diffusion splitting stage. In this case a computationally efficient direct scheme for both the direct and adjoint problem solutions can be constructed based on the matrix sweep method. The data assimilation (or regularization) parameter that regulates the ratio between model and data in the resulting analysis is obtained with the Morozov discrepancy principle. For proper performance the algorithm requires an estimate of the measurement noise. In the case of Gaussian errors, the probability that the Chi-squared-based estimate used is an upper bound acts as the assimilation parameter.
The solution obtained can be used as an initial guess for data assimilation algorithms that assimilate outside the splitting stages and involve iterations. The splitting stage that is responsible for chemical transformation processes is realized with an explicit discrete-analytical scheme with respect to time. The scheme is based on analytical extraction of the exponential terms from the solution. This guarantees an unconditionally positive sign for the evaluated concentrations. The splitting-based structure of the algorithm provides means for efficient parallel realization. The work is partially supported by Program No. 4 of the Presidium of RAS and Program No. 3 of the Mathematical Department of RAS, by RFBR project 11-01-00187, and by Integration projects of SD RAS No. 8 and 35. Our studies are in line with the goals of COST Action ES1004.
ERIC Educational Resources Information Center
Forster, Natalie; Souvignier, Elmar
2011-01-01
The purpose of this study was to examine the technical adequacy of a computer-based assessment instrument which is based on hierarchical models of text comprehension for monitoring student reading progress following the Curriculum-Based Measurement (CBM) approach. At intervals of two weeks, 120 third-grade students finished eight CBM tests. To…
Study of Vis/NIR spectroscopy measurement on acidity of yogurt
NASA Astrophysics Data System (ADS)
He, Yong; Feng, Shuijuan; Wu, Di; Li, Xiaoli
2006-09-01
A fast method for measuring the pH of yogurt using Vis/NIR spectroscopy techniques was established in order to determine the acidity of yogurt rapidly. 27 samples selected from five different brands of yogurt were measured by Vis/NIR spectroscopy. The pH of the yogurt at the positions scanned by the spectrometer was measured with a pH meter. A mathematical model between pH and the Vis/NIR spectral measurements was established and developed based on partial least squares (PLS) using The Unscrambler V9.2. Then 25 unknown samples from 5 different brands were predicted based on the mathematical model. The results show that the correlation coefficient of pH based on the PLS model is more than 0.890, with a standard error of calibration (SEC) of 0.037 and a standard error of prediction (SEP) of 0.043. In predicting the pH of the 25 yogurt samples from 5 different brands, the correlation coefficient between the predicted and measured values is more than 0.918. These results show good to excellent prediction performance. The Vis/NIR spectroscopy technique had significantly greater accuracy for determining pH. It was concluded that the Vis/NIR spectroscopy measurement technique can be used to measure the pH of yogurt quickly and accurately, and a new method for the measurement of the pH of yogurt was established.
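A PLS1 calibration of the kind described (spectra in, pH out) can be sketched with the NIPALS algorithm. This is a generic implementation on synthetic "spectra", not the commercial chemometrics package or the study's data; the band positions and coefficients below are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

def pls1_fit(X, y, n_comp):
    """Minimal NIPALS PLS1: returns regression coefficients b and intercept
    b0 so that y_hat = X @ b + b0 (a sketch of the PLS technique)."""
    x_mean, y_mean = X.mean(0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc                 # weight vector: covariance direction
        w /= np.linalg.norm(w)
        t = Xc @ w                    # score vector
        tt = t @ t
        p = Xc.T @ t / tt             # X loading
        qk = yc @ t / tt              # y loading
        Xc = Xc - np.outer(t, p)      # deflate X and y
        yc = yc - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    b = W @ np.linalg.solve(P.T @ W, q)
    return b, y_mean - x_mean @ b

# synthetic "spectra": 60 samples x 50 wavelengths, pH driven by two bands
X = rng.standard_normal((60, 50))
pH = 4.0 + 0.5 * X[:, 10] - 0.3 * X[:, 30] + 0.02 * rng.standard_normal(60)

b, b0 = pls1_fit(X, pH, n_comp=4)
pred = X @ b + b0
r = np.corrcoef(pred, pH)[0, 1]
print(r)  # calibration correlation, in the spirit of the abstract's 0.890
```

Choosing `n_comp` by cross-validation, rather than fixing it as here, is what keeps such calibrations from overfitting noisy spectral channels.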
ERIC Educational Resources Information Center
Toral, S. L.; Barrero, F.; Martinez-Torres, M. R.
2007-01-01
This paper presents an exploratory study about the development of a structural and measurement model for the technological acceptance (TAM) of a web-based educational tool. The aim consists of measuring not only the use of this tool, but also the external variables with a significant influence in its use for planning future improvements. The tool,…
Karadağ, Teoman; Yüceer, Mehmet; Abbasov, Teymuraz
2016-01-01
The present study analyses the electric field radiating from the GSM/UMTS base stations located in central Malatya, a densely populated urban area in Turkey. The authors have conducted both instant and continuous measurements of high-frequency electromagnetic fields throughout their research by using non-ionising radiation-monitoring networks. Over 15,000 instant and 13,000,000 continuous measurements were taken throughout the process. The authors have found that the normal electric field radiation can increase by ∼25% during daytime, depending on mobile communication traffic. The authors' research has also demonstrated that the electric field intensity values can be modelled for each hour, day or week with the results obtained from continuous measurements. The authors have developed an estimation model based on these values, including mobile communication traffic (Erlang) values obtained from mobile phone base stations and the temperature and humidity values in the environment. The authors believe that their proposed artificial neural network model and multivariable least-squares regression analysis will help predict the electric field intensity in an environment in advance. © The Author 2015. Published by Oxford University Press. All rights reserved.
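The multivariable least-squares part of such a model can be sketched directly. The predictor set mirrors the abstract (Erlang traffic, temperature, humidity), but all coefficients and data below are synthetic, not the Malatya measurements:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical hourly records: Erlang traffic, temperature (C), humidity (%)
n = 500
erlang = rng.uniform(10, 120, n)
temp = rng.uniform(5, 35, n)
humid = rng.uniform(20, 90, n)
# synthetic electric field (V/m), dominated by traffic as in the abstract's
# ~25% daytime rise; the coefficients are invented for illustration
e_field = (0.9 + 0.004 * erlang - 0.002 * temp + 0.001 * humid
           + 0.01 * rng.standard_normal(n))

# multivariable least squares: E ~ b0 + b1*Erlang + b2*T + b3*RH
Xd = np.column_stack([np.ones(n), erlang, temp, humid])
coef, *_ = np.linalg.lstsq(Xd, e_field, rcond=None)
print(coef)  # recovers approximately [0.9, 0.004, -0.002, 0.001]
```

An artificial neural network, as in the study, would replace this linear form when the traffic-field relation is nonlinear, at the cost of interpretable coefficients.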
Conceptual model of comprehensive research metrics for improved human health and environment.
Engel-Cox, Jill A; Van Houten, Bennett; Phelps, Jerry; Rose, Shyanika W
2008-05-01
Federal, state, and private research agencies and organizations have faced increasing administrative and public demand for performance measurement. Historically, performance measurement predominantly consisted of near-term outputs measured through bibliometrics. The recent focus is on accountability for investment based on long-term outcomes. Developing measurable outcome-based metrics for research programs has been particularly challenging, because of difficulty linking research results to spatially and temporally distant outcomes. Our objective in this review is to build a logic model and associated metrics through which to measure the contribution of environmental health research programs to improvements in human health, the environment, and the economy. We used expert input and literature research on research impact assessment. With these sources, we developed a logic model that defines the components and linkages between extramural environmental health research grant programs and the outputs and outcomes related to health and social welfare, environmental quality and sustainability, economics, and quality of life. The logic model focuses on the environmental health research portfolio of the National Institute of Environmental Health Sciences (NIEHS) Division of Extramural Research and Training. The model delineates pathways for contributions by five types of institutional partners in the research process: NIEHS, other government (federal, state, and local) agencies, grantee institutions, business and industry, and community partners. The model is being applied to specific NIEHS research applications and the broader research community. We briefly discuss two examples and discuss the strengths and limits of outcome-based evaluation of research programs.
Nonlinearity measure and internal model control based linearization in anti-windup design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perev, Kamen
2013-12-18
This paper considers the problem of internal model control based linearization in anti-windup design. The nonlinearity measure concept is used for quantifying the degree of nonlinearity of the control system. The linearizing effect of a modified internal model control structure is presented by comparing the nonlinearity measures of the open-loop and closed-loop systems. It is shown that the linearization properties are improved by increasing the control system's local feedback gain. However, it is emphasized that at the same time the stability of the system deteriorates. The conflicting goals of stability and linearization are resolved by solving the design problem in different frequency ranges.
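A static-input version of the nonlinearity measure can be computed directly: find the best linear approximation of the operator over an input set and normalize the residual. This is a simplified sketch of the concept, not the dynamic-system measure used in the paper:

```python
import numpy as np

def nonlinearity_measure(f, u):
    """Static-case sketch of the nonlinearity measure: the distance from the
    map f to its best least-squares linear approximation over the input set
    u, normalized by the output norm. It is 0 for a linear map and grows as
    f departs further from linearity."""
    y = f(u)
    g = (u @ y) / (u @ u)          # best linear gain over this input set
    return np.linalg.norm(y - g * u) / np.linalg.norm(y)

u = np.linspace(-1.0, 1.0, 201)
print(nonlinearity_measure(lambda x: 2.0 * x, u))           # linear map: 0
print(nonlinearity_measure(lambda x: np.tanh(3.0 * x), u))  # saturation: > 0
```

In the anti-windup setting the saturation example is the relevant one: the paper's point is that closing a local feedback loop around such an element shrinks this measure, at the price of reduced stability margins.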
Integrated modeling and heat treatment simulation of austempered ductile iron
NASA Astrophysics Data System (ADS)
Hepp, E.; Hurevich, V.; Schäfer, W.
2012-07-01
The integrated modeling and simulation of the casting and heat treatment processes for producing austempered ductile iron (ADI) castings is presented. The focus is on describing different models to simulate the austenitization, quenching and austempering steps during ADI heat treatment. The starting point for the heat treatment simulation is the simulated microstructure after solidification and cooling. The austenitization model considers the transformation of the initial ferrite-pearlite matrix into austenite as well as the dissolution of graphite in austenite to attain a uniform carbon distribution. The quenching model is based on measured CCT diagrams. Measurements have been carried out to obtain these diagrams for different alloys with varying Cu, Ni and Mo contents. The austempering model includes nucleation and growth kinetics of the ADI matrix. The model of ADI nucleation is based on experimental measurements made for varied Cu, Ni, Mo contents and austempering temperatures. The ADI kinetic model uses a diffusion controlled approach to model the growth. The models have been integrated in a tool for casting process simulation. Results are shown for the optimization of the heat treatment process of a planetary carrier casting.
Survey of in-situ and remote sensing methods for soil moisture determination
NASA Technical Reports Server (NTRS)
Schmugge, T. J.; Jackson, T. J.; Mckim, H. L.
1981-01-01
General methods for determining the moisture content in the surface layers of the soil based on in situ or point measurements, soil water models and remote sensing observations are surveyed. In situ methods described include gravimetric techniques, nuclear techniques based on neutron scattering or gamma-ray attenuation, electromagnetic techniques, tensiometric techniques and hygrometric techniques. Soil water models based on column mass balance treat soil moisture contents as a result of meteorological inputs (precipitation, runoff, subsurface flow) and demands (evaporation, transpiration, percolation). The remote sensing approaches are based on measurements of the diurnal range of surface temperature and the crop canopy temperature in the thermal infrared, measurements of the radar backscattering coefficient in the microwave region, and measurements of microwave emission or brightness temperature. Advantages and disadvantages of the various methods are pointed out, and it is concluded that a successful monitoring system must incorporate all of the approaches considered.
ERIC Educational Resources Information Center
Yeo, Seungsoo; Fearrington, Jamie; Christ, Theodore J.
2011-01-01
This study investigated slope bias on student background variables for both Curriculum Based Measurement of Oral Reading (CBM-R) and Curriculum Based Measurement Maze Reading (Maze). Benchmark scores from 1,738 students in Grades 3 through 8 were used to examine potential slope bias in CBM-R and Maze. Latent growth modeling was used to both…
Diagnostic layer integration in FPGA-based pipeline measurement systems for HEP experiments
NASA Astrophysics Data System (ADS)
Pozniak, Krzysztof T.
2007-08-01
Integrated triggering and data acquisition systems for high energy physics experiments may be considered as fast, multichannel, synchronous, distributed, pipeline measurement systems. A considerable extension of the functional, technological and monitoring demands recently imposed on them has forced the common use of large field-programmable gate array (FPGA) chips, digital signal processing-enhanced matrices and fast optical transmission for their realization. This paper discusses the modelling, design, realization and testing of pipeline measurement systems. A distribution of synchronous data stream flows is considered in the network, and a general functional structure of a single network node is presented. The suggested novel block structure of the node model facilitates full implementation in the FPGA chip, circuit standardization and parametrization, as well as integration of the functional and diagnostic layers. A general method for pipeline system design is derived, based on a unified model of the synchronous data network node. A few examples of practically realized, FPGA-based pipeline measurement systems are presented. The described systems were applied in the ZEUS and CMS experiments.
NASA Astrophysics Data System (ADS)
Joseph-Duran, Bernat; Ocampo-Martinez, Carlos; Cembrano, Gabriela
2015-10-01
An output-feedback control strategy for pollution mitigation in combined sewer networks is presented. The proposed strategy provides means to apply model-based predictive control to large-scale sewer networks, in spite of the lack of measurements at most of the network sewers. In previous works, the authors presented a hybrid linear control-oriented model for sewer networks together with the formulation of Optimal Control Problems (OCP) and State Estimation Problems (SEP). By iteratively solving these problems, preliminary Receding Horizon Control with Moving Horizon Estimation (RHC/MHE) results, based on flow measurements, were also obtained. In this work, the RHC/MHE algorithm has been extended to take into account both flow and water level measurements, and the resulting control loop has been extensively simulated to assess the system performance according to different measurement availability scenarios and rain events. All simulations have been carried out using a detailed physically based model of a real case-study network as virtual reality.
Havas, K A; Boone, R B; Hill, A E; Salman, M D
2014-06-01
Brucellosis has been reported in livestock and humans in the country of Georgia with Brucella melitensis as the most common species causing disease. Georgia lacked sufficient data to assess effectiveness of the various potential control measures utilizing a reliable population-based simulation model of animal-to-human transmission of this infection. Therefore, an agent-based model was built using data from previous studies to evaluate the effect of an animal-level infection control programme on human incidence and sheep flock and cattle herd prevalence of brucellosis in the Kakheti region of Georgia. This model simulated the patterns of interaction of human-animal workers, sheep flocks and cattle herds with various infection control measures and returned population-based data. The model simulates the use of control measures needed for herd and flock prevalence to fall below 2%. As per the model output, shepherds had the greatest disease reduction as a result of the infection control programme. Cattle had the greatest influence on the incidence of human disease. Control strategies should include all susceptible animal species, sheep and cattle, identify the species of brucellosis present in the cattle population and should be conducted at the municipality level. This approach can be considered as a model to other countries and regions when assessment of control strategies is needed but data are scattered. © 2013 Blackwell Verlag GmbH.
A General Multidimensional Model for the Measurement of Cultural Differences.
ERIC Educational Resources Information Center
Olmedo, Esteban L.; Martinez, Sergio R.
A multidimensional model for measuring cultural differences (MCD) based on factor analytic theory and techniques is proposed. The model assumes that a cultural space may be defined by means of a relatively small number of orthogonal dimensions which are linear combinations of a much larger number of cultural variables. Once a suitable,…
Statistical modelling as an aid to the design of retail sampling plans for mycotoxins in food.
MacArthur, Roy; MacDonald, Susan; Brereton, Paul; Murray, Alistair
2006-01-01
A study has been carried out to assess appropriate statistical models for use in evaluating retail sampling plans for the determination of mycotoxins in food. A compound gamma model was found to be a suitable fit. A simulation model based on the compound gamma model was used to produce operating characteristic curves for a range of parameters relevant to retail sampling. The model was also used to estimate the minimum number of increments necessary to minimize the overall measurement uncertainty. Simulation results showed that measurements based on retail samples (for which the maximum number of increments is constrained by cost) may produce fit-for-purpose results for the measurement of ochratoxin A in dried fruit, but are unlikely to do so for the measurement of aflatoxin B1 in pistachio nuts. In order to produce a more accurate simulation, further work is required to determine the degree of heterogeneity associated with batches of food products. With appropriate parameterization in terms of physical and biological characteristics, the systems developed in this study could be applied to other analyte/matrix combinations.
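The operating-characteristic idea can be sketched as follows: repeatedly simulate drawing n increments from a heterogeneous batch and record how often the sample mean falls below the limit. Gamma-distributed increments stand in for the paper's fitted compound gamma model; the shape, limit and concentration values are assumptions.

```python
import random

def oc_point(batch_mean, limit, n_increments, shape=0.5, n_sims=2000, seed=7):
    """Estimate one point on an operating characteristic curve: the
    probability that the mean of n increments from a batch falls below
    the regulatory limit. Increment-to-increment heterogeneity follows
    a gamma distribution (a stand-in for the fitted compound gamma
    model; the shape and all other values here are assumptions)."""
    random.seed(seed)
    accept = 0
    for _ in range(n_sims):
        # gammavariate(alpha, beta) has mean alpha * beta.
        sample = [random.gammavariate(shape, batch_mean / shape)
                  for _ in range(n_increments)]
        if sum(sample) / n_increments <= limit:
            accept += 1
    return accept / n_sims

# More increments narrow the sampling distribution of the mean, so a
# compliant batch (true mean below the limit) is accepted more often.
p_few = oc_point(batch_mean=4.0, limit=10.0, n_increments=2)
p_many = oc_point(batch_mean=4.0, limit=10.0, n_increments=20)
```

Sweeping `batch_mean` over a range of true concentrations traces out the full operating characteristic curve for a given sampling plan.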
Evaluating hydrological model performance using information theory-based metrics
USDA-ARS?s Scientific Manuscript database
The accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use the information theory-based metrics to see whether they can be used as a complementary tool for hydrologic m...
NASA Astrophysics Data System (ADS)
Sharma, Manju; Sharma, Veena; Kumar, Sanjeev; Puri, S.; Singh, Nirmal
2006-11-01
The M_ξ, M_αβ, M_γ and M_m X-ray production (XRP) cross-sections have been measured for the elements with 71 ⩽ Z ⩽ 92 at 5.96 keV incident photon energy satisfying E_M1 < E_inc < E_L3, where E_M1 (E_L3) is the M1 (L3) subshell binding energy. These XRP cross-sections have been calculated using photoionization cross-sections based on the relativistic Dirac-Hartree-Slater (RDHS) model with three sets of X-ray emission rates, fluorescence, Coster-Kronig and super-Coster-Kronig yields based on (i) the non-relativistic Hartree-Slater (NRHS) potential model, (ii) the RDHS model and (iii) the relativistic Dirac-Fock (RDF) model. For the third set, the M_i (i = 1-5) subshell fluorescence yields have been calculated using the RDF model-based X-ray emission rates and total widths re-evaluated to incorporate the RDF model-based radiative widths. The measured cross-sections have been compared with the calculated values to check the applicability of the physical parameters based on different models.
NASA Astrophysics Data System (ADS)
Czechowski, Piotr Oskar; Owczarek, Tomasz; Badyda, Artur; Majewski, Grzegorz; Rogulski, Mariusz; Ogrodnik, Paweł
2018-01-01
The paper presents key issues from the preliminary stage of a proposed extended equivalence assessment for new portable measurement devices: the comparability of hourly PM10 concentration series with the results of a reference measurement station, evaluated with statistical methods. Technical aspects of the new portable meters are presented. Emphasis is placed on a methodological concept for comparing the results using stochastic and exploratory methods. The concept is based on the observation that a simple comparison of result series in the time domain is insufficient; regularity should instead be compared in three complementary fields of statistical modelling: time, frequency and space. The proposal is based on modelling five annual series of measurement results from the new mobile devices and from the WIOS (Provincial Environmental Protection Inspectorate) reference station located in the city of Nowy Sacz. The obtained results indicate both the completeness of the comparison methodology and the high agreement between the new devices' measurements and the reference.
Comparison of simulation modeling and satellite techniques for monitoring ecological processes
NASA Technical Reports Server (NTRS)
Box, Elgene O.
1988-01-01
In 1985 improvements were made in the world climatic data base for modeling and predictive mapping; in individual process models and the overall carbon-balance models; and in the interface software for mapping the simulation results. Statistical analysis of the data base was begun. In 1986 mapping was shifted to NASA-Goddard. The initial approach involving pattern comparisons was modified to a more statistical approach. A major accomplishment was the expansion and improvement of a global data base of measurements of biomass and primary production, to complement the simulation data. The main accomplishments during 1987 included: production of a master tape with all environmental and satellite data and model results for the 1600 sites; development of a complete mapping system used for the initial color maps comparing annual and monthly patterns of Normalized Difference Vegetation Index (NDVI), actual evapotranspiration, net primary productivity, gross primary productivity, and net ecosystem production; collection of more biosphere measurements for eventual improvement of the biological models; and development of some initial monthly models for primary productivity, based on satellite data.
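The NDVI maps compared above are computed from red and near-infrared reflectance; a minimal sketch follows, with a hand-rolled Pearson correlation to mimic the pattern comparison against modeled productivity (all numbers are invented for illustration).

```python
def ndvi(nir, red):
    # NDVI contrasts near-infrared and red reflectance; dense green
    # vegetation reflects NIR strongly and absorbs red.
    return (nir - red) / (nir + red)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Toy monthly reflectances and a hypothetical net primary productivity
# series (assumed numbers, purely illustrative).
monthly_ndvi = [ndvi(n, r) for n, r in [(0.5, 0.3), (0.6, 0.2), (0.7, 0.1)]]
monthly_npp = [40.0, 80.0, 120.0]
r = pearson(monthly_ndvi, monthly_npp)
```

Correlating monthly NDVI against simulated productivity, cell by cell, is the statistical analogue of the map comparisons described above.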
Study on activity measurement of Nostoc flagelliforme cells based on color identification
NASA Astrophysics Data System (ADS)
Wang, Yizhong; Su, Jianyu; Liu, Tiegen; Kong, Fanzhi; Jia, Shiru
2008-12-01
In order to measure the activity of Nostoc flagelliforme cells, a new method based on color identification is proposed in this paper. N. flagelliforme cells were stained with fluorescein diacetate. An image of the stained cells was then taken and converted from the RGB model to the HSI model. The histogram of the hue component H was calculated and used as the input of a designed BP network, whose output describes the measured activity of the N. flagelliforme cells. After training, the BP network identified the activity of N. flagelliforme cells from the hue histogram of their stained image. Experiments were conducted with satisfactory results, demonstrating the feasibility and usefulness of activity measurement of N. flagelliforme cells based on color identification.
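The hue-histogram feature described above can be sketched directly from the RGB-to-HSI conversion; the histogram vector is what would feed the BP network (the bin count and pixel values below are assumptions).

```python
import math

def hue(r, g, b):
    """Hue component (degrees) of the HSI colour model from RGB in [0, 1]."""
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b)) or 1e-12
    theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    return theta if b <= g else 360.0 - theta

def hue_histogram(pixels, bins=36):
    """Normalized hue histogram; in the paper, such a vector is the
    input to a trained BP (backpropagation) network scoring activity."""
    hist = [0] * bins
    for r, g, b in pixels:
        hist[min(int(hue(r, g, b) / (360.0 / bins)), bins - 1)] += 1
    total = len(pixels)
    return [h / total for h in hist]

# Example: mostly green pixels (stained, active cells) plus some red.
pixels = [(0.1, 0.8, 0.2)] * 90 + [(0.8, 0.1, 0.1)] * 10
hist = hue_histogram(pixels)
```

The normalized histogram is scale-invariant in image size, which makes it a convenient fixed-length input vector for a small neural network.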
The Soil Carbon Paradigm Shift: Triangulating Theories, Measurements, and Models
NASA Astrophysics Data System (ADS)
Blankinship, J. C.; Crow, S. E.; Schimel, J.; Sierra, C. A.; Schaedel, C.; Plante, A. F.; Thompson, A.; Berhe, A. A.; Druhan, J. L.; Heckman, K. A.; Keiluweit, M.; Lawrence, C. R.; Marin-Spiotta, E.; Rasmussen, C.; Wagai, R.; Wieder, W. R.
2016-12-01
Predicting global responses of soil carbon (C) to environmental change remains confounded by a number of paradigms that have emerged from separate approaches. A prevailing paradigm in biogeochemistry interprets soil C as discrete pools based on estimated or measured turnover times (e.g., CENTURY model). An alternative is emerging that envisions the stabilization of soil C in tension between decomposition by microbial agents and protection by physical and chemical mechanisms. We propose an approach to bridge the gap between different paradigms, and to improve soil C forecasting by conceptualizing each paradigm as a triangle composed of three nodes: theory, analytical measurement, and numerical model. Paradigms tend to emerge from what can either be represented in models or measured using analytical instruments. But they gain power when all three elements are integrated in a balanced trinity. Our goal was to compare how theory, measurement, and model fit together in our understanding of soil C to learn from past successes, evaluate the strengths and weaknesses of current paradigms, and guide development of new understanding. We used a case-study approach to analyze each corner of the paradigm-triangle: i) paradigms that have strong theory but are constrained by weak linkages with measurements or models, ii) paradigms with robust models that have weak linkages with theory or measurements, and iii) paradigms with many measurements but little theoretical support or ability to be parameterized in numerical models. We conclude that established models like CENTURY dominate because theory and measurements that underlie the model form strong linkages that previously created a balanced triangle. Evolving paradigms based on physical protection and microbial agency are still struggling to gain traction because the theory is challenging to represent in models. 
The explicit examination of the strengths of emerging paradigms can, therefore, help refine and accelerate our ability to constrain projections of soil C dynamics.
USDA-ARS?s Scientific Manuscript database
This paper compares three remote sensing-based models for estimating evapotranspiration (ET), namely the Surface Energy Balance System (SEBS), the Two-Source Energy Balance (TSEB) model, and the surface Temperature-Vegetation index Triangle (TVT). The models used as input MODIS/TERRA products and gr...
Experimental validation of numerical simulations on a cerebral aneurysm phantom model
Seshadhri, Santhosh; Janiga, Gábor; Skalej, Martin; Thévenin, Dominique
2012-01-01
The treatment of cerebral aneurysms, found in roughly 5% of the population and associated in case of rupture with a high mortality rate, is a major challenge for neurosurgery and neuroradiology due to the complexity of the intervention and the resulting high hazard ratio. Improvements are possible but require a better understanding of the associated unsteady blood flow patterns in complex 3D geometries. It would be very useful to carry out such studies using suitable numerical models, provided it is proven that they reproduce the real conditions accurately enough. This validation step is classically based on comparisons with measured data. Since in vivo measurements are extremely difficult and therefore of limited accuracy, complementary model-based investigations considering realistic configurations are essential. In the present study, simulations based on computational fluid dynamics (CFD) have been compared with in situ laser-Doppler velocimetry (LDV) measurements in the phantom model of a cerebral aneurysm. The employed 1:1 model is made from transparent silicone. A liquid mixture composed of water, glycerin, xanthan gum and sodium chloride has been specifically adapted for the present investigation. It shows physical flow properties similar to real blood and leads to a refraction index perfectly matched to that of the silicone model, allowing accurate optical measurements of the flow velocity. For both experiments and simulations, complex pulsatile flow waveforms and flow rates were accounted for. This finally allows a direct, quantitative comparison between measurements and simulations. In this manner, the accuracy of the employed computational model can be checked. PMID:24265876
Madsen, Kristoffer H; Ewald, Lars; Siebner, Hartwig R; Thielscher, Axel
2015-01-01
Field calculations for transcranial magnetic stimulation (TMS) are increasingly implemented online in neuronavigation systems and in more realistic offline approaches based on finite-element methods. They are often based on simplified and/or non-validated models of the magnetic vector potential of the TMS coils. To develop an approach to reconstruct the magnetic vector potential based on automated measurements. We implemented a setup that simultaneously measures the three components of the magnetic field with high spatial resolution. This is complemented by a novel approach to determine the magnetic vector potential via volume integration of the measured field. The integration approach reproduces the vector potential with very good accuracy. The vector potential distribution of a standard figure-of-eight shaped coil determined with our setup corresponds well with that calculated using a model reconstructed from x-ray images. The setup can supply validated models for existing and newly appearing TMS coils. Copyright © 2015 Elsevier Inc. All rights reserved.
Sippel, Sebastian; Lange, Holger; Mahecha, Miguel D.; ...
2016-10-20
Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. Here we demonstrate that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well.
Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics.
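One widely used ITQ, normalized permutation entropy, can be sketched in a few lines; this is a generic textbook formulation, not necessarily the exact quantifier set used in the paper.

```python
from itertools import permutations
from math import log

def permutation_entropy(series, order=3):
    """Normalized permutation entropy, a common information-theory
    quantifier (ITQ): the Shannon entropy of the distribution of
    ordinal patterns of length `order`, scaled to [0, 1]."""
    counts = {p: 0 for p in permutations(range(order))}
    n = 0
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # The ordinal pattern is the argsort of the window.
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] += 1
        n += 1
    probs = [c / n for c in counts.values() if c]
    h = -sum(p * log(p) for p in probs)
    return h / log(len(counts))  # divide by log(order!) to normalize

# A strictly increasing series uses a single ordinal pattern (entropy 0);
# a more irregular series spreads over more patterns (entropy nearer 1).
h_trend = permutation_entropy([1, 2, 3, 4, 5, 6, 7, 8])
h_rough = permutation_entropy([4, 1, 7, 3, 8, 2, 6, 5])
```

Because the quantifier depends only on ordinal patterns, it is robust to monotone transformations and calibration offsets, which is exactly the property that makes ITQ attractive for model-data comparison.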
Sippel, Sebastian; Mahecha, Miguel D.; Hauhs, Michael; Bodesheim, Paul; Kaminski, Thomas; Gans, Fabian; Rosso, Osvaldo A.
2016-01-01
Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. We demonstrate here that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. 
Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics. PMID:27764187
Langeveld, J G; Veldkamp, R G; Clemens, F
2005-01-01
Modelling suspended solids transport is a key issue for predicting the pollution load discharged by CSOs. Nonetheless, there is still much debate on the main drivers of suspended solids transport and on the modelling approach to be adopted. Current sewer models provide suspended solids transport models. These models, however, rely upon erosion-deposition criteria developed in fluvial environments, therewith oversimplifying the sewer sediment characteristics. Consequently, the performance of these models is poor from a theoretical point of view. To get an improved understanding of the temporal and spatial variations in suspended solids transport, a measuring network was installed in the sewer system of Loenen in conjunction with a hydraulic measuring network from June through December 2001. During the measuring period, 15 storm events rendered high-quality data on both the hydraulics and the turbidity. For each storm event, a hydrodynamic model was calibrated using Clemens' method. The conclusion of the paper is that modelling of suspended solids transport has been and will be one of the challenges in the field of urban drainage modelling. A direct relation of either shear stress or flow velocity with turbidity could not be found, likely because of the time-varying characteristics of the suspended solids.
Model-based RSA of a femoral hip stem using surface and geometrical shape models.
Kaptein, Bart L; Valstar, Edward R; Spoor, Cees W; Stoel, Berend C; Rozing, Piet M
2006-07-01
Roentgen stereophotogrammetry (RSA) is a highly accurate three-dimensional measuring technique for assessing micromotion of orthopaedic implants. A drawback is that markers have to be attached to the implant. Model-based techniques have been developed to avoid the use of specially marked implants. We compared two model-based RSA methods with standard marker-based RSA techniques. The first model-based RSA method uses surface models, and the second uses elementary geometrical shape (EGS) models. We used a commercially available stem to perform experiments with a phantom as well as reanalysis of patient RSA radiographs. The data from the phantom experiment indicated that the accuracy and precision of the EGS model-based RSA method are equal to those of marker-based RSA. For model-based RSA using surface models, the accuracy is equal to that of marker-based RSA, but its precision is worse. We found no difference in accuracy and precision between the two model-based RSA techniques in the clinical data. For this particular hip stem, EGS model-based RSA is a good alternative to marker-based RSA.
New geometric design consistency model based on operating speed profiles for road safety evaluation.
Camacho-Torregrosa, Francisco J; Pérez-Zuriaga, Ana M; Campoy-Ungría, J Manuel; García-García, Alfredo
2013-12-01
To assist in the on-going effort to reduce road fatalities as much as possible, this paper presents a new methodology to evaluate road safety in both the design and redesign stages of two-lane rural highways. This methodology is based on the analysis of road geometric design consistency, a value which will be a surrogate measure of the safety level of the two-lane rural road segment. The consistency model presented in this paper is based on the consideration of continuous operating speed profiles. The models used for their construction were obtained by using an innovative GPS-data collection method that is based on continuous operating speed profiles recorded from individual drivers. This new methodology allowed the researchers to observe the actual behavior of drivers and to develop more accurate operating speed models than was previously possible with spot-speed data collection, thereby enabling a more accurate approximation to the real phenomenon and thus a better consistency measurement. Operating speed profiles were built for 33 Spanish two-lane rural road segments, and several consistency measurements based on the global and local operating speed were checked. The final consistency model takes into account not only the global dispersion of the operating speed, but also some indexes that consider both local speed decelerations and speeds over posted speeds as well. For the development of the consistency model, the crash frequency for each study site was considered, which allowed estimating the number of crashes on a road segment by means of the calculation of its geometric design consistency. Consequently, the presented consistency evaluation method is a promising innovative tool that can be used as a surrogate measure to estimate the safety of a road segment. Copyright © 2012 Elsevier Ltd. All rights reserved.
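Simple stand-ins for the consistency indexes described above (global dispersion of the operating speed, local decelerations, and speeds over the posted speed) can be sketched as follows; the profile values and the exact index definitions are assumptions, not the calibrated Spanish models.

```python
def consistency_indices(speed_profile_kmh, posted_kmh=90.0):
    """Three illustrative consistency measures over an operating speed
    profile (assumed simplifications of the paper's indexes): global
    dispersion of speed, the largest local deceleration between
    consecutive profile points, and the fraction of points over the
    posted speed."""
    n = len(speed_profile_kmh)
    mean = sum(speed_profile_kmh) / n
    dispersion = (sum((v - mean) ** 2 for v in speed_profile_kmh) / n) ** 0.5
    # Positive difference between consecutive points = a deceleration.
    max_decel = max(a - b for a, b in
                    zip(speed_profile_kmh, speed_profile_kmh[1:]))
    over_posted = sum(1 for v in speed_profile_kmh if v > posted_kmh) / n
    return dispersion, max_decel, over_posted

# Hypothetical V85 operating speed profile along a segment (km/h).
profile = [95.0, 97.0, 80.0, 70.0, 85.0, 92.0]
disp, decel, over = consistency_indices(profile)
```

A segment with high dispersion and sharp local decelerations would score as inconsistent, flagging it for safety review before crash data accumulate.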
Martens, Astrid L; Bolte, John F B; Beekhuizen, Johan; Kromhout, Hans; Smid, Tjabe; Vermeulen, Roel C H
2015-10-01
Epidemiological studies on the potential health effects of RF-EMF from mobile phone base stations require efficient and accurate exposure assessment methods. Previous studies have demonstrated that the 3D geospatial model NISMap is able to rank locations by indoor and outdoor RF-EMF exposure levels. This study extends previous work by evaluating the suitability of using NISMap to estimate indoor RF-EMF exposure levels at home as a proxy for personal exposure to RF-EMF from mobile phone base stations. For 93 individuals in the Netherlands we measured personal exposure to RF-EMF from mobile phone base stations during a 24h period using an EME-SPY 121 exposimeter. Each individual kept a diary from which we extracted the time spent at home and in the bedroom. We used NISMap to model exposure at the home address of the participant (at bedroom height). We then compared model predictions with measurements for the 24h period, when at home, and in the bedroom by the Spearman correlation coefficient (rsp) and by calculating specificity and sensitivity using the 90th percentile of the exposure distribution as a cutpoint for high exposure. We found a low to moderate rsp of 0.36 for the 24h period, 0.51 for measurements at home, and 0.41 for measurements in the bedroom. The specificity was high (0.9) but with a low sensitivity (0.3). These results indicate that a meaningful ranking of personal RF-EMF can be achieved, even though the correlation between model predictions and 24h personal RF-EMF measurements is lower than with at-home measurements. However, the use of at-home RF-EMF field predictions from mobile phone base stations in epidemiological studies leads to significant exposure misclassification that will result in a loss of statistical power to detect health effects. Copyright © 2015 Elsevier Inc. All rights reserved.
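The comparison machinery described above, Spearman rank correlation plus sensitivity/specificity at a 90th-percentile cutpoint, can be sketched as follows; the data and the exact percentile convention are assumptions for illustration.

```python
def rank(values):
    """Ranks of values (0 = smallest); assumes distinct values (no ties)."""
    order = sorted(range(len(values)), key=values.__getitem__)
    r = [0] * len(values)
    for rank_i, idx in enumerate(order):
        r[idx] = rank_i
    return r

def spearman(xs, ys):
    """Spearman rank correlation via the classic d^2 formula (no ties)."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def sens_spec(model, measured, pct=0.9):
    """Sensitivity/specificity of the model flagging 'high' exposure,
    with 'high' defined by each distribution's 90th percentile
    (the cutpoint used in the study; implementation details assumed)."""
    cut_mod = sorted(model)[int(pct * len(model))]
    cut_mea = sorted(measured)[int(pct * len(measured))]
    tp = sum(m >= cut_mod and x >= cut_mea for m, x in zip(model, measured))
    fn = sum(m < cut_mod and x >= cut_mea for m, x in zip(model, measured))
    tn = sum(m < cut_mod and x < cut_mea for m, x in zip(model, measured))
    fp = sum(m >= cut_mod and x < cut_mea for m, x in zip(model, measured))
    return tp / (tp + fn), tn / (tn + fp)

# Invented modeled vs. measured exposure levels for ten subjects.
model = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
measured = [2, 1, 4, 3, 6, 5, 8, 7, 9, 10]
rho = spearman(model, measured)
sens, spec = sens_spec(model, measured)
```

The rank correlation captures whether the model orders subjects correctly, while sensitivity/specificity at the cutpoint captures whether the highest-exposed subjects are correctly flagged, the two complementary views reported in the abstract.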
A Laser-Based Measuring System for Online Quality Control of Car Engine Block.
Li, Xing-Qiang; Wang, Zhong; Fu, Lu-Hua
2016-11-08
For online quality control of car engine production, the pneumatic measurement instrument plays an irreplaceable role in measuring diameters inside the engine block because of its portability and high accuracy. Owing to the limitation of its measuring principle, however, the working space between the pneumatic device and the measured surface is so small that manual operation is required. This lowers measuring efficiency and is an obstacle to automatic measurement. In this article, a high-speed, automatic measuring system is proposed to take the place of pneumatic devices by using a laser-based measuring unit. The measuring unit is considered as a set of several measuring modules, each of which acts like a single bore gauge and is made up of four laser triangulation sensors (LTSs) installed at different positions and in opposite directions. The spatial relationship among these LTSs was calibrated before measurement. Sampling points on the measured shaft holes can be collected by the measuring unit. A unified mathematical model was established for both calibration and measurement. Based on this model, the relative pose between the measuring unit and the measured workpiece does not affect the measuring accuracy. This frees the measuring unit from accurate positioning or adjustment and makes fast, automatic measurement possible. The proposed system and method were validated by experiments.
Specifying and Refining a Measurement Model for a Computer-Based Interactive Assessment
ERIC Educational Resources Information Center
Levy, Roy; Mislevy, Robert J.
2004-01-01
The challenges of modeling students' performance in computer-based interactive assessments include accounting for multiple aspects of knowledge and skill that arise in different situations and the conditional dependencies among multiple aspects of performance. This article describes a Bayesian approach to modeling and estimating cognitive models…
USDA-ARS?s Scientific Manuscript database
Process-based modeling provides detailed spatial and temporal information of the soil environment in the shallow seedling recruitment zone across field topography where measurements of soil temperature and water may not sufficiently describe the zone. Hourly temperature and water profiles within the...
Development of optical diagnostics for performance evaluation of arcjet thrusters
NASA Technical Reports Server (NTRS)
Cappelli, Mark A.
1995-01-01
Laser and optical emission-based measurements have been developed and implemented for use on low-power hydrogen arcjet thrusters and xenon-propelled electric thrusters. In the case of low-power hydrogen arcjets, the laser-induced fluorescence measurements constitute the first complete set of data characterizing the velocity and temperature fields of such a device. The research performed under the auspices of this NASA grant includes laser-based measurements of atomic hydrogen velocity and translational temperature, ultraviolet absorption measurements of ground-state atomic hydrogen, Raman scattering measurements of the electronic ground state of molecular hydrogen, and optical emission-based measurements of electronically excited atomic hydrogen, electron number density, and electron temperature. In addition, we have developed a collisional-radiative model of atomic hydrogen for use in conjunction with magnetohydrodynamic models to predict the plasma radiative spectrum, and near-electrode plasma models to better understand current transfer from the electrodes to the plasma. In the final year of the grant, a new program aimed at developing diagnostics for xenon plasma thrusters was initiated, and results on the use of diode lasers for interrogating Hall accelerator plasmas have been presented at recent conferences.
Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming
2014-10-24
An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with a three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method effectively avoids the friction torque and additional inertial moment present in conventional approaches. A curved-surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space; comparison with the conventional modeling method shows that it improves model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, so that the rotor orientation can be computed from the measured results and the analytical models. Experiments were conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation to the measured magnetic flux density. The experimental results show that the proposed method can measure the rotor orientation precisely and that measurement accuracy is improved by the novel 3D magnet array. The results could be used for real-time motion control of PM spherical actuators.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, J; Heins, D; Zhang, R
Purpose: To model the magnetic port in temporary breast tissue expanders and to improve the accuracy of dose calculation in Pinnacle, a commercial treatment planning system (TPS). Methods: The magnetic port in the tissue expander was modeled on the basis of radiological measurements; the dimensions and density of the model were determined from film images and from ion chamber measurements under the magnetic port, respectively. The model was then evaluated for various field sizes and photon energies by comparing depth dose values calculated by the TPS (using our new model) with ion chamber measurements in a water tank. The model was further evaluated using a simplified anthropomorphic phantom with realistic geometry by placing thermoluminescent dosimeters (TLDs) around the magnetic port. Dose perturbations in a real patient's treatment plan from the new model and from the current clinical model, which is based on subjective contouring by the dosimetrist, were also compared. Results: Dose calculations based on our model showed less than 1% difference from ion chamber measurements for various field sizes and energies under the magnetic port when the port was placed parallel to the phantom surface. When it was placed perpendicular to the phantom surface, the maximum difference was 3.5%, while average differences were less than 3.1% for all cases. For the simplified anthropomorphic phantom, the calculated point doses agreed with TLD measurements within 5.2%. Comparison with the model currently used clinically in the TPS showed that the clinical model overestimates the effect of the magnetic port. Conclusion: Our new model showed good agreement with measurement for all cases. It could potentially improve the accuracy of dose delivery to breast cancer patients.
NASA Astrophysics Data System (ADS)
Duan, Zheng; Bastiaanssen, W. G. M.
2017-02-01
The heat storage change (Q_t) can be a significant component of the energy balance in lakes, and it is important to account for Q_t for reasonable estimation of evaporation at monthly and finer timescales if energy balance-based evaporation models are used. However, Q_t has often been neglected because the required water temperature data are lacking. A simple hysteresis model (Q_t = a*Rn + b + c*dRn/dt) has been demonstrated to reasonably estimate Q_t from the readily available net all-wave radiation (Rn) and three locally calibrated coefficients (a-c) for lakes and reservoirs. As a follow-up study, we evaluated whether this hysteresis model enables energy balance-based evaporation models to yield good evaporation estimates. Representative monthly evaporation data were compiled from the published literature and used as ground truth to evaluate three energy balance-based evaporation models for five lakes. The three models, of differing complexity, are De Bruin-Keijman (DK), Penman, and a new model referred to as Duan-Bastiaanssen (DB). All three models require Q_t as input. Each model was run in three scenarios differing in the input Q_t (S1: measured Q_t; S2: modelled Q_t from the hysteresis model; S3: neglecting Q_t) to evaluate the impact of Q_t on the modelled evaporation. The modelled Q_t agreed well with the measured counterparts for all five lakes, confirming that the hysteresis model with locally calibrated coefficients can predict Q_t with good accuracy for the same lake. Using modelled Q_t as input, all three evaporation models yielded monthly evaporation comparable to that obtained with measured Q_t, and significantly better than that obtained when neglecting Q_t, for the five lakes. The DK model, which requires the least data, generally performed best, followed by the Penman and DB models.
This study demonstrated that, once the three coefficients are locally calibrated using historical data, the simple hysteresis model can provide reasonable Q_t to force energy balance-based evaporation models and improve evaporation modelling at monthly timescales for conditions and long-term periods when measured Q_t is not available. We call on the scientific community to further test and refine the hysteresis model in more lakes in different geographic locations and environments.
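A sketch of the local calibration step: the three coefficients of the hysteresis model can be estimated by ordinary least squares from Rn and Q_t series. The data below are synthetic and noise-free for clarity; units and magnitudes are illustrative only.

```python
import numpy as np

# Illustrative calibration of the hysteresis model Q_t = a*Rn + b + c*dRn/dt.
# Rn and Qt would come from lake measurements; here they are synthetic.
t = np.arange(365.0)                             # day of year
Rn = 150 + 100 * np.sin(2 * np.pi * t / 365)     # net radiation (synthetic), W/m^2
dRn_dt = np.gradient(Rn, t)                      # finite-difference dRn/dt
true_a, true_b, true_c = 0.6, -20.0, 15.0
Qt = true_a * Rn + true_b + true_c * dRn_dt      # "measured" heat storage change

# Least-squares estimate of the three locally calibrated coefficients.
X = np.column_stack([Rn, np.ones_like(Rn), dRn_dt])
(a, b, c), *_ = np.linalg.lstsq(X, Qt, rcond=None)
Qt_model = a * Rn + b + c * dRn_dt
print(f"a={a:.3f} b={b:.3f} c={c:.3f}")
```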
An image-based method to measure all-terrain vehicle dimensions for engineering safety purposes.
Jennissen, Charles A; Miller, Nathan S; Tang, Kaiyang; Denning, Gerene M
2014-04-01
All-terrain vehicle (ATV) crashes are a serious public health and safety concern. Engineering approaches that address ATV injury prevention are critically needed. Avenues to pursue include evidence-based seat design that discourages risky behaviours, such as carrying passengers and the operation of adult-size vehicles by children. The goal of this study was to create and validate an image-based method for measuring ATV seat length and placement. Publicly available ATV images were downloaded. Adobe Photoshop was then used to generate a vertical grid through the centre of the vehicle, to define the grid scale using the manufacturer's reported wheelbase, and to determine seat length and placement relative to the front and rear axles using this scale. Images that yielded a difference greater than 5% between the calculated and the manufacturer's reported ATV lengths were excluded from further analysis. For the 77 images that met the inclusion criteria, the mean±SD difference between calculated and reported vehicle length was 1.8%±1.2%. The Pearson correlation coefficients comparing image-based seat lengths determined by two independent measurers (20 models), and image-based lengths versus lengths measured at dealerships (12 models), were 0.95 and 0.96, respectively. The image-based method provides accurate and reproducible results for determining ATV measurements, including seat length and placement. This method greatly expands the number of ATV models that can be studied, and may be generalisable to other motor vehicle types. These measurements can be used to guide engineering approaches that improve ATV safety design.
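The wheelbase-based scaling step can be sketched directly; all pixel counts and lengths below are hypothetical.

```python
# Sketch of the image-scale calculation: the manufacturer's reported wheelbase
# sets the length-per-pixel scale, which then converts other pixel measurements.

def scale_from_wheelbase(wheelbase_px: float, wheelbase_in: float) -> float:
    """Inches per pixel, derived from the known wheelbase."""
    return wheelbase_in / wheelbase_px

def within_tolerance(calculated: float, reported: float, tol: float = 0.05) -> bool:
    """Inclusion criterion: calculated vs reported length differ by <= 5%."""
    return abs(calculated - reported) / reported <= tol

scale = scale_from_wheelbase(wheelbase_px=500.0, wheelbase_in=50.0)  # 0.1 in/px
seat_length_in = 320.0 * scale        # seat measured as 320 px -> 32.0 in
vehicle_len_in = 820.0 * scale        # calculated overall length, 82.0 in
print(within_tolerance(vehicle_len_in, reported=83.0))  # True: differs by ~1.2%
```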
Ambros Berger; Thomas Gschwantner; Ronald E. McRoberts; Klemens Schadauer
2014-01-01
National forest inventories typically estimate individual tree volumes using models that rely on measurements of predictor variables such as tree height and diameter, both of which are subject to measurement error. The aim of this study was to quantify the impacts of these measurement errors on the uncertainty of the model-based tree stem volume estimates. The impacts...
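The error-propagation question above can be illustrated with a Monte Carlo sketch; the volume formula (a generic form-factor model) and the error magnitudes below are assumptions for illustration, not taken from the study.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo propagation of measurement error through a generic stem volume
# model v = (pi/4) * d^2 * h * f (form factor f). Formula and error SDs are
# illustrative, not those of any national forest inventory.
d_true, h_true, f = 0.30, 25.0, 0.5        # diameter (m), height (m), form factor
sd_d, sd_h = 0.005, 0.5                    # assumed measurement-error SDs

n = 100_000
d = d_true + rng.normal(0, sd_d, n)        # perturbed diameter measurements
h = h_true + rng.normal(0, sd_h, n)        # perturbed height measurements
v = math.pi / 4 * d**2 * h * f             # resulting volume estimates

v_true = math.pi / 4 * d_true**2 * h_true * f
print(f"true volume {v_true:.4f} m^3, MC mean {v.mean():.4f}, MC sd {v.std():.4f}")
```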
Modeling the Performance of Direct-Detection Doppler Lidar Systems in Real Atmospheres
NASA Technical Reports Server (NTRS)
McGill, Matthew J.; Hart, William D.; McKay, Jack A.; Spinhirne, James D.
1999-01-01
Previous modeling of the performance of spaceborne direct-detection Doppler lidar systems has assumed extremely idealized atmospheric models. Here we develop a technique for modeling the performance of these systems in a more realistic atmosphere, based on actual airborne lidar observations. The resulting atmospheric model contains cloud and aerosol variability that is absent in other simulations of spaceborne Doppler lidar instruments. To produce a realistic simulation of daytime performance, we include solar radiance values that are based on actual measurements and are allowed to vary as the viewing scene changes. Simulations are performed for two types of direct-detection Doppler lidar systems: the double-edge and the multi-channel techniques. Both systems were optimized to measure winds from Rayleigh backscatter at 355 nm. Simulations show that the measurement uncertainty during daytime is degraded by only about 10-20% compared to nighttime performance, provided a proper solar filter is included in the instrument design.
NASA Astrophysics Data System (ADS)
Wei, Yu; Chen, Wang; Lin, Yu
2013-05-01
Recent studies in the econophysics literature reveal that price variability has fractal and multifractal characteristics not only in developed financial markets but also in emerging markets. Taking high-frequency intraday quotes of the Shanghai Stock Exchange Component (SSEC) Index as an example, this paper proposes a new method to measure daily Value-at-Risk (VaR) by combining the newly introduced multifractal volatility (MFV) model with the extreme value theory (EVT) method. Two VaR backtesting techniques are then employed to compare the performance of this model with that of a group of linear and nonlinear generalized autoregressive conditional heteroskedasticity (GARCH) models. The empirical results confirm the multifractal nature of price volatility in the Chinese stock market. VaR measures based on the multifractal volatility model and the EVT method outperform many GARCH-type models at high risk levels.
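A minimal sketch of the backtesting idea: a rolling historical-simulation VaR (a much simpler stand-in for the MFV-EVT model, which is not reproduced here) is computed on synthetic heavy-tailed returns, and violations are counted against the nominal level.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic heavy-tailed daily returns as a stand-in for SSEC intraday data.
returns = rng.standard_t(df=4, size=2000) * 0.01
alpha = 0.01                                   # nominal level for 99% VaR

# Rolling historical-simulation VaR: left-tail quantile of the past 500 returns.
window = 500
violations = 0
n_tests = 0
for t in range(window, len(returns)):
    var_t = np.quantile(returns[t - window:t], alpha)  # negative threshold
    n_tests += 1
    if returns[t] < var_t:                             # loss exceeds VaR
        violations += 1

# A well-calibrated model should violate at roughly the nominal rate.
print(f"violation rate {violations / n_tests:.3f} (nominal {alpha})")
```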
Leong, Siow Hoo; Ong, Seng Huat
2017-01-01
This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model-based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering, with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features from the MBF suggests domain adaptation, that is, changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model-based algorithms in reproducing images with a higher structural similarity index.
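A minimal sketch of the mixture-model machinery using scikit-learn: fit Gaussian mixtures to pixel features and adapt the number of components, here by BIC as a simple stand-in for the paper's MBF-driven domain adaptation (the scan-and-select and MBF steps are not reproduced).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Synthetic 1-D pixel intensities drawn from two well-separated regions.
pixels = np.vstack([
    rng.normal(0.2, 0.05, (300, 1)),   # dark region
    rng.normal(0.8, 0.05, (300, 1)),   # bright region
])

# Choose the number of mixture components by minimum BIC.
best_k, best_bic = None, np.inf
for k in range(1, 5):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(pixels)
    bic = gmm.bic(pixels)
    if bic < best_bic:
        best_k, best_bic = k, bic

print(f"selected {best_k} components")
```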
McGill, M J; Hart, W D; McKay, J A; Spinhirne, J D
1999-10-20
Previous modeling of the performance of spaceborne direct-detection Doppler lidar systems assumed extremely idealized atmospheric models. Here we develop a technique for modeling the performance of these systems in a more realistic atmosphere, based on actual airborne lidar observations. The resulting atmospheric model contains cloud and aerosol variability that is absent in other simulations of spaceborne Doppler lidar instruments. To produce a realistic simulation of daytime performance, we include solar radiance values that are based on actual measurements and are allowed to vary as the viewing scene changes. Simulations are performed for two types of direct-detection Doppler lidar system: the double-edge and the multichannel techniques. Both systems were optimized to measure winds from Rayleigh backscatter at 355 nm. Simulations show that the measurement uncertainty during daytime is degraded by only approximately 10-20% compared with nighttime performance, provided that a proper solar filter is included in the instrument design.
NASA Astrophysics Data System (ADS)
Di, Nur Faraidah Muhammad; Satari, Siti Zanariah
2017-05-01
Outlier detection in linear data sets has been studied extensively, but only a small amount of work has been done on outlier detection in circular data. In this study, we propose methods for detecting multiple outliers in circular regression models based on a clustering algorithm. Clustering techniques utilize a distance measure to define the distance between data points. Here, we introduce a similarity distance based on Euclidean distance for the circular model and obtain a cluster tree using the single-linkage clustering algorithm. Then, a stopping rule for the cluster tree based on the mean direction and circular standard deviation of the tree height is proposed. We classify cluster groups that exceed the stopping rule as potential outliers. Our aim is to demonstrate the effectiveness of the proposed algorithms with the similarity distances in detecting outliers. The proposed methods are found to perform well and to be applicable to circular regression models.
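The clustering-with-circular-distance idea can be sketched as follows. The angles, the 20° cut height, and the use of SciPy's single-linkage routines are illustrative choices; the paper's mean-direction/circular-SD stopping rule is not reproduced.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Single-linkage clustering of circular observations using the wrap-around
# (shorter-arc) distance on the circle.
theta = np.deg2rad([2, 5, 8, 181, 184, 90])   # angles; 90 deg acts as an outlier

diff = np.abs(theta[:, None] - theta[None, :])
circ_dist = np.minimum(diff, 2 * np.pi - diff)     # distance along the circle

Z = linkage(squareform(circ_dist, checks=False), method="single")
labels = fcluster(Z, t=np.deg2rad(20), criterion="distance")
print(labels)  # groups {2,5,8} and {181,184}; 90 deg is left as a singleton
```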
NASA Astrophysics Data System (ADS)
Escalante, George
2017-05-01
Weak Value Measurements (WVMs) with pre- and post-selected quantum mechanical ensembles were proposed by Aharonov, Albert, and Vaidman in 1988 and have found numerous applications in both theoretical and applied physics. In the field of precision metrology, WVM techniques have been demonstrated and proven valuable as a means to shift, amplify, and detect signals and to make precise measurements of small effects in both quantum and classical systems, including particle spin, the spin Hall effect of light, optical beam deflections, frequency shifts, field gradients, and many others. In principle, WVM amplification techniques are also possible in radar and could be a valuable tool for precision measurements, but relatively little research has been done in this area. This article presents a quantum-inspired model of radar range and range-rate measurements of arbitrary strength, including standard and pre- and post-selected measurements. The model is used to extend WVM amplification theory to radar, with the receive filter performing the post-selection role. It is shown that the description of range and range-rate measurements based on the quantum-mechanical measurement model and formalism produces the same results as the conventional approach used in radar, based on signal processing and filtering of the reflected signal at the radar receiver. Numerical simulation results using simple point-scatterer configurations are presented, applying the quantum-inspired model to radar range and range-rate measurements in the weak measurement regime. Potential applications and benefits of the quantum-inspired approach to radar measurements are discussed, including improved range and Doppler measurement resolution.
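The amplification effect that motivates this work can be shown numerically: the weak value A_w = ⟨φ|A|ψ⟩ / ⟨φ|ψ⟩ of an observable lies far outside its eigenvalue range when the pre- and post-selected states are nearly orthogonal. The specific spin states below are arbitrary illustrations, not taken from the article.

```python
import numpy as np

# Weak value of the Pauli-z observable (eigenvalues +/-1) with nearly
# orthogonal pre-/post-selected spin states, showing |A_w| >> 1.
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

eps = 0.01
psi = np.array([np.cos(np.pi / 4 + eps), np.sin(np.pi / 4 + eps)])  # pre-selected
phi = np.array([np.cos(np.pi / 4), -np.sin(np.pi / 4)])             # post-selected

A_w = (phi.conj() @ sigma_z @ psi) / (phi.conj() @ psi)   # weak value
print(f"weak value {A_w.real:.1f}")   # magnitude ~1/eps, far outside [-1, 1]
```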
The relative effectiveness of computer-based and traditional resources for education in anatomy.
Khot, Zaid; Quinlan, Kaitlyn; Norman, Geoffrey R; Wainman, Bruce
2013-01-01
There is increasing use of computer-based resources to teach anatomy, although no study has compared computer-based learning to traditional methods. In this study, we examine the effectiveness of three formats of anatomy learning: (1) a virtual reality (VR) computer-based module, (2) a static computer-based module providing Key Views (KV), and (3) a plastic model. We conducted a controlled trial in which 60 undergraduate students had ten minutes to study the names of 20 different pelvic structures. The outcome measure was a 25-item short-answer test consisting of 15 nominal and 10 functional questions, based on a cadaveric pelvis. All subjects also took a brief mental rotations test (MRT) as a measure of spatial ability, used as a covariate in the analysis. Data were analyzed with repeated measures ANOVA. The group learning from the model performed significantly better than the other two groups on the nominal questions (Model 67%; KV 40%; VR 41%; effect sizes 1.19 and 1.29, respectively). There was no difference between the KV and VR groups. There was no difference between the groups on the functional questions (Model 28%; KV 23%; VR 25%). Computer-based learning resources appear to have significant disadvantages compared to traditional specimens in learning nominal anatomy. Consistent with previous research, virtual reality shows no advantage over static presentation of key views. © 2013 American Association of Anatomists.
NASA Astrophysics Data System (ADS)
Rosyidah, T. H.; Firman, H.; Rusyati, L.
2017-02-01
This research compared virtual and paper-based tests for measuring students' critical thinking based on the VAK (Visual-Auditory-Kinesthetic) learning style model. A quasi-experimental method with a one-group post-test-only design was applied to analyze the data. Forty eighth-grade students at a public junior high school in Bandung formed the sample. The quantitative data were obtained through 26 questions about living things and environmental sustainability, constructed around the eight elements of critical thinking and provided in both virtual and paper-based forms. Analysis of the results shows that scores for visual, auditory, and kinesthetic learners did not differ significantly between the virtual and paper-based tests. In addition, these results were supported by a questionnaire on students' responses to the virtual test, which scored 3.47 on a scale of 4, meaning that students responded positively in all aspects measured: interest, impression, and expectation.
NASA Astrophysics Data System (ADS)
Geisinger, Armin; Behrendt, Andreas; Wulfmeyer, Volker; Strohbach, Jens; Förstner, Jochen; Potthast, Roland
2017-12-01
A new backscatter lidar forward operator was developed which is based on the distinct calculation of the aerosols' backscatter and extinction properties. The forward operator was adapted to the COSMO-ART ash dispersion simulation of the Eyjafjallajökull eruption in 2010. While the particle number concentration was provided as a model output variable, the scattering properties of each individual particle type were determined by dedicated scattering calculations. Sensitivity studies were performed to estimate the uncertainties related to the assumed particle properties. Scattering calculations for several types of non-spherical particles required the use of T-matrix routines. Because the backscatter and extinction properties of the model's volcanic ash size classes are calculated distinctly, the sensitivity studies could be made for each size class individually, which is not possible for forward models based on a fixed lidar ratio. Finally, the forward-modeled lidar profiles were compared to automated ceilometer lidar (ACL) measurements both qualitatively and quantitatively, with the attenuated backscatter coefficient chosen as a suitable physical quantity. As the ACL measurements were not calibrated automatically, their calibration had to be performed using satellite lidar and ground-based Raman lidar measurements. A slight overestimation of the model-predicted volcanic ash number density was observed. Major requirements for future assimilation of ACL data have been identified, namely the availability of calibrated lidar measurement data, a scattering database for atmospheric aerosols, a better representation and coverage of aerosols by the ash dispersion model, and further investigation into backscatter lidar forward operators that calculate the backscatter coefficient directly for each individual aerosol type. The introduced forward operator offers the flexibility to be adapted to a multitude of model systems and measurement setups.
NASA Astrophysics Data System (ADS)
Faramarzi, Farhad; Mansouri, Hamid; Farsangi, Mohammad Ali Ebrahimi
2014-07-01
The environmental effects of blasting must be controlled in order to comply with regulatory limits. Because of safety concerns, the risk of damage to infrastructure, equipment, and property, and the need for good fragmentation, flyrock control is crucial in blasting operations. If measures to decrease flyrock are taken, the flyrock distance is limited and, in return, the risk of damage can be reduced or eliminated. This paper deals with modeling the level of risk associated with flyrock, and with flyrock distance prediction, based on the rock engineering systems (RES) methodology. In the proposed models, 13 parameters affecting flyrock due to blasting are considered as inputs, and the flyrock distance and associated level of risk as outputs. In selecting the input data, the ease of measuring each parameter was also taken into account. Data from 47 blasts carried out at the Sungun copper mine, western Iran, were used to predict the level of risk and flyrock distance corresponding to each blast. The results showed that, for these 47 blasts, the estimated risk levels are mostly in accordance with the measured flyrock distances. Furthermore, a comparison was made between the results of the flyrock distance predictive RES-based model, a multivariate regression analysis model (MVRM), and a dimensional analysis model. For the RES-based model, R² and root mean square error (RMSE) are 0.86 and 10.01, respectively, whereas for the MVRM and dimensional analysis, R² and RMSE are (0.84 and 12.20) and (0.76 and 13.75), respectively. These results confirm the better performance of the RES-based model over the other proposed models.
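The R² and RMSE figures quoted above are standard fit statistics; a minimal sketch of how they are computed from measured and predicted flyrock distances (the numbers below are made up for illustration):

```python
import numpy as np

# Coefficient of determination R^2 and root mean square error between
# measured and model-predicted flyrock distances (hypothetical values, m).
measured = np.array([65.0, 80.0, 95.0, 120.0, 140.0])
predicted = np.array([70.0, 78.0, 100.0, 110.0, 145.0])

residuals = measured - predicted
rmse = np.sqrt(np.mean(residuals**2))
ss_res = np.sum(residuals**2)                           # residual sum of squares
ss_tot = np.sum((measured - measured.mean()) ** 2)      # total sum of squares
r2 = 1 - ss_res / ss_tot

print(f"R^2={r2:.2f} RMSE={rmse:.2f}")
```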
Wu, Defeng; Chen, Tianfei; Li, Aiguo
2016-08-30
A robot-based three-dimensional (3D) measurement system is presented. In the presented system, a structured light vision sensor is mounted on the arm of an industrial robot. Measurement accuracy is one of the most important aspects of any 3D measurement system. To improve the measuring accuracy of the structured light vision sensor, a novel sensor calibration approach is proposed to improve the calibration accuracy. The approach is based on a number of fixed concentric circles manufactured in a calibration target. The concentric circle is employed to determine the real projected centres of the circles. Then, a calibration point generation procedure is used with the help of the calibrated robot. When enough calibration points are ready, the radial alignment constraint (RAC) method is adopted to calibrate the camera model. A multilayer perceptron neural network (MLPNN) is then employed to identify the calibration residuals after the application of the RAC method. Therefore, the hybrid pinhole model and the MLPNN are used to represent the real camera model. Using a standard ball to validate the effectiveness of the presented technique, the experimental results demonstrate that the proposed novel calibration approach can achieve a highly accurate model of the structured light vision sensor.
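The hybrid model-plus-network idea can be sketched in miniature: a simple parametric model captures the bulk of a mapping and an MLP learns the residual. The 1-D data, the linear "parametric" part, and the scikit-learn MLP settings below are illustrative stand-ins, not the paper's pinhole/RAC calibration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

# Synthetic 1-D "sensor response": a linear part plus a smooth distortion.
x = rng.uniform(-1, 1, (500, 1))
y = 2.0 * x[:, 0] + 0.3 * np.sin(4 * x[:, 0])

linear_pred = 2.0 * x[:, 0]          # parametric (pinhole-like) part of the model
residual = y - linear_pred           # what the neural network must learn

# MLP identifies the calibration residual, as in the hybrid approach.
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
mlp.fit(x, residual)

hybrid = linear_pred + mlp.predict(x)
rmse = np.sqrt(np.mean((hybrid - y) ** 2))
print(f"hybrid RMSE {rmse:.4f}")     # far below the residual's spread (~0.21)
```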
Achleitner, S; De Toffol, S; Engelhard, C; Rauch, W
2005-01-01
In river stretches subject to flow regulation, usually for energy production (e.g. hydropower) or flood protection (river barrage), a special measure can be taken against the effects of combined sewer overflows (CSOs). The basic idea is a temporary increase of the river base flow during storm weather as an in-stream measure to mitigate CSO spilling, with the focus on mitigating the negative effects of acute pollutant loads. The measure can be seen as an application of the classic real-time control (RTC) concept to the river system. Upstream gate operation is to be based on real-time monitoring and forecasting of precipitation. The main objective is the development of a model-based predictive control system for gate operation, by modelling the overall wastewater system (including the receiving water). The main emphasis is put on the operational strategy and the appropriate short-term forecast of spilling events. The potential of the measure is tested for the application of the operational strategy and its ecological and economic feasibility. The implementation of such an in-stream measure into a hydropower plant's operational scheme is unique. Its advantages are (a) the additional in-stream dilution of acute pollutants entering the receiving water and (b) the resulting minimization of the required CSO storage volume.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-15
... that is based on rigorous scientifically based research methods to assess the effectiveness of a...) Relies on measurements or observational methods that provide reliable and valid data across evaluators... of innovative, cohesive models that are based on research and have demonstrated that they effectively...
Comparison of Reliability Measures under Factor Analysis and Item Response Theory
ERIC Educational Resources Information Center
Cheng, Ying; Yuan, Ke-Hai; Liu, Cheng
2012-01-01
Reliability of test scores is one of the most pervasive psychometric concepts in measurement. Reliability coefficients based on a unifactor model for continuous indicators include maximal reliability rho and an unweighted sum-score-based omega, among many others. With the increasing popularity of item response theory, a parallel reliability measure pi…
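The sum-score-based omega mentioned above can be computed directly from a unifactor model's estimated loadings, using the standard formula omega = (Σλ)² / ((Σλ)² + Σψ). This sketch assumes standardized indicators and hypothetical loadings.

```python
import numpy as np

# Coefficient omega for a unifactor model with loadings lambda_i and unique
# variances psi_i; for standardized indicators, psi_i = 1 - lambda_i^2.
loadings = np.array([0.7, 0.6, 0.8, 0.5])   # hypothetical factor loadings
uniquenesses = 1 - loadings**2               # unique variances

omega = loadings.sum() ** 2 / (loadings.sum() ** 2 + uniquenesses.sum())
print(f"omega = {omega:.3f}")
```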
NASA Technical Reports Server (NTRS)
Caulfield, John; Crosson, William L.; Inguva, Ramarao; Laymon, Charles A.; Schamschula, Marius
1998-01-01
This is a follow-up to the preceding presentation by Crosson and Schamschula. The grid size for remote microwave measurements is much coarser than the hydrological model's computational grids. To validate the hydrological models against measurements, we propose mechanisms to disaggregate the microwave measurements to allow comparison with outputs from the hydrological models. Weighted interpolation and Bayesian methods are proposed to facilitate the comparison. While remote measurements occur at a large scale, they reflect underlying small-scale features. We can give continuing estimates of the small-scale features by correcting the simple 0th-order estimate, updating each small-scale model with each large-scale measurement using a straightforward method based on Kalman filtering.
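The Kalman-style correction can be sketched for a single coarse pixel covering several fine model cells; the moisture values and error variances below are hypothetical.

```python
import numpy as np

# Small-scale (model-grid) soil moisture estimates are nudged so their mean
# moves toward a coarse microwave observation, using a scalar Kalman gain.
fine = np.array([0.18, 0.22, 0.25, 0.20])   # model soil moisture, 4 fine cells
coarse_obs = 0.24                            # one microwave pixel over all cells

var_model, var_obs = 0.002, 0.001            # assumed error variances
innovation = coarse_obs - fine.mean()        # large-scale mismatch
gain = var_model / (var_model + var_obs)     # scalar Kalman gain

fine_updated = fine + gain * innovation      # each cell shares the correction
print(fine_updated.mean())                   # pulled toward the observation
```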
Sloto, Ronald A.
2008-01-01
The Pocono Creek watershed drains 46.5 square miles in eastern Monroe County, Pa. Between 2000 and 2020, the population of Monroe County is expected to increase by 70 percent, which will result in substantial changes in land-use patterns. An evaluation of the effect of reduced recharge from land-use changes and additional ground-water withdrawals on stream base flow was done by the U.S. Geological Survey (USGS) in cooperation with the U.S. Environmental Protection Agency (USEPA) and the Delaware River Basin Commission as part of the USEPA's Framework for Sustainable Watershed Management Initiative. Two models were used. A Soil and Water Assessment Tool (SWAT) model developed by the USEPA provided areal recharge values for 2000 land use and projected full buildout land use. The USGS MODFLOW-2000 ground-water-flow model was used to estimate the effect of reduced recharge from changes in land use and additional ground-water withdrawals on stream base flow. This report describes the ground-water-flow-model simulations. The Pocono Creek watershed is underlain by sedimentary rock of Devonian age, which is overlain by a veneer of glacial deposits. All water-supply wells are cased into and derive water from the bedrock. In the ground-water-flow model, the surficial geologic units were grouped into six categories: (1) moraine deposits, (2) stratified drift, (3) lake deposits, (4) outwash, (5) swamp deposits, and (6) undifferentiated deposits. The unconsolidated surficial deposits are not used as a source of water. The ground-water and surface-water systems are well connected in the Pocono Creek watershed. Base flow measured on October 13, 2004, at 27 sites for model calibration showed that streams gained water between all sites measured except in the lower reach of Pocono Creek. The ground-water-flow model included the entire Pocono Creek watershed. Horizontally, the modeled area was divided into a 53 by 155 cell grid with 6,060 active cells.
Vertically, the modeled area was discretized into four layers. Layers 1 and 2 represented the unconsolidated surficial deposits where they are present and bedrock where the surficial deposits are absent. Layer 3 represented shallow bedrock and was 200 ft (feet) thick. Layer 4 represented deep bedrock and was 300 ft thick. A total of 873 cells representing streams were assigned to layer 1. Recharge rates for model calibration were provided by the USEPA SWAT model for 2000 land-use conditions. Recharge rates for 2000 for the 29 subwatersheds in the SWAT model ranged from 6.11 to 22.66 inches per year. Because the ground-water-flow model was calibrated to base-flow data collected on October 13, 2004, the 2000 recharge rates were multiplied by 1.18 so the volume of recharge was equal to the volume of streamflow measured at the mouth of Pocono Creek. During model calibration, adjustments were made to aquifer hydraulic conductivity and streambed conductance. Simulated base flows and hydraulic heads were compared to measured base flows and hydraulic heads using the root mean squared error (RMSE) between measured and simulated values. The RMSE of the calibrated model for base flow was 4.7 cubic feet per second for 27 locations, and the RMSE for hydraulic heads for 15 locations was 35 ft. The USEPA SWAT model was used to provide areal recharge values for 2000 and full buildout land-use conditions. The change in recharge ranged from an increase of 37.8 percent to a decrease of 60.8 percent. The ground-water-flow model was used to simulate base flow for 2000 and full buildout land-use conditions using steady-state simulations. The decrease in simulated base flow ranged from 3.8 to 63 percent at the streamflow-measurement sites. Simulated base flow at streamflow-gaging station Pocono Creek above Wigwam Run near Stroudsburg, Pa. (01441495), decreased 25 percent. 
This is in general agreement with the SWAT model, which estimated a 30.6-percent loss in base flow at the streamflow-gaging station.
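The calibration step above compares simulated and measured values through the root mean squared error (RMSE). A minimal illustration of that statistic follows; the base-flow values are invented, not the report's data.

```python
# Illustrative RMSE between measured and simulated base flows, as used
# in the MODFLOW calibration described above.  Values are made up.
import math

def rmse(measured, simulated):
    n = len(measured)
    return math.sqrt(sum((m - s) ** 2 for m, s in zip(measured, simulated)) / n)

measured_cfs  = [12.0, 8.5, 20.1, 5.3]   # hypothetical base flows, ft^3/s
simulated_cfs = [10.5, 9.0, 24.0, 4.8]
err = rmse(measured_cfs, simulated_cfs)
```

The report quotes an RMSE of 4.7 ft³/s over 27 base-flow sites, computed in exactly this way over the calibration dataset.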
NASA Technical Reports Server (NTRS)
Dougherty, N. S.; Johnson, S. L.
1993-01-01
Multiple rocket exhaust plume interactions at high altitudes can produce base flow recirculation with attendant alteration of the base pressure coefficient and increased base heating. A search for a good wind tunnel benchmark problem to check grid clustering technique and turbulence modeling turned up the experiment done at AEDC in 1961 by Goethert and Matz on a 4.25-in. diameter domed missile base model with four rocket nozzles. This wind tunnel model with varied external bleed air flow for the base flow wake produced measured p/p(sub ref) at the center of the base as high as 3.3 due to plume flow recirculation back onto the base. At that time in 1961, relatively inexpensive experimentation with air at gamma = 1.4 and nozzle A(sub e)/A of 10.6 and theta(sub n) = 7.55 deg with P(sub c) = 155 psia simulated a LO2/LH2 rocket exhaust plume with gamma = 1.20, A(sub e)/A of 78 and P(sub c) about 1,000 psia. An array of base pressure taps on the aft dome gave a clear measurement of the plume recirculation effects at p(infinity) = 4.76 psfa corresponding to 145,000 ft altitude. Our CFD computations of the flow field with direct comparison of computed-versus-measured base pressure distribution (across the dome) provide detailed information on velocities and particle traces as well as eddy viscosity in the base and nozzle region. The solution was obtained using a six-zone mesh with 284,000 grid points for one quadrant taking advantage of symmetry. Results are compared using a zero-equation algebraic and a one-equation pointwise R(sub t) turbulence model (work in progress). Good agreement with the experimental pressure data was obtained with both, and this benchmark showed the importance of: (1) proper grid clustering and (2) proper choice of turbulence modeling for rocket plume problems/recirculation at high altitude.
Global Monthly CO2 Flux Inversion Based on Results of Terrestrial Ecosystem Modeling
NASA Astrophysics Data System (ADS)
Deng, F.; Chen, J.; Peters, W.; Krol, M.
2008-12-01
Most of our understanding of the sources and sinks of atmospheric CO2 has come from inverse studies of atmospheric CO2 concentration measurements. However, the number of currently available observation stations and our ability to simulate the diurnal planetary boundary layer evolution over continental regions essentially limit the number of regions that can be reliably inverted globally, especially over continental areas. In order to overcome these restrictions, a nested inverse modeling system was developed based on the Bayesian principle for estimating carbon fluxes of 30 regions in North America and 20 regions for the rest of the globe. Inverse modeling was conducted in monthly steps using CO2 concentration measurements of 5 years (2000-2005) with the following two models: (a) An atmospheric transport model (TM5) is used to generate the transport matrix, in which the diurnal variation of atmospheric CO2 concentration is considered to enhance the use of the afternoon-hour average CO2 concentration measurements over the continental sites. (b) A process-based terrestrial ecosystem model (BEPS) is used to produce hourly step carbon fluxes, which serve as the background of our inversion and could minimize the limitation due to our inability to solve the inverse problem at high resolution. We will present our recent results achieved through a combination of the bottom-up modeling with BEPS and the top-down modeling based on TM5 driven by offline meteorological fields generated by the European Centre for Medium-Range Weather Forecasts (ECMWF).
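The Bayesian synthesis at the core of such inversions can be illustrated in the scalar case: a prior flux (here, the ecosystem-model background) is combined with an atmospheric concentration constraint through a transport operator H. This is a generic sketch with invented numbers, not the paper's 50-region system.

```python
# Scalar Bayesian flux update: the posterior minimizes
#   J(x) = (x - x_prior)^2 / B + (z - H*x)^2 / R
# where B is the prior-flux error variance, z an observed concentration
# anomaly, H the (linearized) transport operator, and R the obs error.

def bayesian_update(x_prior, B, z, H, R):
    K = B * H / (H * H * B + R)          # Kalman-style gain
    return x_prior + K * (z - H * x_prior)

# Invented values: prior flux 1.0 (arbitrary units), observation 2.4.
x_post = bayesian_update(x_prior=1.0, B=0.25, z=2.4, H=2.0, R=0.5)
```

In the real system x is a vector of monthly regional fluxes, B and R are covariance matrices, and H is built by running TM5, but the posterior formula has the same structure.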
Development and testing of meteorology and air dispersion models for Mexico City
NASA Astrophysics Data System (ADS)
Williams, M. D.; Brown, M. J.; Cruz, X.; Sosa, G.; Streit, G.
Los Alamos National Laboratory and Instituto Mexicano del Petróleo are completing a joint study of options for improving air quality in Mexico City. We have modified a three-dimensional, prognostic, higher-order turbulence model for atmospheric circulation (HOTMAC) and a Monte Carlo dispersion and transport model (RAPTAD) to treat domains that include an urbanized area. We used the meteorological model to drive models which describe the photochemistry and air transport and dispersion. The photochemistry modeling is described in a separate paper. We tested the model against routine measurements and those of a major field program. During the field program, measurements included: (1) lidar measurements of aerosol transport and dispersion, (2) aircraft measurements of winds, turbulence, and chemical species aloft, (3) aircraft measurements of skin temperatures, and (4) Tethersonde measurements of winds and ozone. We modified the meteorological model to include provisions for time-varying synoptic-scale winds, adjustments for local wind effects, and detailed surface-coverage descriptions. We developed a new method to define mixing-layer heights based on model outputs. The meteorology and dispersion models were able to provide reasonable representations of the measurements and to define the sources of some of the major uncertainties in the model-measurement comparisons.
A fuel-based approach to estimating motor vehicle exhaust emissions
NASA Astrophysics Data System (ADS)
Singer, Brett Craig
Motor vehicles contribute significantly to air pollution problems; accurate motor vehicle emission inventories are therefore essential to air quality planning. Current travel-based inventory models use emission factors measured from potentially biased vehicle samples and predict fleet-average emissions which are often inconsistent with on-road measurements. This thesis presents a fuel-based inventory approach which uses emission factors derived from remote sensing or tunnel-based measurements of on-road vehicles. Vehicle activity is quantified by statewide monthly fuel sales data resolved to the air basin level. Development of the fuel-based approach includes (1) a method for estimating cold start emission factors, (2) an analysis showing that fuel-normalized emission factors are consistent over a range of positive vehicle loads and that most fuel use occurs during loaded-mode driving, (3) scaling factors relating infrared hydrocarbon measurements to total exhaust volatile organic compound (VOC) concentrations, and (4) an analysis showing that economic factors should be considered when selecting on-road sampling sites. The fuel-based approach was applied to estimate carbon monoxide (CO) emissions from warmed-up vehicles in the Los Angeles area in 1991, and CO and VOC exhaust emissions for Los Angeles in 1997. The fuel-based CO estimate for 1991 was higher by a factor of 2.3 +/- 0.5 than emissions predicted by California's MVEI 7F model. Fuel-based inventory estimates for 1997 were higher than those of California's updated MVEI 7G model by factors of 2.4 +/- 0.2 for CO and 3.5 +/- 0.6 for VOC. Fuel-based estimates indicate a 20% decrease in the mass of CO emitted, despite an 8% increase in fuel use between 1991 and 1997; official inventory models predict a 50% decrease in CO mass emissions during the same period. Cold start CO and VOC emission factors derived from parking garage measurements were lower than those predicted by the MVEI 7G model. 
Current inventories in California appear to understate total exhaust CO and VOC emissions, while overstating the importance of cold start emissions. The fuel-based approach yields robust, independent, and accurate estimates of on-road vehicle emissions. Fuel-based estimates should be used to validate or adjust official vehicle emission inventories before society embarks on new, more costly air pollution control programs.
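The core arithmetic of the fuel-based approach is simple: a fuel-normalized emission factor (grams of pollutant per kilogram of fuel burned, from remote sensing or tunnel studies) is multiplied by fuel sales. The sketch below is illustrative; the emission factor, sales volume, and density are invented, not the thesis's values.

```python
# Hedged sketch of a fuel-based emission inventory calculation.

def fuel_based_emissions(ef_g_per_kg, fuel_sales_liters,
                         fuel_density_kg_per_l=0.74):
    """Return pollutant emissions in metric tons."""
    fuel_kg = fuel_sales_liters * fuel_density_kg_per_l
    return ef_g_per_kg * fuel_kg / 1e6   # grams -> metric tons

# e.g. a CO emission factor of 60 g/kg and 1e9 liters of gasoline sold:
co_tons = fuel_based_emissions(60.0, 1.0e9)
```

Because fuel sales are tracked accurately by tax records, the large uncertainties of travel-based inventories (fleet activity, biased vehicle samples) are confined to the emission factor alone.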
Setting up a hydrological model based on global data for the Ayeyarwady basin in Myanmar
NASA Astrophysics Data System (ADS)
ten Velden, Corine; Sloff, Kees; Nauta, Tjitte
2017-04-01
The use of global datasets in local hydrological modelling can be of great value. It opens up the possibility to include data for areas where local data are sparse or unavailable. In hydrological modelling the existence of both static physical data such as elevation and land use, and dynamic meteorological data such as precipitation and temperature, is essential for setting up a hydrological model, but often such data are difficult to obtain at the local level. For the Ayeyarwady catchment in Myanmar a distributed hydrological model (Wflow: https://github.com/openstreams/wflow) was set up with only global datasets, as part of a water resources study. Myanmar is an emerging economy, which has only recently become more receptive to foreign influences. It has a very limited hydrometeorological measurement network, with large spatial and temporal gaps, and data that are of uncertain quality and difficult to obtain. The hydrological model was thus set up based on resampled versions of the SRTM digital elevation model, the GlobCover land cover dataset and the HWSD soil dataset. Three global meteorological datasets were assessed and compared for use in the hydrological model: TRMM, WFDEI and MSWEP. The meteorological datasets were assessed based on their conformity with several precipitation station measurements, and the overall model performance was assessed by calculating the NSE and RVE based on discharge measurements of several gauging stations. The model was run for the period 1979-2012 on a daily time step, and the results show that the global datasets used are acceptably applicable in the hydrological model. The WFDEI forcing dataset gave the best results, with an NSE of 0.55 at the outlet of the model and an RVE of 8.5%, calculated over the calibration period 2006-2012. As a general trend the modelled discharge at the upstream stations tends to be underestimated, and at the downstream stations slightly overestimated. 
The quality of the discharge measurements that form the basis for the performance calculations is uncertain; data analysis suggests that rating curves are not frequently updated. The modelling results are not perfect and there is ample room for improvement, but they are reasonable given that setting up a hydrological model for this area would not have been possible without the use of global datasets, due to the lack of available local data. The resulting hydrological model then enabled the set-up of the RIBASIM water allocation model for the Ayeyarwady basin in order to assess its water resources. The study discussed here is a first step; ideally it is followed up by a more thorough calibration and validation with the limited local measurements available, e.g. a precipitation correction based on the available rainfall measurements, to ensure the integration of global and local data.
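The two skill scores quoted above have standard definitions: the Nash-Sutcliffe efficiency (NSE) compares squared errors against the variance of the observations, and the relative volume error (RVE) measures the water-balance bias. The discharge series below are invented for illustration.

```python
# Nash-Sutcliffe efficiency and relative volume error, as used to
# evaluate the Wflow model against gauged discharge.

def nse(obs, sim):
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den          # 1.0 = perfect, <0 = worse than mean

def rve_percent(obs, sim):
    return 100.0 * (sum(sim) - sum(obs)) / sum(obs)

obs = [100.0, 150.0, 300.0, 250.0]   # hypothetical daily discharges, m^3/s
sim = [110.0, 140.0, 280.0, 270.0]
score_nse = nse(obs, sim)
score_rve = rve_percent(obs, sim)
```

An NSE of 0.55 with an RVE of 8.5%, as reported for WFDEI, indicates moderate skill with a modest positive volume bias.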
NASA Astrophysics Data System (ADS)
Shi, Wenhui; Feng, Changyou; Qu, Jixian; Zha, Hao; Ke, Dan
2018-02-01
Most existing studies of wind power output focus on the fluctuation of individual wind farms, while the spatial self-complementarity of wind power output time series has been ignored. The existing probability models therefore cannot reflect the features of power systems incorporating wind farms. This paper analyzes the spatial self-complementarity of wind power and proposes a probability model that reflects the temporal characteristics of wind power on seasonal and diurnal timescales, based on sufficient measured data and an improved clustering method. This model can provide an important reference for the simulation of power systems incorporating wind farms.
Model based defect characterization in composites
NASA Astrophysics Data System (ADS)
Roberts, R.; Holland, S.
2017-02-01
Work is reported on model-based defect characterization in CFRP composites. The work utilizes computational models of the interaction of NDE probing energy fields (ultrasound and thermography), to determine 1) the measured signal dependence on material and defect properties (forward problem), and 2) an assessment of performance-critical defect properties from analysis of measured NDE signals (inverse problem). Work is reported on model implementation for inspection of CFRP laminates containing multi-ply impact-induced delamination, with application in this paper focusing on ultrasound. A companion paper in these proceedings summarizes corresponding activity in thermography. Inversion of ultrasound data is demonstrated showing the quantitative extraction of damage properties.
NASA Technical Reports Server (NTRS)
Wiscombe, W.
1999-01-01
The purpose of this paper is to discuss the concept of fractal dimension; multifractal statistics as an extension of it; the use of simple multifractal statistics (power spectrum, structure function) to characterize cloud liquid water data; and the use of multifractal cloud liquid water models based on real data as input to Monte Carlo models of shortwave radiative transfer in 3D clouds. The consequences are examined in two areas: the design of aircraft field programs to measure cloud absorptance, and the explanation of the famous "Landsat scale break" in measured radiance.
Physically based reflectance model utilizing polarization measurement.
Nakano, Takayuki; Tamagawa, Yasuhisa
2005-05-20
A surface bidirectional reflectance distribution function (BRDF) depends on both the optical properties of the material and the microstructure of the surface and appears as a combination of these factors. We propose a method for modeling the BRDF based on a separate optical-property (refractive-index) estimation by polarization measurement. Because the BRDF and the refractive index for precisely the same place can be determined, errors caused by individual differences or spatial dependence can be eliminated. Our BRDF model treats the surface as an aggregation of microfacets, and the diffractive effect is negligible because of randomness. An example model of a painted aluminum plate is presented.
Prediction of AL and Dst Indices from ACE Measurements Using Hybrid Physics/Black-Box Techniques
NASA Astrophysics Data System (ADS)
Spencer, E.; Rao, A.; Horton, W.; Mays, L.
2008-12-01
ACE measurements of the solar wind velocity, IMF and proton density are used to drive a hybrid Physics/Black-Box model of the nightside magnetosphere. The core physics is contained in a low order nonlinear dynamical model of the nightside magnetosphere called WINDMI. The model is augmented by wavelet based nonlinear mappings between the solar wind quantities and the input into the physics model, followed by further wavelet based mappings of the model output field aligned currents onto the ground based magnetometer measurements of the AL index and Dst index. The black box mappings are introduced at the input stage to account for uncertainties in the way the solar wind quantities are transported from the ACE spacecraft at L1 to the magnetopause. Similar mappings are introduced at the output stage to account for a spatially and temporally varying westward auroral electrojet geometry. The parameters of the model are tuned using a genetic algorithm, and trained using the large geomagnetic storm dataset of October 3-7, 2000. Its predictive performance is then evaluated on subsequent storm datasets, in particular the April 15-24, 2002 storm. This work is supported by grant NSF 7020201.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Yu; Sengupta, Manajit; Dooraghi, Mike
Development of accurate transposition models to simulate plane-of-array (POA) irradiance from horizontal measurements or simulations is a complex process mainly because of the anisotropic distribution of diffuse solar radiation in the atmosphere. The limited availability of reliable POA measurements at large temporal and spatial scales leads to difficulties in the comprehensive evaluation of transposition models. This paper proposes new algorithms to assess the uncertainty of transposition models using both surface-based observations and modeling tools. We reviewed the analytical derivation of POA irradiance and the approximation of isotropic diffuse radiation that simplifies the computation. Two transposition models are evaluated against the computation by the rigorous analytical solution. We proposed a new algorithm to evaluate transposition models using the clear-sky measurements at the National Renewable Energy Laboratory's (NREL's) Solar Radiation Research Laboratory (SRRL) and a radiative transfer model that integrates diffuse radiances of various sky-viewing angles. We found that the radiative transfer model and a transposition model based on empirical regressions are superior to the isotropic models when compared to measurements. We further compared the radiative transfer model to the transposition models under an extensive range of idealized conditions. Our results suggest that the empirical transposition model has slightly higher cloudy-sky POA irradiance than the radiative transfer model, but performs better than the isotropic models under clear-sky conditions. Significantly smaller POA irradiances computed by the transposition models are observed when the photovoltaics (PV) panel deviates from the azimuthal direction of the sun. The new algorithms developed in the current study have opened the door to a more comprehensive evaluation of transposition models for various atmospheric conditions and solar and PV orientations.
2018-03-20
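The isotropic transposition model that serves as the baseline in this evaluation has a well-known closed form: POA irradiance is the sum of a beam term, an isotropic sky-diffuse term, and a ground-reflected term. The sketch below is the textbook (Liu-Jordan) form with invented inputs, not the paper's empirical or radiative-transfer models.

```python
# Isotropic-sky transposition of horizontal irradiance components to a
# tilted plane.  dni/dhi/ghi in W/m^2; aoi = angle of incidence on the
# panel; tilt = panel tilt from horizontal; albedo is ground reflectance.
import math

def poa_isotropic(dni, dhi, ghi, aoi_deg, tilt_deg, albedo=0.2):
    beam   = dni * max(math.cos(math.radians(aoi_deg)), 0.0)
    sky    = dhi * (1.0 + math.cos(math.radians(tilt_deg))) / 2.0
    ground = ghi * albedo * (1.0 - math.cos(math.radians(tilt_deg))) / 2.0
    return beam + sky + ground

poa = poa_isotropic(dni=800.0, dhi=100.0, ghi=600.0,
                    aoi_deg=30.0, tilt_deg=30.0)
```

The anisotropic models evaluated in the paper replace the `(1 + cos tilt)/2` sky term with empirical circumsolar and horizon-brightening corrections or with an explicit integral of sky radiance.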
Sean P. Healey; Paul L. Patterson; Sassan Saatchi; Michael A. Lefsky; Andrew J. Lister; Elizabeth A. Freeman; Gretchen G. Moisen
2012-01-01
Light Detection and Ranging (LiDAR) returns from the spaceborne Geoscience Laser Altimeter (GLAS) sensor may offer an alternative to solely field-based forest biomass sampling. Such an approach would rely upon model-based inference, which can account for the uncertainty associated with using modeled, instead of field-collected, measurements. Model-based methods have...
Integration of Web-based and PC-based clinical research databases.
Brandt, C A; Sun, K; Charpentier, P; Nadkarni, P M
2004-01-01
We have created a Web-based repository or data library of information about measurement instruments used in studies of multi-factorial geriatric health conditions (the Geriatrics Research Instrument Library - GRIL) based upon existing features of two separate clinical study data management systems. GRIL allows browsing, searching, and selecting measurement instruments based upon criteria such as keywords and areas of applicability. Measurement instruments selected can be printed and/or included in an automatically generated standalone microcomputer database application, which can be downloaded by investigators for use in data collection and data management. Integration of database applications requires the creation of a common semantic model, and mapping from each system to this model. Various database schema conflicts at the table and attribute level must be identified and resolved prior to integration. Using a conflict taxonomy and a mapping schema facilitates this process. Critical conflicts at the table level that required resolution included name and relationship differences. A major benefit of integration efforts is the sharing of features and cross-fertilization of applications created for similar purposes in different operating environments. Integration of applications mandates some degree of metadata model unification.
Modeling and forecasting US presidential election using learning algorithms
NASA Astrophysics Data System (ADS)
Zolghadr, Mohammad; Niaki, Seyed Armin Akhavan; Niaki, S. T. A.
2017-09-01
The primary objective of this research is to obtain an accurate forecasting model for the US presidential election. To identify a reliable model, artificial neural network (ANN) and support vector regression (SVR) models are compared based on specified performance measures. Moreover, six independent variables, such as GDP, the unemployment rate, and the president's approval rate, are considered in a stepwise regression to identify significant variables. The president's approval rate is identified as the most significant variable, based on which eight other variables are identified and considered in the model development. Preprocessing methods are applied to prepare the data for the learning algorithms. The proposed procedure significantly increases the accuracy of the model by 50%. The learning algorithms (ANN and SVR) proved to be superior to linear regression based on each method's calculated performance measures. The SVR model is identified as the most accurate among the models considered, as it successfully predicted the outcome of the last three elections (2004, 2008, and 2012).
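Model comparisons like the one above rest on a handful of error statistics. The article's exact measure set is not reproduced here; the sketch below shows two common choices (MAE and RMSE) deciding between two hypothetical models, with all numbers invented.

```python
# Comparing two forecasting models by standard performance measures.
import math

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
                     / len(y_true))

actual  = [50.7, 52.9, 51.1]      # hypothetical two-party vote shares, %
model_a = [49.0, 53.5, 50.0]      # e.g. SVR predictions (invented)
model_b = [47.0, 55.0, 48.5]      # e.g. ANN predictions (invented)
better_model = "A" if rmse(actual, model_a) < rmse(actual, model_b) else "B"
```

Because RMSE penalizes large misses more heavily than MAE, comparing both guards against a model that is usually close but occasionally far off.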
Photogrammetry-Based Automated Measurements for Tooth Shape and Occlusion Analysis
NASA Astrophysics Data System (ADS)
Knyaz, V. A.; Gaboutchian, A. V.
2016-06-01
Tooth measurements (odontometry) are performed for various scientific and practical applications, including dentistry. Present-day techniques are increasingly based on 3D models, which offer wider prospects than measurements on the real objects: teeth or their plaster copies. The main advantages emerge through the application of new measurement methods that provide the needed degree of non-invasiveness, precision, convenience and detail. Tooth measurement has always been regarded as time-consuming research, even more so with the new methods owing to the wider opportunities they offer. This is where automation becomes essential for the further development and application of measurement techniques. In our research, automation of both 3D model acquisition and measurement provided essential data that were analysed to suggest recommendations for tooth preparation, one of the most exacting clinical procedures in prosthetic dentistry, within a comparatively short period of time. The original photogrammetric 3D reconstruction system makes it possible to generate 3D models of dental arches, reproduce their closure, or occlusion, and perform a set of standard measurements in an automated mode.
A Theory of the Measurement of Knowledge Content, Access, and Learning.
ERIC Educational Resources Information Center
Pirolli, Peter; Wilson, Mark
1998-01-01
An approach to the measurement of knowledge content, knowledge access, and knowledge learning is developed. First a theoretical view of cognition is described, and then a class of measurement models, based on Rasch modeling, is presented. Knowledge access and content are viewed as determining the observable actions selected by an agent to achieve…
ERIC Educational Resources Information Center
Wright, Courtney A.; Kaiser, Ann P.
2017-01-01
Measuring treatment fidelity is an essential step in research designed to increase the use of evidence-based practices. For parent-implemented communication interventions, measuring the implementation of the teaching and coaching provided to the parents is as critical as measuring the parents' delivery of the intervention to the child. Both levels…
ERIC Educational Resources Information Center
Kahraman, Nilufer; Brown, Crystal B.
2015-01-01
Psychometric models based on structural equation modeling framework are commonly used in many multiple-choice test settings to assess measurement invariance of test items across examinee subpopulations. The premise of the current article is that they may also be useful in the context of performance assessment tests to test measurement invariance…
Measurements in Quantum Mechanics and von NEUMANN's Model
NASA Astrophysics Data System (ADS)
Mello, Pier A.; Johansen, Lars M.
2010-12-01
Many textbooks on Quantum Mechanics are not very precise as to the meaning of making a measurement: as a consequence, they frequently make assertions which are not based on a dynamical description of the measurement process. A model proposed by von Neumann allows a dynamical description of measurement in Quantum Mechanics, including the measuring instrument in the formalism. In this article we apply von Neumann's model to illustrate the measurement of an observable by means of a measuring instrument and show how various results, which are sometimes postulated without a dynamical basis, actually emerge. We also investigate the more complex, intriguing and fundamental problem of two successive measurements in Quantum Mechanics, extending von Neumann's model to two measuring instruments. We present a description that allows one to obtain, in a unified way, various results that have been given in the literature.
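The dynamical description at the heart of von Neumann's model can be sketched compactly. In the standard presentation (conventions for the coupling strength g, interaction time τ, and factors of ħ vary between textbooks), the system observable A is coupled impulsively to the momentum p of a pointer:

```latex
\[
  H_{\mathrm{int}} = g\,\hat{A}\otimes\hat{p}, \qquad
  U = e^{-\tfrac{i}{\hbar}\, g\tau\,\hat{A}\otimes\hat{p}}
\]
\[
  U\Bigl(\sum_n c_n\,\lvert a_n\rangle\Bigr)\otimes\lvert\phi\rangle
  = \sum_n c_n\,\lvert a_n\rangle\otimes\lvert\phi_{a_n}\rangle,
  \qquad
  \phi_{a_n}(x) = \phi(x - g\tau\,a_n)
\]
```

Reading the pointer position x then reveals the eigenvalue a_n with probability |c_n|², which is how the Born rule for the measured observable emerges from a purely dynamical interaction.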
Multivariate Statistical Models for Predicting Sediment Yields from Southern California Watersheds
Gartner, Joseph E.; Cannon, Susan H.; Helsel, Dennis R.; Bandurraga, Mark
2009-01-01
Debris-retention basins in Southern California are frequently used to protect communities and infrastructure from the hazards of flooding and debris flow. Empirical models that predict sediment yields are used to determine the size of the basins. Such models have been developed using analyses of records of the amount of material removed from debris retention basins, associated rainfall amounts, measures of watershed characteristics, and wildfire extent and history. In this study we used multiple linear regression methods to develop two updated empirical models to predict sediment yields for watersheds located in Southern California. The models are based on both new and existing measures of volume of sediment removed from debris retention basins, measures of watershed morphology, and characterization of burn severity distributions for watersheds located in Ventura, Los Angeles, and San Bernardino Counties. The first model presented reflects conditions in watersheds located throughout the Transverse Ranges of Southern California and is based on volumes of sediment measured following single storm events with known rainfall conditions. The second model presented is specific to conditions in Ventura County watersheds and was developed using volumes of sediment measured following multiple storm events. To relate sediment volumes to triggering storm rainfall, a rainfall threshold was developed to identify storms likely to have caused sediment deposition. A measured volume of sediment deposited by numerous storms was parsed among the threshold-exceeding storms based on relative storm rainfall totals. The predictive strength of the two models developed here, and of previously-published models, was evaluated using a test dataset consisting of 65 volumes of sediment yields measured in Southern California. 
The evaluation indicated that the model developed using information from single storm events in the Transverse Ranges best predicted sediment yields for watersheds in San Bernardino, Los Angeles, and Ventura Counties. This model predicts sediment yield as a function of the peak 1-hour rainfall, the watershed area burned by the most recent fire (at all severities), the time since the most recent fire, watershed area, average gradient, and relief ratio. The model that reflects conditions specific to Ventura County watersheds consistently under-predicted sediment yields and is not recommended for application. Some previously-published models performed reasonably well, while others either under-predicted sediment yields or had a larger range of errors in the predicted sediment yields.
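The parsing step described above (dividing a sediment volume measured after several storms among the threshold-exceeding storms, in proportion to their rainfall totals) is simple arithmetic. The sketch below is illustrative; the threshold, rainfall totals, and volume are invented.

```python
# Apportion a measured sediment volume among the storms that exceeded
# the rainfall threshold, proportionally to their rainfall totals.

def parse_sediment(total_volume_m3, storm_rainfall_mm, threshold_mm):
    exceeding_total = sum(r for r in storm_rainfall_mm if r > threshold_mm)
    return [total_volume_m3 * r / exceeding_total if r > threshold_mm else 0.0
            for r in storm_rainfall_mm]

# Three storms between basin cleanouts; only the last two exceed threshold.
volumes = parse_sediment(10_000.0, [5.0, 40.0, 60.0], threshold_mm=25.0)
```

Each parsed storm volume, together with that storm's rainfall and the watershed's burn and morphology variables, then supplies one observation to the regression.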
Optimal Estimation with Two Process Models and No Measurements
2015-08-01
An optimal observer is derived that blends two independent process models when no measurements are present; the observer follows a derivation similar to that of the discrete-time Kalman filter. The optimality of blending the two models will be lost if either of the models includes deterministic modeling errors. A simulation example is provided in which a process model based on the dynamics of a ballistic projectile is blended with a second process model.
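In the scalar case, the minimum-variance blend of two independent estimates has the same form as a Kalman update, with one model playing the role of the prediction and the other the role of the measurement. This sketch is generic, with invented numbers, not the report's projectile example.

```python
# Minimum-variance blend of two independent scalar estimates x1, x2
# with error variances P1, P2 (no measurement involved).

def blend(x1, P1, x2, P2):
    K = P1 / (P1 + P2)        # gain toward the second model
    x = x1 + K * (x2 - x1)    # blended estimate
    P = (1.0 - K) * P1        # = P1*P2/(P1+P2): variance always shrinks
    return x, P

x, P = blend(x1=100.0, P1=4.0, x2=106.0, P2=2.0)
```

The blended variance P1·P2/(P1+P2) is smaller than either input variance, which is the benefit the observer exploits; that guarantee breaks down if either model carries a deterministic (bias) error, as the abstract notes.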
NASA Astrophysics Data System (ADS)
Haer, Toon; Botzen, Wouter; de Moel, Hans; Aerts, Jeroen
2015-04-01
In the period 1998-2009, floods triggered roughly 52 billion euro in insured economic losses, making floods the most costly natural hazard in Europe. Climate change and socio-economic trends are expected to further aggravate flood losses in many regions. Research shows that flood risk can be significantly reduced if households install protective measures, and that the implementation of such measures can be stimulated through flood insurance schemes and subsidies. However, the effectiveness of such incentives to stimulate the implementation of loss-reducing measures depends greatly on the decision processes of individuals and has hardly been studied. In our study, we developed an Agent-Based Model that integrates flood damage models, insurance mechanisms, subsidies, and household behaviour models to assess the effectiveness of different economic tools in stimulating households to invest in loss-reducing measures. Since the effectiveness depends on the decision-making process of individuals, the study compares different household decision models, ranging from standard economic models, to economic models for decision making under risk, to more complex decision models integrating economic models with risk perceptions, opinion dynamics, and the influence of flood experience. The results show the effectiveness of incentives to stimulate investment in loss-reducing measures for different household behaviour types under climate change scenarios. They show how complex decision models can better reproduce observed real-world behaviour than traditional economic models. Furthermore, since flood events are included in the simulations, the results provide an analysis of the dynamics in insured and uninsured losses for households, the costs of reducing risk by implementing loss-reducing measures, the capacity of the insurance market, and the cost of government subsidies under different scenarios. The model has been applied to the City of Rotterdam in The Netherlands.
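The decision comparison at the core of such an agent-based model can be sketched as follows. Households weigh perceived expected annual damage avoided against the annualized cost of a loss-reducing measure, optionally reduced by a subsidy. The risk-perception dynamics, opinion dynamics, flood experience, and insurance mechanisms of the actual model are omitted here, and all parameter values are hypothetical.

```python
import random

# Much-simplified sketch of a household adoption decision in a flood ABM.
# Heterogeneous risk perception scales each household's subjective risk.
def uptake_fraction(n_households=1000, p_flood=0.01, damage=50_000,
                    damage_reduction=0.4, annual_cost=300.0,
                    subsidy=0.0, seed=1):
    rng = random.Random(seed)
    adopted = 0
    for _ in range(n_households):
        perception = rng.uniform(0.5, 2.5)   # hypothetical perception factor
        expected_benefit = perception * p_flood * damage * damage_reduction
        if expected_benefit > annual_cost * (1.0 - subsidy):
            adopted += 1
    return adopted / n_households

uptake_no_subsidy = uptake_fraction(subsidy=0.0)
uptake_subsidised = uptake_fraction(subsidy=0.5)
```

Even this toy version reproduces the qualitative result that a subsidy raises uptake most among households whose perceived risk sits just below the unsubsidized break-even point.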
Precision assessment of model-based RSA for a total knee prosthesis in a biplanar set-up.
Trozzi, C; Kaptein, B L; Garling, E H; Shelyakova, T; Russo, A; Bragonzoni, L; Martelli, S
2008-10-01
Model-based Roentgen Stereophotogrammetric Analysis (RSA) was recently developed for the measurement of prosthesis micromotion. Its main advantage is that markers do not need to be attached to the implants, as traditional marker-based RSA requires. Model-based RSA has only been tested in uniplanar radiographic set-ups. A biplanar set-up would theoretically facilitate the pose estimation algorithm, since radiographic projections would show more distinct shape features of the implants than uniplanar images. We tested the precision of model-based RSA and compared it with that of the traditional marker-based method in a biplanar set-up. Micromotions of both tibial and femoral components were measured with both techniques from double examinations of patients participating in a clinical study. The results showed that in the biplanar set-up model-based RSA presents a homogeneous distribution of precision across translation directions but an inhomogeneous error for rotations: internal-external rotation in particular showed higher errors than rotations about the transverse and sagittal axes. Model-based RSA was less precise than the marker-based method, although the differences were not significant for the translations and rotations of the tibial component, with the exception of internal-external rotation. For both prosthesis components the precision of model-based RSA was below 0.2 mm for all translations, and below 0.3 degrees for rotations about the transverse and sagittal axes. These values are still acceptable for clinical studies aimed at evaluating total knee prosthesis micromotion. In a biplanar set-up, model-based RSA is a valid alternative to traditional marker-based RSA in cases where marking of the prosthesis is a major disadvantage.
Test and Evaluation of TRUST: Tools for Recognizing Useful Signals of Trustworthiness
2016-04-01
Because follow-through cannot be guaranteed, social exchange requires trust—the belief that others will follow through on their obligations. Physiological signals examined included (1) measures based on properties of the skin relating to current reflection and (2) skin conductance response (SCR). The results for the LF HRV signals indicate that the SEM model predicts distrust based on the experimental SS paradigm.
z'-BAND GROUND-BASED DETECTION OF THE SECONDARY ECLIPSE OF WASP-19b
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burton, J. R.; Watson, C. A.; Pollacco, D.
2012-08-01
We present the ground-based detection of the secondary eclipse of the transiting exoplanet WASP-19b. The observations were made in the Sloan z' band using the ULTRACAM triple-beam CCD camera mounted on the New Technology Telescope. The measurement shows a 0.088% ± 0.019% eclipse depth, matching previous predictions based on H- and K-band measurements. We discuss in detail our approach to the removal of errors arising due to systematics in the data set, in addition to fitting a model transit to our data. This fit returns an eclipse center, T0, of 2455578.7676 HJD, consistent with a circular orbit. Our measurement of the secondary eclipse depth is also compared to model atmospheres of WASP-19b and is found to be consistent with previous measurements at longer wavelengths for the model atmospheres we investigated.
Fuzzy Neural Network-Based Interacting Multiple Model for Multi-Node Target Tracking Algorithm
Sun, Baoliang; Jiang, Chunlan; Li, Ming
2016-01-01
An interacting multiple model for a multi-node target tracking algorithm based on a fuzzy neural network (FNN) is proposed to solve the multi-node target tracking problem in wireless sensor networks (WSNs). The measured error variance was adaptively adjusted during the multiple-model interacting output stage using the difference between the theoretical and estimated values of the measured error covariance matrix. The FNN fusion system was established during multi-node fusion to integrate the target state estimates from different nodes and consequently obtain a network-level target state estimate. The feasibility of the algorithm was verified on a network of nine detection nodes. Experimental results indicated that the proposed algorithm could track a maneuvering target effectively under sensor failure and unknown system measurement errors, demonstrating its practicality for multi-node target tracking in WSNs. PMID:27809271
Lawry, Tristan J; Wilt, Kyle R; Scarton, Henry A; Saulnier, Gary J
2012-11-01
The linear propagation of electromagnetic and dilatational waves through a sandwiched plate piezoelectric transformer (SPPT)-based acoustic-electric transmission channel is modeled using the transfer matrix method with mixed-domain two-port ABCD parameters. This SPPT structure is of great interest because it has been explored in recent years as a mechanism for wireless transmission of electrical signals through solid metallic barriers using ultrasound. The model we present is developed to allow for accurate channel performance prediction while greatly reducing the computational complexity associated with 2- and 3-dimensional finite element analysis. As a result, the model primarily considers 1-dimensional wave propagation; however, approximate solutions for higher-dimensional phenomena (e.g., diffraction in the SPPT's metallic core layer) are also incorporated. The model is then assessed by comparing it to the measured wideband frequency response of a physical SPPT-based channel from our previous work. Very strong agreement between the modeled and measured data is observed, confirming the accuracy and utility of the presented model.
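The cascading step of the transfer matrix method can be illustrated with purely acoustic, lossless layers: each layer contributes a standard transmission-line ABCD matrix, and the whole channel is their ordered product. The layer thicknesses, sound speeds, and impedances below are hypothetical, and the paper's mixed-domain electromechanical ports and diffraction corrections are omitted.

```python
import numpy as np

# Illustrative ABCD transfer-matrix cascade for lossless acoustic layers.
def layer_abcd(freq, thickness, speed, z_acoustic):
    k = 2 * np.pi * freq / speed          # wavenumber in the layer
    kd = k * thickness
    return np.array([[np.cos(kd), 1j * z_acoustic * np.sin(kd)],
                     [1j * np.sin(kd) / z_acoustic, np.cos(kd)]])

def cascade(matrices):
    out = np.eye(2, dtype=complex)
    for m in matrices:                    # input-side layer first
        out = out @ m
    return out

f = 1.0e6  # 1 MHz (hypothetical operating frequency)
chain = cascade([
    layer_abcd(f, 2e-3, 4000.0, 17e6),    # e.g. piezoelectric layer
    layer_abcd(f, 10e-3, 5900.0, 45e6),   # e.g. steel core layer
    layer_abcd(f, 2e-3, 4000.0, 17e6),
])
# Reciprocity check for a cascade of reciprocal two-ports: AD - BC = 1
det = chain[0, 0] * chain[1, 1] - chain[0, 1] * chain[1, 0]
```

Because each two-port is a 2x2 matrix rather than a finite-element mesh, sweeping frequency to obtain a wideband channel response is computationally trivial, which is the efficiency argument the abstract makes.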
Ben-David, Avishai; Embury, Janon F; Davidson, Charles E
2006-09-10
A comprehensive analytical radiative transfer model for isothermal aerosols and vapors for passive infrared remote sensing applications (ground-based and airborne sensors) has been developed. The theoretical model illustrates the qualitative difference between an aerosol cloud and a chemical vapor cloud. The model is based on two and two/four stream approximations and includes thermal emission-absorption by the aerosols; scattering of diffused sky radiances incident from all sides on the aerosols (downwelling, upwelling, left, and right); and scattering of aerosol thermal emission. The model uses moderate resolution transmittance ambient atmospheric radiances as boundary conditions and provides analytical expressions for the information on the aerosol cloud that is contained in remote sensing measurements by using thermal contrasts between the aerosols and diffused sky radiances. Simulated measurements of a ground-based sensor viewing Bacillus subtilis var. niger bioaerosols and kaolin aerosols are given and discussed to illustrate the differences between a vapor-only model (i.e., only emission-absorption effects) and a complete model that adds aerosol scattering effects.
Strategic Planning for Drought Mitigation Under Climate Change
NASA Astrophysics Data System (ADS)
Cai, X.; Zeng, R.; Valocchi, A. J.; Song, J.
2012-12-01
Droughts continue to be a major natural hazard, and mounting evidence of global warming confronts society with a pressing question: will climate change aggravate the risk of drought at the local scale? It is important to explore what additional risk will be imposed by climate change and what level of strategic measures should be undertaken now to avoid vulnerable situations in the future, given that tactical measures alone may not avoid large damage. This study addresses the following key questions on strategic planning for drought mitigation under climate change: What combination of strategic and tactical measures will move the societal system response from a vulnerable situation to a resilient one with minimum cost? Are current infrastructures and their operation enough to mitigate the damage of future drought, or do we need in-advance infrastructure expansion for future drought preparedness? To address these questions, this study presents a decision support framework based on a coupled simulation and optimization model. A quasi-physically based watershed model is established for the Frenchman Creek Basin (FCB), part of the Republican River Basin, where groundwater-based irrigation plays a significant role in agricultural production and the local hydrological cycle. The physical model is used to train a statistical surrogate model, which predicts the watershed responses under future climate conditions. The statistical model replaces the complex physical model in the simulation-optimization framework, which makes the models computationally tractable. Decisions for drought preparedness include traditional short-term tactical measures (e.g., facility operation) and long-term or in-advance strategic measures, which require capital investment. A scenario-based three-stage stochastic optimization model assesses the roles of strategic and tactical measures in drought preparedness and mitigation.
Two benchmark climate prediction horizons, the 2040s and 2090s, represent mid-term and long-term planning, respectively, compared to the baseline climate of 1980-2000. To handle uncertainty in climate change projections, outputs from three General Circulation Models (GCMs) dynamically downscaled with a Regional Climate Model (RCM) (PCM-RCM, Hadley-RCM, and CCSM-RCM) and four CO2 emission scenarios are used to represent the range of possible climatic conditions in the mid-term (2040s) and long-term (2090s) horizons. The model results show the relative roles of mid- and long-term investments and the complementary relationship between wait-and-see and here-and-now decisions on infrastructure expansion. Even the best tactical measures (irrigation operation) alone are not sufficient for drought mitigation in the future. Infrastructure expansion is critical, especially for environmental conservation purposes. With an increasing budget, investment should shift from tactical measures to strategic measures for drought preparedness. Infrastructure expansion is preferred for the long-term plan over the mid-term plan; that is, larger investment is proposed in the 2040s than at present, due to a larger likelihood of drought in the 2090s than in the 2040s. Thus larger BMP expansion is proposed in the 2040s to prepare for droughts in the 2090s.
Deserno, Lorenz; Huys, Quentin J M; Boehme, Rebecca; Buchert, Ralph; Heinze, Hans-Jochen; Grace, Anthony A; Dolan, Raymond J; Heinz, Andreas; Schlagenhauf, Florian
2015-02-03
Dual system theories suggest that behavioral control is parsed between a deliberative "model-based" and a more reflexive "model-free" system. A balance of control exerted by these systems is thought to be related to dopamine neurotransmission. However, in the absence of direct measures of human dopamine, it remains unknown whether this reflects a quantitative relation with dopamine either in the striatum or other brain areas. Using a sequential decision task performed during functional magnetic resonance imaging, combined with striatal measures of dopamine using [(18)F]DOPA positron emission tomography, we show that higher presynaptic ventral striatal dopamine levels were associated with a behavioral bias toward more model-based control. Higher presynaptic dopamine in ventral striatum was associated with greater coding of model-based signatures in lateral prefrontal cortex and diminished coding of model-free prediction errors in ventral striatum. Thus, interindividual variability in ventral striatal presynaptic dopamine reflects a balance in the behavioral expression and the neural signatures of model-free and model-based control. Our data provide a novel perspective on how alterations in presynaptic dopamine levels might be accompanied by a disruption of behavioral control as observed in aging or neuropsychiatric diseases such as schizophrenia and addiction.
Modeling Complex Phenomena Using Multiscale Time Sequences
2009-08-24
A set of statistical fractal measures based on Hurst and Hölder exponents, auto-regressive methods, and Fourier and wavelet decomposition methods can be combined to characterize how phenomena behave at different scales and how these scales relate to each other.
Model-based sensor-less wavefront aberration correction in optical coherence tomography.
Verstraete, Hans R G W; Wahls, Sander; Kalkman, Jeroen; Verhaegen, Michel
2015-12-15
Several sensor-less wavefront aberration correction methods that correct nonlinear wavefront aberrations by maximizing the optical coherence tomography (OCT) signal are tested on an OCT setup. A conventional coordinate search method is compared to two model-based optimization methods. The first model-based method takes advantage of the well-known optimization algorithm (NEWUOA) and utilizes a quadratic model. The second model-based method (DONE) is new and utilizes a random multidimensional Fourier-basis expansion. The model-based algorithms achieve lower wavefront errors with up to ten times fewer measurements. Furthermore, the newly proposed DONE method outperforms the NEWUOA method significantly. The DONE algorithm is tested on OCT images and shows a significantly improved image quality.
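The model-based idea can be illustrated in one dimension: sample the image-quality metric at a few aberration settings, fit a quadratic surrogate, and move the corrector to the fitted optimum rather than searching coordinate by coordinate. NEWUOA and DONE are far more elaborate; the metric below is a synthetic stand-in for the OCT signal, with a hypothetical peak location.

```python
import numpy as np

# 1-D sketch of quadratic model-based optimization of an image-quality metric.
def metric(a):
    """Synthetic sharpness metric with a peak at a = 0.3 (hypothetical)."""
    return 1.0 / (1.0 + (a - 0.3) ** 2)

samples = np.array([-1.0, 0.0, 1.0])          # three trial corrector settings
values = np.array([metric(a) for a in samples])

# Fit m(a) = c2*a^2 + c1*a + c0 through the three samples,
# then jump to the vertex of the fitted parabola.
c2, c1, c0 = np.polyfit(samples, values, 2)
a_opt = -c1 / (2 * c2)
```

One fitted jump already moves toward the true peak from only three metric evaluations, which is the sense in which model-based methods need "up to ten times fewer measurements" than pure coordinate search.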
Kornecki, Martin; Strube, Jochen
2018-03-16
Productivity improvements of mammalian cell culture in the production of recombinant proteins have been made by optimizing cell lines, media, and process operation. This has led to enhanced titers and process robustness without increasing the cost of the upstream processing (USP); however, a downstream bottleneck remains. In terms of process control improvement, the process analytical technology (PAT) initiative, initiated by the American Food and Drug Administration (FDA), aims to measure, analyze, monitor, and ultimately control all important attributes of a bioprocess. In particular, spectroscopic methods such as Raman or near-infrared spectroscopy make it possible to meet these analytical requirements, preferably in-situ. In combination with chemometric techniques like partial least squares (PLS) or principal component analysis (PCA), it is possible to generate soft sensors, which estimate process variables based on process and measurement models for the enhanced control of bioprocesses. Macroscopic kinetic models can be used to simulate cell metabolism. These models are able to enhance process understanding by predicting the dynamics of cells during cultivation. In this article, in-situ turbidity (transmission, 880 nm) and ex-situ Raman spectroscopy (785 nm) measurements are combined with an offline macroscopic Monod kinetic model in order to predict substrate concentrations. Experimental data from Chinese hamster ovary cultivations in bioreactors show a sufficiently linear correlation (R² ≥ 0.97) between turbidity and total cell concentration. PLS regression of Raman spectra generates a prediction model, which was validated via offline viable cell concentration measurement (RMSE ≤ 13.82, R² ≥ 0.92). Based on these measurements, the macroscopic Monod model can be used to determine different process attributes, e.g., glucose concentration.
In consequence, it is possible to approximately calculate (R² ≥ 0.96) glucose concentration based on online cell concentration measurements using turbidity or Raman spectroscopy. Future approaches will use these online substrate concentration measurements with turbidity and Raman measurements, in combination with the kinetic model, in order to control the bioprocess in terms of feeding strategies, by employing an open platform communication (OPC) network-either in fed-batch or perfusion mode, integrated into a continuous operation of upstream and downstream.
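The substrate back-calculation described above can be sketched with a constant-yield mass balance: given an online cell concentration estimate (from turbidity or the Raman PLS model), glucose consumed is proportional to biomass formed. The yield coefficient and concentrations below are hypothetical, not the paper's fitted values.

```python
# Soft-sensor sketch: infer glucose from measured cell concentration via a
# constant biomass-on-substrate yield. All parameter values are illustrative.
S0 = 6.0          # initial glucose, g/L (hypothetical)
X0 = 0.3e6        # initial cell concentration, cells/mL (hypothetical)
Y_XS = 0.5e6      # yield: cells/mL produced per g/L glucose consumed

def glucose_estimate(x_online):
    """Glucose (g/L) inferred from an online cell concentration estimate."""
    s = S0 - (x_online - X0) / Y_XS
    return max(s, 0.0)       # substrate cannot go negative

measured_cells = [0.3e6, 1.0e6, 2.0e6, 3.0e6]
glucose = [glucose_estimate(x) for x in measured_cells]
```

In a full Monod model the yield and specific growth rate would be coupled through the substrate concentration itself; this linear back-calculation is the simplest member of that family, and it is the piece a feeding controller would consume.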
ERIC Educational Resources Information Center
Nickerson, Carol A.; McClelland, Gary H.
1988-01-01
A methodology is developed based on axiomatic conjoint measurement to accompany a fertility decision-making model. The usefulness of the model is then demonstrated via an application to a study of contraceptive choice (N=100 male and female family-planning clinic clients). Finally, the validity of the model is evaluated. (TJH)
ERIC Educational Resources Information Center
Jones, Thomas; Laughlin, Thomas
2009-01-01
Nothing could be more effective than a wilderness experience to demonstrate the importance of conserving biodiversity. When that is not possible, though, there are computer models with several features that are helpful in understanding how biodiversity is measured. These models are easily used when natural resources, transportation, and time…
Measuring the "Unmeasurable": An Inquiry Model and Test for the Social Studies.
ERIC Educational Resources Information Center
Van Scotter, Richard D.; Haas, John D.
New social studies materials are based on inquiry modes of learning and teaching; however, little is known as to what students actually learn from an inquiry model (except for cognitive knowledge). An inquiry model and test to measure the "unmeasurable" in the social studies--namely, a student's ability to use the scientific process, attitudes…
Walz, Yvonne; Wegmann, Martin; Leutner, Benjamin; Dech, Stefan; Vounatsou, Penelope; N'Goran, Eliézer K; Raso, Giovanna; Utzinger, Jürg
2015-11-30
Schistosomiasis is a widespread water-based disease that puts close to 800 million people at risk of infection, with more than 250 million infected, mainly in sub-Saharan Africa. Transmission is governed by the spatial distribution of specific freshwater snails that act as intermediate hosts and by the frequency, duration, and extent of human bodies exposed to infested water sources during water contact. Remote sensing data have been utilized for spatially explicit risk profiling of schistosomiasis. However, risk profiling based on remote sensing inherits a conceptual drawback when school-based disease prevalence data are related directly to remote sensing measurements extracted at the location of the school, because disease transmission usually does not occur at the school itself. We therefore took the local environment around the schools into account by explicitly linking ecologically relevant environmental information on potential disease transmission sites to survey measurements of disease prevalence. Our models were validated at two sites with different landscapes in Côte d'Ivoire using high- and moderate-resolution remote sensing data based on random forest and partial least squares regression. We found that the ecologically relevant modelling approach explained up to 70% of the variation in Schistosoma infection prevalence and performed better than a purely pixel-based modelling approach. Furthermore, our study showed that model performance increased as the school catchment area was enlarged, confirming the hypothesis that suitable environments for schistosomiasis transmission rarely occur at the location of survey measurements.
NASA Astrophysics Data System (ADS)
Sur, D.; Paul, A.
2017-12-01
The equatorial ionosphere shows sharp diurnal and latitudinal Total Electron Content (TEC) variations over a major part of the day. The equatorial ionosphere also exhibits intense post-sunset ionospheric irregularities. Accurate prediction of TEC at these low latitudes is not possible with standard ionospheric models. An Artificial Neural Network (ANN) based Vertical TEC (VTEC) model has been designed using TEC data from the low-latitude Indian longitude sector for accurate prediction of VTEC. GPS TEC data from the stations Calcutta (22.58°N, 88.38°E geographic, magnetic dip 32°), Baharampore (24.09°N, 88.25°E geographic, magnetic dip 35°) and Siliguri (26.72°N, 88.39°E geographic, magnetic dip 40°) are used as the training dataset for the period January 2007-September 2011. Poleward VTEC gradients from the northern EIA crest to the region beyond it have been calculated from measured VTEC and compared with those obtained from the ANN-based VTEC model. TEC data from Calcutta and Siliguri are used to compute VTEC gradients during April 2013 and August-September 2013. The poleward VTEC gradient computed from the ANN-based TEC model shows good correlation with measured values during the vernal and autumnal equinoxes of the high solar activity period of 2013. A possible correlation between measured poleward VTEC gradients and post-sunset scintillations (S4 ≥ 0.4) at the northern crest of the EIA was also observed. From this observation, a threshold poleward VTEC gradient is proposed for the possible occurrence of post-sunset scintillations at the northern crest of the EIA along 88°E longitude. Poleward VTEC gradients obtained from the ANN-based VTEC model are used with this threshold to forecast possible ionospheric scintillation after sunset. These predicted VTEC gradients can forecast post-sunset L-band scintillation with an accuracy of 67% to 82% in this dynamic low-latitude region.
The use of VTEC gradients from ANN based VTEC model removes the necessity of continuous operation of multi-station ground based TEC receivers in this low latitude region.
Empirical flow parameters - a tool for hydraulic model validity assessment : [summary].
DOT National Transportation Integrated Search
2013-10-01
Hydraulic modeling assembles models based on generalizations of parameter values from textbooks, professional literature, computer program documentation, and engineering experience. Actual measurements adjacent to the model location are seldom available...
Typing SNP based on the near-infrared spectroscopy and artificial neural network
NASA Astrophysics Data System (ADS)
Ren, Li; Wang, Wei-Peng; Gao, Yu-Zhen; Yu, Xiao-Wei; Xie, Hong-Ping
2009-07-01
Based on the near-infrared spectra (NIRS) of the measured samples as the discriminant variables of their genotypes, a genotype discriminant model for SNPs was established using a back-propagation artificial neural network (BP-ANN). Taking a SNP (857G > A) of N-acetyltransferase 2 (NAT2) as an example, DNA fragments containing the SNP site were amplified by PCR with a pair of primers to obtain modeling samples of the three genotypes (GG, AA, and GA). The NIR spectra of the amplified samples were measured directly in transmission using a quartz cell. Based on the measured sample spectra, two BP-ANNs were combined to obtain a stronger ability to classify the three genotypes. One network was established to compress the measured NIRS variables using the resilient back-propagation algorithm, and another network, established with the Levenberg-Marquardt algorithm on the compressed spectra, was used as the discriminant model for the three-genotype classification. For the established model, the root mean square errors for the training and prediction sample sets were 0.0135 and 0.0132, respectively. This model correctly predicted the three genotypes (i.e., the accuracy on prediction samples was up to 100%) and was robust in the prediction of unknown samples. Since the three genotypes of a SNP can be determined directly from the NIR spectra without any preprocessing of the analyzed samples after PCR, this method is simple, rapid and low-cost.
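The compress-then-classify structure of the two-network pipeline can be sketched with PCA standing in for the compression network and a nearest-centroid rule standing in for the discriminant network. The "spectra", dimensions, and noise level below are synthetic; the paper's actual pipeline uses two BP-ANNs.

```python
import numpy as np

# Sketch of compress-then-classify on synthetic three-class "spectra".
rng = np.random.default_rng(2)
bases = rng.normal(size=(3, 100))                  # one template per genotype
labels = np.repeat([0, 1, 2], 20)                  # 20 samples per class
spectra = bases[labels] + 0.05 * rng.normal(size=(60, 100))

# Compress: project centered spectra onto the top 5 principal components
centered = spectra - spectra.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:5].T

# Classify: nearest class centroid in the compressed space
centroids = np.array([scores[labels == g].mean(axis=0) for g in range(3)])
pred = np.argmin(((scores[:, None, :] - centroids) ** 2).sum(axis=2), axis=1)
accuracy = (pred == labels).mean()
```

The point of the compression stage, here as in the paper, is that a classifier trained on a handful of components is far easier to fit than one trained on the full spectral variable set.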
Breast Cancer Screening in an Era of Personalized Regimens
Onega, Tracy; Beaber, Elisabeth F.; Sprague, Brian L.; Barlow, William E.; Haas, Jennifer S.; Tosteson, Anna N.A.; Schnall, Mitchell D.; Armstrong, Katrina; Schapira, Marilyn M.; Geller, Berta; Weaver, Donald L.; Conant, Emily F.
2014-01-01
Breast cancer screening holds a prominent place in public health, health care delivery, policy, and women’s health care decisions. Several factors are driving shifts in how population-based breast cancer screening is approached, including advanced imaging technologies, health system performance measures, health care reform, concern for “overdiagnosis,” and improved understanding of risk. Maximizing benefits while minimizing the harms of screening requires moving from a “1-size-fits-all” guideline paradigm to more personalized strategies. A refined conceptual model for breast cancer screening is needed to align women’s risks and preferences with screening regimens. A conceptual model of personalized breast cancer screening is presented herein that emphasizes key domains and transitions throughout the screening process, as well as multilevel perspectives. The key domains of screening awareness, detection, diagnosis, and treatment and survivorship are conceptualized to function at the level of the patient, provider, facility, health care system, and population/policy arena. Personalized breast cancer screening can be assessed across these domains with both process and outcome measures. Identifying, evaluating, and monitoring process measures in screening is a focus of a National Cancer Institute initiative entitled PROSPR (Population-based Research Optimizing Screening through Personalized Regimens), which will provide generalizable evidence for a risk-based model of breast cancer screening. The model presented builds on prior breast cancer screening models and may serve to identify new measures to optimize benefits-to-harms tradeoffs in population-based screening, which is a timely goal in the era of health care reform. PMID:24830599
Modeling Infrared Signal Reflections to Characterize Indoor Multipath Propagation
De-La-Llana-Calvo, Álvaro; Lázaro-Galilea, José Luis; Gardel-Vicente, Alfredo; Rodríguez-Navarro, David; Bravo-Muñoz, Ignacio; Tsirigotis, Georgios; Iglesias-Miguel, Juan
2017-01-01
In this paper, we propose a model to characterize Infrared (IR) signal reflections on any kind of surface material, together with a simplified procedure to compute the model parameters. The model works within the framework of Local Positioning Systems (LPS) based on IR signals (IR-LPS) to evaluate the behavior of transmitted signal Multipaths (MP), which are the main cause of error in IR-LPS, and makes several contributions to mitigation methods. Current methods are based on physics, optics, geometry and empirical methods, but these do not meet our requirements because of the need to apply several different restrictions and employ complex tools. We propose a simplified model based on only two reflection components, together with a method for determining the model parameters based on 12 empirical measurements that are easily performed in the real environment where the IR-LPS is being applied. Our experimental results show that the model provides a comprehensive solution to the real behavior of IR MP, yielding small errors when comparing real and modeled data (the mean error ranges from 1% to 4% depending on the environment surface materials). Other state-of-the-art methods yielded mean errors ranging from 15% to 40% in test measurements. PMID:28406436
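A two-component reflection model of the kind described, a diffuse term plus a directional term, can be sketched as follows. The functional forms and coefficients are hypothetical; the paper's actual parameterization and its 12-measurement calibration procedure are not reproduced here.

```python
import math

# Hypothetical two-component IR reflection sketch: Lambertian diffuse term
# plus a specular-like lobe peaked at the mirror direction. Coefficients
# kd, ks and lobe exponent n are invented for illustration.
def reflected_power(incident_power, theta_in, theta_out, kd=0.6, ks=0.3, n=20):
    """Angles in radians, measured from the surface normal (in-plane)."""
    diffuse = kd * math.cos(theta_out)
    specular = ks * max(0.0, math.cos(theta_out - theta_in)) ** n
    return incident_power * (diffuse + specular)

# The directional lobe peaks when the outgoing angle equals the incoming one
p_mirror = reflected_power(1.0, math.radians(30), math.radians(30))
p_off = reflected_power(1.0, math.radians(30), math.radians(60))
```

With only two components per surface, characterizing a room for multipath simulation reduces to estimating a few coefficients per material, which is what makes a small empirical calibration set feasible.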
[Measurement of soil organic matter and available K based on SPA-LS-SVM].
Zhang, Hai-Liang; Liu, Xue-Mei; He, Yong
2014-05-01
Visible and short wave near-infrared spectroscopy (Vis/SW-NIRS) was investigated in the present study for measurement of soil organic matter (OM) and available potassium (K). Four types of pretreatments, including smoothing, SNV, MSC and SG smoothing plus first derivative, were adopted to eliminate system noise and external disturbances. Partial least squares regression (PLSR) and least squares-support vector machine (LS-SVM) models were then implemented as calibration models. The LS-SVM model was built using characteristic wavelengths selected by the successive projections algorithm (SPA). The performance of the LS-SVM models was compared with that of the PLSR models. The results indicated that LS-SVM models using SPA-selected characteristic wavelengths as inputs outperformed PLSR models. The optimal SPA-LS-SVM models were achieved, with correlation coefficient (r) and RMSEP of 0.8602 and 2.98 for OM and 0.7305 and 15.78 for K, respectively. The results indicated that Vis/SW-NIRS (325-1075 nm) combined with LS-SVM based on SPA could be utilized as a precise method for the determination of soil properties.
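The wavelength-selection step can be sketched in the successive-projections style: starting from one wavelength, repeatedly add the spectral-matrix column with the largest norm after projecting out the columns already selected, so that each new variable carries information orthogonal to the previous ones. The data are synthetic, and the LS-SVM regression the paper pairs with the selected wavelengths is not reproduced here.

```python
import numpy as np

# Illustrative successive-projections-style wavelength selection.
rng = np.random.default_rng(3)
X = rng.normal(size=(40, 30))            # 40 samples x 30 wavelengths (synthetic)

def spa_select(X, start=0, n_select=5):
    selected = [start]
    P = X.astype(float).copy()
    for _ in range(n_select - 1):
        v = P[:, selected[-1]]
        # Project every column onto the orthogonal complement of v
        P = P - np.outer(v, v @ P) / (v @ v)
        norms = np.linalg.norm(P, axis=0)
        norms[selected] = -1.0           # never re-pick a selected column
        selected.append(int(np.argmax(norms)))
    return selected

chosen = spa_select(X, start=0, n_select=5)
```

Feeding only these few near-orthogonal wavelengths into the calibration model is what keeps an LS-SVM on spectral data from overfitting hundreds of collinear variables.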
Zhang, Hanze; Huang, Yangxin; Wang, Wei; Chen, Henian; Langland-Orban, Barbara
2017-01-01
In longitudinal AIDS studies, it is of interest to investigate the relationship between HIV viral load and CD4 cell counts, as well as the complicated time effect. Most common models for analyzing such complex longitudinal data are based on mean regression, which fails to provide efficient estimates in the presence of outliers and/or heavy tails. Quantile regression-based partially linear mixed-effects models, a special case of semiparametric models enjoying the benefits of both parametric and nonparametric models, have the flexibility to monitor the viral dynamics nonparametrically and detect the varying CD4 effects parametrically at different quantiles of viral load. Meanwhile, it is critical to consider various features of the repeated measurements, including left-censoring due to a limit of detection, covariate measurement error, and asymmetric distributions. In this research, we first establish a Bayesian joint model that accounts for all of these data features simultaneously within the framework of quantile regression-based partially linear mixed-effects models. The proposed models are applied to analyze the Multicenter AIDS Cohort Study (MACS) data. Simulation studies are also conducted to assess the performance of the proposed methods under different scenarios.
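The loss underlying quantile regression can be illustrated directly: minimizing the "check" (pinball) loss over a constant recovers the tau-th sample quantile, which is why regressing with this loss targets conditional quantiles of viral load. This toy on normal data is only the scalar core of the idea; the paper's Bayesian mixed-effects machinery is far richer.

```python
import numpy as np

# Check (pinball) loss: asymmetric absolute loss with asymmetry tau
def check_loss(r, tau):
    return np.mean(np.where(r >= 0, tau * r, (tau - 1) * r))

rng = np.random.default_rng(1)
y = rng.normal(5.0, 1.0, 5000)

# Minimize the loss over a constant c on a fine grid
grid = np.linspace(2.0, 8.0, 601)
losses = [check_loss(y - c, 0.9) for c in grid]
best = float(grid[int(np.argmin(losses))])   # lands near the 90th percentile of y
```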
Comparison of co-expression measures: mutual information, correlation, and model based indices.
Song, Lin; Langfelder, Peter; Horvath, Steve
2012-12-09
Co-expression measures are often used to define networks among genes. Mutual information (MI) is often used as a generalized correlation measure. It is not clear how much MI adds beyond standard (robust) correlation measures or regression model based association measures. Further, it is important to assess what transformations of these and other co-expression measures lead to biologically meaningful modules (clusters of genes). We provide a comprehensive comparison between mutual information and several correlation measures in 8 empirical data sets and in simulations. We also study different approaches for transforming an adjacency matrix, e.g. using the topological overlap measure. Overall, we confirm close relationships between MI and correlation in all data sets which reflects the fact that most gene pairs satisfy linear or monotonic relationships. We discuss rare situations when the two measures disagree. We also compare correlation and MI based approaches when it comes to defining co-expression network modules. We show that a robust measure of correlation (the biweight midcorrelation transformed via the topological overlap transformation) leads to modules that are superior to MI based modules and maximal information coefficient (MIC) based modules in terms of gene ontology enrichment. We present a function that relates correlation to mutual information which can be used to approximate the mutual information from the corresponding correlation coefficient. We propose the use of polynomial or spline regression models as an alternative to MI for capturing non-linear relationships between quantitative variables. The biweight midcorrelation outperforms MI in terms of elucidating gene pairwise relationships. Coupled with the topological overlap matrix transformation, it often leads to more significantly enriched co-expression modules. Spline and polynomial networks form attractive alternatives to MI in case of non-linear relationships. 
Our results indicate that MI networks can safely be replaced by correlation networks when it comes to measuring co-expression relationships in stationary data.
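The two quantities the comparison rests on, the biweight midcorrelation and the topological overlap transformation, can be sketched following their standard definitions (a simplification of WGCNA-style analysis, not the authors' exact code):

```python
import numpy as np

def bicor(x, y, c=9.0):
    """Biweight midcorrelation (sketch of the standard definition)."""
    def weighted_dev(v):
        mad = np.median(np.abs(v - np.median(v)))
        u = (v - np.median(v)) / (c * mad + 1e-12)
        w = ((1 - u ** 2) ** 2) * (np.abs(u) < 1)   # downweight outliers
        return (v - np.median(v)) * w
    a, b = weighted_dev(x), weighted_dev(y)
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

def topological_overlap(A):
    """Unsigned topological overlap matrix for adjacency A (entries 0..1, zero diagonal)."""
    k = A.sum(axis=1)
    num = A @ A + A                              # shared neighbours + direct link
    den = np.minimum.outer(k, k) + 1 - A
    T = num / den
    np.fill_diagonal(T, 1.0)
    return T

x = np.linspace(0.0, 1.0, 50)
r = bicor(x, 2 * x + 1)                  # exact linear relation: bicor of 1
T = topological_overlap(np.array([[0.0, 1.0], [1.0, 0.0]]))
```

For an exact linear relationship bicor equals 1, like Pearson correlation, but the biweights make it robust to outliers, which is the property driving its advantage over MI in the comparison above.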
NASA Astrophysics Data System (ADS)
Laska, K.; Prosek, P.; Budik, L.; Budikova, M.
2009-04-01
The results of global solar and erythemally effective ultraviolet (EUV) radiation measurements are presented. The radiation data were collected during the period 2006-2007 at the Czech Antarctic station J. G. Mendel, James Ross Island (63°48'S, 57°53'W). Global solar radiation was measured by a Kipp&Zonen CM11 pyranometer. EUV radiation was measured according to the McKinley and Diffey Erythemal Action Spectrum with a Solar Light broadband UV-Biometer Model 501A. The effects of stratospheric ozone concentration and cloudiness (estimated as a cloud impact factor from global solar radiation) on the intensity of incident EUV radiation were calculated with a non-linear regression model. The total ozone content (TOC) and cloud/surface reflectivity derived from satellite-based measurements were applied in the model to eliminate uncertainties in the measured ozone values. Two TOC input datasets were used in the model: the first was taken from Dobson spectrophotometer measurements at the Argentinean Antarctic station Marambio, and the second was acquired for the geographical coordinates of the Mendel Station from the EOS Aura Ozone Monitoring Instrument using the V8.5 algorithm. Analysis of the measured EUV data showed that variable cloudiness mainly affected short-term fluctuations of the radiation fluxes, while ozone declines caused a long-term increase in UV radiation in the second half of the year. The model explained about 98% of the variability of the measured EUV radiation. The residuals between measured and modeled EUV radiation intensities were evaluated separately for the two TOC datasets specified above, for parts of the seasons, and for the cloud impact factor (cloudiness). The mean average prediction error was used for model validation with respect to the cloud impact factor and satellite-based reflectivity data.
NASA Astrophysics Data System (ADS)
Nergui, T.; Lee, Y.; Chung, S. H.; Lamb, B. K.; Yokelson, R. J.; Barsanti, K.
2017-12-01
A number of chamber and field measurements have shown that atmospheric organic aerosols and their precursors produced by wildfires are significantly underestimated in the emission inventories used by air quality models for applications such as regulatory strategy development, impact assessments of air pollutants, and air quality forecasting for public health. The AIRPACT real-time air quality forecasting system consistently underestimates surface-level fine particulate matter (PM2.5) concentrations in the summer at both urban and rural locations in the Pacific Northwest, primarily as a result of errors in organic particulate matter. In this work, we implement updated chemical speciation and emission factors based on FLAME-IV (Fourth Fire Lab at Missoula Experiment) and other measurements in the BlueSky fire emission model and the SMOKE emission preprocessor, together with modified parameters for the secondary organic aerosol (SOA) module in the CMAQ chemical transport model of the AIRPACT modeling system. Simulation results from CMAQ version 5.2, which has an improved treatment of anthropogenic SOA formation (base case), and from the modified parameterization for fire emissions and chemistry (fire-SOA case) are evaluated against airborne measurements downwind of the Big Windy Complex Fire and the Colockum Tarps Fire, both of which occurred in the Pacific Northwest in summer 2013. Using the observed aerosol chemical composition and mass loadings of organics, nitrate, sulfate, ammonium, and chloride from aircraft measurements during the Studies of Emissions and Atmospheric Composition, Clouds, and Climate Coupling by Regional Surveys (SEAC4RS) and the Biomass Burning Observation Project (BBOP), we assess how new knowledge gained from wildfire measurements improves model predictions of SOA and its contribution to total PM2.5 mass concentrations.
Oxygen Pickup Ions Measured by MAVEN Outside the Martian Bow Shock
NASA Astrophysics Data System (ADS)
Rahmati, A.; Cravens, T.; Larson, D. E.; Lillis, R. J.; Dunn, P.; Halekas, J. S.; Connerney, J. E. P.; Eparvier, F. G.; Thiemann, E.; Mitchell, D. L.; Jakosky, B. M.
2015-12-01
The MAVEN (Mars Atmosphere and Volatile EvolutioN) spacecraft entered orbit around Mars on September 21, 2014 and has since been detecting energetic oxygen pickup ions with its SEP (Solar Energetic Particles) and SWIA (Solar Wind Ion Analyzer) instruments. The oxygen pickup ions detected outside the Martian bow shock and in the upstream solar wind are associated with the extended hot oxygen exosphere of Mars, which is created mainly by the dissociative recombination of molecular oxygen ions with electrons in the ionosphere. We use analytic solutions to the equations of motion of pickup ions moving in the undisturbed upstream solar wind magnetic and motional electric fields and calculate the flux of oxygen pickup ions at the location of MAVEN. Our model calculates the ionization rate of oxygen atoms in the exosphere based on the hot oxygen densities predicted by Rahmati et al. (2014); the sources of ionization include photo-ionization, charge exchange, and electron impact ionization. The photo-ionization frequency is calculated using the FISM (Flare Irradiance Spectral Model) solar flux model, based on MAVEN EUVM (Extreme Ultra-Violet Monitor) measurements. The frequency of charge exchange between a solar wind proton and an oxygen atom is calculated using MAVEN SWIA solar wind proton flux measurements, and the electron impact ionization frequency is calculated from MAVEN SWEA (Solar Wind Electron Analyzer) solar wind electron flux measurements. The solar wind magnetic field used in the model is taken from measurements by MAVEN MAG (magnetometer) in the upstream solar wind. The good agreement between our predicted pickup oxygen fluxes and those measured by MAVEN SEP and SWIA confirms the detection of oxygen pickup ions, and these model-data comparisons can be used to constrain models of hot oxygen densities and photochemical escape flux.
Peters, Susan; Vermeulen, Roel; Portengen, Lützen; Olsson, Ann; Kendzia, Benjamin; Vincent, Raymond; Savary, Barbara; Lavoué, Jérôme; Cavallo, Domenico; Cattaneo, Andrea; Mirabelli, Dario; Plato, Nils; Fevotte, Joelle; Pesch, Beate; Brüning, Thomas; Straif, Kurt; Kromhout, Hans
2011-11-01
We describe an empirical model of exposure to respirable crystalline silica (RCS) used to create a quantitative job-exposure matrix (JEM) for community-based studies. Personal measurements of exposure to RCS from Europe and Canada were obtained for exposure modelling. A mixed-effects model was developed, with region/country and job title as random-effect terms. The fixed-effect terms included year of measurement, measurement strategy (representative or worst-case), sampling duration (minutes), and an a priori exposure-intensity rating for each job from an independently developed JEM (none, low, high). In total, 23,640 personal RCS exposure measurements, covering the period from 1976 to 2009, were available for modelling. The model indicated an overall downward time trend in RCS exposure levels of -6% per year. Exposure levels were higher in the UK and Canada, and lower in Northern Europe and Germany. Worst-case sampling was associated with higher reported exposure levels, and longer sampling duration was associated with lower reported exposure levels. The highest predicted RCS exposure levels in the reference year (1998) were for chimney bricklayers (geometric mean 0.11 mg m(-3)) and for monument carvers and other stone cutters and carvers (0.10 mg m(-3)). The resulting model enables us to predict time-, job-, and region/country-specific exposure levels of RCS. These predictions will be used in SYNERGY, an ongoing pooled multinational community-based case-control study on lung cancer.
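The time-trend part of such a model can be illustrated in miniature (not the paper's full mixed-effects model): exposure levels are roughly log-normal, so fitting log-exposure against calendar year turns the slope into a multiplicative trend, exp(slope) - 1 per year. The -6%/year and 0.11 mg m(-3) figures from the abstract seed the synthetic data.

```python
import numpy as np

rng = np.random.default_rng(42)
years = rng.integers(1976, 2010, 500)                    # measurement years
log_gm_1998 = np.log(0.11)                               # 1998 geometric mean (illustrative)
true_slope = np.log(1 - 0.06)                            # -6% per year
log_exp = log_gm_1998 + true_slope * (years - 1998) + rng.normal(0, 0.8, 500)

# Linear fit on the log scale; centring on 1998 makes exp(intercept) the
# geometric mean in the reference year
slope, intercept = np.polyfit(years - 1998, log_exp, 1)
annual_change = np.exp(slope) - 1                        # recovered trend
```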
Feasibility study of a space-based high pulse energy 2 μm CO2 IPDA lidar.
Singh, Upendra N; Refaat, Tamer F; Ismail, Syed; Davis, Kenneth J; Kawa, Stephan R; Menzies, Robert T; Petros, Mulugeta
2017-08-10
Sustained high-quality column carbon dioxide (CO2) atmospheric measurements from space are required to improve estimates of regional- and continental-scale sources and sinks of CO2. Modeling of a space-based 2 μm, high pulse energy, triple-pulse, direct detection integrated path differential absorption (IPDA) lidar was conducted to demonstrate CO2 measurement capability and to evaluate random and systematic errors. Parameters based on recent technology developments in 2 μm lasers and a state-of-the-art HgCdTe (MCT) electron-initiated avalanche photodiode (e-APD) detection system were incorporated in this model. Strong absorption features of CO2 in the 2 μm region, which allow optimum lower-tropospheric and near-surface measurements, were used to project simultaneous measurements with two independent altitude-dependent weighting functions using the triple-pulse IPDA. Measurements were analyzed over a variety of atmospheric and aerosol models and a range of Earth surface targets and aerosol loading conditions. Water vapor (H2O) influences on CO2 measurements were assessed, including molecular interference, the dry-air estimate, and line broadening. Projected performance shows a <0.35 ppm precision and a <0.3 ppm bias in low-tropospheric weighted measurements related to column CO2 optical depth for the space-based IPDA using 10 s signal averaging over the Railroad Valley (RRV) reference surface under clear and thin-cloud conditions.
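The core IPDA retrieval step can be sketched in textbook form (this is not NASA's actual processing chain, and all numbers are made up for illustration): the one-way differential absorption optical depth (DAOD) follows from the on-line/off-line return energies normalized by transmit energies, and dividing by an assumed differential absorption cross-section gives the column amount.

```python
import numpy as np

def daod(p_on, p_off, e_on, e_off):
    """One-way DAOD from an on/off pulse pair (returns normalized by transmit energy)."""
    return 0.5 * np.log((p_off / e_off) / (p_on / e_on))

dsigma = 2.0e-27           # m^2, assumed differential absorption cross-section
p_on, p_off = 0.8, 1.0     # received pulse energies (arbitrary units)
e_on, e_off = 1.0, 1.0     # transmitted pulse energies
column = daod(p_on, p_off, e_on, e_off) / dsigma   # column number density, m^-2
```

The factor of 0.5 accounts for the two-way path to the surface and back; the triple-pulse design adds a second on-line wavelength, giving the two altitude-dependent weighting functions mentioned above.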
NASA Astrophysics Data System (ADS)
Brocklehurst, Aidan; Boon, Alex; Barlow, Janet; Hayden, Paul; Robins, Alan
2014-05-01
The source area of an instrument is an estimate of the area of ground over which the measurement is generated. Quantification of the source area of a measurement site provides crucial context for analysis and interpretation of the data. A range of computational models exists to calculate the source area of an instrument, but these are usually based on assumptions which do not hold for instruments positioned very close to the surface, particularly those surrounded by heterogeneous terrain such as urban areas. Although positioning instrumentation at higher elevation (i.e. on masts) is ideal in urban areas, masts are costly to install and maintain, and it can be logistically difficult to place instruments in the ideal geographical location. Therefore, in many studies, experimentalists turn to rooftops to position instrumentation. Experimental validations of source area models for these situations are very limited. In this study, a controlled tracer gas experiment was conducted in a wind tunnel using a 1:200 scale model of a measurement site used in previous experimental work in central London. The detector was set at the location of the rooftop site as the tracer was released at a range of locations within the surrounding streets and rooftops. Concentration measurements are presented for a range of wind angles, with the spread of concentration measurements indicative of the source area distribution. Clear evidence of wind channeling by streets is seen, with the shape of the source area strongly influenced by buildings upwind of the measurement point. The results of the wind tunnel study are compared to scalar concentration source areas generated by modelling approaches based on meteorological data from the central London experimental site and used in the interpretation of continuous carbon dioxide (CO2) concentration data.
Initial conclusions are drawn as to how to apply scalar concentration source area models to rooftop measurement sites, with suggestions for their improvement to incorporate effects such as channeling.
NASA Astrophysics Data System (ADS)
Houska, Tobias; Kraus, David; Kiese, Ralf; Breuer, Lutz
2017-07-01
This study presents the results of a combined measurement and modelling strategy to analyse N2O and CO2 emissions from adjacent arable land, forest and grassland sites in Hesse, Germany. The measured emissions reveal seasonal patterns and management effects, including fertilizer application, tillage, harvest and grazing. The measured annual N2O fluxes are 4.5, 0.4 and 0.1 kg N ha-1 a-1, and the CO2 fluxes are 20.0, 12.2 and 3.0 t C ha-1 a-1 for the arable land, grassland and forest sites, respectively. An innovative model-data fusion concept based on a multicriteria evaluation (soil moisture at different depths, yield, CO2 and N2O emissions) is used to rigorously test the LandscapeDNDC biogeochemical model. The model is run in a Latin-hypercube-based uncertainty analysis framework to constrain model parameter uncertainty and derive behavioural model runs. The results indicate that the model is generally capable of predicting trace gas emissions, as evaluated with RMSE as the objective function. The model shows a reasonable performance in simulating the ecosystem C and N balances. The model-data fusion concept helps to detect remaining model errors, such as missing (e.g. freeze-thaw cycling) or incomplete model processes (e.g. respiration rates after harvest). This concept further elucidates the identification of missing model input sources (e.g. the uptake of N through shallow groundwater on grassland during the vegetation period) and uncertainty in the measured validation data (e.g. forest N2O emissions in winter months). Guidance is provided to improve the model structure and field measurements to further advance landscape-scale model predictions.
Hopkins, D L; Safari, E; Thompson, J M; Smith, C R
2004-06-01
A wide selection of lamb types of mixed sex (ewes and wethers) was slaughtered at a commercial abattoir, and during this process images of 360 carcasses were obtained online using the VIAScan® system developed by Meat and Livestock Australia. Soft tissue depth at the GR site (thickness of tissue over the 12th rib, 110 mm from the midline) was measured by an abattoir employee using the AUS-MEAT sheep probe (PGR). Another measure of this thickness was taken in the chiller using a GR knife (NGR). Each carcass was subsequently broken down into a range of trimmed boneless retail cuts and the lean meat yield determined. The current industry model for predicting meat yield uses hot carcass weight (HCW) and tissue depth at the GR site. A low level of accuracy and precision was found when HCW and PGR were used to predict lean meat yield (R(2)=0.19, r.s.d.=2.80%), which improved markedly when PGR was replaced by NGR (R(2)=0.41, r.s.d.=2.39%). If the GR measures were replaced by 8 VIAScan® measures, greater prediction accuracy could be achieved (R(2)=0.52, r.s.d.=2.17%). A similar result was achieved when the model was based on principal components (PCs) computed from the 8 VIAScan® measures (R(2)=0.52, r.s.d.=2.17%). The use of PCs also improved the stability of the model compared to a regression model based on HCW and NGR. The transportability of the models was tested by randomly dividing the data set and comparing coefficients and the levels of accuracy and precision. Models based on PCs were superior to those based on regression. It is demonstrated that, with appropriate modeling, the VIAScan® system offers a workable method for predicting lean meat yield automatically.
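Principal-components regression, the idea behind building the yield model on PCs of the 8 VIAScan measures, can be sketched on synthetic data (3 latent factors behind 8 correlated measures; this is not the lamb carcass dataset):

```python
import numpy as np

rng = np.random.default_rng(3)
Z = rng.normal(size=(120, 3))                    # latent factors
W = rng.normal(size=(3, 8))                      # loadings onto 8 measures
X = Z @ W + 0.05 * rng.normal(size=(120, 8))     # observed correlated measures
y = Z @ np.array([1.0, -0.5, 0.3]) + rng.normal(0, 0.1, 120)   # "yield"

# PCA via SVD of the centred predictors, then regress on the first 3 scores
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T                           # first 3 PC scores as regressors
design = np.column_stack([np.ones(len(y)), scores])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
pred = design @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

Because the PCs are mutually orthogonal, the fitted coefficients are insensitive to the collinearity among the raw measures, which is one plausible reason the PC-based models transported better between data subsets.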
Estimation of the sea surface's two-scale backscatter parameters
NASA Technical Reports Server (NTRS)
Wentz, F. J.
1978-01-01
The relationship between the sea-surface normalized radar cross section and the friction velocity vector is determined using a parametric two-scale scattering model. The model parameters are found from a nonlinear maximum likelihood estimation based on aircraft scatterometer measurements and sea-surface anemometer measurements collected during the JONSWAP '75 experiment. The estimates of the ten model parameters converge to realistic values that are in good agreement with the available oceanographic data. The rms discrepancy between the model and the cross-section measurements is 0.7 dB, which is the rms sum of a 0.3 dB average measurement error and a 0.6 dB modeling error.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomar, Vikas; Haque, Aman; Hattar, Khalid
In-core nuclear materials, including fuel pins and cladding, fail due to issues such as corrosion, mechanical wear, and pellet-cladding interaction. In most such scenarios, microstructure-dependent and corrosion-induced chemistry-dependent property changes significantly affect the performance of the cladding, pellet, and housing. The emphasis of this work was on replacing conventional pellet-cladding material models with a new strain-gradient viscoplasticity model informed by transmission electron microscopy (TEM) based measurements and by nanomechanical Raman spectroscopy (NMRS) based measurements. The TEM measurements are quantitative in nature and therefore reveal stress-strain relations with simultaneous insight into the mechanisms of deformation at the nanoscale. The NMRS measurements reveal similar information at the mesoscale, along with additional information relating local microstructural stresses to applied stresses. The resulting information is used to fit the constants in the strain-gradient viscoplasticity model as well as to validate it. For the TEM measurements, a micro-electro-mechanical-system-based setup was developed with mechanical actuation, sensing, heating, and electrical loading. In contrast to post-mortem analysis or qualitative visualization, this setup combines direct visualization of the mechanisms behind deformation with measurement of stress, strain, and thermal and electrical properties. The research philosophy of visualizing the microstructure at high resolution while measuring the properties led to fundamental understanding of grain size and temperature effects on measured mechanical properties such as fracture toughness. A key contribution is the use of mechanical loading boundary conditions to deconvolute the in situ TEM-based nanoscale and NMRS-based mesoscale data to bulk behavior.
First, a literature-based pellet-cladding mechanical interaction model, following the work of Retel and Williamson, was analyzed to predict temperature and stress distributions in the cladding and pellet at normal operating conditions. The data were then fitted to find the constants for a viscoplastic strain-gradient model. The developed model still needs to be refined and calibrated against further experimental results; that remains the focus of future work. Overall, a major thrust of the work was active control of the microstructure (grain size, defect density and types) by exploiting multi-physics coupling in materials. In particular, experiments studied the synergy of current density, mechanical stress, and temperature to annihilate defects and recrystallize grains. The developed model is being examined for implementation in BISON. Multiple invited talks, international journal publications, and conference publications were produced by students supported on this work. Another outcome is the support of multiple PhD and master's thesis students who will be an important asset for future basic nuclear research. Future work recommendations: A nuclear reactor operates under significant variations of thermal loads due to energy cycling and mechanical loads due to constraint effects. Significant thermal and chemical diffusion takes place at the pellet-cladding interface. While the proposed work established a new experimental approach and a new dataset for Zircaloy-4, the irradiation level was in the range of 1-2 dpa; samples with higher dpa need to be examined. Continued support of this work is therefore essential, as these are currently the only experiments that can produce such data. The work also needs to be extended to different fuel types and cladding types, such as SiC- and FeCrAl-based claddings.
A combination of datasets for these materials can then be used to accurately predict the behavior of critical pellet-cladding systems in accident scenarios with high heat flux and high thermal loads, which remains a significant unknown at present.
Baad-Hansen, Thomas; Kold, Søren; Kaptein, Bart L; Søballe, Kjeld
2007-08-01
In RSA, tantalum markers attached to metal-backed acetabular cups are often difficult to detect on stereo radiographs due to the high density of the metal shell. This results in occlusion of the prosthesis markers and may lead to inconclusive migration results. Within the last few years, new software systems have been developed to solve this problem. We compared the precision of 3 RSA systems in migration analysis of the acetabular component. A hemispherical and a non-hemispherical acetabular component were mounted in a phantom. Both acetabular components underwent migration analyses with 3 different RSA systems: conventional RSA using tantalum markers, an RSA system using a hemispherical cup algorithm, and a novel model-based RSA system. We found narrow confidence intervals, indicating high precision of the conventional marker system and model-based RSA with regard to migration and rotation. The confidence intervals of conventional RSA and model-based RSA were narrower than those of the hemispherical cup algorithm-based system regarding cup migration and rotation. The model-based RSA software combines the precision of the conventional RSA software with the convenience of the hemispherical cup algorithm-based system. Based on our findings, we believe that these new tools offer an improvement in the measurement of acetabular component migration.
NASA Astrophysics Data System (ADS)
Zaib Jadoon, Khan; Umer Altaf, Muhammad; McCabe, Matthew Francis; Hoteit, Ibrahim; Muhammad, Nisar; Moghadas, Davood; Weihermüller, Lutz
2017-10-01
Quantitative interpretation of electromagnetic induction (EMI) measurements requires estimating the optimal model parameters and the uncertainty of a nonlinear inverse problem. For this purpose, an adaptive Bayesian Markov chain Monte Carlo (MCMC) algorithm is used to assess multi-orientation and multi-offset EMI measurements in an agricultural field with non-saline and saline soil. In MCMC, the posterior distribution is computed using Bayes' rule. An electromagnetic forward model based on the full solution of Maxwell's equations was used to simulate the apparent electrical conductivity measured with the configurations of the CMD Mini-Explorer EMI instrument. Uncertainty in the parameters of the three-layered earth model is investigated using synthetic data. Our results show that in the non-saline scenario the layer-thickness parameters are much less informative than the layer electrical conductivities and are therefore difficult to resolve. Application of the proposed MCMC-based inversion to field measurements in a drip irrigation system demonstrates that the model parameters can be estimated better for the saline soil than for the non-saline soil, and it provides useful insight into parameter uncertainty for the assessment of the model outputs.
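The Bayesian inversion idea can be sketched with a minimal Metropolis-Hastings sampler: recover a parameter of a nonlinear forward model from noisy data, then inspect the posterior spread. The exponential forward model below is a toy stand-in for the full Maxwell-equation EMI solver used in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def forward(theta, x):
    return np.exp(-theta * x)              # toy nonlinear forward model

x = np.linspace(0.0, 2.0, 30)
sigma = 0.02
data = forward(1.5, x) + rng.normal(0, sigma, x.size)   # synthetic observations

def log_post(theta):
    if theta <= 0:
        return -np.inf                     # flat prior on theta > 0
    r = data - forward(theta, x)
    return -0.5 * np.sum(r ** 2) / sigma ** 2

chain, theta = [], 1.0
lp = log_post(theta)
for _ in range(5000):
    prop = theta + 0.05 * rng.normal()     # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop          # accept
    chain.append(theta)
post = np.array(chain[1000:])              # discard burn-in
```

The posterior mean recovers the true parameter and the posterior standard deviation quantifies how well the data constrain it, which is exactly the information used above to conclude that layer thicknesses are poorly resolved for non-saline soil.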
2013-01-01
Background When mathematical modelling is applied across many different application areas, a common task is the estimation of states and parameters from measurements. In this kind of inference, uncertainties in the times at which the measurements were taken are often neglected, but especially in applications from the life sciences, such errors can considerably influence the estimation results. As an example in the context of personalized medicine, the model-based assessment of the effectiveness of drugs is beginning to play an important role. Systems biology may help here by providing good pharmacokinetic and pharmacodynamic (PK/PD) models. Inference on these systems based on data gained from clinical studies with several patient groups becomes a major challenge. Particle filters are a promising approach to tackle these difficulties but are by themselves not ready to handle uncertainties in measurement times. Results In this article, we describe a variant of the standard particle filter (PF) algorithm which allows state and parameter estimation with the inclusion of measurement time uncertainties (MTU). The modified particle filter, which we call MTU-PF, also allows an adaptive step-size choice in the time-continuous case to avoid degeneracy problems. The modification is based on the model assumption of uncertain measurement times. While the assumption of randomness in the measurements themselves is common, the corresponding measurement times are generally taken as deterministic and exactly known. Especially in cases where the data are gained from measurements on blood or tissue samples, a relatively high uncertainty in the true measurement time is a natural assumption. Our method is appropriate in cases where relatively few data are available from a relatively large number of groups or individuals, which introduces mixed effects into the model. This is a typical setting in clinical studies.
We demonstrate the method on a small artificial example and apply it to a mixed-effects model of plasma-leucine kinetics with data from a clinical study which included 34 patients. Conclusions Comparisons of our MTU-PF with the standard PF and with an alternative maximum likelihood estimation method on the small artificial example clearly show that the MTU-PF obtains better estimates. In the application to the data from the clinical study, the MTU-PF shows similar performance to the standard particle filter with respect to the quality of the estimated parameters, but it proves to be less prone to degeneracy. PMID:23331521
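The standard PF baseline that the MTU-PF extends can be sketched as a bare-bones bootstrap particle filter for a scalar random walk observed in Gaussian noise (synthetic data only; the MTU extension would additionally randomize the measurement times):

```python
import numpy as np

rng = np.random.default_rng(11)
T, N = 50, 2000
q, r = 0.1, 0.5                                      # process / measurement noise std
truth = np.cumsum(rng.normal(0, q, T))               # hidden random walk
obs = truth + rng.normal(0, r, T)                    # noisy observations

particles = rng.normal(0, 1.0, N)
est = []
for z in obs:
    particles = particles + rng.normal(0, q, N)      # propagate through the dynamics
    w = np.exp(-0.5 * ((z - particles) / r) ** 2)    # weight by observation likelihood
    w /= w.sum()
    particles = particles[rng.choice(N, size=N, p=w)]  # multinomial resampling
    est.append(particles.mean())
rmse = float(np.sqrt(np.mean((np.array(est) - truth) ** 2)))
```

The filter estimate tracks the hidden state well below the raw observation noise level; degeneracy, the failure mode the MTU-PF mitigates, appears when the weights concentrate on very few particles before resampling.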
Balk, Benjamin; Elder, Kelly
2000-01-01
We model the spatial distribution of snow across a mountain basin using an approach that combines binary decision tree and geostatistical techniques. In April 1997 and 1998, intensive snow surveys were conducted in the 6.9 km² Loch Vale watershed (LVWS), Rocky Mountain National Park, Colorado. Binary decision trees were used to model the large-scale variations in snow depth, while the small-scale variations were modeled through kriging interpolation methods. Binary decision trees related depth to the physically based independent variables of net solar radiation, elevation, slope, and vegetation cover type. These decision tree models explained 54-65% of the observed variance in the depth measurements. The tree-based modeled depths were then subtracted from the measured depths, and the resulting residuals were spatially distributed across LVWS through kriging techniques. The kriged estimates of the residuals were added to the tree-based modeled depths to produce a combined depth model. The combined depth estimates explained 60-85% of the variance in the measured depths. Snow densities were mapped across LVWS using regression analysis. Snow-covered area was determined from high-resolution aerial photographs. Combining the modeled depths and densities with a snow cover map produced estimates of the spatial distribution of snow water equivalence (SWE). This modeling approach offers improvement over previous methods of estimating SWE distribution in mountain basins.
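The combined approach can be sketched in miniature: a coarse "tree" captures the large-scale trend, and its residuals are spatially interpolated. For brevity, a single elevation split stands in for the fitted binary decision tree and inverse distance weighting (IDW) stands in for kriging; the terrain and depth field below are synthetic, not the Loch Vale data.

```python
import numpy as np

rng = np.random.default_rng(5)
xy = rng.uniform(0, 1, (200, 2))                  # survey locations (x = elevation proxy)
depth = 1.0 + 2.0 * (xy[:, 0] > 0.5) + 0.3 * np.sin(6 * xy[:, 1])

# "tree" step: one split on elevation, predict the branch mean
high = xy[:, 0] > 0.5
tree_pred = np.where(high, depth[high].mean(), depth[~high].mean())
resid = depth - tree_pred                         # small-scale variation left over

def idw(p, pts, vals, k=5, power=2.0, eps=1e-9):
    """Interpolate residual values at point p from the k nearest survey points."""
    d = np.linalg.norm(pts - p, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + eps) ** power
    return np.sum(w * vals[idx]) / np.sum(w)

# combined estimate at an unsampled point = tree trend + interpolated residual
p = np.array([0.8, 0.25])
trend = depth[high].mean() if p[0] > 0.5 else depth[~high].mean()
combined = trend + idw(p, xy, resid)
true_depth = 1.0 + 2.0 + 0.3 * np.sin(6 * 0.25)
```

Adding the interpolated residual back onto the tree trend recovers the small-scale structure the tree alone misses, mirroring the jump from 54-65% to 60-85% explained variance reported above.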
NASA Astrophysics Data System (ADS)
Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao
2017-03-01
Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Input variable selection eliminates irrelevant or redundant variables so that a suitable subset of variables is identified as the model input; at the same time, it simplifies the model structure and improves computational efficiency. This paper describes the input variable selection procedures for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, namely partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machines (SVM) are established individually based on the input candidates resulting from each selection method. The validity of the selection outcomes is assessed through a comparison of the output performance of the SVM-based data-driven models and through sensitivity analysis. The validation and analysis results suggest that the input variables selected by the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction.
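The filter-style core of mutual-information-based selection can be sketched with a simple histogram MI estimator: a relevant input (even a purely nonlinear one) scores far above an irrelevant one. PMI extends this by conditioning on the inputs already chosen, which is omitted here for brevity.

```python
import numpy as np

def mutual_info(x, y, bins=12):
    """Histogram estimate of the mutual information (in nats) between x and y."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

rng = np.random.default_rng(2)
relevant = rng.uniform(-1, 1, 4000)
irrelevant = rng.uniform(-1, 1, 4000)
target = relevant ** 2 + 0.05 * rng.normal(size=4000)   # nonlinear dependence

mi_rel = mutual_info(relevant, target)      # large: target depends on this input
mi_irr = mutual_info(irrelevant, target)    # near zero (finite-sample bias only)
```

Note that the dependence here is quadratic, so linear correlation with the target would be near zero; MI-based criteria catch exactly this kind of nonlinear relevance.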
ERIC Educational Resources Information Center
Wei, Silin; Liu, Xiufeng; Wang, Zuhao; Wang, Xingqiao
2012-01-01
Research suggests that difficulty in making connections among three levels of chemical representations--macroscopic, submicroscopic, and symbolic--is a primary reason for student alternative conceptions of chemistry concepts, and computer modeling is promising to help students make the connections. However, no computer modeling-based assessment…
Specifying and Refining a Measurement Model for a Simulation-Based Assessment. CSE Report 619.
ERIC Educational Resources Information Center
Levy, Roy; Mislevy, Robert J.
2004-01-01
The challenges of modeling students' performance in simulation-based assessments include accounting for multiple aspects of knowledge and skill that arise in different situations and the conditional dependencies among multiple aspects of performance in a complex assessment. This paper describes a Bayesian approach to modeling and estimating…
NASA Astrophysics Data System (ADS)
Tain, Rong-Wen; Alperin, Noam
2008-03-01
Intracranial compliance (ICC) determines the ability of the intracranial space to accommodate an increase in volume (e.g., brain swelling) without a large increase in intracranial pressure (ICP). Therefore, measurement of ICC is potentially important for diagnosis and for guiding treatment of related neurological problems. The modeling-based approach uses an assumed lumped-parameter model of the craniospinal system (CSS) (e.g., an RLC circuit), with either the arterial or the net transcranial blood flow (arterial inflow minus venous outflow) as input and the cranio-spinal cerebrospinal fluid (CSF) flow as output. The phase difference between the output and input is then often used as a measure of ICC. However, it is not clear whether there is a predetermined relationship between ICC and the phase difference between these waveforms. A different approach for estimation of ICC has recently been proposed. This approach estimates ICC from the ratio of the intracranial volume and pressure changes that occur naturally with each heartbeat. The current study evaluates the sensitivity of the phase-based and the direct approach to changes in ICC. An RLC circuit model of the cranio-spinal system is used to simulate the cranio-spinal CSF flow for three different ICC states using the transcranial blood flows measured by MRI phase contrast in healthy human subjects. The effect of an increase in ICC on the magnitude and phase response is calculated from the system's transfer function. We observed that, within the heart rate frequency range, changes in ICC predominantly affected the amplitude of CSF pulsation and less so the phase. The compliance is then obtained for the different ICC states using the direct approach. The measures of compliance calculated using the direct approach demonstrated the highest sensitivity to changes in ICC. This work explains why the phase-shift-based measure of ICC is less sensitive than amplitude-based measures such as the direct approach.
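The magnitude and phase response referred to above come from the transfer function of the lumped circuit. A generic second-order low-pass stand-in (with illustrative parameters, not the study's fitted RLC values) shows the computation:

```python
import numpy as np

def rlc_response(w, R, L, C):
    # Second-order low-pass transfer function (output across C in a series
    # RLC): H(jw) = 1 / (1 - w^2*L*C + j*w*R*C). Returns magnitude and phase.
    H = 1.0 / (1.0 - w**2 * L * C + 1j * w * R * C)
    return np.abs(H), np.angle(H)

# Evaluate at a fixed drive frequency for several compliance-like values of C.
R, L = 2.0, 1.0
for C in (0.5, 1.0, 2.0):
    mag, ph = rlc_response(1.2, R, L, C)
```

At the resonance frequency w0 = 1/sqrt(LC) the phase is exactly -90 degrees and the magnitude equals sqrt(L/C)/R, which makes a convenient analytic check.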
Cardiovascular oscillations: in search of a nonlinear parametric model
NASA Astrophysics Data System (ADS)
Bandrivskyy, Andriy; Luchinsky, Dmitry; McClintock, Peter V.; Smelyanskiy, Vadim; Stefanovska, Aneta; Timucin, Dogan
2003-05-01
We suggest a fresh approach to the modeling of the human cardiovascular system. Taking advantage of a new Bayesian inference technique able to deal with stochastic nonlinear systems, we show that one can estimate parameters for models of the cardiovascular system directly from measured time series. We present preliminary results of inference of the parameters of a coupled-oscillator model from measured cardiovascular data, addressing cardiorespiratory interaction. We argue that the inference technique offers a very promising tool for modeling, able to contribute significantly towards the solution of a long-standing challenge: the development of new diagnostic techniques based on noninvasive measurements.
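A heavily simplified version of the idea, estimating model parameters directly from a measured time series, can be sketched with deterministic least squares on finite-difference derivatives rather than the full Bayesian machinery; the van der Pol-type oscillator and its parameter values are assumptions for illustration.

```python
import numpy as np

# Simulate x'' = a*x + b*(1 - x^2)*x' with known (a, b), then recover the
# parameters by linear least squares on finite-difference derivatives.
a_true, b_true = -1.0, 0.7
dt, n = 0.002, 20000
x = np.empty(n); v = np.empty(n)
x[0], v[0] = 1.0, 0.0
for k in range(n - 1):               # semi-implicit Euler integration
    acc = a_true * x[k] + b_true * (1 - x[k]**2) * v[k]
    v[k + 1] = v[k] + dt * acc
    x[k + 1] = x[k] + dt * v[k + 1]

xd = (x[2:] - x[:-2]) / (2 * dt)             # central-difference velocity
xdd = (x[2:] - 2 * x[1:-1] + x[:-2]) / dt**2 # central-difference acceleration
xm = x[1:-1]
basis = np.column_stack([xm, (1 - xm**2) * xd])
a_est, b_est = np.linalg.lstsq(basis, xdd, rcond=None)[0]
```

Because the drift is linear in the parameters, the inference reduces to a linear regression of acceleration onto the basis functions, which is also the structural reason the Bayesian version remains tractable.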
Modal phase measuring deflectometry
Huang, Lei; Xue, Junpeng; Gao, Bo; ...
2016-10-14
In this work, a model-based method named modal phase measuring deflectometry is applied to phase measuring deflectometry. The height and slopes of the surface under test are represented by mathematical models and updated by optimizing the model coefficients to minimize the discrepancy between the reprojection in ray tracing and the actual measurement. The pose of the screen relative to the camera is pre-calibrated and further optimized together with the shape coefficients of the surface under test. Simulations and experiments are conducted to demonstrate the feasibility of the proposed approach.
Measurements and modelling of base station power consumption under real traffic loads.
Lorincz, Josip; Garma, Tonko; Petrovic, Goran
2012-01-01
Base stations represent the main contributor to the energy consumption of a mobile cellular network. Since traffic load in mobile networks significantly varies during a working or weekend day, it is important to quantify the influence of these variations on the base station power consumption. Therefore, this paper investigates changes in the instantaneous power consumption of GSM (Global System for Mobile Communications) and UMTS (Universal Mobile Telecommunications System) base stations according to their respective traffic load. The real data in terms of the power consumption and traffic load have been obtained from continuous measurements performed on a fully operated base station site. Measurements show the existence of a direct relationship between base station traffic load and power consumption. According to this relationship, we develop a linear power consumption model for base stations of both technologies. This paper also gives an overview of the most important concepts which are being proposed to make cellular networks more energy-efficient.
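The linear power model described above, P(load) = P0 + k * load, can be fitted in a few lines; the load and power samples below are synthetic placeholders, not the measured GSM/UMTS data.

```python
import numpy as np

# Hypothetical hourly samples: traffic load (fraction of full load) and
# measured base-station power consumption in watts.
load = np.array([0.05, 0.15, 0.30, 0.45, 0.60, 0.75, 0.90])
power = 800.0 + 450.0 * load        # synthetic "measurements"

# Fit P(load) = P0 + k*load; P0 is the zero-traffic baseline power draw.
k, P0 = np.polyfit(load, power, 1)
```

The intercept P0 is the quantity of most interest for energy-efficiency work, since it is the power the site consumes even when carrying no traffic.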
NASA Astrophysics Data System (ADS)
Liu, Yang; Yang, Linghui; Guo, Yin; Lin, Jiarui; Cui, Pengfei; Zhu, Jigui
2018-02-01
An interferometer technique based on the temporal coherence function of femtosecond pulses is demonstrated for practical distance measurement. Here, the pulse-to-pulse alignment is analyzed for large-delay distance measurement. Firstly, a temporal coherence function model between two femtosecond pulses is developed in the time domain for the dispersive unbalanced Michelson interferometer. Then, according to this model, the fringe analysis and the envelope extraction process are discussed. Meanwhile, optimization methods of pulse-to-pulse alignment for practical long distance measurement are presented. The order of the curve fitting and the selection of points for envelope extraction are analyzed. Furthermore, an averaging method based on the symmetry of the coherence function is demonstrated. Finally, the performance of the proposed methods is evaluated in absolute distance measurement at the 20 μm level with a path length difference of 9 m. The improvement of the standard deviation in the experimental results shows that these approaches have the potential for practical distance measurement.
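The envelope-extraction step can be illustrated with an FFT-based analytic-signal (Hilbert) method on a synthetic fringe pattern; the carrier frequency, envelope width and peak position below are assumed values, not the authors' fitting procedure.

```python
import numpy as np

def envelope(sig):
    # Envelope via the analytic signal, computed with an FFT-based
    # Hilbert transform (signal length assumed even).
    n = len(sig)
    S = np.fft.fft(sig)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0
    h[1:n // 2] = 2.0
    return np.abs(np.fft.ifft(S * h))

# Synthetic interference fringe: a carrier modulated by a Gaussian whose
# peak marks the pulse-to-pulse overlap position.
t = np.arange(2048) * 1e-3
t0 = 1.2
fringe = np.exp(-((t - t0) / 0.05) ** 2) * np.cos(2 * np.pi * 200 * t)
peak = t[np.argmax(envelope(fringe))]
```

The recovered envelope peak locates the coherence maximum, which is the quantity the pulse-to-pulse alignment is optimized against.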
Davies, Christopher E; Glonek, Gary Fv; Giles, Lynne C
2017-08-01
One purpose of a longitudinal study is to gain a better understanding of how an outcome of interest changes among a given population over time. In what follows, a trajectory will be taken to mean the series of measurements of the outcome variable for an individual. Group-based trajectory modelling methods seek to identify subgroups of trajectories within a population, such that trajectories that are grouped together are more similar to each other than to trajectories in distinct groups. Group-based trajectory models generally assume a certain structure in the covariances between measurements, for example conditional independence, homogeneous variance between groups or stationary variance over time. Violations of these assumptions could be expected to result in poor model performance. We used simulation to investigate the effect of covariance misspecification on misclassification of trajectories in commonly used models under a range of scenarios. To do this we defined a measure of performance relative to the ideal Bayesian correct classification rate. We found that the more complex models generally performed better over a range of scenarios. In particular, incorrectly specified covariance matrices could significantly bias the results but using models with a correct but more complicated than necessary covariance matrix incurred little cost.
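A toy version of such a simulation, with two trajectory groups, AR(1)-like within-subject correlation and classification by log-likelihood under the true covariance, can be sketched as follows; the group means, correlation and sample sizes are assumptions for illustration, not the paper's scenarios.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 5, 300                      # time points, subjects per group
rho = 0.8
cov = rho ** np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
L = np.linalg.cholesky(cov)
mu = np.vstack([np.linspace(0, 1, T), np.linspace(0, 4, T)])  # group means

X = np.vstack([mu[g] + rng.standard_normal((n, T)) @ L.T for g in (0, 1)])
labels = np.repeat([0, 1], n)

def loglik(x, m, S):
    # Gaussian log-likelihood up to an additive constant.
    r = x - m
    return -0.5 * r @ np.linalg.solve(S, r) - 0.5 * np.linalg.slogdet(S)[1]

# Classify each trajectory under the true (correctly specified) covariance.
pred = np.array([int(loglik(x, mu[1], cov) > loglik(x, mu[0], cov)) for x in X])
acc = (pred == labels).mean()
```

Replacing `cov` in the classifier with a misspecified matrix (e.g. the identity, i.e. conditional independence) is the kind of perturbation whose cost the simulation study quantifies against the ideal Bayes rate.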
Chen, Hsin-Chen; Jia, Wenyan; Yue, Yaofeng; Li, Zhaoxin; Sun, Yung-Nien; Fernstrom, John D.; Sun, Mingui
2013-01-01
Dietary assessment is important in health maintenance and intervention in many chronic conditions, such as obesity, diabetes, and cardiovascular disease. However, there is currently a lack of convenient methods for measuring the volume of food (portion size) in real-life settings. We present a computational method to estimate food volume from a single photographic image of food contained on a typical dining plate. First, we calculate the food location with respect to a 3D camera coordinate system using the plate as a scale reference. Then, the food is segmented automatically from the background in the image. Adaptive thresholding and snake modeling are implemented based on several image features, such as color contrast, regional color homogeneity and curve bending degree. Next, a 3D model representing the general shape of the food (e.g., a cylinder, a sphere, etc.) is selected from a pre-constructed shape model library. The position, orientation and scale of the selected shape model are determined by registering the projected 3D model and the food contour in the image, where the properties of the reference are used as constraints. Experimental results using various realistically shaped foods with known volumes demonstrated satisfactory performance of our image based food volume measurement method even if the 3D geometric surface of the food is not completely represented in the input image. PMID:24223474
Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew
2016-07-01
Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By evoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature.
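The cost-function idea can be shown on the smallest possible case: a two-state channel whose stationary open probability identifies the opening rate. This is a deliberately minimal sketch, not the PDE-based inversion described above; the rate values and grid are assumptions.

```python
import numpy as np

# Two-state channel C <-> O with opening rate k_on and closing rate k_off.
# Stationary open probability: p_open = k_on / (k_on + k_off).
def p_open(k_on, k_off):
    return k_on / (k_on + k_off)

k_off = 3.0
p_data = p_open(1.0, k_off)        # "experimental" target; true k_on = 1.0

# Invert by minimizing a squared-error cost over a grid of candidate rates.
grid = np.linspace(0.1, 5.0, 491)
cost = (p_open(grid, k_off) - p_data) ** 2
k_est = grid[np.argmin(cost)]
```

Identifiability shows up here in miniature: because p_open is strictly monotone in k_on for fixed k_off, the cost has a unique minimizer; a flat cost would signal a non-identifiable rate.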
NASA Astrophysics Data System (ADS)
Chen, Hsin-Chen; Jia, Wenyan; Yue, Yaofeng; Li, Zhaoxin; Sun, Yung-Nien; Fernstrom, John D.; Sun, Mingui
2013-10-01
Dietary assessment is important in health maintenance and intervention in many chronic conditions, such as obesity, diabetes and cardiovascular disease. However, there is currently a lack of convenient methods for measuring the volume of food (portion size) in real-life settings. We present a computational method to estimate food volume from a single photographic image of food contained on a typical dining plate. First, we calculate the food location with respect to a 3D camera coordinate system using the plate as a scale reference. Then, the food is segmented automatically from the background in the image. Adaptive thresholding and snake modeling are implemented based on several image features, such as color contrast, regional color homogeneity and curve bending degree. Next, a 3D model representing the general shape of the food (e.g., a cylinder, a sphere, etc) is selected from a pre-constructed shape model library. The position, orientation and scale of the selected shape model are determined by registering the projected 3D model and the food contour in the image, where the properties of the reference are used as constraints. Experimental results using various realistically shaped foods with known volumes demonstrated satisfactory performance of our image-based food volume measurement method even if the 3D geometric surface of the food is not completely represented in the input image.
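The plate-as-scale-reference step can be sketched as a simple pinhole-style unit conversion; the plate diameter, pixel counts and cylinder dimensions below are hypothetical, and the simplifying assumption that the food lies near the plate's plane is stated in the comments.

```python
import math

def scale_from_plate(plate_real_cm, plate_pixels):
    # Pixels-to-centimetres factor from the plate's known diameter.
    return plate_real_cm / plate_pixels

def food_width_cm(food_pixels, plate_real_cm, plate_pixels):
    # Food extent, assuming the food lies roughly in the plate's plane.
    return food_pixels * scale_from_plate(plate_real_cm, plate_pixels)

def cylinder_volume_cm3(diam_px, height_px, plate_real_cm, plate_px):
    # Volume for a cylinder shape model selected from the library,
    # scaled through the same plate-based reference.
    k = scale_from_plate(plate_real_cm, plate_px)
    r = 0.5 * diam_px * k
    return math.pi * r * r * (height_px * k)
```

The full method refines this by registering the projected 3D shape model against the segmented food contour rather than reading dimensions off the image directly.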
NASA Astrophysics Data System (ADS)
Grubbs, Guy; Michell, Robert; Samara, Marilia; Hampton, Donald; Hecht, James; Solomon, Stanley; Jahn, Jorg-Micha
2018-01-01
It is important to routinely examine and update models used to predict auroral emissions resulting from precipitating electrons in Earth's magnetotail. These models are commonly used to invert spectral auroral ground-based images to infer characteristics about incident electron populations when in situ measurements are unavailable. In this work, we examine and compare auroral emission intensities predicted by three commonly used electron transport models using varying electron population characteristics. We then compare model predictions to same-volume in situ electron measurements and ground-based imaging to qualitatively examine modeling prediction error. Initial comparisons showed differences in predictions by the GLobal airglOW (GLOW) model and the other transport models examined. Chemical reaction rates and radiative rates in GLOW were updated using recent publications, and predictions showed better agreement with the other models and the same-volume data, stressing that these rates are important to consider when modeling auroral processes. Predictions by each model exhibit similar behavior for varying atmospheric constants, energies, and energy fluxes. Same-volume electron data and images are highly correlated with predictions by each model, showing that these models can be used to accurately derive electron characteristics and ionospheric parameters based solely on multispectral optical imaging data.
Researches of fruit quality prediction model based on near infrared spectrum
NASA Astrophysics Data System (ADS)
Shen, Yulin; Li, Lian
2018-04-01
With the improvement in standards for food quality and safety, people pay more attention to the internal quality of fruits; therefore, the measurement of fruit internal quality is increasingly imperative. In general, nondestructive soluble solid content (SSC) and total acid content (TAC) analysis of fruits is vital and effective for quality measurement in global fresh produce markets, so in this paper we aim at establishing a novel fruit internal quality prediction model based on SSC and TAC for near-infrared spectra. Firstly, fruit quality prediction models based on PCA + BP neural network, PCA + GRNN network, PCA + BP adaboost strong classifier, PCA + ELM and PCA + LS_SVM classifier are designed and implemented respectively. Then, in the NSCT domain, the median filter and the Savitzky-Golay filter are used to preprocess the spectral signal, and the Kennard-Stone algorithm is used to automatically select the training and test samples. Thirdly, we obtain the optimal models by comparing 15 kinds of prediction models based on the theory of the multi-classifier competition mechanism; specifically, non-parametric estimation is introduced to measure the effectiveness of the proposed models, with the reliability and variance of the non-parametric estimation used to evaluate the prediction result of each model, while the estimated value and confidence interval are regarded as a reference. The experimental results demonstrate that this approach can better achieve the optimal evaluation of the internal quality of fruit. Finally, we employ cat swarm optimization to optimize the two optimal models obtained from the non-parametric estimation above; empirical testing indicates that the proposed method can provide more accurate and effective results than other forecasting methods.
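A minimal sketch of the PCA-plus-regression pattern underlying these models is principal component regression on synthetic "spectra"; this is not the paper's classifiers or data, and the latent-factor construction below is an assumption made so the example has a known answer.

```python
import numpy as np

def pca_scores(X, k):
    # Project mean-centred spectra onto the top-k principal directions.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(1)
# Synthetic "spectra": 80 samples x 50 wavelengths driven by 2 latent factors.
Z = rng.standard_normal((80, 2))
W = rng.standard_normal((2, 50))
X = Z @ W
y = 3.0 * Z[:, 0] - 2.0 * Z[:, 1]          # "SSC-like" target

scores = pca_scores(X, 2)
design = np.column_stack([scores, np.ones(len(scores))])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
y_hat = design @ coef
```

Because the spectra here are exactly rank 2, two principal components capture all the variation and the regression on scores recovers the target; real NIR spectra need the component count chosen by validation.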
SEMI-SUPERVISED OBJECT RECOGNITION USING STRUCTURE KERNEL
Wang, Botao; Xiong, Hongkai; Jiang, Xiaoqian; Ling, Fan
2013-01-01
Object recognition is a fundamental problem in computer vision. Part-based models offer a sparse, flexible representation of objects, but suffer from difficulties in training and often use standard kernels. In this paper, we propose a positive definite kernel called the “structure kernel”, which measures the similarity of two part-based represented objects. The structure kernel has three terms: 1) the global term, which measures the global visual similarity of two objects; 2) the part term, which measures the visual similarity of corresponding parts; 3) the spatial term, which measures the spatial similarity of the geometric configuration of parts. The contribution of this paper is to generalize the discriminant capability of local kernels to complex part-based object models. Experimental results show that the proposed kernel exhibits higher accuracy than state-of-the-art approaches using standard kernels. PMID:23666108
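A toy kernel of this three-term shape (global, per-part and spatial terms, each a Gaussian RBF, summed with nonnegative weights) stays positive semidefinite because a nonnegative sum of PSD kernels is PSD. This sketch assumes a fixed part count with known correspondence, which the actual structure kernel does not require.

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    # Gaussian RBF kernel between two feature vectors.
    return np.exp(-gamma * np.sum((a - b) ** 2))

def structure_kernel(obj1, obj2, w=(1.0, 1.0, 1.0)):
    """Toy three-term kernel. Each object is a tuple
    (global_vec, part_feature_rows, part_xy_rows)."""
    g1, p1, s1 = obj1
    g2, p2, s2 = obj2
    k_global = rbf(g1, g2)                                   # global term
    k_part = sum(rbf(a, b) for a, b in zip(p1, p2))          # part term
    k_spatial = sum(rbf(a, b, gamma=0.1) for a, b in zip(s1, s2))  # layout
    return w[0] * k_global + w[1] * k_part + w[2] * k_spatial
```

Checking the Gram matrix's smallest eigenvalue on random objects is a quick empirical PSD sanity check before plugging such a kernel into an SVM.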
NASA Technical Reports Server (NTRS)
Gasiewski, Albin J.
1992-01-01
This technique for electronically rotating the polarization basis of an orthogonal-linear polarization radiometer is based on the measurement of the first three feedhorn Stokes parameters, along with the subsequent transformation of this measured Stokes vector into a rotated coordinate frame. The technique requires an accurate measurement of the cross-correlation between the two orthogonal feedhorn modes, for which an innovative polarized calibration load was developed. The experimental portion of this investigation consisted of a proof of concept demonstration of the technique of electronic polarization basis rotation (EPBR) using a ground based 90-GHz dual orthogonal-linear polarization radiometer. Practical calibration algorithms for ground-, aircraft-, and space-based instruments were identified and tested. The theoretical effort consisted of radiative transfer modeling using the planar-stratified numerical model described in Gasiewski and Staelin (1990).
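The rotated-coordinate-frame transformation of the measured Stokes vector can be written down directly. The sketch below uses one common sign convention (conventions differ between communities): I is invariant, and (Q, U) rotate at twice the geometric basis angle.

```python
import math

def rotate_stokes(I, Q, U, theta):
    # Rotate the linear-polarization basis by theta (radians).
    # I is invariant; (Q, U) rotate through 2*theta.
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return I, Q * c + U * s, -Q * s + U * c
```

A 45-degree basis rotation moves all of Q into U (up to sign), and rotating back by the negative angle recovers the original vector, both of which follow directly from the double-angle form.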
NASA Technical Reports Server (NTRS)
1992-01-01
The objectives, status, and accomplishments of the research tasks supported under the NASA Upper Atmosphere Research Program (UARP) are presented. The topics covered include the following: balloon-borne in situ measurements; balloon-borne remote measurements; ground-based measurements; aircraft-borne measurements; rocket-borne measurements; instrument development; reaction kinetics and photochemistry; spectroscopy; stratospheric dynamics and related analysis; stratospheric chemistry, analysis, and related modeling; and global chemical modeling.
The Space Shuttle Orbiter molecular environment induced by the supplemental flash evaporator system
NASA Technical Reports Server (NTRS)
Ehlers, H. K. F.
1985-01-01
The water vapor environment of the Space Shuttle Orbiter induced by the supplemental flash evaporator during the on-orbit flight phase has been analyzed based on Space II model predictions and orbital flight measurements. Model data of local density, column density, and return flux are presented. Results of return flux measurements with a mass spectrometer during STS-2 and of direct flux measurements during STS-4 are discussed and compared with model predictions.
Chromatic Image Analysis For Quantitative Thermal Mapping
NASA Technical Reports Server (NTRS)
Buck, Gregory M.
1995-01-01
Chromatic image analysis system (CIAS) developed for use in noncontact measurements of temperatures on aerothermodynamic models in hypersonic wind tunnels. Based on concept of temperature coupled to shift in color spectrum for optical measurement. Video camera images fluorescence emitted by phosphor-coated model at two wavelengths. Temperature map of model then computed from relative brightnesses in video images of model at those wavelengths. Eliminates need for intrusive, time-consuming, contact temperature measurements by gauges, making it possible to map temperatures on complex surfaces in timely manner and at reduced cost.
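The temperature-from-brightness-ratio computation can be sketched as a monotone calibration lookup; the calibration table below is hypothetical, not the phosphor data used in the CIAS.

```python
import numpy as np

# Hypothetical monotone calibration: two-wavelength intensity ratio I1/I2
# versus surface temperature, as would be measured on a reference coupon.
T_cal = np.array([300.0, 350.0, 400.0, 450.0, 500.0])   # kelvin
ratio_cal = np.array([0.40, 0.62, 0.95, 1.41, 2.05])    # I1 / I2

def temperature_from_ratio(r):
    # Invert the calibration by piecewise-linear interpolation; applying
    # this per pixel to a ratio image yields the temperature map.
    return np.interp(r, ratio_cal, T_cal)
```

Working with the ratio of the two wavelength images, rather than either image alone, cancels common factors such as illumination and coating thickness, which is the core idea behind two-color intensity-ratio thermometry.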
Using attractiveness model for actors ranking in social media networks.
Qasem, Ziyaad; Jansen, Marc; Hecking, Tobias; Hoppe, H Ulrich
2017-01-01
Detection of influential actors in social media such as Twitter or Facebook can play a major role in gathering opinions on particular topics, improving marketing efficiency, predicting trends, etc. This work extends our formally defined T measure into a new measure that recognizes an actor's influence through the strength of attracting new important actors into a networked community. Therefore, we propose a model of the actor's influence based on the attractiveness of the actor in relation to the number of other attractors with whom he/she has established connections over time. Using an empirically collected social network for the underlying graph, we have applied the above-mentioned measure of influence in order to determine optimal seeds in a simulation of influence maximization. We study our extended measure in the context of information diffusion because this measure is based on a model of actors who attract others to be active members in a community. This corresponds to the idea of the IC simulation model, which is used to identify the most important spreaders in a set of actors.
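The seed-evaluation simulation can be illustrated with a minimal independent-cascade (IC) run on a star network; the graph, activation probability and trial count are assumptions for illustration, and this is not the T measure itself.

```python
import random

# Star network: actor 0 is the hub, actors 1..9 are leaves.
edges = {0: list(range(1, 10))}
for leaf in range(1, 10):
    edges[leaf] = [0]

def cascade_size(seed_node, p=0.3, rng=None):
    # One independent-cascade run: each newly activated actor gets a single
    # chance to activate each inactive neighbour with probability p.
    active, frontier = {seed_node}, [seed_node]
    while frontier:
        nxt = []
        for u in frontier:
            for v in edges[u]:
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

rng = random.Random(42)
hub_mean = sum(cascade_size(0, rng=rng) for _ in range(3000)) / 3000
leaf_mean = sum(cascade_size(1, rng=rng) for _ in range(3000)) / 3000
```

Averaged over many runs, seeding the well-connected hub spreads further than seeding a leaf, which is the behaviour an influence measure is meant to predict without running the simulation.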
Lu, Xi; Nahum-Shani, Inbal; Kasari, Connie; Lynch, Kevin G.; Oslin, David W.; Pelham, William E.; Fabiano, Gregory; Almirall, Daniel
2016-01-01
A dynamic treatment regime (DTR) is a sequence of decision rules, each of which recommends a treatment based on a patient’s past and current health status. Sequential, multiple assignment, randomized trials (SMARTs) are multi-stage trial designs that yield data specifically for building effective DTRs. Modeling the marginal mean trajectories of a repeated-measures outcome arising from a SMART presents challenges, because traditional longitudinal models used for randomized clinical trials do not take into account the unique design features of SMART. We discuss modeling considerations for various forms of SMART designs, emphasizing the importance of considering the timing of repeated measures in relation to the treatment stages in a SMART. For illustration, we use data from three SMART case studies with increasing level of complexity, in autism, child attention deficit hyperactivity disorder (ADHD), and adult alcoholism. In all three SMARTs we illustrate how to accommodate the design features along with the timing of the repeated measures when comparing DTRs based on mean trajectories of the repeated-measures outcome. PMID:26638988
Lu, Xi; Nahum-Shani, Inbal; Kasari, Connie; Lynch, Kevin G; Oslin, David W; Pelham, William E; Fabiano, Gregory; Almirall, Daniel
2016-05-10
A dynamic treatment regime (DTR) is a sequence of decision rules, each of which recommends a treatment based on a patient's past and current health status. Sequential, multiple assignment, randomized trials (SMARTs) are multi-stage trial designs that yield data specifically for building effective DTRs. Modeling the marginal mean trajectories of a repeated-measures outcome arising from a SMART presents challenges, because traditional longitudinal models used for randomized clinical trials do not take into account the unique design features of SMART. We discuss modeling considerations for various forms of SMART designs, emphasizing the importance of considering the timing of repeated measures in relation to the treatment stages in a SMART. For illustration, we use data from three SMART case studies with increasing level of complexity, in autism, child attention deficit hyperactivity disorder, and adult alcoholism. In all three SMARTs, we illustrate how to accommodate the design features along with the timing of the repeated measures when comparing DTRs based on mean trajectories of the repeated-measures outcome.
NASA Astrophysics Data System (ADS)
Srivastava, P. K.; Han, D.; Rico-Ramirez, M. A.; Bray, M.; Islam, T.; Petropoulos, G.; Gupta, M.
2015-12-01
Hydro-meteorological variables such as precipitation and reference evapotranspiration (ETo) are the most important variables for discharge prediction. However, it is not always possible to obtain them from ground-based measurements, particularly in ungauged catchments. The mesoscale WRF (Weather Research and Forecasting) model can be used for prediction of hydro-meteorological variables. However, hydro-meteorologists would like to know how well the downscaled global data products compare with ground-based measurements, and whether it is possible to use the downscaled data for ungauged catchments. Even in gauged catchments, most stations have only rain and flow gauges installed; measurements of other hydro-meteorological variables such as solar radiation, wind speed, air temperature, and dew point are usually missing, which complicates the problem. In this study, for downscaling the global datasets, the WRF model is set up over the Brue catchment with three nested domains (D1, D2 and D3) with horizontal grid spacings of 81 km, 27 km and 9 km. The hydro-meteorological variables are downscaled using the WRF model from the National Centers for Environmental Prediction (NCEP) reanalysis datasets and subsequently used for ETo estimation with the Penman-Monteith equation. The downscaled weather variables and precipitation are compared against the ground-based datasets, which indicates that the datasets are in agreement with the observations for the complete monitoring period as well as during individual seasons, except for precipitation, whose performance is poorer in comparison to the measured rainfall. After this comparison, the WRF-estimated precipitation and ETo are used as input parameters in the Probability Distributed Model (PDM) for discharge prediction.
The input data and model parameter sensitivity analysis and uncertainty estimation are also taken into account for the PDM calibration and prediction following the Generalised Likelihood Uncertainty Estimation (GLUE) approach. The overall analysis suggests that the uncertainty estimates in predicted discharge using WRF downscaled ETo have comparable performance to ground based observed datasets and hence is promising for discharge prediction in the absence of ground based measurements.
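The ETo step mentioned above uses the FAO-56 form of the Penman-Monteith equation, which can be written compactly as follows; the sample inputs are illustrative, not Brue catchment values.

```python
import math

def eto_penman_monteith(T, u2, Rn, G, ea, P=101.3):
    """FAO-56 reference evapotranspiration (mm/day).
    T: air temperature (degC); u2: wind speed at 2 m (m/s);
    Rn, G: net radiation and soil heat flux (MJ m-2 day-1);
    ea: actual vapour pressure (kPa); P: atmospheric pressure (kPa)."""
    es = 0.6108 * math.exp(17.27 * T / (T + 237.3))   # saturation vp, kPa
    delta = 4098.0 * es / (T + 237.3) ** 2            # vp curve slope
    gamma = 0.000665 * P                              # psychrometric constant
    num = (0.408 * delta * (Rn - G)
           + gamma * (900.0 / (T + 273.0)) * u2 * (es - ea))
    return num / (delta + gamma * (1.0 + 0.34 * u2))
```

This is exactly where the missing station variables bite: the equation needs radiation, wind, temperature and humidity, so WRF-downscaled fields substitute for them when only rain and flow gauges exist.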
Liu, Xingguo; Niu, Jianwei; Ran, Linghua; Liu, Taijie
2017-08-01
This study aimed to develop estimation formulae for the total human body volume (BV) of adult males using anthropometric measurements based on a three-dimensional (3D) scanning technique. Noninvasive and reliable methods to predict the total BV from anthropometric measurements based on a 3D scan technique were addressed in detail. A regression analysis of BV based on four key measurements was conducted for approximately 160 adult male subjects. Eight total models of human BV show that the predicted results fitted by the regression models were highly correlated with the actual BV (p < 0.001). Two metrics, the mean value of the absolute difference between the actual and predicted BV (V error ) and the mean value of the ratio between V error and actual BV (RV error ), were calculated. The linear model based on human weight was recommended as the most optimal due to its simplicity and high efficiency. The proposed estimation formulae are valuable for estimating total body volume in circumstances in which traditional underwater weighing or air displacement plethysmography is not applicable or accessible. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.
A physical model of ice sheet response to changes in subglacial hydrology
NASA Astrophysics Data System (ADS)
Andrews, L. C.; Catania, G. A.; Buttles, J. L.; Andrews, A.; Markowski, M.
2010-12-01
Using a physical ice sheet model, we investigate the degree to which motion is controlled by local loss of basal traction versus longitudinal coupling during diurnal, seasonal, and event-type water pulses. Our model can be used to reproduce the spatial pattern and magnitude of ice surface displacements and can aid in the interpretation of ground-based GPS measurements, as it eliminates many of the complicating factors influencing surface velocity measurements. This model consists of a 3 x 1.5 meter plastic box with a grid of holes on the bed used to inject water directly between the interface of the box and a silicone polymer. Water flow is visualized using a colored dye. The polymer response to perturbations in water flow is measured by tracking surface markers through a series of overhead images. We report on a suite of experiments that explore the relationship between water discharge, basal traction, and surface displacements and compare our results to ground-based GPS measurements from a transect in western Greenland.
Circuit-based versus full-wave modelling of active microwave circuits
NASA Astrophysics Data System (ADS)
Bukvić, Branko; Ilić, Andjelija Ž.; Ilić, Milan M.
2018-03-01
Modern full-wave computational tools enable rigorous simulations of the linear parts of complex microwave circuits within minutes, taking into account all physical electromagnetic (EM) phenomena. Non-linear components and other discrete elements of the hybrid microwave circuit are then easily added within the circuit simulator. This combined full-wave and circuit-based analysis is a must in the final stages of the circuit design, although initial designs and optimisations are still faster and more comfortably done completely in the circuit-based environment, which offers real-time solutions at the expense of accuracy. However, due to insufficient information and a general lack of specific case studies, practitioners still struggle when choosing an appropriate analysis method, or a component model, because different choices lead to different solutions, often with uncertain accuracy and unexplained discrepancies arising between simulations and measurements. We here design a reconfigurable power amplifier, as a case study, using both a circuit-based solver and a full-wave EM solver. We compare numerical simulations with measurements on the manufactured prototypes, discussing the observed differences, pointing out the importance of de-embedding measured parameters and of appropriate modelling of discrete components, and giving specific recipes for good modelling practice.
Conditioning of FRF measurements for use with frequency based substructuring
NASA Astrophysics Data System (ADS)
Nicgorski, Dana; Avitabile, Peter
2010-02-01
Frequency based substructuring approaches have been used for the generation of system models from component data. While numerical models show successful results, actual measurements have presented difficulties in many instances. Previous work has identified some of these typical problems using simulated data to incorporate specific measurement difficulties commonly observed, along with approaches to overcome some of these difficulties. This paper presents results using actual measured data for a laboratory structure subjected to both analytical and experimental studies. Various commonly used approaches are shown to illustrate some of the difficulties with measured data. A new approach to better condition the measured functions and purge commonly found data measurement contaminants is utilized to provide dramatically improved results. Several cases are explored to show the difficulties commonly observed as well as the improved conditioning of the measured data to obtain acceptable results.
Error Analysis and Validation for Insar Height Measurement Induced by Slant Range
NASA Astrophysics Data System (ADS)
Zhang, X.; Li, T.; Fan, W.; Geng, X.
2018-04-01
The InSAR technique is an important method for large-area DEM extraction, and several factors significantly influence the accuracy of its height measurement. In this research, the effect of slant range error on InSAR height measurement is analyzed and discussed. Based on the theory of InSAR height measurement, the error propagation model was derived assuming no coupling among different factors; it directly characterises the relationship between slant range error and height measurement error. A theoretical analysis in combination with TanDEM-X parameters was then implemented to quantitatively evaluate the influence of slant range error on height measurement. In addition, a simulation validation of the InSAR error model induced by slant range was performed on the basis of the SRTM DEM and TanDEM-X parameters. The spatial distribution characteristics and error propagation rule of InSAR height measurement were further discussed and evaluated.
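The slant-range-to-height error propagation described above can be sketched to first order. The flat-Earth geometry h = H − r·cos θ and the look angle below are illustrative assumptions, not the paper's TanDEM-X configuration:

```python
import math

def height_error_from_slant_range(sigma_r, look_angle_deg):
    """First-order propagation of a slant-range error into a height error
    for the flat-Earth InSAR geometry h = H - r*cos(theta):
    sigma_h = |dh/dr| * sigma_r = cos(theta) * sigma_r."""
    theta = math.radians(look_angle_deg)
    return math.cos(theta) * sigma_r

# A 1 m slant-range error at a 35-degree look angle (assumed value)
sigma_h = height_error_from_slant_range(sigma_r=1.0, look_angle_deg=35.0)
```

The sensitivity shrinks with steeper look angles, which is why the spatial distribution of the height error varies across the swath.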
De Bondt, Niki; Van Petegem, Peter
2015-01-01
The Overexcitability Questionnaire-Two (OEQ-II) measures the degree and nature of overexcitability, which assists in determining the developmental potential of an individual according to Dabrowski's Theory of Positive Disintegration. Previous validation studies using frequentist confirmatory factor analysis, which postulates exact parameter constraints, led to model rejection and a long series of model modifications. Bayesian structural equation modeling (BSEM) allows the application of zero-mean, small-variance priors for cross-loadings, residual covariances, and differences in measurement parameters across groups, better reflecting substantive theory and leading to better model fit and less overestimation of factor correlations. Our BSEM analysis with a sample of 516 students in higher education yields positive results regarding the factorial validity of the OEQ-II. Likewise, applying BSEM-based alignment with approximate measurement invariance, the absence of non-invariant factor loadings and intercepts across gender is supportive of the psychometric quality of the OEQ-II. Compared to males, females scored significantly higher on emotional and sensual overexcitability, and significantly lower on psychomotor overexcitability. PMID:26733931
Reliability of Total Test Scores When Considered as Ordinal Measurements
ERIC Educational Resources Information Center
Biswas, Ajoy Kumar
2006-01-01
This article studies the ordinal reliability of (total) test scores. This study is based on a classical-type linear model of observed score (X), true score (T), and random error (E). Based on the idea of Kendall's tau-a coefficient, a measure of ordinal reliability for small-examinee populations is developed. This measure is extended to large…
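Kendall's tau-a, on which the ordinal reliability measure is built, can be computed directly from its definition; a minimal sketch:

```python
from itertools import combinations

def sign(a):
    return (a > 0) - (a < 0)

def kendall_tau_a(x, y):
    """Tau-a: (concordant pairs - discordant pairs) / (n choose 2).
    Tied pairs contribute zero to the numerator."""
    n = len(x)
    num = sum(sign(x[i] - x[j]) * sign(y[i] - y[j])
              for i, j in combinations(range(n), 2))
    return num / (n * (n - 1) / 2)
```

For identical orderings of observed and true scores tau-a is 1, and for fully reversed orderings it is -1.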
NASA Astrophysics Data System (ADS)
McMillan, Mitchell; Hu, Zhiyong
2017-10-01
Streambank erosion is a major source of fluvial sediment, but few large-scale, spatially distributed models exist to quantify streambank erosion rates. We introduce a spatially distributed model for streambank erosion applicable to sinuous, single-thread channels. We argue that such a model can adequately characterize streambank erosion rates, measured at the outsides of bends over a 2-year time period, throughout a large region. The model is based on the widely-used excess-velocity equation and comprises three components: a physics-based hydrodynamic model, a large-scale 1-dimensional model of average monthly discharge, and an empirical bank erodibility parameterization. The hydrodynamic submodel requires inputs of channel centerline, slope, width, depth, friction factor, and a scour factor A; the large-scale watershed submodel utilizes watershed-averaged monthly outputs of the Noah-2.8 land surface model; bank erodibility is based on tree cover and bank height as proxies for root density. The model was calibrated with erosion rates measured in sand-bed streams throughout the northern Gulf of Mexico coastal plain. The calibrated model outperforms a purely empirical model, as well as a model based only on excess velocity, illustrating the utility of combining a physics-based hydrodynamic model with an empirical bank erodibility relationship. The model could be improved by incorporating spatial variability in channel roughness and the hydrodynamic scour factor, which are here assumed constant. A reach-scale application of the model is illustrated on ∼1 km of a medium-sized, mixed forest-pasture stream, where the model identifies streambank erosion hotspots on forested and non-forested bends.
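The excess-velocity idea can be illustrated with a generic threshold law; the linear form and parameter names below are assumptions for illustration, not the paper's calibrated model:

```python
def bank_erosion_rate(u_near_bank, u_critical, erodibility_k):
    """Generic excess-velocity law: bank retreat rate is proportional to the
    near-bank velocity excess above a critical (threshold) velocity, and zero
    below the threshold."""
    return erodibility_k * max(u_near_bank - u_critical, 0.0)
```

In the paper's framework, the erodibility coefficient would come from the empirical bank erodibility parameterization (tree cover and bank height as proxies for root density), and the near-bank velocity from the hydrodynamic submodel.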
Performance-Based Service Quality Model: An Empirical Study on Japanese Universities
ERIC Educational Resources Information Center
Sultan, Parves; Wong, Ho
2010-01-01
Purpose: This paper aims to develop and empirically test the performance-based higher education service quality model. Design/methodology/approach: The study develops 67-item instrument for measuring performance-based service quality with a particular focus on the higher education sector. Scale reliability is confirmed using the Cronbach's alpha.…
2013-09-01
…based confidence metric is used to compare several different model predictions with the experimental data. II. Aerothermal Model Definition and… whereas 5% measurement uncertainty is assumed for aerodynamic pressure and heat flux measurements 4p y and 4Q y… Bayesian updating according… definitive conclusions for these particular aerodynamic models. However, given the confidence associated with the 4 sdp predictions for Run 30 (H/D…
NASA Astrophysics Data System (ADS)
Arai, Yukiko; Aoki, Hitoshi; Abe, Fumitaka; Todoroki, Shunichiro; Khatami, Ramin; Kazumi, Masaki; Totsuka, Takuya; Wang, Taifeng; Kobayashi, Haruo
2015-04-01
1/f noise is one of the most important characteristics in designing analog/RF circuits, including operational amplifiers and oscillators. We have analyzed and developed a novel 1/f noise model for the strong inversion, saturation, and sub-threshold regions, based on the SPICE2-type model used in the public metal-oxide-semiconductor field-effect transistor (MOSFET) models developed by the University of California, Berkeley. Our model contains two noise generation mechanisms: mobility fluctuation and interface-trap-number fluctuation. Gate-voltage-dependent noise variability is also newly implemented in our model. The proposed model has been implemented in the BSIM4 model of a SPICE3-compatible circuit simulator. Parameters of the proposed model are extracted from 1/f noise measurements for simulation verification. The simulation results show excellent agreement between measurements and simulations.
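For orientation, the classic SPICE2-style flicker-noise drain-current PSD has the form S_id(f) = KF·Id^AF / (Cox·Leff²·f^EF). The sketch below evaluates that textbook expression with illustrative parameter values; it is not the authors' extended two-mechanism model:

```python
def flicker_noise_psd(i_d, f, kf=1e-24, af=1.0, ef=1.0,
                      cox=8.6e-3, l_eff=1e-6):
    """SPICE2-style 1/f drain-current noise PSD (A^2/Hz):
    S_id = KF * Id^AF / (Cox * Leff^2 * f^EF).
    All parameter values here are illustrative defaults, not extracted ones."""
    return kf * i_d ** af / (cox * l_eff ** 2 * f ** ef)

# With EF = 1 the PSD falls as 1/f, e.g. a decade drop per decade of frequency
psd_1hz = flicker_noise_psd(i_d=1e-3, f=1.0)
psd_10hz = flicker_noise_psd(i_d=1e-3, f=10.0)
```

Parameter extraction, as in the abstract, amounts to fitting KF, AF, and EF to measured noise spectra.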
NASA Astrophysics Data System (ADS)
BoŻek, Piotr; Broniowski, Wojciech
2018-03-01
We discuss the forward-backward correlations of harmonic flow in Pb +Pb collisions at the CERN Large Hadron Collider, applying standard multibin measures as well as new measures proposed here. We illustrate the methods with hydrodynamic model simulations based on event-by-event initial conditions from the wounded quark model with asymmetric rapidity emission profiles. Within the model, we examine independently the event-plane angle and the flow magnitude decorrelations. We find a specific hierarchy between various flow decorrelation measures and confirm certain factorization relations. We find qualitative agreement of the model and the data from the ATLAS and CMS Collaborations.
Measurement-based quantum teleportation on finite AKLT chains
NASA Astrophysics Data System (ADS)
Fujii, Akihiko; Feder, David
In the measurement-based model of quantum computation, universal quantum operations are effected by making repeated local measurements on resource states which contain suitable entanglement. Resource states include two-dimensional cluster states and the ground state of the Affleck-Kennedy-Lieb-Tasaki (AKLT) model on the honeycomb lattice. Recent studies suggest that measurements on one-dimensional systems in the Haldane phase teleport perfect single-qubit gates in the correlation space, protected by the underlying symmetry. As laboratory realizations of symmetry-protected states will necessarily be finite, we investigate the potential for quantum gate teleportation in finite chains of a bilinear-biquadratic Hamiltonian, which generalizes the AKLT model and represents the full Haldane phase.
A Bayesian approach for parameter estimation and prediction using a computationally intensive model
Higdon, Dave; McDonnell, Jordan D.; Schunck, Nicolas; ...
2015-02-05
Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model $\eta(\theta)$, where θ denotes the uncertain, best input setting. Hence the statistical model is of the form $y = \eta(\theta) + \epsilon$, where $\epsilon$ accounts for measurement, and possibly other, error sources. When nonlinearity is present in $\eta(\cdot)$, the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model $\eta(\cdot)$. This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. Lastly, we also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory.
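The emulator workflow can be sketched end-to-end on a toy problem: run the "expensive" model at a small design, fit a cheap emulator, then run MCMC against the emulator instead of the model. The quadratic stand-in for η and all numbers below are illustrative, not the density functional theory application:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an expensive physics model eta(theta); here just theta**2.
def eta(theta):
    return theta ** 2

# 1) Ensemble of "expensive" runs at a small design of input settings.
design = np.linspace(0.0, 5.0, 6)
runs = np.array([eta(t) for t in design])

# 2) Cheap emulator: a quadratic response surface fit to the ensemble.
coeffs = np.polyfit(design, runs, deg=2)

def emulator(t):
    return np.polyval(coeffs, t)

# 3) Metropolis MCMC evaluates only the emulator, never eta itself.
y_obs, sigma = 4.05, 0.1          # one noisy "measurement" of eta(theta_true)

def log_post(theta):
    if not 0.0 <= theta <= 5.0:   # uniform prior on [0, 5]
        return -np.inf
    return -0.5 * ((y_obs - emulator(theta)) / sigma) ** 2

theta, lp = 1.0, log_post(1.0)
samples = []
for _ in range(5000):
    prop = theta + 0.2 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

posterior_mean = float(np.mean(samples[1000:]))   # discard burn-in
```

The thousands of posterior evaluations hit only the polynomial emulator, which is the whole point when each run of the real model takes hours.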
ECO-DRIVING MODELING ENVIRONMENT
DOT National Transportation Integrated Search
2015-11-01
This research project aims to examine the eco-driving modeling capabilities of different traffic modeling tools available and to develop a driver-simulator-based eco-driving modeling tool to evaluate driver behavior and to reliably estimate or measur...
Comparing supply and demand models for future photovoltaic power generation in the USA
Basore, Paul A.; Cole, Wesley J.
2018-02-22
We explore the plausible range of future deployment of photovoltaic generation capacity in the USA using a supply-focused model based on supply-chain growth constraints and a demand-focused model based on minimizing the overall cost of the electricity system. Both approaches require assumptions based on previous experience and anticipated trends. For each of the models, we assign plausible ranges for the key assumptions and then compare the resulting PV deployment over time. Each model was applied to 2 different future scenarios: one in which PV market penetration is ultimately constrained by the uncontrolled variability of solar power and one in which low-cost energy storage or some equivalent measure largely alleviates this constraint. The supply-focused and demand-focused models are in substantial agreement, not just in the long term, where deployment is largely determined by the assumed market penetration constraints, but also in the interim years. For the future scenario without low-cost energy storage or equivalent measures, the 2 models give an average plausible range of PV generation capacity in the USA of 150 to 530 GWdc in 2030 and 260 to 810 GWdc in 2040. With low-cost energy storage or equivalent measures, the corresponding ranges are 160 to 630 GWdc in 2030 and 280 to 1200 GWdc in 2040. The latter range is enough to supply 10% to 40% of US electricity demand in 2040, based on current demand growth.
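The supply-focused idea (annual additions limited by supply-chain growth, saturating at a market-penetration ceiling) can be sketched as a discrete logistic projection. The growth rate, starting capacity, and ceiling below are illustrative assumptions, not the paper's calibrated inputs:

```python
def project_capacity(initial_gw, annual_growth, ceiling_gw, years):
    """Supply-constrained deployment: each year's additions are a fixed
    fraction of installed capacity, damped as capacity approaches a
    market-penetration ceiling (discrete logistic growth)."""
    trajectory = [initial_gw]
    for _ in range(years):
        cap = trajectory[-1]
        trajectory.append(cap + annual_growth * cap * (1.0 - cap / ceiling_gw))
    return trajectory

# e.g. 50 GW installed, 30%/yr supply-chain growth, 800 GW ceiling, 25 years out
traj = project_capacity(50.0, 0.30, 800.0, 25)
```

Varying the growth rate and ceiling over plausible ranges is what produces deployment bands like the 260 to 810 GWdc range quoted for 2040.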
NASA Astrophysics Data System (ADS)
Shypailo, R. J.; Ellis, K. J.
2011-05-01
During construction of the whole body counter (WBC) at the Children's Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of 40K to actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP Monte Carlo n-particle simulation program was used to describe the WBC (54 detectors plus shielding), test individual detector counting response, and create a series of virtual anthropomorphic phantoms based on national reference anthropometric data. Each phantom included an outer layer of adipose tissue and an inner core of lean tissue. Phantoms were designed for both genders representing ages 3.5 to 18.5 years with body sizes from the 5th to the 95th percentile based on body weight. In addition, a spherical surface source surrounding the WBC was modeled in order to measure the effects of subject mass on room background interference. Individual detector measurements showed good agreement with the MCNP model. The background source model came close to agreement with empirical measurements, but showed a trend deviating from unity with increasing subject size. Results from the MCNP simulation of the CNRC WBC agreed well with empirical measurements using BOMAB phantoms. Individual detector efficiency corrections were used to improve the accuracy of the model. Nonlinear multiple regression efficiency calibration equations were derived for each gender. Room background correction is critical in improving the accuracy of the WBC calibration.
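The efficiency-calibration step can be sketched as converting net 40K counts to grams of potassium. The exponential efficiency form and its coefficients below are hypothetical stand-ins for the CNRC's gender-specific nonlinear regressions; the specific activity (~31 Bq per gram of natural K) and the 1.46 MeV gamma branching fraction (~10.7%) are standard nuclear-data values:

```python
import math

# Hypothetical efficiency-calibration coefficients (illustrative only).
A, B, C = 0.12, 0.015, 0.02

SPECIFIC_ACTIVITY = 31.0  # Bq per gram of natural potassium (approximate)
GAMMA_BRANCH = 0.107      # fraction of 40K decays yielding the 1.46 MeV gamma

def counting_efficiency(body_weight_kg):
    """Efficiency falls with body mass (self-shielding); generic exponential
    form standing in for the fitted calibration equation."""
    return A * math.exp(-B * body_weight_kg) + C

def grams_potassium(net_counts, count_time_s, body_weight_kg):
    """Invert counts = efficiency * (gammas/s per g K) * time * grams_K."""
    gammas_per_s_per_g = SPECIFIC_ACTIVITY * GAMMA_BRANCH
    return net_counts / (counting_efficiency(body_weight_kg)
                         * gammas_per_s_per_g * count_time_s)
```

The background correction discussed in the abstract would subtract a subject-size-dependent room-background count rate before this conversion.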
Towards an improved LAI collection protocol via simulated field-based PAR sensing
Yao, Wei; Van Leeuwen, Martin; Romanczyk, Paul; ...
2016-07-14
In support of NASA's next-generation spectrometer, the Hyperspectral Infrared Imager (HyspIRI), we are working towards assessing sub-pixel vegetation structure from imaging spectroscopy data. Of particular interest is Leaf Area Index (LAI), which is an informative, yet notoriously challenging parameter to efficiently measure in situ. While photosynthetically-active radiation (PAR) sensors have been validated for measuring crop LAI, there is limited literature on the efficacy of PAR-based LAI measurement in the forest environment. This study (i) validates PAR-based LAI measurement in forest environments, and (ii) proposes a suitable collection protocol, which balances efficiency with measurement variation, e.g., due to sun flecks and various-sized canopy gaps. A synthetic PAR sensor model was developed in the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model and used to validate LAI measurement based on first principles and explicitly-known leaf geometry. Simulated collection parameters were adjusted to empirically identify optimal collection protocols. These collection protocols were then validated in the field by correlating PAR-based LAI measurement with the normalized difference vegetation index (NDVI) extracted from the "classic" Airborne Visible Infrared Imaging Spectrometer (AVIRIS-C) data (R² = 0.61). The results indicate that our proposed collection protocol is suitable for measuring the LAI of sparse forest (LAI < 3–5 m²/m²).
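PAR-based LAI retrieval typically inverts the Beer-Lambert law from the fraction of PAR transmitted through the canopy. The extinction coefficient below assumes a spherical leaf-angle distribution and is an illustrative default, not a value from the study:

```python
import math

def lai_from_par(par_below, par_above, k=0.5):
    """Beer-Lambert inversion: LAI = -ln(tau) / k, where tau is the fraction
    of above-canopy PAR reaching the below-canopy sensor. k = 0.5 assumes a
    spherical leaf-angle distribution (illustrative default)."""
    tau = par_below / par_above
    return -math.log(tau) / k
```

Sun flecks and canopy gaps perturb the measured tau, which is why the collection protocol (number and placement of below-canopy readings) matters.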
Suzuki, Shigeru; Machida, Haruhiko; Tanaka, Isao; Ueno, Eiko
2012-11-01
To compare the performance of model-based iterative reconstruction (MBIR) with that of standard filtered back projection (FBP) for measuring vascular wall attenuation. After subjecting 9 vascular models (actual wall attenuation, 89 HU) with wall thickness of 0.5, 1.0, or 1.5 mm, filled with contrast material of 275, 396, or 542 HU, to scanning using 64-detector computed tomography (CT), we reconstructed images using MBIR and FBP (Bone and Detail kernels) and measured wall attenuation at the center of the wall for each model. We performed attenuation measurements for each model, with additional supportive measurements based on a differentiation curve. We analyzed statistics using analyses of variance with repeated measures. Using the Bone kernel, the standard deviation of the measurement exceeded 30 HU in most conditions. In measurements at the wall center, the attenuation values obtained using MBIR were comparable to or significantly closer to the actual wall attenuation than those acquired using the Detail kernel. Using differentiation curves, we could measure attenuation for models with walls of 1.0- or 1.5-mm thickness using MBIR but only those of 1.5-mm thickness using the Detail kernel. We detected no significant differences among the attenuation values of the vascular walls of either thickness (MBIR, P=0.1606) or among the 3 densities of intravascular contrast material (MBIR, P=0.8185; Detail kernel, P=0.0802). Compared with FBP, MBIR reduces both reconstruction blur and image noise simultaneously, facilitates recognition of vascular wall boundaries, and can improve accuracy in measuring wall attenuation.
A Critical Review of Theories and Measures of Ethics-Related Leadership.
Zhu, Weichun; Zheng, Xiaoming; Riggio, Ronald E; Zhang, Xi
2015-01-01
This chapter reviews the different theoretical perspectives and measurements of ethics-related leadership models, including ethical leadership, transformational leadership, authentic leadership, servant leadership, spiritual leadership, and a virtues-based approach to leadership ethics. The similarities and differences among these theoretical models and measures of ethics-related leadership are discussed.
CR Boardman; Samuel V. Glass
2015-01-01
The moisture transfer effectiveness (or latent effectiveness) of a cross-flow, membrane based energy recovery ventilator is measured and modeled. Analysis of in situ measurements for a full year shows that energy recovery ventilator latent effectiveness increases with increasing average relative humidity and surprisingly increases with decreasing average temperature. A...
Anthropometric body measurements based on multi-view stereo image reconstruction.
Li, Zhaoxin; Jia, Wenyan; Mao, Zhi-Hong; Li, Jie; Chen, Hsin-Chen; Zuo, Wangmeng; Wang, Kuanquan; Sun, Mingui
2013-01-01
Anthropometric measurements, such as the circumferences of the hip, arm, leg and waist, waist-to-hip ratio, and body mass index, are of high significance in obesity and fitness evaluation. In this paper, we present a home-based imaging system capable of conducting anthropometric measurements. Body images are acquired at different angles using a home camera and a simple rotating disk. Advanced image processing algorithms are utilized for 3D body surface reconstruction. A coarse body shape model is first established from segmented body silhouettes. Then, this model is refined through an inter-image consistency maximization process based on an energy function. Our experimental results using both a mannequin surrogate and a real human body validate the feasibility of the proposed system.
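Once a 3D body surface is reconstructed, circumference measurements reduce to perimeters of horizontal cross-section polygons sliced from the model. A minimal sketch (function names and the polygon representation are illustrative, not the paper's pipeline):

```python
import math

def circumference(points):
    """Perimeter of a closed cross-section polygon sliced from the 3D body
    model; points are (x, y) tuples ordered around the slice."""
    n = len(points)
    return sum(math.dist(points[i], points[(i + 1) % n]) for i in range(n))

def waist_to_hip(waist_points, hip_points):
    """Waist-to-hip ratio from the two corresponding cross-sections."""
    return circumference(waist_points) / circumference(hip_points)
```

With dense enough sampling around each slice, the polygon perimeter converges to the true body circumference at that height.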
NASA Astrophysics Data System (ADS)
Barker, J. Burdette
Spatially informed irrigation management may improve the optimal use of water resources. Sub-field scale water balance modeling and measurement were studied in the context of irrigation management. A spatial remote-sensing-based evapotranspiration and soil water balance model was modified and validated for use in real-time irrigation management. The modeled ET compared well with eddy covariance data from eastern Nebraska. Placement and quantity of sub-field scale soil water content measurement locations were also studied. Variance reduction factor and temporal stability were used to analyze soil water content data from an eastern Nebraska field. No consistent predictor of soil water temporal stability patterns was identified. At least three monitoring locations were needed per irrigation management zone to adequately quantify the mean soil water content. The remote-sensing-based water balance model was used to manage irrigation in a field experiment. The research included an eastern Nebraska field in 2015 and 2016 and a western Nebraska field in 2016, for a total of 210 plot-years. The response of maize and soybean to irrigation using variations of the model was compared with responses from treatments using soil water content measurement and a rainfed treatment. The remote-sensing-based treatment prescribed more irrigation than the other treatments in all cases. Excessive modeled soil evaporation and insufficient drainage times were suspected causes of the model drift. Modifying evaporation and drainage reduced modeled soil water depletion error. None of the included response variables were significantly different between treatments in western Nebraska. In eastern Nebraska, treatment differences for maize and soybean included evapotranspiration and a combined variable including evapotranspiration and deep percolation. Both variables were greatest for the remote-sensing model when differences were found to be statistically significant.
Differences in maize yield in 2015 were attributed to random error. Soybean yield was lowest for the remote-sensing-based treatment and greatest for rainfed, possibly because of overwatering and lodging. The model performed well considering that it did not include soil water content measurements during the season. Future work should improve the soil evaporation and drainage formulations, particularly for periods of excessive precipitation, and should incorporate aerial remote sensing imagery and soil water content measurements as model inputs.
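The scheduling logic behind a soil water balance irrigation model can be sketched as daily depletion bookkeeping (FAO-56 style, heavily simplified). Total available water (TAW) and the management-allowed-depletion fraction below are assumed inputs, not values from the dissertation:

```python
def simulate_depletion(et, rain, taw, mad_fraction=0.5):
    """Daily root-zone depletion bookkeeping: depletion grows with ET and
    shrinks with rain (excess rain is assumed to drain); irrigation refills
    the root zone whenever depletion exceeds the management-allowed fraction
    of total available water (TAW). Returns a daily irrigate/no-irrigate list."""
    depletion, schedule = 0.0, []
    for et_d, p_d in zip(et, rain):
        depletion = max(depletion + et_d - p_d, 0.0)
        irrigated = depletion > mad_fraction * taw
        if irrigated:
            depletion = 0.0   # refill the root zone
        schedule.append(irrigated)
    return schedule
```

In the remote-sensing variant, the daily ET term comes from imagery-derived crop coefficients rather than in-field soil water sensing.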
Kashima, Saori; Yorifuji, Takashi; Sawada, Norie; Nakaya, Tomoki; Eboshida, Akira
2018-08-01
Typically, land use regression (LUR) models have been developed using campaign monitoring data rather than routine monitoring data. However, the latter have advantages such as low cost and long-term coverage. Based on the idea that LUR models representing regional differences in air pollution and regional road structures are optimal, the objective of this study was to evaluate the validity of LUR models for nitrogen dioxide (NO2) based on routine and campaign monitoring data obtained from an urban area. We selected the city of Suita in Osaka (Japan). We built a model based on routine monitoring data obtained from all sites (routine-LUR-All), and a model based on campaign monitoring data (campaign-LUR) within the city. Models based on routine monitoring data obtained from background sites (routine-LUR-BS) and on data obtained from roadside sites (routine-LUR-RS) were also built. The routine LUR models were based on monitoring networks across two prefectures (Osaka and Hyogo). We calculated the predictability of each model. We then compared the predicted NO2 concentrations from each model with measured annual average NO2 concentrations from evaluation sites. The routine-LUR-All and routine-LUR-BS models both predicted NO2 concentrations well: adjusted R² = 0.68 and 0.76, respectively, and root mean square error = 3.4 and 2.1 ppb, respectively. The predictions from the routine-LUR-All model were highly correlated with the measured NO2 concentrations at evaluation sites. Although the predicted NO2 concentrations from each model were correlated, the LUR models based on routine networks, and particularly those based on all monitoring sites, provided better representations of the local road conditions in the city. The present study demonstrated that LUR models based on routine data could estimate local traffic-related air pollution in an urban area.
The importance and usefulness of data from routine monitoring networks should be acknowledged.
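An LUR model is, at its core, a multiple linear regression of measured NO2 on GIS-derived predictors at the monitoring sites. The toy predictors and coefficients below are fabricated for illustration only; real LUR modeling involves supervised variable selection over many candidate buffers:

```python
import numpy as np

# Toy site data: road length within a buffer (km) and population density.
# NO2 is generated here from an assumed linear relation, for illustration.
road = np.array([0.5, 1.0, 2.0, 0.2, 1.5])
pop = np.array([1000.0, 3000.0, 500.0, 4000.0, 2000.0])
no2 = 3.0 * road + 0.002 * pop + 5.0   # ppb

# Ordinary least squares fit: NO2 ~ road + pop + intercept
X = np.column_stack([road, pop, np.ones(len(road))])
coef, *_ = np.linalg.lstsq(X, no2, rcond=None)

def predict_no2(road_km, pop_density):
    """Apply the fitted LUR surface at an unmonitored location."""
    return coef[0] * road_km + coef[1] * pop_density + coef[2]
```

Validation against held-out sites, as in the study, then checks adjusted R² and RMSE of these predictions.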
NASA Astrophysics Data System (ADS)
Lifton, Nathaniel; Sato, Tatsuhiko; Dunai, Tibor J.
2014-01-01
Several models have been proposed for scaling in situ cosmogenic nuclide production rates from the relatively few sites where they have been measured to other sites of interest. Two main types of models are recognized: (1) those based on data from nuclear disintegrations in photographic emulsions combined with various neutron detectors, and (2) those based largely on neutron monitor data. However, stubborn discrepancies between these model types have led to frequent confusion when calculating surface exposure ages from production rates derived from the models. To help resolve these discrepancies and identify the sources of potential biases in each model, we have developed a new scaling model based on analytical approximations to modeled fluxes of the main atmospheric cosmic-ray particles responsible for in situ cosmogenic nuclide production. Both the analytical formulations and the Monte Carlo model fluxes on which they are based agree well with measured atmospheric fluxes of neutrons, protons, and muons, indicating they can serve as a robust estimate of the atmospheric cosmic-ray flux based on first principles. We are also using updated records for quantifying temporal and spatial variability in geomagnetic and solar modulation effects on the fluxes. A key advantage of this new model (herein termed LSD) over previous Monte Carlo models of cosmogenic nuclide production is that it allows for faster estimation of scaling factors based on time-varying geomagnetic and solar inputs. Comparing scaling predictions derived from the LSD model with those of previously published models suggests that potential sources of bias in the latter can be largely attributed to two factors: different energy responses of the secondary neutron detectors used in developing the models, and different geomagnetic parameterizations.
Given that the LSD model generates flux spectra for each cosmic-ray particle of interest, it is also relatively straightforward to generate nuclide-specific scaling factors based on recently updated neutron and proton excitation functions (probability of nuclide production in a given nuclear reaction as a function of energy) for commonly measured in situ cosmogenic nuclides. Such scaling factors reflect the influence of the energy distribution of the flux folded with the relevant excitation functions. Resulting scaling factors indicate 3He shows the strongest positive deviation from the flux-based scaling, while 14C exhibits a negative deviation. These results are consistent with a recent Monte Carlo-based study using a different cosmic-ray physics code package but the same excitation functions.
A Laser-Based Measuring System for Online Quality Control of Car Engine Block
Li, Xing-Qiang; Wang, Zhong; Fu, Lu-Hua
2016-01-01
For online quality control of car engine production, the pneumatic measurement instrument plays an irreplaceable role in measuring diameters inside the engine block because of its portability and high accuracy. Owing to the limitation of its measuring principle, however, the working space between the pneumatic device and the measured surface is so small that manual operation is required. This lowers the measuring efficiency and becomes an obstacle to performing automatic measurement. In this article, a high-speed, automatic measuring system is proposed to take the place of pneumatic devices by using a laser-based measuring unit. The measuring unit is considered as a set of several measuring modules, each of which acts like a single bore gauge and is made of four laser triangulation sensors (LTSs) installed at different positions and in opposite directions. The spatial relationship among these LTSs was calibrated before measurements. Sampling points from the measured shaft holes can be collected by the measuring unit. A unified mathematical model was established for both calibration and measurement. Based on the established model, the relative pose between the measuring unit and the measured workpiece does not affect the measuring accuracy. This frees the measuring unit from accurate positioning or adjustment and makes it possible to realize fast and automatic measurement. The proposed system and method were finally validated by experiments. PMID:27834839
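Once the sensors of a module are calibrated into a common frame, a bore diameter can be recovered by fitting a circle to the sampled points. The sketch below uses the simple algebraic (Kasa) least-squares circle fit on synthetic data; the paper's unified calibration/measurement model is more involved, and all numbers here are made up.

```python
import math

# Least-squares (Kasa) circle fit: find a, b, c minimizing the residuals of
# x^2 + y^2 + a*x + b*y + c = 0, then recover center and radius.

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    n = 3
    M = [A[i][:] + [b[i]] for i in range(n)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_circle(points):
    """Fit (center_x, center_y, radius) to 2D points via normal equations."""
    A = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for x, y in points:
        row = (x, y, 1.0)
        z = -(x * x + y * y)
        for i in range(3):
            for j in range(3):
                A[i][j] += row[i] * row[j]
            rhs[i] += row[i] * z
    a, b, c = solve3(A, rhs)
    cx, cy = -a / 2.0, -b / 2.0
    r = math.sqrt(cx * cx + cy * cy - c)
    return cx, cy, r

# Synthetic samples on a 40 mm bore (radius 20 mm) centered at (5, -3).
pts = [(5 + 20 * math.cos(t), -3 + 20 * math.sin(t))
       for t in [k * 2 * math.pi / 12 for k in range(12)]]
cx, cy, r = fit_circle(pts)
print(round(2 * r, 3))  # → 40.0
```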
Spilker, Ryan L; Feinstein, Jeffrey A; Parker, David W; Reddy, V Mohan; Taylor, Charles A
2007-04-01
Patient-specific computational models could aid in planning interventions to relieve pulmonary arterial stenoses common in many forms of congenital heart disease. We describe a new approach to simulate blood flow in subject-specific models of the pulmonary arteries that consists of a numerical model of the proximal pulmonary arteries created from three-dimensional medical imaging data with terminal impedance boundary conditions derived from linear wave propagation theory applied to morphometric models of distal vessels. A tuning method, employing numerical solution methods for nonlinear systems of equations, was developed to modify the distal vasculature to match measured pressure and flow distribution data. One-dimensional blood flow equations were solved with a finite element method in image-based pulmonary arterial models using prescribed inlet flow and morphometry-based impedance at the outlets. Application of these methods in a pilot study of the effect of removal of unilateral pulmonary arterial stenosis induced in a pig showed good agreement with experimental measurements for flow redistribution and main pulmonary arterial pressure. Next, these methods were applied to a patient with repaired tetralogy of Fallot and predicted insignificant hemodynamic improvement with relief of the stenosis. This method of coupling image-based and morphometry-based models could enable increased fidelity in pulmonary hemodynamic simulation.
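As a crude illustration of morphometry-based outlet boundary conditions, the sketch below computes the steady Poiseuille resistance of a symmetric binary tree of distal vessels. This is a stand-in only: the paper derives frequency-dependent impedances from linear wave propagation theory, not pure resistances, and the radii, lengths, viscosity, and branching ratio here are invented.

```python
import math

# Poiseuille resistance of one vessel segment: R = 8*mu*L / (pi * r^4).
def poiseuille_R(radius, length, mu=0.0035):
    return 8.0 * mu * length / (math.pi * radius ** 4)

# Resistance of a symmetric binary tree: each parent feeds two identical
# daughters (in parallel), with radius and length shrinking by `ratio`
# at every generation. Geometry values are illustrative, not morphometric data.
def tree_resistance(radius, length, levels, ratio=0.8):
    R = poiseuille_R(radius, length)
    if levels == 1:
        return R
    child = tree_resistance(radius * ratio, length * ratio, levels - 1, ratio)
    return R + child / 2.0  # two identical daughters in parallel

R1 = tree_resistance(0.002, 0.01, levels=1)
R3 = tree_resistance(0.002, 0.01, levels=3)
print(R3 > R1)  # deeper tree adds distal resistance → True
```

In the paper's framework this idea is generalized: each outlet of the image-based proximal model is terminated by the input impedance of such a distal tree, and the tuning procedure adjusts the distal vasculature until measured pressures and flow splits are matched.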
Liu, Y; Allen, R
2002-09-01
The study aimed to model the cerebrovascular system, using a linear ARX model based on data simulated by a comprehensive physiological model, and to assess the range of applicability of linear parametric models. Arterial blood pressure (ABP) and middle cerebral arterial blood flow velocity (MCAV) were measured from 11 subjects non-invasively, following step changes in ABP, using the thigh cuff technique. By optimising parameters associated with autoregulation, using a non-linear optimisation technique, the physiological model showed a good performance (r=0.83+/-0.14) in fitting MCAV. An additional five sets of measured ABP of length 236+/-154 s were acquired from a subject at rest. These were normalised and rescaled to coefficients of variation (CV=SD/mean) of 2% and 10% for model comparisons. Randomly generated Gaussian noise with standard deviation (SD) from 1% to 5% was added to both ABP and physiologically simulated MCAV (SMCAV), with 'normal' and 'impaired' cerebral autoregulation, to simulate the real measurement conditions. ABP and SMCAV were fitted by ARX modelling, and cerebral autoregulation was quantified by a 5 s recovery percentage R5% of the step responses of the ARX models. The study suggests that cerebral autoregulation can be assessed by computing the R5% of the step response of an ARX model of appropriate order, even when measurement noise is considerable.
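The core of the ARX approach can be sketched with a first-order model y[k] = a*y[k-1] + b*u[k] fitted by ordinary least squares. The coefficients, the step input, and the pressure/velocity labels below are synthetic assumptions, not the paper's identified model.

```python
# Toy ARX(1,1) identification: y[k] = a*y[k-1] + b*u[k], fitted by
# ordinary least squares via the 2x2 normal equations.

def fit_arx1(u, y):
    s11 = s12 = s22 = r1 = r2 = 0.0
    for k in range(1, len(y)):
        p, q, t = y[k - 1], u[k], y[k]   # regressors and target
        s11 += p * p; s12 += p * q; s22 += q * q
        r1 += p * t; r2 += q * t
    det = s11 * s22 - s12 * s12
    a = (r1 * s22 - r2 * s12) / det
    b = (r2 * s11 - r1 * s12) / det
    return a, b

# Simulate a step in "pressure" u driving "flow velocity" y (noiseless).
a_true, b_true = 0.8, 0.5
u = [0.0] * 5 + [1.0] * 45
y = [0.0]
for k in range(1, len(u)):
    y.append(a_true * y[-1] + b_true * u[k])

a, b = fit_arx1(u, y)
print(round(a, 3), round(b, 3))  # → 0.8 0.5
```

In the study, the step response of the fitted ARX model (rather than the raw data) is then summarized by the 5 s recovery percentage R5% to grade autoregulation; with noiseless data, as here, the fit recovers the generating coefficients exactly.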
Validation and uncertainty analysis of a pre-treatment 2D dose prediction model
NASA Astrophysics Data System (ADS)
Baeza, Jose A.; Wolfs, Cecile J. A.; Nijsten, Sebastiaan M. J. J. G.; Verhaegen, Frank
2018-02-01
Independent verification of complex treatment delivery with megavolt photon beam radiotherapy (RT) has been effectively used to detect and prevent errors. This work presents the validation and uncertainty analysis of a model that predicts 2D portal dose images (PDIs) without a patient or phantom in the beam. The prediction model is based on an exponential point dose model with separable primary and secondary photon fluence components. The model includes a scatter kernel, off-axis ratio map, transmission values and penumbra kernels for beam-delimiting components. These parameters were derived through a model fitting procedure supplied with point dose and dose profile measurements of radiation fields. The model was validated against a treatment planning system (TPS; Eclipse) and radiochromic film measurements for complex clinical scenarios, including volumetric modulated arc therapy (VMAT). Confidence limits on fitted model parameters were calculated based on simulated measurements. A sensitivity analysis was performed to evaluate the effect of the parameter uncertainties on the model output. For the maximum uncertainty, the maximally deviating measurement sets were propagated through the fitting procedure and the model. The overall uncertainty was assessed using all simulated measurements. The validation of the prediction model against the TPS and the film showed good agreement, with on average 90.8% and 90.5% of pixels passing a (2%, 2 mm) global gamma analysis, respectively, with a low-dose threshold of 10%. The maximum and overall uncertainties of the model are dependent on the type of clinical plan used as input. The results can be used to study the robustness of the model. A model for predicting accurate 2D pre-treatment PDIs in complex RT scenarios can be used clinically, and its uncertainties can be taken into account.
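A (2%, 2 mm) global gamma analysis of the kind used for this validation can be sketched in one dimension. The dose profiles below are invented, and real PDI comparisons are 2D, but the dose-difference, distance-to-agreement, and low-dose-threshold logic is the same.

```python
# Minimal 1D global gamma analysis: a reference point passes if some
# evaluated point lies within the combined dose-difference (percent of the
# global max dose) and distance-to-agreement ellipse, i.e. gamma <= 1.

def gamma_pass_rate(ref, ev, spacing_mm, dd_pct=2.0, dta_mm=2.0, thresh_pct=10.0):
    d_max = max(ref)
    dd = dd_pct / 100.0 * d_max
    passed = total = 0
    for i, dr in enumerate(ref):
        if dr < thresh_pct / 100.0 * d_max:
            continue  # low-dose threshold: exclude this point
        total += 1
        g2_min = min(
            ((ev[j] - dr) / dd) ** 2 + ((j - i) * spacing_mm / dta_mm) ** 2
            for j in range(len(ev)))
        if g2_min <= 1.0:
            passed += 1
    return 100.0 * passed / total

# Synthetic reference and evaluated dose profiles (arbitrary units, 1 mm grid).
ref = [0.0, 10.0, 50.0, 100.0, 50.0, 10.0, 0.0]
ev  = [0.0, 10.5, 51.0, 101.0, 50.5, 10.2, 0.0]
print(round(gamma_pass_rate(ref, ev, spacing_mm=1.0), 1))  # → 100.0
```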
NASA Astrophysics Data System (ADS)
Gharaibeh, M. A.; Eltaif, N. I.; Alrababah, M. A.; Alhamad, M. N.
2009-04-01
Infiltration is vital for both irrigated and rainfed agriculture. Knowledge of the infiltration characteristics of a soil is the basic information required for designing an efficient irrigation system. The objective of the present study was to model soil infiltration using four models: Green and Ampt, Horton, Kostiakov and modified Kostiakov. Infiltration tests were conducted on field plots irrigated with treated greywater, untreated greywater and fresh water. The field water infiltration data used in these models were based on double-ring infiltrometer tests conducted for 4 h. The parameters of the infiltration models were fitted to the measured infiltration [I(t)] data using nonlinear least-squares regression. Among the process-based infiltration models, the Horton model performed best and matched the measured I(t) data with the lowest sum of squares (SS).
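Comparing candidate infiltration models by their sum of squared residuals can be sketched as follows. The Horton and Kostiakov parameter values and the "measured" series are illustrative; the data here are generated from a Horton-type decay, so that model's SS is zero by construction.

```python
import math

# Two classic infiltration-rate models and an SS (sum of squared residuals)
# comparison against measured data. All parameter values are illustrative.

def horton(t, f0, fc, k):
    """Horton infiltration rate: fc + (f0 - fc) * exp(-k*t)."""
    return fc + (f0 - fc) * math.exp(-k * t)

def kostiakov(t, a, b):
    """Kostiakov infiltration rate: a * t^(-b)."""
    return a * t ** -b

def sum_squares(model, params, data):
    return sum((model(t, *params) - f) ** 2 for t, f in data)

# Synthetic "measurements" from a Horton-type decay (rate in cm/h vs. time in h).
data = [(t / 10.0, horton(t / 10.0, 12.0, 2.0, 1.5)) for t in range(1, 41)]

ss_horton = sum_squares(horton, (12.0, 2.0, 1.5), data)
ss_kostiakov = sum_squares(kostiakov, (5.0, 0.4), data)
print(ss_horton < ss_kostiakov)  # → True
```

In practice the parameters themselves would be obtained by nonlinear least squares (e.g. minimizing SS over f0, fc, k), not assumed as above.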
Quantitative computed tomography-based predictions of vertebral strength in anterior bending.
Buckley, Jenni M; Cheng, Liu; Loo, Kenneth; Slyfield, Craig; Xu, Zheng
2007-04-20
This study examined the ability of QCT-based structural assessment techniques to predict vertebral strength in anterior bending. The purpose of this study was to compare the abilities of QCT-based bone mineral density (BMD), mechanics of solids models (MOS), e.g., bending rigidity, and finite element analyses (FE) to predict the strength of isolated vertebral bodies under anterior bending boundary conditions. Although the relative performance of QCT-based structural measures is well established for uniform compression, the ability of these techniques to predict vertebral strength under nonuniform loading conditions has not yet been established. Thirty human thoracic vertebrae from 30 donors (T9-T10, 20 female, 10 male; 87 +/- 5 years of age) were QCT scanned and destructively tested in anterior bending using an industrial robot arm. The QCT scans were processed to generate specimen-specific FE models as well as trabecular bone mineral density (tBMD), integral bone mineral density (iBMD), and MOS measures, such as axial and bending rigidities. Vertebral strength in anterior bending was poorly to moderately predicted by QCT-based BMD and MOS measures (R2 = 0.14-0.22). QCT-based FE models were better strength predictors (R2 = 0.34-0.40); however, their predictive performance was not statistically different from MOS bending rigidity (P > 0.05). Our results suggest that the poor clinical performance of noninvasive structural measures may be due to their inability to predict vertebral strength under bending loads. While their performance was not statistically better than MOS bending rigidities, QCT-based FE models were moderate predictors of both compressive and bending loads at failure, suggesting that this technique has the potential for strength prediction under nonuniform loads. The current FE modeling strategy is insufficient, however, and significant modifications must be made to better mimic whole bone elastic and inelastic material behavior.
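An MOS-style bending rigidity can be sketched from a density map by assigning each voxel a modulus and summing E·y² about the modulus-weighted centroid. The linear density-modulus law, the voxel size, and the tiny uniform grid below are illustrative assumptions, not the study's calibration.

```python
# Mechanics-of-solids (MOS) style bending rigidity from a QCT-like density
# map: each voxel gets a modulus from a density-modulus relation, then
# E * (y - ybar)^2 is summed about the modulus-weighted centroid height.
# The linear law modulus(rho) = 3*rho is a made-up placeholder.

def bending_rigidity(density_rows, voxel_mm, modulus=lambda rho: 3.0 * rho):
    """density_rows[i][j] is the density of the voxel at height i*voxel_mm."""
    area = voxel_mm * voxel_mm
    # Modulus-weighted centroid height ybar
    total = weighted = 0.0
    for i, row in enumerate(density_rows):
        for rho in row:
            EA = modulus(rho) * area
            total += EA
            weighted += EA * i * voxel_mm
    ybar = weighted / total
    # Integrate E * (y - ybar)^2 over the cross-section
    EI = 0.0
    for i, row in enumerate(density_rows):
        for rho in row:
            EI += modulus(rho) * area * (i * voxel_mm - ybar) ** 2
    return EI

# Uniform 2x2 "slice": the centroid sits midway and EI is analytic.
EI = bending_rigidity([[1.0, 1.0], [1.0, 1.0]], voxel_mm=1.0)
print(EI)  # → 3.0
```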
Laser-based Relative Navigation Using GPS Measurements for Spacecraft Formation Flying
NASA Astrophysics Data System (ADS)
Lee, Kwangwon; Oh, Hyungjik; Park, Han-Earl; Park, Sang-Young; Park, Chandeok
2015-12-01
This study presents a precise relative navigation algorithm using both laser and Global Positioning System (GPS) measurements in real time. The measurement model of the navigation algorithm between two spacecraft comprises relative distances measured by laser instruments and single differences of GPS pseudo-range measurements in spherical coordinates. Based on the measurement model, the Extended Kalman Filter (EKF) is applied to smooth the pseudo-range measurements and to obtain the relative navigation solution. While a navigation algorithm using only laser measurements might become inaccurate because of the limited accuracy of spacecraft attitude estimation when the distance between spacecraft is rather large, the proposed approach is able to provide an accurate solution even in such cases by employing the smoothed GPS pseudo-range measurements. Numerical simulations demonstrate that the errors of the proposed algorithm are reduced by more than 12% compared with those of an algorithm using only laser measurements when the angular measurement error is larger than 0.001° at relative distances greater than 30 km.
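The benefit of folding in smoothed GPS pseudo-range differences can be illustrated with a scalar Kalman measurement update on a single relative-distance state. The noise variances and measurement values below are invented, and the real filter is a full EKF over the relative state, not a scalar estimator.

```python
# Scalar Kalman measurement update: precise laser ranges plus noisier GPS
# single-difference pseudo-ranges jointly refine one relative-distance
# state. All numbers are made up for illustration, not flight data.

def kf_update(x, P, z, R):
    """One measurement update for a scalar state x with variance P."""
    K = P / (P + R)
    return x + K * (z - x), (1.0 - K) * P

x, P = 0.0, 1e6               # diffuse prior on the relative distance (m)
laser = [100.2, 99.9, 100.1]  # laser ranges, variance 0.01 m^2
gps_sd = [100.8, 99.5]        # GPS single differences, variance 1.0 m^2
for z in laser:
    x, P = kf_update(x, P, z, 0.01)
for z in gps_sd:
    x, P = kf_update(x, P, z, 1.0)
print(round(x, 2))  # → 100.07
```

The estimate lands close to the precise laser measurements while the coarser GPS differences contribute with proportionally smaller gains; in the paper, those GPS measurements take over when attitude errors degrade the laser-only solution at long range.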
Modeling the Information Age Combat Model: An Agent-Based Simulation of Network Centric Operations
NASA Technical Reports Server (NTRS)
Deller, Sean; Rabadi, Ghaith A.; Bell, Michael I.; Bowling, Shannon R.; Tolk, Andreas
2010-01-01
The Information Age Combat Model (IACM) was introduced by Cares in 2005 to contribute to the development of an understanding of the influence of connectivity on force effectiveness that can eventually lead to quantitative prediction and guidelines for design and employment. The structure of the IACM makes it clear that the Perron-Frobenius eigenvalue is a quantifiable metric with which to measure the organization of a networked force. The results of recent experiments presented in Deller et al. (2009) indicate that the value of the Perron-Frobenius eigenvalue is a significant measurement of the performance of an Information Age combat force. This was accomplished through the innovative use of an agent-based simulation to model the IACM and represents an initial contribution towards a new generation of combat models that are net-centric instead of using the current platform-centric approach. This paper describes the intent, challenges, design, and initial results of this agent-based simulation model.
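The Perron-Frobenius eigenvalue of a nonnegative adjacency matrix can be estimated with plain power iteration, as sketched below for a made-up fully connected four-node network (not a configuration from the IACM experiments).

```python
# Power iteration for the Perron-Frobenius (dominant) eigenvalue of a
# nonnegative adjacency matrix, the connectivity metric the IACM uses to
# score the organization of a networked force.

def perron_eigenvalue(adj, iters=100):
    n = len(adj)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)          # infinity-norm estimate of the eigenvalue
        v = [x / lam for x in w]
    return lam

# Fully connected 4-node network: the spectral radius of K4 is exactly 3.
K4 = [[0, 1, 1, 1],
      [1, 0, 1, 1],
      [1, 1, 0, 1],
      [1, 1, 1, 0]]
print(perron_eigenvalue(K4))  # → 3.0
```

Denser, better-organized connectivity raises this eigenvalue, which is why it serves as a scalar summary of a networked force's structure.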
NASA Astrophysics Data System (ADS)
Appel, W.; Gilliam, R. C.; Pouliot, G. A.; Godowitch, J. M.; Pleim, J.; Hogrefe, C.; Kang, D.; Roselle, S. J.; Mathur, R.
2013-12-01
The DISCOVER-AQ project (Deriving Information on Surface conditions from Column and Vertically Resolved Observations Relevant to Air Quality) is a joint collaboration between NASA, U.S. EPA and a number of other local organizations with the goal of characterizing air quality in urban areas using satellite, aircraft, vertical profiler and ground-based measurements (http://discover-aq.larc.nasa.gov). In July 2011, the DISCOVER-AQ project conducted intensive air quality measurements in the Baltimore, MD and Washington, D.C. area in the eastern U.S. To take advantage of these unique data, the Community Multiscale Air Quality (CMAQ) model, coupled with the Weather Research and Forecasting (WRF) model, is used to simulate the meteorology and air quality in the same region using 12-km, 4-km and 1-km horizontal grid spacings. The goal of the modeling exercise is to demonstrate the capability of the coupled WRF-CMAQ modeling system to simulate air quality at fine grid spacings in an urban area. Development of new data assimilation techniques and the use of higher-resolution input data for the WRF model have been implemented to improve the meteorological results, particularly at the 4-km and 1-km grid resolutions. In addition, a number of updates to the CMAQ model were made to enhance the capability of the modeling system to accurately represent the magnitude and spatial distribution of pollutants at fine model resolutions. Data collected during the 2011 DISCOVER-AQ campaign, which include aircraft transects and spirals, ship measurements in the Chesapeake Bay, ozonesondes, tethered balloon measurements, DRAGON aerosol optical depth measurements, LIDAR measurements, and intensive ground-based site measurements, are used to evaluate results from the WRF-CMAQ modeling system for July 2011 at the three model grid resolutions.
The results of the comparisons of the model results to these measurements will be presented, along with results from the various sensitivity simulations examining the impact the various updates to the modeling system have on the model estimates.
A Critical Look at Entropy-Based Gene-Gene Interaction Measures.
Lee, Woojoo; Sjölander, Arvid; Pawitan, Yudi
2016-07-01
Several entropy-based measures for detecting gene-gene interaction have been proposed recently. It has been argued that entropy-based measures are preferable because entropy can better capture the nonlinear relationships between genotypes and traits, so they can be useful for detecting gene-gene interactions in complex diseases. These suggested measures look reasonable at an intuitive level, but so far there has been no detailed characterization of the interactions captured by them. Here we study analytically the properties of some entropy-based measures for detecting gene-gene interactions in detail. The relationship between interactions captured by the entropy-based measures and those of logistic regression models is clarified. In general we find that the entropy-based measures can suffer from a lack of specificity in terms of target parameters, i.e., they can detect uninteresting signals as interactions. Numerical studies are carried out to confirm the theoretical findings. © 2016 WILEY PERIODICALS, INC.
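One common entropy-based measure, the interaction information under the convention I(G1;G2;D) = I(G1;G2|D) - I(G1;G2), can be computed from sample counts as below. The XOR-style genotype-trait example is fabricated; it is the textbook case in which pairwise measures see nothing while the three-way measure is maximal.

```python
import math
from collections import Counter

# Interaction information estimated from samples, written in entropy form:
# I(X;Y;Z) = H(XY) + H(XZ) + H(YZ) - H(X) - H(Y) - H(Z) - H(XYZ),
# which equals I(X;Y|Z) - I(X;Y) under one common sign convention.

def entropy(samples):
    """Plug-in Shannon entropy (bits) of a list of hashable symbols."""
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in Counter(samples).values())

def interaction_information(g1, g2, d):
    H = entropy
    pair = lambda a, b: list(zip(a, b))
    triple = list(zip(g1, g2, d))
    return (H(pair(g1, g2)) + H(pair(g1, d)) + H(pair(g2, d))
            - H(g1) - H(g2) - H(d) - H(triple))

# XOR-like trait: d depends only on the *combination* of the two genotypes,
# so each gene alone carries zero information about d.
g1 = [0, 0, 1, 1] * 25
g2 = [0, 1, 0, 1] * 25
d  = [a ^ b for a, b in zip(g1, g2)]
print(interaction_information(g1, g2, d))  # → 1.0
```

The abstract's caution applies here too: a nonzero value of such a measure need not correspond to an interaction parameter in a logistic regression model, which is the lack of specificity the authors analyze.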
ERIC Educational Resources Information Center
Sins, Patrick H. M.; van Joolingen, Wouter R.; Savelsbergh, Elwin R.; van Hout-Wolters, Bernadette
2008-01-01
The purpose of the present study was to test a conceptual model of relations among achievement goal orientation, self-efficacy, cognitive processing, and achievement of students working within a particular collaborative task context. The task involved a collaborative computer-based modeling task. In order to test the model, group measures of…
Minimizing Concentration Effects in Water-Based, Laminar-Flow Condensation Particle Counters
Lewis, Gregory S.; Hering, Susanne V.
2013-01-01
Concentration effects in water condensation systems, such as those used in the water-based condensation particle counter, are explored through numerical modeling and direct measurements. Modeling shows that the condensation heat release and vapor depletion associated with particle activation and growth lower the peak supersaturation. At higher number concentrations, the diameter of the droplets formed is smaller, and the threshold particle size for activation is higher. This occurs in both cylindrical and parallel-plate geometries. For water-based systems we find that condensational heat release is more important than vapor depletion. We also find that concentration effects can be minimized through the use of smaller tube diameters or more closely spaced parallel plates. Experimental measurements of droplet diameter confirm the modeling results. PMID:24436507
Status of Air Quality in Central California and Needs for Further Study
NASA Astrophysics Data System (ADS)
Tanrikulu, S.; Beaver, S.; Soong, S.; Tran, C.; Jia, Y.; Matsuoka, J.; McNider, R. T.; Biazar, A. P.; Palazoglu, A.; Lee, P.; Wang, J.; Kang, D.; Aneja, V. P.
2012-12-01
Ozone and PM2.5 levels frequently exceed the NAAQS in central California (CC). Additional emission reductions are needed to attain and maintain the standards there. Agencies are developing cost-effective emission control strategies along with complementary incentive programs to reduce emissions when exceedances are forecasted. These approaches require accurate modeling and forecasting capabilities. A variety of models (MM5, WRF, CMAQ, CAMx) have been rigorously applied over CC. Despite the vast amount of land-based measurements from special field programs and significant effort, models have historically exhibited marginal performance. Satellite data may improve model performance by: establishing IC/BC over outlying areas of the modeling domain having unknown conditions; enabling FDDA over the Pacific Ocean to characterize important marine inflows and pollutant outflows; and filling in the gaps of the land-based monitoring network. BAAQMD, in collaboration with the NASA AQAST, plans to conduct four studies that include satellite-based data in CC air quality analysis and modeling. The first project enhances and refines the characterization of weather patterns, especially aloft, that impact summer ozone formation. Surface analyses were unable to characterize the strong effect of the complex terrain in attenuating and steering marine winds impinging on the continent. The dense summer clouds and fog over the Pacific Ocean form spatial patterns that can be related to the downstream air flows through polluted areas. The goal of this project is to explore, characterize, and quantify these relationships using cloud cover data. Specifically, cloud agreement statistics will be developed using satellite data and model clouds. Model skin temperature predictions will be compared to both MODIS and GOES skin temperatures. The second project evaluates and improves the initial and simulated fields of meteorological models that provide inputs to air quality models.
The study will attempt to determine whether a cloud dynamical adjustment developed by UAHuntsville can improve model performance for maritime stratus and whether a moisture adjustment scheme in the Pleim-Xiu boundary layer scheme can use satellite data in place of coarse surface air temperature measurements. The goal is to improve meteorological model performance that leads to improved air quality model performance. The third project evaluates and improves forecasting skills of the National Air Quality Forecasting Model in CC by using land-based routine measurements as well as satellite data. Local forecasts are mostly based on surface meteorological and air quality measurements and weather charts provided by NWS. The goal is to improve the average accuracy in forecasting exceedances, which is around 60%. The fourth project uses satellite data for monitoring trends in fine particulate matter (PM2.5) in the San Francisco Bay Area. It evaluates the effectiveness of a rule adopted in 2008 that restricts household wood burning on days forecasted to have high PM2.5 levels. The goal is to complement current analyses based on surface data covering the largest sub-regions and population centers. The overall goal is to use satellite data to overcome limitations of land-based measurements. The outcomes will be further conceptual understanding of pollutant formation, improved regulatory model performance, and better optimized forecasting programs.
Predictive Modeling of Fast-Curing Thermosets in Nozzle-Based Extrusion
NASA Technical Reports Server (NTRS)
Xie, Jingjin; Randolph, Robert; Simmons, Gary; Hull, Patrick V.; Mazzeo, Aaron D.
2017-01-01
This work presents an approach to modeling the dynamic spreading and curing behavior of thermosets in nozzle-based extrusions. Thermosets cover a wide range of materials, some of which permit low-temperature processing with subsequent high-temperature and high-strength working properties. Extruding thermosets may overcome the limited working temperatures and strengths of conventional thermoplastic materials used in additive manufacturing. This project aims to produce technology for the fabrication of thermoset-based structures leveraging advances made in nozzle-based extrusion, such as fused deposition modeling (FDM), material jetting, and direct writing. Understanding the synergistic interactions between spreading and fast curing of extruded thermosetting materials will provide essential insights for applications that require accurate dimensional controls, such as additive manufacturing [1], [2] and centrifugal coating/forming [3]. Two types of thermally curing thermosets -- one being a soft silicone (Ecoflex 0050) and the other being a toughened epoxy (G/Flex) -- served as the test materials in this work to obtain models for cure kinetics and viscosity. The developed models align with extensive measurements made with differential scanning calorimetry (DSC) and rheology. DSC monitors the change in the heat of reaction, which reflects the rate and degree of cure at different crosslinking stages. Rheology measures the change in complex viscosity, shear moduli, yield stress, and other properties dictated by chemical composition. By combining DSC and rheological measurements, it is possible to establish a set of models profiling the cure kinetics and chemorheology without prior knowledge of chemical composition, which is usually necessary for sophisticated mechanistic modeling. In this work, we conducted both isothermal and dynamic measurements with both DSC and rheology. 
With the developed models, numerical simulations yielded predictions of diameter and height of droplets, along with width and height of extruded lines cured at varied temperatures. Experimental results carried out on a goniometric platform and a nozzle-based 3D printer showed agreement with the numerical simulations. Finally, this presentation will show how the models are adaptable to the planning of tool paths and designs in additive manufacturing.
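A minimal cure-kinetics model of the kind fitted from DSC data is an nth-order rate law dα/dt = A·exp(-Ea/(R·T))·(1-α)^n. The Arrhenius constants and reaction order below are illustrative placeholders, not fitted values for Ecoflex 0050 or G/Flex.

```python
import math

# nth-order isothermal cure kinetics integrated with forward Euler:
# d(alpha)/dt = A * exp(-Ea / (R*T)) * (1 - alpha)^n,
# where alpha is the degree of cure. All constants are illustrative.

def cure_fraction(T_kelvin, t_end, dt=0.01, A=1e5, Ea=50e3, n=1.5):
    R = 8.314                                  # J/(mol K)
    k = A * math.exp(-Ea / (R * T_kelvin))     # Arrhenius rate constant
    alpha, t = 0.0, 0.0
    while t < t_end:
        alpha += dt * k * (1.0 - alpha) ** n
        t += dt
    return min(alpha, 1.0)

# A hotter isothermal cure should reach a higher degree of cure in the
# same time, which is the basic trend DSC measurements capture.
print(cure_fraction(360.0, 60.0) > cure_fraction(330.0, 60.0))  # → True
```

In the paper's workflow, the degree-of-cure model is coupled to a chemorheological viscosity model so that spreading (droplet and line dimensions) can be predicted as the material simultaneously flows and gels.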
USDA-ARS?s Scientific Manuscript database
Measures of animal movement versus consumption rates can provide valuable, ecologically relevant information on feeding preference, specifically estimates of attraction rate, leaving rate, tenure time, or measures of flight/walking path. Here, we develop a simple biostatistical model to analyze repe...
Advances in soil erosion research: processes, measurement, and modeling
USDA-ARS?s Scientific Manuscript database
Soil erosion by the environmental agents of water and wind is a continuing global menace that threatens the agricultural base that sustains our civilization. Members of ASABE have been at the forefront of research to understand erosion processes, measure erosion and related processes, and model very...
NASA Astrophysics Data System (ADS)
Song, H. S.; Li, M.; Qian, W.; Song, X.; Chen, X.; Scheibe, T. D.; Fredrickson, J.; Zachara, J. M.; Liu, C.
2016-12-01
Modeling environmental microbial communities at individual organism level is currently intractable due to overwhelming structural complexity. Functional guild-based approaches alleviate this problem by lumping microorganisms into fewer groups based on their functional similarities. This reduction may become ineffective, however, when individual species perform multiple functions as environmental conditions vary. In contrast, the functional enzyme-based modeling approach we present here describes microbial community dynamics based on identified functional enzymes (rather than individual species or their groups). Previous studies in the literature along this line used biomass or functional genes as surrogate measures of enzymes due to the lack of analytical methods for quantifying enzymes in environmental samples. Leveraging our recent development of a signature peptide-based technique enabling sensitive quantification of functional enzymes in environmental samples, we developed a genetically structured microbial community model (GSMCM) to incorporate enzyme concentrations and various other omics measurements (if available) as key modeling input. We formulated the GSMCM based on the cybernetic metabolic modeling framework to rationally account for cellular regulation without relying on empirical inhibition kinetics. In the case study of modeling denitrification process in Columbia River hyporheic zone sediments collected from the Hanford Reach, our GSMCM provided a quantitative fit to complex experimental data in denitrification, including the delayed response of enzyme activation to the change in substrate concentration. Our future goal is to extend the modeling scope to the prediction of carbon and nitrogen cycles and contaminant fate. Integration of a simpler version of the GSMCM with PFLOTRAN for multi-scale field simulations is in progress.
A Novel Grid SINS/DVL Integrated Navigation Algorithm for Marine Application
Kang, Yingyao; Zhao, Lin; Cheng, Jianhua; Fan, Xiaoliang
2018-01-01
Integrated navigation algorithms under the grid frame have been proposed based on the Kalman filter (KF) to solve the problem of navigation in some special regions. However, in the existing studies of grid strapdown inertial navigation system (SINS)/Doppler velocity log (DVL) integrated navigation algorithms, the Earth models of the filter dynamic model and the SINS mechanization are not unified. Besides, traditional integrated systems with a KF-based correction scheme are susceptible to measurement errors, which would decrease the accuracy and robustness of the system. In this paper, an adaptive robust Kalman filter (ARKF)-based hybrid-correction grid SINS/DVL integrated navigation algorithm is designed with a unified reference ellipsoid Earth model to improve the navigation accuracy in middle-high latitude regions for marine applications. Firstly, to unify the Earth models, the mechanization of grid SINS is introduced and the error equations are derived based on the same reference ellipsoid Earth model. Then, a more accurate grid SINS/DVL filter model is designed according to the new error equations. Finally, a hybrid-correction scheme based on the ARKF is proposed to resist the effect of measurement errors. Simulation and experiment results show that, compared with traditional algorithms, the proposed navigation algorithm can effectively improve the navigation performance in middle-high latitude regions through the unified Earth models and the ARKF-based hybrid-correction scheme. PMID:29373549
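The ARKF's resistance to measurement errors can be caricatured with a scalar robust update that inflates the measurement variance when the normalized innovation exceeds a gate. The gating rule, threshold, and numbers below are illustrative, not the paper's adaptive or robust factors.

```python
# Robust scalar measurement update: if the innovation is implausibly large
# relative to its predicted standard deviation, inflate the measurement
# variance R so the outlier is down-weighted instead of trusted.

def robust_update(x, P, z, R, gate=3.0):
    nu = z - x                      # innovation
    ratio = abs(nu) / (P + R) ** 0.5
    if ratio > gate:
        R = R * (ratio / gate) ** 2  # inflate R for the suspect measurement
    K = P / (P + R)
    return x + K * nu, (1.0 - K) * P

x, P = 10.0, 1.0
x_out, _ = robust_update(x, P, 50.0, 1.0)  # gross outlier, heavily down-weighted
x_ok, _ = robust_update(x, P, 10.5, 1.0)   # nominal measurement, normal gain
print(round(x_out, 1), round(x_ok, 2))  # → 10.4 10.25
```

The nominal measurement moves the estimate by the standard Kalman gain, while the outlier barely shifts it, which is the qualitative behavior the ARKF-based hybrid correction relies on.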
Akrami, Mohammad; Qian, Zhihui; Zou, Zhemin; Howard, David; Nester, Chris J; Ren, Lei
2018-04-01
The objective of this study was to develop and validate a subject-specific framework for modelling the human foot. This was achieved by integrating medical image-based finite element modelling, individualised multi-body musculoskeletal modelling and 3D gait measurements. A 3D ankle-foot finite element model comprising all major foot structures was constructed based on MRI of one individual. A multi-body musculoskeletal model and 3D gait measurements for the same subject were used to define loading and boundary conditions. Sensitivity analyses were used to investigate the effects of key modelling parameters on model predictions. Prediction errors of average and peak plantar pressures were below 10% in all ten plantar regions at five key gait events with only one exception (lateral heel, in early stance, error of 14.44%). The sensitivity analyses results suggest that predictions of peak plantar pressures are moderately sensitive to material properties, ground reaction forces and muscle forces, and significantly sensitive to foot orientation. The maximum region-specific percentage change ratios (peak stress percentage change over parameter percentage change) were 1.935-2.258 for ground reaction forces, 1.528-2.727 for plantar flexor muscles and 4.84-11.37 for foot orientations. This strongly suggests that loading and boundary conditions need to be very carefully defined based on personalised measurement data.
Pimperl, A; Schreyögg, J; Rothgang, H; Busse, R; Glaeske, G; Hildebrandt, H
2015-12-01
Transparency of the economic performance of integrated care systems (IV) is a basic requirement for the acceptance and further development of integrated care. Diverse evaluation methods are used but are seldom openly discussed because of the proprietary nature of the different business models. The aim of this article is to develop a generic model for measuring the economic performance of IV interventions. A catalogue of five quality criteria is used to discuss different evaluation methods (uncontrolled before-after studies, control-group-based approaches, regression models). On this basis a best practice model is proposed. A regression model based on the German morbidity-based risk structure equalisation scheme (MorbiRSA) has some benefits in comparison to the other methods mentioned. In particular, it requires fewer resources to implement and offers advantages concerning the reliability and the transparency of the method (important for acceptance). Its validity is also sound. Although RCTs and, to a lesser extent, complex difference-in-difference matching approaches can lead to a higher validity of the results, their feasibility in real-life settings is limited for economic and practical reasons. That is why central criticisms of a MorbiRSA-based model were addressed and adaptations were proposed and incorporated in a best practice model: the population-oriented morbidity-adjusted margin improvement model (P-DBV(MRSA)). The P-DBV(MRSA) approach may be used as a standardised best practice model for the economic evaluation of IV. In parallel to the proposed approach for measuring economic performance, a balanced, quality-oriented performance measurement system should be introduced. This should prevent incentivising IV players to undertake short-term cost cutting at the expense of quality. © Georg Thieme Verlag KG Stuttgart · New York.
A comparison of arcjet plume properties to model predictions
NASA Technical Reports Server (NTRS)
Cappelli, M. A.; Liebeskind, J. G.; Hanson, R. K.; Butler, G. W.; King, D. Q.
1993-01-01
This paper describes an experimental study of the plasma plume properties of a 1 kW class hydrogen arcjet thruster and the comparison of the measured temperature and velocity fields to model predictions. The experiments are based on laser-induced fluorescence excitation of the Balmer-alpha transition. The model is based on a single-fluid magnetohydrodynamic description of the flow originally developed to predict arcjet thruster performance. Excellent agreement between model predictions and experimental velocities is found, despite the complex nature of the flow. Measured and predicted exit-plane temperatures are in disagreement by as much as 2000 K over a range of operating conditions. The possible sources for this discrepancy are discussed.