Model averaging techniques for quantifying conceptual model uncertainty.
Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg
2010-01-01
In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
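For reference, the criterion-based branch of these techniques (MLBMA with KIC or BIC, and AICMA) reduces each model's information-criterion value to a normalized averaging weight. The short sketch below illustrates that weighting step generically in Python; the criterion values are hypothetical and the function is not taken from the paper.

    import numpy as np

    def ic_model_weights(ic_values):
        """Convert information-criterion values (AIC, BIC, or KIC) for a set of
        candidate models into normalized model-averaging weights:
        w_i = exp(-0.5 * (IC_i - IC_min)) / sum_j exp(-0.5 * (IC_j - IC_min))
        """
        ic = np.asarray(ic_values, dtype=float)
        delta = ic - ic.min()            # differences relative to the best model
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    # Hypothetical criterion values for four alternative conceptual models
    print(ic_model_weights([212.4, 215.1, 218.9, 213.0]))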
An improved switching converter model using discrete and average techniques
NASA Technical Reports Server (NTRS)
Shortt, D. J.; Lee, F. C.
1982-01-01
The nonlinear modeling and analysis of dc-dc converters has been done by averaging and discrete-sampling techniques. The averaging technique is simple, but inaccurate as the modulation frequencies approach the theoretical limit of one-half the switching frequency. The discrete technique is accurate even at high frequencies, but is very complex and cumbersome. An improved model is developed by combining the aforementioned techniques. This new model is easy to implement in circuit and state variable forms and is accurate to the theoretical limit.
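For context, the conventional state-space averaged model that such work improves upon weights the converter's ON and OFF topologies by the duty cycle d; written generically (a textbook form, not the improved discrete-average model itself):

\[ \dot{\bar{x}} = \bigl[d\,A_1 + (1-d)\,A_2\bigr]\bar{x} + \bigl[d\,B_1 + (1-d)\,B_2\bigr]u , \]

where A_1, B_1 and A_2, B_2 are the state matrices of the two switch states. This averaged form loses accuracy as modulation frequencies approach half the switching frequency, which is the limitation the combined discrete/average model is designed to overcome.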
Time series forecasting using ERNN and QR based on Bayesian model averaging
NASA Astrophysics Data System (ADS)
Pwasong, Augustine; Sathasivam, Saratha
2017-08-01
The Bayesian model averaging technique is a multi-model combination technique. The technique was employed to amalgamate the Elman recurrent neural network (ERNN) technique with the quadratic regression (QR) technique. The amalgamation produced a hybrid technique known as the hybrid ERNN-QR technique. The forecasting potential of the hybrid technique is compared with the forecasting capabilities of the individual ERNN and QR techniques. The outcome revealed that the hybrid technique is superior to the individual techniques in the mean square error sense.
An improved switching converter model. Ph.D. Thesis. Final Report
NASA Technical Reports Server (NTRS)
Shortt, D. J.
1982-01-01
The nonlinear modeling and analysis of dc-dc converters in the continuous mode and discontinuous mode was done by averaging and discrete sampling techniques. A model was developed by combining these two techniques. This model, the discrete average model, accurately predicts the envelope of the output voltage and is easy to implement in circuit and state variable forms. The proposed model is shown to be dependent on the type of duty cycle control. The proper selection of the power stage model, between average and discrete average, is largely a function of the error processor in the feedback loop. The accuracy of the measurement data taken by a conventional technique is affected by the conditions at which the data is collected.
Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition
Fraley, Chris; Percival, Daniel
2014-01-01
Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ajami, N K; Duan, Q; Gao, X
2005-04-11
This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
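The two ends of that spectrum are easy to state concretely: the SMA is an unweighted mean of the member forecasts, while the WAM assigns weights to the members, for example by a least-squares fit against observations over a training period. The sketch below illustrates both under that least-squares assumption; it is a generic illustration with made-up numbers, not the DMIP implementation.

    import numpy as np

    def sma(member_forecasts):
        """Simple Multi-model Average: unweighted mean over members (axis 0)."""
        return np.mean(member_forecasts, axis=0)

    def wam(member_forecasts, observations):
        """Weighted Average Method: least-squares weights fit on a training period."""
        F = np.asarray(member_forecasts).T           # shape (time, models)
        w, *_ = np.linalg.lstsq(F, observations, rcond=None)
        return F @ w, w

    # Hypothetical streamflow forecasts from three models, plus observed flow
    forecasts = np.array([[10., 12., 15., 11.],
                          [ 9., 13., 14., 12.],
                          [11., 11., 16., 10.]])
    obs = np.array([10.5, 12.2, 15.3, 11.1])
    print(sma(forecasts))
    combined, weights = wam(forecasts, obs)
    print(weights)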
NASA Technical Reports Server (NTRS)
Wiswell, E. R.; Cooper, G. R. (Principal Investigator)
1978-01-01
The author has identified the following significant results. The concept of average mutual information in the received spectral random process about the spectral scene was developed. Techniques amenable to implementation on a digital computer were also developed to make the required average mutual information calculations. These techniques required identification of models for the spectral response process of scenes. Stochastic modeling techniques were adapted for use. These techniques were demonstrated on empirical data from wheat and vegetation scenes.
The B-dot Earth Average Magnetic Field
NASA Technical Reports Server (NTRS)
Capo-Lugo, Pedro A.; Rakoczy, John; Sanders, Devon
2013-01-01
The average Earth magnetic field is usually solved with complex mathematical models based on a mean-square integral, and the result depends on which Earth magnetic field model is selected. This paper presents a simple technique that takes advantage of the damping effects of the b-dot controller and does not depend on the Earth magnetic field model; it does, however, depend on the satellite's magnetic torquers, which are not taken into consideration in the known mathematical models. The solution of this new technique can be implemented so easily that the flight software can be updated during flight, and the control system can have current gains for the magnetic torquers. Finally, the technique is verified and validated using flight data from a satellite that has been in orbit for three years.
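For context, the b-dot controller whose damping the technique exploits commands, in its simplest textbook form (assumed here, not necessarily the exact flight implementation), a magnetic dipole opposing the measured rate of change of the body-frame field,

\[ \mathbf{m} = -k\,\dot{\mathbf{B}}, \]

so the resulting damping torque \(\boldsymbol{\tau} = \mathbf{m} \times \mathbf{B}\) depends on the torquer gain k, which is why the averaged field inferred from the damping behaviour reflects the satellite's torquers as well as the field itself.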
NASA Astrophysics Data System (ADS)
Farag, Mohammed; Fleckenstein, Matthias; Habibi, Saeid
2017-02-01
Model-order reduction and minimization of the CPU run-time while maintaining the model accuracy are critical requirements for real-time implementation of lithium-ion electrochemical battery models. In this paper, an isothermal, continuous, piecewise-linear, electrode-average model is developed by using an optimal knot placement technique. The proposed model reduces the univariate nonlinear function of the electrode's open circuit potential dependence on the state of charge to continuous piecewise regions. The parameterization experiments were chosen to provide a trade-off between extensive experimental characterization techniques and purely identifying all parameters using optimization techniques. The model is then parameterized in each continuous, piecewise-linear, region. Applying the proposed technique cuts down the CPU run-time by around 20%, compared to the reduced-order, electrode-average model. Finally, the model validation against real-time driving profiles (FTP-72, WLTP) demonstrates the ability of the model to predict the cell voltage accurately with less than 2% error.
Spatial Assessment of Model Errors from Four Regression Techniques
Lianjun Zhang; Jeffrey H. Gove
2005-01-01
Forest modelers have attempted to account for the spatial autocorrelations among trees in growth and yield models by applying alternative regression techniques such as linear mixed models (LMM), generalized additive models (GAM), and geographically weighted regression (GWR). However, the model errors are commonly assessed using average errors across the entire study...
NASA Astrophysics Data System (ADS)
Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.
2014-11-01
Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth by empirical crop coefficients to adapt evapotranspiration throughout the vegetation period. We investigate the importance of the model structural vs. model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty is far more important than model parametric uncertainty to estimate irrigation water requirement. Using the Reliability Ensemble Averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a certain threshold, e.g. an irrigation water limit due to water right of 400 mm, would be less frequently exceeded in case of the REA ensemble average (45%) in comparison to the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
Variable diffusion in stock market fluctuations
NASA Astrophysics Data System (ADS)
Hua, Jia-Chen; Chen, Lijian; Falcon, Liberty; McCauley, Joseph L.; Gunaratne, Gemunu H.
2015-02-01
We analyze intraday fluctuations in several stock indices to investigate the underlying stochastic processes using techniques appropriate for processes with nonstationary increments. The five most actively traded stocks each contains two time intervals during the day where the variance of increments can be fit by power law scaling in time. The fluctuations in return within these intervals follow asymptotic bi-exponential distributions. The autocorrelation function for increments vanishes rapidly, but decays slowly for absolute and squared increments. Based on these results, we propose an intraday stochastic model with linear variable diffusion coefficient as a lowest order approximation to the real dynamics of financial markets, and to test the effects of time averaging techniques typically used for financial time series analysis. We find that our model replicates major stylized facts associated with empirical financial time series. We also find that ensemble averaging techniques can be used to identify the underlying dynamics correctly, whereas time averages fail in this task. Our work indicates that ensemble average approaches will yield new insight into the study of financial markets' dynamics. Our proposed model also provides new insight into the modeling of financial markets dynamics in microscopic time scales.
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
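The quantity shared by all the criteria mentioned above is the negative log-likelihood of the calibration residuals; with a full error covariance matrix it takes the standard Gaussian form (written generically here, with C standing for either the measurement-error covariance CE or the total-error covariance Cek):

\[ \mathrm{NLL} = \tfrac{1}{2}\Bigl[N \ln(2\pi) + \ln\lvert\mathbf{C}\rvert + \mathbf{r}^{\mathsf{T}}\,\mathbf{C}^{-1}\,\mathbf{r}\Bigr], \]

where r is the vector of N residuals. Using a diagonal measurement-error covariance ignores the temporal correlation of the model errors, which is what produces the unrealistic near-100% weights discussed above.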
Wheeler, Matthew W; Bailer, A John
2007-06-01
Model averaging (MA) has been proposed as a method of accounting for model uncertainty in benchmark dose (BMD) estimation. The technique has been used to average BMD dose estimates derived from dichotomous dose-response experiments, microbial dose-response experiments, as well as observational epidemiological studies. While MA is a promising tool for the risk assessor, a previous study suggested that the simple strategy of averaging individual models' BMD lower limits did not yield interval estimators that met nominal coverage levels in certain situations, and this performance was very sensitive to the underlying model space chosen. We present a different, more computationally intensive, approach in which the BMD is estimated using the average dose-response model and the corresponding benchmark dose lower bound (BMDL) is computed by bootstrapping. This method is illustrated with TiO2 dose-response rat lung cancer data, and then systematically studied through an extensive Monte Carlo simulation. The results of this study suggest that the MA-BMD, estimated using this technique, performs better, in terms of bias and coverage, than the previous MA methodology. Further, the MA-BMDL achieves nominal coverage in most cases, and is superior to picking the "best fitting model" when estimating the benchmark dose. Although these results show utility of MA for benchmark dose risk estimation, they continue to highlight the importance of choosing an adequate model space as well as proper model fit diagnostics.
NASA Astrophysics Data System (ADS)
Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.
2015-04-01
Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth by empirical crop coefficients to adapt evapotranspiration throughout the vegetation period. We investigate the importance of the model structural versus model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty among reference ET is far more important than model parametric uncertainty introduced by crop coefficients. These crop coefficients are used to estimate irrigation water requirement following the single crop coefficient approach. Using the reliability ensemble averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a certain threshold, e.g. an irrigation water limit due to water right of 400 mm, would be less frequently exceeded in case of the REA ensemble average (45%) in comparison to the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
NASA Astrophysics Data System (ADS)
Pollard, David; Chang, Won; Haran, Murali; Applegate, Patrick; DeConto, Robert
2016-05-01
A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ˜ 20 000 yr. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. The analyses provide sea-level-rise envelopes with well-defined parametric uncertainty bounds, but the simple averaging method only provides robust results with full-factorial parameter sampling in the large ensemble. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree well with the more advanced techniques. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds.
NASA Astrophysics Data System (ADS)
Nagpal, Shaina; Gupta, Amit
2017-08-01
A Free Space Optics (FSO) link exploits tremendous network capacity and can offer wireless communications similar to those through optical fibres. However, FSO links are extremely weather dependent, and the major impairments arise from adverse conditions such as fog and snow. In this paper, an FSO link is designed using an array of receivers. The performance of the link under very high attenuation due to fog and snow is analysed using the aperture averaging technique, and the effect of aperture averaging is further investigated by comparing systems with and without the technique. The performance of the proposed FSO link model is evaluated in terms of Q factor, bit error rate (BER) and eye diagram.
NASA Astrophysics Data System (ADS)
Braman, Kalen; Raman, Venkat
2011-11-01
A novel direct numerical simulation (DNS) based a posteriori technique has been developed to investigate scalar transport modeling error. The methodology is used to test Reynolds-averaged Navier-Stokes turbulent scalar flux models for compressible boundary layer flows. Time-averaged DNS velocity and turbulence fields provide the information necessary to evolve the time-averaged scalar transport equation without requiring the use of turbulence modeling. With this technique, passive dispersion of a scalar from a boundary layer surface in a supersonic flow is studied with scalar flux modeling error isolated from any flowfield modeling errors. Several different scalar flux models are used. It is seen that the simple gradient diffusion model overpredicts scalar dispersion, while anisotropic scalar flux models underpredict dispersion. Further, the use of more complex models does not necessarily guarantee an increase in predictive accuracy, indicating that key physics is missing from existing models. Using comparisons of both a priori and a posteriori scalar flux evaluations with DNS data, the main modeling shortcomings are identified. Results will be presented for different boundary layer conditions.
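For reference, the simple gradient diffusion closure referred to above models the turbulent scalar flux as proportional to the mean scalar gradient; in its common generic form (a standard closure statement, not a result specific to this study),

\[ \overline{u_i' c'} = -D_t\,\frac{\partial \overline{C}}{\partial x_i}, \qquad D_t = \frac{\nu_t}{Sc_t}, \]

where \(\nu_t\) is the eddy viscosity and \(Sc_t\) a turbulent Schmidt number; anisotropic models replace the scalar \(D_t\) with a tensor built from the Reynolds stresses.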
Post-Modeling Histogram Matching of Maps Produced Using Regression Trees
Andrew J. Lister; Tonya W. Lister
2006-01-01
Spatial predictive models often use statistical techniques that in some way rely on averaging of values. Estimates from linear modeling are known to be susceptible to truncation of variance when the independent (predictor) variables are measured with error. A straightforward post-processing technique (histogram matching) for attempting to mitigate this effect is...
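Histogram matching of this kind can be implemented as a quantile mapping from the model output distribution to a reference distribution; the sketch below is a minimal Python illustration of that idea with synthetic data, not the specific procedure in the paper.

    import numpy as np

    def histogram_match(predicted, reference, n_quantiles=256):
        """Map predicted values onto the reference distribution via quantile mapping,
        restoring variance that averaging-based predictions tend to compress."""
        q = np.linspace(0.0, 1.0, n_quantiles)
        pred_q = np.quantile(predicted, q)     # quantiles of the model output
        ref_q = np.quantile(reference, q)      # quantiles of the reference data
        return np.interp(predicted, pred_q, ref_q)

    # Hypothetical example: predictions with truncated variance vs. field plots
    rng = np.random.default_rng(0)
    reference = rng.normal(50.0, 12.0, 1000)   # e.g. observed plot values
    predicted = rng.normal(50.0, 6.0, 1000)    # smoothed model output
    matched = histogram_match(predicted, reference)
    print(predicted.std(), matched.std())      # matched spread approaches the reference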
NASA Astrophysics Data System (ADS)
Pinem, M.; Fauzi, R.
2018-02-01
One technique for ensuring continuity of wireless communication services and keeping a smooth transition on mobile communication networks is the soft handover technique. In the Soft Handover (SHO) technique, the addition and removal of Base Stations from the active set is determined by initiation triggers, one of which is based on received signal strength. In this paper we observe the influence of the parameters of large-scale radio propagation models on the performance of mobile communications. The observation parameters characterizing the performance of the specified mobile system are the Drop Call rate, the Radio Link Degradation Rate and the Average Size of the Active Set (AS). The simulation results show that increasing the Base Station (BS) and Mobile Station (MS) antenna heights improves the received signal power level, which improves radio link quality, increases the average size of the Active Set and reduces the average Drop Call rate. It was also found that Hata's propagation model contributed significantly to improvements in system performance parameters compared to Okumura's propagation model and Lee's propagation model.
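The reported sensitivity to antenna heights is consistent with the structure of the Okumura-Hata model; in its standard urban form as quoted in the general literature (assumed here for illustration, with the usual frequency and distance validity ranges),

\[ L_{50} = 69.55 + 26.16\log_{10} f - 13.82\log_{10} h_b - a(h_m) + \bigl(44.9 - 6.55\log_{10} h_b\bigr)\log_{10} d, \]

with f in MHz, base-station antenna height \(h_b\) and mobile antenna height \(h_m\) in metres, distance d in km, and \(a(h_m)\) a mobile-antenna correction term. Raising either antenna lowers the predicted path loss, which raises the received signal level used by the soft handover initiation triggers.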
Random Process Simulation for stochastic fatigue analysis. Ph.D. Thesis - Rice Univ., Houston, Tex.
NASA Technical Reports Server (NTRS)
Larsen, Curtis E.
1988-01-01
A simulation technique is described which directly synthesizes the extrema of a random process and is more efficient than the Gaussian simulation method. Such a technique is particularly useful in stochastic fatigue analysis because the required stress range moment E[R^m] is a function only of the extrema of the random stress process. The family of autoregressive moving average (ARMA) models is reviewed and an autoregressive model is presented for modeling the extrema of any random process which has a unimodal power spectral density (psd). The proposed autoregressive technique is found to produce rainflow stress range moments which compare favorably with those computed by the Gaussian technique and to average 11.7 times faster than the Gaussian technique. The autoregressive technique is also adapted for processes having bimodal psd's. The adaptation involves using two autoregressive processes to simulate the extrema due to each mode and the superposition of these two extrema sequences. The proposed autoregressive superposition technique is 9 to 13 times faster than the Gaussian technique and produces comparable values for E[R^m] for bimodal psd's having the frequency of one mode at least 2.5 times that of the other mode.
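To illustrate the kind of extrema sequence the autoregressive technique synthesizes, the sketch below simulates a simple AR(2) process and extracts its local maxima and minima; it is a generic illustration with arbitrary coefficients, not the model fitted to a particular unimodal psd in the thesis.

    import numpy as np

    def simulate_ar2(phi1, phi2, n, sigma=1.0, seed=0):
        """Simulate an AR(2) process x[t] = phi1*x[t-1] + phi2*x[t-2] + e[t]."""
        rng = np.random.default_rng(seed)
        e = rng.normal(0.0, sigma, n)
        x = np.zeros(n)
        for t in range(2, n):
            x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + e[t]
        return x

    def extrema(x):
        """Return the sequence of local maxima and minima (the input to rainflow counting)."""
        dx = np.diff(x)
        turning = np.where(dx[:-1] * dx[1:] < 0)[0] + 1
        return x[turning]

    x = simulate_ar2(1.6, -0.8, 5000)
    peaks_and_valleys = extrema(x)
    ranges = np.abs(np.diff(peaks_and_valleys))
    print(np.mean(ranges ** 3))   # crude estimate of a stress-range moment E[R^m] with m = 3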
Accounting for uncertainty in health economic decision models by using model averaging.
Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D
2009-04-01
Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment.
Climatic Models Ensemble-based Mid-21st Century Runoff Projections: A Bayesian Framework
NASA Astrophysics Data System (ADS)
Achieng, K. O.; Zhu, J.
2017-12-01
A number of North American Regional Climate Change Assessment Program (NARCCAP) climatic models have been used to project surface runoff in the mid-21st century. Statistical model selection techniques are often used to select the model that best fits the data, but different selection techniques often lead to different conclusions. In this study, ten models are averaged in a Bayesian paradigm to project runoff. Bayesian Model Averaging (BMA) is used to project runoff and to identify the effect of model uncertainty on future runoff projections. Baseflow separation - a two-parameter recursive digital filter, also called the Eckhardt filter - is used to separate USGS streamflow (total runoff) into two components: baseflow and surface runoff. We use this surface runoff as the a priori runoff when conducting BMA of runoff simulated from the ten RCM models. The primary objective of this study is to evaluate, in a Bayesian framework, how well RCM multi-model ensembles simulate surface runoff. Specifically, we investigate and discuss the following questions: How well does a ten-member RCM ensemble jointly simulate surface runoff by averaging over all the models using BMA, given the a priori surface runoff? What are the effects of model uncertainty on surface runoff simulation?
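The baseflow separation step can be illustrated with the standard two-parameter recursive (Eckhardt) digital filter; the sketch below uses common literature default parameter values, assumed here for illustration rather than taken from the study.

    import numpy as np

    def eckhardt_baseflow(streamflow, alpha=0.98, bfi_max=0.80):
        """Two-parameter recursive digital filter (Eckhardt) for baseflow separation.

        alpha   : recession constant
        bfi_max : maximum baseflow index the filter can produce
        Returns (baseflow, surface_runoff); baseflow is capped at total streamflow.
        """
        q = np.asarray(streamflow, dtype=float)
        b = np.zeros_like(q)
        b[0] = q[0] * bfi_max
        for t in range(1, q.size):
            b[t] = ((1.0 - bfi_max) * alpha * b[t - 1]
                    + (1.0 - alpha) * bfi_max * q[t]) / (1.0 - alpha * bfi_max)
            b[t] = min(b[t], q[t])        # baseflow cannot exceed total streamflow
        return b, q - b

    # Hypothetical daily streamflow record
    flow = np.array([5.0, 5.2, 9.0, 14.0, 11.0, 8.0, 6.5, 5.8, 5.4, 5.2])
    baseflow, surface_runoff = eckhardt_baseflow(flow)
    print(surface_runoff)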
NASA Astrophysics Data System (ADS)
Beger, Richard D.; Buzatu, Dan A.; Wilkes, Jon G.
2002-10-01
A three-dimensional quantitative spectrometric data-activity relationship (3D-QSDAR) modeling technique which uses NMR spectral and structural information that is combined in a 3D-connectivity matrix has been developed. A 3D-connectivity matrix was built by displaying all possible assigned carbon NMR chemical shifts, carbon-to-carbon connections, and distances between the carbons. Two-dimensional 13C-13C COSY and 2D slices from the distance dimension of the 3D-connectivity matrix were used to produce a relationship among the 2D spectral patterns for polychlorinated dibenzofurans, dibenzodioxins, and biphenyls (PCDFs, PCDDs, and PCBs respectively) binding to the aryl hydrocarbon receptor (AhR). We refer to this technique as comparative structural connectivity spectral analysis (CoSCoSA) modeling. All CoSCoSA models were developed using forward multiple linear regression analysis of the predicted 13C NMR structure-connectivity spectral bins. A CoSCoSA model for 26 PCDFs had an explained variance (r^2) of 0.93 and an average leave-four-out cross-validated variance (q_4^2) of 0.89. A CoSCoSA model for 14 PCDDs produced an r^2 of 0.90 and an average leave-two-out cross-validated variance (q_2^2) of 0.79. One CoSCoSA model for 12 PCBs gave an r^2 of 0.91 and an average q_2^2 of 0.80. Another CoSCoSA model for all 52 compounds had an r^2 of 0.85 and an average q_4^2 of 0.52. Major benefits of CoSCoSA modeling include ease of development since the technique does not use molecular docking routines.
Cycle-averaged dynamics of a periodically driven, closed-loop circulation model
NASA Technical Reports Server (NTRS)
Heldt, T.; Chang, J. L.; Chen, J. J. S.; Verghese, G. C.; Mark, R. G.
2005-01-01
Time-varying elastance models have been used extensively in the past to simulate the pulsatile nature of cardiovascular waveforms. Frequently, however, one is interested in dynamics that occur over longer time scales, in which case a detailed simulation of each cardiac contraction becomes computationally burdensome. In this paper, we apply circuit-averaging techniques to a periodically driven, closed-loop, three-compartment recirculation model. The resultant cycle-averaged model is linear and time invariant, and greatly reduces the computational burden. It is also amenable to systematic order reduction methods that lead to further efficiencies. Despite its simplicity, the averaged model captures the dynamics relevant to the representation of a range of cardiovascular reflex mechanisms.
Accounting for uncertainty in health economic decision models by using model averaging
Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D
2009-01-01
Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment. PMID:19381329
Forecasting coconut production in the Philippines with ARIMA model
NASA Astrophysics Data System (ADS)
Lim, Cristina Teresa
2015-02-01
The study aimed to depict the situation of the coconut industry in the Philippines in future years by applying the Autoregressive Integrated Moving Average (ARIMA) method. Data on coconut production, one of the major industrial crops of the country, for the period 1990 to 2012 were analyzed using time-series methods. Autocorrelation (ACF) and partial autocorrelation functions (PACF) were calculated for the data. An appropriate Box-Jenkins autoregressive moving average model was fitted. Validity of the model was tested using standard statistical techniques. The forecasting power of the autoregressive moving average (ARMA) model was then used to forecast coconut production for the next eight years.
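A minimal version of that Box-Jenkins workflow can be sketched with the statsmodels Python library (assuming it is available); the production figures and the order (1, 1, 1) below are placeholders, not the data or the model identified in the study.

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    # Hypothetical annual coconut production figures for 1990-2012 (million metric tons);
    # placeholders standing in for the study's data.
    production = np.array([11.9, 11.3, 11.4, 11.3, 11.2, 12.1, 11.9, 12.8, 12.4, 11.6,
                           12.9, 13.2, 14.1, 14.3, 14.4, 14.8, 14.9, 14.8, 15.3, 15.7,
                           15.5, 15.2, 15.9])

    # Identification of (p, d, q) would normally follow from the ACF/PACF plots;
    # the order (1, 1, 1) below is only a placeholder for this sketch.
    model = ARIMA(production, order=(1, 1, 1))
    result = model.fit()
    print(result.summary())
    print(result.forecast(steps=8))   # projection for the next eight years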
ERIC Educational Resources Information Center
Kobrin, Jennifer L.; Sinharay, Sandip; Haberman, Shelby J.; Chajewski, Michael
2011-01-01
This study examined the adequacy of a multiple linear regression model for predicting first-year college grade point average (FYGPA) using SAT[R] scores and high school grade point average (HSGPA). A variety of techniques, both graphical and statistical, were used to examine if it is possible to improve on the linear regression model. The results…
Modeling particle number concentrations along Interstate 10 in El Paso, Texas
Olvera, Hector A.; Jimenez, Omar; Provencio-Vasquez, Elias
2014-01-01
Annual average daily particle number concentrations around a highway were estimated with an atmospheric dispersion model and a land use regression model. The dispersion model was used to estimate particle concentrations along Interstate 10 at 98 locations within El Paso, Texas. This model employed annual averaged wind speed and annual average daily traffic counts as inputs. A land use regression model with vehicle kilometers traveled as the predictor variable was used to estimate local background concentrations away from the highway to adjust the near-highway concentration estimates. Estimated particle number concentrations ranged between 9.8 × 10^3 particles/cc and 1.3 × 10^5 particles/cc, and averaged 2.5 × 10^4 particles/cc (SE 421.0). Estimates were compared against values measured at seven sites located along I-10 throughout the region. The average fractional error was 6% and ranged between -1% and -13% across sites. The largest bias of -13% was observed at a semi-rural site where traffic was lowest. The average bias amongst urban sites was 5%. The accuracy of the estimates depended primarily on the emission factor and the adjustment to local background conditions. An emission factor of 1.63 × 10^14 particles/veh-km was based on a value proposed in the literature and adjusted with local measurements. The integration of the two modeling techniques ensured that the particle number concentration estimates captured the impact of traffic along both the highway and arterial roadways. The performance and economical aspects of the two modeling techniques used in this study show that producing particle concentration surfaces along major roadways would be feasible in urban regions where traffic and meteorological data are readily available. PMID:25313294
NASA Astrophysics Data System (ADS)
Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio
The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and numerical analyses based on the finite-difference time-domain (FDTD) method. We consider the anatomical European human phantoms and plane-wave exposure in the 2 GHz mobile phone frequency band. The measured whole-body averaged SAR results obtained by the proposed method are compared with the results of the FDTD analyses.
An automatic step adjustment method for average power analysis technique used in fiber amplifiers
NASA Astrophysics Data System (ADS)
Liu, Xue-Ming
2006-04-01
An automatic step adjustment (ASA) method for the average power analysis (APA) technique used in fiber amplifiers is proposed in this paper for the first time. In comparison with the traditional APA technique, the proposed method offers two unique merits, a higher-order accuracy and an ASA mechanism, so that it can significantly shorten the computing time and improve the solution accuracy. A test example demonstrates that, compared to the APA technique, the proposed method increases the computing speed by more than a hundredfold for the same error level. By computing the model equations of erbium-doped fiber amplifiers, the numerical results show that our method can improve the solution accuracy by over two orders of magnitude at the same number of amplifying sections. The proposed method has the capacity to rapidly and effectively compute the model equations of fiber Raman amplifiers and semiconductor lasers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alessi, David A.; Rosso, Paul A.; Nguyen, Hoang T.; ...
2016-12-26
Laser energy absorption and subsequent heat removal from diffraction gratings in chirped pulse compressors poses a significant challenge in high repetition rate, high peak power laser development. In order to understand the average power limitations, we have modeled the time-resolved thermo-mechanical properties of current and advanced diffraction gratings. We have also developed and demonstrated a technique of actively cooling Petawatt scale, gold compressor gratings to operate at 600W of average power - a 15x increase over the highest average power petawatt laser currently in operation. As a result, combining this technique with low absorption multilayer dielectric gratings developed in our group would enable pulse compressors for petawatt peak power lasers operating at average powers well above 40kW.
Plasma properties in electron-bombardment ion thrusters
NASA Technical Reports Server (NTRS)
Matossian, J. N.; Beattie, J. R.
1987-01-01
The paper describes a technique for computing volume-averaged plasma properties within electron-bombardment ion thrusters, using spatially varying Langmuir-probe measurements. Average values of the electron densities are defined by integrating the spatially varying Maxwellian and primary electron densities over the ionization volume, and then dividing by the volume. Plasma properties obtained in the 30-cm-diameter J-series and ring-cusp thrusters are analyzed by the volume-averaging technique. The superior performance exhibited by the ring-cusp thruster is correlated with a higher average Maxwellian electron temperature. The ring-cusp thruster maintains the same fraction of primary electrons as does the J-series thruster, but at a much lower ion production cost. The volume-averaged predictions for both thrusters are compared with those of a detailed thruster performance model.
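The volume-averaging step described above amounts to integrating the probe-derived electron density over the ionization volume and dividing by that volume. The sketch below shows the idea for an axisymmetric chamber using a trapezoidal rule, with a made-up radial profile standing in for the spatially varying Langmuir-probe data.

    import numpy as np

    def volume_average_axisymmetric(r, n_r):
        """Volume-average a radial density profile n(r) over a cylinder of radius r[-1]:
        <n> = (integral of n(r) * 2*pi*r dr) / (pi * R^2), per unit axial length.
        """
        numerator = np.trapz(n_r * 2.0 * np.pi * r, r)
        volume = np.pi * r[-1] ** 2
        return numerator / volume

    # Hypothetical Maxwellian-electron density profile from spatially varying probe data
    r = np.linspace(0.0, 0.15, 50)                 # radius, m (30-cm-diameter chamber)
    n_r = 1.0e17 * (1.0 - (r / 0.15) ** 2)         # density, m^-3, peaked on axis
    print(volume_average_axisymmetric(r, n_r))     # about 5e16 m^-3 for this profile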
NASA Technical Reports Server (NTRS)
Dalling, D. K.; Bailey, B. K.; Pugmire, R. J.
1984-01-01
A proton and carbon-13 nuclear magnetic resonance (NMR) study was conducted of Ashland shale oil refinery products, experimental referee broadened-specification jet fuels, and of related isoprenoid model compounds. Supercritical fluid chromatography techniques using carbon dioxide were developed on a preparative scale, so that samples could be quantitatively separated into saturates and aromatic fractions for study by NMR. An optimized average parameter treatment was developed, and the NMR results were analyzed in terms of the resulting average parameters; formulation of model mixtures was demonstrated. Application of novel spectroscopic techniques to fuel samples was investigated.
Garner, Alan A; van den Berg, Pieter L
2017-10-16
New South Wales (NSW), Australia has a network of multirole retrieval physician staffed helicopter emergency medical services (HEMS) with seven bases servicing a jurisdiction with population concentrated along the eastern seaboard. The aim of this study was to estimate optimal HEMS base locations within NSW using advanced mathematical modelling techniques. We used high resolution census population data for NSW from 2011 which divides the state into areas containing 200-800 people. Optimal HEMS base locations were estimated using the maximal covering location problem facility location optimization model and the average response time model, exploring the number of bases needed to cover various fractions of the population for a 45 min response time threshold or minimizing the overall average response time to all persons, both in green field scenarios and conditioning on the current base structure. We also developed a hybrid mathematical model where average response time was optimised based on minimum population coverage thresholds. Seven bases could cover 98% of the population within 45 min when optimised for coverage or reach the entire population of the state within an average of 21 min if optimised for response time. Given the existing bases, adding two bases could either increase the 45 min coverage from 91% to 97% or decrease the average response time from 21 min to 19 min. Adding a single specialist prehospital rapid response HEMS to the area of greatest population concentration decreased the average statewide response time by 4 min. The optimum seven-base hybrid model, which covered 97.75% of the population within 45 min and reached all of the population in an average response time of 18 min, included the rapid response HEMS model. HEMS base locations can be optimised based on either the percentage of the population covered or the average response time to the entire population. We have also demonstrated a hybrid technique that optimizes response time for a given number of bases and a minimum defined threshold of population coverage. Addition of specialized rapid response HEMS services to a system of multirole retrieval HEMS may reduce overall average response times by improving access in large urban areas.
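The maximal covering location problem is normally solved exactly with an integer-programming formulation; as a simplified stand-in, the greedy sketch below picks bases one at a time to maximize the additional population covered within the response-time threshold. It is a heuristic illustration with made-up data, not the optimization model used in the study.

    import numpy as np

    def greedy_max_coverage(covers, population, n_bases):
        """Greedy heuristic for the maximal coverage idea.

        covers[j, i] is True if candidate base j reaches census area i within the
        response-time threshold; population[i] is the population of area i.
        """
        chosen = []
        covered = np.zeros(covers.shape[1], dtype=bool)
        for _ in range(n_bases):
            gains = [population[covers[j] & ~covered].sum() for j in range(covers.shape[0])]
            best = int(np.argmax(gains))
            chosen.append(best)
            covered |= covers[best]
        return chosen, population[covered].sum() / population.sum()

    # Hypothetical example: 5 candidate base sites, 8 census areas
    rng = np.random.default_rng(1)
    covers = rng.random((5, 8)) < 0.4
    population = rng.integers(200, 800, size=8)
    bases, coverage_fraction = greedy_max_coverage(covers, population, n_bases=2)
    print(bases, round(float(coverage_fraction), 3))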
Testing averaged cosmology with type Ia supernovae and BAO data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos, B.; Alcaniz, J.S.; Coley, A.A.
An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.
NASA Astrophysics Data System (ADS)
Pérez, B.; Brower, R.; Beckers, J.; Paradis, D.; Balseiro, C.; Lyons, K.; Cure, M.; Sotillo, M. G.; Hacket, B.; Verlaan, M.; Alvarez Fanjul, E.
2011-04-01
ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecast that makes use of existing storm surge or circulation models operational in Europe today, as well as near-real time tide gauge data in the region, with the following main goals: - providing easy access to existing forecasts, as well as to their performance and model validation, by means of an adequate visualization tool - generation of better forecasts of sea level, including confidence intervals, by means of the Bayesian Model Average technique (BMA) The system was developed and implemented within the ECOOP (C.No. 036355) European Project for the NOOS and the IBIROOS regions, based on the MATROOS visualization tool developed by Deltares. Both systems are today operational at Deltares and Puertos del Estado respectively. The Bayesian Model Average technique generates an overall forecast probability density function (PDF) by making a weighted average of the individual forecasts' PDFs; the weights represent the probability that a model will give the correct forecast PDF and are determined and updated operationally based on the performance of the models during a recent training period. This implies the technique needs the availability of sea level data from tide gauges in near-real time. Results of validation of the different models and of the BMA implementation for the main harbours will be presented for the IBIROOS and Western Mediterranean regions, where this kind of activity is performed for the first time. The work has proved to be useful to detect problems in some of the circulation models not previously well calibrated with sea level data, to identify the differences between baroclinic and barotropic models for sea level applications and to confirm the general improvement of the BMA forecasts.
NASA Astrophysics Data System (ADS)
Pérez, B.; Brouwer, R.; Beckers, J.; Paradis, D.; Balseiro, C.; Lyons, K.; Cure, M.; Sotillo, M. G.; Hackett, B.; Verlaan, M.; Fanjul, E. A.
2012-03-01
ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecast that makes use of several storm surge or circulation models and near-real time tide gauge data in the region, with the following main goals: 1. providing easy access to existing forecasts, as well as to its performance and model validation, by means of an adequate visualization tool; 2. generation of better forecasts of sea level, including confidence intervals, by means of the Bayesian Model Average technique (BMA). The Bayesian Model Average technique generates an overall forecast probability density function (PDF) by making a weighted average of the individual forecasts PDF's; the weights represent the Bayesian likelihood that a model will give the correct forecast and are continuously updated based on the performance of the models during a recent training period. This implies the technique needs the availability of sea level data from tide gauges in near-real time. The system was implemented for the European Atlantic facade (IBIROOS region) and Western Mediterranean coast based on the MATROOS visualization tool developed by Deltares. Results of validation of the different models and BMA implementation for the main harbours are presented for these regions where this kind of activity is performed for the first time. The system is currently operational at Puertos del Estado and has proved to be useful in the detection of calibration problems in some of the circulation models, in the identification of the systematic differences between baroclinic and barotropic models for sea level forecasts and to demonstrate the feasibility of providing an overall probabilistic forecast, based on the BMA method.
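The weighted average of forecast densities that the BMA step produces can be written compactly in the generic form (notation assumed here, not quoted from the system documentation):

\[ p\bigl(y \mid f_1, \dots, f_K\bigr) = \sum_{k=1}^{K} w_k\, g_k\bigl(y \mid f_k\bigr), \qquad \sum_{k=1}^{K} w_k = 1, \]

where \(f_k\) is the sea level forecast of model k, \(g_k\) its (typically Gaussian) conditional error density, and the weights \(w_k\) are re-estimated from recent tide gauge observations during the training period, for example by maximum likelihood.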
Measured values of coal mine stopping resistance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oswald, N.; Prosser, B.; Ruckman, R.
2008-12-15
As coal mines become larger, the number of stoppings in the ventilation system increases. Each stopping represents a potential leakage path which must be adequately represented in the ventilation model. Stopping resistance can be calculated using two methods: the USBM method, used to determine the resistance of a single stopping, and the MVS technique, in which an average resistance is calculated for multiple stoppings. From MVS data collected during ventilation surveys of different subsurface coal mines, average resistances were determined for stoppings in poor, average, good, and excellent conditions, for both concrete-block and Kennedy stoppings. Using average stopping resistances measured and calculated with the MVS method provides a tool for constructing more accurate and useful ventilation models. 3 refs., 3 figs.
NASA Technical Reports Server (NTRS)
Harrison, Phil; LaVerde, Bruce; Teague, David
2009-01-01
Although applications for Statistical Energy Analysis (SEA) techniques are more widely used in the aerospace industry today, opportunities to anchor the response predictions using measured data from a flight-like launch vehicle structure are still quite valuable. Response and excitation data from a ground acoustic test at the Marshall Space Flight Center permitted the authors to compare and evaluate several modeling techniques available in the SEA module of the commercial code VA One. This paper provides an example of vibration response estimates developed using different modeling approaches to both approximate and bound the response of a flight-like vehicle panel. Since both vibration response and acoustic levels near the panel were available from the ground test, the evaluation provided an opportunity to learn how well the different modeling options can match band-averaged spectra developed from the test data. Additional work was performed to understand the spatial averaging of the measurements across the panel from measured data. Finally an evaluation/comparison of two conversion approaches from the statistical average response results that are output from an SEA analysis to a more useful envelope of response spectra appropriate to specify design and test vibration levels for a new vehicle.
Age-dependence of the average and equivalent refractive indices of the crystalline lens
Charman, W. Neil; Atchison, David A.
2013-01-01
Lens average and equivalent refractive indices are required for purposes such as lens thickness estimation and optical modeling. We modeled the refractive index gradient as a power function of the normalized distance from lens center. Average index along the lens axis was estimated by integration. Equivalent index was estimated by raytracing through a model eye to establish ocular refraction, and then backward raytracing to determine the constant refractive index yielding the same refraction. Assuming center and edge indices remained constant with age, at 1.415 and 1.37 respectively, average axial refractive index increased (1.408 to 1.411) and equivalent index decreased (1.425 to 1.420) with age increase from 20 to 70 years. These values agree well with experimental estimates based on different techniques, although the latter show considerable scatter. The simple model of index gradient gives reasonable estimates of average and equivalent lens indices, although refinements in modeling and measurements are required. PMID:24466474
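One way to see how the average follows from the gradient model: if the axial index is written as a power function of the normalized distance x from the lens centre (a generic form consistent with the description above, not necessarily the exact parameterization used in the paper),

\[ n(x) = n_e + (n_c - n_e)\,(1 - x^{p}), \qquad 0 \le x \le 1, \]

then integrating along the axis gives

\[ \bar{n} = n_e + (n_c - n_e)\,\frac{p}{p+1}, \]

so with the centre and edge indices fixed at \(n_c = 1.415\) and \(n_e = 1.37\), the reported increase in the average axial index from about 1.408 to 1.411 corresponds to the exponent p increasing with age, i.e. the index plateau broadening near the centre.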
Trommer, J.T.; Loper, J.E.; Hammett, K.M.
1996-01-01
Several traditional techniques have been used for estimating stormwater runoff from ungaged watersheds. Applying these techniques to watersheds in west-central Florida requires that some of the empirical relationships be extrapolated beyond tested ranges. As a result, there is uncertainty as to the accuracy of these estimates. Sixty-six storms occurring in 15 west-central Florida watersheds were initially modeled using the Rational Method, the U.S. Geological Survey Regional Regression Equations, the Natural Resources Conservation Service TR-20 model, the U.S. Army Corps of Engineers Hydrologic Engineering Center-1 model, and the Environmental Protection Agency Storm Water Management Model. The techniques were applied according to the guidelines specified in the user manuals or standard engineering textbooks as though no field data were available and the selection of input parameters was not influenced by observed data. Computed estimates were compared with observed runoff to evaluate the accuracy of the techniques. One watershed was eliminated from further evaluation when it was determined that the area contributing runoff to the stream varies with the amount and intensity of rainfall. Therefore, further evaluation and modification of the input parameters were made for only 62 storms in 14 watersheds. Runoff ranged from 1.4 to 99.3 percent of rainfall. The average runoff for all watersheds included in this study was about 36 percent of rainfall. The average runoff for the urban, natural, and mixed land-use watersheds was about 41, 27, and 29 percent, respectively. Initial estimates of peak discharge using the rational method produced average watershed errors that ranged from an underestimation of 50.4 percent to an overestimation of 767 percent. The coefficient of runoff ranged from 0.20 to 0.60. Calibration of the technique produced average errors that ranged from an underestimation of 3.3 percent to an overestimation of 1.5 percent. The average calibrated coefficient of runoff for each watershed ranged from 0.02 to 0.72. The average values of the coefficient of runoff necessary to calibrate the urban, natural, and mixed land-use watersheds were 0.39, 0.16, and 0.08, respectively. The U.S. Geological Survey regional regression equations for determining peak discharge produced errors that ranged from an underestimation of 87.3 percent to an overestimation of 1,140 percent. The regression equations for determining runoff volume produced errors that ranged from an underestimation of 95.6 percent to an overestimation of 324 percent. Regression equations developed from data used for this study produced errors that ranged between an underestimation of 82.8 percent and an overestimation of 328 percent for peak discharge, and from an underestimation of 71.2 percent to an overestimation of 241 percent for runoff volume. Use of the equations developed for west-central Florida streams produced average errors for each type of watershed that were lower than errors associated with use of the U.S. Geological Survey equations. Initial estimates of peak discharges and runoff volumes using the Natural Resources Conservation Service TR-20 model produced average errors of 44.6 and 42.7 percent, respectively, for all the watersheds. Curve numbers and times of concentration were adjusted to match estimated and observed peak discharges and runoff volumes. The average change in the curve number for all the watersheds was a decrease of 2.8 percent. The average change in the time of concentration was an increase of 59.2 percent. The shape of the input dimensionless unit hydrograph also had to be adjusted to match the shape and peak time of the estimated and observed flood hydrographs. Peak rate factors for the modified input dimensionless unit hydrographs ranged from 162 to 454. The mean errors for peak discharges and runoff volumes were reduced to 18.9 and 19.5 percent, respectively, using the average calibrated input parameters for each watershed.
Computational problems in autoregressive moving average (ARMA) models
NASA Technical Reports Server (NTRS)
Agarwal, G. C.; Goodarzi, S. M.; Oneill, W. D.; Gottlieb, G. L.
1981-01-01
The choice of the sampling interval and the selection of the order of the model in time series analysis are considered. Band limited (up to 15 Hz) random torque perturbations are applied to the human ankle joint. The applied torque input, the angular rotation output, and the electromyographic activity using surface electrodes from the extensor and flexor muscles of the ankle joint are recorded. Autoregressive moving average models are developed. A parameter constraining technique is applied to develop more reliable models. The asymptotic behavior of the system must be taken into account during parameter optimization to develop predictive models.
Estimating Natural Recharge in a Desert Environment Facing Increasing Ground-Water Demands
NASA Astrophysics Data System (ADS)
Nishikawa, T.; Izbicki, J. A.; Hevesi, J. A.; Martin, P.
2004-12-01
Ground water historically has been the sole source of water supply for the community of Joshua Tree in the Joshua Tree ground-water subbasin of the Morongo ground-water basin in the southern Mojave Desert. Joshua Basin Water District (JBWD) supplies water to the community from the underlying Joshua Tree ground-water subbasin, and ground-water withdrawals averaging about 960 acre-ft/yr have resulted in as much as 35 ft of drawdown. As growth continues in the desert, ground-water resources may need to be supplemented using imported water. To help meet future demands, JBWD plans to construct production wells in the adjacent Copper Mountain ground-water subbasin. To manage the ground-water resources and to identify future mitigating measures, a thorough understanding of the ground-water system is needed. To this end, field and numerical techniques were applied to determine the distribution and quantity of natural recharge. Field techniques included the installation of instrumented boreholes in selected washes and at a nearby control site. Numerical techniques included the use of a distributed-parameter watershed model and a ground-water flow model. The results from the field techniques indicated that as much as 70 acre-ft/yr of water infiltrated downward through the two principal washes during the study period (2001-3). The results from the watershed model indicated that the average annual recharge in the ground-water subbasins is about 160 acre-ft/yr. The results from the calibrated ground-water flow model indicated that the average annual recharge for the same area is about 125 acre-ft/yr. Although the field and numerical techniques were applied to different scales (local vs. large), all indicate that natural recharge in the Joshua Tree area is very limited; therefore, careful management of the limited ground-water resources is needed. Moreover, the calibrated model can now be used to estimate the effects of different water-management strategies on the ground-water subbasins.
Some nonlinear damping models in flexible structures
NASA Technical Reports Server (NTRS)
Balakrishnan, A. V.
1988-01-01
A class of nonlinear damping models is introduced with application to flexible flight structures characterized by low damping. Approximate solutions of engineering interest are obtained for the model using the classical averaging technique of Krylov and Bogoliubov. The results should be considered preliminary pending further investigation.
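For context, the classical Krylov-Bogoliubov first approximation for a weakly nonlinear oscillator replaces the fast oscillation by a slowly varying amplitude and phase. The generic form below is the textbook statement, not the specific damping model introduced in the report:

```latex
\ddot{x} + \omega^{2} x = \varepsilon f(x,\dot{x}), \qquad
x(t) \approx a(t)\cos\psi, \quad \psi = \omega t + \varphi(t),
```

```latex
\dot{a} = -\frac{\varepsilon}{2\pi\omega}\int_{0}^{2\pi} f\!\big(a\cos\psi,\,-a\omega\sin\psi\big)\,\sin\psi\,d\psi,
\qquad
\dot{\varphi} = -\frac{\varepsilon}{2\pi\omega\,a}\int_{0}^{2\pi} f\!\big(a\cos\psi,\,-a\omega\sin\psi\big)\,\cos\psi\,d\psi .
```

The slow equations for the amplitude and phase are obtained by averaging the exact variation-of-parameters equations over one period of the phase.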
An Optimization of Inventory Demand Forecasting in University Healthcare Centre
NASA Astrophysics Data System (ADS)
Bon, A. T.; Ng, T. K.
2017-01-01
The healthcare industry has become an important field as it concerns one's health. With that, forecasting demand for health services is an important step in managerial decision making for all healthcare organizations. Hence, a case study was conducted in the University Health Centre to collect historical demand data of Panadol 650 mg for 68 months, from January 2009 until August 2014. The aim of the research is to optimize the overall inventory demand through forecasting techniques. Quantitative, or time series, forecasting models were used in the case study to forecast future data as a function of past data. The data pattern must be identified before the forecasting techniques are applied; the data exhibit a trend, so ten forecasting techniques were applied using Risk Simulator software. Lastly, the best forecasting technique is identified as the one with the least forecasting error. The ten forecasting techniques are single moving average, single exponential smoothing, double moving average, double exponential smoothing, regression, Holt-Winter's additive, seasonal additive, Holt-Winter's multiplicative, seasonal multiplicative, and autoregressive integrated moving average (ARIMA). According to the forecasting accuracy measurement, the best forecasting technique is regression analysis.
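As an illustration of the selection step, choosing the technique with the least forecasting error, the sketch below compares two of the listed techniques (single moving average and single exponential smoothing) by mean absolute error on a synthetic trending series; the data and parameter values are placeholders, not the health-centre demand series.

```python
import numpy as np

rng = np.random.default_rng(1)
# Placeholder monthly demand with an upward trend (68 points, like the study's horizon).
demand = 100 + 2.0 * np.arange(68) + rng.normal(0, 8, 68)

def moving_average_forecast(y, window=3):
    # One-step-ahead forecast: average of the previous `window` observations.
    return np.array([y[t - window:t].mean() for t in range(window, len(y))]), window

def exp_smoothing_forecast(y, alpha=0.3):
    # One-step-ahead single exponential smoothing.
    level = y[0]
    preds = []
    for t in range(1, len(y)):
        preds.append(level)
        level = alpha * y[t] + (1 - alpha) * level
    return np.array(preds), 1

def mae(y, preds, start):
    return np.mean(np.abs(y[start:] - preds))

for name, (preds, start) in {
    "single moving average": moving_average_forecast(demand),
    "single exponential smoothing": exp_smoothing_forecast(demand),
}.items():
    print(f"{name}: MAE = {mae(demand, preds, start):.2f}")
```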
2018-01-01
This paper measures the adhesion/cohesion force among asphalt molecules at the nanoscale using atomic force microscopy (AFM) and models the moisture damage by applying state-of-the-art computational intelligence (CI) techniques (e.g., an artificial neural network (ANN), support vector regression (SVR), and an adaptive neuro-fuzzy inference system (ANFIS)). Various combinations of lime and chemicals as well as dry and wet environments are used to produce different asphalt samples. The parameters that were varied to generate different asphalt samples and measure the corresponding adhesion/cohesion forces are the percentage of antistripping agents (e.g., lime and Unichem), AFM tip K values, and AFM tip types. The CI methods are trained to model the adhesion/cohesion forces given the variation in values of the above parameters. To achieve enhanced performance, statistical methods such as averaging, weighted averaging, and regression of the outputs generated by the CI techniques are used. The experimental results show that, of the three individual CI methods, ANN can model moisture damage to lime- and chemically modified asphalt better than the other two CI techniques for both wet and dry conditions. Moreover, the ensemble of CI techniques combined through these statistical measures provides better accuracy than any of the individual CI techniques. PMID:29849551
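The ensemble step described above, combining the individual CI predictions by simple averaging, weighted averaging, and regression over the model outputs, can be sketched as follows; the prediction arrays and the inverse-error weighting scheme are illustrative placeholders rather than values or choices from the study.

```python
import numpy as np
from numpy.linalg import lstsq

# Placeholder predictions of adhesion force from three trained models (e.g., ANN, SVR, ANFIS)
# and the measured values, for a handful of held-out samples.
measured = np.array([12.1, 15.4, 9.8, 14.0, 11.3])
preds = np.array([
    [11.8, 15.9, 10.5, 13.2, 11.0],   # model 1
    [12.6, 14.7,  9.1, 14.8, 12.1],   # model 2
    [11.5, 16.2, 10.0, 13.5, 10.7],   # model 3
])

simple_avg = preds.mean(axis=0)

# Weight each model by the inverse of its error on the held-out samples (one possible scheme).
errors = np.abs(preds - measured).mean(axis=1)
weights = (1.0 / errors) / (1.0 / errors).sum()
weighted_avg = weights @ preds

# Regression ensemble: least-squares combination of the model outputs.
coefs, *_ = lstsq(preds.T, measured, rcond=None)
regression_ens = preds.T @ coefs

for name, est in [("simple average", simple_avg),
                  ("weighted average", weighted_avg),
                  ("regression ensemble", regression_ens)]:
    print(f"{name}: MAE = {np.mean(np.abs(est - measured)):.3f}")
```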
Frequentist Model Averaging in Structural Equation Modelling.
Jin, Shaobo; Ankargren, Sebastian
2018-06-04
Model selection from a set of candidate models plays an important role in many structural equation modelling applications. However, traditional model selection methods introduce extra randomness that is not accounted for by post-model selection inference. In the current study, we propose a model averaging technique within the frequentist statistical framework. Instead of selecting an optimal model, the contributions of all candidate models are acknowledged. Valid confidence intervals and a test statistic are proposed. A simulation study shows that the proposed method is able to produce a robust mean-squared error, a better coverage probability, and a better goodness-of-fit test compared to model selection. It is an interesting compromise between model selection and the full model.
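One common frequentist way to acknowledge all candidate models is to weight each model's estimate by smoothed information-criterion weights. The sketch below shows only that generic bookkeeping; the paper develops its own weighting and inference within structural equation modelling, and the AIC values and estimates here are hypothetical.

```python
import numpy as np

# Hypothetical AIC values and parameter estimates from three candidate models.
aic = np.array([214.2, 212.8, 216.5])
estimates = np.array([0.42, 0.37, 0.51])

# Smoothed AIC weights: exp(-0.5 * delta_AIC), normalized over the candidate set.
delta = aic - aic.min()
weights = np.exp(-0.5 * delta)
weights /= weights.sum()

averaged = weights @ estimates
print("model weights:", np.round(weights, 3))
print("model-averaged estimate:", round(float(averaged), 3))
```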
Murray, Louise; Mason, Joshua; Henry, Ann M; Hoskin, Peter; Siebert, Frank-Andre; Venselaar, Jack; Bownes, Peter
2016-08-01
To estimate the risks of radiation-induced rectal and bladder cancers following low dose rate (LDR) and high dose rate (HDR) brachytherapy as monotherapy for localised prostate cancer and compare them to external beam radiotherapy techniques. LDR and HDR brachytherapy monotherapy plans were generated for three prostate CT datasets. Second cancer risks were assessed using Schneider's concept of organ equivalent dose. LDR risks were assessed according to a mechanistic model and a bell-shaped model. HDR risks were assessed according to a bell-shaped model. Relative risks and excess absolute risks were estimated and compared to external beam techniques. Excess absolute risks of second rectal or bladder cancer were low for both LDR (irrespective of the model used for calculation) and HDR techniques. Average excess absolute risks of second rectal and bladder cancer for LDR brachytherapy were 0.71 and 0.84 per 10,000 person-years (PY), respectively, according to the mechanistic model, and 0.47 and 0.78 per 10,000 PY, respectively, according to the bell-shaped model. For HDR, the average excess absolute risks for second rectal and bladder cancers were 0.74 and 1.62 per 10,000 PY, respectively. The absolute differences between techniques were very low and clinically irrelevant. Compared to external beam prostate radiotherapy techniques, LDR and HDR brachytherapy resulted in the lowest risks of second rectal and bladder cancer. This study shows both LDR and HDR brachytherapy monotherapy result in low estimated risks of radiation-induced rectal and bladder cancer. LDR resulted in lower bladder cancer risks than HDR, and lower or similar risks of rectal cancer. In absolute terms these differences between techniques were very small. Compared to external beam techniques, second rectal and bladder cancer risks were lowest for brachytherapy. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Pabon, Peter; Ternström, Sten; Lamarche, Anick
2011-06-01
To describe a method for unified description, statistical modeling, and comparison of voice range profile (VRP) contours, even from diverse sources. A morphologic modeling technique, which is based on Fourier descriptors (FDs), is applied to the VRP contour. The technique, which essentially involves resampling of the curve of the contour, is assessed and also is compared to density-based VRP averaging methods that use the overlap count. VRP contours can be usefully described and compared using FDs. The method also permits the visualization of the local covariation along the contour average. For example, the FD-based analysis shows that the population variance for ensembles of VRP contours is usually smallest at the upper left part of the VRP. To illustrate the method's advantages and possible further application, graphs are given that compare the averaged contours from different authors and recording devices--for normal, trained, and untrained male and female voices as well as for child voices. The proposed technique allows any VRP shape to be brought to the same uniform base. On this uniform base, VRP contours or contour elements coming from a variety of sources may be placed within the same graph for comparison and for statistical analysis.
NASA Technical Reports Server (NTRS)
Nese, Jon M.
1989-01-01
A dynamical systems approach is used to quantify the instantaneous and time-averaged predictability of a low-order moist general circulation model. Specifically, the effects on predictability of incorporating an active ocean circulation, implementing annual solar forcing, and asynchronously coupling the ocean and atmosphere are evaluated. The predictability and structure of the model attractors is compared using the Lyapunov exponents, the local divergence rates, and the correlation, fractal, and Lyapunov dimensions. The Lyapunov exponents measure the average rate of growth of small perturbations on an attractor, while the local divergence rates quantify phase-spatial variations of predictability. These local rates are exploited to efficiently identify and distinguish subtle differences in predictability among attractors. In addition, the predictability of monthly averaged and yearly averaged states is investigated by using attractor reconstruction techniques.
Maximum likelihood estimation for periodic autoregressive moving average models
Vecchia, A.V.
1985-01-01
A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
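As a minimal illustration of periodic ARMA structure (not the likelihood approximation or estimation algorithm of the paper), the sketch below simulates a periodic AR(1) process whose autoregressive coefficient and noise scale cycle over a 12-season year and reports the sample lag-1 correlation by season:

```python
import numpy as np

rng = np.random.default_rng(2)
period = 12
phi = 0.3 + 0.4 * np.sin(2 * np.pi * np.arange(period) / period)    # seasonal AR(1) coefficients
sigma = 1.0 + 0.5 * np.cos(2 * np.pi * np.arange(period) / period)  # seasonal noise scales

n_years = 200
x = np.zeros(period * n_years)
for t in range(1, len(x)):
    s = t % period
    x[t] = phi[s] * x[t - 1] + sigma[s] * rng.standard_normal()

# Sample lag-1 correlation conditional on season, a fingerprint of periodic correlation structure.
for s in range(period):
    idx = np.arange(s, len(x), period)
    idx = idx[idx > 0]
    r = np.corrcoef(x[idx], x[idx - 1])[0, 1]
    print(f"season {s:2d}: phi = {phi[s]:.2f}, sample lag-1 corr = {r:.2f}")
```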
NASA Technical Reports Server (NTRS)
Hailperin, Max
1993-01-01
This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that our techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. Our method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive methods.
NASA Astrophysics Data System (ADS)
Mortensen, Mikael; Langtangen, Hans Petter; Wells, Garth N.
2011-09-01
Finding an appropriate turbulence model for a given flow case usually calls for extensive experimentation with both models and numerical solution methods. This work presents the design and implementation of a flexible, programmable software framework for assisting with numerical experiments in computational turbulence. The framework targets Reynolds-averaged Navier-Stokes models, discretized by finite element methods. The novel implementation makes use of Python and the FEniCS package, the combination of which leads to compact and reusable code, where model- and solver-specific code resemble closely the mathematical formulation of equations and algorithms. The presented ideas and programming techniques are also applicable to other fields that involve systems of nonlinear partial differential equations. We demonstrate the framework in two applications and investigate the impact of various linearizations on the convergence properties of nonlinear solvers for a Reynolds-averaged Navier-Stokes model.
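One of the questions examined above is how the choice of linearization affects nonlinear-solver convergence. The toy sketch below (plain Python, not FEniCS, and not the RANS systems of the paper) contrasts a Picard-style fixed-point iteration with a Newton iteration on the same scalar nonlinear equation:

```python
import numpy as np

# Model nonlinear problem: find u such that u = exp(-u) (a stand-in for a nonlinear residual).
f = lambda u: u - np.exp(-u)
df = lambda u: 1 + np.exp(-u)

def picard(u0, iters=20):
    u, hist = u0, []
    for _ in range(iters):
        u = np.exp(-u)               # fixed-point (Picard-style) update
        hist.append(abs(f(u)))
    return hist

def newton(u0, iters=20):
    u, hist = u0, []
    for _ in range(iters):
        u = u - f(u) / df(u)         # Newton update
        hist.append(abs(f(u)))
    return hist

for k, (rp, rn) in enumerate(zip(picard(1.0), newton(1.0))):
    print(f"iter {k+1:2d}: |residual| Picard = {rp:.2e}, Newton = {rn:.2e}")
    if rn < 1e-14:
        break
```

Newton converges in a handful of iterations while the fixed-point update converges only linearly, which is the kind of behaviour a linearization study compares across solvers and models.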
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.; Kaemming, Thomas A.
2012-01-01
A methodology is described whereby the work extracted by a turbine exposed to the fundamentally nonuniform flowfield from a representative pressure gain combustor (PGC) may be assessed. The method uses an idealized constant volume cycle, often referred to as an Atkinson or Humphrey cycle, to model the PGC. Output from this model is used as input to a scalable turbine efficiency function (i.e., a map), which in turn allows for the calculation of useful work throughout the cycle. Integration over the entire cycle yields mass-averaged work extraction. The unsteady turbine work extraction is compared to steady work extraction calculations based on various averaging techniques for characterizing the combustor exit pressure and temperature. It is found that averages associated with momentum flux (as opposed to entropy or kinetic energy) provide the best match. This result suggests that momentum-based averaging is the most appropriate figure-of-merit to use as a PGC performance metric. Using the mass-averaged work extraction methodology, it is also found that the design turbine pressure ratio for maximum work extraction is significantly higher than that for a turbine fed by a constant pressure combustor with similar inlet conditions and equivalence ratio. Limited results are presented whereby the constant volume cycle is replaced by output from a detonation-based PGC simulation. The results in terms of averaging techniques and design pressure ratio are similar.
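To make the comparison of averaging choices concrete, the sketch below applies three different weightings to an unsteady combustor-exit pressure trace: a plain time average, a mass-flux-weighted average, and a momentum-flux-weighted average. The waveforms and numbers are illustrative placeholders, and the precise averaging definitions used in the paper may differ from these simple weightings.

```python
import numpy as np

# Illustrative unsteady combustor-exit traces over one cycle (placeholders, not model output).
t = np.linspace(0.0, 1.0, 1000)
p = 1.0e5 * (1.0 + 0.8 * np.exp(-((t - 0.2) / 0.05) ** 2))    # pressure [Pa]
rho = 1.2 * (1.0 + 0.5 * np.exp(-((t - 0.2) / 0.05) ** 2))    # density [kg/m^3]
u = 150.0 * (1.0 + 0.6 * np.exp(-((t - 0.25) / 0.07) ** 2))   # velocity [m/s]

def weighted_average(q, w):
    # Time average of q weighted by w over the cycle.
    return np.trapz(q * w, t) / np.trapz(w, t)

p_time = weighted_average(p, np.ones_like(t))   # plain time average
p_mass = weighted_average(p, rho * u)           # mass-flux-weighted average
p_mom = weighted_average(p, rho * u**2)         # momentum-flux-weighted average

print(f"time-averaged pressure:          {p_time/1e5:.3f} bar")
print(f"mass-flux-weighted pressure:     {p_mass/1e5:.3f} bar")
print(f"momentum-flux-weighted pressure: {p_mom/1e5:.3f} bar")
```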
Bayesian inversion of refraction seismic traveltime data
NASA Astrophysics Data System (ADS)
Ryberg, T.; Haberland, Ch
2018-03-01
We apply a Bayesian Markov chain Monte Carlo (McMC) formalism to the inversion of refraction seismic traveltime data sets to derive 2-D velocity models below linear arrays (i.e. profiles) of sources and seismic receivers. Typical refraction data sets, especially when the far-offset observations are used, are known for experimental geometries that are very poor, highly ill-posed and far from ideal. As a consequence, the structural resolution quickly degrades with depth. Conventional inversion techniques, based on regularization, potentially suffer from the choice of appropriate inversion parameters (i.e. number and distribution of cells, starting velocity models, damping and smoothing constraints, data noise level, etc.) and only local model space exploration. McMC techniques are used for exhaustive sampling of the model space without the need for prior knowledge (or assumptions) of inversion parameters, resulting in a large number of models fitting the observations. Statistical analysis of these models allows an average (reference) solution and its standard deviation to be derived, thus providing uncertainty estimates of the inversion result. The highly non-linear character of the inversion problem, mainly caused by the experiment geometry, does not allow a reference solution and error map to be derived by a simple averaging procedure. We present a modified averaging technique, which excludes parts of the prior distribution in the posterior values due to poor ray coverage, thus providing reliable estimates of inversion model properties even in those parts of the models. The model is discretized by a set of Voronoi polygons (with constant slowness cells) or a triangulated mesh (with interpolation within the triangles). Forward traveltime calculations are performed by a fast, finite-difference-based eikonal solver. The method is applied to a data set from a refraction seismic survey in Northern Namibia and compared to conventional tomography. An inversion test for a synthetic data set from a known model is also presented.
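The core sample-then-average idea can be sketched with a toy Metropolis-Hastings loop over a low-dimensional model. The two-layer slowness parameterization, uniform prior, Gaussian likelihood, and step size below are placeholders, not the Voronoi/triangulated parameterization or eikonal forward solver of the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy forward problem: traveltime through two layers of fixed thickness [km] and unknown slowness.
thickness = np.array([1.0, 2.0])
true_slowness = np.array([0.50, 0.25])             # s/km
observed = thickness @ true_slowness + rng.normal(0, 0.01)

def log_likelihood(slowness, sigma=0.01):
    predicted = thickness @ slowness
    return -0.5 * ((observed - predicted) / sigma) ** 2

def in_prior(slowness):
    return np.all((slowness > 0.1) & (slowness < 1.0))   # uniform prior bounds

samples = []
current = np.array([0.4, 0.4])
for _ in range(20000):
    proposal = current + rng.normal(0, 0.02, size=2)      # symmetric random-walk proposal
    if in_prior(proposal) and np.log(rng.uniform()) < log_likelihood(proposal) - log_likelihood(current):
        current = proposal
    samples.append(current.copy())

samples = np.array(samples[5000:])                         # discard burn-in
print("posterior mean slowness:", np.round(samples.mean(axis=0), 3))
print("posterior std  slowness:", np.round(samples.std(axis=0), 3))
```

With a single observation and two unknowns the toy problem is deliberately underdetermined, so the posterior standard deviations stay large, which is exactly the kind of uncertainty information the averaged ensemble is meant to convey.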
Optimal averaging of soil moisture predictions from ensemble land surface model simulations
USDA-ARS?s Scientific Manuscript database
The correct interpretation of ensemble information obtained from the parallel implementation of multiple land surface models (LSMs) requires information concerning the LSM ensemble’s mutual error covariance. Here we propose a new technique for obtaining such information using an instrumental variabl...
Forecasting Techniques and Library Circulation Operations: Implications for Management.
ERIC Educational Resources Information Center
Ahiakwo, Okechukwu N.
1988-01-01
Causal regression and time series models were developed using six years of data for home borrowing, average readership, and books consulted at a university library. The models were tested for efficacy in producing short-term planning and control data. Combined models were tested in establishing evaluation measures. (10 references) (Author/MES)
Statistical Inference of a RANS closure for a Jet-in-Crossflow simulation
NASA Astrophysics Data System (ADS)
Heyse, Jan; Edeling, Wouter; Iaccarino, Gianluca
2016-11-01
The jet-in-crossflow is found in several engineering applications, such as discrete film cooling for turbine blades, where a coolant injected through holes in the blade's surface protects the component from the hot gases leaving the combustion chamber. Experimental measurements using MRI techniques have been completed for a single hole injection into a turbulent crossflow, providing a full 3D averaged velocity field. For such flows of engineering interest, Reynolds-Averaged Navier-Stokes (RANS) turbulence closure models are often the only viable computational option. However, RANS models are known to provide poor predictions in the region close to the injection point. Since these models are calibrated on simple canonical flow problems, the obtained closure coefficient estimates are unlikely to extrapolate well to more complex flows. We will therefore calibrate the parameters of a RANS model using statistical inference techniques informed by the experimental jet-in-crossflow data. The obtained probabilistic parameter estimates can in turn be used to compute flow fields with quantified uncertainty. Stanford Graduate Fellowship in Science and Engineering.
Short-term forecasts gain in accuracy. [Regression technique using ''Box-Jenkins'' analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Box-Jenkins time-series models offer accuracy for short-term forecasts that compare with large-scale macroeconomic forecasts. Utilities need to be able to forecast peak demand in order to plan their generating, transmitting, and distribution systems. This new method differs from conventional models by not assuming specific data patterns, but by fitting available data into a tentative pattern on the basis of auto-correlations. Three types of models (autoregressive, moving average, or mixed autoregressive/moving average) can be used according to which provides the most appropriate combination of autocorrelations and related derivatives. Major steps in choosing a model are identifying potential models, estimating the parameters of the problem, and running a diagnostic check to see if the model fits the parameters. The Box-Jenkins technique is well suited for seasonal patterns, which makes it possible to produce load demand forecasts as short as hourly. With accuracy up to two years, the method will allow electricity price-elasticity forecasting that can be applied to facility planning and rate design. (DCK)
Incorporating interfacial phenomena in solidification models
NASA Technical Reports Server (NTRS)
Beckermann, Christoph; Wang, Chao Yang
1994-01-01
A general methodology is available for the incorporation of microscopic interfacial phenomena in macroscopic solidification models that include diffusion and convection. The method is derived from a formal averaging procedure and a multiphase approach, and relies on the presence of interfacial integrals in the macroscopic transport equations. In a wider engineering context, these techniques are not new, but their application in the analysis and modeling of solidification processes has largely been overlooked. This article describes the techniques and demonstrates their utility in two examples in which microscopic interfacial phenomena are of great importance.
PERIODIC AUTOREGRESSIVE-MOVING AVERAGE (PARMA) MODELING WITH APPLICATIONS TO WATER RESOURCES.
Vecchia, A.V.
1985-01-01
Results involving correlation properties and parameter estimation for autoregressive-moving average models with periodic parameters are presented. A multivariate representation of the PARMA model is used to derive parameter space restrictions and difference equations for the periodic autocorrelations. A close approximation to the likelihood function for Gaussian PARMA processes results in efficient maximum-likelihood estimation procedures. Terms in the Fourier expansion of the parameters are sequentially included, and a selection criterion is given for determining the optimal number of harmonics to be included. Application of the techniques is demonstrated through analysis of a monthly streamflow time series.
Modeling of Density-Dependent Flow based on the Thermodynamically Constrained Averaging Theory
NASA Astrophysics Data System (ADS)
Weigand, T. M.; Schultz, P. B.; Kelley, C. T.; Miller, C. T.; Gray, W. G.
2016-12-01
The thermodynamically constrained averaging theory (TCAT) has been used to formulate general classes of porous medium models, including new models for density-dependent flow. The TCAT approach provides advantages that include a firm connection between the microscale, or pore scale, and the macroscale; a thermodynamically consistent basis; explicit inclusion of factors such as diffusion arising from gradients in pressure and activity; and the ability to describe both high- and low-concentration displacement. The TCAT model is presented, closure relations for the model are postulated based on microscale averages, and a parameter estimation is performed on a subset of the experimental data. Due to the sharpness of the fronts, an adaptive moving-mesh technique was used to ensure grid-independent solutions within the run-time constraints. The optimized parameters are then used for forward simulations and compared to the set of experimental data not used for the parameter estimation.
A novel CT acquisition and analysis technique for breathing motion modeling
NASA Astrophysics Data System (ADS)
Low, Daniel A.; White, Benjamin M.; Lee, Percy P.; Thomas, David H.; Gaudio, Sergio; Jani, Shyam S.; Wu, Xiao; Lamb, James M.
2013-06-01
To report on a novel technique for providing artifact-free quantitative four-dimensional computed tomography (4DCT) image datasets for breathing motion modeling. Commercial clinical 4DCT methods have difficulty managing irregular breathing. The resulting images contain motion-induced artifacts that can distort structures and inaccurately characterize breathing motion. We have developed a novel scanning and analysis method for motion-correlated CT that utilizes standard repeated fast helical acquisitions, a simultaneous breathing surrogate measurement, deformable image registration, and a published breathing motion model. The motion model differs from the CT-measured motion by an average of 0.65 mm, indicating the precision of the motion model. The integral of the divergence of one of the motion model parameters is predicted to be a constant 1.11 and is found in this case to be 1.09, indicating the accuracy of the motion model. The proposed technique shows promise for providing motion-artifact free images at user-selected breathing phases, accurate Hounsfield units, and noise characteristics similar to non-4D CT techniques, at a patient dose similar to or less than current 4DCT techniques.
ERIC Educational Resources Information Center
Ziomek, Robert L.; Wright, Benjamin D.
Techniques such as the norm-referenced and average score techniques, commonly used in the identification of educationally disadvantaged students, are critiqued. This study applied latent trait theory, specifically the Rasch Model, along with teacher judgments relative to the mastery of instructional/test decisions, to derive a standard setting…
Jackson, Rachel W; Collins, Steven H
2015-09-01
Techniques proposed for assisting locomotion with exoskeletons have often included a combination of active work input and passive torque support, but the physiological effects of different assistance techniques remain unclear. We performed an experiment to study the independent effects of net exoskeleton work and average exoskeleton torque on human locomotion. Subjects wore a unilateral ankle exoskeleton and walked on a treadmill at 1.25 m·s(-1) while net exoskeleton work rate was systematically varied from -0.054 to 0.25 J·kg(-1)·s(-1), with constant (0.12 N·m·kg(-1)) average exoskeleton torque, and while average exoskeleton torque was systematically varied from approximately zero to 0.18 N·m·kg(-1), with approximately zero net exoskeleton work. We measured metabolic rate, center-of-mass mechanics, joint mechanics, and muscle activity. Both techniques reduced effort-related measures at the assisted ankle, but this form of work input reduced metabolic cost (-17% with maximum net work input) while this form of torque support increased metabolic cost (+13% with maximum average torque). Disparate effects on metabolic rate seem to be due to cascading effects on whole body coordination, particularly related to assisted ankle muscle dynamics and the effects of trailing ankle behavior on leading leg mechanics during double support. It would be difficult to predict these results using simple walking models without muscles or musculoskeletal models that assume fixed kinematics or kinetics. Data from this experiment can be used to improve predictive models of human neuromuscular adaptation and guide the design of assistive devices. Copyright © 2015 the American Physiological Society.
Optimal averaging of soil moisture predictions from ensemble land surface model simulations
USDA-ARS?s Scientific Manuscript database
The correct interpretation of ensemble 3 soil moisture information obtained from the parallel implementation of multiple land surface models (LSMs) requires information concerning the LSM ensemble’s mutual error covariance. Here we propose a new technique for obtaining such information using an inst...
Numerical investigation of airflow in an idealised human extra-thoracic airway: a comparison study
Chen, Jie; Gutmark, Ephraim
2013-01-01
Large eddy simulation (LES) technique is employed to numerically investigate the airflow through an idealised human extra-thoracic airway under different breathing conditions, 10 l/min, 30 l/min, and 120 l/min. The computational results are compared with single and cross hot-wire measurements, and with time-averaged flow field computed by standard k-ω and k-ω-SST Reynolds averaged Navier-Stokes (RANS) models and the Lattice-Boltzmann method (LBM). The LES results are also compared to root-mean-square (RMS) flow field computed by the Reynolds stress model (RSM) and LBM. LES generally gives better prediction of the time-averaged flow field than RANS models and LBM. LES also provides better estimation of the RMS flow field than both the RSM and the LBM. PMID:23619907
NASA Astrophysics Data System (ADS)
Pollard, D.; Chang, W.; Haran, M.; Applegate, P.; DeConto, R.
2015-11-01
A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~ 20 000 years. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree quite well with the more advanced techniques, but only for a large ensemble with full factorial parameter sampling. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds. Each run is extended 5000 years into the "future" with idealized ramped climate warming. In the majority of runs with reasonable scores, this produces grounding-line retreat deep into the West Antarctic interior, and the analysis provides sea-level-rise envelopes with well defined parametric uncertainty bounds.
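The simple score-weighted averaging referred to above can be illustrated as follows; the ensemble parameter values, misfit scores, and the particular score-to-weight conversion are placeholders, since that conversion is not fully specified here.

```python
import numpy as np

# Hypothetical ensemble: one parameter of interest and an aggregate model-data misfit score per run.
rng = np.random.default_rng(4)
n_runs = 625
parameter = rng.uniform(0.0, 1.0, n_runs)   # e.g., a scaled sliding-coefficient parameter
misfit = 2.0 + 5.0 * (parameter - 0.7) ** 2 + rng.normal(0, 0.3, n_runs)

# Convert misfit scores to weights (here simply exponential in misfit above the best run).
weights = np.exp(-(misfit - misfit.min()))
weights /= weights.sum()

weighted_mean = weights @ parameter
weighted_var = weights @ (parameter - weighted_mean) ** 2
print(f"score-weighted mean = {weighted_mean:.3f}, std = {np.sqrt(weighted_var):.3f}")
print(f"unweighted mean     = {parameter.mean():.3f}")
```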
Thermal sensing of cryogenic wind tunnel model surfaces Evaluation of silicon diodes
NASA Technical Reports Server (NTRS)
Daryabeigi, K.; Ash, R. L.; Dillon-Townes, L. A.
1986-01-01
Different sensors and installation techniques for surface temperature measurement of cryogenic wind tunnel models were investigated. Silicon diodes were selected for further consideration because of their good inherent accuracy. Their average absolute temperature deviation in comparison tests with standard platinum resistance thermometers was found to be 0.2 K in the range from 125 to 273 K. Subsurface temperature measurement was selected as the installation technique in order to minimize aerodynamic interference. Temperature distortion caused by an embedded silicon diode was studied numerically.
Thermal sensing of cryogenic wind tunnel model surfaces - Evaluation of silicon diodes
NASA Technical Reports Server (NTRS)
Daryabeigi, Kamran; Ash, Robert L.; Dillon-Townes, Lawrence A.
1986-01-01
Different sensors and installation techniques for surface temperature measurement of cryogenic wind tunnel models were investigated. Silicon diodes were selected for further consideration because of their good inherent accuracy. Their average absolute temperature deviation in comparison tests with standard platinum resistance thermometers was found to be 0.2 K in the range from 125 to 273 K. Subsurface temperature measurement was selected as the installation technique in order to minimize aerodynamic interference. Temperature distortion caused by an embedded silicon diode was studied numerically.
Encoding probabilistic brain atlases using Bayesian inference.
Van Leemput, Koen
2009-06-01
This paper addresses the problem of creating probabilistic brain atlases from manually labeled training data. Probabilistic atlases are typically constructed by counting the relative frequency of occurrence of labels in corresponding locations across the training images. However, such an "averaging" approach generalizes poorly to unseen cases when the number of training images is limited, and provides no principled way of aligning the training datasets using deformable registration. In this paper, we generalize the generative image model implicitly underlying standard "average" atlases, using mesh-based representations endowed with an explicit deformation model. Bayesian inference is used to infer the optimal model parameters from the training data, leading to a simultaneous group-wise registration and atlas estimation scheme that encompasses standard averaging as a special case. We also use Bayesian inference to compare alternative atlas models in light of the training data, and show how this leads to a data compression problem that is intuitive to interpret and computationally feasible. Using this technique, we automatically determine the optimal amount of spatial blurring, the best deformation field flexibility, and the most compact mesh representation. We demonstrate, using 2-D training datasets, that the resulting models are better at capturing the structure in the training data than conventional probabilistic atlases. We also present experiments of the proposed atlas construction technique in 3-D, and show the resulting atlases' potential in fully-automated, pulse sequence-adaptive segmentation of 36 neuroanatomical structures in brain MRI scans.
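The counting baseline that the paper generalizes, i.e. the relative frequency of each label at each voxel across aligned training segmentations, can be written in a few lines; the label volumes below are random placeholders rather than real segmentations.

```python
import numpy as np

rng = np.random.default_rng(5)
n_subjects, shape, n_labels = 20, (32, 32, 32), 4

# Placeholder aligned label volumes (0 = background, 1..3 = structures).
labels = rng.integers(0, n_labels, size=(n_subjects, *shape))

# Probabilistic atlas: relative frequency of each label at each voxel.
atlas = np.zeros((n_labels, *shape))
for k in range(n_labels):
    atlas[k] = (labels == k).mean(axis=0)

# Probabilities at each voxel sum to one by construction.
assert np.allclose(atlas.sum(axis=0), 1.0)
print("atlas shape:", atlas.shape, " mean background probability:", round(float(atlas[0].mean()), 3))
```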
The Mathematical Analysis of Style: A Correlation-Based Approach.
ERIC Educational Resources Information Center
Oppenheim, Rosa
1988-01-01
Examines mathematical models of style analysis, focusing on the pattern in which literary characteristics occur. Describes an autoregressive integrated moving average model (ARIMA) for predicting sentence length in different works by the same author and comparable works by different authors. This technique is valuable in characterizing stylistic…
Precipitation interpolation in mountainous areas
NASA Astrophysics Data System (ADS)
Kolberg, Sjur
2015-04-01
Different precipitation interpolation techniques as well as external drift covariates are tested and compared in a 26,000 km2 mountainous area in Norway, using daily data from 60 stations. The main method of assessment is cross-validation. Annual precipitation in the area varies from below 500 mm to more than 2000 mm. The data were corrected for wind-driven undercatch according to operational standards. While temporal evaluation produces seemingly acceptable at-station correlation values (on average around 0.6), the average daily spatial correlation is less than 0.1. When bias is also penalised, Nash-Sutcliffe R2 values are negative for spatial correspondence and around 0.15 for temporal correspondence. Despite largely violated assumptions, plain Kriging produces better results than simple inverse distance weighting. More surprisingly, the presumably 'worst-case' benchmark of no interpolation at all, simply averaging all 60 stations for each day, actually outperformed the standard interpolation techniques. For logistical reasons, high altitudes are under-represented in the gauge network. The possible effect of this was investigated by a) fitting a precipitation lapse rate as an external drift, and b) applying a linear model of orographic enhancement (Smith and Barstad, 2004). These techniques improved the results only marginally. The gauge density in the region is one gauge per 433 km2, higher than the overall density of the Norwegian national network. Admittedly, the cross-validation technique reduces the effective gauge density; still, the results suggest that we are far from able to provide hydrological models with adequate data for the main driving force.
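A minimal version of the cross-validation comparison, inverse distance weighting against the 'no interpolation' benchmark of averaging all stations, is sketched below on synthetic station data; the coordinates, values, and IDW power are placeholders.

```python
import numpy as np

rng = np.random.default_rng(6)
n_stations = 60
xy = rng.uniform(0, 160, size=(n_stations, 2))      # station coordinates [km]
precip = rng.gamma(2.0, 3.0, n_stations)            # one day's catch-corrected precipitation [mm]

def idw(target, xy_obs, val_obs, power=2.0):
    # Inverse distance weighted estimate at `target` from the remaining stations.
    d = np.linalg.norm(xy_obs - target, axis=1)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return (w @ val_obs) / w.sum()

# Leave-one-out cross-validation for IDW and for the all-station-average benchmark.
err_idw, err_mean = [], []
for i in range(n_stations):
    mask = np.arange(n_stations) != i
    err_idw.append(idw(xy[i], xy[mask], precip[mask]) - precip[i])
    err_mean.append(precip[mask].mean() - precip[i])

print(f"IDW          RMSE = {np.sqrt(np.mean(np.square(err_idw))):.2f} mm")
print(f"station mean RMSE = {np.sqrt(np.mean(np.square(err_mean))):.2f} mm")
```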
NASA Astrophysics Data System (ADS)
Ausloos, Marcel; Vandewalle, Nicolas; Ivanova, Kristinka
Specialized topics in financial data analysis are discussed from a numerical and physical point of view, as they pertain to the analysis of coherent and random sequences in financial fluctuations within (i) the extended detrended fluctuation analysis method, (ii) the multi-affine analysis technique, (iii) mobile average intersection rules and distributions, (iv) sandpile avalanche models for crash prediction, (v) the (m,k)-Zipf method and (vi) the i-variability diagram technique for sorting out short-range correlations. The most baffling result, which needs further thought from mathematicians and physicists, is recalled: the crossing of two mobile averages is an original method for measuring the "signal" roughness exponent, but why this is so is not yet understood.
Sim, Jae-Ang; Kim, Jong-Min; Lee, Sahnghoon; Bae, Ji-Yong; Seon, Jong-Keun
2017-04-01
Although trans-portal and outside-in techniques are commonly used for anatomical ACL reconstruction, there is very little information on variability in tunnel placement between the two techniques. A total of 103 patients who received ACL reconstruction using the trans-portal (50 patients) or outside-in (53 patients) technique were included in the study. The ACL tunnel location, tunnel length, and graft-femoral tunnel angle were analyzed using 3D CT knee models, and the location and length of the femoral and tibial tunnels and the graft bending angle were compared between the two techniques. The variability within each technique regarding tunnel location, tunnel length, and graft tunnel angle was also compared using range values. There were no differences in the average femoral tunnel depth and height between the two groups. The ranges of femoral tunnel depth and height showed no difference between the two groups (36 and 41 % in the trans-portal technique vs. 32 and 41 % in the outside-in technique). The average values and ranges of tibial tunnel location also showed similar results in the two groups. The outside-in technique produced a longer femoral tunnel than the trans-portal technique (34.0 vs. 36.8 mm, p = 0.001). The range of femoral tunnel length was also wider in the trans-portal technique than in the outside-in technique. Although the outside-in technique showed a significantly more acute graft bending angle than the trans-portal technique in average values, the trans-portal technique showed wider ranges in graft bending angle than the outside-in technique [ranges 73° (SD 13.6) vs. 53° (SD 10.7), respectively]. Although both the trans-portal and outside-in techniques in ACL reconstruction can provide relatively consistent femoral and tibial tunnel locations, the trans-portal technique showed higher variability in femoral tunnel length and graft bending angle than the outside-in technique. Therefore, the outside-in technique in ACL reconstruction is considered an effective method for surgeons to create a more consistent femoral tunnel. III.
Neurosurgical endoscopic training via a realistic 3-dimensional model with pathology.
Waran, Vicknes; Narayanan, Vairavan; Karuppiah, Ravindran; Thambynayagam, Hari Chandran; Muthusamy, Kalai Arasu; Rahman, Zainal Ariff Abdul; Kirollos, Ramez Wadie
2015-02-01
Training in intraventricular endoscopy is particularly challenging because the volume of cases is relatively small and the techniques involved are unlike those usually used in conventional neurosurgery. Present training models are inadequate for various reasons. Using 3-dimensional (3D) printing techniques, models with pathology can be created using an actual patient's imaging data. This technical article introduces a new training model based on a patient with hydrocephalus secondary to a pineal tumour, enabling the models to be used to simulate third ventriculostomies and pineal biopsies. Multiple models of the head of a patient with hydrocephalus were created using a 3D rapid prototyping technique. These models were modified to allow for a fluid-filled ventricular system under appropriate tension. The models were qualitatively assessed in the various steps involved in an endoscopic third ventriculostomy and intraventricular biopsy procedure, initially by 3 independent neurosurgeons and subsequently by 12 participants of an intraventricular endoscopy workshop. All 3 surgeons agreed on the ease and usefulness of these models in the teaching of endoscopic third ventriculostomy, performing endoscopic biopsies, and the integration of navigation with ventriculoscopy. Their overall score for the realism of the ventricular model was above average. The 12 workshop participants gave average scores between 4.0 and 4.6 out of 5 for every individual step of the procedure. Neurosurgical endoscopic training is currently a long process of stepwise learning. These 3D printed models provide a realistic simulation environment for a neuroendoscopy procedure that allows safe and effective teaching of navigation and endoscopy in a standardized and repetitive fashion.
NASA Astrophysics Data System (ADS)
Wang, Zhaoyong; Hu, Xing; Yao, Ning
2015-03-01
At the optimized deposition parameters, Cu films were deposited by the direct current magnetron sputtering (DMS) technique and the energy-filtrating magnetron sputtering (EFMS) technique. The nano-structure was characterized by x-ray diffraction. The surface morphology of the films was observed by atomic force microscopy. The optical properties of the films were measured by spectroscopic ellipsometry. The refractive index, extinction coefficient, and thickness of the films were obtained by fitting the spectroscopic ellipsometry data using the Drude-Lorentz oscillator optical model. Results suggested that a Cu film with different properties was fabricated by the EFMS technique: the film contains smaller particles, is denser, and has a smoother surface. Its average transmission coefficient, refractive index, and extinction coefficient are higher than those of the Cu film deposited by the DMS technique. The average transmission coefficient (400-800 nm) is more than three times higher, and the refractive index and extinction coefficient (at 550 nm) are more than 36% and 14% higher, respectively.
Psychometric Evaluation of Lexical Diversity Indices: Assessing Length Effects.
Fergadiotis, Gerasimos; Wright, Heather Harris; Green, Samuel B
2015-06-01
Several novel techniques have been developed recently to assess the breadth of a speaker's vocabulary exhibited in a language sample. The specific aim of this study was to increase our understanding of the validity of the scores generated by different lexical diversity (LD) estimation techniques. Four techniques were explored: D, Maas, measure of textual lexical diversity, and moving-average type-token ratio. Four LD indices were estimated for language samples on 4 discourse tasks (procedures, eventcasts, story retell, and recounts) from 442 adults who are neurologically intact. The resulting data were analyzed using structural equation modeling. The scores for measure of textual lexical diversity and moving-average type-token ratio were stronger indicators of the LD of the language samples. The results for the other 2 techniques were consistent with the presence of method factors representing construct-irrelevant sources. These findings offer a deeper understanding of the relative validity of the 4 estimation techniques and should assist clinicians and researchers in the selection of LD measures of language samples that minimize construct-irrelevant sources.
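Of the four indices, the moving-average type-token ratio has the most transparent computation: the type-token ratio is computed in every window of fixed length sliding across the sample and the window values are averaged. A small sketch follows; the window length here is an arbitrary choice, not the setting used in the study.

```python
def mattr(tokens, window=50):
    """Moving-average type-token ratio over all windows of `window` tokens."""
    if len(tokens) < window:
        return len(set(tokens)) / len(tokens)   # fall back to plain TTR for short samples
    ratios = [
        len(set(tokens[i:i + window])) / window
        for i in range(len(tokens) - window + 1)
    ]
    return sum(ratios) / len(ratios)

sample = ("the boy went to the store and then the boy went home "
          "after the long day at the store").split()
print(round(mattr(sample, window=10), 3))
```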
[SciELO Public Health: the performance of Cadernos de Saúde Pública and Revista de Saúde Pública].
Barata, Rita Barradas
2007-12-01
The aim of this paper was to analyze two Brazilian scientific journals included in the SciELO Library of Public Health, using a group of bibliometric indicators and scrutinizing the articles most viewed. Cadernos de Saúde Pública was accessed 3,743.59 times per month, with an average of 30.31 citations per article. The 50 articles most viewed (6.72 to 524.5 views) were mostly published in Portuguese (92%). 42% were theoretical essays, 20% surveys, and 16% descriptive studies. 42% used argumentative techniques, 34% quantitative techniques, 18% qualitative techniques, and 6% mathematical modeling. The most common themes were: health and work (50%), epidemiology (22%), and environmental health (8%). Revista de Saúde Pública was accessed 1,590.97 times per month, with an average of 26.27 citations per article. The 50 articles most viewed (7.33 and 56.50 views) were all published in Portuguese: 46% were surveys, 14% databases analysis, and 12% systematic reviews. Quantitative techniques were adopted in 66% of such articles, while mathematical modeling was the same as observed in Cadernos de Saúde Pública, as were qualitative techniques. The most common themes were health services organization (22%), nutrition (22%), health and work (18%), epidemiology (12%), and environmental health (12%).
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-02-01
Multiresolution analysis techniques including continuous wavelet transform, empirical mode decomposition, and variational mode decomposition are tested in the context of interest rate next-day variation prediction. In particular, multiresolution analysis techniques are used to decompose interest rate actual variation and feedforward neural network for training and prediction. Particle swarm optimization technique is adopted to optimize its initial weights. For comparison purpose, autoregressive moving average model, random walk process and the naive model are used as main reference models. In order to show the feasibility of the presented hybrid models that combine multiresolution analysis techniques and feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates; including Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-Month, 6-Month and 1-Year treasury bills, and effective federal fund rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations as they provide good forecasting performance.
Test techniques for model development of repetitive service energy storage capacitors
NASA Astrophysics Data System (ADS)
Thompson, M. C.; Mauldin, G. H.
1984-03-01
The performance of the Sandia perfluorocarbon family of energy storage capacitors was evaluated. The capacitors have a much lower charge noise signature creating new instrumentation performance goals. Thermal response to power loading and the importance of average and spot heating in the bulk regions require technical advancements in real time temperature measurements. Reduction and interpretation of thermal data are crucial to the accurate development of an intelligent thermal transport model. The thermal model is of prime interest in the high repetition rate, high average power applications of power conditioning capacitors. The accurate identification of device parasitic parameters has ramifications in both the average power loss mechanisms and peak current delivery. Methods to determine the parasitic characteristics and their nonlinearities and terminal effects are considered. Meaningful interpretations for model development, performance history, facility development, instrumentation, plans for the future, and present data are discussed.
NASA Technical Reports Server (NTRS)
Hailperin, M.
1993-01-01
This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that the authors' techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. The authors' method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive load-balancing methods. Results from a preliminary analysis of another system and from simulation with a synthetic load provide some evidence of more general applicability.
Model analysis for the MAGIC telescope
NASA Astrophysics Data System (ADS)
Mazin, D.; Bigongiari, C.; Goebel, F.; Moralejo, A.; Wittek, W.
The MAGIC Collaboration operates the 17 m imaging Cherenkov telescope on the Canary island of La Palma. The main goal of the experiment is an energy threshold below 100 GeV for primary gamma rays. The new analysis technique (model analysis) takes advantage of the high-resolution (both in space and time) camera by fitting the averaged expected templates of the shower development to the measured shower images in the camera. This approach allows images just above the level of the night-sky background light fluctuations to be recognized and reconstructed. Progress and preliminary results of the model analysis technique will be presented.
Toward large eddy simulation of turbulent flow over an airfoil
NASA Technical Reports Server (NTRS)
Choi, Haecheon
1993-01-01
The flow field over an airfoil contains several distinct flow characteristics, e.g. laminar, transitional, turbulent boundary layer flow, flow separation, unstable free shear layers, and a wake. This diversity of flow regimes taxes the presently available Reynolds averaged turbulence models. Such models are generally tuned to predict a particular flow regime, and adjustments are necessary for the prediction of a different flow regime. Similar difficulties are likely to emerge when the large eddy simulation technique is applied with the widely used Smagorinsky model. This model has not been successful in correctly representing different turbulent flow fields with a single universal constant and has an incorrect near-wall behavior. Germano et al. (1991) and Ghosal, Lund & Moin have developed a new subgrid-scale model, the dynamic model, which is very promising in alleviating many of the persistent inadequacies of the Smagorinsky model: the model coefficient is computed dynamically as the calculation progresses rather than input a priori. The model has been remarkably successful in prediction of several turbulent and transitional flows. We plan to simulate turbulent flow over a '2D' airfoil using the large eddy simulation technique. Our primary objective is to assess the performance of the newly developed dynamic subgrid-scale model for computation of complex flows about aircraft components and to compare the results with those obtained using the Reynolds average approach and experiments. The present computation represents the first application of large eddy simulation to a flow of aeronautical interest and a key demonstration of the capabilities of the large eddy simulation technique.
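For reference, the dynamic procedure in its standard textbook form computes the model coefficient from the resolved field through the Germano identity with a least-squares contraction; this is the generic formulation, not necessarily the exact variant used in the planned simulations:

```latex
\tau_{ij} - \tfrac{1}{3}\delta_{ij}\tau_{kk} = -2C\Delta^{2}|\bar{S}|\bar{S}_{ij}, \qquad
L_{ij} = \widehat{\bar{u}_i\bar{u}_j} - \hat{\bar{u}}_i\hat{\bar{u}}_j,
```

```latex
M_{ij} = 2\Delta^{2}\,\widehat{|\bar{S}|\bar{S}_{ij}} - 2\hat{\Delta}^{2}|\hat{\bar{S}}|\hat{\bar{S}}_{ij}, \qquad
C = \frac{\langle L_{ij} M_{ij}\rangle}{\langle M_{ij} M_{ij}\rangle},
```

where the overbar and hat denote grid and test filtering, respectively, so that the coefficient C is evaluated from the resolved scales during the calculation rather than prescribed a priori.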
Rana, S; Cheng, CY
2014-01-01
Background: Radiobiological models describe the effects of radiation treatment on cancerous and healthy cells, and the radiobiological effects are generally characterized by the tumor control probability (TCP) and the normal tissue complication probability (NTCP). Aim: The purpose of this study was to assess the radiobiological impact of RapidArc planning techniques for prostate cancer in terms of TCP and NTCP. Subjects and Methods: A computed tomography data set of ten cases involving low-risk prostate cancer was selected for this retrospective study. For each case, two RapidArc plans were created in the Eclipse treatment planning system. The double arc (DA) plan was created using two full arcs and the single arc (SA) plan was created using one full arc. All treatment plans were calculated with the anisotropic analytical algorithm. Radiobiological response evaluation was performed by calculating Niemierko's equivalent uniform dose (EUD)-based TCP and NTCP values. Results: For the prostate tumor, the average EUD in the SA plans was slightly higher than in the DA plans (78.10 Gy vs. 77.77 Gy; P = 0.01), but the average TCP was comparable (98.3% vs. 98.3%; P = 0.01). In comparison to the DA plans, the SA plans produced a higher average EUD to the bladder (40.71 Gy vs. 40.46 Gy; P = 0.03) and femoral heads (10.39 Gy vs. 9.40 Gy; P = 0.03), whereas both techniques produced NTCP well below 0.1% for the bladder (P = 0.14) and femoral heads (P = 0.26). In contrast, the SA plans produced a higher average rectal NTCP than the DA plans (2.2% vs. 1.9%; P = 0.01). Furthermore, the EUD to the rectum was slightly higher in the SA plans (62.88 Gy vs. 62.22 Gy; P = 0.01). Conclusion: The SA and DA techniques produced similar TCP for low-risk prostate cancer. The NTCP for the femoral heads and bladder was comparable in the SA and DA plans; however, the SA technique resulted in higher NTCP for the rectum in comparison with the DA technique. PMID:24761232
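The EUD-based evaluation referenced above conventionally uses Niemierko's generalized equivalent uniform dose together with a sigmoidal dose-response curve. The standard expressions are shown below for context, where a is the tissue-specific volume-effect parameter, v_i the fractional volume receiving dose D_i, and TCD50/TD50 and γ50 the usual dose-response parameters; the specific parameter values used in the study are not repeated here.

```latex
\mathrm{EUD} = \Big(\sum_i v_i\, D_i^{\,a}\Big)^{1/a}, \qquad
\mathrm{TCP} = \frac{1}{1 + \left(\mathrm{TCD}_{50}/\mathrm{EUD}\right)^{4\gamma_{50}}}, \qquad
\mathrm{NTCP} = \frac{1}{1 + \left(\mathrm{TD}_{50}/\mathrm{EUD}\right)^{4\gamma_{50}}}.
```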
Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation
NASA Astrophysics Data System (ADS)
Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.
2012-12-01
This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC) that follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined three AI models and produced better fitting than individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model is nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored by using one AI model.
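A compact sketch of the averaging arithmetic described above, BIC-based weights, a weighted-mean estimate, and the within-model plus between-model variance decomposition, is given below with placeholder numbers; it illustrates standard BMA bookkeeping rather than the BAIMA implementation itself.

```python
import numpy as np

# Hypothetical per-model outputs at one location: K estimate, within-model variance, and BIC.
estimates = np.array([3.2, 4.1, 3.6])        # hydraulic conductivity [m/day]
within_var = np.array([0.40, 0.55, 0.35])
bic = np.array([120.4, 118.9, 123.0])

# BIC-based model weights (smaller BIC -> larger weight).
delta = bic - bic.min()
weights = np.exp(-0.5 * delta)
weights /= weights.sum()

bma_mean = weights @ estimates
between_var = weights @ (estimates - bma_mean) ** 2
total_var = weights @ within_var + between_var

print("weights:", np.round(weights, 3))
print(f"BMA estimate = {bma_mean:.2f}, total variance = {total_var:.2f} "
      f"(within = {weights @ within_var:.2f}, between = {between_var:.2f})")
```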
Social Inferences from Faces: Ambient Images Generate a Three-Dimensional Model
ERIC Educational Resources Information Center
Sutherland, Clare A. M.; Oldmeadow, Julian A.; Santos, Isabel M.; Towler, John; Burt, D. Michael; Young, Andrew W.
2013-01-01
Three experiments are presented that investigate the two-dimensional valence/trustworthiness by dominance model of social inferences from faces (Oosterhof & Todorov, 2008). Experiment 1 used image averaging and morphing techniques to demonstrate that consistent facial cues subserve a range of social inferences, even in a highly variable sample of…
Non-Contact Thrust Stand Calibration Method for Repetitively-Pulsed Electric Thrusters
NASA Technical Reports Server (NTRS)
Wong, Andrea R.; Toftul, Alexandra; Polzin, Kurt A.; Pearson, J. Boise
2011-01-01
A thrust stand calibration technique for use in testing repetitively-pulsed electric thrusters for in-space propulsion has been developed and tested using a modified hanging pendulum thrust stand. In the implementation of this technique, current pulses are applied to a solenoidal coil to produce a pulsed magnetic field that acts against the magnetic field produced by a permanent magnet mounted to the thrust stand pendulum arm. The force on the magnet is applied in this non-contact manner, with the entire pulsed force transferred to the pendulum arm through a piezoelectric force transducer to provide a time-accurate force measurement. Modeling of the pendulum arm dynamics reveals that after an initial transient in thrust stand motion the quasisteady average deflection of the thrust stand arm away from the unforced or zero position can be related to the average applied force through a simple linear Hooke's law relationship. Modeling demonstrates that this technique is universally applicable except when the pulsing period is increased to the point where it approaches the period of natural thrust stand motion. Calibration data were obtained using a modified hanging pendulum thrust stand previously used for steady-state thrust measurements. Data were obtained for varying impulse bit at constant pulse frequency and for varying pulse frequency. The two data sets exhibit excellent quantitative agreement with each other as the constant relating average deflection and average thrust match within the errors on the linear regression curve fit of the data. Quantitatively, the error on the calibration coefficient is roughly 1% of the coefficient value.
Predicting solar radiation based on available weather indicators
NASA Astrophysics Data System (ADS)
Sauer, Frank Joseph
Solar radiation prediction models are complex and require software that is not available to the household investor. The processing power within a normal desktop or laptop computer is sufficient to calculate similar models. This barrier to entry for the average consumer can be removed by a model simple enough to be calculated by hand if necessary. Solar radiation has historically been difficult to predict, and accurate models carry significant assumptions and restrictions on their use. Previous methods have been limited to linear relationships, location restrictions, or input data limited to one atmospheric condition. This research takes a novel approach by combining two techniques within the computational limits of a household computer: clustering and hidden Markov models (HMMs). Clustering helps limit the large observation space that otherwise restricts the use of HMMs. Instead of using continuous data, which would require significantly increased computation, the cluster can be used as a qualitative descriptor of each observation. HMMs incorporate a level of uncertainty and take into account the indirect relationship between meteorological indicators and solar radiation. This reduces the complexity of the model enough to be simply understood and accessible to the average household investor. The solar radiation is treated as an unobservable state that each household is unable to measure. The high temperature and the sky coverage are already available through the local or preferred source of weather information. By using the next day's prediction for high temperature and sky coverage, the model groups the data and then predicts the most likely range of radiation. This model uses simple techniques and calculations to give a broad estimate of the solar radiation when no other universal model exists for the average household.
Li, Wenjun; Kezele, Irina; Collins, D Louis; Zijdenbos, Alex; Keyak, Joyce; Kornak, John; Koyama, Alain; Saeed, Isra; Leblanc, Adrian; Harris, Tamara; Lu, Ying; Lang, Thomas
2007-11-01
We have developed a general framework which employs quantitative computed tomography (QCT) imaging and inter-subject image registration to model the three-dimensional structure of the hip, with the goal of quantifying changes in the spatial distribution of bone as it is affected by aging, drug treatment or mechanical unloading. We have adapted rigid and non-rigid inter-subject registration techniques to transform groups of hip QCT scans into a common reference space and to construct composite proximal femoral models. We have applied this technique to a longitudinal study of 16 astronauts who, on average, incurred high losses of hip bone density during spaceflights of 4-6 months on the International Space Station (ISS). We compared the pre-flight and post-flight composite hip models, and observed the gradients of the bone loss distribution. We performed paired t-tests, on a voxel-by-voxel basis, corrected for multiple comparisons using false discovery rate (FDR), and observed regions inside the proximal femur that showed the most significant bone loss. To validate our registration algorithm, we selected the 16 pre-flight scans and manually marked 4 landmarks for each scan. After registration, the average distance between the mapped landmarks and the corresponding landmarks in the target scan was 2.56 mm. The average error due to manual landmark identification was 1.70 mm.
Spatial Modeling of Geometallurgical Properties: Techniques and a Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deutsch, Jared L., E-mail: jdeutsch@ualberta.ca; Palmer, Kevin; Deutsch, Clayton V.
High-resolution spatial numerical models of metallurgical properties constrained by geological controls and more extensively by measured grade and geomechanical properties constitute an important part of geometallurgy. Geostatistical and other numerical techniques are adapted and developed to construct these high-resolution models accounting for all available data. Important issues that must be addressed include unequal sampling of the metallurgical properties versus grade assays, measurements at different scales, and complex nonlinear averaging of many metallurgical parameters. This paper establishes techniques to address each of these issues with the required implementation details and also demonstrates geometallurgical mineral deposit characterization for a copper–molybdenum deposit in South America. High-resolution models of grades and comminution indices are constructed, checked, and rigorously validated. The workflow demonstrated in this case study is applicable to many other deposit types.
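One of the issues named above, the nonlinear averaging of metallurgical parameters, is often handled with a power-law average; the sketch below is a generic illustration, and the exponent and sample values are placeholders rather than numbers from the case study.

```python
import numpy as np

def power_average(values, omega):
    """Power-law average: arithmetic for omega = 1, harmonic for omega = -1,
    approaching the geometric mean as omega -> 0."""
    values = np.asarray(values, dtype=float)
    if abs(omega) < 1e-12:
        return float(np.exp(np.mean(np.log(values))))  # geometric-mean limit
    return float(np.mean(values ** omega) ** (1.0 / omega))

# Hypothetical small-scale comminution index samples inside one block (e.g., kWh/t)
samples = [11.2, 13.5, 9.8, 15.1, 12.4]
for omega in (1.0, 0.5, -1.0):
    print(f"omega = {omega:+.1f}: block average = {power_average(samples, omega):.2f}")
```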
Holographic Characterization of Colloidal Fractal Aggregates
NASA Astrophysics Data System (ADS)
Wang, Chen; Cheong, Fook Chiong; Ruffner, David B.; Zhong, Xiao; Ward, Michael D.; Grier, David G.
In-line holographic microscopy images of micrometer-scale fractal aggregates can be interpreted with the Lorenz-Mie theory of light scattering and an effective-sphere model to obtain each aggregate's size and the population-averaged fractal dimension. We demonstrate this technique experimentally using model fractal clusters of polystyrene nanoparticles and fractal protein aggregates composed of bovine serum albumin and bovine pancreas insulin. This technique can characterize several thousand aggregates in ten minutes and naturally distinguishes aggregates from contaminants such as silicone oil droplets. Work supported by the SBIR program of the NSF.
Worthmann, Brian M; Song, H C; Dowling, David R
2015-12-01
Matched field processing (MFP) is an established technique for source localization in known multipath acoustic environments. Unfortunately, in many situations, particularly those involving high frequency signals, imperfect knowledge of the actual propagation environment prevents accurate propagation modeling and source localization via MFP fails. For beamforming applications, this actual-to-model mismatch problem was mitigated through a frequency downshift, made possible by a nonlinear array-signal-processing technique called frequency difference beamforming [Abadi, Song, and Dowling (2012). J. Acoust. Soc. Am. 132, 3018-3029]. Here, this technique is extended to conventional (Bartlett) MFP using simulations and measurements from the 2011 Kauai Acoustic Communications MURI experiment (KAM11) to produce ambiguity surfaces at frequencies well below the signal bandwidth where the detrimental effects of mismatch are reduced. Both the simulation and experimental results suggest that frequency difference MFP can be more robust against environmental mismatch than conventional MFP. In particular, signals of frequency 11.2 kHz-32.8 kHz were broadcast 3 km through a 106-m-deep shallow ocean sound channel to a sparse 16-element vertical receiving array. Frequency difference MFP unambiguously localized the source in several experimental data sets with average peak-to-side-lobe ratio of 0.9 dB, average absolute-value range error of 170 m, and average absolute-value depth error of 10 m.
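For readers unfamiliar with the conventional (Bartlett) processor extended above, the sketch below builds an ambiguity surface by correlating a measured array vector with modeled replicas; the free-space toy geometry is an assumption for illustration and bears no relation to the KAM11 waveguide or to the frequency-difference processing itself.

```python
import numpy as np

def bartlett_ambiguity(data_vec, replicas):
    """Normalized Bartlett output: 1.0 means the data vector matches a replica exactly.

    data_vec : complex array, shape (n_elements,)
    replicas : complex array, shape (n_grid, n_elements), one modeled replica per candidate location
    """
    w = replicas / np.linalg.norm(replicas, axis=1, keepdims=True)
    d = data_vec / np.linalg.norm(data_vec)
    return np.abs(w @ d.conj()) ** 2

# Toy example: 16-element vertical array, free-space spherical spreading, unknown source range
n_el, freq, c = 16, 1000.0, 1500.0
depths = np.linspace(10, 100, n_el)
ranges = np.linspace(500, 3000, 200)

def replica(r, source_depth=50.0):
    dist = np.hypot(r, depths - source_depth)
    return np.exp(-2j * np.pi * freq / c * dist) / dist

measured = replica(1500.0)                          # stand-in for the measured field
surface = bartlett_ambiguity(measured, np.array([replica(r) for r in ranges]))
print("ambiguity peak at range ~", ranges[np.argmax(surface)], "m")
```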
NASA Astrophysics Data System (ADS)
Leung, Juliana Y.; Srinivasan, Sanjay
2016-09-01
Modeling transport process at large scale requires proper scale-up of subsurface heterogeneity and an understanding of its interaction with the underlying transport mechanisms. A technique based on volume averaging is applied to quantitatively assess the scaling characteristics of effective mass transfer coefficient in heterogeneous reservoir models. The effective mass transfer coefficient represents the combined contribution from diffusion and dispersion to the transport of non-reactive solute particles within a fluid phase. Although treatment of transport problems with the volume averaging technique has been published in the past, application to geological systems exhibiting realistic spatial variability remains a challenge. Previously, the authors developed a new procedure where results from a fine-scale numerical flow simulation reflecting the full physics of the transport process albeit over a sub-volume of the reservoir are integrated with the volume averaging technique to provide effective description of transport properties. The procedure is extended such that spatial averaging is performed at the local-heterogeneity scale. In this paper, the transport of a passive (non-reactive) solute is simulated on multiple reservoir models exhibiting different patterns of heterogeneities, and the scaling behavior of effective mass transfer coefficient (Keff) is examined and compared. One such set of models exhibit power-law (fractal) characteristics, and the variability of dispersion and Keff with scale is in good agreement with analytical expressions described in the literature. This work offers an insight into the impacts of heterogeneity on the scaling of effective transport parameters. A key finding is that spatial heterogeneity models with similar univariate and bivariate statistics may exhibit different scaling characteristics because of the influence of higher order statistics. More mixing is observed in the channelized models with higher-order continuity. It reinforces the notion that the flow response is influenced by the higher-order statistical description of heterogeneity. An important implication is that when scaling-up transport response from lab-scale results to the field scale, it is necessary to account for the scale-up of heterogeneity. Since the characteristics of higher-order multivariate distributions and large-scale heterogeneity are typically not captured in small-scale experiments, a reservoir modeling framework that captures the uncertainty in heterogeneity description should be adopted.
Global exponential stability for switched memristive neural networks with time-varying delays.
Xin, Youming; Li, Yuxia; Cheng, Zunshui; Huang, Xia
2016-08-01
This paper considers the problem of exponential stability for switched memristive neural networks (MNNs) with time-varying delays. Different from most of the existing papers, we model a memristor as a continuous system, and view switched MNNs as switched neural networks with uncertain time-varying parameters. Based on average dwell time technique, mode-dependent average dwell time technique and multiple Lyapunov-Krasovskii functional approach, two conditions are derived to design the switching signal and guarantee the exponential stability of the considered neural networks, which are delay-dependent and formulated by linear matrix inequalities (LMIs). Finally, the effectiveness of the theoretical results is demonstrated by two numerical examples. Copyright © 2016 Elsevier Ltd. All rights reserved.
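For context, the average dwell time (ADT) condition invoked above is commonly stated as follows (textbook form, not the paper's specific LMI conditions): a switching signal has ADT τa if the number of switches on any interval satisfies the first bound below, and exponential stability follows when each mode admits a Lyapunov functional decaying at rate λ whose value jumps by at most a factor μ at switching instants.

```latex
N_\sigma(t,T) \le N_0 + \frac{T-t}{\tau_a}, \qquad
\dot V_i(x) \le -\lambda V_i(x), \quad V_i(x) \le \mu V_j(x) \ \text{at switches}
\;\Longrightarrow\; \text{exponential stability whenever } \tau_a > \frac{\ln \mu}{\lambda}.
```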
Volume Averaging Study of the Capacitive Deionization Process in Homogeneous Porous Media
Gabitto, Jorge; Tsouris, Costas
2015-05-05
Ion storage in porous electrodes is important in applications such as energy storage by supercapacitors, water purification by capacitive deionization, extraction of energy from a salinity difference and heavy ion purification. In this paper, a model is presented to simulate the charge process in homogeneous porous media comprising big pores. It is based on a theory for capacitive charging by ideally polarizable porous electrodes without faradaic reactions or specific adsorption of ions. A volume averaging technique is used to derive the averaged transport equations in the limit of thin electrical double layers. Transport between the electrolyte solution and the charged wall is described using the Gouy–Chapman–Stern model. The effective transport parameters for isotropic porous media are calculated solving the corresponding closure problems. Finally, the source terms that appear in the average equations are calculated using numerical computations. An alternative way to deal with the source terms is proposed.
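For reference, the Gouy–Chapman–Stern closure mentioned above is usually written as a Stern-layer capacitance in series with the diffuse-layer capacitance; the form below (per unit electrode area, symmetric z:z electrolyte, with λD the Debye length and ψD the diffuse-layer potential) is the commonly quoted expression rather than the paper's averaged equations.

```latex
\frac{1}{C} = \frac{1}{C_{St}} + \frac{1}{C_D},
\qquad
C_D = \frac{\varepsilon_r \varepsilon_0}{\lambda_D}\,
      \cosh\!\left(\frac{z e \psi_D}{2 k_B T}\right).
```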
Garrett, John D.; Fear, Elise C.
2015-01-01
Prior information about the average dielectric properties of breast tissue can be incorporated into microwave breast imaging techniques to improve the results. Rapidly providing this information relies on acquiring a limited number of measurements and processing these measurements with efficient algorithms. Previously, systems were developed to measure the transmission of microwave signals through breast tissue, and simplifications were applied to estimate the average properties. These methods provided reasonable estimates, but they were sensitive to multipath. In this paper, a new technique to analyze the average properties of breast tissues while addressing multipath is presented. Three steps are used to process transmission measurements. First, the effects of multipath were removed. In cases where multipath is present, multiple peaks were observed in the time domain. A Tukey window was used to time-gate a single peak and, therefore, select a single path through the breast. Second, the antenna response was deconvolved from the transmission coefficient to isolate the response from the tissue in the breast interior. The antenna response was determined through simulations. Finally, the complex permittivity was estimated using an iterative approach. This technique was validated using simulated and physical homogeneous breast models and tested with results taken from a recent patient study. PMID:25585106
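A minimal sketch of the time-gating step described above, assuming a hypothetical time-domain transmission response with one direct peak and one multipath echo; the sampling rate, gate width and pulse shapes are illustrative assumptions, not the system's actual parameters.

```python
import numpy as np
from scipy.signal.windows import tukey

fs = 40e9                                   # assumed sampling rate after inverse FFT (40 GS/s)
t = np.arange(2048) / fs
# Hypothetical response: direct path at 8 ns plus a weaker multipath echo at 14 ns
signal = np.exp(-((t - 8e-9) / 0.5e-9) ** 2) + 0.5 * np.exp(-((t - 14e-9) / 0.5e-9) ** 2)

# Time-gate the earliest (direct) peak with a Tukey window to suppress the echo
peak = np.argmax(signal)
half_width = int(2e-9 * fs)                 # +/- 2 ns gate around the peak
gate = np.zeros_like(signal)
gate[peak - half_width: peak + half_width] = tukey(2 * half_width, alpha=0.5)
gated = signal * gate

# Return to the frequency domain; antenna deconvolution and the iterative
# permittivity estimate (not shown) would operate on this gated spectrum.
spectrum = np.fft.rfft(gated)
print(f"energy fraction retained by the gate: {np.sum(gated**2) / np.sum(signal**2):.2f}")
```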
The stock-flow model of spatial data infrastructure development refined by fuzzy logic.
Abdolmajidi, Ehsan; Harrie, Lars; Mansourian, Ali
2016-01-01
The system dynamics technique has been demonstrated to be a proper method by which to model and simulate the development of spatial data infrastructures (SDI). An SDI is a collaborative effort to manage and share spatial data at different political and administrative levels. It is comprised of various dynamically interacting quantitative and qualitative (linguistic) variables. To incorporate linguistic variables and their joint effects in an SDI-development model more effectively, we suggest employing fuzzy logic. Not all fuzzy models are able to model the dynamic behavior of SDIs properly. Therefore, this paper aims to investigate different fuzzy models and their suitability for modeling SDIs. To that end, two inference and two defuzzification methods were used for the fuzzification of the joint effect of two variables in an existing SDI model. The results show that the Average-Average inference and Center of Area defuzzification can better model the dynamics of SDI development.
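To make the defuzzification step concrete, the sketch below computes a Center of Area (centroid) value over a discretized output membership function; the universe and membership shape are illustrative, not the fuzzy sets of the SDI model itself.

```python
import numpy as np

# Discretized output universe (e.g., a joint effect on a 0-1 scale) and an aggregated
# membership function produced by the inference step (illustrative clipped triangle).
x = np.linspace(0.0, 1.0, 101)
mu = np.clip(np.minimum(2.0 * x, 1.5 - x), 0.0, 1.0)

# Center of Area defuzzification: the crisp output is the membership-weighted mean of x.
crisp = np.trapz(mu * x, x) / np.trapz(mu, x)
print(f"defuzzified joint effect: {crisp:.3f}")
```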
The Objective Borderline Method: A Probabilistic Method for Standard Setting
ERIC Educational Resources Information Center
Shulruf, Boaz; Poole, Phillippa; Jones, Philip; Wilkinson, Tim
2015-01-01
A new probability-based standard setting technique, the Objective Borderline Method (OBM), was introduced recently. This was based on a mathematical model of how test scores relate to student ability. The present study refined the model and tested it using 2500 simulated data-sets. The OBM was feasible to use. On average, the OBM performed well…
ERIC Educational Resources Information Center
Shaw, Susan M.; Kemeny, Lidia
1989-01-01
Looked at techniques for promoting fitness participation among adolescent girls, in particular those which emphasize the slim ideal. Relative effectiveness of posters using different models (slim, average, overweight) and different messages (slimness, activity, health) was tested using 627 female high school students. Found slim model to be most…
A General Linear Model Approach to Adjusting the Cumulative GPA.
ERIC Educational Resources Information Center
Young, John W.
A general linear model (GLM), using least-squares techniques, was used to develop a criterion measure to replace freshman year grade point average (GPA) in college admission predictive validity studies. Problems with the use of GPA include those associated with the combination of grades from different courses and disciplines into a single measure,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wardaya, P. D., E-mail: pongga.wardaya@utp.edu.my; Noh, K. A. B. M., E-mail: pongga.wardaya@utp.edu.my; Yusoff, W. I. B. W., E-mail: pongga.wardaya@utp.edu.my
This paper discusses a new approach for investigating the seismic wave velocity of rock, specifically carbonates, as affected by their pore structures. While the conventional routine of seismic velocity measurement depends heavily on extensive laboratory experiments, the proposed approach adopts the digital rock physics view, which relies on numerical experiments. Thus, instead of using core samples, we use thin section images of carbonate rock to measure the effective seismic wave velocity travelling through it. In the numerical experiment, thin section images act as the medium in which wave propagation is simulated. For the modeling, an advanced technique based on an artificial neural network was employed to build the velocity and density profiles, replacing each image's RGB pixel values with the seismic velocity and density of the corresponding rock constituent. Then, an ultrasonic wave was simulated propagating through the thin section image using the finite difference time domain method, under the assumption of an acoustic-isotropic medium. Effective velocities were drawn from the recorded signal and compared to velocity predictions from the Wyllie time average model and the Kuster-Toksoz rock physics model. To perform the modeling, image analysis routines were undertaken to quantify the pore aspect ratio, which is assumed to represent the rock's pore structure. In addition, the porosity and mineral fractions required for velocity modeling were quantified using an integrated neural network and image analysis technique. It was found that Kuster-Toksoz gives a closer prediction to the measured velocity than the Wyllie time average model. We also conclude that the Wyllie time average, which does not incorporate the pore structure parameter, deviates significantly for samples having more than 40% porosity. Using this approach, we found good agreement between the numerical experiment and the theoretically derived rock physics model for estimating the effective seismic wave velocity of rock.
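The Wyllie time-average relation used above as a benchmark has the standard form sketched below; the fluid and matrix velocities are generic placeholders for a water-saturated carbonate, not the study's calibrated values.

```python
def wyllie_velocity(phi, v_fluid=1500.0, v_matrix=6400.0):
    """Wyllie time average: 1/V = phi/V_fluid + (1 - phi)/V_matrix (velocities in m/s)."""
    return 1.0 / (phi / v_fluid + (1.0 - phi) / v_matrix)

# The abstract notes this relation, which ignores pore structure (aspect ratio),
# deviates significantly above roughly 40% porosity, where Kuster-Toksoz does better.
for phi in (0.05, 0.20, 0.40, 0.55):
    print(f"porosity {phi:.2f}: Wyllie Vp ~ {wyllie_velocity(phi):.0f} m/s")
```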
Anatomy-based transmission factors for technique optimization in portable chest x-ray
NASA Astrophysics Data System (ADS)
Liptak, Christopher L.; Tovey, Deborah; Segars, William P.; Dong, Frank D.; Li, Xiang
2015-03-01
Portable x-ray examinations often account for a large percentage of all radiographic examinations. Currently, portable examinations do not employ automatic exposure control (AEC). To aid in the design of a size-specific technique chart, acrylic slabs of various thicknesses are often used to estimate x-ray transmission for patients of various body thicknesses. This approach, while simple, does not account for patient anatomy, tissue heterogeneity, and the attenuation properties of the human body. To better account for these factors, in this work, we determined x-ray transmission factors using computational patient models that are anatomically realistic. A Monte Carlo program was developed to model a portable x-ray system. Detailed modeling was done of the x-ray spectrum, detector positioning, collimation, and source-to-detector distance. Simulations were performed using 18 computational patient models from the extended cardiac-torso (XCAT) family (9 males, 9 females; age range: 2-58 years; weight range: 12-117 kg). The ratio of air kerma at the detector with and without a patient model was calculated as the transmission factor. Our study showed that the transmission factor decreased exponentially with increasing patient thickness. For the range of patient thicknesses examined (12-28 cm), the transmission factor ranged from approximately 21% to 1.9% when the air kerma used in the calculation represented an average over the entire imaging field of view. The transmission factor ranged from approximately 21% to 3.6% when the air kerma used in the calculation represented the average signals from two discrete AEC cells behind the lung fields. These exponential relationships may be used to optimize imaging techniques for patients of various body thicknesses to aid in the design of clinical technique charts.
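A hedged sketch of how the exponential relationship reported above could be fit and used to build a technique chart; the intermediate transmission values are illustrative interpolations consistent with the quoted endpoints (about 21% at 12 cm and 1.9% at 28 cm), not the study's actual data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Field-of-view-averaged case: patient thickness (cm) vs transmission factor (illustrative points)
thickness_cm = np.array([12.0, 16.0, 20.0, 24.0, 28.0])
transmission = np.array([0.21, 0.115, 0.063, 0.035, 0.019])

def model(t, a, mu):
    return a * np.exp(-mu * t)        # transmission ~ a * exp(-mu * thickness)

(a, mu), _ = curve_fit(model, thickness_cm, transmission, p0=(1.0, 0.1))
print(f"effective attenuation ~ {mu:.3f} per cm;"
      f" predicted transmission at 22 cm: {model(22.0, a, mu):.3f}")
```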
Price of Fairness in Kidney Exchange
2014-05-01
solver uses branch-and-price, a technique that proves optimality by incrementally generating only a small part of the model during tree search [8...factors like failure probability and chain position, as in the probabilistic model). We will use this multiplicative re-weighting in our experiments in...Table 2 gives the average loss in efficiency for each of these models over multiple generated pool sizes, with 40 runs per pool size per model, under
Evangelista, P.; Kumar, S.; Stohlgren, T.J.; Crall, A.W.; Newman, G.J.
2007-01-01
Predictive models of aboveground biomass of nonnative Tamarix ramosissima of various sizes were developed using destructive sampling techniques on 50 individuals and four 100-m2 plots. Each sample was measured for average height (m) of stems and canopy area (m2) prior to cutting, drying, and weighing. Five competing regression models (P < 0.05) were developed to estimate aboveground biomass of T. ramosissima using average height and/or canopy area measurements and were evaluated using Akaike's Information Criterion corrected for small sample size (AICc). Our best model (AICc = -148.69, ΔAICc = 0) successfully predicted T. ramosissima aboveground biomass (R2 = 0.97) and used average height and canopy area as predictors. Our 2nd-best model, using the same predictors, was also successful in predicting aboveground biomass (R2 = 0.97, AICc = -131.71, ΔAICc = 16.98). A 3rd model demonstrated high correlation between only aboveground biomass and canopy area (R2 = 0.95), while 2 additional models found high correlations between aboveground biomass and average height measurements only (R2 = 0.90 and 0.70, respectively). These models illustrate how simple field measurements, such as height and canopy area, can be used in allometric relationships to accurately predict aboveground biomass of T. ramosissima. Although a correction factor may be necessary for predictions at larger scales, the models presented will prove useful for many research and management initiatives.
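The AICc comparison used above follows the standard least-squares form sketched below; the data here are synthetic stand-ins for the height and canopy measurements, and k counts all fitted parameters including the residual variance.

```python
import numpy as np

def aicc_ls(rss, n, k):
    """AICc for a least-squares model: n*ln(RSS/n) + 2k + 2k(k+1)/(n-k-1)."""
    return n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

def rss_of_fit(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])           # add intercept
    beta, resid, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return float(resid[0]) if resid.size else float(np.sum((y - X1 @ beta) ** 2))

rng = np.random.default_rng(1)
n = 54
height = rng.uniform(0.5, 4.0, n)
canopy = rng.uniform(0.2, 12.0, n)
biomass = 1.5 * height * canopy + rng.normal(0, 1.0, n)   # synthetic allometry, not field data

candidates = {
    "height + canopy": np.column_stack([height, canopy]),
    "canopy only": canopy.reshape(-1, 1),
    "height only": height.reshape(-1, 1),
}
scores = {name: aicc_ls(rss_of_fit(X, biomass), n, X.shape[1] + 2) for name, X in candidates.items()}
best = min(scores.values())
for name, s in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name:16s} AICc = {s:8.2f}   dAICc = {s - best:6.2f}")
```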
Monthly streamflow forecasting with auto-regressive integrated moving average
NASA Astrophysics Data System (ADS)
Nasir, Najah; Samsudin, Ruhaidah; Shabri, Ani
2017-09-01
Forecasting of streamflow is one of the many ways that can contribute to better decision making for water resource management. The auto-regressive integrated moving average (ARIMA) model was selected in this research for monthly streamflow forecasting with enhancement made by pre-processing the data using singular spectrum analysis (SSA). This study also proposed an extension of the SSA technique to include a step where clustering was performed on the eigenvector pairs before reconstruction of the time series. The monthly streamflow data of Sungai Muda at Jeniang, Sungai Muda at Jambatan Syed Omar and Sungai Ketil at Kuala Pegang was gathered from the Department of Irrigation and Drainage Malaysia. A ratio of 9:1 was used to divide the data into training and testing sets. The ARIMA, SSA-ARIMA and Clustered SSA-ARIMA models were all developed in R software. Results from the proposed model are then compared to a conventional auto-regressive integrated moving average model using the root-mean-square error and mean absolute error values. It was found that the proposed model can outperform the conventional model.
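The study builds its models in R; purely for illustration, an equivalent plain ARIMA fit and 9:1 train/test evaluation in Python (statsmodels) on a synthetic monthly series is sketched below, with the SSA and clustering pre-processing steps omitted.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
# Synthetic seasonal monthly series (not the Sungai Muda or Sungai Ketil data)
months = np.arange(240)
flow = 50 + 20 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 5, months.size)

split = int(0.9 * flow.size)                  # 9:1 train/test split as in the abstract
train, test = flow[:split], flow[split:]

fit = ARIMA(train, order=(2, 1, 1)).fit()     # illustrative (p, d, q) order
forecast = fit.forecast(steps=test.size)

rmse = np.sqrt(np.mean((forecast - test) ** 2))
mae = np.mean(np.abs(forecast - test))
print(f"RMSE = {rmse:.2f}   MAE = {mae:.2f}")
```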
Model averaging and muddled multimodel inferences.
Cade, Brian S
2015-09-01
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t statistics on unstandardized estimates also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and conservation of natural resources.
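A sketch of the partial-standard-deviation rescaling discussed above, following the commonly cited Bring (1994) form s*_j = s_j · sqrt(1/VIF_j) · sqrt((n-1)/(n-p)); the data are synthetic, and the degrees-of-freedom convention (p counted here as the number of predictors) should be checked against the original sources before use.

```python
import numpy as np

def partial_sd(X):
    """Partial standard deviations: s_j * sqrt(1/VIF_j) * sqrt((n-1)/(n-p))."""
    n, p = X.shape
    sd = X.std(axis=0, ddof=1)
    vif = np.empty(p)
    for j in range(p):
        # Regress predictor j on the others to get its variance inflation factor
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ beta
        r2 = 1.0 - resid.var() / X[:, j].var()
        vif[j] = 1.0 / (1.0 - r2)
    return sd * np.sqrt(1.0 / vif) * np.sqrt((n - 1) / (n - p))

rng = np.random.default_rng(3)
x1 = rng.normal(size=200)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=200)        # strongly collinear with x1
X = np.column_stack([x1, x2, rng.normal(size=200)])
print("ordinary sd:", X.std(axis=0, ddof=1).round(3))
print("partial  sd:", partial_sd(X).round(3))      # shrinks for the collinear pair
```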
Modeling the Capacitive Deionization Process in Dual-Porosity Electrodes
Gabitto, Jorge; Tsouris, Costas
2016-04-28
In many areas of the world, there is a need to increase water availability. Capacitive deionization (CDI) is an electrochemical water treatment process that can be a viable alternative for treating water and for saving energy. A model is presented to simulate the CDI process in heterogeneous porous media comprising two different pore sizes. It is based on a theory for capacitive charging by ideally polarizable porous electrodes without Faradaic reactions or specific adsorption of ions. A two-step volume averaging technique is used to derive the averaged transport equations in the limit of thin electrical double layers. A one-equation model based on the principle of local equilibrium is derived. The constraints determining the range of application of the one-equation model are presented. The effective transport parameters for isotropic porous media are calculated solving the corresponding closure problems. The source terms that appear in the average equations are calculated using theoretical derivations. The global diffusivity is calculated by solving the closure problem.
NASA Astrophysics Data System (ADS)
Shonkwiler, K. B.; Ham, J. M.; Williams, C. M.
2013-12-01
Ammonia (NH3) that volatilizes from confined animal feeding operations (CAFOs) can form aerosols that travel long distances where such aerosols can deposit in sensitive regions, potentially causing harm to local ecosystems. However, quantifying the emissions of ammonia from CAFOs through direct measurement is very difficult and costly to perform. A system was therefore developed at Colorado State University for conditionally sampling NH3 concentrations based on weather parameters measured using inexpensive equipment. These systems use passive diffusive cartridges (Radiello, Sigma-Aldrich, St. Louis, MO, USA) that provide time-averaged concentrations representative of a two-week deployment period. The samplers are exposed by a robotic mechanism so they are only deployed when wind is from the direction of the CAFO at 1.4 m/s or greater. These concentration data, along with other weather variables measured during each sampler deployment period, can then be used in a simple inverse model (FIDES, UMR Environnement et Grandes Cultures, Thiverval-Grignon, France) to estimate emissions. There are not yet any direct comparisons of the modeled emissions derived from time-averaged concentration data to modeled emissions from more sophisticated backward Lagrangian stochastic (bLS) techniques that utilize instantaneous measurements of NH3 concentration. In the summer and autumn of 2013, a suite of robotic passive sampler systems was deployed at a 25,000-head cattle feedlot at the same time as an open-path infrared (IR) diode laser (GasFinder2, Boreal Laser Inc., Edmonton, Alberta, Canada) which continuously measured ammonia concentrations instantaneously over a 225-m path. This particular laser is utilized in agricultural settings, and in combination with a bLS model (WindTrax, Thunder Beach Scientific, Inc., Halifax, Nova Scotia, Canada), has become a common method for estimating NH3 emissions from a variety of agricultural and industrial operations. This study will first compare the ammonia concentrations measured with the Radiello system to those measured with the long-path IR laser. Second, NH3 emissions estimated using the simple inverse model (FIDES) and the time-averaged data will be compared to emissions derived from the bLS model (WindTrax) using the laser-based NH3 data. Results could lead to a more cost-efficient and simpler technique for monitoring ammonia fluxes from CAFOs and other strong areal sources.
Darmawan, M F; Yusuf, Suhaila M; Kadir, M R Abdul; Haron, H
2015-02-01
Sex estimation is used in forensic anthropology to assist the identification of individual remains. However, the estimation techniques tend to be unique and applicable only to a certain population. This paper analyzed sex estimation in living children below 19 years old using the lengths of 19 bones of the left hand with three classification techniques: Discriminant Function Analysis (DFA), Support Vector Machine (SVM) and Artificial Neural Network (ANN) multilayer perceptron. These techniques were applied to X-ray images of the left hand taken from an Asian population data set. All 19 bones of the left hand were measured using Free Image software, and all the techniques were implemented in MATLAB. The "16-19" and "7-9" years old age groups could be used for sex estimation, as their average accuracy was above 80%. The ANN model was the best classification technique, with the highest average accuracy in these two age groups compared to the other classification techniques. The results show that each classification technique achieves its best accuracy on a different age group. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
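A minimal sketch of the classification setup described above, using synthetic stand-ins for the 19 left-hand bone lengths (the actual study measured X-ray images from an Asian population and ran DFA, SVM and ANN in MATLAB; the data and model settings below are assumptions).

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
n, n_bones = 300, 19
sex = rng.integers(0, 2, n)                       # 0 = female, 1 = male (synthetic labels)
# Synthetic bone lengths: small per-bone mean shift between sexes plus individual variation
lengths = rng.normal(40, 5, (n, n_bones)) + sex[:, None] * rng.uniform(0.5, 2.0, n_bones)

for name, clf in [
    ("SVM", SVC(kernel="rbf", C=1.0)),
    ("ANN (MLP)", MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)),
]:
    accuracy = cross_val_score(make_pipeline(StandardScaler(), clf), lengths, sex, cv=5).mean()
    print(f"{name}: mean cross-validated accuracy {accuracy:.2f}")
```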
Chen, Roland K; Chastagner, Matthew W; Dodde, Robert E; Shih, Albert J
2013-02-01
The temporal and spatial tissue temperature profile in electrosurgical vessel sealing was experimentally measured and modeled using finite element modeling (FEM). Vessel sealing procedures are often performed near the neurovascular bundle and may cause collateral neural thermal damage. Therefore, the heat generated during electrosurgical vessel sealing is of concern among surgeons. Tissue temperature in an in vivo porcine femoral artery sealed using a bipolar electrosurgical device was studied. Three FEM techniques were incorporated to model the tissue evaporation, water loss, and fusion by manipulating the specific heat, electrical conductivity, and electrical contact resistance, respectively. These three techniques enable the FEM to accurately predict the vessel sealing tissue temperature profile. The averaged discrepancy between the experimentally measured temperature and the FEM predicted temperature at three thermistor locations is less than 7%. The maximum error is 23.9%. Effects of the three FEM techniques are also quantified.
Wolcott, Stephen W.; Snow, Robert F.
1995-01-01
An empirical technique was used to calculate the recharge to bedrock aquifers in northern Westchester County. This method requires delineation of ground-water divides within the aquifer area and values for (1) the extent of till and exposed bedrock within the aquifer area, and (2) mean annual runoff. This report contains maps and data needed for calculation of recharge in any given area within the 165-square-mile study area. Recharge was computed by this technique for a 93-square-mile part of the study area, and a ground-water-flow model was used to evaluate the reliability of the method. A two-layer, steady-state model of the selected area was calibrated. The area consists predominantly of bedrock overlain by small localized deposits of till and stratified drift. Ground-water-level and streamflow data collected in mid-November 1987 were used for model calibration. The data set approximates average annual conditions. The model was calibrated from (1) estimates of recharge as computed through the empirical technique, and (2) a range of values for hydrologic properties derived from aquifer tests and published literature. Recharge values used for model simulation appear to be reasonable for average steady-state conditions. Water-quality data were collected from 53 selected bedrock wells throughout northern Westchester County to define the background ground-water quality. The constituents and properties for which samples were analyzed included major cations and anions, temperature, pH, specific conductance, and hardness. Results indicate little difference in water quality among the bedrock aquifers within the study area. Ground water is mainly the calcium-bicarbonate type and is moderately hard. Average concentrations of sodium, sulfate, chloride, nitrate, iron, and manganese were within acceptable limits established by the U.S. Environmental Protection Agency for domestic water supply.
Hamby, D M
2002-01-01
Reconstructed meteorological data are often used in some form of long-term wind trajectory models for estimating the historical impacts of atmospheric emissions. Meteorological data for the straight-line Gaussian plume model are put into a joint frequency distribution, a three-dimensional array describing atmospheric wind direction, speed, and stability. Methods using the Gaussian model and joint frequency distribution inputs provide reasonable estimates of downwind concentration and have been shown to be accurate to within a factor of four. We have used multiple joint frequency distributions and probabilistic techniques to assess the Gaussian plume model and determine concentration-estimate uncertainty and model sensitivity. We examine the straight-line Gaussian model while calculating both sector-averaged and annual-averaged relative concentrations at various downwind distances. The sector-average concentration model was found to be most sensitive to wind speed, followed by vertical dispersion (σz), the importance of which increases as stability increases. The Gaussian model is not sensitive to stack height uncertainty. Precision of the frequency data appears to be the most important of the meteorological inputs when calculations are made for near-field receptors, and its importance increases as stack height increases.
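For orientation, the sector-averaged relative concentration referred to above is commonly written in the 16-sector form sketched below; the function and the example numbers are illustrative and are not taken from the cited assessment.

```python
import numpy as np

def sector_avg_chi_q(x, u, sigma_z, H, n_sectors=16):
    """Commonly used sector-averaged Gaussian plume chi/Q (s/m^3) at a ground-level receptor.

    chi/Q = sqrt(2/pi) / (sigma_z * u * (2*pi*x/n_sectors)) * exp(-H^2 / (2*sigma_z^2))
    x: downwind distance (m), u: wind speed (m/s), sigma_z: vertical dispersion (m),
    H: effective release height (m). Multiply by the wind-direction frequency f for
    long-term (annual) averages.
    """
    return (np.sqrt(2.0 / np.pi) / (sigma_z * u * (2.0 * np.pi * x / n_sectors))
            * np.exp(-H**2 / (2.0 * sigma_z**2)))

# Illustrative numbers only
print(f"chi/Q ~ {sector_avg_chi_q(x=1000.0, u=3.0, sigma_z=30.0, H=60.0):.2e} s/m^3")
```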
Solute redistribution in dendritic solidification with diffusion in the solid
NASA Technical Reports Server (NTRS)
Ganesan, S.; Poirier, D. R.
1989-01-01
An investigation of solute redistribution during dendritic solidification with diffusion in the solid has been performed using numerical techniques. The extent of diffusion is characterized by the instantaneous and average diffusion parameters. These parameters are functions of the diffusion Fourier number, the partition ratio and the fraction solid. Numerical results are presented as an approximate model, which is used to predict the average diffusion parameter and calculate the composition of the interdendritic liquid during solidification.
The average magnetic field draping and consistent plasma properties of the Venus magnetotail
NASA Technical Reports Server (NTRS)
Mccomas, D. J.; Spence, H. E.; Russell, C. T.; Saunders, M. A.
1986-01-01
The detailed average draping pattern of the magnetic field in the deep Venus magnetotail is examined. The variability of the data ordered by spatial location is studied, and the groundwork is laid for developing a coordinate system that measures locations with respect to the tail structures. The reconstruction of the tail in the presence of flapping using a new technique is shown, and the average variations in the field components are examined, including the average field vectors, cross-tail current density distribution, and J x B forces as functions of location across the tail. The average downtail velocity is derived as a function of distance, and a simple model based on the field variations is defined from which the average plasma acceleration is obtained as a function of distance, density, and temperature.
A rapid radiative transfer model for reflection of solar radiation
NASA Technical Reports Server (NTRS)
Xiang, X.; Smith, E. A.; Justus, C. G.
1994-01-01
A rapid analytical radiative transfer model for reflection of solar radiation in plane-parallel atmospheres is developed based on the Sobolev approach and the delta function transformation technique. A distinct advantage of this model over alternative two-stream solutions is that in addition to yielding the irradiance components, which turn out to be mathematically equivalent to the delta-Eddington approximation, the radiance field can also be expanded in a mathematically consistent fashion. Tests with the model against a more precise multistream discrete ordinate model over a wide range of input parameters demonstrate that the new approximate method typically produces average radiance differences of less than 5%, with worst average differences of approximately 10%-15%. By the same token, the computational speed of the new model is some tens to thousands times faster than that of the more precise model when its stream resolution is set to generate precise calculations.
Posada, David; Buckley, Thomas R
2004-10-01
Model selection is a topic of special relevance in molecular phylogenetics that affects many, if not all, stages of phylogenetic inference. Here we discuss some fundamental concepts and techniques of model selection in the context of phylogenetics. We start by reviewing different aspects of the selection of substitution models in phylogenetics from a theoretical, philosophical and practical point of view, and summarize this comparison in table format. We argue that the most commonly implemented model selection approach, the hierarchical likelihood ratio test, is not the optimal strategy for model selection in phylogenetics, and that approaches like the Akaike Information Criterion (AIC) and Bayesian methods offer important advantages. In particular, the latter two methods are able to simultaneously compare multiple nested or nonnested models, assess model selection uncertainty, and allow for the estimation of phylogenies and model parameters using all available models (model-averaged inference or multimodel inference). We also describe how the relative importance of the different parameters included in substitution models can be depicted. To illustrate some of these points, we have applied AIC-based model averaging to 37 mitochondrial DNA sequences from the subgenus Ohomopterus (genus Carabus) ground beetles described by Sota and Vogler (2001).
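The AIC-based model averaging referred to above rests on Akaike weights; a generic sketch follows (the AIC values are invented, not those of the Ohomopterus data set).

```python
import numpy as np

def akaike_weights(aic_values):
    """Akaike weights: w_i = exp(-0.5*delta_i) / sum_j exp(-0.5*delta_j), delta_i = AIC_i - min AIC."""
    aic = np.asarray(aic_values, dtype=float)
    w = np.exp(-0.5 * (aic - aic.min()))
    return w / w.sum()

# Hypothetical AIC scores for a handful of candidate substitution models
aic = {"JC69": 10250.3, "HKY85": 10112.8, "GTR": 10108.1, "GTR+G": 10041.6}
weights = akaike_weights(list(aic.values()))
for name, w in zip(aic, weights):
    print(f"{name:7s} weight = {w:.3f}")
# A model-averaged estimate of a shared parameter theta is then sum_i w_i * theta_i.
```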
Fischer, Kenneth J; Johnson, Joshua E; Waller, Alexander J; McIff, Terence E; Toby, E Bruce; Bilgen, Mehmet
2011-10-01
The objective of this study was to validate the MRI-based joint contact modeling methodology in the radiocarpal joints by comparison of model results with invasive specimen-specific radiocarpal contact measurements from four cadaver experiments. We used a single validation criterion for multiple outcome measures to characterize the utility and overall validity of the modeling approach. For each experiment, a Pressurex film and a Tekscan sensor were sequentially placed into the radiocarpal joints during simulated grasp. Computer models were constructed based on MRI visualization of the cadaver specimens without load. Images were also acquired during the loaded configuration used with the direct experimental measurements. Geometric surface models of the radius, scaphoid and lunate (including cartilage) were constructed from the images acquired without the load. The carpal bone motions from the unloaded state to the loaded state were determined using a series of 3D image registrations. Cartilage thickness was assumed uniform at 1.0 mm with an effective compressive modulus of 4 MPa. Validation was based on experimental versus model contact area, contact force, average contact pressure and peak contact pressure for the radioscaphoid and radiolunate articulations. Contact area was also measured directly from images acquired under load and compared to the experimental and model data. Qualitatively, there was good correspondence between the MRI-based model data and experimental data, with consistent relative size, shape and location of radioscaphoid and radiolunate contact regions. Quantitative data from the model generally compared well with the experimental data for all specimens. Contact area from the MRI-based model was very similar to the contact area measured directly from the images. For all outcome measures except average and peak pressures, at least two specimen models met the validation criteria with respect to experimental measurements for both articulations. Only the model for one specimen met the validation criteria for average and peak pressure of both articulations; however the experimental measures for peak pressure also exhibited high variability. MRI-based modeling can reliably be used for evaluating the contact area and contact force with similar confidence as in currently available experimental techniques. Average contact pressure, and peak contact pressure were more variable from all measurement techniques, and these measures from MRI-based modeling should be used with some caution.
Screening-level estimates of mass discharge uncertainty from point measurement methods
The uncertainty of mass discharge measurements associated with point-scale measurement techniques was investigated by deriving analytical solutions for the mass discharge coefficient of variation for two simplified, conceptual models. In the first case, a depth-averaged domain w...
Foster, Katherine T; Beltz, Adriene M
2018-08-01
Ambulatory assessment (AA) methodologies have the potential to increase understanding and treatment of addictive behavior in seemingly unprecedented ways, due in part, to their emphasis on intensive repeated assessments of an individual's addictive behavior in context. But, many analytic techniques traditionally applied to AA data - techniques that average across people and time - do not fully leverage this potential. In an effort to take advantage of the individualized, temporal nature of AA data on addictive behavior, the current paper considers three underutilized person-oriented analytic techniques: multilevel modeling, p-technique, and group iterative multiple model estimation. After reviewing prevailing analytic techniques, each person-oriented technique is presented, AA data specifications are mentioned, an example analysis using generated data is provided, and advantages and limitations are discussed; the paper closes with a brief comparison across techniques. Increasing use of person-oriented techniques will substantially enhance inferences that can be drawn from AA data on addictive behavior and has implications for the development of individualized interventions. Copyright © 2017. Published by Elsevier Ltd.
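As a concrete example of the first person-oriented technique listed above (multilevel modeling of intensive repeated measures), the sketch below fits a random-intercept, random-slope model to synthetic AA-style data; the variable names and effect sizes are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
rows = []
for person in range(40):                          # 40 people x 50 prompts (synthetic)
    slope = 0.4 + rng.normal(0, 0.2)              # person-specific craving effect
    for prompt in range(50):
        craving = rng.normal(0, 1)
        use = 0.5 + slope * craving + rng.normal(0, 1)
        rows.append((person, craving, use))
df = pd.DataFrame(rows, columns=["person", "craving", "use"])

# Observations nested within persons: random intercept and random slope for craving
model = smf.mixedlm("use ~ craving", df, groups=df["person"], re_formula="~craving")
result = model.fit()
print(result.summary())
```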
Estimating wildland fire rate of spread in a spatially nonuniform environment
Francis M Fujioka
1985-01-01
Estimating rate of fire spread is a key element in planning for effective fire control. Land managers use the Rothermel spread model, but the model assumptions are violated when fuel, weather, and topography are nonuniform. This paper compares three averaging techniques--arithmetic mean of spread rates, spread based on mean fuel conditions, and harmonic mean of spread...
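The contrast between the first and third averaging techniques named above can be made concrete with a small sketch; the spread rates and path fractions are invented.

```python
import numpy as np

# Hypothetical spread rates (m/min) for two fuel patches and the fraction of the path in each
rates = np.array([12.0, 2.0])
fractions = np.array([0.5, 0.5])

arithmetic = float(np.sum(fractions * rates))      # averages the rates themselves
harmonic = float(1.0 / np.sum(fractions / rates))  # averages travel time per unit distance

print(f"arithmetic mean spread rate: {arithmetic:.1f} m/min")
print(f"harmonic mean spread rate:   {harmonic:.1f} m/min  (slow patches dominate)")
```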
Estimating the Probability of Rare Events Occurring Using a Local Model Averaging.
Chen, Jin-Hua; Chen, Chun-Shu; Huang, Meng-Fan; Lin, Hung-Chih
2016-10-01
In statistical applications, logistic regression is a popular method for analyzing binary data accompanied by explanatory variables. But when one of the two outcomes is rare, the estimation of model parameters has been shown to be severely biased and hence estimating the probability of rare events occurring based on a logistic regression model would be inaccurate. In this article, we focus on estimating the probability of rare events occurring based on logistic regression models. Instead of selecting a best model, we propose a local model averaging procedure based on a data perturbation technique applied to different information criteria to obtain different probability estimates of rare events occurring. Then an approximately unbiased estimator of Kullback-Leibler loss is used to choose the best one among them. We design complete simulations to show the effectiveness of our approach. For illustration, a necrotizing enterocolitis (NEC) data set is analyzed. © 2016 Society for Risk Analysis.
NASA Astrophysics Data System (ADS)
Vorndran, Shelby; Russo, Juan; Zhang, Deming; Gordon, Michael; Kostuk, Raymond
2012-10-01
In this work, a concentrating photovoltaic (CPV) design methodology is proposed which aims to maximize system efficiency for a given irradiance condition. In this technique, the acceptance angle of the system is radiometrically matched to the angular spread of the site's average irradiance conditions using a simple geometric ratio. The optical efficiency of CPV systems from flat-plate to high-concentration is plotted at all irradiance conditions. Concentrator systems are measured outdoors in various irradiance conditions to test the methodology. This modeling technique is valuable at the design stage to determine the ideal level of concentration for a CPV module. It requires only two inputs: the acceptance angle profile of the system and the site's average direct and diffuse irradiance fractions. Acceptance angle can be determined by raytracing or by testing a fabricated prototype in the lab with a solar simulator. The average irradiance conditions can be found in the Typical Meteorological Year (TMY3) database. Additionally, the information gained from this technique can be used to determine tracking tolerance, quantify power loss during an isolated weather event, and perform more sophisticated analyses such as I-V curve simulation.
Studies of health risks associated with recreational water exposure require investigators to make choices about water quality indicator averaging techniques, exposure definitions, follow-up periods, and model specifications; but, investigators seldom describe the impact of these ...
O’Connell, Dylan P.; Thomas, David H.; Dou, Tai H.; Lamb, James M.; Feingold, Franklin; Low, Daniel A.; Fuld, Matthew K.; Sieren, Jered P.; Sloan, Chelsea M.; Shirk, Melissa A.; Hoffman, Eric A.; Hofmann, Christian
2015-01-01
Purpose: To demonstrate that a “5DCT” technique which utilizes fast helical acquisition yields the same respiratory-gated images as a commercial technique for regular, mechanically produced breathing cycles. Methods: Respiratory-gated images of an anesthetized, mechanically ventilated pig were generated using a Siemens low-pitch helical protocol and 5DCT for a range of breathing rates and amplitudes and with standard and low dose imaging protocols. 5DCT reconstructions were independently evaluated by measuring the distances between tissue positions predicted by a 5D motion model and those measured using deformable registration, as well by reconstructing the originally acquired scans. Discrepancies between the 5DCT and commercial reconstructions were measured using landmark correspondences. Results: The mean distance between model predicted tissue positions and deformably registered tissue positions over the nine datasets was 0.65 ± 0.28 mm. Reconstructions of the original scans were on average accurate to 0.78 ± 0.57 mm. Mean landmark displacement between the commercial and 5DCT images was 1.76 ± 1.25 mm while the maximum lung tissue motion over the breathing cycle had a mean value of 27.2 ± 4.6 mm. An image composed of the average of 30 deformably registered images acquired with a low dose protocol had 6 HU image noise (single standard deviation) in the heart versus 31 HU for the commercial images. Conclusions: An end to end evaluation of the 5DCT technique was conducted through landmark based comparison to breathing gated images acquired with a commercial protocol under highly regular ventilation. The techniques were found to agree to within 2 mm for most respiratory phases and most points in the lung. PMID:26133604
Fulford, Janice M.
2003-01-01
A numerical computer model, Transient Inundation Model for Rivers -- 2 Dimensional (TrimR2D), that solves the two-dimensional depth-averaged flow equations is documented and discussed. The model uses a semi-implicit, semi-Lagrangian finite-difference method. It is a variant of the Trim model and has been used successfully in estuarine environments such as San Francisco Bay. The abilities of the model are documented for three scenarios: uniform depth flows, laboratory dam-break flows, and large-scale riverine flows. The model can start computations from a "dry" bed and converge to accurate solutions. Inflows are expressed as source terms, which limits the use of the model to sufficiently long reaches where the flow reaches equilibrium with the channel. The data sets used by the investigation demonstrate that the model accurately propagates flood waves through long river reaches and simulates dam breaks with abrupt water-surface changes.
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Turkel, Eli
2006-01-01
We apply an unsteady Reynolds-averaged Navier-Stokes (URANS) solver for the simulation of a synthetic jet created by a single diaphragm piezoelectric actuator in quiescent air. This configuration was designated as Case 1 for the CFDVAL2004 workshop held at Williamsburg, Virginia, in March 2004. Time-averaged and instantaneous data for this case were obtained at NASA Langley Research Center, using multiple measurement techniques. Computational results for this case using one-equation Spalart-Allmaras and two-equation Menter's turbulence models are presented along with the experimental data. The effect of grid refinement, preconditioning and time-step variation are also examined in this paper.
NASA Astrophysics Data System (ADS)
Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez
2014-03-01
Soft computing techniques have recently become very popular in the oil industry. A number of computational intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities. Some of the popular methods include feed-forward neural networks, radial basis function networks, generalized regression neural networks, functional networks, support vector regression and adaptive network fuzzy inference systems. A comparative study among the most popular soft computing techniques is presented using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained using mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying the developed permeability models in a recent reservoir characterization workflow ensures consistency between micro- and macro-scale information represented mainly by Thomeer parameters and absolute permeability. The dataset was divided into two parts, with 80% of the data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step and to show better correlations with the input variables. Statistical and graphical analyses of the results, including permeability cross-plots and detailed error measures, were created. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error and root mean square error, making it the best model for such problems. The adaptive network fuzzy inference system also showed very good results.
NASA Astrophysics Data System (ADS)
Vlemmix, T.; Eskes, H. J.; Piters, A. J. M.; Schaap, M.; Sauter, F. J.; Kelder, H.; Levelt, P. F.
2015-02-01
A 14-month data set of MAX-DOAS (Multi-Axis Differential Optical Absorption Spectroscopy) tropospheric NO2 column observations in De Bilt, the Netherlands, has been compared with the regional air quality model Lotos-Euros. The model was run on a 7×7 km^2 grid, the same resolution as the emission inventory used. A study was performed to assess the effect of clouds on the retrieval accuracy of the MAX-DOAS observations. Good agreement was found between modeled and measured tropospheric NO2 columns, with an average difference of less than 1% of the average tropospheric column (14.5 · 10^15 molec cm^-2). The comparisons show little cloud cover dependence after cloud corrections for which ceilometer data were used. Hourly differences between observations and model show a Gaussian behavior with a standard deviation (σ) of 5.5 · 10^15 molec cm^-2. For daily averages of tropospheric NO2 columns, a correlation of 0.72 was found for all observations, and 0.79 for cloud free conditions. The measured and modeled tropospheric NO2 columns have an almost identical distribution over the wind direction. A significant difference between model and measurements was found for the average weekly cycle, which shows a much stronger decrease during the weekend for the observations; for the diurnal cycle, the observed range is about twice as large as the modeled range. The results of the comparison demonstrate that averaged over a long time period, the tropospheric NO2 column observations are representative for a large spatial area despite the fact that they were obtained in an urban region. This makes the MAX-DOAS technique especially suitable for validation of satellite observations and air quality models in urban regions.
Numerical Modeling of the Vertical Heat Transport Through the Diffusive Layer of the Arctic Ocean
2013-03-01
vertical heat transport through Arctic thermohaline staircases over time. Re-engaging in the inverse modeling technique that was started by Chaplin ... Figure: Temperature–Salinity plots for ITPs 1-6 (after Chaplin 2009).
LITHO1.0: An Updated Crust and Lithosphere Model of the Earth
2010-09-01
we are uncertain what causes the remainder of the discrepancy. The measurement discrepancies are much smaller than the signal in the data, and the ... short-period group velocity data measured with a new technique which are sensitive to lid properties as well as crustal thickness and average ... most progress was made on surface-wave measurements. We use a cluster analysis technique to measure surface-wave group velocity from 10 mHz to 40 mHz
A hybrid SEA/modal technique for modeling structural-acoustic interior noise in rotorcraft.
Jayachandran, V; Bonilha, M W
2003-03-01
This paper describes a hybrid technique that combines Statistical Energy Analysis (SEA) predictions for structural vibration with acoustic modal summation techniques to predict interior noise levels in rotorcraft. The method was applied for predicting the sound field inside a mock-up of the interior panel system of the Sikorsky S-92 helicopter. The vibration amplitudes of the frame and panel systems were predicted using a detailed SEA model and these were used as inputs to the model of the interior acoustic space. The spatial distribution of the vibration field on individual panels, and their coupling to the acoustic space were modeled using stochastic techniques. Leakage and nonresonant transmission components were accounted for using space-averaged values obtained from a SEA model of the complete structural-acoustic system. Since the cabin geometry was quite simple, the modeling of the interior acoustic space was performed using a standard modal summation technique. Sound pressure levels predicted by this approach at specific microphone locations were compared with measured data. Agreement within 3 dB in one-third octave bands above 40 Hz was observed. A large discrepancy in the one-third octave band in which the first acoustic mode is resonant (31.5 Hz) was observed. Reasons for such a discrepancy are discussed in the paper. The developed technique provides a method for modeling helicopter cabin interior noise in the frequency mid-range where neither FEA nor SEA is individually effective or accurate.
APOLLO: a quality assessment service for single and multiple protein models.
Wang, Zheng; Eickholt, Jesse; Cheng, Jianlin
2011-06-15
We built a web server named APOLLO, which can evaluate the absolute global and local qualities of a single protein model using machine learning methods or the global and local qualities of a pool of models using a pair-wise comparison approach. Based on our evaluations on 107 CASP9 (Critical Assessment of Techniques for Protein Structure Prediction) targets, the predicted quality scores generated from our machine learning and pair-wise methods have an average per-target correlation of 0.671 and 0.917, respectively, with the true model quality scores. Based on our test on 92 CASP9 targets, our predicted absolute local qualities have an average difference of 2.60 Å with the actual distances to native structure. http://sysbio.rnet.missouri.edu/apollo/. Single and pair-wise global quality assessment software is also available at the site.
Estimation of Cloud Fraction Profile in Shallow Convection Using a Scanning Cloud Radar
Oue, Mariko; Kollias, Pavlos; North, Kirk W.; ...
2016-10-18
Large spatial heterogeneities in shallow convection result in uncertainties in estimations of domain-averaged cloud fraction profiles (CFP). This issue is addressed using large eddy simulations of shallow convection over land coupled with a radar simulator. Results indicate that zenith profiling observations are inadequate to provide reliable CFP estimates. Use of Scanning Cloud Radar (SCR), performing a sequence of cross-wind horizon-to-horizon scans, is not straightforward due to the strong dependence of radar sensitivity on target distance. An objective method for estimating domain-averaged CFP is proposed that uses observed statistics of SCR hydrometeor detection with height to estimate optimum sampling regions. This method shows good agreement with the model CFP. Results indicate that CFP estimates require more than 35 min of SCR scans to converge on the model domain average. Lastly, the proposed technique is expected to improve our ability to compare model output with cloud radar observations in shallow cumulus cloud conditions.
Wavelet regression model in forecasting crude oil price
NASA Astrophysics Data System (ADS)
Hamid, Mohd Helmie; Shabri, Ani
2017-05-01
This study presents the performance of the wavelet multiple linear regression (WMLR) technique in daily crude oil forecasting. The WMLR model was developed by integrating the discrete wavelet transform (DWT) and the multiple linear regression (MLR) model. The original time series was decomposed into sub-time series of different scales by wavelet theory. Correlation analysis was conducted to assist in the selection of optimal decomposed components as inputs for the WMLR model. The daily WTI crude oil price series was used in this study to test the prediction capability of the proposed model. The forecasting performance of the WMLR model was also compared with regular multiple linear regression (MLR), the Autoregressive Integrated Moving Average (ARIMA) model and Generalized Autoregressive Conditional Heteroscedasticity (GARCH) using root mean square error (RMSE) and mean absolute error (MAE). Based on the experimental results, it appears that the WMLR model performs better than the other forecasting techniques tested in this study.
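The sketch below illustrates the general WMLR idea under stated assumptions: decompose the series with a DWT (PyWavelets), rebuild one sub-series per wavelet component, and regress the next-day price on the sub-series. The wavelet, decomposition level, lag and data file are assumptions, and unlike the paper all components are used rather than those selected by correlation analysis.

```python
# WMLR-style sketch (not the authors' code): DWT decomposition + multiple linear
# regression of the next-day price on per-level sub-series. Wavelet ('db4'),
# level (3), lag (1) and the data file are assumptions for illustration only.
import numpy as np
import pywt
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, mean_absolute_error

price = np.loadtxt("wti_daily.txt")          # hypothetical daily WTI price series

def subseries(x, wavelet="db4", level=3):
    """Reconstruct one sub-series per wavelet component (A3, D3, D2, D1)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    out = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        out.append(pywt.waverec(keep, wavelet)[: len(x)])
    return np.column_stack(out)

lag = 1
S = subseries(price)
X, y = S[:-lag], price[lag:]                 # predict the price one step ahead
split = int(0.8 * len(y))

mlr = LinearRegression().fit(X[:split], y[:split])
pred = mlr.predict(X[split:])
print("RMSE:", np.sqrt(mean_squared_error(y[split:], pred)))
print("MAE :", mean_absolute_error(y[split:], pred))
```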
NASA Astrophysics Data System (ADS)
Wardaya, P. D.; Noh, K. A. B. M.; Yusoff, W. I. B. W.; Ridha, S.; Nurhandoko, B. E. B.
2014-09-01
This paper discusses a new approach for investigating the seismic wave velocity of rock, specifically carbonates, as affected by their pore structures. While the conventional routine of seismic velocity measurement depends heavily on extensive laboratory experiments, the proposed approach utilizes the digital rock physics view, which relies on numerical experiments. Thus, instead of using a core sample, we use the thin section image of carbonate rock to measure the effective seismic wave velocity when travelling through it. In the numerical experiment, thin section images act as the medium in which wave propagation is simulated. For the modeling, an advanced technique based on an artificial neural network was employed to build the velocity and density profiles, replacing each image's RGB pixel value with the seismic velocity and density of the corresponding rock constituent. Then, ultrasonic wave propagation through the thin section image was simulated using the finite difference time domain method, under the assumption of an acoustic-isotropic medium. Effective velocities were drawn from the recorded signal and compared to velocity modeling from the Wyllie time average model and the Kuster-Toksoz rock physics model. To perform the modeling, image analysis routines were undertaken to quantify the pore aspect ratio, which is assumed to represent the rock's pore structure. In addition, the porosity and mineral fractions required for velocity modeling were also quantified using an integrated neural network and image analysis technique. It was found that the Kuster-Toksoz model gives a closer prediction to the measured velocity than the Wyllie time average model. We also conclude that the Wyllie time average model, which does not incorporate the pore structure parameter, deviates significantly for samples having more than 40% porosity. Utilizing this approach, we found good agreement between the numerical experiment and the theoretically derived rock physics model for estimating the effective seismic wave velocity of rock.
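For reference, the Wyllie time-average relation used as one of the benchmarks above expresses the rock's slowness as the porosity-weighted sum of fluid and matrix slownesses. The sketch below uses assumed brine and calcite velocities, not values taken from the paper, and makes clear why the model degrades at high porosity where pore shape matters.

```python
# Sketch of the Wyllie time-average model referenced above: 1/V = phi/V_fluid + (1-phi)/V_matrix.
# The brine and calcite velocities are assumed illustrative values, not from the paper.
import numpy as np

def wyllie_velocity(phi, v_fluid=1500.0, v_matrix=6640.0):
    """Effective P-wave velocity (m/s) from porosity phi via the time-average relation."""
    return 1.0 / (phi / v_fluid + (1.0 - phi) / v_matrix)

porosity = np.array([0.05, 0.15, 0.30, 0.45])
print(wyllie_velocity(porosity))   # the relation ignores pore shape, which is why it
                                   # deviates strongly for porosities above ~40%
```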
Comparing methods for modelling spreading cell fronts.
Markham, Deborah C; Simpson, Matthew J; Maini, Philip K; Gaffney, Eamonn A; Baker, Ruth E
2014-07-21
Spreading cell fronts play an essential role in many physiological processes. Classically, models of this process are based on the Fisher-Kolmogorov equation; however, such continuum representations are not always suitable as they do not explicitly represent behaviour at the level of individual cells. Additionally, many models examine only the large time asymptotic behaviour, where a travelling wave front with a constant speed has been established. Many experiments, such as a scratch assay, never display this asymptotic behaviour, and in these cases the transient behaviour must be taken into account. We examine the transient and the asymptotic behaviour of moving cell fronts using techniques that go beyond the continuum approximation via a volume-excluding birth-migration process on a regular one-dimensional lattice. We approximate the averaged discrete results using three methods: (i) mean-field, (ii) pair-wise, and (iii) one-hole approximations. We discuss the performance of these methods, in comparison to the averaged discrete results, for a range of parameter space, examining both the transient and asymptotic behaviours. The one-hole approximation, based on techniques from statistical physics, is not capable of predicting transient behaviour but provides excellent agreement with the asymptotic behaviour of the averaged discrete results, provided that cells are proliferating fast enough relative to their rate of migration. The mean-field and pair-wise approximations give indistinguishable asymptotic results, which agree with the averaged discrete results when cells are migrating much more rapidly than they are proliferating. The pair-wise approximation performs better in the transient region than does the mean-field, despite having the same asymptotic behaviour. Our results show that each approximation only works in specific situations, thus we must be careful to use a suitable approximation for a given system, otherwise inaccurate predictions could be made. Copyright © 2014 Elsevier Ltd. All rights reserved.
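To make the mean-field continuum approximation concrete, the sketch below solves the Fisher-Kolmogorov equation dC/dt = D C_xx + λC(1−C) that arises as the mean-field limit of a 1D volume-excluding birth-migration lattice model, with D and λ written in terms of assumed per-step motility and proliferation probabilities. This is an illustrative sketch, not the authors' code, and the parameter values are assumptions.

```python
# Mean-field (Fisher-Kolmogorov) sketch of a 1D volume-excluding birth-migration
# model: dC/dt = D*C_xx + lam*C*(1 - C), with D = Pm*dx^2/(2*dt) and lam = Pp/dt
# for assumed per-step motility/proliferation probabilities Pm, Pp.
import numpy as np

dx, dt = 1.0, 0.1
Pm, Pp = 0.5, 0.01                       # assumed per-step probabilities
D, lam = Pm * dx**2 / (2 * dt), Pp / dt  # continuum coefficients

L, T = 200, 1000                         # lattice sites, time steps
C = np.zeros(L)
C[:20] = 1.0                             # initially confluent region on the left

for _ in range(T):
    lap = (np.roll(C, 1) - 2 * C + np.roll(C, -1)) / dx**2
    lap[0] = 2 * (C[1] - C[0]) / dx**2   # zero-flux boundaries
    lap[-1] = 2 * (C[-2] - C[-1]) / dx**2
    C = C + dt * (D * lap + lam * C * (1.0 - C))

# C approximates the column-averaged occupancy of the discrete model; at long times
# the front speed approaches the classical value 2*sqrt(D*lam) (= 1.0 here).
print(C[:10], 2 * np.sqrt(D * lam))
```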
Development and Application of Agglomerated Multigrid Methods for Complex Geometries
NASA Technical Reports Server (NTRS)
Nishikawa, Hiroaki; Diskin, Boris; Thomas, James L.
2010-01-01
We report progress in the development of agglomerated multigrid techniques for fully unstructured grids in three dimensions, building upon two previous studies focused on efficiently solving a model diffusion equation. We demonstrate a robust fully-coarsened agglomerated multigrid technique for 3D complex geometries, incorporating the following key developments: consistent and stable coarse-grid discretizations, a hierarchical agglomeration scheme, and line-agglomeration/relaxation using prismatic-cell discretizations in the highly-stretched grid regions. A significant speed-up in computer time is demonstrated for a model diffusion problem, the Euler equations, and the Reynolds-averaged Navier-Stokes equations for 3D realistic complex geometries.
NASA Astrophysics Data System (ADS)
Matsunaga, Y.; Sugita, Y.
2018-06-01
A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then, the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data are available as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states in more robust ways compared to that from ensemble-averaged data although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements including single-molecule time-series trajectories.
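The first step described, building an MSM from discretised MD trajectories, amounts to counting lagged transitions and row-normalising. The sketch below shows that step only, on a toy trajectory, and omits the experimental-refinement stage; the lag time and state count are assumptions.

```python
# Minimal sketch of the MSM-construction step described above: count transitions at
# a chosen lag time in a discretised trajectory and row-normalise. The refinement
# against experimental data is not shown; lag and state count are assumptions.
import numpy as np

def msm_transition_matrix(dtraj, n_states, lag=10):
    counts = np.zeros((n_states, n_states))
    for i, j in zip(dtraj[:-lag], dtraj[lag:]):
        counts[i, j] += 1.0
    counts += 1e-8                                  # avoid empty rows
    return counts / counts.sum(axis=1, keepdims=True)

# toy discrete trajectory over 3 conformational states (hypothetical data)
rng = np.random.default_rng(0)
dtraj = rng.integers(0, 3, size=10000)
T = msm_transition_matrix(dtraj, n_states=3)

# equilibrium populations: left eigenvector of T with eigenvalue 1
evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
print(T, pi / pi.sum())
```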
Mathematical neuroscience: from neurons to circuits to systems.
Gutkin, Boris; Pinto, David; Ermentrout, Bard
2003-01-01
Applications of mathematics and computational techniques to our understanding of neuronal systems are provided. Reduction of membrane models to simplified canonical models demonstrates how neuronal spike-time statistics follow from simple properties of neurons. Averaging over space allows one to derive a simple model for the whisker barrel circuit and use this to explain and suggest several experiments. Spatio-temporal pattern formation methods are applied to explain the patterns seen in the early stages of drug-induced visual hallucinations.
Squids in the Study of Cerebral Magnetic Field
NASA Astrophysics Data System (ADS)
Romani, G. L.; Narici, L.
The following sections are included: * INTRODUCTION * HISTORICAL OVERVIEW * NEUROMAGNETIC FIELDS AND AMBIENT NOISE * DETECTORS * Room temperature sensors * SQUIDs * DETECTION COILS * Magnetometers * Gradiometers * Balancing * Planar gradiometers * Choice of the gradiometer parameters * MODELING * Current pattern due to neural excitations * Action potentials and postsynaptic currents * The current dipole model * Neural population and detected fields * Spherically bounded medium * SPATIAL CONFIGURATION OF THE SENSORS * SOURCE LOCALIZATION * Localization procedure * Experimental accuracy and reproducibility * SIGNAL PROCESSING * Analog Filtering * Bandpass filters * Line rejection filters * DATA ANALYSIS * Analysis of evoked/event-related responses * Simple average * Selected average * Recursive techniques * Similarity analysis * Analysis of spontaneous activity * Mapping and localization * EXAMPLES OF NEUROMAGNETIC STUDIES * Neuromagnetic measurements * Studies on the normal brain * Clinical applications * Epilepsy * Tinnitus * CONCLUSIONS * ACKNOWLEDGEMENTS * REFERENCES
Research on ionospheric tomography based on variable pixel height
NASA Astrophysics Data System (ADS)
Zheng, Dunyong; Li, Peiqing; He, Jie; Hu, Wusheng; Li, Chaokui
2016-05-01
A novel ionospheric tomography technique based on variable pixel height was developed for the tomographic reconstruction of the ionospheric electron density distribution. The method considers the height of each pixel as an unknown variable, which is retrieved during the inversion process together with the electron density values. In contrast to conventional computerized ionospheric tomography (CIT), which parameterizes the model with a fixed pixel height, the variable-pixel-height computerized ionospheric tomography (VHCIT) model applies a disturbance to the height of each pixel. In comparison with conventional CIT models, the VHCIT technique achieved superior results in a numerical simulation. A careful validation of the reliability and superiority of VHCIT was performed. According to the statistical analysis of the average root mean square errors, the proposed model offers an improvement of 15% over conventional CIT models.
Ansari, Mozafar; Othman, Faridah; Abunama, Taher; El-Shafie, Ahmed
2018-04-01
The function of a sewage treatment plant is to treat the sewage to acceptable standards before being discharged into the receiving waters. To design and operate such plants, it is necessary to measure and predict the influent flow rate. In this research, the influent flow rate of a sewage treatment plant (STP) was modelled and predicted by autoregressive integrated moving average (ARIMA), nonlinear autoregressive network (NAR) and support vector machine (SVM) regression time series algorithms. To evaluate the models' accuracy, the root mean square error (RMSE) and coefficient of determination (R^2) were calculated as initial assessment measures, while relative error (RE), peak flow criterion (PFC) and low flow criterion (LFC) were calculated as final evaluation measures to demonstrate the detailed accuracy of the selected models. An integrated model was developed based on the individual models' prediction ability for low, average and peak flow. An initial assessment of the results showed that the ARIMA model was the least accurate and the NAR model was the most accurate. The RE results also prove that the SVM model's frequency of errors above 10% or below -10% was greater than the NAR model's. The influent was also forecasted up to 44 weeks ahead by both models. The graphical results indicate that the NAR model made better predictions than the SVM model. The final evaluation of NAR and SVM demonstrated that SVM made better predictions at peak flow and NAR fit well for low and average inflow ranges. The integrated model developed includes the NAR model for low and average influent and the SVM model for peak inflow.
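As an illustration of one of the compared approaches, the sketch below fits an SVM regression to lagged weekly inflow values and scores it with RMSE and R^2, assuming scikit-learn; the data file, number of lags and hyperparameters are assumptions, not the study's settings.

```python
# Sketch (not the authors' code) of the SVM-regression branch: predict next-week
# influent from lagged weekly flows and score with RMSE and R^2. The data file,
# lag count and SVR hyperparameters are assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, r2_score

flow = np.loadtxt("weekly_influent.txt")      # hypothetical weekly inflow series
lags = 4
X = np.column_stack([flow[i:len(flow) - lags + i] for i in range(lags)])
y = flow[lags:]                               # target: flow one week ahead

split = int(0.8 * len(y))
svr = SVR(kernel="rbf", C=100.0, epsilon=0.1).fit(X[:split], y[:split])
pred = svr.predict(X[split:])

print("RMSE:", np.sqrt(mean_squared_error(y[split:], pred)))
print("R^2 :", r2_score(y[split:], pred))
```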
Setting analyst: A practical harvest planning technique
Olivier R.M. Halleux; W. Dale Greene
2001-01-01
Setting Analyst is an ArcView extension that facilitates practical harvest planning for ground-based systems. By modeling the travel patterns of ground-based machines, it compares different harvesting settings based on projected average skidding distance, logging costs, and site disturbance levels. Setting Analyst uses information commonly available to consulting...
Shrestha, Badri Man; Haylor, John
2017-11-15
Rat models of renal transplant are used to investigate immunologic processes and responses to therapeutic agents before their translation into routine clinical practice. In this study, we have described details of rat surgical anatomy and our experiences with the microvascular surgical technique relevant to renal transplant by employing donor inferior vena cava and aortic conduits. For this study, 175 rats (151 Lewis and 24 Fisher) were used to establish the Fisher-Lewis rat model of chronic allograft injury at our institution. Anatomic and technical details were recorded during the period of training and establishment of the model. A final group of 12 transplanted rats were studied for an average duration of 51 weeks for the Lewis-to-Lewis isografts (5 rats) and 42 weeks for the Fisher-to-Lewis allografts (7 rats). Functional measurements and histology confirmed the diagnosis of chronic allograft injury. Mastering the anatomic details and microvascular surgical techniques can lead to the successful establishment of an experimental renal transplant model.
Inter-comparison of time series models of lake levels predicted by several modeling strategies
NASA Astrophysics Data System (ADS)
Khatibi, R.; Ghorbani, M. A.; Naghipour, L.; Jothiprakash, V.; Fathima, T. A.; Fazelifard, M. H.
2014-04-01
Five modeling strategies are employed to analyze water level time series of six lakes with different physical characteristics such as shape, size, altitude and range of variations. The models comprise chaos theory, Auto-Regressive Integrated Moving Average (ARIMA) - treated for seasonality and hence SARIMA - Artificial Neural Networks (ANN), Gene Expression Programming (GEP) and Multiple Linear Regression (MLR). Each is formulated on a different premise with different underlying assumptions. Chaos theory is elaborated in greater detail, as it is customary to identify the existence of chaotic signals by a number of techniques (e.g., average mutual information and false nearest neighbors) before future values are predicted using the Nonlinear Local Prediction (NLP) technique. This paper takes a critical view of past inter-comparison studies that sought a single superior model, and reports that (i) the performances of all five modeling strategies vary from good to poor, hampering the recommendation of a clear-cut predictive model; (ii) the performances of the datasets of two cases are consistently better with all five modeling strategies; (iii) in the other cases, the performances are poor but the results can still be fit-for-purpose; (iv) the simultaneously good performances of NLP and SARIMA pull their underlying assumptions to different ends, which cannot be reconciled. A number of arguments are presented, including the culture of pluralism, according to which the various modeling strategies facilitate an insight into the data from different vantages.
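One of the chaos-theory diagnostics named above, the average mutual information (AMI), is commonly estimated with a two-dimensional histogram between the series and its lagged copy; its first minimum is a standard choice of embedding delay. The sketch below is a generic histogram-based estimate, not the authors' implementation, and the data file and bin count are assumptions.

```python
# Sketch of a histogram-based average mutual information estimate between a lake
# level series and its lagged copy; the AMI minimum suggests an embedding delay.
# Not the authors' code; data file and bin count are assumptions.
import numpy as np

def average_mutual_information(x, lag, bins=16):
    a, b = x[:-lag], x[lag:]
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

levels = np.loadtxt("lake_levels.txt")        # hypothetical monthly lake levels
ami = [average_mutual_information(levels, k) for k in range(1, 25)]
print("suggested embedding delay (AMI minimum):", int(np.argmin(ami)) + 1)
```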
Calculating High Speed Centrifugal Compressor Performance from Averaged Measurements
NASA Astrophysics Data System (ADS)
Lou, Fangyuan; Fleming, Ryan; Key, Nicole L.
2012-12-01
To improve the understanding of high performance centrifugal compressors found in modern aircraft engines, the aerodynamics through these machines must be experimentally studied. To accurately capture the complex flow phenomena through these devices, research facilities that can accurately simulate these flows are necessary. One such facility has been recently developed, and it is used in this paper to explore the effects of averaging total pressure and total temperature measurements to calculate compressor performance. Different averaging techniques (including area averaging, mass averaging, and work averaging) have been applied to the data. Results show that there is a negligible difference in both the calculated total pressure ratio and efficiency for the different techniques employed. However, the uncertainty in the performance parameters calculated with the different averaging techniques is significantly different, with area averaging providing the least uncertainty.
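The difference between the averaging definitions compared above can be shown compactly: area averaging weights each probe by its element area, while mass averaging additionally weights by the local mass flux. The sketch below uses hypothetical rake values (work averaging is omitted) and is not the facility's data-reduction code.

```python
# Sketch of area-averaged vs mass-averaged total pressure over rake probe locations.
# Element areas, local mass fluxes and pressures are hypothetical sample values.
import numpy as np

p_total = np.array([301.2, 305.7, 309.4, 310.1, 307.8])    # total pressure at each probe (kPa)
area    = np.array([0.8, 1.0, 1.2, 1.2, 0.9])               # element areas (arbitrary units)
rho_u   = np.array([40.0, 55.0, 62.0, 60.0, 48.0])          # local mass flux rho*u

area_avg = np.sum(p_total * area) / np.sum(area)
mass_avg = np.sum(p_total * rho_u * area) / np.sum(rho_u * area)

print(f"area-averaged Pt = {area_avg:.2f} kPa, mass-averaged Pt = {mass_avg:.2f} kPa")
```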
Bearing tester data compilation analysis, and reporting and bearing math modeling
NASA Technical Reports Server (NTRS)
Cody, J. C.
1986-01-01
Integration of heat transfer coefficients, modified to account for local vapor quality, into the 45 mm bearing model has been completed. The model has been evaluated with two flow rates and subcooled and saturated coolant. The evaluation showed that by increasing the flow from 3.6 to 7.0 lbs/sec the average ball temperature was decreased by 102 F, using a coolant temperature of -230 F. The average ball temperature was decreased by 63 F by decreasing the inlet coolant temperature from saturated to -230 F at a flow rate of 7.0 lbs/sec. Since other factors such as friction, cage heating, etc., affect bearing temperatures, the above bearing temperature effects should be considered as trends and not absolute values. The two phase heat transfer modification has been installed in the 57 mm bearing model and the effects on bearing temperatures have been evaluated. The average ball temperature was decreased by 60 F by increasing the flow rate from 4.6 to 9.0 lbs/sec for the subcooled case. By decreasing the inlet coolant temperature from saturation to -24 F, the average ball temperature was decreased 57 F for a flow rate of 9.0 lbs/sec. The technique of relating the two phase heat transfer coefficient to local vapor quality will be applied to the tester model and compared with test data.
NASA Astrophysics Data System (ADS)
Dasenbrock-Gammon, Nathan; Zacate, Matthew O.
2017-05-01
Baker et al. derived time-dependent expressions for calculating average number of jumps per encounter and displacement probabilities for vacancy diffusion in crystal lattice systems with infinitesimal vacancy concentrations. As shown in this work, their formulation is readily expanded to include finite vacancy concentration, which allows calculation of concentration-dependent, time-averaged quantities. This is useful because it provides a computationally efficient method to express lineshapes of nuclear spectroscopic techniques through the use of stochastic fluctuation models.
NASA Astrophysics Data System (ADS)
Zhu, Li; Najafizadeh, Laleh
2017-06-01
We investigate the problem related to the averaging procedure in functional near-infrared spectroscopy (fNIRS) brain imaging studies. Typically, to reduce noise and to empower the signal strength associated with task-induced activities, recorded signals (e.g., in response to repeated stimuli or from a group of individuals) are averaged through a point-by-point conventional averaging technique. However, due to the existence of variable latencies in recorded activities, the use of the conventional averaging technique can lead to inaccuracies and loss of information in the averaged signal, which may result in inaccurate conclusions about the functionality of the brain. To improve the averaging accuracy in the presence of variable latencies, we present an averaging framework that employs dynamic time warping (DTW) to account for the temporal variation in the alignment of fNIRS signals to be averaged. As a proof of concept, we focus on the problem of localizing task-induced active brain regions. The framework is extensively tested on experimental data (obtained from both block design and event-related design experiments) as well as on simulated data. In all cases, it is shown that the DTW-based averaging technique outperforms the conventional-based averaging technique in estimating the location of task-induced active regions in the brain, suggesting that such advanced averaging methods should be employed in fNIRS brain imaging studies.
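The core idea, aligning each trial to a reference with dynamic time warping before averaging rather than averaging point by point, can be sketched with a plain dynamic-programming DTW. The toy trials below are latency-jittered Gaussian responses, not fNIRS data, and this is not the authors' implementation.

```python
# Minimal sketch of DTW-based trial averaging vs conventional point-by-point
# averaging of latency-jittered responses. Not the authors' implementation.
import numpy as np

def dtw_path(a, b):
    """Return the DTW alignment path between 1-D signals a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    path, i, j = [], n, m                      # backtrack from the end
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def dtw_average(trials):
    ref = trials[0]                            # first trial used as the alignment reference
    warped = [ref]
    for trial in trials[1:]:
        aligned = np.zeros_like(ref)
        counts = np.zeros_like(ref)
        for i, j in dtw_path(ref, trial):
            aligned[i] += trial[j]
            counts[i] += 1
        warped.append(aligned / np.maximum(counts, 1))
    return np.mean(warped, axis=0)

# toy trials: the same response with variable latency plus noise (hypothetical data)
rng = np.random.default_rng(1)
t = np.linspace(0, 20, 200)
trials = [np.exp(-((t - 8 - rng.normal(0, 1.5)) ** 2) / 4) + 0.05 * rng.normal(size=t.size)
          for _ in range(20)]
conventional = np.mean(trials, axis=0)
dtw_based = dtw_average(trials)
print(conventional.max(), dtw_based.max())     # DTW averaging preserves peak amplitude better
```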
Model-checking techniques based on cumulative residuals.
Lin, D Y; Wei, L J; Ying, Z
2002-03-01
Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
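The sketch below illustrates the flavour of this approach under simplifying assumptions: the residuals of a (deliberately misspecified) linear fit are cumulated along a covariate, and the observed path is compared with realisations in which the residuals are perturbed by standard normal multipliers as a stand-in for the zero-mean Gaussian process. The full method also accounts for parameter-estimation variability, which is omitted here; this is not the authors' software.

```python
# Simplified sketch of the cumulative-residual check: cumulate residuals ordered by
# a covariate and compare with multiplier-perturbed realisations. The correction for
# estimated regression parameters is omitted for brevity; not the authors' software.
import numpy as np

rng = np.random.default_rng(0)
n = 300
x = rng.uniform(0, 4, n)
y = 1.0 + 0.5 * x**2 + rng.normal(0, 0.5, n)       # true relationship is quadratic

X = np.column_stack([np.ones(n), x])               # fit a misspecified linear model
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

order = np.argsort(x)
W_obs = np.cumsum(resid[order]) / np.sqrt(n)       # observed cumulative-residual process

n_sim = 1000                                       # realisations under the assumed model
W_sim = np.array([np.cumsum((resid * rng.normal(size=n))[order]) / np.sqrt(n)
                  for _ in range(n_sim)])

# supremum-type comparison: how often does a simulated path exceed the observed one?
p_value = np.mean(np.abs(W_sim).max(axis=1) >= np.abs(W_obs).max())
print("sup|W_obs| =", np.abs(W_obs).max(), " approximate p-value =", p_value)
```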
1987-06-01
number of series among the 63 which were identified as a particular ARIMA form and were "best" modeled by a particular technique. Figure 1 illustrates a... th time from xe's. The integrated autoregressive-moving average model, denoted by ARIMA (p,d,q), is a result of combining the d-th differencing process... Experiments, (4) Data Analysis and Modeling, (5) Theory and Probabilistic Inference, (6) Fuzzy Statistics, (7) Forecasting and Prediction, (8) Small Sample
Aortic Root Biomechanics After Sleeve and David Sparing Techniques: A Finite Element Analysis.
Tasca, Giordano; Selmi, Matteo; Votta, Emiliano; Redaelli, Paola; Sturla, Francesco; Redaelli, Alberto; Gamba, Amando
2017-05-01
Aortic root aneurysm can be treated with valve-sparing procedures. The David and Yacoub techniques have shown excellent long-term results but are technically demanding. Recently, a new and simpler procedure, the Sleeve technique, was proposed with encouraging results. We aimed to quantify the biomechanics of the initially aneurysmal aortic root (AR) after the Sleeve procedure to assess whether it induces abnormal stresses, potentially undermining its durability. Two finite element (FE) models of the physiologic and aneurysmal AR were built, accounting for the anatomical asymmetry and the nonlinear and anisotropic mechanical properties of human AR tissues. On the aneurysmal model, the Sleeve and David techniques were simulated based on the corresponding published technical features. Aortic root biomechanics throughout 2 consecutive cardiac cycles were computed in each simulated configuration. Both sparing techniques restored physiologic-like kinematics of the aortic valve (AV) leaflets but induced different leaflet stresses. The stress time course averaged over the leaflets' bellies was 35% higher in the David model than in the Sleeve model. Commissural stresses, which were equal to 153 and 318 kPa in the physiologic and aneurysmal models, respectively, became 369 and 208 kPa in the David and Sleeve models, respectively. No intrinsic structural problems were detected in the Sleeve model that might jeopardize the durability of the procedure. If corroborated by long-term clinical outcomes, these results suggest that using this new technique could successfully simplify the surgical repair of AR aneurysms and reduce intraoperative complications. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
Jorgenson, D B; Haynor, D R; Bardy, G H; Kim, Y
1995-02-01
A method for constructing and solving detailed patient-specific 3-D finite element models of the human thorax is presented for use in defibrillation studies. The method utilizes the patient's own X-ray CT scan and a simplified meshing scheme to quickly and efficiently generate a model typically composed of approximately 400,000 elements. A parameter sensitivity study on one human thorax model to examine the effects of variation in assigned tissue resistivity values, level of anatomical detail included in the model, and number of CT slices used to produce the model is presented. Of the seven tissue types examined, the average left ventricular (LV) myocardial voltage gradient was most sensitive to the values of myocardial and blood resistivity. Incorrectly simplifying the model, for example modeling the heart as a homogeneous structure by ignoring the blood in the chambers, caused the average LV myocardial voltage gradient to increase by 12%. The sensitivity of the model to variations in electrode size and position was also examined. Small changes (< 2.0 cm) in electrode position caused average LV myocardial voltage gradient values to increase by up to 12%. We conclude that patient-specific 3-D finite element modeling of human thoracic electric fields is feasible and may reduce the empiric approach to insertion of implantable defibrillators and improve transthoracic defibrillation techniques.
Two-Stage Bayesian Model Averaging in Endogenous Variable Models*
Lenkoski, Alex; Eicher, Theo S.; Raftery, Adrian E.
2013-01-01
Economic modeling in the presence of endogeneity is subject to model uncertainty at both the instrument and covariate level. We propose a Two-Stage Bayesian Model Averaging (2SBMA) methodology that extends the Two-Stage Least Squares (2SLS) estimator. By constructing a Two-Stage Unit Information Prior in the endogenous variable model, we are able to efficiently combine established methods for addressing model uncertainty in regression models with the classic technique of 2SLS. To assess the validity of instruments in the 2SBMA context, we develop Bayesian tests of the identification restriction that are based on model averaged posterior predictive p-values. A simulation study showed that 2SBMA has the ability to recover structure in both the instrument and covariate set, and substantially improves the sharpness of resulting coefficient estimates in comparison to 2SLS using the full specification in an automatic fashion. Due to the increased parsimony of the 2SBMA estimate, the Bayesian Sargan test had a power of 50 percent in detecting a violation of the exogeneity assumption, while the method based on 2SLS using the full specification had negligible power. We apply our approach to the problem of development accounting, and find support not only for institutions, but also for geography and integration as development determinants, once both model uncertainty and endogeneity have been jointly addressed. PMID:24223471
NASA Technical Reports Server (NTRS)
Brown, Robert B.
1994-01-01
A software pilot model for Space Shuttle proximity operations is developed, utilizing fuzzy logic. The model is designed to emulate a human pilot during the terminal phase of a Space Shuttle approach to the Space Station. The model uses the same sensory information available to a human pilot and is based upon existing piloting rules and techniques determined from analysis of human pilot performance. Such a model is needed to generate numerous rendezvous simulations to various Space Station assembly stages for analysis of current NASA procedures and plume impingement loads on the Space Station. The advantages of a fuzzy logic pilot model are demonstrated by comparing its performance with NASA's man-in-the-loop simulations and with a similar model based upon traditional Boolean logic. The fuzzy model is shown to respond well from a number of initial conditions, with results typical of an average human. In addition, the ability to model different individual piloting techniques and new piloting rules is demonstrated.
NASA Astrophysics Data System (ADS)
Yuksel, Heba; Davis, Christopher C.
2006-09-01
Intensity fluctuations at the receiver in free space optical (FSO) communication links lead to a received power variance that depends on the size of the receiver aperture. Increasing the size of the receiver aperture reduces the power variance. This effect of the receiver size on power variance is called aperture averaging. If there were no aperture size limitation at the receiver, then there would be no turbulence-induced scintillation. In practice, there is always a tradeoff between aperture size, transceiver weight, and potential transceiver agility for pointing, acquisition and tracking (PAT) of FSO communication links. We have developed a geometrical simulation model to predict the aperture averaging factor. This model is used to simulate the aperture averaging effect at given range by using a large number of rays, Gaussian as well as uniformly distributed, propagating through simulated turbulence into a circular receiver of varying aperture size. Turbulence is simulated by filling the propagation path with spherical bubbles of varying sizes and refractive index discontinuities statistically distributed according to various models. For each statistical representation of the atmosphere, the three-dimensional trajectory of each ray is analyzed using geometrical optics. These Monte Carlo techniques have proved capable of assessing the aperture averaging effect, in particular, the quantitative expected reduction in intensity fluctuations with increasing aperture diameter. In addition, beam wander results have demonstrated the range-cubed dependence of mean-squared beam wander. An effective turbulence parameter can also be determined by correlating beam wander behavior with the path length.
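The quantity being estimated, the aperture-averaging factor, is the ratio of the normalised variance of the aperture-averaged intensity to that of a point receiver. The sketch below computes it from a synthetic spatially correlated intensity screen; it is not the authors' bubble/ray-tracing model, and the correlation length, grid and aperture sizes are assumptions chosen only to show the expected decrease with aperture diameter.

```python
# Sketch of the aperture-averaging factor A(D) = normalised variance of aperture-
# averaged intensity / normalised variance of point intensity, computed from a
# synthetic correlated intensity screen (NOT the authors' ray-tracing model).
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
N, dx = 512, 0.002                  # grid points and spacing (m): ~1 m screen
corr_len = 0.02                     # assumed intensity correlation length (m)

chi = gaussian_filter(rng.normal(size=(N, N)), corr_len / dx)  # correlated log-amplitude
chi *= 0.1 / chi.std()
I = np.exp(2 * chi)                 # intensity field

yy, xx = np.mgrid[:N, :N] * dx
centre = N * dx / 2
r = np.hypot(xx - centre, yy - centre)

point_var = I.var() / I.mean() ** 2            # scintillation index of a point receiver
for D in (0.01, 0.05, 0.10, 0.20):             # aperture diameters (m)
    mask = r <= D / 2
    # sample many aperture positions by periodically shifting the screen
    samples = np.array([np.mean(np.roll(I, (rng.integers(N), rng.integers(N)), axis=(0, 1))[mask])
                        for _ in range(400)])
    A = (samples.var() / samples.mean() ** 2) / point_var
    print(f"D = {D:4.2f} m  ->  aperture-averaging factor A ~ {A:.3f}")
```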
Modeling of turbulent separated flows for aerodynamic applications
NASA Technical Reports Server (NTRS)
Marvin, J. G.
1983-01-01
Steady, high speed, compressible separated flows modeled through numerical simulations resulting from solutions of the mass-averaged Navier-Stokes equations are reviewed. Emphasis is placed on benchmark flows that represent simplified (but realistic) aerodynamic phenomena. These include impinging shock waves, compression corners, glancing shock waves, trailing edge regions, and supersonic high angle of attack flows. A critical assessment of modeling capabilities is provided by comparing the numerical simulations with experiment. The importance of combining experiment, numerical algorithm, grid, and turbulence model to effectively develop this potentially powerful simulation technique is stressed.
Apps to promote physical activity among adults: a review and content analysis.
Middelweerd, Anouk; Mollee, Julia S; van der Wal, C Natalie; Brug, Johannes; Te Velde, Saskia J
2014-07-25
In May 2013, the iTunes and Google Play stores contained 23,490 and 17,756 smartphone applications (apps) categorized as Health and Fitness, respectively. The quality of these apps, in terms of applying established health behavior change techniques, remains unclear. The study sample was identified through systematic searches in iTunes and Google Play. Search terms were based on Boolean logic and included AND combinations for physical activity, healthy lifestyle, exercise, fitness, coach, assistant, motivation, and support. Sixty-four apps were downloaded, reviewed, and rated based on the taxonomy of behavior change techniques used in the interventions. Mean and ranges were calculated for the number of observed behavior change techniques. Using nonparametric tests, we compared the number of techniques observed in free and paid apps and in iTunes and Google Play. On average, the reviewed apps included 5 behavior change techniques (range 2-8). Techniques such as self-monitoring, providing feedback on performance, and goal-setting were used most frequently, whereas some techniques such as motivational interviewing, stress management, relapse prevention, self-talk, role models, and prompted barrier identification were not. No differences in the number of behavior change techniques between free and paid apps, or between the app stores were found. The present study demonstrated that apps promoting physical activity applied an average of 5 out of 23 possible behavior change techniques. This number was not different for paid and free apps or between app stores. The most frequently used behavior change techniques in apps were similar to those most frequently used in other types of physical activity promotion interventions.
Rabalais, R David; Burger, Evalina; Lu, Yun; Mansour, Alfred; Baratta, Richard V
2008-02-01
This study compared the biomechanical properties of 2 tension-band techniques with stainless steel wire and ultra high molecular weight polyethylene (UHMWPE) cable in a patella fracture model. Transverse patella fractures were simulated in 8 cadaver knees and fixated with figure-of-8 and parallel wire configurations in combination with Kirschner wires. Identical configurations were tested with UHMWPE cable. Specimens were mounted to a testing apparatus and the quadriceps was used to extend the knees from 90 degrees to 0 degrees; 4 knees were tested under monotonic loading, and 4 knees were tested under cyclic loading. Under monotonic loading, average fracture gap was 0.50 and 0.57 mm for steel wire and UHMWPE cable, respectively, in the figure-of-8 construct compared with 0.16 and 0.04 mm, respectively, in the parallel wire construct. Under cyclic loading, average fracture gap was 1.45 and 1.66 mm for steel wire and UHMWPE cable, respectively, in the figure-of-8 construct compared with 0.45 and 0.60 mm, respectively, in the parallel wire construct. A statistically significant effect of technique was found, with the parallel wire construct performing better than the figure-of-8 construct in both loading models. There was no effect of material or interaction. In this biomechanical model, parallel wires performed better than the figure-of-8 configuration in both loading regimens, and UHMWPE cable performed similarly to 18-gauge steel wire.
Loading Rate Effects on the One-Dimensional Compressibility of Four Partially Saturated Soils
1986-12-01
representations are referred to as constitutive models. Numerous constitutive models incorporating loading rate effects have been developed (Baladi and Rohani ... and probably more indicative of the true values of applied pressure and average strain produced during the test. A technique developed by Baladi and ...
A new technique for measuring aerosols with moonlight observations and a sky background model
NASA Astrophysics Data System (ADS)
Jones, Amy; Noll, Stefan; Kausch, Wolfgang; Kimeswenger, Stefan; Szyszka, Ceszary; Unterguggenberger, Stefanie
2014-05-01
There have been an ample number of studies on aerosols in urban, daylight conditions, but few for remote, nocturnal aerosols. We have developed a new technique for investigating such aerosols using our sky background model and astronomical observations. With a dedicated observing proposal we have successfully tested this technique for nocturnal, remote aerosol studies. This technique relies on three requirements: (a) sky background model, (b) observations taken with scattered moonlight, and (c) spectrophotometric standard star observations for flux calibrations. The sky background model was developed for the European Southern Observatory and is optimized for the Very Large Telescope at Cerro Paranal in the Atacama desert in Chile. This is a remote location with almost no urban aerosols. It is well suited for studying remote background aerosols that are normally difficult to detect. Our sky background model has an uncertainty of around 20 percent and the scattered moonlight portion is even more accurate. The last two requirements are having astronomical observations with moonlight and of standard stars at different airmasses, all during the same night. We had a dedicated observing proposal at Cerro Paranal with the instrument X-Shooter to use as a case study for this method. X-Shooter is a medium resolution, echelle spectrograph which covers the wavelengths from 0.3 to 2.5 micrometers. We observed plain sky at six different distances (7, 13, 20, 45, 90, and 110 degrees) to the Moon for three different Moon phases (between full and half). Also direct observations of spectrophotometric standard stars were taken at two different airmasses for each night to measure the extinction curve via the Langley method. This is an ideal data set for testing this technique. The underlying assumption is that all components, other than the atmospheric conditions (specifically aerosols and airglow), can be calculated with the model for the given observing parameters. The scattered moonlight model is designed for the average atmospheric conditions at Cerro Paranal. The Mie scattering is calculated for the average distribution of aerosol particles, but this input can be modified. We can avoid the airglow emission lines, and near full Moon the airglow continuum can be ignored. In the case study, by comparing the scattered moonlight for the various angles and wavelengths along with the extinction curve from the standard stars, we can iteratively find the optimal aerosol size distribution for the time of observation. We will present this new technique, the results from this case study, and how it can be implemented for investigating aerosols using the X-Shooter archive and other astronomical archives.
Mirrored continuum and molecular scale simulations of the ignition of high-pressure phases of RDX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Kibaek; Stewart, D. Scott, E-mail: santc@illinois.edu, E-mail: dss@illinois.edu; Joshi, Kaushik
2016-05-14
We present a mirrored atomistic and continuum framework that is used to describe the ignition of energetic materials, and a high-pressure phase of RDX in particular. The continuum formulation uses meaningful averages of thermodynamic properties obtained from the atomistic simulation and a simplification of enormously complex reaction kinetics. In particular, components are identified based on molecular weight bin averages and our methodology assumes that both the averaged atomistic and continuum simulations are represented on the same time and length scales. The atomistic simulations of thermally initiated ignition of RDX are performed using reactive molecular dynamics (RMD). The continuum model is based on multi-component thermodynamics and uses a kinetics scheme that describes observed chemical changes of the averaged atomistic simulations. Thus the mirrored continuum simulations mimic the rapid change in pressure, temperature, and average molecular weight of species in the reactive mixture. This mirroring enables a new technique to simplify the chemistry obtained from reactive MD simulations while retaining the observed features and spatial and temporal scales from both the RMD and continuum model. The primary benefit of this approach is a potentially powerful, but familiar way to interpret the atomistic simulations and understand the chemical events and reaction rates. The approach is quite general and thus can provide a way to model chemistry based on atomistic simulations and extend the reach of those simulations.
Ground Vibration Test Planning and Pre-Test Analysis for the X-33 Vehicle
NASA Technical Reports Server (NTRS)
Bedrossian, Herand; Tinker, Michael L.; Hidalgo, Homero
2000-01-01
This paper describes the results of the modal test planning and the pre-test analysis for the X-33 vehicle. The pre-test analysis included the selection of the target modes, selection of the sensor and shaker locations and the development of an accurate Test Analysis Model (TAM). For target mode selection, four techniques were considered, one based on the Modal Cost technique, one based on Balanced Singular Value technique, a technique known as the Root Sum Squared (RSS) method, and a Modal Kinetic Energy (MKE) approach. For selecting sensor locations, four techniques were also considered; one based on the Weighted Average Kinetic Energy (WAKE), one based on Guyan Reduction (GR), one emphasizing engineering judgment, and one based on an optimum sensor selection technique using Genetic Algorithm (GA) search technique combined with a criteria based on Hankel Singular Values (HSV's). For selecting shaker locations, four techniques were also considered; one based on the Weighted Average Driving Point Residue (WADPR), one based on engineering judgment and accessibility considerations, a frequency response method, and an optimum shaker location selection based on a GA search technique combined with a criteria based on HSV's. To evaluate the effectiveness of the proposed sensor and shaker locations for exciting the target modes, extensive numerical simulations were performed. Multivariate Mode Indicator Function (MMIF) was used to evaluate the effectiveness of each sensor & shaker set with respect to modal parameter identification. Several TAM reduction techniques were considered including, Guyan, IRS, Modal, and Hybrid. Based on a pre-test cross-orthogonality checks using various reduction techniques, a Hybrid TAM reduction technique was selected and was used for all three vehicle fuel level configurations.
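A kinetic-energy-based ranking of the kind used for target-mode and sensor selection can be sketched simply: the kinetic energy contribution of degree of freedom j in mode i is proportional to m_jj · φ_ji². The sketch below uses toy mass and mode-shape data and a simple average over target modes; it is an illustration in the spirit of the MKE/WAKE measures named above, not the test team's actual criteria or code.

```python
# Sketch of a modal-kinetic-energy style sensor ranking: KE contribution of DOF j in
# mode i is m_jj * phi_ji^2. Mass matrix and mode shapes below are toy values, and
# the simple average over target modes stands in for the weighted criteria used.
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_modes = 50, 8
M = np.diag(rng.uniform(0.5, 2.0, n_dof))        # lumped (diagonal) mass matrix
Phi = rng.normal(size=(n_dof, n_modes))          # toy mode shape matrix

# kinetic energy distribution: rows = DOFs (candidate sensors), columns = modes
KE = np.diag(M)[:, None] * Phi**2
KE /= KE.sum(axis=0, keepdims=True)              # fraction of each mode's energy per DOF

target_modes = [0, 1, 2, 3]
score = KE[:, target_modes].mean(axis=1)         # average kinetic energy over target modes
best_sensors = np.argsort(score)[::-1][:12]
print("12 best candidate sensor DOFs:", best_sensors)
```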
Robust Hidden Markov Model based intelligent blood vessel detection of fundus images.
Hassan, Mehdi; Amin, Muhammad; Murtza, Iqbal; Khan, Asifullah; Chaudhry, Asmatullah
2017-11-01
In this paper, we consider the challenging problem of detecting retinal vessel networks. Precise detection of retinal vessel networks is vital for accurate eye disease diagnosis. Most blood vessel tracking techniques may not properly track vessels in the presence of occlusion. Owing to problems in sensor resolution or in the acquisition of fundus images, it is possible that some part of a vessel may be occluded. In this scenario, it becomes a challenging task to accurately trace these vital vessels. For this purpose, we have proposed a new robust and intelligent retinal vessel detection technique based on a Hidden Markov Model. The proposed model is able to successfully track vessels in the presence of occlusion. The effectiveness of the proposed technique is evaluated on the publicly available standard DRIVE dataset of fundus images. The experiments show that the proposed technique not only outperforms other state-of-the-art retinal blood vessel segmentation methodologies, but is also capable of accurate occlusion handling in retinal vessel networks. The proposed technique offers better average classification accuracy, sensitivity, specificity, and area under the curve (AUC) of 95.7%, 81.0%, 97.0%, and 90.0%, respectively, which shows the usefulness of the proposed technique. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Muschinski, A.; Hu, K.; Root, L. M.; Tichkule, S.; Wijesundara, S. N.
2010-12-01
Mean values and fluctuations of angles-of-arrival (AOAs) of light emitted from astronomical or terrestrial sources and observed through a telescope equipped with a CCD camera carry quantitative information about certain statistics of the wind and temperature field, integrated along the propagation path. While scintillometry (i.e., the retrieval of atmospheric quantities from light intensity fluctuations) has been a popular technique among micrometeorologists for many years, there have been relatively few attempts to utilize AOA observations to probe the atmospheric surface layer (ASL). Here we report results from a field experiment that we conducted at the Boulder Atmospheric Observatory (BAO) site near Erie, CO, in June 2010. During the night of 15/16 June, the ASL was characterized by intermittent turbulence and intermittent gravity-wave events. We measured temperature and wind with 12 sonics (R.M. Young, Model 81000, sampling rate 31 Hz) mounted on two portable towers at altitudes between 1.45 m and 4.84 m AGL; air pressure with two quartz-crystal barometers (Paroscientific, 10 Hz); and AOAs by means of a CCD camera (Lumenera, Model 075M, thirty 640x480 frames per second) attached to a 14-inch, Schmidt-Cassegrain telescope (Meade, Model LX200GPS) pointing at a rectangular array of four test lights (LEDs, vertical spacing 8 cm, horizontal spacing 10 cm) located at a distance of 182 m. The optical path was horizontal and 1.7 m above flat ground. The two towers were located 2 m away from the optical path. In our presentation, we focus on AOA retrievals of the following quantities: temporal fluctuations of the path-averaged, vertical temperature gradient; mean values and fluctuations of the path-averaged, lateral wind velocity; and mean values and fluctuations of the path-averaged temperature turbulence structure parameter. We compare the AOA retrievals with the collocated and simultaneous point measurements obtained with the sonics, and we analyze our observations in the framework of the Monin-Obukhov theory. The AOA techniques enable us to detect temporal fluctuations of the path-averaged vertical temperature gradient (estimated over a height increment defined by the telescope's aperture diameter) down to a few millikelvins per meter, which probably cannot be achieved with sonics. Extremely small wind velocities can also be resolved. Therefore, AOA techniques are well suited for observations of the nocturnal surface layer under quiet conditions. AOA retrieval techniques have major advantages over scintillometric techniques because AOAs can be understood within the framework of the weak-scattering theory or even geometrical optics (the eikonal-fluctuation theory), while the well-known "saturation effect" makes the weak-scattering theory invalid for intensity fluctuations in the majority of cases of practical relevance.
The Gap in Big Data: Getting to Wellbeing, Strengths, and a Whole-person Perspective
Peters, Judith; Schlesner, Sara; Vanderboom, Catherine E.; Holland, Diane E.
2015-01-01
Background: Electronic health records (EHRs) provide a clinical view of patient health. EHR data are becoming available in large data sets and enabling research that will transform the landscape of healthcare research. Methods are needed to incorporate wellbeing dimensions and strengths in large data sets. The purpose of this study was to examine the potential alignment of the Wellbeing Model with a clinical interface terminology standard, the Omaha System, for documenting wellbeing assessments. Objective: To map the Omaha System and Wellbeing Model for use in a clinical EHR wellbeing assessment and to evaluate the feasibility of describing strengths and needs of seniors generated through this assessment. Methods: The Wellbeing Model and Omaha System were mapped using concept mapping techniques. Based on this mapping, a wellbeing assessment was developed and implemented within a clinical EHR. Strengths indicators and signs/symptoms data for 5 seniors living in a residential community were abstracted from wellbeing assessments and analyzed using standard descriptive statistics and pattern visualization techniques. Results: Initial mapping agreement was 93.5%, with differences resolved by consensus. Wellbeing data analysis showed seniors had an average of 34.8 (range=22-49) strengths indicators for 22.8 concepts. They had an average of 6.4 (range=4-8) signs/symptoms for an average of 3.2 (range=2-5) concepts. The ratio of strengths indicators to signs/symptoms was 6:1 (range 2.8-9.6). Problem concepts with more signs/symptoms had fewer strengths. Conclusion: Together, the Wellbeing Model and the Omaha System have potential to enable a whole-person perspective and enhance the potential for a wellbeing perspective in big data research in healthcare. PMID:25984416
Marsac, L; Chauvet, D; La Greca, R; Boch, A-L; Chaumoitre, K; Tanter, M; Aubry, J-F
2017-09-01
Transcranial brain therapy has recently emerged as a non-invasive strategy for the treatment of various neurological diseases, such as essential tremor or neurogenic pain. However, treatments require millimetre-scale accuracy. The use of high frequencies (typically ≥1 MHz) decreases the ultrasonic wavelength to the millimetre scale, thereby increasing the clinical accuracy and lowering the probability of cavitation, which improves the safety of the technique compared with the use of low-frequency devices that operate at 220 kHz. Nevertheless, the skull produces greater distortions of high-frequency waves relative to low-frequency waves. High-frequency waves require high-performance adaptive focusing techniques, based on modelling the wave propagation through the skull. This study sought to optimise the acoustical modelling of the skull based on computed tomography (CT) for a 1 MHz clinical brain therapy system. The best model tested in this article corresponded to a maximum speed of sound of 4000 m.s^-1 in the skull bone, and it restored 86% of the optimal pressure amplitude on average in a collection of six human skulls. Compared with uncorrected focusing, the optimised non-invasive correction led to an average increase of 99% in the maximum pressure amplitude around the target and an average decrease of 48% in the distance between the peak pressure and the selected target. The attenuation through the skulls was also assessed within the bandwidth of the transducers, and it was found to vary in the range of 10 ± 3 dB at 800 kHz and 16 ± 3 dB at 1.3 MHz.
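CT-based acoustic modelling of this kind commonly maps Hounsfield units to an apparent bone porosity and then to local sound speed, capped at the maximum skull sound speed (here 4000 m/s as reported in the abstract). The linear mapping and reference values in the sketch below are generic assumptions, not the paper's calibrated model.

```python
# Sketch of a generic CT-to-sound-speed mapping: Hounsfield units -> apparent
# porosity -> sound speed, capped at 4000 m/s. The linear mapping and reference
# values are common assumptions, not the paper's calibrated model.
import numpy as np

def hu_to_sound_speed(hu, c_water=1500.0, c_bone_max=4000.0, hu_bone_max=1000.0):
    porosity = np.clip(1.0 - hu / hu_bone_max, 0.0, 1.0)   # 0 HU -> water, >=1000 HU -> dense bone
    return c_water * porosity + c_bone_max * (1.0 - porosity)

hu_slice = np.array([0.0, 250.0, 500.0, 900.0, 1400.0])     # sample CT values
print(hu_to_sound_speed(hu_slice))   # m/s, increasing monotonically with bone density
```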
Meisner, Eric M; Hager, Gregory D; Ishman, Stacey L; Brown, David; Tunkel, David E; Ishii, Masaru
2013-11-01
To evaluate the accuracy of three-dimensional (3D) airway reconstructions obtained using quantitative endoscopy (QE). We developed this novel technique to reconstruct precise 3D representations of airway geometries from endoscopic video streams. This method, based on machine vision methodologies, uses a post-processing step of the standard videos obtained during routine laryngoscopy and bronchoscopy. We hypothesize that this method is precise and will generate assessment of airway size and shape similar to those obtained using computed tomography (CT). This study was approved by the institutional review board (IRB). We analyzed video sequences from pediatric patients receiving rigid bronchoscopy. We generated 3D scaled airway models of the subglottis, trachea, and carina using QE. These models were compared to 3D airway models generated from CT. We used the CT data as the gold standard measure of airway size, and used a mixed linear model to estimate the average error in cross-sectional area and effective diameter for QE. The average error in cross sectional area (area sliced perpendicular to the long axis of the airway) was 7.7 mm(2) (variance 33.447 mm(4)). The average error in effective diameter was 0.38775 mm (variance 2.45 mm(2)), approximately 9% error. Our pilot study suggests that QE can be used to generate precise 3D reconstructions of airways. This technique is atraumatic, does not require ionizing radiation, and integrates easily into standard airway assessment protocols. We conjecture that this technology will be useful for staging airway disease and assessing surgical outcomes. Copyright © 2013 The American Laryngological, Rhinological and Otological Society, Inc.
Aquino, Arturo; Gegundez-Arias, Manuel Emilio; Marin, Diego
2010-11-01
Optic disc (OD) detection is an important step in developing systems for automated diagnosis of various serious ophthalmic pathologies. This paper presents a new template-based methodology for segmenting the OD from digital retinal images. This methodology uses morphological and edge detection techniques followed by the Circular Hough Transform to obtain a circular OD boundary approximation. It requires a pixel located within the OD as initial information. For this purpose, a location methodology based on a voting-type algorithm is also proposed. The algorithms were evaluated on the 1200 images of the publicly available MESSIDOR database. The location procedure succeeded in 99% of cases, taking an average computational time of 1.67 s. with a standard deviation of 0.14 s. On the other hand, the segmentation algorithm rendered an average common area overlapping between automated segmentations and true OD regions of 86%. The average computational time was 5.69 s with a standard deviation of 0.54 s. Moreover, a discussion on advantages and disadvantages of the models more generally used for OD segmentation is also presented in this paper.
Models of brachial to finger pulse wave distortion and pressure decrement.
Gizdulich, P; Prentza, A; Wesseling, K H
1997-03-01
To model the pulse wave distortion and pressure decrement occurring between brachial and finger arteries. Distortion reversion and decrement correction were also our aims. Brachial artery pressure was recorded intra-arterially and finger pressure was recorded non-invasively by the Finapres technique in 53 adult human subjects. Mean pressure was subtracted from each pressure waveform and Fourier analysis applied to the pulsations. A distortion model was estimated for each subject and averaged over the group. The average inverse model was applied to the full finger pressure waveform. The pressure decrement was modelled by multiple regression on finger systolic and diastolic levels. Waveform distortion could be described by a general, frequency dependent model having a resonance at 7.3 Hz. The general inverse model has an anti-resonance at this frequency. It converts finger to brachial pulsations thereby reducing average waveform distortion from 9.7 (s.d. 3.2) mmHg per sample for the finger pulse to 3.7 (1.7) mmHg for the converted pulse. Systolic and diastolic level differences between finger and brachial arterial pressures changed from -4 (15) and -8 (11) to +8 (14) and +8 (12) mmHg, respectively, after inverse modelling, with pulse pressures correct on average. The pressure decrement model reduced both the mean and the standard deviation of systolic and diastolic level differences to 0 (13) and 0 (8) mmHg. Diastolic differences were thus reduced most. Brachial to finger pulse wave distortion due to wave reflection in arteries is almost identical in all subjects and can be modelled by a single resonance. The pressure decrement due to flow in arteries is greatest for high pulse pressures superimposed on low means.
Turbine Engine Flowpath Averaging Techniques
1980-10-01
Final report by T. W. Skiles, ARO, Inc., October 1980, covering the period January to October 1980 (AEDC-TMR-81-G1). Techniques for averaging flowpath properties for gas turbine engines were investigated; the investigation consisted of a literature review and a review of current turbine engine flowpath averaging techniques.
A model-averaging method for assessing groundwater conceptual model uncertainty.
Ye, Ming; Pohlmann, Karl F; Chapman, Jenny B; Pohll, Greg M; Reeves, Donald M
2010-01-01
This study evaluates alternative groundwater models with different recharge and geologic components at the northern Yucca Flat area of the Death Valley Regional Flow System (DVRFS), USA. Recharge over the DVRFS has been estimated using five methods, and five geological interpretations are available at the northern Yucca Flat area. Combining the recharge and geological components together with additional modeling components that represent other hydrogeological conditions yields a total of 25 groundwater flow models. As all the models are plausible given available data and information, evaluating model uncertainty becomes inevitable. On the other hand, hydraulic parameters (e.g., hydraulic conductivity) are uncertain in each model, giving rise to parametric uncertainty. Propagation of the uncertainty in the models and model parameters through groundwater modeling causes predictive uncertainty in model predictions (e.g., hydraulic head and flow). Parametric uncertainty within each model is assessed using Monte Carlo simulation, and model uncertainty is evaluated using the model averaging method. Two model-averaging techniques (on the basis of information criteria and GLUE) are discussed. This study shows that contribution of model uncertainty to predictive uncertainty is significantly larger than that of parametric uncertainty. For the recharge and geological components, uncertainty in the geological interpretations has more significant effect on model predictions than uncertainty in the recharge estimates. In addition, weighted residuals vary more for the different geological models than for different recharge models. Most of the calibrated observations are not important for discriminating between the alternative models, because their weighted residuals vary only slightly from one model to another.
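A minimal sketch (not the authors' code) of how information-criterion-based model averaging typically converts criterion values (e.g., BIC or KIC) for alternative conceptual models into posterior model weights and a model-averaged prediction; all numeric values below are illustrative placeholders.

```python
import numpy as np

ic = np.array([212.4, 215.1, 218.9, 226.0])        # hypothetical BIC/KIC values per model
prior = np.full(ic.size, 1.0 / ic.size)            # equal prior model probabilities

delta = ic - ic.min()                               # differences from the best-scoring model
weights = prior * np.exp(-0.5 * delta)
weights /= weights.sum()                            # posterior model weights

predictions = np.array([101.3, 99.8, 102.5, 104.1]) # hypothetical head predictions (m)
averaged = np.sum(weights * predictions)            # model-averaged prediction

# Between-model variance, one common way to expose conceptual-model uncertainty
between_var = np.sum(weights * (predictions - averaged) ** 2)
print(weights, averaged, between_var)
```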
NASA Astrophysics Data System (ADS)
Exbrayat, Jean-François; Bloom, A. Anthony; Falloon, Pete; Ito, Akihiko; Smallman, T. Luke; Williams, Mathew
2018-02-01
Multi-model averaging techniques provide opportunities to extract additional information from large ensembles of simulations. In particular, present-day model skill can be used to evaluate their potential performance in future climate simulations. Multi-model averaging methods have been used extensively in climate and hydrological sciences, but they have not been used to constrain projected plant productivity responses to climate change, which is a major uncertainty in Earth system modelling. Here, we use three global observationally orientated estimates of current net primary productivity (NPP) to perform a reliability ensemble averaging (REA) method using 30 global simulations of the 21st century change in NPP based on the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP) business as usual emissions scenario. We find that the three REA methods support an increase in global NPP by the end of the 21st century (2095-2099) compared to 2001-2005, which is 2-3 % stronger than the ensemble ISIMIP mean value of 24.2 Pg C y-1. Using REA also leads to a 45-68 % reduction in the global uncertainty of 21st century NPP projection, which strengthens confidence in the resilience of the CO2 fertilization effect to climate change. This reduction in uncertainty is especially clear for boreal ecosystems although it may be an artefact due to the lack of representation of nutrient limitations on NPP in most models. Conversely, the large uncertainty that remains on the sign of the response of NPP in semi-arid regions points to the need for better observations and model development in these regions.
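A hedged sketch of a performance-only reliability ensemble averaging (REA) step: each ensemble member is weighted by how closely its present-day NPP matches an observational estimate (full REA also includes a convergence criterion, omitted here). All values are illustrative, not the study's data.

```python
import numpy as np

npp_present = np.array([52.0, 58.5, 61.2, 49.7, 55.3])   # modelled present-day NPP, Pg C / yr
npp_future  = np.array([60.1, 70.3, 74.8, 55.2, 66.0])   # modelled end-of-century NPP
obs = 54.0                                                # observational NPP estimate
eps = 2.0                                                 # tolerance reflecting obs uncertainty

bias = np.abs(npp_present - obs)
reliability = np.minimum(1.0, eps / np.maximum(bias, 1e-9))   # skill factor in (0, 1]
weights = reliability / reliability.sum()

rea_mean = np.sum(weights * npp_future)
rea_spread = np.sqrt(np.sum(weights * (npp_future - rea_mean) ** 2))
print(rea_mean, rea_spread)
```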
Kuhlmann, Levin; Manton, Jonathan H; Heyse, Bjorn; Vereecke, Hugo E M; Lipping, Tarmo; Struys, Michel M R F; Liley, David T J
2017-04-01
Tracking brain states with electrophysiological measurements often relies on short-term averages of extracted features and this may not adequately capture the variability of brain dynamics. The objective is to assess the hypotheses that this can be overcome by tracking distributions of linear models using anesthesia data, and that anesthetic brain state tracking performance of linear models is comparable to that of a high performing depth of anesthesia monitoring feature. Individuals' brain states are classified by comparing the distribution of linear (auto-regressive moving average, ARMA) model parameters estimated from electroencephalographic (EEG) data obtained with a sliding window to distributions of linear model parameters for each brain state. The method is applied to frontal EEG data from 15 subjects undergoing propofol anesthesia and classified by the observer's assessment of alertness/sedation (OAA/S) scale. Classification of the OAA/S score was performed using distributions of either ARMA parameters or the benchmark feature, Higuchi fractal dimension. The highest average testing sensitivity of 59% (chance sensitivity: 17%) was found for ARMA (2,1) models, while Higuchi fractal dimension achieved 52%; however, no statistical difference was observed. For the same ARMA case, there was no statistical difference if medians were used instead of distributions (sensitivity: 56%). The model-based distribution approach is not necessarily more effective than a median/short-term average approach; however, it performs well compared with a distribution approach based on a high-performing anesthesia monitoring measure. These techniques hold potential for anesthesia monitoring and may be generally applicable for tracking brain states.
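An illustrative sketch (synthetic EEG-like signal, assumed 1-s windows and 128 Hz sampling) of estimating ARMA(2,1) parameters over a sliding window, the kind of feature distribution the study compares between brain states; this is not the authors' implementation.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

fs = 128                                   # assumed sampling rate (Hz)
eeg = np.random.randn(fs * 30)             # placeholder for a frontal EEG trace
win, step = fs, fs // 2                    # 1-s windows, 50% overlap

params = []
for start in range(0, eeg.size - win, step):
    window = eeg[start:start + win]
    fit = ARIMA(window, order=(2, 0, 1)).fit()     # AR order 2, MA order 1
    params.append(fit.params)                      # const, ar1, ar2, ma1, sigma2

params = np.array(params)
# The per-state distributions of these parameter vectors (rather than their
# short-term averages) are what a classifier would compare.
print(params.mean(axis=0), params.std(axis=0))
```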
Modal identification of structures by a novel approach based on FDD-wavelet method
NASA Astrophysics Data System (ADS)
Tarinejad, Reza; Damadipour, Majid
2014-02-01
An important application of system identification in structural dynamics is the determination of natural frequencies, mode shapes and damping ratios during operation, which can then be used for calibrating numerical models. In this paper, the combination of two advanced methods of Operational Modal Analysis (OMA), Frequency Domain Decomposition (FDD) and the Continuous Wavelet Transform (CWT), based on a novel cyclic averaging of correlation functions (CACF) technique, is used for identification of dynamic properties. With this technique, the autocorrelation of averaged correlation functions is used instead of the original signals. Integrating the FDD and CWT methods overcomes their individual deficiencies and takes advantage of their unique capabilities. The FDD method is able to accurately estimate the natural frequencies and mode shapes of structures in the frequency domain. The CWT method, on the other hand, operates in the time-frequency domain, decomposing a signal at different frequencies and determining the damping coefficients. In this paper, a new formulation applied to the wavelet transform of the averaged correlation function of an ambient response is proposed. This enables accurate estimation of damping ratios from weak (noise) or strong (earthquake) vibrations and from long or short duration records. For this purpose, the modified Morlet wavelet, which has two free parameters, is used. The optimum values of these two parameters are obtained by employing a technique that minimizes the entropy of the wavelet coefficients matrix. The capabilities of the novel FDD-Wavelet method in the system identification of various dynamic systems with regular or irregular distributions of mass and stiffness are illustrated. This combined approach is superior to classic methods and yields results that agree well with the exact solutions of the numerical models.
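A minimal sketch of the FDD step only (not the authors' combined FDD-wavelet formulation): build the cross-power spectral density matrix of multi-channel ambient responses and take its SVD at each frequency; peaks of the first singular value indicate natural frequencies and the corresponding singular vectors approximate mode shapes. The two-channel data here are synthetic.

```python
import numpy as np
from scipy.signal import csd

fs, n = 256, 8192
t = np.arange(n) / fs
# Two sensors observing a 5 Hz mode plus noise (illustrative only)
x = np.vstack([np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(n),
               0.8 * np.sin(2 * np.pi * 5 * t + 0.1) + 0.5 * np.random.randn(n)])

nch = x.shape[0]
freqs, _ = csd(x[0], x[0], fs=fs, nperseg=1024)
G = np.zeros((freqs.size, nch, nch), dtype=complex)   # CPSD matrix per frequency
for i in range(nch):
    for j in range(nch):
        _, G[:, i, j] = csd(x[i], x[j], fs=fs, nperseg=1024)

s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0] for k in range(freqs.size)])
print("peak near", freqs[np.argmax(s1)], "Hz")     # expected close to 5 Hz
```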
A novel application of artificial neural network for wind speed estimation
NASA Astrophysics Data System (ADS)
Fang, Da; Wang, Jianzhou
2017-05-01
Providing accurate multi-step wind speed estimation models is of increasing significance because of the important technical and economic impacts of wind speed on power grid security and environmental benefits. In this study, combined strategies for wind speed forecasting are proposed based on an intelligent data processing system using artificial neural networks (ANNs). A generalized regression neural network and an Elman neural network are employed to form two hybrid models. The approach employs one ANN to model the samples, achieving data denoising and assimilation, and applies the other to predict wind speed using the pre-processed samples. The proposed method is demonstrated in terms of the predictive improvements of the hybrid models compared with a single ANN and a typical forecasting method. To give sufficient cases for the study, four observation sites with monthly average wind speeds for four given years in Western China were used to test the models. Multiple evaluation methods demonstrated that the proposed method provides a promising alternative technique for monthly average wind speed estimation.
Seyed, Mohammadali Rahmati; Mostafa, Rostami; Borhan, Beigzadeh
2018-04-27
The parametric optimization techniques have been widely employed to predict human gait trajectories; however, their applications to reveal the other aspects of gait are questionable. The aim of this study is to investigate whether or not the gait prediction model is able to justify the movement trajectories for the higher average velocities. A planar, seven-segment model with sixteen muscle groups was used to represent human neuro-musculoskeletal dynamics. At first, the joint angles, ground reaction forces (GRFs) and muscle activations were predicted and validated for normal average velocity (1.55 m/s) in the single support phase (SSP) by minimizing energy expenditure, which is subject to the non-linear constraints of the gait. The unconstrained system dynamics of extended inverse dynamics (USDEID) approach was used to estimate muscle activations. Then by scaling time and applying the same procedure, the movement trajectories were predicted for higher average velocities (from 2.07 m/s to 4.07 m/s) and compared to the pattern of movement with fast walking speed. The comparison indicated a high level of compatibility between the experimental and predicted results, except for the vertical position of the center of gravity (COG). It was concluded that the gait prediction model can be effectively used to predict gait trajectories for higher average velocities.
Fagan, William F; Lutscher, Frithjof
2006-04-01
Spatially explicit models for populations are often difficult to tackle mathematically and, in addition, require detailed data on individual movement behavior that are not easily obtained. An approximation known as the "average dispersal success" provides a tool for converting complex models, which may include stage structure and a mechanistic description of dispersal, into a simple matrix model. This simpler matrix model has two key advantages. First, it is easier to parameterize from the types of empirical data typically available to conservation biologists, such as survivorship, fecundity, and the fraction of juveniles produced in a study area that also recruit within the study area. Second, it is more amenable to theoretical investigation. Here, we use the average dispersal success approximation to develop estimates of the critical reserve size for systems comprising single patches or simple metapopulations. The quantitative approach can be used for both plants and animals; however, to provide a concrete example of the technique's utility, we focus on a special case pertinent to animals. Specifically, for territorial animals, we can characterize such an estimate of minimum viable habitat area in terms of the number of home ranges that the reserve contains. Consequently, the average dispersal success framework provides a framework through which home range size, natal dispersal distances, and metapopulation dynamics can be linked to reserve design. We briefly illustrate the approach using empirical data for the swift fox (Vulpes velox).
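A hedged sketch of the "average dispersal success" idea in a toy form: the fraction S of offspring that settle back inside a reserve scales the fecundity term of a simple stage-structured matrix model, and the dominant eigenvalue indicates persistence. The two-stage structure, parameter values, and dispersal-success curve are my own illustrative assumptions, not the paper's model.

```python
import numpy as np

def growth_rate(S, fecundity=3.0, juvenile_survival=0.4, adult_survival=0.7):
    """Dominant eigenvalue of a 2-stage (juvenile, adult) matrix model in which
    only the fraction S of produced juveniles recruits inside the reserve."""
    A = np.array([[0.0,               fecundity * S],
                  [juvenile_survival, adult_survival]])
    return np.max(np.abs(np.linalg.eigvals(A)))

# Crude critical-reserve-size search: S is assumed to increase with reserve
# area through a hypothetical saturating relationship.
for area in [1, 2, 5, 10, 20, 50]:           # e.g., number of home ranges
    S = area / (area + 10.0)                  # hypothetical dispersal-success curve
    print(area, round(growth_rate(S), 3))     # persistence requires lambda >= 1
```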
Omar, Hani; Hoang, Van Hai; Liu, Duen-Ren
2016-01-01
Enhancing sales and operations planning through forecasting analysis and business intelligence is demanded in many industries and enterprises. Publishing industries usually pick attractive titles and headlines for their stories to increase sales, since popular article titles and headlines can attract readers to buy magazines. In this paper, information retrieval techniques are adopted to extract words from article titles. The popularity measures of article titles are then analyzed by using the search indexes obtained from Google search engine. Backpropagation Neural Networks (BPNNs) have successfully been used to develop prediction models for sales forecasting. In this study, we propose a novel hybrid neural network model for sales forecasting based on the prediction result of time series forecasting and the popularity of article titles. The proposed model uses the historical sales data, popularity of article titles, and the prediction result of a time series, Autoregressive Integrated Moving Average (ARIMA) forecasting method to learn a BPNN-based forecasting model. Our proposed forecasting model is experimentally evaluated by comparing with conventional sales prediction techniques. The experimental result shows that our proposed forecasting method outperforms conventional techniques which do not consider the popularity of title words.
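A simplified sketch of the hybrid idea described above (toy data, assumed variable names): an ARIMA prediction of sales and a title-popularity index are fed as inputs to a backpropagation-style neural network that learns the final sales estimate. This is not the authors' exact architecture.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
sales = 100 + np.cumsum(rng.normal(0, 5, 60))          # toy monthly sales series
popularity = rng.uniform(0, 1, 60)                      # toy title-popularity index

arima_fit = ARIMA(sales, order=(1, 1, 1)).fit()
arima_pred = arima_fit.predict(start=1, end=len(sales) - 1)   # one-step in-sample predictions

X = np.column_stack([arima_pred, popularity[1:]])        # ARIMA output + popularity as features
y = sales[1:]

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)
print(model.predict(X[-1:]))                             # hybrid estimate for the last period
```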
Visual feature extraction from voxel-weighted averaging of stimulus images in 2 fMRI studies.
Hart, Corey B; Rose, William J
2013-11-01
Multiple studies have provided evidence for distributed object representation in the brain, with several recent experiments leveraging basis function estimates for partial image reconstruction from fMRI data. Using a novel combination of statistical decomposition, generalized linear models, and stimulus averaging on previously examined image sets and Bayesian regression of recorded fMRI activity during presentation of these data sets, we identify a subset of relevant voxels that appear to code for covarying object features. Using a technique we term "voxel-weighted averaging," we isolate image filters that these voxels appear to implement. The results, though very cursory, appear to have significant implications for hierarchical and deep-learning-type approaches toward the understanding of neural coding and representation.
Showalter, Brent L.; DeLucca, John F.; Peloquin, John M.; Cortes, Daniel H.; Yoder, Jonathon H.; Jacobs, Nathan T.; Wright, Alexander C.; Gee, James C.; Vresilovic, Edward J.; Elliott, Dawn M.
2017-01-01
Tissue strain is an important indicator of mechanical function, but is difficult to noninvasively measure in the intervertebral disc. The objective of this study was to generate a disc strain template, a 3D average of disc strain, of a group of human L4–L5 discs loaded in axial compression. To do so, magnetic resonance images of uncompressed discs were used to create an average disc shape. Next, the strain tensors were calculated pixel-wise by using a previously developed registration algorithm. Individual disc strain tensor components were then transformed to the template space and averaged to create the disc strain template. The strain template reduced individual variability while highlighting group trends. For example, higher axial and circumferential strains were present in the lateral and posterolateral regions of the disc, which may lead to annular tears. This quantification of group-level trends in local 3D strain is a significant step forward in the study of disc biomechanics. These trends were compared to a finite element model that had been previously validated against the disc-level mechanical response. Depending on the strain component, 81–99% of the regions within the finite element model had calculated strains within one standard deviation of the template strain results. The template creation technique provides a new measurement technique useful for a wide range of studies, including more complex loading conditions, the effect of disc pathologies and degeneration, damage mechanisms, and design and evaluation of treatments. PMID:26694516
Chen, Gang; Li, Jingyi; Ying, Qi; Sherman, Seth; Perkins, Neil; Rajeshwari, Sundaram; Mendola, Pauline
2014-01-01
In this study, the Community Multiscale Air Quality (CMAQ) model was applied to predict ambient gaseous and particulate concentrations during 2001 to 2010 in 15 hospital referral regions (HRRs) using a 36-km horizontal resolution domain. An inverse distance weighting based method was applied to produce exposure estimates based on observation-fused regional pollutant concentration fields, using the differences between observations and predictions at grid cells where air quality monitors were located. Although the raw CMAQ model is capable of producing satisfactory results for O3 and PM2.5 based on EPA guidelines, using the observation data fusing technique to correct CMAQ predictions leads to significant improvement of model performance for all gaseous and particulate pollutants. Regional average concentrations were calculated using five different methods: 1) inverse distance weighting of observation data alone, 2) raw CMAQ results, 3) observation-fused CMAQ results, 4) population-averaged raw CMAQ results and 5) population-averaged fused CMAQ results. The results show that while O3 (as well as NOx) monitoring networks in the HRR regions are dense enough to provide consistent regional average exposure estimation based on monitoring data alone, PM2.5 observation sites (as well as monitors for CO, SO2, PM10 and PM2.5 components) are usually sparse, and the average concentrations estimated from the inverse distance interpolated observations, raw CMAQ and fused CMAQ results can differ significantly. Population-weighted averages should be used to account for spatial variation in pollutant concentration and population density. Using raw CMAQ results or observations alone might lead to significant biases in health outcome analyses.
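A hedged sketch of the observation-fusion step described above: residuals (observation minus model) at monitor locations are spread to grid cells with inverse distance weighting and added to the raw model field. Coordinates and concentration values are toy placeholders, not the study's CMAQ output.

```python
import numpy as np

def idw(residuals, monitor_xy, grid_xy, power=2.0):
    # Inverse-distance weights from each grid cell to each monitor
    d = np.linalg.norm(grid_xy[:, None, :] - monitor_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    w /= w.sum(axis=1, keepdims=True)
    return w @ residuals

monitor_xy = np.array([[10.0, 20.0], [40.0, 15.0], [25.0, 40.0]])   # monitor coordinates (km)
obs = np.array([12.5, 9.0, 15.2])                                    # observed PM2.5 (ug/m3)
model_at_monitors = np.array([10.1, 11.0, 13.0])                     # raw model values there

grid_xy = np.array([[15.0, 25.0], [35.0, 30.0]])                     # two grid-cell centres
raw_grid = np.array([11.0, 12.0])                                    # raw model field

fused_grid = raw_grid + idw(obs - model_at_monitors, monitor_xy, grid_xy)
print(fused_grid)
```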
The sterile-male-release technique in Great Lakes sea lamprey management
Twohey, Michael B.; Heinrich, John W.; Seelye, James G.; Fredricks, Kim T.; Bergstedt, Roger A.; Kaye, Cheryl A.; Scholefield, Ron J.; McDonald, Rodney B.; Christie, Gavin C.
2003-01-01
The implementation of a sterile-male-release technique from 1991 through 1999 and evaluation of its effectiveness in the Great Lakes sea lamprey (Petromyzon marinus) management program is reviewed. Male sea lampreys were injected with the chemosterilant bisazir (P,P-bis(1-aziridinyl)-N-methylphosphinothioic amide) using a robotic device. Quality assurance testing indicated the device delivered a consistent and effective dose of bisazir. Viability of embryos in an untreated control group was 64% compared to 1% in a treatment group. A task force developed nine hypotheses to guide implementation and evaluation of the technique. An annual average of 26,000 male sea lampreys was harvested from as many as 17 Great Lakes tributaries for use in the technique. An annual average of 16,100 sterilized males was released into 33 tributaries of Lake Superior to achieve a theoretical 59% reduction in larval production during 1991 to 1996. The average number of sterile males released in the St. Marys River increased from 4,000 during 1991 to 1996 to 20,100 during 1997 to 1999. The theoretical reduction in reproduction when combined with trapping was 57% during 1991 to 1996 and 86% during 1997 to 1999. Evaluation studies demonstrated that sterilized males were competitive and reduced production of larvae in streams. Field studies and simulation models suggest reductions in reproduction will result in fewer recruits, but there is risk of periodic high recruitment events independent of sterile-male release. Strategies to reduce reproduction will be most reliable when low densities of reproducing females are achieved. Expansion of the technique is limited by access to additional males for sterilization. Sterile-male release and other alternative controls are important in delivering integrated pest management and in reducing reliance on pesticides.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granville, DA; Sawakuchi, GO
2014-08-15
In this work, we demonstrate inconsistencies in commonly used Monte Carlo methods of scoring linear energy transfer (LET) in proton therapy beams. In particle therapy beams, the LET is an important parameter because the relative biological effectiveness (RBE) depends on it. LET is often determined using Monte Carlo techniques. We used a realistic Monte Carlo model of a proton therapy nozzle to score proton LET in spread-out Bragg peak (SOBP) depth-dose distributions. We used three different scoring and calculation techniques to determine average LET at varying depths within a 140 MeV beam with a 4 cm SOBP and a 250 MeV beam with a 10 cm SOBP. These techniques included fluence-weighted (Φ-LET) and dose-weighted average (D-LET) LET calculations from: 1) scored energy spectra converted to LET spectra through a lookup table, 2) directly scored LET spectra and 3) accumulated LET scored 'on-the-fly' during simulations. All protons (primary and secondary) were included in the scoring. Φ-LET was found to be less sensitive to changes in scoring technique than D-LET. In addition, the spectral scoring methods were sensitive to low-energy (high-LET) cutoff values in the averaging. Using cutoff parameters chosen carefully for consistency between techniques, we found variations in Φ-LET values of up to 1.6% and variations in D-LET values of up to 11.2% for the same irradiation conditions, depending on the method used to score LET. Variations were largest near the end of the SOBP, where the LET and energy spectra are broader.
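A minimal sketch of the two averages compared in the abstract, computed from a scored LET spectrum: the fluence-weighted mean weights each LET bin by proton fluence, while the dose-weighted mean weights each bin by fluence times LET (dose being proportional to fluence times LET). The spectrum values are illustrative only.

```python
import numpy as np

let_bins = np.array([0.5, 1.0, 2.0, 5.0, 10.0])      # LET bin centres (keV/um)
fluence = np.array([400., 300., 150., 40., 10.])      # protons per bin (arbitrary units)

phi_let = np.sum(fluence * let_bins) / np.sum(fluence)                 # fluence-weighted
d_let = np.sum(fluence * let_bins**2) / np.sum(fluence * let_bins)     # dose-weighted
print(phi_let, d_let)   # D-LET is pulled toward the high-LET tail, hence its cutoff sensitivity
```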
NASA Astrophysics Data System (ADS)
Shiri, Jalal; Kisi, Ozgur; Yoon, Heesung; Lee, Kang-Kun; Hossein Nazemi, Amir
2013-07-01
The knowledge of groundwater table fluctuations is important in agricultural lands as well as in the studies related to groundwater utilization and management levels. This paper investigates the abilities of Gene Expression Programming (GEP), Adaptive Neuro-Fuzzy Inference System (ANFIS), Artificial Neural Networks (ANN) and Support Vector Machine (SVM) techniques for groundwater level forecasting in following day up to 7-day prediction intervals. Several input combinations comprising water table level, rainfall and evapotranspiration values from Hongcheon Well station (South Korea), covering a period of eight years (2001-2008) were used to develop and test the applied models. The data from the first six years were used for developing (training) the applied models and the last two years data were reserved for testing. A comparison was also made between the forecasts provided by these models and the Auto-Regressive Moving Average (ARMA) technique. Based on the comparisons, it was found that the GEP models could be employed successfully in forecasting water table level fluctuations up to 7 days beyond data records.
Probing the solar corona with very long baseline interferometry.
Soja, B; Heinkelmann, R; Schuh, H
2014-06-20
Understanding and monitoring the solar corona and solar wind is important for many applications like telecommunications or geomagnetic studies. Coronal electron density models have been derived by various techniques over the last 45 years, principally by analysing the effect of the corona on spacecraft tracking. Here we show that recent observational data from very long baseline interferometry (VLBI), a radio technique crucial for astrophysics and geodesy, could be used to develop electron density models of the Sun's corona. The VLBI results agree well with previous models from spacecraft measurements. They also show that the simple spherical electron density model is violated by regional density variations and that on average the electron density in active regions is about three times that of low-density regions. Unlike spacecraft tracking, a VLBI campaign would be possible on a regular basis and would provide highly resolved spatial-temporal samplings over a complete solar cycle.
Li, Qiongge; Chan, Maria F
2017-01-01
Over half of cancer patients receive radiotherapy (RT) as partial or full cancer treatment. Daily quality assurance (QA) of RT in cancer treatment closely monitors the performance of the medical linear accelerator (Linac) and is critical for continuous improvement of patient safety and quality of care. Cumulative longitudinal QA measurements are valuable for understanding the behavior of the Linac and allow physicists to identify trends in the output and take preventive actions. In this study, artificial neural networks (ANNs) and autoregressive moving average (ARMA) time-series prediction modeling techniques were both applied to 5-year daily Linac QA data. Verification tests and other evaluations were then performed for all models. Preliminary results showed that ANN time-series predictive modeling has more advantages over ARMA techniques for accurate and effective applicability in the dosimetry and QA field. © 2016 New York Academy of Sciences.
Preliminary evaluation of spectral, normal and meteorological crop stage estimation approaches
NASA Technical Reports Server (NTRS)
Cate, R. B.; Artley, J. A.; Doraiswamy, P. C.; Hodges, T.; Kinsler, M. C.; Phinney, D. E.; Sestak, M. L. (Principal Investigator)
1980-01-01
Several of the projects in the AgRISTARS program require crop phenology information, including classification, acreage and yield estimation, and detection of episodal events. This study evaluates several crop calendar estimation techniques for their potential use in the program. The techniques, although generic in approach, were developed and tested on spring wheat data collected in 1978. There are three basic approaches to crop stage estimation: historical averages for an area (normal crop calendars), agrometeorological modeling of known crop-weather relationships (agrometeorological or agromet crop calendars), and interpretation of spectral signatures (spectral crop calendars). In all, 10 combinations of planting and biostage estimation models were evaluated. Dates of stage occurrence are estimated with biases between -4 and +4 days while root mean square errors range from 10 to 15 days. Results are inconclusive as to the superiority of any of the models and further evaluation of the models with the 1979 data set is recommended.
NASA Astrophysics Data System (ADS)
Beyrich, F.; Bange, J.; Hartogensis, O.; Raasch, S.
2009-09-01
The turbulent exchange of heat and water vapour are essential land surface - atmosphere interaction processes in the local, regional and global energy and water cycles. Scintillometry can be considered as the only technique presently available for the quasi-operational experimental determination of area-averaged turbulent fluxes needed to validate the fluxes simulated by regional atmospheric models or derived from satellite images at a horizontal scale of a few kilometres. While scintillometry has found increasing application over the last years, some fundamental issues related to its use still need further investigation. In particular, no studies are known so far to reproduce the path-averaged structure parameters measured by scintillometers by independent measurements or modelling techniques. The LITFASS-2009 field experiment has been performed in the area around the Meteorological Observatory Lindenberg / Richard-Aßmann-Observatory in Germany during summer 2009. It was designed to investigate the spatial (horizontal and vertical) and temporal variability of structure parameters (underlying the scintillometer principle) over moderately heterogeneous terrain. The experiment essentially relied on a coupling of eddy-covariance measurements, scintillometry and airborne measurements with an unmanned autonomous aircraft able to strictly fly along the scintillometer path. Data interpretation will be supported by numerical modelling using a large-eddy simulation (LES) model. The paper will describe the design of the experiment. First preliminary results from the measurements will be presented.
Covariate selection with group lasso and doubly robust estimation of causal effects
Koch, Brandon; Vock, David M.; Wolfson, Julian
2017-01-01
Summary The efficiency of doubly robust estimators of the average causal effect (ACE) of a treatment can be improved by including in the treatment and outcome models only those covariates which are related to both treatment and outcome (i.e., confounders) or related only to the outcome. However, it is often challenging to identify such covariates among the large number that may be measured in a given study. In this paper, we propose GLiDeR (Group Lasso and Doubly Robust Estimation), a novel variable selection technique for identifying confounders and predictors of outcome using an adaptive group lasso approach that simultaneously performs coefficient selection, regularization, and estimation across the treatment and outcome models. The selected variables and corresponding coefficient estimates are used in a standard doubly robust ACE estimator. We provide asymptotic results showing that, for a broad class of data generating mechanisms, GLiDeR yields a consistent estimator of the ACE when either the outcome or treatment model is correctly specified. A comprehensive simulation study shows that GLiDeR is more efficient than doubly robust methods using standard variable selection techniques and has substantial computational advantages over a recently proposed doubly robust Bayesian model averaging method. We illustrate our method by estimating the causal treatment effect of bilateral versus single-lung transplant on forced expiratory volume in one year after transplant using an observational registry. PMID:28636276
Covariate selection with group lasso and doubly robust estimation of causal effects.
Koch, Brandon; Vock, David M; Wolfson, Julian
2018-03-01
The efficiency of doubly robust estimators of the average causal effect (ACE) of a treatment can be improved by including in the treatment and outcome models only those covariates which are related to both treatment and outcome (i.e., confounders) or related only to the outcome. However, it is often challenging to identify such covariates among the large number that may be measured in a given study. In this article, we propose GLiDeR (Group Lasso and Doubly Robust Estimation), a novel variable selection technique for identifying confounders and predictors of outcome using an adaptive group lasso approach that simultaneously performs coefficient selection, regularization, and estimation across the treatment and outcome models. The selected variables and corresponding coefficient estimates are used in a standard doubly robust ACE estimator. We provide asymptotic results showing that, for a broad class of data generating mechanisms, GLiDeR yields a consistent estimator of the ACE when either the outcome or treatment model is correctly specified. A comprehensive simulation study shows that GLiDeR is more efficient than doubly robust methods using standard variable selection techniques and has substantial computational advantages over a recently proposed doubly robust Bayesian model averaging method. We illustrate our method by estimating the causal treatment effect of bilateral versus single-lung transplant on forced expiratory volume in one year after transplant using an observational registry. © 2017, The International Biometric Society.
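A hedged sketch of the standard doubly robust (AIPW) estimator of the average causal effect that selected covariates feed into; the group-lasso selection step of GLiDeR itself is omitted, and the data are toy simulations rather than the transplant registry.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 4))                       # covariates
ps_true = 1 / (1 + np.exp(-X[:, 0]))              # treatment depends on X0
A = rng.binomial(1, ps_true)                      # treatment indicator
y = 2.0 * A + X[:, 0] + X[:, 1] + rng.normal(size=n)   # true ACE = 2

ps = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]        # propensity model
m1 = LinearRegression().fit(X[A == 1], y[A == 1]).predict(X)      # outcome model, treated
m0 = LinearRegression().fit(X[A == 0], y[A == 0]).predict(X)      # outcome model, control

aipw = np.mean(m1 - m0
               + A * (y - m1) / ps
               - (1 - A) * (y - m0) / (1 - ps))
print(aipw)   # consistent if either the propensity or the outcome model is correct
```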
ERIC Educational Resources Information Center
Chaney, Bradford
2016-01-01
The primary technique that many researchers use to analyze data from randomized control trials (RCTs)--detecting the average treatment effect (ATE)--imposes assumptions upon the data that often are not correct. Both theory and past research suggest that treatments may have significant impacts on subgroups even when showing no overall effect.…
ERIC Educational Resources Information Center
Levy, Dan; Duncan, Greg J.
This study assessed the impact of family childhood income on completed years of schooling using fixed effects techniques to eliminate biases associated with omission of unmeasured family characteristics. It also examined the importance of timing of family income, estimating models that related years of completed schooling to average levels of…
Bennema, S C; Molento, M B; Scholte, R G; Carvalho, O S; Pritsch, I
2017-11-01
Fascioliasis is a condition caused by the trematode Fasciola hepatica. In this paper, the spatial distribution of F. hepatica in bovines in Brazil was modelled using a decision tree approach and a logistic regression, combined with a geographic information system (GIS) query. In the decision tree and the logistic model, isothermality had the strongest influence on disease prevalence. Also, the 50-year average precipitation in the warmest quarter of the year was included as a risk factor, having a negative influence on the parasite prevalence. The risk maps developed using both techniques showed a predicted higher prevalence mainly in the South of Brazil. The prediction performance seemed to be high, but both techniques failed to reach a high accuracy in predicting the medium and high prevalence classes across the entire country. The GIS query map, based on the range of isothermality, minimum temperature of the coldest month, precipitation of the warmest quarter of the year, altitude and the average daily land surface temperature, showed a possibility of presence of F. hepatica in a very large area. The risk maps produced using these methods can be used to focus activities of animal and public health programmes, even on non-evaluated F. hepatica areas.
Characterizing Detrended Fluctuation Analysis of multifractional Brownian motion
NASA Astrophysics Data System (ADS)
Setty, V. A.; Sharma, A. S.
2015-02-01
The Hurst exponent (H) is widely used to quantify long range dependence in time series data and is estimated using several well known techniques. Recognizing its ability to remove trends, Detrended Fluctuation Analysis (DFA) is used extensively to estimate a Hurst exponent in non-stationary data. Multifractional Brownian motion (mBm) broadly encompasses a set of models of non-stationary data exhibiting time varying Hurst exponents, H(t), as opposed to a constant H. Recently, there has been a growing interest in the time dependence of H(t), and sliding window techniques have been used to estimate a local time average of the exponent. This brought to the fore the ability of DFA to estimate scaling exponents in systems with time varying H(t), such as mBm. This paper characterizes the performance of DFA on mBm data with linearly varying H(t) and further tests the robustness of the estimated time average with respect to data- and technique-related parameters. Our results serve as a benchmark for using DFA as a sliding window estimator to obtain H(t) from time series data.
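A compact sketch of DFA as it might be applied in a sliding window to track a time-varying scaling exponent; the window length, scales, and white-noise test signal are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

def dfa_exponent(x, scales=(16, 32, 64, 128), order=1):
    """DFA-1 scaling exponent of a single window (slope of log F(s) vs log s)."""
    y = np.cumsum(x - np.mean(x))                 # integrated profile
    flucts = []
    for s in scales:
        n_seg = y.size // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        rms = []
        for seg in segs:
            coeffs = np.polyfit(t, seg, order)    # local polynomial trend
            rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
        flucts.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

signal = np.random.randn(4096)                    # white noise: expect an exponent near 0.5
window = 1024
for start in range(0, signal.size - window + 1, window // 2):
    print(start, round(dfa_exponent(signal[start:start + window]), 2))
```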
Kapellusch, Jay M; Silverstein, Barbara A; Bao, Stephen S; Thiese, Mathew S; Merryweather, Andrew S; Hegmann, Kurt T; Garg, Arun
2018-02-01
The Strain Index (SI) and the American Conference of Governmental Industrial Hygienists (ACGIH) threshold limit value for hand activity level (TLV for HAL) have been shown to be associated with prevalence of distal upper-limb musculoskeletal disorders such as carpal tunnel syndrome (CTS). The SI and TLV for HAL disagree on more than half of task exposure classifications. Similarly, time-weighted average (TWA), peak, and typical exposure techniques used to quantify physical exposure from multi-task jobs have shown between-technique agreement ranging from 61% to 93%, depending upon whether the SI or TLV for HAL model was used. This study compared exposure-response relationships between each model-technique combination and prevalence of CTS. Physical exposure data from 1,834 workers (710 with multi-task jobs) were analyzed using the SI and TLV for HAL and the TWA, typical, and peak multi-task job exposure techniques. Additionally, exposure classifications from the SI and TLV for HAL were combined into a single measure and evaluated. Prevalent CTS cases were identified using symptoms and nerve-conduction studies. Mixed effects logistic regression was used to quantify exposure-response relationships between categorized (i.e., low, medium, and high) physical exposure and CTS prevalence for all model-technique combinations, and for multi-task workers, mono-task workers, and all workers combined. Except for TWA TLV for HAL, all model-technique combinations showed monotonic increases in risk of CTS with increased physical exposure. The combined-models approach showed stronger association than the SI or TLV for HAL for multi-task workers. Despite differences in exposure classifications, nearly all model-technique combinations showed exposure-response relationships with prevalence of CTS for the combined sample of mono-task and multi-task workers. Both the TLV for HAL and the SI, with the TWA or typical techniques, appear useful for epidemiological studies and surveillance. However, the utility of TWA, typical, and peak techniques for job design and intervention is dubious.
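A hedged sketch of the three multi-task summary techniques named above, applied to per-task exposure scores (e.g., SI or TLV for HAL values) and task durations; the numbers and the definition of "typical" (the task occupying the most time) are illustrative assumptions.

```python
import numpy as np

task_exposure = np.array([3.0, 9.0, 1.5])          # exposure score for each task
task_hours = np.array([4.0, 2.0, 2.0])             # hours per shift spent on each task

twa = np.sum(task_exposure * task_hours) / np.sum(task_hours)   # time-weighted average
peak = task_exposure.max()                                        # worst task
typical = task_exposure[np.argmax(task_hours)]                    # most time-consuming task
print(twa, peak, typical)
```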
NASA Astrophysics Data System (ADS)
Abrokwah, K.; O'Reilly, A. M.
2017-12-01
Groundwater is an important resource that is extracted every day because of its invaluable use for domestic, industrial and agricultural purposes. The need for sustaining groundwater resources is clearly indicated by declining water levels and has led to modeling and forecasting accurate groundwater levels. In this study, spectral decomposition of climatic forcing time series was used to develop hybrid wavelet analysis (WA) and moving window average (MWA) artificial neural network (ANN) models. These techniques are explored by modeling historical groundwater levels in order to provide understanding of potential causes of the observed groundwater-level fluctuations. Selection of the appropriate decomposition level for WA and window size for MWA helps in understanding the important time scales of climatic forcing, such as rainfall, that influence water levels. Discrete wavelet transform (DWT) is used to decompose the input time-series data into various levels of approximate and details wavelet coefficients, whilst MWA acts as a low-pass signal-filtering technique for removing high-frequency signals from the input data. The variables used to develop and validate the models were daily average rainfall measurements from five National Atmospheric and Oceanic Administration (NOAA) weather stations and daily water-level measurements from two wells recorded from 1978 to 2008 in central Florida, USA. Using different decomposition levels and different window sizes, several WA-ANN and MWA-ANN models for simulating the water levels were created and their relative performances compared against each other. The WA-ANN models performed better than the corresponding MWA-ANN models; also higher decomposition levels of the input signal by the DWT gave the best results. The results obtained show the applicability and feasibility of hybrid WA-ANN and MWA-ANN models for simulating daily water levels using only climatic forcing time series as model inputs.
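An illustrative sketch of the two pre-processing steps compared in that abstract: a discrete wavelet decomposition of a rainfall series (the WA-ANN input) and a moving window average of the same series (the MWA-ANN input). The library (PyWavelets), wavelet ('db4'), level 3, and 30-day window are my assumptions, not necessarily the study's settings.

```python
import numpy as np
import pywt

rain = np.random.gamma(shape=0.3, scale=5.0, size=2000)     # toy daily rainfall (mm)

# Wavelet analysis: approximation + detail coefficients at several levels
coeffs = pywt.wavedec(rain, 'db4', level=3)                  # [cA3, cD3, cD2, cD1]

# Moving window average: a simple low-pass filter of the same series
window = 30
mwa = np.convolve(rain, np.ones(window) / window, mode='valid')

print([c.shape for c in coeffs], mwa.shape)
# Either representation would then be fed, with lagged values, to an ANN that
# predicts the groundwater level.
```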
Bellali, Hedia; Ben-Alaya, Nissaf; Saez, Marc; Malouche, Dhafer; Chahed, Mohamed Kouni
2017-01-01
Transmission of zoonotic cutaneous leishmaniasis (ZCL) depends on the presence, density and distribution of Leishmania major rodent reservoir and the development of these rodents is known to have a significant dependence on environmental and climate factors. ZCL in Tunisia is one of the most common forms of leishmaniasis. The aim of this paper was to build a regression model of ZCL cases to identify the relationship between ZCL occurrence and possible risk factors, and to develop a predicting model for ZCL's control and prevention purposes. Monthly reported ZCL cases, environmental and bioclimatic data were collected over 6 years (2009–2015). Three rural areas in the governorate of Sidi Bouzid were selected as the study area. Cross-correlation analysis was used to identify the relevant lagged effects of possible risk factors, associated with ZCL cases. Non-parametric modeling techniques known as generalized additive model (GAM) and generalized additive mixed models (GAMM) were applied in this work. These techniques have the ability to approximate the relationship between the predictors (inputs) and the response variable (output), and express the relationship mathematically. The goodness-of-fit of the constructed model was determined by Generalized cross-validation (GCV) score and residual test. There were a total of 1019 notified ZCL cases from July 2009 to June 2015. The results showed seasonal distribution of reported ZCL cases from August to January. The model highlighted that rodent density, average temperature, cumulative rainfall and average relative humidity, with different time lags, all play role in sustaining and increasing the ZCL incidence. The GAMM model could be applied to predict the occurrence of ZCL in central Tunisia and could help for the establishment of an early warning system to control and prevent ZCL in central Tunisia. PMID:28841642
Talmoudi, Khouloud; Bellali, Hedia; Ben-Alaya, Nissaf; Saez, Marc; Malouche, Dhafer; Chahed, Mohamed Kouni
2017-08-01
Transmission of zoonotic cutaneous leishmaniasis (ZCL) depends on the presence, density and distribution of Leishmania major rodent reservoir and the development of these rodents is known to have a significant dependence on environmental and climate factors. ZCL in Tunisia is one of the most common forms of leishmaniasis. The aim of this paper was to build a regression model of ZCL cases to identify the relationship between ZCL occurrence and possible risk factors, and to develop a predicting model for ZCL's control and prevention purposes. Monthly reported ZCL cases, environmental and bioclimatic data were collected over 6 years (2009-2015). Three rural areas in the governorate of Sidi Bouzid were selected as the study area. Cross-correlation analysis was used to identify the relevant lagged effects of possible risk factors, associated with ZCL cases. Non-parametric modeling techniques known as generalized additive model (GAM) and generalized additive mixed models (GAMM) were applied in this work. These techniques have the ability to approximate the relationship between the predictors (inputs) and the response variable (output), and express the relationship mathematically. The goodness-of-fit of the constructed model was determined by Generalized cross-validation (GCV) score and residual test. There were a total of 1019 notified ZCL cases from July 2009 to June 2015. The results showed seasonal distribution of reported ZCL cases from August to January. The model highlighted that rodent density, average temperature, cumulative rainfall and average relative humidity, with different time lags, all play role in sustaining and increasing the ZCL incidence. The GAMM model could be applied to predict the occurrence of ZCL in central Tunisia and could help for the establishment of an early warning system to control and prevent ZCL in central Tunisia.
NASA Technical Reports Server (NTRS)
Koch, Steven E.; Mcqueen, Jeffery T.
1987-01-01
A survey of various one- and two-way interactive nested grid techniques used in hydrostatic numerical weather prediction models is presented and the advantages and disadvantages of each method are discussed. The techniques for specifying the lateral boundary conditions for each nested grid scheme are described in detail. Averaging and interpolation techniques used when applying the coarse mesh grid (CMG) and fine mesh grid (FMG) interface conditions during two-way nesting are discussed separately. The survey shows that errors are commonly generated at the boundary between the CMG and FMG due to boundary formulation or specification discrepancies. Methods used to control this noise include application of smoothers, enhanced diffusion, or damping-type time integration schemes to model variables. The results from this survey provide the information needed to decide which one-way and two-way nested grid schemes merit future testing with the Mesoscale Atmospheric Simulation System (MASS) model. An analytically specified baroclinic wave will be used to conduct systematic tests of the chosen schemes since this will allow for objective determination of the interfacial noise in the kind of meteorological setting for which MASS is designed. Sample diagnostic plots from initial tests using the analytic wave are presented to illustrate how the model-generated noise is ascertained. These plots will be used to compare the accuracy of the various nesting schemes when incorporated into the MASS model.
Beating-heart registration for organ-mounted robots.
Wood, Nathan A; Schwartzman, David; Passineau, Michael J; Moraca, Robert J; Zenati, Marco A; Riviere, Cameron N
2018-03-06
Organ-mounted robots address the problem of beating-heart surgery by adhering to the heart, passively providing a platform that approaches zero relative motion. Because of the quasi-periodic deformation of the heart due to heartbeat and respiration, registration must address not only spatial registration but also temporal registration. Motion data were collected in the porcine model in vivo (N = 6). Fourier series models of heart motion were developed. By comparing registrations generated using an iterative closest-point approach at different phases of respiration, the phase corresponding to minimum registration distance is identified. The spatiotemporal registration technique presented here reduces registration error by an average of 4.2 mm over the 6 trials, in comparison with a more simplistic static registration that merely averages out the physiological motion. An empirical metric for spatiotemporal registration of organ-mounted robots is defined and demonstrated using data from animal models in vivo. Copyright © 2018 John Wiley & Sons, Ltd.
Apps to promote physical activity among adults: a review and content analysis
2014-01-01
Background In May 2013, the iTunes and Google Play stores contained 23,490 and 17,756 smartphone applications (apps) categorized as Health and Fitness, respectively. The quality of these apps, in terms of applying established health behavior change techniques, remains unclear. Methods The study sample was identified through systematic searches in iTunes and Google Play. Search terms were based on Boolean logic and included AND combinations for physical activity, healthy lifestyle, exercise, fitness, coach, assistant, motivation, and support. Sixty-four apps were downloaded, reviewed, and rated based on the taxonomy of behavior change techniques used in the interventions. Mean and ranges were calculated for the number of observed behavior change techniques. Using nonparametric tests, we compared the number of techniques observed in free and paid apps and in iTunes and Google Play. Results On average, the reviewed apps included 5 behavior change techniques (range 2–8). Techniques such as self-monitoring, providing feedback on performance, and goal-setting were used most frequently, whereas some techniques such as motivational interviewing, stress management, relapse prevention, self-talk, role models, and prompted barrier identification were not. No differences in the number of behavior change techniques between free and paid apps, or between the app stores were found. Conclusions The present study demonstrated that apps promoting physical activity applied an average of 5 out of 23 possible behavior change techniques. This number was not different for paid and free apps or between app stores. The most frequently used behavior change techniques in apps were similar to those most frequently used in other types of physical activity promotion interventions. PMID:25059981
Kamimura, Emi; Tanaka, Shinpei; Takaba, Masayuki; Tachi, Keita; Baba, Kazuyoshi
2017-01-01
The aim of this study was to evaluate and compare the inter-operator reproducibility of three-dimensional (3D) images of teeth captured by a digital impression technique to a conventional impression technique in vivo. Twelve participants with complete natural dentition were included in this study. A digital impression of the mandibular molars of these participants was made by two operators with different levels of clinical experience, 3 or 16 years, using an intra-oral scanner (Lava COS, 3M ESPE). A silicone impression also was made by the same operators using the double mix impression technique (Imprint3, 3M ESPE). Stereolithography (STL) data were directly exported from the Lava COS system, while STL data of a plaster model made from silicone impression were captured by a three-dimensional (3D) laboratory scanner (D810, 3shape). The STL datasets recorded by two different operators were compared using 3D evaluation software and superimposed using the best-fit-algorithm method (least-squares method, PolyWorks, InnovMetric Software) for each impression technique. Inter-operator reproducibility as evaluated by average discrepancies of corresponding 3D data was compared between the two techniques (Wilcoxon signed-rank test). The visual inspection of superimposed datasets revealed that discrepancies between repeated digital impression were smaller than observed with silicone impression. Confirmation was forthcoming from statistical analysis revealing significantly smaller average inter-operator reproducibility using a digital impression technique (0.014± 0.02 mm) than when using a conventional impression technique (0.023 ± 0.01 mm). The results of this in vivo study suggest that inter-operator reproducibility with a digital impression technique may be better than that of a conventional impression technique and is independent of the clinical experience of the operator.
Non-contact thrust stand calibration method for repetitively pulsed electric thrusters.
Wong, Andrea R; Toftul, Alexandra; Polzin, Kurt A; Pearson, J Boise
2012-02-01
A thrust stand calibration technique for use in testing repetitively pulsed electric thrusters for in-space propulsion has been developed and tested using a modified hanging pendulum thrust stand. In the implementation of this technique, current pulses are applied to a solenoid to produce a pulsed magnetic field that acts against a permanent magnet mounted to the thrust stand pendulum arm. The force on the magnet is applied in this non-contact manner, with the entire pulsed force transferred to the pendulum arm through a piezoelectric force transducer to provide a time-accurate force measurement. Modeling of the pendulum arm dynamics reveals that after an initial transient in thrust stand motion the quasi-steady average deflection of the thrust stand arm away from the unforced or "zero" position can be related to the average applied force through a simple linear Hooke's law relationship. Modeling demonstrates that this technique is universally applicable except when the pulsing period is increased to the point where it approaches the period of natural thrust stand motion. Calibration data were obtained using a modified hanging pendulum thrust stand previously used for steady-state thrust measurements. Data were obtained for varying impulse bit at constant pulse frequency and for varying pulse frequency. The two data sets exhibit excellent quantitative agreement with each other. The overall error on the linear regression fit used to determine the calibration coefficient was roughly 1%.
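A minimal sketch of the calibration relationship described above: because the quasi-steady average deflection is proportional to the average applied force, a linear fit of known non-contact force against measured deflection gives the calibration coefficient. The data points are invented for illustration.

```python
import numpy as np

avg_force = np.array([0.0, 2.0, 4.0, 6.0, 8.0])         # average applied force (mN)
avg_deflection = np.array([0.0, 0.9, 2.1, 3.0, 4.1])     # measured deflection (arb. units)

k, offset = np.polyfit(avg_deflection, avg_force, 1)     # Hooke's-law style linear fit
print(k, offset)

# Thrust of a repetitively pulsed thruster would then be inferred from its
# measured average deflection: thrust ~ k * deflection + offset.
```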
NASA Astrophysics Data System (ADS)
Samhouri, M.; Al-Ghandoor, A.; Fouad, R. H.
2009-08-01
In this study two techniques, for modeling electricity consumption of the Jordanian industrial sector, are presented: (i) multivariate linear regression and (ii) neuro-fuzzy models. Electricity consumption is modeled as function of different variables such as number of establishments, number of employees, electricity tariff, prevailing fuel prices, production outputs, capacity utilizations, and structural effects. It was found that industrial production and capacity utilization are the most important variables that have significant effect on future electrical power demand. The results showed that both the multivariate linear regression and neuro-fuzzy models are generally comparable and can be used adequately to simulate industrial electricity consumption. However, comparison that is based on the square root average squared error of data suggests that the neuro-fuzzy model performs slightly better for future prediction of electricity consumption than the multivariate linear regression model. Such results are in full agreement with similar work, using different methods, for other countries.
Source apportionment of speciated PM10 in the United Kingdom in 2008: Episodes and annual averages
NASA Astrophysics Data System (ADS)
Redington, A. L.; Witham, C. S.; Hort, M. C.
2016-11-01
The Lagrangian atmospheric dispersion model NAME (Numerical Atmospheric-dispersion Modelling Environment) has been used to simulate the formation and transport of PM10 over North-West Europe in 2008. The model has been evaluated against UK measurement data and has been shown to adequately represent the observed PM10 at rural and urban sites on a daily basis. The Lagrangian nature of the model allows information on the origin of pollutants (and hence their secondary products) to be retained, allowing attribution of pollutants at receptor sites back to their sources. This source apportionment technique has been employed to determine whether the different components of the modelled PM10 originated from UK, shipping, European (excluding the UK) or background sources. For the first time this has been done to evaluate the composition during periods of elevated PM10 as well as the annual average composition. The episode data were determined by selecting the model data for each hour when the corresponding measurement was >50 μg/m3. All the modelled sites show an increase in the European pollution contribution and a decrease in the background contribution in the episode case compared to the annual average. The European contribution is greatest in southern and eastern parts of the UK and decreases moving northwards and westwards. Analysis of the speciated attribution data over the selected sites reveals that for 2008, as an annual average, the top three contributors to total PM10 are UK primary PM10 (17-25%), UK origin nitrate aerosol (18-21%) and background PM10 (11-16%). Under episode conditions the top three contributors to modelled PM10 are UK origin nitrate aerosol (12-33%), European origin nitrate aerosol (11-19%) and UK primary PM10 (12-18%).
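A minimal sketch of the episode-selection step: keep only the hours where the measured PM10 exceeds 50 μg/m3, then compare the source shares for those hours with the annual-average shares. The hourly values and column names are invented for illustration.

```python
import pandas as pd

# Hypothetical hourly data at one receptor: measured PM10 and modelled
# source contributions (all in ug/m3); column names are illustrative only.
df = pd.DataFrame({
    "obs_pm10":         [32.0, 55.0, 61.0, 28.0, 70.0, 40.0],
    "uk_primary":       [ 8.0, 10.0, 11.0,  7.0, 12.0,  9.0],
    "uk_nitrate":       [ 6.0, 15.0, 18.0,  5.0, 22.0,  8.0],
    "european_nitrate": [ 3.0, 12.0, 14.0,  3.0, 16.0,  5.0],
    "background":       [ 7.0,  6.0,  5.0,  7.0,  5.0,  7.0],
})
sources = ["uk_primary", "uk_nitrate", "european_nitrate", "background"]

# Episode hours: model output retained only where the measurement is >50 ug/m3.
episode = df[df["obs_pm10"] > 50.0]

annual_share = df[sources].sum() / df[sources].sum().sum()
episode_share = episode[sources].sum() / episode[sources].sum().sum()
print(pd.DataFrame({"annual": annual_share, "episode": episode_share}).round(2))
```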
Rezaei-Darzi, Ehsan; Farzadfar, Farshad; Hashemi-Meshkini, Amir; Navidi, Iman; Mahmoudi, Mahmoud; Varmaghani, Mehdi; Mehdipour, Parinaz; Soudi Alamdari, Mahsa; Tayefi, Batool; Naderimagham, Shohreh; Soleymani, Fatemeh; Mesdaghinia, Alireza; Delavari, Alireza; Mohammad, Kazem
2014-12-01
This study aimed to evaluate and compare the prediction accuracy of two data mining techniques, decision tree and artificial neural network (ANN) models, in assigning diagnoses to gastrointestinal prescriptions in Iran. This study was conducted in three phases: data preparation, a training phase, and a testing phase. A sample from a database of 23 million pharmacy insurance claim records from 2004 to 2011 was used, in which a total of 330 prescriptions were assessed and used to train and test the models simultaneously. In the training phase, the selected prescriptions were assessed by a physician and a pharmacist separately and assigned a diagnosis. To test the performance of each model, k-fold stratified cross validation was conducted in addition to measuring their sensitivity and specificity. Generally, the two methods had very similar accuracies. Considering the weighted average of the true positive rate (sensitivity) and true negative rate (specificity), the decision tree had slightly higher accuracy for correct classification (83.3% and 96% versus 80.3% and 95.1%, respectively). However, when the weighted average of the ROC area (AUC between each class and all other classes) was measured, the ANN displayed higher accuracy in predicting the diagnosis (93.8% compared with 90.6%). According to the results of this study, the artificial neural network and decision tree models showed similar accuracy in assigning diagnoses to GI prescriptions.
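The comparison of the two classifiers under stratified k-fold cross validation could look roughly like the sketch below; the synthetic features stand in for the coded prescription data, and the hyperparameters are not those of the study.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

# Stand-in for the 330 labelled prescriptions: features could encode drugs,
# doses and patient covariates; the labels are the assigned diagnoses.
X, y = make_classification(n_samples=330, n_features=20, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                  ("neural network", MLPClassifier(max_iter=2000, random_state=0))]:
    scores = cross_val_score(clf, X, y, cv=cv)
    print(f"{name}: mean cross-validated accuracy {scores.mean():.3f}")
```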
Evaluation of Flow Biosensor Technology in a Chronically-Instrumented Non-Human Primate Model
NASA Technical Reports Server (NTRS)
Koenig, S. C.; Reister, C.; Schaub, J.; Muniz, G.; Ferguson, T.; Fanton, J. W.
1995-01-01
The Physiology Research Branch of Brooks AFB conducts both human and non-human primate experiments to determine the effects of microgravity and hypergravity on the cardiovascular system and to identify the particular mechanisms that invoke these responses. Primary investigative research efforts in a non-human primate model require the calculation of total peripheral resistance (TPR), systemic arterial compliance (SAC), and pressure-volume loop characteristics. These calculations require beat-to-beat measurement of aortic flow. We have evaluated commercially available electromagnetic (EMF) and transit-time flow measurement techniques. In vivo and in vitro experiments demonstrated that the average error of these techniques is less than 25 percent for EMF and less than 10 percent for transit-time.
You can run, you can hide: The epidemiology and statistical mechanics of zombies
NASA Astrophysics Data System (ADS)
Alemi, Alexander A.; Bierbaum, Matthew; Myers, Christopher R.; Sethna, James P.
2015-11-01
We use a popular fictional disease, zombies, in order to introduce techniques used in modern epidemiology modeling, and ideas and techniques used in the numerical study of critical phenomena. We consider variants of zombie models, from fully connected continuous time dynamics to a full scale exact stochastic dynamic simulation of a zombie outbreak on the continental United States. Along the way, we offer a closed form analytical expression for the fully connected differential equation, and demonstrate that the single person per site two dimensional square lattice version of zombies lies in the percolation universality class. We end with a quantitative study of the full scale US outbreak, including the average susceptibility of different geographical regions.
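For readers unfamiliar with the model class, a bare-bones fully connected SZR (susceptible-zombie-removed) system can be integrated in a few lines; the bite and kill rates below are arbitrary illustrative values, not the parameters used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, kappa = 1.1, 0.8   # illustrative bite and kill rates (not fitted values)

def szr(t, y):
    s, z, r = y
    bite, kill = beta * s * z, kappa * s * z
    return [-bite, bite - kill, kill]   # dS/dt, dZ/dt, dR/dt

sol = solve_ivp(szr, (0.0, 25.0), [0.99, 0.01, 0.0])
print("final fractions (S, Z, R):", np.round(sol.y[:, -1], 3))
```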
NASA Technical Reports Server (NTRS)
Plumb, R. A.
1985-01-01
Two dimensional modeling has become an established technique for the simulation of the global structure of trace constituents. Such models are simpler to formulate and cheaper to operate than three dimensional general circulation models, while avoiding some of the gross simplifications of one dimensional models. Nevertheless, the parameterization of eddy fluxes required in a 2-D model is not a trivial problem. This fact has apparently led some to interpret the shortcomings of existing 2-D models as indicating that the parameterization procedure is wrong in principle. There are grounds to believe that these shortcomings result primarily from incorrect implementations of the predictions of eddy transport theory and that a properly based parameterization may provide a good basis for atmospheric modeling. The existence of these GCM-derived coefficients affords an unprecedented opportunity to test the validity of the flux-gradient parameterization. To this end, a zonally averaged (2-D) model was developed, using these coefficients in the transport parameterization. Results from this model for a number of contrived tracer experiments were compared with the parent GCM. The generally good agreement substantially validates the flux-gradient parameterization, and thus the basic principle of 2-D modeling.
NASA Astrophysics Data System (ADS)
Ichinose, G. A.; Saikia, C. K.
2007-12-01
We applied the moment tensor (MT) analysis scheme to identify seismic sources using regional seismograms based on the representation theorem for the elastic wave displacement field. This method is applied to estimate the isotropic (ISO) and deviatoric MT components of earthquake, volcanic, and isotropic sources within the Basin and Range Province (BRP) and western US. The ISO components from Hoya, Bexar, Montello and Junction were compared to those of recent, well-recorded earthquakes near Little Skull Mountain, Scotty's Junction, Eureka Valley, and Fish Lake Valley within southern Nevada. We also examined "dilatational" sources near Mammoth Lakes Caldera and two mine collapses, including the August 2007 event in Utah recorded by US Array. Using our formulation we first implemented the full MT inversion method on long-period filtered regional data. We also applied a grid-search technique to solve for the percent deviatoric and percent ISO moments. By using the grid-search technique, high-frequency waveforms can be used with calibrated velocity models. We modeled the ISO and deviatoric components (spall and tectonic release) as separate events delayed in time or offset in space. Calibrated velocity models helped the resolution of the ISO components and decreased the variance relative to the average, initial or background velocity models. The centroid location and time shifts are velocity model dependent. Models can be improved, as was done in previously published work in which we used an iterative waveform inversion method with regional seismograms from four well recorded and constrained earthquakes. The resulting velocity models reduced the variance between predicted synthetics by about 50 to 80% for frequencies up to 0.5 Hz. Tests indicate that the individual path-specific models perform better at recovering the earthquake MT solutions even after using a sparser distribution of stations than the average or initial models.
Product lifetime, energy efficiency and climate change: A case study of air conditioners in Japan.
Nishijima, Daisuke
2016-10-01
This study proposed a modelling technique for estimating life-cycle CO2 emissions of durable goods by considering changes in product lifetime and energy efficiency. The stock and flow of durable goods was modelled by Weibull lifetime distributions and the trend in annual energy efficiency (i.e., annual electricity consumption) of an "average" durable good was formulated as a reverse logistic curve including a technologically critical value (i.e., limit energy efficiency) with respect to time. I found that when the average product lifetime is reduced, there is a trade-off between the reduction in emissions during product use (use phase), due to the additional purchases of new, more energy-efficient air conditioners, and the increase in emissions arising from the additional production of new air conditioners stimulated by the reduction of the average product lifetime. A scenario analysis focused on residential air conditioners in Japan during 1972-2013 showed that for a reduction of average lifetime of 1 year, if the air conditioner energy efficiency limit can be improved by 1.4% from the estimated current efficiency level, then CO2 emissions can be reduced by approximately the same amount as for an extension of average product lifetime of 1 year. Copyright © 2016 Elsevier Ltd. All rights reserved.
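The Weibull stock-and-flow idea can be illustrated by tracking how many units of one sales cohort survive to each year; the shape and scale parameters and cohort size below are assumptions, not the study's fitted values.

```python
import numpy as np

def surviving_units(sold, t_years, shape=2.5, scale=12.0):
    """Expected units still in use t years after sale, assuming a Weibull
    lifetime distribution with survival S(t) = exp(-(t/scale)**shape)."""
    t = np.asarray(t_years, dtype=float)
    return sold * np.exp(-(t / scale) ** shape)

years = np.arange(0, 21)
stock = surviving_units(sold=1_000_000, t_years=years)
retired = -np.diff(stock)          # units scrapped between successive years
print("still in service after 10 years:", int(stock[10]))
print("scrapped during the 10th year: ", int(retired[9]))
```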
Kapellusch, Jay M; Bao, Stephen S; Silverstein, Barbara A; Merryweather, Andrew S; Thiese, Mathew S; Hegmann, Kurt T; Garg, Arun
2017-12-01
The Strain Index (SI) and the American Conference of Governmental Industrial Hygienists (ACGIH) Threshold Limit Value for Hand Activity Level (TLV for HAL) use different constituent variables to quantify task physical exposures. Similarly, time-weighted-average (TWA), Peak, and Typical exposure techniques to quantify physical exposure from multi-task jobs make different assumptions about each task's contribution to the whole job exposure. Thus, task and job physical exposure classifications differ depending upon which model and technique are used for quantification. This study examines exposure classification agreement, disagreement, correlation, and magnitude of classification differences between these models and techniques. Data from 710 multi-task job workers performing 3,647 tasks were analyzed using the SI and TLV for HAL models, as well as with the TWA, Typical and Peak job exposure techniques. Physical exposures were classified as low, medium, and high using each model's recommended, or a priori limits. Exposure classification agreement and disagreement between models (SI, TLV for HAL) and between job exposure techniques (TWA, Typical, Peak) were described and analyzed. Regardless of technique, the SI classified more tasks as high exposure than the TLV for HAL, and the TLV for HAL classified more tasks as low exposure. The models agreed on 48.5% of task classifications (kappa = 0.28) with 15.5% of disagreement between low and high exposure categories. Between-technique (i.e., TWA, Typical, Peak) agreement ranged from 61-93% (kappa: 0.16-0.92) depending on whether the SI or TLV for HAL was used. There was disagreement between the SI and TLV for HAL and between the TWA, Typical and Peak techniques. Disagreement creates uncertainty for job design, job analysis, risk assessments, and developing interventions. Task exposure classifications from the SI and TLV for HAL might complement each other. However, TWA, Typical, and Peak job exposure techniques all have limitations. Part II of this article examines whether the observed differences between these models and techniques produce different exposure-response relationships for predicting prevalence of carpal tunnel syndrome.
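The three whole-job summary techniques compared in the study can be sketched for a single multi-task job as follows; the task scores and durations are invented, and "Typical" is taken here as the exposure of the task occupying the most shift time, which is one common reading of that technique.

```python
# Hypothetical multi-task job: (task exposure score, hours per shift).
tasks = [(3.0, 4.0), (9.0, 1.0), (5.0, 3.0)]

total_hours = sum(h for _, h in tasks)
twa = sum(score * h for score, h in tasks) / total_hours  # time-weighted average
peak = max(score for score, _ in tasks)                   # worst task defines the job
typical = max(tasks, key=lambda sh: sh[1])[0]             # task with the most hours

print(f"TWA={twa:.2f}  Peak={peak:.2f}  Typical={typical:.2f}")
```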
Mihaescu, Mihai; Murugappan, Shanmugam; Kalra, Maninder; Khosla, Sid; Gutmark, Ephraim
2008-07-19
Computational fluid dynamics techniques employing primarily steady Reynolds-Averaged Navier-Stokes (RANS) methodology have recently been used to characterize the transitional/turbulent flow field in human airways. The use of RANS implies that flow phenomena are averaged over time, so the flow dynamics are not captured. Further, RANS uses two-equation turbulence models that are not adequate for predicting anisotropic flows, flows with high streamline curvature, or flows where separation occurs. A more accurate approach for such flow situations that occur in the human airway is Large Eddy Simulation (LES). The paper considers flow modeling in a pharyngeal airway model reconstructed from cross-sectional magnetic resonance scans of a patient with obstructive sleep apnea. The airway model is characterized by a maximum narrowing at the site of the retropalatal pharynx. Two flow-modeling strategies are employed: the steady RANS and the LES approach. In the RANS modeling framework both k-epsilon and k-omega turbulence models are used. The paper discusses the differences between the airflow characteristics obtained from the RANS and LES calculations. The largest discrepancies were found in the axial velocity distributions downstream of the minimum cross-sectional area. This region is characterized by flow separation and large radial velocity gradients across the developed shear layers. The largest difference in static pressure distributions on the airway walls was found between the LES and the k-epsilon data at the site of maximum narrowing in the retropalatal pharynx.
NASA Technical Reports Server (NTRS)
Considine, David B.; Logan, Jennifer A.; Olsen, Mark A.
2008-01-01
The NASA Global Modeling Initiative has developed a combined stratosphere/troposphere chemistry and transport model which fully represents the processes governing atmospheric composition near the tropopause. We evaluate model ozone distributions near the tropopause, using two high vertical resolution monthly mean ozone profile climatologies constructed with ozonesonde data, one by averaging on pressure levels and the other relative to the thermal tropopause. Model ozone is high biased at the SH tropical and NH midlatitude tropopause by approx. 45% in a 4 deg. latitude x 5 deg. longitude model simulation. Increasing the resolution to 2 deg. x 2.5 deg. increases the NH tropopause high bias to approx. 60%, but decreases the tropical tropopause bias to approx. 30%, an effect of a better-resolved residual circulation. The tropopause ozone biases appear not to be due to an overly vigorous residual circulation or excessive stratosphere/troposphere exchange, but are more likely due to insufficient vertical resolution or excessive vertical diffusion near the tropopause. In the upper troposphere and lower stratosphere, model/measurement intercomparisons are strongly affected by the averaging technique. NH and tropical mean model lower stratospheric biases are less than 20%. In the upper troposphere, the 2 deg. x 2.5 deg. simulation exhibits mean high biases of approx. 20% and approx. 35% during April in the tropics and NH midlatitudes, respectively, compared to the pressure averaged climatology. However, relative-to-tropopause averaging produces upper troposphere high biases of approx. 30% and 70% in the tropics and NH midlatitudes. This is because relative-to-tropopause averaging better preserves large cross-tropopause O3 gradients, which are seen in the daily sonde data, but not in daily model profiles. The relative annual cycle of ozone near the tropopause is reproduced very well in the model Northern Hemisphere midlatitudes. In the tropics, the model amplitude of the near tropopause annual cycle is weak. This is likely due to the annual amplitude of mean vertical upwelling near the tropopause, which analysis suggests is approx. 30% weaker than in the real atmosphere.
2014-01-01
Background Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. Results MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Conclusions Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy. PMID:24731387
Cao, Renzhi; Wang, Zheng; Cheng, Jianlin
2014-04-15
Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy.
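The clustering-style global quality score used by the pairwise methods reduces, in essence, to averaging each model's similarity to the rest of the pool; the sketch below shows that calculation on a toy similarity matrix (optionally weighted, in the spirit of MULTICOM-CONSTRUCT), with all values invented.

```python
import numpy as np

def global_quality_from_pairwise(similarity, weights=None):
    """(Weighted) average similarity of each model to every other model in
    the pool; similarity[i, j] is any pairwise score in [0, 1]."""
    sim = np.asarray(similarity, dtype=float)
    n = sim.shape[0]
    w = np.ones(n) if weights is None else np.asarray(weights, dtype=float)
    quality = np.empty(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        quality[i] = np.average(sim[i, others], weights=w[others])
    return quality

# Toy pool of four models: the last model is the low-consensus outlier.
S = np.array([[1.0, 0.8, 0.9, 0.3],
              [0.8, 1.0, 0.7, 0.2],
              [0.9, 0.7, 1.0, 0.3],
              [0.3, 0.2, 0.3, 1.0]])
print(np.round(global_quality_from_pairwise(S), 3))
```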
Averaged model to study long-term dynamics of a probe about Mercury
NASA Astrophysics Data System (ADS)
Tresaco, Eva; Carvalho, Jean Paulo S.; Prado, Antonio F. B. A.; Elipe, Antonio; de Moraes, Rodolpho Vilhena
2018-02-01
This paper provides a method for finding initial conditions of frozen orbits for a probe around Mercury. Frozen orbits are those whose orbital elements remain constant on average. Thus, at the same point in each orbit, the satellite always passes at the same altitude. This is very interesting for scientific missions that require close inspection of any celestial body. The orbital dynamics of an artificial satellite about Mercury is governed by the potential attraction of the main body. Besides the Keplerian attraction, we consider the inhomogeneities of the potential of the central body. We include secondary terms of Mercury's gravity field from J_2 up to J_6, and the tesseral harmonic \overline{C}_{22}, which is of the same magnitude as the zonal J_2. In the case of science missions about Mercury, it is also important to consider the third-body perturbation (Sun). The circular restricted three-body problem cannot be applied to the Mercury-Sun system due to its non-negligible orbital eccentricity. Besides the harmonic coefficients of Mercury's gravitational potential and the Sun's gravitational perturbation, our averaged model also includes solar radiation pressure. This simplified model captures the majority of the dynamics of low and high orbits about Mercury. In order to capture the dominant characteristics of the dynamics, short-period terms of the system are removed by applying a double-averaging technique. This algorithm is a two-fold process which firstly averages over the period of the satellite, and secondly averages with respect to the period of the third body. This simplified Hamiltonian model is introduced in the Lagrange planetary equations. Thus, frozen orbits are characterized by a surface depending on three variables: the orbital semimajor axis, eccentricity and inclination. We find frozen orbits for an average altitude of 400 and 1000 km, which are the predicted values for the BepiColombo mission. Finally, the paper delves into the orbital stability of frozen orbits and the temporal evolution of the eccentricity of these orbits.
B. Lane Rivenbark; C. Rhett Jackson
2004-01-01
Regional average evapotranspiration estimates developed by water balance techniques are frequently used to estimate average discharge in ungaged streams. However, the lower stream size range for the validity of these techniques has not been explored. Flow records were collected and evaluated for 16 small streams in the Southern Appalachians to test whether the...
Carvajal, Thaddeus M; Viacrusis, Katherine M; Hernandez, Lara Fides T; Ho, Howell T; Amalin, Divina M; Watanabe, Kozo
2018-04-17
Several studies have applied ecological factors such as meteorological variables to develop models and accurately predict the temporal pattern of dengue incidence or occurrence. Across the many studies that have investigated this premise, the modeling approaches differ, and each typically uses only a single statistical technique. This raises the question of which technique is robust and reliable. Hence, our study aims to compare the predictive accuracy of the temporal pattern of dengue incidence in Metropolitan Manila, as influenced by meteorological factors, across four modeling techniques: (a) General Additive Modeling, (b) Seasonal Autoregressive Integrated Moving Average with exogenous variables, (c) Random Forest and (d) Gradient Boosting. Dengue incidence and meteorological data (flood, precipitation, temperature, southern oscillation index, relative humidity, wind speed and direction) of Metropolitan Manila from January 1, 2009 - December 31, 2013 were obtained from the respective government agencies. Two types of datasets were used in the analysis: observed meteorological factors (MF) and their corresponding delayed or lagged effects (LG). These datasets were then subjected to the four modeling techniques. The predictive accuracy and variable importance of each modeling technique were calculated and evaluated. Among the statistical modeling techniques, Random Forest showed the best predictive accuracy. Moreover, the dataset with delayed or lagged meteorological effects was shown to be the best for this purpose. Thus, the Random Forest model with delayed meteorological effects (RF-LG) was deemed the best among all assessed models. Relative humidity was shown to be the top-most important meteorological factor in the best model. The study showed that the different statistical modeling techniques indeed generate different predictive outcomes, and it further revealed the Random Forest model with delayed meteorological effects to be the best at predicting the temporal pattern of dengue incidence in Metropolitan Manila. It is also noteworthy that the study identified relative humidity, along with rainfall and temperature, as an important meteorological factor that can influence this temporal pattern.
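A rough sketch of the best-performing configuration (a random forest with lagged meteorological predictors) is given below; the synthetic weekly series, the 4-week lag, and the train/test split are illustrative assumptions rather than the study's actual setup.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical weekly series: dengue cases plus two meteorological drivers.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "cases": rng.poisson(40, n).astype(float),
    "rainfall": rng.gamma(2.0, 10.0, n),
    "rel_humidity": rng.normal(78, 5, n),
})

lag = 4                                             # delayed (lagged) effects
X = df[["rainfall", "rel_humidity"]].shift(lag).iloc[lag:]
y = df["cases"].iloc[lag:]

split = int(0.8 * len(X))
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X.iloc[:split], y.iloc[:split])
print("held-out R^2:", round(model.score(X.iloc[split:], y.iloc[split:]), 3))
print("variable importance:", dict(zip(X.columns, model.feature_importances_.round(2))))
```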
NASA Technical Reports Server (NTRS)
Jones, Kenneth M.; Biedron, Robert T.; Whitlock, Mark
1995-01-01
A computational study was performed to determine the predictive capability of a Reynolds averaged Navier-Stokes code (CFL3D) for two-dimensional and three-dimensional multielement high-lift systems. Three configurations were analyzed: a three-element airfoil, a wing with a full span flap and a wing with a partial span flap. In order to accurately model these complex geometries, two different multizonal structured grid techniques were employed. For the airfoil and full span wing configurations, a chimera or overset grid technique was used. The results of the airfoil analysis illustrated that although the absolute values of lift were somewhat in error, the code was able to predict reasonably well the variation with Reynolds number and flap position. The full span flap analysis demonstrated good agreement with experimental surface pressure data over the wing and flap. Multiblock patched grids were used to model the partial span flap wing. A modification to an existing patched-grid algorithm was required to analyze the configuration as modeled. Comparisons with experimental data were very good, indicating the applicability of the patched-grid technique to analyses of these complex geometries.
Hidden Markov models of biological primary sequence information.
Baldi, P; Chauvin, Y; Hunkapiller, T; McClure, M A
1994-01-01
Hidden Markov model (HMM) techniques are used to model families of biological sequences. A smooth and convergent algorithm is introduced to iteratively adapt the transition and emission parameters of the models from the examples in a given family. The HMM approach is applied to three protein families: globins, immunoglobulins, and kinases. In all cases, the models derived capture the important statistical characteristics of the family and can be used for a number of tasks, including multiple alignments, motif detection, and classification. For K sequences of average length N, this approach yields an effective multiple-alignment algorithm which requires O(KN^2) operations, linear in the number of sequences. PMID:8302831
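The likelihood computation at the heart of such HMMs is the forward algorithm; a minimal discrete-alphabet version with per-step scaling is sketched below, with a toy two-state model whose parameters are purely illustrative.

```python
import numpy as np

def forward_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of an observation sequence under a discrete HMM,
    using the forward algorithm with per-step scaling to avoid underflow."""
    alpha = start * emit[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Toy 2-state, 4-symbol model standing in for e.g. match/insert states over
# a reduced residue alphabet; all parameters are illustrative only.
start = np.array([0.6, 0.4])
trans = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
emit = np.array([[0.5, 0.2, 0.2, 0.1],
                 [0.1, 0.1, 0.3, 0.5]])
print(forward_log_likelihood([0, 1, 3, 2, 0], start, trans, emit))
```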
The Reynolds-stress tensor in diffusion flames; An experimental and theoretical investigation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, F.; Janicka, J.
1990-07-01
The authors present measurements and predictions of Reynolds-stress components and mean velocities in a CH{sub 4}-air diffusion flame. A reference beam LDA technique is applied for measuring all Reynolds-stress components. A hologram with dichromated gelatine as recording medium generates strictly coherent reference beams. The theoretical part describes a Reynolds-stress model based on Favre-averaged quantities, paying special attention to modeling the pressure-shear correlation and the dissipation equation in flames. Finally, measurement/prediction comparisons are presented.
Laser power conversion system analysis, volume 1
NASA Technical Reports Server (NTRS)
Jones, W. S.; Morgan, L. L.; Forsyth, J. B.; Skratt, J. P.
1979-01-01
The orbit-to-orbit laser energy conversion system analysis established a mission model of satellites with various orbital parameters and average electrical power requirements ranging from 1 to 300 kW. The system analysis evaluated various conversion techniques, power system deployment parameters, power system electrical supplies and other critical subsystems relative to various combinations of the mission model. The analysis shows that the laser power system would not be competitive with current satellite power systems from weight, cost and development risk standpoints.
Comparison of conditional sampling and averaging techniques in a turbulent boundary layer
NASA Astrophysics Data System (ADS)
Subramanian, C. S.; Rajagopalan, S.; Antonia, R. A.; Chambers, A. J.
1982-10-01
A rake of cold wires was used in a slightly heated boundary layer to identify coherent temperature fronts. An X-wire/cold-wire arrangement was used simultaneously with the rake to provide measurements of the longitudinal and normal velocity fluctuations and temperature fluctuations. Conditional averages of these parameters and their products were obtained by application of conditional techniques (VITA, HOLE, BT, RA1, and RA3) based on the detection of temperature fronts using information obtained at only one point in space. It is found that none of the one-point detection techniques is in good quantitative agreement with the rake detection technique, the largest correspondence being 51%. Despite the relatively poor correspondence between the conditional techniques, these techniques, with the exception of HOLE, produce conditional averages that are in reasonable qualitative agreement with those deduced using the rake.
NASA Astrophysics Data System (ADS)
Gao, Peng
2018-06-01
This work concerns the averaging principle for a higher order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to the fast time. This model can be translated into multiscale stochastic partial differential equations. The stochastic averaging principle is a powerful tool for studying the qualitative analysis of stochastic dynamical systems with different time-scales. To be more precise, under suitable conditions, we prove that there is a limit process in which the fast varying process is averaged out, and the limit process, which takes the form of the higher order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast varying process. Finally, by using the Khasminskii technique we obtain the rate of strong convergence of the slow component towards the solution of the averaged equation, and as a consequence, the system can be reduced to a single higher order nonlinear Schrödinger equation with a modified coefficient.
Optical pathlengths in dental caries lesions
NASA Astrophysics Data System (ADS)
Mujat, Claudia; ten Bosch, Jaap J.; Dogariu, Aristide C.
2001-04-01
The average pathlength of light inside dental enamel and incipient lesions is measured and compared, in order to quantitatively confirm the prediction that incipient lesions have higher scattering coefficients than sound enamel. The technique used, called optical pathlength spectroscopy, provides experimental access to the pathlength distribution of light inside highly scattering samples. This is desirable for complex biological materials, where current theoretical models are very difficult to apply. To minimize the effects of surface reflections the average pathlength is measured in wet sound enamel and white spots. We obtain average pathlength values of 367 micrometers and 272 micrometers for sound enamel and white spots, respectively. We also investigate the differences between open and subsurface lesions, by measuring the change in the pathlength distribution of light as they go from dry to wet.
ERIC Educational Resources Information Center
Pridemore, William Alex; Trahan, Adam; Chamlin, Mitchell B.
2009-01-01
There is substantial evidence of detrimental psychological sequelae following disasters, including terrorist attacks. The effect of these events on extreme responses such as suicide, however, is unclear. We tested competing hypotheses about such effects by employing autoregressive integrated moving average techniques to model the impact of…
ARM Best Estimate Data (ARMBE) Products for Climate Science for a Sustainable Energy Future (CSSEF)
Riihimaki, Laura; Gaustad, Krista; McFarlane, Sally
2014-06-12
This data set was created for the Climate Science for a Sustainable Energy Future (CSSEF) model testbed project and is an extension of the hourly average ARMBE dataset to other extended facility sites and to include uncertainty estimates. Uncertainty estimates were needed in order to use uncertainty quantification (UQ) techniques with the data.
Is cepstrum averaging applicable to circularly polarized electric-field data?
NASA Astrophysics Data System (ADS)
Tunnell, T.
1990-04-01
In FY 1988 a cepstrum averaging technique was developed to eliminate the ground reflections from charged particle beam (CPB) electromagnetic pulse (EMP) data. The work was done for the Los Alamos National Laboratory Project DEWPOINT at SST-7. The technique averages the cepstra of horizontally and vertically polarized electric field data (i.e., linearly polarized electric field data). This cepstrum averaging technique was programmed into the FORTRAN codes CEP and CEPSIM. Steve Knox, the principal investigator for Project DEWPOINT, asked the authors to determine if the cepstrum averaging technique could be applied to circularly polarized electric field data. The answer is, Yes, but some modifications may be necessary. There are two aspects to this answer that we need to address, namely, the Yes and the modifications. First, regarding the Yes, the technique is applicable to elliptically polarized electric field data in general: circular polarization is a special case of elliptical polarization. Secondly, regarding the modifications, greater care may be required in computing the phase in the calculation of the complex logarithm. The calculation of the complex logarithm is the most critical step in cepstrum-based analysis. This memorandum documents these findings.
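The complex-logarithm step that the memorandum flags as critical can be sketched as follows: take the FFT, form the log magnitude plus i times the unwrapped phase, and inverse transform. The toy "echo" signal and all constants are invented; real CPB/EMP waveforms would of course differ.

```python
import numpy as np

def complex_cepstrum(x):
    """Complex cepstrum: FFT -> complex log (unwrapped phase) -> inverse FFT.
    Careful phase computation is the delicate step noted in the text."""
    X = np.fft.fft(x)
    log_mag = np.log(np.abs(X) + 1e-300)   # guard against log(0)
    phase = np.unwrap(np.angle(X))         # unwrap to get a continuous phase
    return np.real(np.fft.ifft(log_mag + 1j * phase))

# Toy pulse plus a delayed, attenuated "ground reflection" 40 samples later.
n = 256
direct = np.exp(-np.arange(n) / 8.0)
signal = direct + 0.5 * np.roll(direct, 40)
cep = complex_cepstrum(signal)
# Averaging the cepstra of two polarizations would follow the same recipe.
print("cepstrum value at the echo delay (sample 40):", round(float(cep[40]), 3))
```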
Variability in surface ECG morphology: signal or noise?
NASA Technical Reports Server (NTRS)
Smith, J. M.; Rosenbaum, D. S.; Cohen, R. J.
1988-01-01
Using data collected from canine models of acute myocardial ischemia, we investigated two issues of major relevance to electrocardiographic signal averaging: ECG epoch alignment, and the spectral characteristics of the beat-to-beat variability in ECG morphology. With initial digitization rates of 1 kHz, an iterative a posteriori matched filtering alignment scheme, and linear interpolation, we demonstrated that there is sufficient information in the body surface ECG to merit alignment to a precision of 0.1 ms. Applying this technique to align QRS complexes and atrial pacing artifacts independently, we demonstrated that the conduction delay from atrial stimulus to ventricular activation may be so variable as to preclude using atrial pacing as an alignment mechanism, and that this variability in conduction time is modulated at the frequency of respiration and at a much lower frequency (0.02-0.03 Hz). Using a multidimensional spectral technique, we investigated the beat-to-beat variability in ECG morphology, demonstrating that the frequency spectrum of ECG morphological variation reveals a readily discernible modulation at the frequency of respiration. In addition, this technique detects a subtle beat-to-beat alternation in surface ECG morphology which accompanies transient coronary artery occlusion. We conclude that physiologically important information may be stored in the variability in the surface electrocardiogram, and that this information is lost by conventional averaging techniques.
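A simplified version of the sub-sample alignment idea (interpolate to a finer grid, then shift to the lag of maximum correlation with a template) is sketched below; with 1 kHz data and ten-fold interpolation this yields roughly 0.1 ms resolution. The Gaussian "QRS" shapes and all parameters are illustrative.

```python
import numpy as np

def align_epoch(epoch, template, upsample=10):
    """Matched-filter style alignment on a linearly interpolated fine grid.
    Returns the aligned epoch and the estimated shift in coarse samples."""
    n = len(template)
    t_fine = np.linspace(0, n - 1, (n - 1) * upsample + 1)
    e = np.interp(t_fine, np.arange(n), epoch)
    tpl = np.interp(t_fine, np.arange(n), template)
    corr = np.correlate(e - e.mean(), tpl - tpl.mean(), mode="full")
    lag = corr.argmax() - (len(tpl) - 1)       # lag in fine samples
    return np.roll(e, -lag), lag / upsample    # shift in coarse samples (= ms at 1 kHz)

t = np.arange(200)
template = np.exp(-0.5 * ((t - 100) / 4.0) ** 2)     # toy QRS-like template
epoch = np.exp(-0.5 * ((t - 100.3) / 4.0) ** 2)      # same shape, shifted 0.3 ms
_, shift_ms = align_epoch(epoch, template)
print(f"estimated misalignment: {shift_ms:.2f} ms")
```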
Comparative assessment of bone pose estimation using Point Cluster Technique and OpenSim.
Lathrop, Rebecca L; Chaudhari, Ajit M W; Siston, Robert A
2011-11-01
Estimating the position of the bones from optical motion capture data is a challenge associated with human movement analysis. Bone pose estimation techniques such as the Point Cluster Technique (PCT) and simulations of movement through software packages such as OpenSim are used to minimize soft tissue artifact and estimate skeletal position; however, using different methods for analysis may produce differing kinematic results which could lead to differences in clinical interpretation such as a misclassification of normal or pathological gait. This study evaluated the differences present in knee joint kinematics as a result of calculating joint angles using various techniques. We calculated knee joint kinematics from experimental gait data using the standard PCT, the least squares approach in OpenSim applied to experimental marker data, and the least squares approach in OpenSim applied to the results of the PCT algorithm. Maximum and resultant RMS differences in knee angles were calculated between all techniques. We observed differences in flexion/extension, varus/valgus, and internal/external rotation angles between all approaches. The largest differences were between the PCT results and all results calculated using OpenSim. The RMS differences averaged nearly 5° for flexion/extension angles with maximum differences exceeding 15°. Average RMS differences were relatively small (< 1.08°) between results calculated within OpenSim, suggesting that the choice of marker weighting is not critical to the results of the least squares inverse kinematics calculations. The largest difference between techniques appeared to be a constant offset between the PCT and all OpenSim results, which may be due to differences in the definition of anatomical reference frames, scaling of musculoskeletal models, and/or placement of virtual markers within OpenSim. Different methods for data analysis can produce largely different kinematic results, which could lead to the misclassification of normal or pathological gait. Improved techniques to allow non-uniform scaling of generic models to more accurately reflect subject-specific bone geometries and anatomical reference frames may reduce differences between bone pose estimation techniques and allow for comparison across gait analysis platforms.
Is Memory Search Governed by Universal Principles or Idiosyncratic Strategies?
Healey, M. Karl; Kahana, Michael J.
2013-01-01
Laboratory paradigms have provided an empirical foundation for much of psychological science. Some have argued, however, that such paradigms are highly susceptible to idiosyncratic strategies and that rather than reflecting fundamental cognitive principles, many findings are artifacts of averaging across participants who employ different strategies. We develop a set of techniques to rigorously test the extent to which average data are distorted by such strategy differences and apply these techniques to free recall data from the Penn Electrophysiology of Encoding and Retrieval Study (PEERS). Recall initiation showed evidence of subgroups: the majority of participants initiate recall from the last item in the list, but one subgroup shows elevated initiation probabilities for items 2–4 back from the end of the list and another shows elevated probabilities for the beginning of the list. By contrast, serial position curves and temporal and semantic clustering functions were remarkably consistent, with almost every participant exhibiting a recognizable version of the average function, suggesting that these functions reflect fundamental principles of the memory system. The approach taken here can serve as a model for evaluating the extent to which other laboratory paradigms are influenced by individual differences in strategy use. PMID:23957279
In Vivo Measurement of Glenohumeral Joint Contact Patterns
NASA Astrophysics Data System (ADS)
Bey, Michael J.; Kline, Stephanie K.; Zauel, Roger; Kolowich, Patricia A.; Lock, Terrence R.
2009-12-01
The objectives of this study were to describe a technique for measuring in-vivo glenohumeral joint contact patterns during dynamic activities and to demonstrate application of this technique. The experimental technique calculated joint contact patterns by combining CT-based 3D bone models with joint motion data that were accurately measured from biplane x-ray images. Joint contact patterns were calculated for the repaired and contralateral shoulders of 20 patients who had undergone rotator cuff repair. Significant differences in joint contact patterns were detected due to abduction angle and shoulder condition (i.e., repaired versus contralateral). Abduction angle had a significant effect on the superior/inferior contact center position, with the average joint contact center of the repaired shoulder 12.1% higher on the glenoid than the contralateral shoulder. This technique provides clinically relevant information by calculating in-vivo joint contact patterns during dynamic conditions and overcomes many limitations associated with conventional techniques for quantifying joint mechanics.
THE ADAPTATION FOR GROUP CLASSROOM USE OF CLINICAL TECHNIQUES FOR TEACHING BRAIN-INJURED CHILDREN.
ERIC Educational Resources Information Center
NOVACK, HARRY S.
THIS STUDY SOUGHT TO DEVELOP A PUBLIC SCHOOL PROGRAM FOR BRAIN-INJURED CHILDREN OF AVERAGE OR LOW AVERAGE INTELLECTUAL POTENTIAL. THE OBJECTIVES WERE--(1) TO COLLECT CLINICAL TUTORING TECHNIQUES BEING USED, (2) TO CLASSIFY CLINICAL TUTORIAL METHODS IN A FRAMEWORK USEFUL FOR DEVELOPING TECHNIQUES FOR GROUP TEACHING, (3) TO ADOPT CLINICAL TUTORIAL…
Zlotnik, Alexander; Gallardo-Antolín, Ascensión; Cuchí Alfaro, Miguel; Pérez Pérez, María Carmen; Montero Martínez, Juan Manuel
2015-08-01
Although emergency department visit forecasting can be of use for nurse staff planning, previous research has focused on models that lacked sufficient resolution and realistic error metrics for these predictions to be applied in practice. Using data from a 1100-bed specialized care hospital with 553,000 patients assigned to its healthcare area, forecasts with different prediction horizons, from 2 to 24 weeks ahead, with an 8-hour granularity, using support vector regression, M5P, and stratified average time-series models were generated with an open-source software package. As overstaffing and understaffing errors have different implications, error metrics and potential personnel monetary savings were calculated with a custom validation scheme, which simulated subsequent generation of predictions during a 4-year period. Results were then compared with a generalized estimating equation regression. Support vector regression and M5P models were found to be superior to the stratified average model with a 95% confidence interval. Our findings suggest that medium and severe understaffing situations could be reduced in more than an order of magnitude and average yearly savings of up to €683,500 could be achieved if dynamic nursing staff allocation was performed with support vector regression instead of the static staffing levels currently in use.
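Because overstaffing and understaffing carry different costs, a symmetric error metric is not enough; the sketch below shows one way such an asymmetric cost could be scored against a flat "static staffing" baseline. The visit counts and cost weights are purely illustrative.

```python
import numpy as np

def staffing_cost(actual, predicted, cost_under=3.0, cost_over=1.0):
    """Asymmetric error: understaffing (forecast below true load) is penalised
    more heavily than overstaffing. The weights are illustrative only."""
    diff = np.asarray(predicted, float) - np.asarray(actual, float)
    return float(np.where(diff < 0, cost_under * -diff, cost_over * diff).sum())

actual = np.array([42, 55, 38, 61, 47])      # visits per 8-hour shift
forecast = np.array([45, 50, 40, 58, 47])    # e.g. an SVR or M5P forecast
baseline = np.full(5, actual.mean())         # static staffing level
print("forecast cost:", staffing_cost(actual, forecast))
print("baseline cost:", staffing_cost(actual, baseline))
```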
Analysis of the multigroup model for muon tomography based threat detection
NASA Astrophysics Data System (ADS)
Perry, J. O.; Bacon, J. D.; Borozdin, K. N.; Fabritius, J. M.; Morris, C. L.
2014-02-01
We compare different algorithms for detecting a 5 cm tungsten cube using cosmic ray muon technology. In each case, a simple tomographic technique was used for position reconstruction, but the scattering angles were used differently to obtain a density signal. Receiver operating characteristic curves were used to compare images made using average angle squared, median angle squared, average of the squared angle, and a multi-energy group fit of the angular distributions for scenes with and without a 5 cm tungsten cube. The receiver operating characteristic curves show that the multi-energy group treatment of the scattering angle distributions is the superior method for image reconstruction.
NASA Astrophysics Data System (ADS)
Thomas, Valerie Anne
This research models canopy-scale photosynthesis at the Groundhog River Flux Site through the integration of high-resolution airborne remote sensing data and micrometeorological measurements collected from a flux tower. Light detection and ranging (lidar) data are analysed to derive models of tree structure, including: canopy height, basal area, crown closure, and average aboveground biomass. Lidar and hyperspectral remote sensing data are used to model canopy chlorophyll (Chl) and carotenoid concentrations (known to be good indicators of photosynthesis). The integration of lidar and hyperspectral data is applied to derive spatially explicit models of the fraction of photosynthetically active radiation (fPAR) absorbed by the canopy as well as a species classification for the site. These products are integrated with flux tower meteorological measurements (i.e., air temperature and global solar radiation) collected on a continuous basis over 2004 to apply the C-Fix model of carbon exchange to the site. Results demonstrate that high resolution lidar and lidar-hyperspectral integration techniques perform well in the boreal mixedwood environment. Lidar models are well correlated with forest structure, despite the complexities introduced in the mixedwood case (e.g., r2=0.84, 0.89, 0.60, and 0.91, for mean dominant height, basal area, crown closure, and average aboveground biomass). Strong relationships are also shown for canopy scale chlorophyll/carotenoid concentration analysis using integrated lidar-hyperspectral techniques (e.g., r2=0.84, 0.84, and 0.82 for Chl(a), Chl(a+b), and Chl(b)). Examination of the spatially explicit models of fPAR reveal distinct spatial patterns which become increasingly apparent throughout the season due to the variation in species groupings (and canopy chlorophyll concentration) within the 1 km radius surrounding the flux tower. Comparison of results from the modified local-scale version of the C-Fix model to tower gross ecosystem productivity (GEP) demonstrate a good correlation to flux tower measured GEP (r2=0.70 for 10 day averages), with the largest deviations occurring in June-July. This research has direct benefits for forest inventory mapping and management practices; mapping of canopy physiology and biochemical constituents related to forest health; and scaling and direct comparison to large resolution satellite models to help bridge the gap between the local-scale measurements at flux towers and predictions derived from continental-scale carbon models.
Melt Flow Control in the Directional Solidification of Binary Alloys
NASA Technical Reports Server (NTRS)
Zabaras, Nicholas
2003-01-01
Our main project objectives are to develop computational techniques based on inverse problem theory that can be used to design directional solidification processes that lead to desired temperature gradient and growth conditions at the freezing front at various levels of gravity. It is known that control of these conditions plays a significant role in the selection of the form and scale of the obtained solidification microstructures. Emphasis is given on the control of the effects of various melt flow mechanisms on the local to the solidification front conditions. The thermal boundary conditions (furnace design) as well as the magnitude and direction of an externally applied magnetic field are the main design variables. We will highlight computational design models for sharp front solidification models and briefly discuss work in progress toward the development of design techniques for multi-phase volume-averaging based solidification models.
Fault zone structure determined through the analysis of earthquake arrival times
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michelini, A.
1991-10-01
This thesis develops and applies a technique for the simultaneous determination of P and S wave velocity models and hypocenters from a set of arrival times. The velocity models are parameterized in terms of cubic B-splines basis functions which permit the retrieval of smooth models that can be used directly for generation of synthetic seismograms using the ray method. In addition, this type of smoothing limits the rise of instabilities related to the poor resolving power of the data. V_P/V_S ratios calculated from P and S models generally display instabilities related to the different ray-coverages of compressional and shear waves. However, V_P/V_S ratios are important for correct identification of rock types, and this study introduces a new methodology based on adding some coupling (i.e., proportionality) between the P and S models, which stabilizes the V_P/V_S models around some average preset value determined from the data. Tests of the technique with synthetic data show that this additional coupling effectively regularizes the resulting models.
Sitepu, Monika S; Kaewkungwal, Jaranit; Luplerdlop, Nathanej; Soonthornworasiri, Ngamphol; Silawan, Tassanee; Poungsombat, Supawadee; Lawpoolsri, Saranath
2013-03-01
This study aimed to describe the temporal patterns of dengue transmission in Jakarta from 2001 to 2010, using data from the national surveillance system. The Box-Jenkins forecasting technique was used to develop a seasonal autoregressive integrated moving average (SARIMA) model for the study period, which was subsequently applied to forecast DHF incidence in 2011 in Jakarta Utara, Jakarta Pusat, Jakarta Barat, and the municipalities of Jakarta Province. Based on the forecasting model, dengue incidence in 2011 was predicted to increase relative to the previous year.
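A Box-Jenkins style SARIMA forecast of a monthly incidence series can be sketched with statsmodels as below; the synthetic series and the (p, d, q)(P, D, Q, 12) orders are placeholders, not the orders identified for Jakarta.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical monthly DHF incidence with a 12-month seasonal cycle.
rng = np.random.default_rng(1)
idx = pd.date_range("2001-01-01", periods=120, freq="MS")
season = 10 * np.sin(2 * np.pi * np.arange(len(idx)) / 12)
cases = pd.Series(30 + season + rng.normal(0, 3, len(idx)), index=idx)

model = SARIMAX(cases, order=(1, 0, 1), seasonal_order=(1, 1, 1, 12))
fit = model.fit(disp=False)
print(fit.forecast(steps=12).round(1))   # forecast the following year
```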
Progressive matrix cracking in off-axis plies of a general symmetric laminate
NASA Technical Reports Server (NTRS)
Thomas, David J.; Wetherhold, Robert C.
1993-01-01
A generalized shear-lag model is derived to determine the average through-the-thickness stress state present in a layer undergoing transverse matrix cracking, by extending the method of Lee and Daniels (1991) to a general symmetric multilayered system. The model is capable of considering cracking in layers of arbitrary orientation, states of general in-plane applied loading, and laminates with a general symmetric stacking sequence. The model is included in a computer program designed for probabilistic laminate analysis, and the results are compared to those determined with the ply drop-off technique.
Exploring L1 model space in search of conductivity bounds for the MT problem
NASA Astrophysics Data System (ADS)
Wheelock, B. D.; Parker, R. L.
2013-12-01
Geophysical inverse problems of the type encountered in electromagnetic techniques are highly non-unique. As a result, any single inverted model, though feasible, is at best inconclusive and at worst misleading. In this paper, we use modified inversion methods to establish bounds on electrical conductivity within a model of the earth. Our method consists of two steps, each making use of the 1-norm in model regularization. Both 1-norm minimization problems are framed without approximation as non-negative least-squares (NNLS) problems. First, we must identify a parsimonious set of regions within the model for which upper and lower bounds on average conductivity will be sought. This is accomplished by minimizing the 1-norm of spatial variation, which produces a model with a limited number of homogeneous regions; in fact, the number of homogeneous regions will never be greater than the number of data, regardless of the number of free parameters supplied. The second step establishes bounds for each of these regions with pairs of inversions. The new suite of inversions also uses a 1-norm penalty, but applied to the conductivity values themselves, rather than the spatial variation thereof. In the bounding step we use the 1-norm of our model parameters because it is proportional to average conductivity. For a lower bound on average conductivity, the 1-norm within a bounding region is minimized. For an upper bound on average conductivity, the 1-norm everywhere outside a bounding region is minimized. The latter minimization has the effect of concentrating conductance into the bounding region. Taken together, these bounds are a measure of the uncertainty in the associated region of our model. Starting with a blocky inverse solution is key in the selection of the bounding regions. Of course, there is a tradeoff between resolution and uncertainty: an increase in resolution (smaller bounding regions), results in greater uncertainty (wider bounds). Minimization of the 1-norm of spatial variation delivers the fewest possible regions defined by a mean conductivity, the quantity we wish to bound. Thus, these regions present a natural set for which the most narrow and discriminating bounds can be found. For illustration, we apply these techniques to synthetic magnetotelluric (MT) data sets resulting from one-dimensional (1D) earth models. In each case we find that with realistic data coverage, any single inverted model can often stray from the truth, while the computed bounds on an encompassing region contain both the inverted and the true conductivities, indicating that our measure of model uncertainty is robust. Such estimates of uncertainty for conductivity can then be translated to bounds on important petrological parameters such as mineralogy, porosity, saturation, and fluid type.
Improving consensus structure by eliminating averaging artifacts
KC, Dukka B
2009-01-01
Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with much fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which could also benefit from our approach. PMID:19267905
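The refinement step can be caricatured as a Metropolis Monte Carlo walk under a harmonic pseudo-energy that pulls each coordinate toward the averaged structure; the sketch below omits the clash and geometry terms of the actual method, and all coordinates and parameters are invented.

```python
import numpy as np

def refine_toward_average(start, target, k=1.0, step=0.05, n_steps=20000, seed=0):
    """Metropolis moves on single atoms, accepted according to the harmonic
    pseudo-energy E = k * sum ||x_i - target_i||^2 (no clash terms here)."""
    rng = np.random.default_rng(seed)
    x = start.copy()
    energy = k * np.sum((x - target) ** 2)
    for _ in range(n_steps):
        i = rng.integers(len(x))
        trial = x[i] + rng.normal(0.0, step, 3)
        d_e = k * (np.sum((trial - target[i]) ** 2) - np.sum((x[i] - target[i]) ** 2))
        if d_e <= 0 or rng.random() < np.exp(-d_e):
            x[i] = trial
            energy += d_e
    return x, energy

target = np.cumsum(np.full((50, 3), 1.2), axis=0)   # toy "averaged" C-alpha trace
start = target + np.random.default_rng(1).normal(0, 2.0, target.shape)
refined, final_e = refine_toward_average(start, target)
print("final pseudo-energy:", round(float(final_e), 3))
```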
NASA Technical Reports Server (NTRS)
Stutzman, Warren L.
1989-01-01
This paper reviews the effects of precipitation on earth-space communication links operating in the 10 to 35 GHz frequency range. Emphasis is on the quantitative prediction of rain attenuation and depolarization. Discussions center on the models developed at Virginia Tech. Comments on other models are included as well as literature references to key works. Also included is the system level modeling for dual polarized communication systems with techniques for calculating antenna and propagation medium effects. Simple models for the calculation of average annual attenuation and cross-polarization discrimination (XPD) are presented. Calculations of worst-month statistics are also presented.
Parameterisation of multi-scale continuum perfusion models from discrete vascular networks.
Hyde, Eoin R; Michler, Christian; Lee, Jack; Cookson, Andrew N; Chabiniok, Radek; Nordsletten, David A; Smith, Nicolas P
2013-05-01
Experimental data and advanced imaging techniques are increasingly enabling the extraction of detailed vascular anatomy from biological tissues. Incorporation of anatomical data within perfusion models is non-trivial, due to heterogeneous vessel density and disparate radii scales. Furthermore, previous idealised networks have assumed a spatially repeating motif or periodic canonical cell, thereby allowing for a flow solution via homogenisation. However, such periodicity is not observed throughout anatomical networks. In this study, we apply various spatial averaging methods to discrete vascular geometries in order to parameterise a continuum model of perfusion. Specifically, a multi-compartment Darcy model was used to provide vascular scale separation for the fluid flow. Permeability tensor fields were derived from both synthetic and anatomically realistic networks using (1) porosity-scaled isotropic, (2) Huyghe and Van Campen, and (3) projected-PCA methods. The Darcy pressure fields were compared via a root-mean-square error metric to an averaged Poiseuille pressure solution over the same domain. The method of Huyghe and Van Campen performed better than the other two methods in all simulations, even for relatively coarse networks. Furthermore, inter-compartment volumetric flux fields, determined using the spatially averaged discrete flux per unit pressure difference, were shown to be accurate across a range of pressure boundary conditions. This work justifies the application of continuum flow models to characterise perfusion resulting from flow in an underlying vascular network.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghoos, K., E-mail: kristel.ghoos@kuleuven.be; Dekeyser, W.; Samaey, G.
2016-10-01
The plasma and neutral transport in the plasma edge of a nuclear fusion reactor is usually simulated using coupled finite volume (FV)/Monte Carlo (MC) codes. However, under conditions of future reactors like ITER and DEMO, convergence issues become apparent. This paper examines the convergence behaviour and the numerical error contributions with a simplified FV/MC model for three coupling techniques: Correlated Sampling, Random Noise and Robbins-Monro. Also, practical procedures to estimate the errors in complex codes are proposed. Moreover, first results with more complex models show that an order of magnitude speedup can be achieved without any loss in accuracy by making use of averaging in the Random Noise coupling technique.
One way Doppler extractor. Volume 1: Vernier technique
NASA Technical Reports Server (NTRS)
Blasco, R. W.; Klein, S.; Nossen, E. J.; Starner, E. R.; Yanosov, J. A.
1974-01-01
A feasibility analysis, trade-offs, and implementation for a One Way Doppler Extraction system are discussed. A Doppler error analysis shows that quantization error is a primary source of Doppler measurement error. Several competing extraction techniques are compared and a Vernier technique is developed which obtains high Doppler resolution with low speed logic. Parameter trade-offs and sensitivities for the Vernier technique are analyzed, leading to a hardware design configuration. A detailed design, operation, and performance evaluation of the resulting breadboard model is presented which verifies the theoretical performance predictions. Performance tests have verified that the breadboard is capable of extracting Doppler, on an S-band signal, to an accuracy of less than 0.02 Hertz for a one second averaging period. This corresponds to a range rate error of no more than 3 millimeters per second.
Defraeye, Thijs; Blocken, Bert; Koninckx, Erwin; Hespel, Peter; Carmeliet, Jan
2010-08-26
This study aims at assessing the accuracy of computational fluid dynamics (CFD) for applications in sports aerodynamics, for example for drag predictions of swimmers, cyclists or skiers, by evaluating the applied numerical modelling techniques by means of detailed validation experiments. In this study, a wind-tunnel experiment on a scale model of a cyclist (scale 1:2) is presented. Apart from three-component forces and moments, also high-resolution surface pressure measurements on the scale model's surface, i.e. at 115 locations, are performed to provide detailed information on the flow field. These data are used to compare the performance of different turbulence-modelling techniques, such as steady Reynolds-averaged Navier-Stokes (RANS), with several k-epsilon and k-omega turbulence models, and unsteady large-eddy simulation (LES), and also boundary-layer modelling techniques, namely wall functions and low-Reynolds number modelling (LRNM). The commercial CFD code Fluent 6.3 is used for the simulations. The RANS shear-stress transport (SST) k-omega model shows the best overall performance, followed by the more computationally expensive LES. Furthermore, LRNM is clearly preferred over wall functions to model the boundary layer. This study showed that there are more accurate alternatives for evaluating flow around bluff bodies with CFD than the standard k-epsilon model combined with wall functions, which is often used in CFD studies in sports.
Modelling vehicle colour and pattern for multiple deployment environments
NASA Astrophysics Data System (ADS)
Liggins, Eric; Moorhead, Ian R.; Pearce, Daniel A.; Baker, Christopher J.; Serle, William P.
2016-10-01
Military land platforms are often deployed around the world in very different climate zones. Procuring vehicles in a large range of camouflage patterns and colour schemes is expensive and may limit the environments in which they can be effectively used. As such this paper reports a modelling approach for use in the optimisation and selection of a colour palette, to support operations in diverse environments and terrains. Three different techniques were considered based upon the differences between vehicle and background in L*a*b* colour space, to predict the optimum (initially single) colour to reduce the vehicle signature in the visible band. Calibrated digital imagery was used as backgrounds and a number of scenes were sampled. The three approaches used, and reported here are a) background averaging behind the vehicle b) background averaging in the area surrounding the vehicle and c) use of the spatial extension to CIE L*a*b*; S-CIELAB (Zhang and Wandell, Society for Information Display Symposium Technical Digest, vol. 27, pp. 731-734, 1996). Results are compared with natural scene colour statistics. The models used showed good agreement in the colour predictions for individual and multiple terrains or climate zones. A further development of the technique examines the effect of different patterns and colour combinations on the S-CIELAB spatial colour difference metric, when scaled for appropriate viewing ranges.
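Technique (a), background averaging, can be illustrated with a short sketch: the mean L*a*b* of the pixels behind the vehicle defines the target colour, and candidate paint colours are ranked by their CIE76 colour difference to that mean. The image, mask and palette below are synthetic stand-ins for the calibrated scene imagery described above, and the spatial filtering of S-CIELAB used for technique (c) is not reproduced.

```python
# Background averaging in L*a*b*: rank candidate single colours by Delta E (CIE76)
# to the mean colour of the pixels the vehicle would occupy.
import numpy as np

rng = np.random.default_rng(2)
lab_image = rng.normal(loc=[50.0, 5.0, 20.0], scale=[8, 3, 6], size=(100, 100, 3))
vehicle_mask = np.zeros((100, 100), dtype=bool)
vehicle_mask[40:60, 30:70] = True                        # pixels the vehicle would occupy

background_mean = lab_image[vehicle_mask].mean(axis=0)   # mean L*, a*, b* behind the vehicle

candidates = {                                           # hypothetical paint palette (L*, a*, b*)
    "olive": np.array([48.0, -6.0, 30.0]),
    "sand":  np.array([70.0,  5.0, 25.0]),
    "grey":  np.array([55.0,  0.0,  0.0]),
}

def delta_e_cie76(lab1, lab2):
    return float(np.linalg.norm(lab1 - lab2))

ranking = sorted(candidates, key=lambda k: delta_e_cie76(candidates[k], background_mean))
print("background mean L*a*b*:", background_mean.round(1))
print("best single colour:", ranking[0])
```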
NASA Astrophysics Data System (ADS)
Goswami, M.; O'Connor, K. M.; Shamseldin, A. Y.
The "Galway Real-Time River Flow Forecasting System" (GFFS) is a software package developed at the Department of Engineering Hydrology, of the National University of Ireland, Galway, Ireland. It is based on a selection of lumped black-box and conceptual rainfall-runoff models, all developed in Galway, consisting primarily of both the non-parametric (NP) and parametric (P) forms of two black-box-type rainfall-runoff models, namely, the Simple Linear Model (SLM-NP and SLM-P) and the seasonally-based Linear Perturbation Model (LPM-NP and LPM-P), together with the non-parametric wetness-index-based Linearly Varying Gain Factor Model (LVGFM), the black-box Artificial Neural Network (ANN) Model, and the conceptual Soil Moisture Accounting and Routing (SMAR) Model. Comprised of the above suite of models, the system enables the user to calibrate each model individually, initially without updating, and it is capable also of producing combined (i.e. consensus) forecasts using the Simple Average Method (SAM), the Weighted Average Method (WAM), or the Artificial Neural Network Method (NNM). The updating of each model output is achieved using one of four different techniques, namely, simple Auto-Regressive (AR) updating, Linear Transfer Function (LTF) updating, Artificial Neural Network updating (NNU), and updating by the Non-linear Auto-Regressive Exogenous-input method (NARXM). The models exhibit a considerable range of variation in degree of complexity of structure, with corresponding degrees of complication in objective function evaluation. Operating in continuous river-flow simulation and updating modes, these models and techniques have been applied to two Irish catchments, namely, the Fergus and the Brosna. A number of performance evaluation criteria have been used to comparatively assess the model discharge forecast efficiency.
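The two simplest consensus schemes named above, SAM and WAM, reduce to a plain mean and a weighted mean of the individual model forecasts. The sketch below illustrates both on synthetic flow series; the least-squares, non-negative, normalised weights are an illustrative choice rather than the weighting scheme actually used in GFFS.

```python
# Simple Average Method (SAM) and Weighted Average Method (WAM) consensus forecasts.
import numpy as np

rng = np.random.default_rng(3)
obs = 50 + 10 * np.sin(np.linspace(0, 6, 200)) + rng.normal(scale=2, size=200)
forecasts = np.stack([obs + rng.normal(scale=s, size=obs.size) for s in (3, 5, 8)])  # 3 toy models

sam = forecasts.mean(axis=0)                               # Simple Average Method

# Weighted Average Method: least-squares weights, clipped non-negative and normalised
w, *_ = np.linalg.lstsq(forecasts.T, obs, rcond=None)
w = np.clip(w, 0, None); w /= w.sum()
wam = w @ forecasts

for name, f in (("SAM", sam), ("WAM", wam)):
    print(name, "RMSE:", np.sqrt(np.mean((f - obs) ** 2)).round(3))
```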
NASA Astrophysics Data System (ADS)
Cackett, Edward; Troyer, Jon; Peille, Philippe; Barret, Didier
2018-01-01
Kilohertz quasi-periodic oscillations or kHz QPOs are intensity variations that occur in the X-ray band observed in neutron star low-mass X-ray binary (LMXB) systems. In such systems, matter is transferred from a secondary low-mass star to a neutron star via the process of accretion. kHz QPOs occur on the timescale of the inner accretion flow and may carry signatures of the physics of strong gravity (c² ~ GM/R) and possibly clues to constraining the neutron star equation of state (EOS). Both the timing behavior of kHz QPOs and the time-averaged spectra of these systems have been studied extensively. No model derived from these techniques has been able to illuminate the origin of kHz QPOs. Spectral-timing is an analysis technique that can be used to derive information about the nature of physical processes occurring within the accretion flow on the timescale of the kHz QPO. To date, kHz QPOs of four neutron star LMXB systems have been studied with spectral-timing techniques. We present a comprehensive study of spectral-timing products of kHz QPOs from systems where data are available in the RXTE archive to demonstrate the promise of this technique to gain insights regarding the origin of kHz QPOs. Using data averaged over the entire RXTE archive, we show correlated time-lags as a function of QPO frequency and energy, as well as energy-dependent covariance spectra for the various LMXB systems where spectral-timing analysis is possible. We find similar trends in all average spectral-timing products for the objects studied. This suggests a common origin of kHz QPOs.
Mansouri, Majdi; Nounou, Mohamed N; Nounou, Hazem N
2017-09-01
In our previous work, we have demonstrated the effectiveness of the linear multiscale principal component analysis (PCA)-based moving window (MW)-generalized likelihood ratio test (GLRT) technique over the classical PCA and multiscale principal component analysis (MSPCA)-based GLRT methods. The developed fault detection algorithm provided optimal properties by maximizing the detection probability for a particular false alarm rate (FAR) for different window sizes. However, most real systems are nonlinear, which makes the linear PCA method unable to tackle the non-linearity to a great extent. Thus, in this paper, first, we apply a nonlinear PCA to obtain an accurate principal component of a set of data and handle a wide range of nonlinearities using the kernel principal component analysis (KPCA) model. The KPCA is among the most popular nonlinear statistical methods. Second, we extend the MW-GLRT technique to one that applies exponential weights to the residuals in the moving window (instead of equal weighting), which may further improve fault detection performance by reducing the FAR through an exponentially weighted moving average (EWMA). The developed detection method, which is called EWMA-GLRT, provides improved properties, such as smaller missed detection rates and FARs and a smaller average run length. The idea behind the developed EWMA-GLRT is to compute a new GLRT statistic that integrates current and previous data information in a decreasing exponential fashion, giving more weight to the more recent data. This provides a more accurate estimation of the GLRT statistic and a stronger memory that enables better decision making with respect to fault detection. Therefore, in this paper, a KPCA-based EWMA-GLRT method is developed and utilized in practice to improve fault detection in biological phenomena modeled by S-systems and to enhance monitoring of the process mean. The idea behind the KPCA-based EWMA-GLRT fault detection algorithm is to combine the advantages brought forward by the proposed EWMA-GLRT fault detection chart with the KPCA model. Thus, it is used to enhance fault detection of the Cad System in the E. coli model by monitoring some of the key variables involved in this model, such as enzymes, transport proteins, regulatory proteins, lysine, and cadaverine. The results demonstrate the effectiveness of the proposed KPCA-based EWMA-GLRT method over the Q, GLRT, EWMA, Shewhart, and moving window-GLRT methods. The detection performance is assessed and evaluated in terms of FAR, missed detection rates, and average run length (ARL1) values.
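A much-simplified univariate sketch of the EWMA-GLRT idea follows: model residuals are smoothed with an exponentially weighted moving average so that recent samples carry more weight, and a GLRT-style statistic compares the smoothed residual with its variance under the no-fault hypothesis. The smoothing factor, threshold and injected fault are illustrative choices; the multivariate KPCA-based version is not reproduced.

```python
# Univariate EWMA-GLRT-style monitoring of residuals with an injected mean-shift fault.
import numpy as np

rng = np.random.default_rng(4)
sigma0, lambda_ = 1.0, 0.2
residuals = rng.normal(scale=sigma0, size=500)
residuals[300:] += 1.5                                   # injected mean-shift fault at t = 300

z, stats = 0.0, []
var_ewma = sigma0**2 * lambda_ / (2.0 - lambda_)         # asymptotic variance of the EWMA under H0
for r in residuals:
    z = lambda_ * r + (1.0 - lambda_) * z                # EWMA update: recent residuals weigh more
    stats.append(z**2 / var_ewma)                        # GLRT-like statistic for a mean shift

threshold = 10.0                                         # chosen to fix the false-alarm rate
alarms = np.flatnonzero(np.array(stats) > threshold)
print("first alarm at sample:", alarms[0] if alarms.size else None)
```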
Sakr, Sherif; Elshawi, Radwa; Ahmed, Amjad M; Qureshi, Waqas T; Brawner, Clinton A; Keteyian, Steven J; Blaha, Michael J; Al-Mallah, Mouaz H
2017-12-19
Prior studies have demonstrated that cardiorespiratory fitness (CRF) is a strong marker of cardiovascular health. Machine learning (ML) can enhance the prediction of outcomes through classification techniques that classify the data into predetermined categories. The aim of this study is to present an evaluation and comparison of how machine learning techniques can be applied to medical records of cardiorespiratory fitness and how the various techniques differ in their capabilities for predicting medical outcomes (e.g., mortality). We used data from 34,212 patients free of known coronary artery disease or heart failure who underwent clinician-referred exercise treadmill stress testing at Henry Ford Health Systems between 1991 and 2009 and had a complete 10-year follow-up. Seven machine learning classification techniques were evaluated: Decision Tree (DT), Support Vector Machine (SVM), Artificial Neural Networks (ANN), Naïve Bayesian Classifier (BC), Bayesian Network (BN), K-Nearest Neighbor (KNN) and Random Forest (RF). In order to handle the imbalanced dataset, the Synthetic Minority Over-Sampling Technique (SMOTE) was used. Two sets of experiments were conducted, with and without the SMOTE sampling technique. On average over the different evaluation metrics, the SVM classifier showed the lowest performance, while other models such as BN, BC and DT performed better. The RF classifier showed the best performance (AUC = 0.97) among all models trained using SMOTE sampling. The results show that the various ML techniques can vary significantly in performance across the different evaluation metrics. It is also not necessarily the case that a more complex ML model achieves higher prediction accuracy. The prediction performance of all models trained with SMOTE is much better than that of models trained without SMOTE. The study shows the potential of machine learning methods for predicting all-cause mortality using cardiorespiratory fitness data.
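A minimal sketch of the evaluation pipeline is shown below using scikit-learn and imbalanced-learn on synthetic data (the registry data are not public): SMOTE balances the training set and a Random Forest is scored by AUC, mirroring the best-performing configuration reported above.

```python
# SMOTE oversampling of the training set followed by Random Forest classification.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)    # balance the minority class
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_res, y_res)

auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print("Random Forest AUC with SMOTE:", round(auc, 3))
```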
Center for Advanced Propulsion Systems
1993-02-01
[Fragmentary report excerpts; only partially recoverable: figure-list entries ("... breakup model for two chamber pressures"; "7.5.4 Exciplex images for a single main injection. Images are ensemble averaged for 8 individual images. Times..."), a note that data were obtained using an electronic fuel injector (UCORS), that Exciplex fluorescence and photographic imaging were used to study liquid and vapor ..., and that in a later paper (Bower and Foster, 1993), in the same combustion bomb, the authors applied Exciplex fluorescence techniques to visualize fuel liquid and fuel ...]
ERIC Educational Resources Information Center
Veas, Alejandro; Gilar, Raquel; Miñano, Pablo; Castejón, Juan Luis
2017-01-01
The present study, based on the construct comparability approach, performs a comparative analysis of general points average for seven courses, using exploratory factor analysis (EFA) and the Partial Credit model (PCM) with a sample of 1398 student subjects (M = 12.5, SD = 0.67) from 8 schools in the province of Alicante (Spain). EFA confirmed a…
Detection of stress factors in crop and weed species using hyperspectral remote sensing reflectance
NASA Astrophysics Data System (ADS)
Henry, William Brien
The primary objective of this work was to determine if stress factors such as moisture stress or herbicide injury stress limit the ability to distinguish between weeds and crops using remotely sensed data. Additional objectives included using hyperspectral reflectance data to measure moisture content within a species, and to measure crop injury in response to drift rates of non-selective herbicides. Moisture stress did not reduce the ability to discriminate between species. Regardless of analysis technique, the trend was that as moisture stress increased, so too did the ability to distinguish between species. Signature amplitudes (SA) of the top 5 bands, discrete wavelet transforms (DWT), and multiple indices were promising analysis techniques. Discriminant models created from one year's data set and validated on additional data sets provided, on average, approximately 80% accurate classification among weeds and crop. This suggests that these models are relatively robust and could potentially be used across environmental conditions in field scenarios. Distinguishing between leaves grown at high-moisture stress and no-stress was met with limited success, primarily because there was substantial variation among samples within the treatments. Leaf water potential (LWP) was measured, and these were classified into three categories using indices. Classification accuracies were as high as 68%. The 10 bands most highly correlated to LWP were selected; however, there were no obvious trends or patterns in these top 10 bands with respect to time, species or moisture level, suggesting that LWP is an elusive parameter to quantify spectrally. In order to address herbicide injury stress and its impact on species discrimination, discriminant models were created from combinations of multiple indices. The model created from the second experimental run's data set and validated on the first experimental run's data provided an average of 97% correct classification of soybean and an overall average classification accuracy of 65% for all species. This suggests that these models are relatively robust and could potentially be used across a wide range of herbicide applications in field scenarios. From the pooled data set, a single discriminant model was created with multiple indices that discriminated soybean from weeds 88%, on average, regardless of herbicide, rate or species. Several analysis techniques including multiple indices, signature amplitude with spectral bands as features, and wavelet analysis were employed to distinguish between herbicide-treated and nontreated plants. Classification accuracy using signature amplitude (SA) analysis of paraquat injury on soybean was better than 75% for both 1/2 and 1/8X rates at 1, 4, and 7 DAA. Classification accuracy of paraquat injury on corn was better than 72% for the 1/2X rate at 1, 4, and 7 DAA. These data suggest that hyperspectral reflectance may be used to distinguish between healthy plants and injured plants to which herbicides have been applied; however, the classification accuracies remained at 75% or higher only when the higher rates of herbicide were applied. (Abstract shortened by UMI.)
Uncertainty estimates of a GRACE inversion modelling technique over Greenland using a simulation
NASA Astrophysics Data System (ADS)
Bonin, Jennifer; Chambers, Don
2013-07-01
The low spatial resolution of GRACE causes leakage, where signals in one location spread out into nearby regions. Because of this leakage, using simple techniques such as basin averages may result in an incorrect estimate of the true mass change in a region. A fairly simple least squares inversion technique can be used to more specifically localize mass changes into a pre-determined set of basins of uniform internal mass distribution. However, the accuracy of these higher resolution basin mass amplitudes has not been determined, nor is it known how the distribution of the chosen basins affects the results. We use a simple 'truth' model over Greenland as an example case, to estimate the uncertainties of this inversion method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We determine that an appropriate level of smoothing (300-400 km) and process noise (0.30 cm² of water) gets the best results. The trends of the Greenland internal basins and Iceland can be reasonably estimated with this method, with average systematic errors of 3.5 cm yr⁻¹ per basin. The largest mass losses found from GRACE RL04 occur in the coastal northwest (-19.9 and -33.0 cm yr⁻¹) and southeast (-24.2 and -27.9 cm yr⁻¹), with small mass gains (+1.4 to +7.7 cm yr⁻¹) found across the northern interior. Acceleration of mass change is measurable at the 95 per cent confidence level in four northwestern basins, but not elsewhere in Greenland. Due to an insufficiently detailed distribution of basins across internal Canada, the trend estimates of Baffin and Ellesmere Islands are expected to be incorrect due to systematic errors caused by the inversion technique.
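The basin-inversion idea can be illustrated with a one-dimensional toy analogue (a sketch under stated assumptions, not the GRACE processing chain): each basin contributes a box-car mass function, the observation is that field blurred by a smoothing kernel to mimic leakage, and basin amplitudes are recovered by damped least squares, with the damping playing the role of the process-noise setting described above.

```python
# 1-D toy basin inversion: smoothed (leaked) basin footprints form the design matrix,
# and basin amplitudes are recovered by damped least squares.
import numpy as np

n_grid, basin_edges = 200, [0, 50, 100, 150, 200]
x = np.arange(n_grid)

def smooth(field, width=15):
    kernel = np.exp(-0.5 * (np.arange(-3 * width, 3 * width + 1) / width) ** 2)
    return np.convolve(field, kernel / kernel.sum(), mode="same")

# Design matrix: column j = smoothed (leaked) footprint of basin j with unit amplitude
basins = [(x >= a) & (x < b) for a, b in zip(basin_edges[:-1], basin_edges[1:])]
A = np.column_stack([smooth(b.astype(float)) for b in basins])

true_amp = np.array([0.0, -2.0, 0.5, -1.0])                      # true mass change per basin
obs = A @ true_amp + np.random.default_rng(5).normal(scale=0.05, size=n_grid)

damping = 0.1                                                    # process-noise-like regularisation
est = np.linalg.solve(A.T @ A + damping * np.eye(A.shape[1]), A.T @ obs)
print("true :", true_amp)
print("est  :", est.round(2))
```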
DSPI technique for nanometer vibration mode measurement
NASA Astrophysics Data System (ADS)
Yue, Kaiduan; Jia, Shuhai; Tan, Yushan
2000-05-01
A time-average DSPI method for nanometer vibration mode measurement is presented in this paper. The phase continuous scan technique is combined with the Bessel fringe-shifting technique to quantitatively analyze the vibration mode measured by time-average DSPI in the measurement system. Through the phase continuous scan, the background and speckle terms are completely eliminated, which improves the fringe quality and enhances the signal-to-noise ratio of the interferogram. There is no need to calibrate the optical phase-shifter exactly in this method. The anti-disturbance capability of this method is higher than that of the phase-stepping technique, so it is robust and easy to use. In the vibration measurement system, the speckle averaging technique is also used, so high-quality measurement results are obtained.
Basaruddin, T.
2016-01-01
One essential task in information extraction from the medical corpus is drug name recognition. Compared with text sources from other domains, medical text mining poses more challenges, for example, more unstructured text, the rapid addition of new terms, a wide range of name variations for the same drug, the lack of labeled dataset sources and external knowledge, and multiple token representations for a single drug name. Although many approaches have been proposed to address the task, some problems remain, with poor F-score performance (less than 0.75). This paper presents a new treatment in data representation techniques to overcome some of those challenges. We propose three data representation techniques based on the characteristics of word distribution and word similarities resulting from word embedding training. The first technique is evaluated with the standard NN model, that is, MLP. The second technique involves two deep network classifiers, that is, DBN and SAE. The third technique represents the sentence as a sequence that is evaluated with a recurrent NN model, that is, LSTM. In extracting the drug name entities, the third technique gives the best F-score performance compared to the state of the art, with its average F-score being 0.8645. PMID:27843447
Airborne laser scanning for forest health status assessment and radiative transfer modelling
NASA Astrophysics Data System (ADS)
Novotny, Jan; Zemek, Frantisek; Pikl, Miroslav; Janoutova, Ruzena
2013-04-01
Structural parameters of forest stands/ecosystems are an important complementary source of information to spectral signatures obtained from airborne imaging spectroscopy when quantitative assessment of forest stands is in focus, such as estimation of forest biomass, biochemical properties (e.g. chlorophyll/water content), etc. The parameterization of radiative transfer (RT) models used in the latter case requires the three-dimensional spatial distribution of green foliage and woody biomass. Airborne LiDAR data acquired over forest sites carry this kind of 3D information. The main objective of the study was to compare the results from several approaches to interpolation of the digital elevation model (DEM) and digital surface model (DSM). We worked with airborne LiDAR data of different densities (TopEye Mk II 1,064 nm instrument, 1-5 points/m2) acquired over the Norway spruce forests situated in the Beskydy Mountains, the Czech Republic. Three different interpolation algorithms with increasing complexity were tested: i/Nearest neighbour approach implemented in the BCAL software package (Idaho Univ.); ii/Averaging and linear interpolation techniques used in the OPALS software (Vienna Univ. of Technology); iii/Active contour technique implemented in the TreeVis software (Univ. of Freiburg). We defined two spatial resolutions for the resulting coupled raster DEM and DSM outputs: 0.4 m and 1 m, calculated by each algorithm. The grids correspond to the spatial resolutions of the hyperspectral imagery data for which the DEMs were used in a/geometrical correction and b/building complex tree models for radiative transfer modelling. We applied two types of analyses when comparing results from the different interpolations/raster resolutions: 1/comparison of the calculated DEMs or DSMs between themselves; 2/comparison with field data: DEM with measurements from reference GPS, DSM with field tree allometric measurements, where tree height was calculated as DSM-DEM. The results of the analyses show that: 1/averaging techniques tend to underestimate the tree height and the generated surface does not follow the first LiDAR echoes for both 1 m and 0.4 m pixel sizes; 2/we did not find any significant difference between tree heights calculated by the nearest neighbour algorithm and the active contour technique for the 1 m pixel output, but the difference increased with finer resolution (0.4 m); 3/the accuracy of the DEMs calculated by the tested algorithms is similar.
Macrocell path loss prediction using artificial intelligence techniques
NASA Astrophysics Data System (ADS)
Usman, Abraham U.; Okereke, Okpo U.; Omizegba, Elijah E.
2014-04-01
The prediction of propagation loss is a practical non-linear function approximation problem that linear regression or auto-regression models are limited in their ability to handle. However, some computational intelligence techniques such as artificial neural networks (ANNs) and adaptive neuro-fuzzy inference systems (ANFISs) have been shown to have great ability to handle non-linear function approximation and prediction problems. In this study, the multiple layer perceptron neural network (MLP-NN), radial basis function neural network (RBF-NN) and an ANFIS network were trained using actual signal strength measurements taken in certain suburban areas of the Bauchi metropolis, Nigeria. The trained networks were then used to predict propagation losses at the stated areas under differing conditions. The predictions were compared with the prediction accuracy of the popular Hata model. It was observed that the ANFIS model gave a better fit in all cases, having higher R2 values in each case, and on average is more robust than the MLP and RBF models as it generalises better to different data.
NASA Technical Reports Server (NTRS)
Krishnamurthy, Thiagarajan
2010-01-01
Equivalent plate analysis is often used to replace the computationally expensive finite element analysis in initial design stages or in conceptual design of aircraft wing structures. The equivalent plate model can also be used to design a wind tunnel model to match the stiffness characteristics of the wing box of a full-scale aircraft wing model while satisfying strength-based requirements. An equivalent plate analysis technique is presented to predict the static and dynamic response of an aircraft wing with or without damage. First, a geometric scale factor and a dynamic pressure scale factor are defined to relate the stiffness, load and deformation of the equivalent plate to the aircraft wing. A procedure using an optimization technique is presented to create scaled equivalent plate models from the full-scale aircraft wing using geometric and dynamic pressure scale factors. The scaled models are constructed by matching the stiffness of the scaled equivalent plate with the scaled aircraft wing stiffness. It is demonstrated that the scaled equivalent plate model can be used to predict the deformation of the aircraft wing accurately. Once the full equivalent plate geometry is obtained, any other scaled equivalent plate geometry can be obtained using the geometric scale factor. Next, an average frequency scale factor is defined as the average ratio of the frequencies of the aircraft wing to the frequencies of the full-scale equivalent plate. The average frequency scale factor combined with the geometric scale factor is used to predict the frequency response of the aircraft wing from the scaled equivalent plate analysis. A procedure is outlined to estimate the frequency response and the flutter speed of an aircraft wing from the equivalent plate analysis using the frequency scale factor and geometric scale factor. The equivalent plate analysis is demonstrated using an aircraft wing without damage and another with damage. Both of the problems show that the scaled equivalent plate analysis can be successfully used to predict the frequencies and flutter speed of a typical aircraft wing.
NASA Technical Reports Server (NTRS)
Mielke, Amy F.; Seasholtz, Richard G.; Elam, Kristie A.; Panda, Jayanta
2004-01-01
A molecular Rayleigh scattering based flow diagnostic is developed to measure time average velocity, density, temperature, and turbulence intensity in a 25.4-mm diameter nozzle free jet facility. The spectrum of the Rayleigh scattered light is analyzed using a Fabry-Perot interferometer operated in the static imaging mode. The resulting fringe pattern containing spectral information of the scattered light is recorded using a low noise CCD camera. Nonlinear least squares analysis of the fringe pattern using a kinetic theory model of the Rayleigh scattered light provides estimates of density, velocity, temperature, and turbulence intensity of the gas flow. Resulting flow parameter estimates are presented for an axial scan of subsonic flow at Mach 0.95 for comparison with previously acquired pitot tube data, and axial scans of supersonic flow in an underexpanded screeching jet. The issues related to obtaining accurate turbulence intensity measurements using this technique are discussed.
Heat conduction in periodic laminates with probabilistic distribution of material properties
NASA Astrophysics Data System (ADS)
Ostrowski, Piotr; Jędrysiak, Jarosław
2017-04-01
This contribution deals with a problem of heat conduction in a two-phase laminate made of micro-laminas periodically distributed along one direction. In general, Fourier's law describing the heat conduction in the considered composite has highly oscillating and discontinuous coefficients. Therefore, the tolerance averaging technique (cf. Woźniak et al. in Thermomechanics of microheterogeneous solids and structures. Monografie - Politechnika Łódzka, Wydawnictwo Politechniki Łódzkiej, Łódź, 2008) is applied. Based on this technique, the averaged differential equations for a tolerance-asymptotic model are derived and solved analytically for given initial-boundary conditions. The second part of this contribution investigates the effect of the material property ratio ω of the two components on the total temperature field θ, under the assumption that the conductivities of the micro-laminas are not necessarily uniquely described. Numerical experiments (Monte Carlo simulation) are executed under the assumption that ω is a random variable with a fixed probability distribution. At the end, based on the obtained results, a crucial hypothesis is formulated.
Creation of the BMA ensemble for SST using a parallel processing technique
NASA Astrophysics Data System (ADS)
Kim, Kwangjin; Lee, Yang Won
2013-10-01
Although they serve the same purpose, satellite products differ in value because of their inescapable uncertainties. The products have also been generated over long periods, and they are numerous and varied, so efforts to reduce the uncertainty and to handle very large data volumes are necessary. In this paper, we create an ensemble Sea Surface Temperature (SST) using MODIS Aqua, MODIS Terra and COMS (Communication Ocean and Meteorological Satellite). We use Bayesian Model Averaging (BMA) as the ensemble method. The principle of BMA is to synthesize the conditional probability density function (PDF) using posterior probabilities as weights. The posterior probabilities are estimated using the EM algorithm, and the BMA PDF is obtained as a weighted average. As a result, the ensemble SST showed the lowest RMSE and MAE, which demonstrates the applicability of BMA for satellite data ensembles. As future work, parallel processing techniques using the Hadoop framework will be adopted for more efficient computation of very large satellite datasets.
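A hedged sketch of Gaussian BMA in the spirit of Raftery et al. (2005) is given below: member forecasts are treated as Gaussian PDFs, EM estimates the posterior weights and a common variance against training observations, and the ensemble value is the weighted mean. The three synthetic forecast series stand in for the MODIS Aqua, MODIS Terra and COMS SSTs; the actual implementation is not reproduced.

```python
# Gaussian BMA: EM for member weights and common variance, then a weighted-mean ensemble.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
truth = 20 + 2 * np.sin(np.linspace(0, 4, 300))
forecasts = np.stack([truth + rng.normal(b, s, truth.size)       # members with bias/noise
                      for b, s in ((0.2, 0.4), (-0.1, 0.6), (0.5, 1.0))])
K, N = forecasts.shape

w, sigma2 = np.full(K, 1.0 / K), 1.0
for _ in range(200):                                             # EM iterations
    like = norm.pdf(truth, loc=forecasts, scale=np.sqrt(sigma2)) # K x N member likelihoods
    z = w[:, None] * like
    z /= z.sum(axis=0, keepdims=True)                            # E-step: responsibilities
    w = z.mean(axis=1)                                           # M-step: posterior weights
    sigma2 = np.sum(z * (truth - forecasts) ** 2) / N            # M-step: common variance

bma_mean = w @ forecasts                                         # BMA ensemble = weighted average
print("weights:", w.round(3), " RMSE:", np.sqrt(np.mean((bma_mean - truth) ** 2)).round(3))
```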
NASA Astrophysics Data System (ADS)
Shih, D.; Yeh, G.
2009-12-01
This paper applies two numerical approximations, the particle tracking technique and the Galerkin finite element method, to solve the diffusive wave equation in both one-dimensional and two-dimensional flow simulations. The finite element method is one of the most common approaches to numerical problems. It can obtain accurate solutions, but calculation times may be rather extensive. The particle tracking technique, using either single-velocity or average-velocity tracks to efficiently perform advective transport, can use larger time-step sizes than the finite element method and thus significantly save computational time. Comparisons of the alternative approximations are examined in this poster. We adapt the model WASH123D to examine the work. WASH123D is an integrated multimedia, multi-process, physics-based computational model suitable for various spatial-temporal scales, first developed by Yeh et al. in 1998. The model has evolved in design capability and flexibility, and has been used for model calibrations and validations over the course of many years. In order to deliver a local hydrological model for Taiwan, the Taiwan Typhoon and Flood Research Institute (TTFRI) is working with Prof. Yeh to develop the next version of WASH123D. So, the work of our preliminary cooperation is also sketched in this poster.
Numerical aerodynamic simulation facility. [for flows about three-dimensional configurations
NASA Technical Reports Server (NTRS)
Bailey, F. R.; Hathaway, A. W.
1978-01-01
Critical to the advancement of computational aerodynamics capability is the ability to simulate flows about three-dimensional configurations that contain both compressible and viscous effects, including turbulence and flow separation at high Reynolds numbers. Analyses were conducted of two solution techniques for solving the Reynolds averaged Navier-Stokes equations describing the mean motion of a turbulent flow with certain terms involving the transport of turbulent momentum and energy modeled by auxiliary equations. The first solution technique is an implicit approximate factorization finite-difference scheme applied to three-dimensional flows that avoids the restrictive stability conditions when small grid spacing is used. The approximate factorization reduces the solution process to a sequence of three one-dimensional problems with easily inverted matrices. The second technique is a hybrid explicit/implicit finite-difference scheme which is also factored and applied to three-dimensional flows. Both methods are applicable to problems with highly distorted grids and a variety of boundary conditions and turbulence models.
Titus, Jitto; Viennois, Emilie; Merlin, Didier; Perera, A. G. Unil
2016-01-01
This article describes a rapid, simple and cost-effective technique that could lead to a screening method for colitis without the need for biopsies or in vivo measurements. This screening technique includes the testing of serum using Attenuated Total Reflectance Fourier Transform Infrared (ATR-FTIR) spectroscopy for the colitis-induced increased presence of mannose. Chronic (Interleukin 10 knockout) and acute (Dextran Sodium Sulphate-induced) models for colitis are tested using the ATR-FTIR technique. Arthritis (Collagen Antibody Induced Arthritis) and metabolic syndrome (Toll like receptor 5 knockout) models are also tested as controls. The marker identified as mannose uniquely screens and distinguishes the colitic from the non-colitic samples and the controls. The reference or the baseline spectrum could be the pooled and averaged spectra of non-colitic samples or the subject's previous sample spectrum. This shows the potential of having individualized route maps of disease status, leading to personalized diagnosis and drug management. PMID:27094092
Computational technique for stepwise quantitative assessment of equation correctness
NASA Astrophysics Data System (ADS)
Othman, Nuru'l Izzah; Bakar, Zainab Abu
2017-04-01
Many of the computer-aided mathematics assessment systems that are available today possess the capability to implement stepwise correctness checking of a working scheme for solving equations. The computational technique for assessing the correctness of each response in the scheme mainly involves checking the mathematical equivalence and providing qualitative feedback. This paper presents a technique, known as the Stepwise Correctness Checking and Scoring (SCCS) technique, that checks the correctness of each equation in terms of structural equivalence and provides quantitative feedback. The technique, which is based on the Multiset framework, adapts certain techniques from textual information retrieval involving tokenization, document modelling and similarity evaluation. The performance of the SCCS technique was tested using worked solutions for solving linear algebraic equations in one variable. 350 working schemes comprising 1385 responses were collected using a marking engine prototype developed based on the technique. The results show that both the automated analytical scores and the automated overall scores generated by the marking engine exhibit high percent agreement, high correlation and a high degree of agreement with manual scores, with small average absolute and mixed errors.
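The multiset idea can be illustrated with a short sketch (the paper's exact tokenizer and similarity measure are not reproduced): each response line is tokenised into a multiset, and a generalised Jaccard similarity against the corresponding reference step yields a quantitative per-step score.

```python
# Per-step scoring of a worked solution using a multiset (generalised Jaccard) similarity.
import re
from collections import Counter

def tokenize(equation: str) -> Counter:
    # numbers, variables and operators become tokens; whitespace is ignored
    return Counter(re.findall(r"\d+|[a-zA-Z]+|[+\-*/=()]", equation))

def multiset_similarity(a: Counter, b: Counter) -> float:
    inter = sum((a & b).values())            # multiset intersection
    union = sum((a | b).values())            # multiset union
    return inter / union if union else 1.0

reference_scheme = ["2x + 3 = 11", "2x = 8", "x = 4"]
student_scheme   = ["2x + 3 = 11", "2x = 14", "x = 7"]    # hypothetical responses

for step, (ref, got) in enumerate(zip(reference_scheme, student_scheme), start=1):
    score = multiset_similarity(tokenize(ref), tokenize(got))
    print(f"step {step}: score = {score:.2f}")
```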
Passive Super-Low Frequency electromagnetic prospecting technique
NASA Astrophysics Data System (ADS)
Wang, Nan; Zhao, Shanshan; Hui, Jian; Qin, Qiming
2017-03-01
The Super-Low Frequency (SLF) electromagnetic prospecting technique, adopted as a non-imaging remote sensing tool for depth sounding, is systematically proposed for subsurface geological survey. In this paper, as a first step, we propose and theoretically illustrate natural-source magnetic amplitudes as SLF responses. In order to directly calculate multi-dimensional theoretical SLF responses, modeling algorithms were developed and evaluated using the finite difference method. The theoretical results of three-dimensional (3-D) models show that the average normalized SLF magnetic amplitude responses were numerically stable and appropriate for practical interpretation. To explore the depth resolution, three-layer models were configured. The modeling results prove that the SLF technique is more sensitive to conductive objective layers than to highly resistive ones, with the SLF responses of conductive objective layers clearly showing rising amplitudes in the low frequency range. Afterwards, we propose an improved Frequency-Depth transformation based on Bostick inversion to realize the depth sounding by empirically adjusting two parameters. The SLF technique has already been successfully applied in geothermal exploration and coalbed methane (CBM) reservoir interpretation, which demonstrates that the proposed methodology is effective in revealing low-resistivity distributions. Furthermore, it significantly contributes to reservoir identification with electromagnetic radiation anomaly extraction. Meanwhile, the SLF interpretation results are in accordance with the dynamic production status of CBM reservoirs, which means the technique could provide an economical, convenient and promising method for exploring and monitoring subsurface geo-objects.
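For orientation, the sketch below applies the classical Bostick frequency-depth transform that the improved transformation builds on; the two empirical adjustment parameters of the SLF variant, and its use of magnetic amplitudes rather than apparent resistivities, are not reproduced, and the resistivity curve is synthetic.

```python
# Classical Bostick transform: map apparent resistivity vs period to resistivity vs depth.
import numpy as np

MU0 = 4e-7 * np.pi                                      # vacuum permeability (H/m)
periods = np.logspace(-1, 3, 40)                        # s
rho_a = 100.0 * (1 + 0.5 * np.tanh(np.log10(periods)))  # synthetic apparent resistivity (ohm·m)

depth = np.sqrt(rho_a * periods / (2.0 * np.pi * MU0))  # Bostick depth (m)

m = np.gradient(np.log(rho_a), np.log(periods))         # slope d ln(rho_a) / d ln(T)
rho_bostick = rho_a * (1.0 + m) / (1.0 - m)             # Bostick resistivity at that depth

for d, r in list(zip(depth, rho_bostick))[::10]:
    print(f"depth ~ {d:9.1f} m   resistivity ~ {r:7.1f} ohm·m")
```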
A simplified technique for delivering total body irradiation (TBI) with improved dose homogeneity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao Rui; Bernard, Damian; Turian, Julius
2012-04-15
Purpose: Total body irradiation (TBI) with megavoltage photon beams has been accepted as an important component of management for a number of hematologic malignancies, generally as part of bone marrow conditioning regimens. The purpose of this paper is to present and discuss the authors' TBI technique, which both simplifies the treatment process and improves the treatment quality. Methods: An AP/PA TBI treatment technique to produce uniform dose distributions using sequential collimator reductions during each fraction was implemented, and a sample calculation worksheet is presented. Using this methodology, the dosimetric characteristics of both 6 and 18 MV photon beams, including lung dose under cerrobend blocks, were investigated. A method of estimating midplane lung doses based on measured entrance and exit doses was proposed, and the estimated results were compared with measurements. Results: Whole body midplane dose uniformity of ±10% was achieved with no more than two collimator-based beam modulations. The proposed model predicted midplane lung doses 5% to 10% higher than the measured doses for 6 and 18 MV beams. The estimated total midplane doses were within ±5% of the prescribed midplane dose on average, except for the lungs, where the doses were 6% to 10% lower than the prescribed dose on average. Conclusions: The proposed TBI technique can achieve dose uniformity within ±10%. This technique is easy to implement and does not require complicated dosimetry and/or compensators.
Ecological footprint model using the support vector machine technique.
Ma, Haibo; Chang, Wenjuan; Cui, Guangbai
2012-01-01
The per capita ecological footprint (EF) is one of the most widely recognized measures of environmental sustainability. It aims to quantify the Earth's biological resources required to support human activity. In this paper, we summarize relevant previous literature and present five factors that influence per capita EF. These factors are: national gross domestic product (GDP), urbanization (independent of economic development), distribution of income (measured by the Gini coefficient), export dependence (measured by the percentage of exports to total GDP), and service intensity (measured by the percentage of services to total GDP). A new ecological footprint model based on a support vector machine (SVM), a machine-learning method based on the structural risk minimization principle from statistical learning theory, was developed to calculate the per capita EF of 24 nations using data from 123 nations. The calculation accuracy was measured by average absolute error and average relative error, which were 0.004883 and 0.351078%, respectively. Our results demonstrate that the EF model based on SVM has good calculation performance.
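A minimal sketch of the modelling setup is shown below with synthetic data (the paper's national dataset and exact SVM configuration are not reproduced): per capita EF is regressed on the five factors with a support-vector regressor, and accuracy is reported as average absolute and relative error.

```python
# SVM regression of per capita EF on five socio-economic factors (all data synthetic).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(7)
n = 123
X = np.column_stack([
    rng.lognormal(9, 1, n),        # GDP per capita (hypothetical units)
    rng.uniform(0.2, 0.95, n),     # urbanization
    rng.uniform(0.25, 0.6, n),     # Gini coefficient
    rng.uniform(0.1, 0.8, n),      # export dependence
    rng.uniform(0.3, 0.8, n),      # service intensity
])
ef = 0.5 + 2e-5 * X[:, 0] + 1.5 * X[:, 1] - X[:, 2] + rng.normal(0, 0.1, n)  # toy target

X_tr, X_te, y_tr, y_te = train_test_split(X, ef, test_size=24, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01)).fit(X_tr, y_tr)

pred = model.predict(X_te)
print("average absolute error:", np.mean(np.abs(pred - y_te)).round(4))
print("average relative error (%):", (100 * np.mean(np.abs(pred - y_te) / y_te)).round(4))
```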
Solar corona electron density distribution
NASA Astrophysics Data System (ADS)
Esposito, P. B.; Edenhofer, P.; Lueneburg, E.
1980-07-01
The paper discusses three and one-half months of single-frequency time delay data which were acquired from the Helios 2 spacecraft around the time of its solar occultation. The excess time delay due to the integrated effect of free electrons along the signal's ray path could be separated and modeled following the determination of the spacecraft trajectory. An average solar corona and equatorial electron density profile during solar minimum were deduced from the time delay measurements acquired within 5-60 solar radii of the sun. As a point of reference, at 10 solar radii from the sun the average electron density was 4500 el/cu cm. However, an asymmetry was found in the electron density as the ray path moved from the west to east solar limb. This may be related to the fact that during entry into occultation the heliographic latitude of the ray path was about 6 deg, while during exit it was 7 deg. The Helios density model is compared with similar models deduced from different experimental techniques.
Ensemble averaging and stacking of ARIMA and GSTAR model for rainfall forecasting
NASA Astrophysics Data System (ADS)
Anggraeni, D.; Kurnia, I. F.; Hadi, A. F.
2018-04-01
Unpredictable rainfall changes can affect human activities such as agriculture, aviation and shipping, which depend on weather forecasts. Therefore, we need forecasting tools with high accuracy in predicting future rainfall. This research focuses on local forecasting of rainfall at Jember from 2005 until 2016, using 77 rainfall stations. Rainfall at a station is related not only to its own previous occurrences but also to those at other stations; this is called the spatial effect. The aim of this research is to apply the GSTAR model to determine whether there are spatial correlations between stations. The GSTAR model is an expansion of the space-time model that combines time-related effects, the effects of other locations (stations) in the time series, and the location itself. The GSTAR model is also compared to the ARIMA model, which completely ignores the independent variables. The forecast values of the ARIMA and GSTAR models are then combined using ensemble forecasting techniques. The averaging and stacking methods of ensemble forecasting provide the best model, with higher accuracy and a smaller RMSE (Root Mean Square Error) value. Finally, with the best model we can offer better local rainfall forecasts for Jember in the future.
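The two combination schemes can be sketched as follows: "averaging" takes the mean of the base-model forecasts, while "stacking" learns combination weights with a meta-model fitted on a training split. The two synthetic base series below stand in for the ARIMA and GSTAR outputs, which are not re-estimated here.

```python
# Ensemble averaging and stacking of two base forecast series, evaluated by RMSE.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(8)
rain = 100 + 30 * np.sin(np.linspace(0, 24, 144)) + rng.normal(0, 8, 144)   # toy monthly rainfall
f_arima = rain + rng.normal(2, 10, rain.size)      # stand-in for ARIMA forecasts
f_gstar = rain + rng.normal(-1, 7, rain.size)      # stand-in for GSTAR forecasts
base = np.column_stack([f_arima, f_gstar])

split = 120                                        # fit the meta-model on the first 120 months
ens_avg = base.mean(axis=1)                                            # ensemble averaging
meta = LinearRegression().fit(base[:split], rain[:split])              # ensemble stacking
ens_stack = meta.predict(base)

def rmse(pred):                                    # evaluate on the hold-out months only
    return np.sqrt(np.mean((pred[split:] - rain[split:]) ** 2))

print("RMSE averaging:", rmse(ens_avg).round(2), " RMSE stacking:", rmse(ens_stack).round(2))
```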
Radar studies related to the earth resources program. [remote sensing programs
NASA Technical Reports Server (NTRS)
Holtzman, J.
1972-01-01
The radar systems research discussed is directed toward achieving successful application of radar to remote sensing problems in such areas as geology, hydrology, agriculture, geography, forestry, and oceanography. Topics discussed include imaging radar and evaluation of its modification, study of digital processing for synthetic aperture system, digital simulation of synthetic aperture system, averaging techniques studies, ultrasonic modeling of panchromatic system, panchromatic radar/radar spectrometer development, measuring octave-bandwidth response of selected targets, scatterometer system analysis, and a model Fresnel-zone processor for synthetic aperture imagery.
Six-axis orthodontic force and moment sensing system for dentist technique training.
Midorikawa, Yoshiyuki; Takemura, Hiroshi; Mizoguchi, Hiroshi; Soga, Kohei; Kamimura, Masao; Suga, Kazuhiro; Wei-Jen Lai; Kanno, Zuisei; Uo, Motohiro
2016-08-01
The purpose of this study is to develop a sensing system device that measures three-axis orthodontic forces and three-axis orthodontic moments for dentist training. The developed sensing system is composed of six-axis force sensors, action sticks, sliders, and tooth models. The developed system also simulates various types of tooth row shape patterns in orthodontic operations, and measures 14 × 6-axis orthodontic forces and moments from the tooth models simultaneously. The average force and moment errors per loaded axis were 2.06% and 2.00%, respectively.
A comparison of computer-assisted and manual wound size measurement.
Thawer, Habiba A; Houghton, Pamela E; Woodbury, M Gail; Keast, David; Campbell, Karen
2002-10-01
Accurate and precise wound measurements are a critical component of every wound assessment. To examine the reliability and validity of a new computerized technique for measuring human and animal wounds, chronic human wounds (N = 45) and surgical animal wounds (N = 38) were assessed using manual and computerized techniques. Using intraclass correlation coefficients, intrarater and interrater reliability of surface area measurements obtained using the computerized technique were compared to those obtained using acetate tracings and planimetry. A single measurement of surface area using either technique produced excellent intrarater and interrater reliability for both human and animal wounds, but the computerized technique was more precise than the manual technique for measuring the surface area of animal wounds. For both types of wounds and measurement techniques, intrarater and interrater reliability improved when the average of three repeated measurements was obtained. The precision of each technique with human wounds and the precision of the manual technique with animal wounds also improved when three repeated measurement results were averaged. Concurrent validity between the two techniques was excellent for human wounds but poor for the smaller animal wounds, regardless of whether single or the average of three repeated surface area measurements was used. The computerized technique permits reliable and valid assessment of the surface area of both human and animal wounds.
[Applying the clustering technique for characterising maintenance outsourcing].
Cruz, Antonio M; Usaquén-Perilla, Sandra P; Vanegas-Pabón, Nidia N; Lopera, Carolina
2010-06-01
Using clustering techniques for characterising companies providing health institutions with maintenance services. The study analysed seven pilot areas' equipment inventory (264 medical devices). Clustering techniques were applied using 26 variables. Response time (RT), operation duration (OD), availability and turnaround time (TAT) were amongst the most significant ones. Average biomedical equipment obsolescence value was 0.78. Four service provider clusters were identified: clusters 1 and 3 had better performance, lower TAT, RT and DR values (56 % of the providers coded O, L, C, B, I, S, H, F and G, had 1 to 4 day TAT values:
Nelson, Joshua D; McIff, Terence E; Moodie, Patrick G; Iverson, Jamey L; Horton, Greg A
2010-03-01
Internal fixation of the os calcis is often complicated by prolonged soft tissue management and posterior facet disruption. An ideal calcaneal construct would include minimal hardware prominence, sturdy posterior facet fixation and nominal soft tissue disruption. The purpose of this study was to develop such a construct and provide a biomechanical analysis comparing our technique to a standard internal fixation technique. Twenty fresh-frozen cadaver calcanei were used to create a reproducible Sanders type-IIB calcaneal fracture pattern. One calcaneus of each pair was randomly selected to be fixed using our compressive headless screw technique. The contralateral matched calcaneus was fixed with a nonlocking calcaneal plate in a traditional fashion. Each calcaneus was cyclically loaded at a frequency of 1 Hz for 4000 cycles using an increasing force from 250 N to 1000 N. An Optotrak motion capturing system was used to detect relative motion of the three fracture fragments at eight different points along the fracture lines. Horizontal separation and vertical displacement at the fracture lines was recorded, as well as relative rotation at the primary fracture line. When the data were averaged, there was more horizontal displacement at the primary fracture line of the plate and screw construct compared to the headless screw construct. The headless screw construct also had less vertical displacement at the primary fracture line at every load. On average those fractures fixed with the headless screw technique had less rotation than those fixed with the side plate technique. A new headless screw technique for calcaneus fracture fixation was shown to provide stability as good as, or better than, a standard side plating technique under the axial loading conditions of our model. Although further testing is needed, the stability of the proposed technique is similar to that typically provided by intramedullary fixation. This fixation technique provides a biomechanically stable construct with the potential for a minimally invasive approach and improved post-operative soft tissue healing.
Kamimura, Emi; Tanaka, Shinpei; Takaba, Masayuki; Tachi, Keita; Baba, Kazuyoshi
2017-01-01
Purpose The aim of this study was to evaluate and compare the inter-operator reproducibility of three-dimensional (3D) images of teeth captured by a digital impression technique to a conventional impression technique in vivo. Materials and methods Twelve participants with complete natural dentition were included in this study. A digital impression of the mandibular molars of these participants was made by two operators with different levels of clinical experience, 3 or 16 years, using an intra-oral scanner (Lava COS, 3M ESPE). A silicone impression also was made by the same operators using the double mix impression technique (Imprint3, 3M ESPE). Stereolithography (STL) data were directly exported from the Lava COS system, while STL data of a plaster model made from silicone impression were captured by a three-dimensional (3D) laboratory scanner (D810, 3shape). The STL datasets recorded by two different operators were compared using 3D evaluation software and superimposed using the best-fit-algorithm method (least-squares method, PolyWorks, InnovMetric Software) for each impression technique. Inter-operator reproducibility as evaluated by average discrepancies of corresponding 3D data was compared between the two techniques (Wilcoxon signed-rank test). Results The visual inspection of superimposed datasets revealed that discrepancies between repeated digital impression were smaller than observed with silicone impression. Confirmation was forthcoming from statistical analysis revealing significantly smaller average inter-operator reproducibility using a digital impression technique (0.014± 0.02 mm) than when using a conventional impression technique (0.023 ± 0.01 mm). Conclusion The results of this in vivo study suggest that inter-operator reproducibility with a digital impression technique may be better than that of a conventional impression technique and is independent of the clinical experience of the operator. PMID:28636642
Falat, Lukas; Marcek, Dusan; Durisova, Maria
2016-01-01
This paper deals with the application of quantitative soft computing prediction models in the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine the ability to forecast exchange rate values for the horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model with autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimization technique for adapting parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with the K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can be helpful in reducing the risk of making a bad decision in the decision-making process.
Decadal climate predictions improved by ocean ensemble dispersion filtering
NASA Astrophysics Data System (ADS)
Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.
2017-06-01
Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, decadal climate prediction falls in between these two time scales. In recent years, more precise initialization techniques of coupled Earth system models and increased ensemble sizes have improved decadal predictions. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called the ensemble dispersion filter, yields more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean.
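A minimal sketch (not the authors' code) of the ensemble dispersion filtering idea described above: at fixed intervals, each ensemble member's ocean state is relaxed toward the ensemble mean. The relaxation factor `alpha`, the toy state arrays, and the drift model are assumptions for illustration only.

```python
import numpy as np

def dispersion_filter(states, alpha=0.5):
    """Pull every ensemble member toward the ensemble mean.

    states : array of shape (n_members, ...) holding each member's ocean state
    alpha  : 0 keeps members unchanged, 1 collapses them onto the mean
    """
    ens_mean = states.mean(axis=0)
    return states + alpha * (ens_mean - states)

# Toy example: 5 members, 3 grid points, filter applied at "seasonal" intervals
rng = np.random.default_rng(0)
states = rng.normal(size=(5, 3))
for season in range(4):
    states = states + rng.normal(scale=0.1, size=states.shape)  # free-running drift
    states = dispersion_filter(states, alpha=0.5)                # filtering step
print(states.std(axis=0))  # member spread is reduced relative to no filtering
```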
Speciation and isotopic exchangeability of nickel in soil solution.
Nolan, Annette L; Ma, Yibing; Lombi, Enzo; McLaughlin, Mike J
2009-01-01
Knowledge of trace metal speciation in soil pore waters is important in addressing metal bioavailability and risk assessment of contaminated soils. In this study, free Ni(2+) activities were determined in pore waters of long-term Ni-contaminated soils using a Donnan dialysis membrane technique. The pore water free Ni(2+) concentration as a percentage of total soluble Ni ranged from 21 to 80% (average 53%), and the average amount of Ni bound to dissolved organic matter estimated by Windermere Humic Aqueous Model VI was < or = 17%. These data indicate that complexed forms of Ni can constitute a significant fraction of total Ni in solution. Windermere Humic Aqueous Model VI provided reasonable estimates of free Ni(2+) fractions in comparison to the measured fractions (R(2) = 0.83 with a slope of 1.0). Also, the isotopically exchangeable pools (E value) of soil Ni were measured by an isotope dilution technique using water extraction, with and without resin purification, and 0.1 mol L(-1) CaCl(2) extraction, and the isotopic exchangeability of Ni species in soil water extracts was investigated. The concentrations of isotopically non-exchangeable Ni in water extracts were <9% of total water soluble Ni concentrations for all soils. The resin E values expressed as a percentage of the total Ni concentrations in soil showed that the labile Ni pool ranged from 0.9 to 32.4% (average 12.4%) of total soil Ni. Therefore the labile Ni pool in these well-equilibrated contaminated soils appears to be relatively small in relation to total Ni concentrations.
Surface transport processes in charged porous media
Gabitto, Jorge; Tsouris, Costas
2017-03-03
Surface transport processes are important in chemistry, colloidal sciences, engineering, biology, and geophysics. Natural or externally produced charges on surfaces create electrical double layers (EDLs) at the solid-liquid interface. The existence of the EDLs produces several complex processes including bulk and surface transport of ions. In this work, a model is presented to simulate bulk and transport processes in homogeneous porous media comprising big pores. It is based on a theory for capacitive charging by ideally polarizable porous electrodes without Faradaic reactions or specific adsorption of ions. A volume averaging technique is used to derive the averaged transport equations in the limit of thin electrical double layers. Description of the EDL between the electrolyte solution and the charged wall is accomplished using the Gouy-Chapman-Stern (GCS) model. The surface transport terms enter into the average equations due to the use of boundary conditions for diffuse interfaces. Two extra surface transport terms appear in the closed average equations. One is a surface diffusion term equivalent to the transport process in non-charged porous media. The second surface transport term is a migration term unique to charged porous media. The effective bulk and transport parameters for isotropic porous media are calculated solving the corresponding closure problems.
Basic Features of Global Circulation in the Mesopause Lower Thermosphere Region
NASA Technical Reports Server (NTRS)
Portnyagin, Y. I.
1984-01-01
D1 and D2 techniques have been used and are being used for observations at stations located in the high, middle, and low latitudes of both hemispheres. The systematic wind velocity measurements made with these techniques make it possible to specify and refine earlier mesopause-lower thermosphere circulation models. With this in view, an effort was made to obtain global long-term average height-latitude sections of the wind field at 70 to 110 km using the analysis of long-period D1 and D2 observations. Data from 26 meteor radar and 6 ionospheric stations were taken for analysis.
RANS Simulation of the Separated Flow over a Bump with Active Control
NASA Technical Reports Server (NTRS)
Iaccarino, Gianluca; Marongiu, Claudio; Catalano, Pietro; Amato, Marcello
2003-01-01
The objective of this paper is to investigate the accuracy of Reynolds-Averaged Navier-Stokes (RANS) techniques in predicting the effect of steady and unsteady flow control devices. This is part of a larger effort in applying numerical simulation tools to investigate the performance of synthetic jets in high Reynolds number turbulent flows. RANS techniques have been successful in predicting isolated synthetic jets, as reported by Kral et al. Nevertheless, due to the complex and inherently unsteady nature of the interaction between the synthetic jet and the external boundary layer flow, it is not clear whether RANS models can represent the turbulence statistics correctly.
NASA Technical Reports Server (NTRS)
Wing, L. D.
1979-01-01
Simplified analytical techniques of sounding rocket programs are suggested as a means of bringing the cost of thermal analysis of the Get Away Special (GAS) payloads within acceptable bounds. Particular attention is given to two methods adapted from sounding rocket technology - a method in which the container and payload are assumed to be divided in half vertically by a thermal plane of symmetry, and a method which considers the container and its payload to be an analogous one-dimensional unit having the real or correct container top surface area for radiative heat transfer and a fictitious mass and geometry which model the average thermal effects.
Using Movies to Analyse Gene Circuit Dynamics in Single Cells
Locke, James CW; Elowitz, Michael B
2010-01-01
Many bacterial systems rely on dynamic genetic circuits to control critical processes. A major goal of systems biology is to understand these behaviours in terms of individual genes and their interactions. However, traditional techniques based on population averages wash out critical dynamics that are either unsynchronized between cells or driven by fluctuations, or ‘noise,’ in cellular components. Recently, the combination of time-lapse microscopy, quantitative image analysis, and fluorescent protein reporters has enabled direct observation of multiple cellular components over time in individual cells. In conjunction with mathematical modelling, these techniques are now providing powerful insights into genetic circuit behaviour in diverse microbial systems. PMID:19369953
Cylinder-averaged histories of nitrogen oxide in a DI diesel with simulated turbocharging
NASA Astrophysics Data System (ADS)
Donahue, Ronald J.; Borman, Gary L.; Bower, Glenn R.
1994-10-01
An experimental study was conducted using the dumping technique (total cylinder sampling) to produce cylinder mass-averaged nitric oxide histories. Data were taken using a four-stroke diesel research engine employing a quiescent chamber, high pressure direct injection fuel system, and simulated turbocharging. Two fuels were used to determine fuel cetane number effects. Two loads were run, one at an equivalence ratio of 0.5 and the other at a ratio of 0.3. The engine speed was held constant at 1500 rpm. Under the turbocharged and retarded timing conditions of this study, nitric oxide was produced up to the point of about 85% mass burned. Two different models were used to simulate the engine run conditions: the phenomenological Hiroyasu spray-combustion model, and the three-dimensional, U.W.-ERO modified KIVA-2 computational fluid dynamic code. Both of the models predicted the correct nitric oxide trend. Although the modified KIVA-2 combustion model using Zeldovich kinetics correctly predicted the shapes of the nitric oxide histories, it did not predict the exhaust concentrations without arbitrary adjustment based on experimental values.
NASA Astrophysics Data System (ADS)
Soltanzadeh, I.; Azadi, M.; Vakili, G. A.
2011-07-01
Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited area models (WRF, MM5 and HRM), with WRF used with five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS), and for HRM the initial and boundary conditions come from the analysis of the Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using the BMA technique for 120 days, using a 40-day training sample of forecasts and the corresponding verification data. The calibrated probabilistic forecasts were assessed using rank histograms and attribute diagrams. Results showed that application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean forecast as a deterministic forecast, it was found that the deterministic-style BMA forecasts usually performed better than the best member's deterministic forecast.
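As an illustration of the BMA post-processing step described above, the calibrated forecast can be written as a weighted mixture of Gaussian kernels centred on the member forecasts. This is only a sketch: the member values, weights, and spread below are made up rather than estimated from a 40-day training sample.

```python
import numpy as np

def bma_predictive_pdf(x, member_forecasts, weights, sigma):
    """BMA predictive density: weighted mixture of normals centred on the members."""
    x = np.atleast_1d(x)[:, None]
    kernels = np.exp(-0.5 * ((x - member_forecasts) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return kernels @ weights

# Hypothetical 7-member 2-m temperature forecast (deg C) with weights from a training period
members = np.array([11.2, 12.0, 10.5, 11.8, 12.4, 11.0, 11.5])
weights = np.array([0.20, 0.15, 0.05, 0.20, 0.10, 0.10, 0.20])
sigma = 1.3

grid = np.linspace(6, 18, 241)
pdf = bma_predictive_pdf(grid, members, weights, sigma)
det_forecast = weights @ members          # weighted ensemble mean ("deterministic-style" BMA forecast)
print(det_forecast, np.trapz(pdf, grid))  # the integral should be close to 1
```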
A cross-comparison of different techniques for modeling macro-level cyclist crashes.
Guo, Yanyong; Osama, Ahmed; Sayed, Tarek
2018-04-01
Despite the recognized benefits of cycling as a sustainable mode of transportation, cyclists are considered vulnerable road users and there are concerns about their safety. Therefore, it is essential to investigate the factors affecting cyclist safety. The goal of this study is to evaluate and compare different approaches of modeling macro-level cyclist safety as well as investigating factors that contribute to cyclist crashes using a comprehensive list of covariates. Data from 134 traffic analysis zones (TAZs) in the City of Vancouver were used to develop macro-level crash models (CM) incorporating variables related to actual traffic exposure, socio-economics, land use, built environment, and bike network. Four types of CMs were developed under a full Bayesian framework: Poisson lognormal model (PLN), random intercepts PLN model (RIPLN), random parameters PLN model (RPPLN), and spatial PLN model (SPLN). The SPLN model had the best goodness of fit, and the results highlighted the significant effects of spatial correlation. The models showed that the cyclist crashes were positively associated with bike and vehicle exposure measures, households, commercial area density, and signal density. On the other hand, negative associations were found between cyclist crashes and some bike network indicators such as average edge length, average zonal slope, and off-street bike links. Copyright © 2018 Elsevier Ltd. All rights reserved.
MMOD Protection and Degradation Effects for Thermal Control Systems
NASA Technical Reports Server (NTRS)
Christiansen, Eric
2014-01-01
Micrometeoroid and orbital debris (MMOD) environment overview; hypervelocity impact effects and MMOD shielding; MMOD risk assessment process; requirements and protection techniques for ISS, Shuttle, and Orion/Commercial Crew Vehicles; MMOD effects on spacecraft systems and improving MMOD protection for radiators (coatings), thermal protection systems (TPS) for atmospheric entry vehicles (coatings), windows, solar arrays, solar array masts, EVA handrails, and thermal blankets. The orbital debris environment is provided by JSC and is the predominant threat in low Earth orbit: ORDEM 3.0 is the latest model (released December 2013), http://orbitaldebris.jsc.nasa.gov/; it describes man-made objects in orbit about Earth impacting at up to 16 km/s (average 9-10 km/s for the ISS orbit), with high-density debris (steel) being a major issue. The meteoroid model is provided by MSFC: MEM-R2 is the latest release, http://www.nasa.gov/offices/meo/home/index.html; it describes natural particles in orbit about the sun (Mg-silicates, Ni-Fe, others) at 11-72 km/s, averaging 22-23 km/s.
Almost sure convergence in quantum spin glasses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buzinski, David, E-mail: dab197@case.edu; Meckes, Elizabeth, E-mail: elizabeth.meckes@case.edu
2015-12-15
Recently, Keating, Linden, and Wells [Markov Processes Relat. Fields 21(3), 537-555 (2015)] showed that the density of states measure of a nearest-neighbor quantum spin glass model is approximately Gaussian when the number of particles is large. The density of states measure is the ensemble average of the empirical spectral measure of a random matrix; in this paper, we use concentration of measure and entropy techniques together with the result of Keating, Linden, and Wells to show that in fact the empirical spectral measure of such a random matrix is almost surely approximately Gaussian itself with no ensemble averaging. We also extend this result to a spherical quantum spin glass model and to the more general coupling geometries investigated by Erdős and Schröder [Math. Phys., Anal. Geom. 17(3-4), 441–464 (2014)].
2016-09-07
been demonstrated on maximum power point tracking for photovoltaic arrays and for wind turbines. 3. ES has recently been implemented on the Mars...high-dimensional optimization problems. Extensions and applications of these techniques were developed during the realization of the project. 15...studied problems of dynamic average consensus and a class of unconstrained continuous-time optimization algorithms for the coordination of multiple
Downward longwave surface radiation from sun-synchronous satellite data - Validation of methodology
NASA Technical Reports Server (NTRS)
Darnell, W. L.; Gupta, S. K.; Staylor, W. F.
1986-01-01
An extensive study has been carried out to validate a satellite technique for estimating downward longwave radiation at the surface. The technique, mostly developed earlier, uses operational sun-synchronous satellite data and a radiative transfer model to provide the surface flux estimates. The satellite-derived fluxes were compared directly with corresponding ground-measured fluxes at four different sites in the United States for a common one-year period. This provided a study of seasonal variations as well as a diversity of meteorological conditions. Dome heating errors in the ground-measured fluxes were also investigated and were corrected prior to the comparisons. Comparison of the monthly averaged fluxes from the satellite and ground sources for all four sites for the entire year showed a correlation coefficient of 0.98 and a standard error of estimate of 10 W/sq m. A brief description of the technique is provided, and the results validating the technique are presented.
NASA Astrophysics Data System (ADS)
Zhang, Shupeng; Yi, Xue; Zheng, Xiaogu; Chen, Zhuoqi; Dan, Bo; Zhang, Xuanze
2014-11-01
In this paper, a global carbon assimilation system (GCAS) is developed for optimizing the global land surface carbon flux at 1° resolution using multiple ecosystem models. In GCAS, three ecosystem models, Boreal Ecosystem Productivity Simulator, Carnegie-Ames-Stanford Approach, and Community Atmosphere Biosphere Land Exchange, produce the prior fluxes, and an atmospheric transport model, Model for OZone And Related chemical Tracers, is used to calculate atmospheric CO2 concentrations resulting from these prior fluxes. A local ensemble Kalman filter is developed to assimilate atmospheric CO2 data observed at 92 stations to optimize the carbon flux for six land regions, and the Bayesian model averaging method is implemented in GCAS to calculate the weighted average of the optimized fluxes based on individual ecosystem models. The weights for the models are found according to the closeness of their forecasted CO2 concentrations to observations. Results of this study show that the model weights vary in time and space, allowing for an optimum utilization of different strengths of different ecosystem models. It is also demonstrated that spatial localization is an effective technique to avoid spurious optimization results for regions that are not well constrained by the atmospheric data. Based on the multimodel optimized flux from GCAS, we found that the average global terrestrial carbon sink over the 2002-2008 period is 2.97 ± 1.1 PgC yr-1, and the sinks are 0.88 ± 0.52, 0.27 ± 0.33, 0.67 ± 0.39, 0.90 ± 0.68, 0.21 ± 0.31, and 0.04 ± 0.08 PgC yr-1 for North America, South America, Africa, Eurasia, Tropical Asia, and Australia, respectively. This multimodel GCAS can be used to improve global carbon cycle estimation.
Fang, Xin; Li, Runkui; Kan, Haidong; Bottai, Matteo; Fang, Fang
2016-01-01
Objective To demonstrate an application of Bayesian model averaging (BMA) with generalised additive mixed models (GAMM) and provide a novel modelling technique to assess the association between inhalable coarse particles (PM10) and respiratory mortality in time-series studies. Design A time-series study using regional death registry between 2009 and 2010. Setting 8 districts in a large metropolitan area in Northern China. Participants 9559 permanent residents of the 8 districts who died of respiratory diseases between 2009 and 2010. Main outcome measures Per cent increase in daily respiratory mortality rate (MR) per interquartile range (IQR) increase of PM10 concentration and corresponding 95% confidence interval (CI) in single-pollutant and multipollutant (including NOx, CO) models. Results The Bayesian model averaged GAMM (GAMM+BMA) and the optimal GAMM of PM10, multipollutants and principal components (PCs) of multipollutants showed comparable results for the effect of PM10 on daily respiratory MR, that is, one IQR increase in PM10 concentration corresponded to 1.38% vs 1.39%, 1.81% vs 1.83% and 0.87% vs 0.88% increase, respectively, in daily respiratory MR. However, GAMM+BMA gave slightly but noticeably wider CIs for the single-pollutant model (−1.09 to 4.28 vs −1.08 to 3.93) and the PCs-based model (−2.23 to 4.07 vs −2.03 to 3.88). The CIs of the multiple-pollutant model from the two methods are similar, that is, −1.12 to 4.85 versus −1.11 to 4.83. Conclusions The BMA method may represent a useful tool for modelling uncertainty in time-series studies when evaluating the effect of air pollution on fatal health outcomes. PMID:27531727
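The "per cent increase in daily mortality per IQR increase of PM10" quoted above comes from a log-linear (GAMM) coefficient. A small worked sketch of that conversion follows; the coefficient and IQR are assumed values chosen only so the result lands near the 1.38% figure reported, not values taken from the study.

```python
import math

beta = 0.00018   # assumed log-rate coefficient per microgram/m^3 of PM10 (illustrative only)
iqr = 76.0       # assumed interquartile range of PM10 concentration, microgram/m^3

pct_increase = (math.exp(beta * iqr) - 1.0) * 100.0
print(f"{pct_increase:.2f}% increase in respiratory MR per IQR of PM10")
# exp(0.00018 * 76) - 1 is roughly 0.0138, i.e. about a 1.38% increase
```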
Lee, Minhyun; Koo, Choongwan; Hong, Taehoon; Park, Hyo Seon
2014-04-15
For an effective photovoltaic (PV) system, it is necessary to accurately determine the monthly average daily solar radiation (MADSR) and to develop an accurate MADSR map, which can simplify the decision-making process for selecting the suitable location of the PV system installation. Therefore, this study aimed to develop a framework for the mapping of the MADSR using an advanced case-based reasoning (CBR) and a geostatistical technique. The proposed framework consists of the following procedures: (i) the geographic scope for the mapping of the MADSR is set, and the measured MADSR and meteorological data in the geographic scope are collected; (ii) using the collected data, the advanced CBR model is developed; (iii) using the advanced CBR model, the MADSR at unmeasured locations is estimated; and (iv) by applying the measured and estimated MADSR data to the geographic information system, the MADSR map is developed. A practical validation was conducted by applying the proposed framework to South Korea. The validation showed that the MADSR map developed through the proposed framework offers improved accuracy. The developed MADSR map can be used for estimating the MADSR at unmeasured locations and for determining the optimal location for the PV system installation.
Measurement of Initial Conditions at Nozzle Exit of High Speed Jets
NASA Technical Reports Server (NTRS)
Panda, J.; Zaman, K. B. M. Q.; Seasholtz, R. G.
2004-01-01
The time averaged and unsteady density fields close to the nozzle exit (0.1 less than or = x/D less than or = 2, x: downstream distance, D: jet diameter) of unheated free jets at Mach numbers of 0.95, 1.4, and 1.8 were measured using a molecular Rayleigh scattering based technique. The initial thickness of shear layer and its linear growth rate were determined from time-averaged density survey and a modeling process, which utilized the Crocco-Busemann equation to relate density profiles to velocity profiles. The model also corrected for the smearing effect caused by a relatively long probe length in the measured density data. The calculated shear layer thickness was further verified from a limited hot-wire measurement. Density fluctuations spectra, measured using a two-Photomultiplier-tube technique, were used to determine evolution of turbulent fluctuations in various Strouhal frequency bands. For this purpose spectra were obtained from a large number of points inside the flow; and at every axial station spectral data from all radial positions were integrated. The radially-integrated fluctuation data show an exponential growth with downstream distance and an eventual saturation in all Strouhal frequency bands. The initial level of density fluctuations was calculated by extrapolation to nozzle exit.
Fedorová, P; Srnec, R; Pěnčík, J; Dvořák, M; Krbec, M; Nečas, A
2015-01-01
PURPOSE OF THE STUDY Recent trends in the experimental surgical management of a partial anterior cruciate ligament (ACL) rupture in animals show repair of an ACL lesion using novel biomaterials both for biomechanical reinforcement of a partially unstable knee and as suitable scaffolds for bone marrow stem cell therapy in a partial ACL tear. The study deals with mechanical testing of the newly developed ultra-high-molecular-weight polyethylene (UHMWPE) biomaterial anchored to bone with Hexalon biodegradable ACL/PCL screws, as a new possibility of intra-articular reinforcement of a partial ACL tear. MATERIAL AND METHODS Two groups of ex vivo pig knee models were prepared and tested as follows: the model of an ACL tear stabilised with UHMWPE biomaterial using a Hexalon ACL/PCL screw (group 1; n = 10) and the model of an ACL tear stabilised with the traditional, and in veterinary medicine used, extracapsular technique involving a monofilament nylon fibre, a clamp and a Securos bone anchor (group 2; n = 11). The models were loaded at a standing angle of 100° and the maximum load (N) and shift (mm) values were recorded. RESULTS In group 1 the average maximal peak force was 167.6 ± 21.7 N and the shift was on average 19.0 ± 4.0 mm. In all 10 specimens, the maximum load made the UHMWPE implant break close to its fixation to the femur but the construct/fixation never failed at the site where the material was anchored to the bone. In group 2, the average maximal peak force was 207.3 ± 49.2 N and the shift was on average 24.1 ± 9.5 mm. The Securos stabilisation failed by pullout of the anchor from the femoral bone in nine out of 11 cases; the monofilament fibre ruptured in two cases. CONCLUSIONS It can be concluded that a UHMWPE substitute used in ex-vivo pig knee models has mechanical properties comparable with clinically used extracapsular Securos stabilisation and, because of its potential to carry stem cells and bioactive substances, it can meet the requirements for an implant appropriate to the unique technique of protecting a partial ACL tear. In addition, it has no critical point of ACL substitute failure at the site of its anchoring to the bone (compared to the previously used PET/PCL substitute). Key words: knee stabilisation, stifle surgery, ultra-high-molecular-weight polyethylene, UHMWPE, nylon monofilament thread, biodegradable screw, bone anchor.
Physiological correlates of mental workload
NASA Technical Reports Server (NTRS)
Zacharias, G. L.
1980-01-01
A literature review was conducted to assess the basis of and techniques for physiological assessment of mental workload. The study findings reviewed had shortcomings involving one or more of the following basic problems: (1) physiologic arousal can be easily driven by nonworkload factors, confounding any proposed metric; (2) the profound absence of underlying physiologic models has promulgated a multiplicity of seemingly arbitrary signal processing techniques; (3) the unspecified multidimensional nature of physiological "state" has given rise to a broad spectrum of competing noncommensurate metrics; and (4) the lack of an adequate definition of workload compels physiologic correlations to suffer either from the vagueness of implicit workload measures or from the variance of explicit subjective assessments. Using specific studies as examples, two basic signal processing/data reduction techniques in current use, time averaging and ensemble averaging, are discussed.
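A compact sketch contrasting the two signal-reduction approaches named above, time averaging of a single record versus ensemble averaging across repeated trials. The synthetic heart-rate-like signal and its noise level are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, trials, seconds = 100, 20, 4          # sampling rate (Hz), repeated trials, duration (s)
t = np.arange(seconds * fs) / fs
template = 70 + 5 * np.sin(2 * np.pi * 0.5 * t)                     # task-locked component
records = template + rng.normal(scale=3.0, size=(trials, t.size))    # noisy repeated trials

time_avg = records[0].mean()              # time average of one record: a single number
ensemble_avg = records.mean(axis=0)       # ensemble average across trials: a waveform
print(round(time_avg, 2), ensemble_avg.shape)
```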
A comparative study of two prediction models for brain tumor progression
NASA Astrophysics Data System (ADS)
Zhou, Deqi; Tran, Loc; Wang, Jihong; Li, Jiang
2015-03-01
MR diffusion tensor imaging (DTI) technique together with traditional T1 or T2 weighted MRI scans supplies rich information sources for brain cancer diagnoses. These images form large-scale, high-dimensional data sets. Due to the fact that significant correlations exist among these images, we assume low-dimensional geometry data structures (manifolds) are embedded in the high-dimensional space. Those manifolds might be hidden from radiologists because it is challenging for human experts to interpret high-dimensional data. Identification of the manifold is a critical step for successfully analyzing multimodal MR images. We have developed various manifold learning algorithms (Tran et al. 2011; Tran et al. 2013) for medical image analysis. This paper presents a comparative study of an incremental manifold learning scheme (Tran et al. 2013) versus the deep learning model (Hinton et al. 2006) in the application of brain tumor progression prediction. The incremental manifold learning is a variant of manifold learning algorithm to handle large-scale datasets in which a representative subset of original data is sampled first to construct a manifold skeleton and remaining data points are then inserted into the skeleton by following their local geometry. The incremental manifold learning algorithm aims at mitigating the computational burden associated with traditional manifold learning methods for large-scale datasets. Deep learning is a recently developed multilayer perceptron model that has achieved state-of-the-art performances in many applications. A recent technique named "Dropout" can further boost the deep model by preventing weight coadaptation to avoid over-fitting (Hinton et al. 2012). We applied the two models on multiple MRI scans from four brain tumor patients to predict tumor progression and compared the performances of the two models in terms of average prediction accuracy, sensitivity, specificity and precision. The quantitative performance metrics were calculated as averages over the four patients. Experimental results show that both the manifold learning and deep neural network models produced better results compared to using raw data and principal component analysis (PCA), and the deep learning model is a better method than manifold learning on this data set. The averaged sensitivity and specificity by deep learning are comparable with those of the manifold learning approach, while its precision is considerably higher. This means that the predicted abnormal points by deep learning are more likely to correspond to the actual progression region.
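A short sketch of the performance metrics quoted above (sensitivity, specificity, precision, accuracy) computed from per-patient confusion counts and then averaged over patients; the counts themselves are invented for illustration, not the study's data.

```python
import numpy as np

# Hypothetical per-patient confusion counts: (TP, FP, TN, FN)
counts = [(40, 5, 120, 10), (35, 8, 110, 12), (50, 4, 130, 9), (28, 6, 140, 11)]

def metrics(tp, fp, tn, fn):
    sens = tp / (tp + fn)                      # sensitivity (recall)
    spec = tn / (tn + fp)                      # specificity
    prec = tp / (tp + fp)                      # precision
    acc = (tp + tn) / (tp + fp + tn + fn)      # accuracy
    return sens, spec, prec, acc

per_patient = np.array([metrics(*c) for c in counts])
print("avg sensitivity, specificity, precision, accuracy:",
      per_patient.mean(axis=0).round(3))
```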
NASA Astrophysics Data System (ADS)
Sitohang, Yosep Oktavianus; Darmawan, Gumgum
2017-08-01
This research attempts to compare two forecasting models in time series analysis for predicting the sales volume of motorcycles in Indonesia. The first forecasting model used in this paper is the Autoregressive Fractionally Integrated Moving Average (ARFIMA). ARFIMA can handle non-stationary data and has better forecasting accuracy than ARIMA on long memory data. This is because the fractional difference parameter can explain correlation structure in data that has short memory, long memory, and even both structures simultaneously. The second forecasting model is Singular Spectrum Analysis (SSA). The advantage of the technique is that it is able to decompose time series data into the classic components, i.e., trend, cyclical, seasonal and noise components. This makes the forecasting accuracy of this technique significantly better. Furthermore, SSA is a model-free technique, so it is likely to have a very wide range of application. Selection of the best model is based on the lowest MAPE value. Based on the calculation, the best ARFIMA model obtained is ARFIMA(3, d = 0.63, 0) with a MAPE value of 22.95 percent. For SSA, with a window length of 53 and 4 groups of reconstructed data, the resulting MAPE value is 13.57 percent. Based on these results it is concluded that SSA produces better forecasting accuracy.
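Model selection above rests on the mean absolute percentage error (MAPE). A minimal sketch of computing it for two competing forecast series is given below; the sales figures and forecasts are placeholders, not the paper's data.

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((actual - forecast) / actual)) * 100.0

actual = [520, 480, 610, 550, 590]               # hypothetical monthly sales (thousands of units)
forecast_arfima = [430, 520, 500, 640, 470]
forecast_ssa = [480, 450, 660, 520, 620]

scores = {"ARFIMA": mape(actual, forecast_arfima), "SSA": mape(actual, forecast_ssa)}
print(scores, "-> best:", min(scores, key=scores.get))
```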
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brooker, A.; Gonder, J.; Lopp, S.
The Automotive Deployment Option Projection Tool (ADOPT) is a light-duty vehicle consumer choice and stock model supported by the U.S. Department of Energy’s Vehicle Technologies Office. It estimates technology improvement impacts on U.S. light-duty vehicle sales, petroleum use, and greenhouse gas emissions. ADOPT uses techniques from the multinomial logit method and the mixed logit method to estimate sales. Specifically, it estimates sales based on the weighted value of key attributes including vehicle price, fuel cost, acceleration, range and usable volume. The average importance of several attributes changes nonlinearly across its range and changes with income. For several attributes, a distribution of importance around the average value is used to represent consumer heterogeneity. The majority of existing vehicle makes, models, and trims are included to fully represent the market. The Corporate Average Fuel Economy regulations are enforced. The sales feed into the ADOPT stock model. It captures key aspects for summing petroleum use and greenhouse gas emissions, including the change in vehicle miles traveled by vehicle age, the creation of new model options based on the success of existing vehicles, new vehicle option introduction rate limits, and survival rates by vehicle age. ADOPT has been extensively validated with historical sales data. It matches in key dimensions including sales by fuel economy, acceleration, price, vehicle size class, and powertrain across multiple years. A graphical user interface provides easy and efficient use. It manages the inputs, simulation, and results.
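A toy sketch of the multinomial-logit style share calculation ADOPT is described as using: each vehicle gets a utility from weighted attributes and sales shares follow the logit formula. The attribute values, importance weights, and vehicle names are assumptions for illustration, not ADOPT's actual data or implementation.

```python
import numpy as np

# Columns: price ($k, lower is better), fuel cost ($/yr, lower is better),
#          0-60 mph time (s, lower is better), range (miles, higher is better)
vehicles = {"Compact ICE": [22, 1400, 8.5, 420],
            "Midsize HEV": [28, 900, 7.8, 550],
            "BEV":         [38, 500, 6.5, 250]}
weights = np.array([-0.08, -0.001, -0.15, 0.004])   # assumed attribute importances

names = list(vehicles)
X = np.array([vehicles[n] for n in names], dtype=float)
utility = X @ weights
shares = np.exp(utility) / np.exp(utility).sum()     # multinomial logit choice shares
for n, s in zip(names, shares):
    print(f"{n}: {s:.1%}")
```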
Weather forecasting based on hybrid neural model
NASA Astrophysics Data System (ADS)
Saba, Tanzila; Rehman, Amjad; AlGhamdi, Jarallah S.
2017-11-01
Making deductions and expectations about climate has been a challenge all through mankind's history. Accurate meteorological predictions help to foresee and handle problems well in time. Different strategies have been investigated using various machine learning techniques in reported forecasting systems. The current research treats weather forecasting as a major challenge for machine information mining and deduction. Accordingly, this paper presents a hybrid neural model (MLP and RBF) to enhance the accuracy of weather forecasting. The proposed hybrid model is designed to ensure precise forecasting, as required by climate-anticipating frameworks. The study concentrates on data representing Saudi Arabia weather forecasting. The main input features employed to train the individual and hybrid neural networks include average dew point, minimum temperature, maximum temperature, mean temperature, average relative moistness, precipitation, normal wind speed, high wind speed and average cloudiness. The output layer is composed of two neurons to represent rainy and dry weathers. Moreover, a trial and error approach is adopted to select an appropriate number of inputs to the hybrid neural network. Correlation coefficient, RMSE and scatter index are the standard yardsticks adopted for forecast accuracy measurement. On an individual basis, MLP forecasting results are better than those of RBF; however, the proposed simplified hybrid neural model achieves better forecasting accuracy than both individual networks. Additionally, the results are better than those reported in the state of the art, using a simple neural structure that reduces training time and complexity.
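A sketch of the three "yardsticks" named above: correlation coefficient, RMSE, and scatter index (taken here as RMSE divided by the mean of the observations, a common definition that is assumed, not confirmed, to match the paper's usage). The sample data are placeholders.

```python
import numpy as np

def forecast_scores(obs, pred):
    """Return correlation coefficient, RMSE, and scatter index for a forecast."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    r = np.corrcoef(obs, pred)[0, 1]
    rmse = np.sqrt(np.mean((obs - pred) ** 2))
    scatter_index = rmse / obs.mean()
    return r, rmse, scatter_index

obs = [21.0, 23.5, 19.8, 25.1, 27.3, 24.4]   # hypothetical observed daily max temperature (deg C)
pred = [20.4, 24.1, 20.5, 24.0, 26.5, 25.2]  # hypothetical model forecasts
print([round(v, 3) for v in forecast_scores(obs, pred)])
```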
García Nieto, Paulino José; González Suárez, Victor Manuel; Álvarez Antón, Juan Carlos; Mayo Bayón, Ricardo; Sirgo Blanco, José Ángel; Díaz Fernández, Ana María
2015-01-01
The aim of this study was to obtain a predictive model able to perform an early detection of central segregation severity in continuous cast steel slabs. Segregation in steel cast products is an internal defect that can be very harmful when slabs are rolled in heavy plate mills. In this research work, the central segregation was studied with success using the data mining methodology based on multivariate adaptive regression splines (MARS) technique. For this purpose, the most important physical-chemical parameters are considered. The results of the present study are two-fold. In the first place, the significance of each physical-chemical variable on the segregation is presented through the model. Second, a model for forecasting segregation is obtained. Regression with optimal hyperparameters was performed and coefficients of determination equal to 0.93 for continuity factor estimation and 0.95 for average width were obtained when the MARS technique was applied to the experimental dataset, respectively. The agreement between experimental data and the model confirmed the good performance of the latter.
Gholamzadeh Nikjoo, Raana; Jabbari Beyrami, Hossein; Jannati, Ali; Asghari Jaafarabadi, Mohammad
2012-01-01
Background: The present study was conducted to scrutinize Public-Private Partnership (PPP) models in public hospitals of different countries based on performance indicators in order to select appropriate models for Iran hospitals. Methods: In this mixed (quantitative-qualitative) study, a systematic review and expert panel were carried out to identify varied models of PPP as well as performance indicators. In the second step we prioritized performance indicators and PPP models based on selected performance indicators by the Analytical Hierarchy Process (AHP) technique. The data were analyzed by Excel 2007 and Expert Choice 11 software. Results: In the quality-effectiveness area, indicators like the rate of hospital infections (100%), hospital accidents prevalence rate (73%), pure rate of hospital mortality (63%), and patient satisfaction percentage (53%); in the accessibility-equity area, indicators such as average inpatient waiting time (100%) and average outpatient waiting time (74%); and in the financial-efficiency area, indicators including average length of stay (100%), bed occupation ratio (99%), and specific income to total cost ratio (97%) were chosen as the most key performance indicators. In the prioritization of the PPP models, the clinical outsourcing, management, privatization, BOO (build, own, operate) and non-clinical outsourcing models achieved high priority for various performance indicator areas. Conclusion: This study provided the most common PPP options in the field of public hospitals and gathered suitable evidence from experts for choosing the appropriate PPP option for public hospitals. The effect of private sector presence on public hospital performance will differ depending on which PPP options are undertaken. PMID:24688942
Towards the Irving-Kirkwood limit of the mechanical stress tensor
NASA Astrophysics Data System (ADS)
Smith, E. R.; Heyes, D. M.; Dini, D.
2017-06-01
The probability density functions (PDFs) of the local measure of pressure as a function of the sampling volume are computed for a model Lennard-Jones (LJ) fluid using the Method of Planes (MOP) and Volume Averaging (VA) techniques. This builds on the study of Heyes, Dini, and Smith [J. Chem. Phys. 145, 104504 (2016)] which only considered the VA method for larger subvolumes. The focus here is typically on much smaller subvolumes than considered previously, which tend to the Irving-Kirkwood limit where the pressure tensor is defined at a point. The PDFs from the MOP and VA routes are compared for cubic subvolumes, V = ℓ³. Using very high grid-resolution and box-counting analysis, we also show that any measurement of pressure in a molecular system will fail to exactly capture the molecular configuration. This suggests that it is impossible to obtain the pressure in the Irving-Kirkwood limit using the commonly employed grid based averaging techniques. More importantly, below ℓ ≈ 3 in LJ reduced units, the PDFs depart from Gaussian statistics, and for ℓ = 1.0, a double peaked PDF is observed in the MOP but not VA pressure distributions. This departure from a Gaussian shape means that the average pressure is not the most representative or common value to arise. In addition to contributing to our understanding of local pressure formulas, this work shows a clear lower limit on the validity of simply taking the average value when coarse graining pressure from molecular (and colloidal) systems.
Performance of preproduction model cesium beam frequency standards for spacecraft applications
NASA Technical Reports Server (NTRS)
Levine, M. W.
1978-01-01
A cesium beam frequency standard for spaceflight application on Navigation Development Satellites was designed and fabricated, and preliminary testing was completed. The cesium standard evolved from an earlier prototype model launched aboard NTS-2 and the engineering development model to be launched aboard NTS satellites during 1979. A number of design innovations, including a hybrid analog/digital integrator and the replacement of analog filters and phase detectors by clocked digital sampling techniques, are discussed. Thermal and thermal-vacuum testing was concluded and test data are presented. Stability data for 10 to 10,000 seconds averaging interval, measured under laboratory conditions, are shown.
Stock, Eileen M.; Kimbrel, Nathan A.; Meyer, Eric C.; Copeland, Laurel A.; Monte, Ralph; Zeber, John E.; Gulliver, Suzy Bird; Morissette, Sandra B.
2016-01-01
Many Veterans from the conflicts in Iraq and Afghanistan return home with physical and psychological impairments that impact their ability to enjoy normal life activities and diminish their quality of life (QoL). The present research aimed to identify predictors of QoL over an 8-month period using Bayesian model averaging (BMA), which is a statistical technique useful for maximizing power with smaller sample sizes. A sample of 117 Iraq and Afghanistan Veterans receiving care in a southwestern healthcare system was recruited, and BMA examined the impact of key demographics (e.g., age, gender), diagnoses (e.g., depression), and treatment modalities (e.g., individual therapy, medication) on QoL over time. Multiple imputation based on Gibbs sampling was employed for incomplete data (6.4% missingness). Average follow-up QoL scores were significantly lower than at baseline (73.2 initial vs 69.5 4-month and 68.3 8-month). Employment was associated with increased QoL during each follow-up, while posttraumatic stress disorder and black race were inversely related. Additionally, predictive models indicated that depression, income, treatment for a medical condition, and group psychotherapy were strong negative predictors of 4-month QoL but not 8-month QoL. PMID:24942672
NASA Technical Reports Server (NTRS)
Panda, J.; Seasholtz, R. G.
2005-01-01
Recent advancement in the molecular Rayleigh scattering based technique allowed for simultaneous measurement of velocity and density fluctuations with high sampling rates. The technique was used to investigate unheated high subsonic and supersonic fully expanded free jets in the Mach number range of 0.8 to 1.8. The difference between the Favre averaged and Reynolds averaged axial velocity and axial component of the turbulent kinetic energy is found to be small. Estimates based on the Morkovin's "Strong Reynolds Analogy" were found to provide lower values of turbulent density fluctuations than the measured data.
A joint source-channel distortion model for JPEG compressed images.
Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C
2006-06-01
The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-coding modulation, and run-length coding are included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.
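The distortion above is quoted in PSNR; as a small reminder sketch, PSNR follows directly from the mean squared error for 8-bit images. The example MSE value is arbitrary, chosen only to show the arithmetic.

```python
import math

def psnr(mse, peak=255.0):
    """Peak signal-to-noise ratio in dB for a given mean squared error."""
    return 10.0 * math.log10(peak ** 2 / mse)

print(round(psnr(42.0), 2), "dB")   # e.g. an MSE of 42 on 8-bit data is roughly 31.9 dB
```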
Fine particle receptor modeling in the atmosphere of Mexico City.
Vega, Elizabeth; Lowenthal, Douglas; Ruiz, Hugo; Reyes, Elizabeth; Watson, John G; Chow, Judith C; Viana, Mar; Querol, Xavier; Alastuey, Andrés
2009-12-01
Source apportionment analyses were carried out by means of receptor modeling techniques to determine the contribution of major fine particulate matter (PM2.5) sources found at six sites in Mexico City. Thirty-six source profiles were determined within Mexico City to establish the fingerprints of particulate matter sources. Additionally, the profiles under the same source category were averaged using cluster analysis and the fingerprints of 10 sources were included. Before application of the chemical mass balance (CMB), several tests were carried out to determine the best combination of source profiles and species used for the fitting. CMB results showed significant spatial variations in source contributions among the six sites that are influenced by local soil types and land use. On average, 24-hr PM2.5 concentrations were dominated by mobile source emissions (45%), followed by secondary inorganic aerosols (16%) and geological material (17%). Industrial emissions representing oil combustion and incineration contributed less than 5%, and their contribution was higher at the industrial areas of Tlalnepantla (11%) and Xalostoc (8%). Other sources such as cooking, biomass burning, and oil fuel combustion were identified at lower levels. A second receptor model (principal component analysis [PCA]) was subsequently applied to three of the monitoring sites for comparison purposes. Although differences were obtained between source contributions, the results evidence the advantages of the combined use of different receptor modeling techniques for source apportionment, given the complementary nature of their results. Further research is needed in this direction to reach a better agreement between the estimated source contributions to the particulate matter mass.
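A minimal sketch of the chemical-mass-balance idea (ambient species concentrations expressed as a non-negative combination of source-profile fingerprints), solved here with ordinary non-negative least squares rather than the effective-variance solution used by the CMB software; the profiles and ambient concentrations below are invented for illustration.

```python
import numpy as np
from scipy.optimize import nnls

# Rows = chemical species, columns = source profiles (mass fraction of each species per source)
species = ["OC", "EC", "SO4", "Si"]
profiles = np.array([[0.45, 0.05, 0.10],    # mobile, secondary inorganic, geological
                     [0.20, 0.01, 0.02],
                     [0.05, 0.80, 0.03],
                     [0.01, 0.01, 0.30]])
ambient = np.array([6.0, 2.4, 3.5, 1.1])    # hypothetical ambient PM2.5 species, micrograms/m^3

contrib, residual = nnls(profiles, ambient)  # micrograms/m^3 attributed to each source
print(dict(zip(["mobile", "secondary", "geological"], contrib.round(2))), residual)
```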
KAMINSKI, GEORGE A.; STERN, HARRY A.; BERNE, B. J.; FRIESNER, RICHARD A.; CAO, YIXIANG X.; MURPHY, ROBERT B.; ZHOU, RUHONG; HALGREN, THOMAS A.
2014-01-01
We present results of developing a methodology suitable for producing molecular mechanics force fields with explicit treatment of electrostatic polarization for proteins and other molecular systems of biological interest. The technique allows simulation of realistic-size systems. Employing high-level ab initio data as a target for fitting allows us to avoid the problem of the lack of detailed experimental data. Using fast and reliable quantum mechanical methods supplies robust fitting data for the resulting parameter sets. As a result, gas-phase many-body effects for dipeptides are captured within an average RMSD of 0.22 kcal/mol from their ab initio values, and conformational energies for the di- and tetrapeptides are reproduced within an average RMSD of 0.43 kcal/mol from their quantum mechanical counterparts. The latter is achieved in part because of application of a novel torsional fitting technique recently developed in our group, which has already been used to greatly improve the accuracy of the peptide conformational equilibrium prediction with the OPLS-AA force field.1 Finally, we have employed the newly developed first-generation model in computing gas-phase conformations of real proteins, as well as in molecular dynamics studies of the systems. The results show that, although the overall accuracy is no better than what can be achieved with a fixed-charges model, the methodology produces robust results, permits reasonably low computational cost, and avoids other computational problems typical for polarizable force fields. It can be considered as a solid basis for building a more accurate and complete second-generation model. PMID:12395421
NASA Astrophysics Data System (ADS)
Taravat, A.; Del Frate, F.
2013-09-01
As a major aspect of marine pollution, oil release into the sea has serious biological and environmental impacts. Among remote sensing systems (which offer a non-destructive investigation method), synthetic aperture radar (SAR) can provide valuable synoptic information about the position and size of an oil spill due to its wide area coverage and day/night, all-weather capabilities. In this paper we present a new automated method for oil-spill monitoring. The new approach is based on the combination of the Weibull Multiplicative Model and machine learning techniques to differentiate between dark spots and the background. First, the filter created based on the Weibull Multiplicative Model is applied to each sub-image. Second, the sub-image is segmented by two different neural network techniques (Pulsed Coupled Neural Networks and Multilayer Perceptron Neural Networks). As the last step, a very simple filtering process is used to eliminate the false targets. The proposed approaches were tested on 20 ENVISAT and ERS2 images which contained dark spots. The same parameters were used in all tests. For the overall dataset, average accuracies of 94.05% and 95.20% were obtained for the PCNN and MLP methods, respectively. The average computational time for dark-spot detection with a 256 × 256 image is about 4 s for PCNN segmentation using IDL software, which is the fastest in this field at present. Our experimental results demonstrate that the proposed approach is very fast, robust and effective. The proposed approach can be applied to future spaceborne SAR images.
NASA Technical Reports Server (NTRS)
Mackenzie, Anne I.; Lawrence, Roland W.
2000-01-01
As new radiometer technologies provide the possibility of greatly improved spatial resolution, their performance must also be evaluated in terms of expected sensitivity and absolute accuracy. As aperture size increases, the sensitivity of a Dicke mode radiometer can be maintained or improved by application of any or all of three digital averaging techniques: antenna data averaging with a greater than 50% antenna duty cycle, reference data averaging, and gain averaging. An experimental, noise-injection, benchtop radiometer at C-band showed a 68.5% reduction in Delta-T after all three averaging methods had been applied simultaneously. For any one antenna integration time, the optimum 34.8% reduction in Delta-T was realized by using an 83.3% antenna/reference duty cycle.
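The Delta-T improvements described above can be illustrated with the standard radiometer-equation approximation, in which the antenna and reference contributions to the measurement uncertainty add in quadrature. The sketch below is a simplified illustration under assumed system temperature, bandwidth, and cycle time; gain averaging, which the experiment also exploited, is not modeled here, so the numbers are not the paper's 68.5% figure.

```python
import numpy as np

def delta_t(t_sys, bandwidth, t_ant, t_ref):
    """Standard radiometer-equation approximation for a Dicke-type receiver:
    uncertainty contributions from the antenna and reference measurements
    add in quadrature (gain fluctuations neglected)."""
    return t_sys * np.sqrt(1.0 / (bandwidth * t_ant) + 1.0 / (bandwidth * t_ref))

t_sys = 500.0        # K, assumed system temperature
bw = 500e6           # Hz, assumed predetection bandwidth
cycle = 0.1          # s, one antenna/reference cycle

# Classic Dicke operation: 50% antenna duty cycle, reference used once.
dt_classic = delta_t(t_sys, bw, 0.5 * cycle, 0.5 * cycle)

# Averaging techniques: >50% antenna duty cycle plus reference data averaged
# over many cycles (so its effective integration time is much longer).
n_ref_avg = 20
dt_averaged = delta_t(t_sys, bw, 0.833 * cycle, n_ref_avg * 0.167 * cycle)

print(f"classic Delta-T  : {dt_classic*1e3:.1f} mK")
print(f"averaged Delta-T : {dt_averaged*1e3:.1f} mK")
print(f"reduction        : {100*(1 - dt_averaged/dt_classic):.1f} %")
```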
Tubocurarine and pancuronium: a pharmacokinetic view.
Shanks, C A; Somogyi, A A; Ramzan, M I; Triggs, E J
1980-02-01
This review is an attempt to bring together the pharmacokinetic data on d-tubocurarine and pancuronium with clinical observations on relaxant dosage and effect. The modelling techniques used here represent an oversimplification of the relationships between relaxant plasma concentration and response as they do not predict either the time of onset of paralysis or its peak intensity. However, they do enable calculation of a bolus dose of relaxant required to achieve a particular intensity of paralysis for the average patient once pseudo-distribution equilibrium has been achieved. This has been further extended to predict the cumulation of the relaxants with subsequent dosage in average patients. Suggested regimens incorporating bolus and infusion doses of the relaxants to achieve continuous neuromuscular blockade have been calculated also. Averaged pharmacokinetic parameters derived from patients with renal or hepatic dysfunction have been used to predict the likely duration and intensities of paralysis for the relaxants.
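A minimal sketch of the kind of calculation the review describes, using a one-compartment model for illustration only: a bolus sized from a target plasma concentration and an apparent volume of distribution, and cumulation of repeated boluses by superposition. All parameter values are hypothetical and are not the relaxant-specific pharmacokinetic parameters discussed in the review.

```python
import numpy as np

# Simplified one-compartment sketch (hypothetical parameters, not the review's
# actual pharmacokinetic models for d-tubocurarine or pancuronium).
vd = 0.3          # L/kg, apparent volume of distribution
half_life = 120.0 # min, elimination half-life
k = np.log(2) / half_life
c_target = 0.6    # ug/mL plasma concentration associated with the desired block

# Bolus dose (per kg) needed to reach the target concentration once
# pseudo-distribution equilibrium is assumed.
bolus = c_target * vd * 1000.0   # ug/kg
print(f"initial bolus ~ {bolus:.0f} ug/kg")

# Cumulation with repeated equal boluses every 60 min, by superposition.
t = np.arange(0, 361.0, 1.0)                 # min
dose_times = np.arange(0, 361.0, 60.0)
conc = np.zeros_like(t)
for td in dose_times:
    mask = t >= td
    conc[mask] += (bolus / (vd * 1000.0)) * np.exp(-k * (t[mask] - td))

print(f"trough before 2nd dose: {conc[59]:.2f} ug/mL")
print(f"trough before 6th dose: {conc[299]:.2f} ug/mL (shows cumulation)")
```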
Finite Element Modeling of the NASA Langley Aluminum Testbed Cylinder
NASA Technical Reports Server (NTRS)
Grosveld, Ferdinand W.; Pritchard, Joselyn I.; Buehrle, Ralph D.; Pappa, Richard S.
2002-01-01
The NASA Langley Aluminum Testbed Cylinder (ATC) was designed to serve as a universal structure for evaluating structural acoustic codes, modeling techniques and optimization methods used in the prediction of aircraft interior noise. Finite element models were developed for the components of the ATC based on the geometric, structural and material properties of the physical test structure. Numerically predicted modal frequencies for the longitudinal stringer, ring frame and dome component models, and six assembled ATC configurations were compared with experimental modal survey data. The finite element models were updated and refined, using physical parameters, to increase correlation with the measured modal data. Excellent agreement, within an average of 1.5% to 2.9%, was obtained between the predicted and measured modal frequencies of the stringer, frame and dome components. The predictions for the modal frequencies of the assembled component Configurations I through V were within an average of 2.9% to 9.1%. Finite element modal analyses were performed for comparison with 3 psi and 6 psi internal pressurization conditions in Configuration VI. The modal frequencies were predicted by applying differential stiffness to the elements with pressure loading and creating reduced matrices for beam elements with offsets inside external superelements. The average disagreement between the measured and predicted differences for the 0 psi and 6 psi internal pressure conditions was less than 0.5%. Comparably good agreement was obtained for the differences between the 0 psi and 3 psi measured and predicted internal pressure conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, David, E-mail: dhthomas@mednet.ucla.edu; Lamb, James; White, Benjamin
2014-05-01
Purpose: To develop a novel 4-dimensional computed tomography (4D-CT) technique that exploits standard fast helical acquisition, a simultaneous breathing surrogate measurement, deformable image registration, and a breathing motion model to remove sorting artifacts. Methods and Materials: Ten patients were imaged under free-breathing conditions 25 successive times in alternating directions with a 64-slice CT scanner using a low-dose fast helical protocol. An abdominal bellows was used as a breathing surrogate. Deformable registration was used to register the first image (defined as the reference image) to the subsequent 24 segmented images. Voxel-specific motion model parameters were determined using a breathing motion model. The tissue locations predicted by the motion model in the 25 images were compared against the deformably registered tissue locations, allowing a model prediction error to be evaluated. A low-noise image was created by averaging the 25 images deformed to the first image geometry, reducing statistical image noise by a factor of 5. The motion model was used to deform the low-noise reference image to any user-selected breathing phase. A voxel-specific correction was applied to correct the Hounsfield units for lung parenchyma density as a function of lung air filling. Results: Images produced using the model at user-selected breathing phases did not suffer from sorting artifacts common to conventional 4D-CT protocols. The mean prediction error across all patients between the breathing motion model predictions and the measured lung tissue positions was determined to be 1.19 ± 0.37 mm. Conclusions: The proposed technique can be used as a clinical 4D-CT technique. It is robust in the presence of irregular breathing and allows the entire imaging dose to contribute to the resulting image quality, providing sorting artifact–free images at a patient dose similar to or less than current 4D-CT techniques.
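To make the voxel-specific motion model concrete, the sketch below fits per-voxel model parameters to registered displacements by least squares, assuming a displacement that is linear in the surrogate amplitude and its rate. The model form and all numbers are illustrative assumptions in the spirit of surrogate-driven breathing motion models, not the authors' exact implementation.

```python
import numpy as np

# Schematic voxel-specific breathing-motion-model fit (assumed form:
# displacement = alpha * surrogate amplitude + beta * surrogate rate).

rng = np.random.default_rng(0)
n_scans = 25
amp = rng.uniform(0.0, 1.0, n_scans)          # bellows amplitude at each scan
rate = np.gradient(amp)                        # crude surrogate "rate"

# Displacement of one voxel in each of the 25 registered images (mm), synthetic.
true_alpha, true_beta = 6.0, 2.5
disp = true_alpha * amp + true_beta * rate + rng.normal(0.0, 0.3, n_scans)

# Least-squares fit of the per-voxel model parameters.
design = np.column_stack([amp, rate])
(alpha, beta), *_ = np.linalg.lstsq(design, disp, rcond=None)

pred = design @ np.array([alpha, beta])
print(f"alpha = {alpha:.2f} mm, beta = {beta:.2f} mm")
print(f"model prediction error (RMS) = {np.sqrt(np.mean((pred - disp)**2)):.2f} mm")
```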
Measurement of bronchial blood flow in the sheep by video dilution technique.
Link, D P; Parsons, G H; Lantz, B M; Gunther, R A; Green, J F; Cross, C E
1985-01-01
Bronchial blood flow was determined in five adult anaesthetised sheep by the video dilution technique. This is a new fluoroscopic technique for measuring blood flow that requires only arterial catheterisation. Catheters were placed into the broncho-oesophageal artery and ascending aorta from the femoral arteries for contrast injections and subsequent videotape recording. The technique yields bronchial blood flow as a percentage of cardiac output. The average bronchial artery blood flow was 0.6% (SD 0.20%) of cardiac output. In one sheep histamine (90 micrograms) injected directly into the bronchial artery increased bronchial blood flow by a factor of 6 and histamine (90 micrograms) plus methacholine (4.5 micrograms) augmented flow by a factor of 7.5 while leaving cardiac output unchanged. This study confirms the high degree of reactivity of the bronchial circulation and demonstrates the feasibility of using the video dilution technique to investigate the determinants of total bronchial artery blood flow in a stable animal model avoiding thoracotomy. PMID:3883564
NASA Astrophysics Data System (ADS)
Kumar, Shashi; Khati, Unmesh G.; Chandola, Shreya; Agrawal, Shefali; Kushwaha, Satya P. S.
2017-08-01
The regulation of the carbon cycle is a critical ecosystem service provided by forests globally. It is, therefore, necessary to have robust techniques for speedy assessment of forest biophysical parameters at the landscape level. It is arduous and time-consuming to monitor the status of vast forest landscapes using traditional field methods. Remote sensing and GIS techniques are efficient tools that can monitor the health of forests regularly. Biomass estimation is a key parameter in the assessment of forest health. Polarimetric SAR (PolSAR) remote sensing has already shown its potential for forest biophysical parameter retrieval. The current research work focuses on the retrieval of forest biophysical parameters of tropical deciduous forest, using fully polarimetric spaceborne C-band data with Polarimetric SAR Interferometry (PolInSAR) techniques. The PolSAR-based Interferometric Water Cloud Model (IWCM) has been used to estimate aboveground biomass (AGB). Input parameters to the IWCM have been extracted from decomposition modeling of the SAR data as well as PolInSAR coherence estimation. Forest tree height retrieval utilized a PolInSAR coherence-based modeling approach. Two techniques for forest height estimation - Coherence Amplitude Inversion (CAI) and Three Stage Inversion (TSI) - are discussed, compared and validated. These techniques allow estimation of forest stand height and true ground topography. The accuracy of the estimated forest height is assessed using ground-based measurements. The PolInSAR-based forest height models showed weakness in discriminating forest vegetation, and as a result height values were also obtained over river channels and plain areas. Overestimation of forest height was also noticed at several patches of the forest. To overcome this problem, a coherence- and backscatter-based threshold technique is introduced for forest area identification and accurate height estimation in non-forested regions. IWCM-based modeling for forest AGB retrieval showed an R2 value of 0.5, an RMSE of 62.73 t ha-1 and a percent accuracy of 51%. TSI-based PolInSAR inversion modeling showed the most accurate result for forest height estimation. The correlation between the field-measured forest height and the tree height estimated using the TSI technique is 62%, with an average accuracy of 91.56% and an RMSE of 2.28 m. The study suggested that the PolInSAR coherence-based modeling approach has significant potential for retrieval of forest biophysical parameters.
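The coherence-based height inversion mentioned above can be illustrated with the simplest single-baseline case: a pure-volume sinc coherence model with no extinction and no ground contribution, inverted numerically for height. This is a didactic sketch, not the CAI or TSI implementations used in the study, and the vertical wavenumber value is an assumption.

```python
import numpy as np
from scipy.optimize import brentq

def sinc_coherence(hv, kz):
    """|volume coherence| for a uniform vertical profile with no extinction:
    |gamma| = |sin(kz*hv/2) / (kz*hv/2)|."""
    x = kz * hv / 2.0
    return float(np.abs(np.sinc(x / np.pi)))   # np.sinc(t) = sin(pi*t)/(pi*t)

def invert_height(gamma_obs, kz, h_max=40.0):
    """Solve sinc_coherence(hv) = gamma_obs for hv on (0, h_max]."""
    f = lambda hv: sinc_coherence(hv, kz) - gamma_obs
    return brentq(f, 1e-3, h_max)

kz = 0.15            # rad/m, assumed vertical wavenumber of the interferometer
for gamma in (0.9, 0.7, 0.5):
    print(f"|gamma| = {gamma:.1f} -> forest height ~ {invert_height(gamma, kz):.1f} m")
```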
Factors influencing suspended solids concentrations in activated sludge settling tanks.
Kim, Y; Pipes, W O
1999-05-31
A significant fraction of the total mass of sludge in an activated sludge process may be in the settling tanks if the sludge has a high sludge volume index (SVI) or when a hydraulic overload occurs during a rainstorm. Under those conditions, an accurate estimate of the amount of sludge in the settling tanks is needed in order to calculate the mean cell residence time or to determine the capacity of the settling tanks to store sludge. Determination of the amount of sludge in the settling tanks requires estimation of the average concentration of suspended solids in the layer of sludge (XSB) in the bottom of the settling tanks. A widely used reference recommends averaging the concentrations of suspended solids in the mixed liquor (X) and in the underflow (Xu) from the settling tanks (XSB = 0.5{X + Xu}). This method does not take into consideration other pertinent information available to an operator. This is a report of a field study which had the objective of developing a more accurate method for estimation of the XSB in the bottom of the settling tanks. By correlation analysis, it was found that only 44% of the variation in the measured XSB is related to the sum of X and Xu. XSB is also influenced by the SVI, the zone settling velocity at X and the overflow and underflow rates of the settling tanks. The method of averaging X and Xu tends to overestimate the XSB. A new empirical estimation technique for XSB was developed. The estimation technique uses dimensionless ratios; i.e., the ratio of XSB to Xu, the ratio of the overflow rate to the sum of the underflow rate and the initial settling velocity of the mixed liquor, and sludge compaction expressed as a ratio (dimensionless SVI). The empirical model is compared with the method of averaging X and Xu for the entire range of sludge depths in the settling tanks and for SVI values between 100 and 300 ml/g. Since the empirical model uses dimensionless ratios, the regression parameters are also dimensionless and the model can be readily adopted for other activated sludge processes. A simplified version of the empirical model provides an estimation of XSB as a function of X, Xu and SVI and can be used by an operator when flow conditions are normal. Copyright 1999 Elsevier Science B.V.
Cell population modelling of yeast glycolytic oscillations.
Henson, Michael A; Müller, Dirk; Reuss, Matthias
2002-01-01
We investigated a cell-population modelling technique in which the population is constructed from an ensemble of individual cell models. The average value or the number distribution of any intracellular property captured by the individual cell model can be calculated by simulation of a sufficient number of individual cells. The proposed method is applied to a simple model of yeast glycolytic oscillations where synchronization of the cell population is mediated by the action of an excreted metabolite. We show that smooth one-dimensional distributions can be obtained with ensembles comprising 1000 individual cells. Random variations in the state and/or structure of individual cells are shown to produce complex dynamic behaviours which cannot be adequately captured by small ensembles. PMID:12206713
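The ensemble idea can be sketched with a deliberately simple stand-in model: many individual "cells", each an independent ODE with randomly perturbed parameters and initial states, whose trajectories are averaged and whose end-state distribution is histogrammed. The toy oscillator below is an assumption for illustration and is not the yeast glycolysis model; the synchronizing extracellular coupling is also omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy ensemble illustration of the cell-population approach.
rng = np.random.default_rng(1)
n_cells = 1000
t_eval = np.linspace(0.0, 50.0, 501)

def cell_rhs(t, y, k):
    x, v = y
    return [v, -k * x - 0.05 * v]          # lightly damped oscillator

final_x = np.empty(n_cells)
mean_traj = np.zeros_like(t_eval)
for i in range(n_cells):
    k = rng.normal(1.0, 0.1)               # cell-to-cell parameter variability
    x0 = rng.normal(1.0, 0.2)              # cell-to-cell initial-state variability
    sol = solve_ivp(cell_rhs, (0.0, 50.0), [x0, 0.0], args=(k,), t_eval=t_eval)
    mean_traj += sol.y[0] / n_cells        # population-average trajectory
    final_x[i] = sol.y[0][-1]

# One-dimensional number distribution of the intracellular property at t = 50.
hist, edges = np.histogram(final_x, bins=30)
print("population mean at t = 50      :", round(float(final_x.mean()), 3))
print("ensemble-average trajectory max:", round(float(mean_traj.max()), 3))
```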
Fuzzy neural network technique for system state forecasting.
Li, Dezhi; Wang, Wilson; Ismail, Fathy
2013-10-01
In many system state forecasting applications, the prediction is performed based on multiple datasets, each corresponding to a distinct system condition. The traditional methods dealing with multiple datasets (e.g., vector autoregressive moving average models and neural networks) have some shortcomings, such as limited modeling capability and opaque reasoning operations. To tackle these problems, a novel fuzzy neural network (FNN) is proposed in this paper to effectively extract information from multiple datasets, so as to improve forecasting accuracy. The proposed predictor consists of both autoregressive (AR) node modeling and nonlinear node modeling; AR models/nodes are used to capture the linear correlation of the datasets, and the nonlinear correlation of the datasets is modeled with nonlinear neuron nodes. A novel particle swarm technique [i.e., the Laplace particle swarm (LPS) method] is proposed to facilitate parameter estimation of the predictor and improve modeling accuracy. The effectiveness of the developed FNN predictor and the associated LPS method is verified by a series of tests related to Mackey-Glass data forecast, exchange rate data prediction, and gear system prognosis. Test results show that the developed FNN predictor and the LPS method can capture the dynamics of multiple datasets effectively and track system characteristics accurately.
Assimilation of pseudo-tree-ring-width observations into an atmospheric general circulation model
NASA Astrophysics Data System (ADS)
Acevedo, Walter; Fallah, Bijan; Reich, Sebastian; Cubasch, Ulrich
2017-05-01
Paleoclimate data assimilation (DA) is a promising technique to systematically combine the information from climate model simulations and proxy records. Here, we investigate the assimilation of tree-ring-width (TRW) chronologies into an atmospheric global climate model using ensemble Kalman filter (EnKF) techniques and a process-based tree-growth forward model as an observation operator. Our results, within a perfect-model experiment setting, indicate that the "online DA" approach did not outperform the "off-line" one, despite its considerable additional implementation complexity. On the other hand, it was observed that the nonlinear response of tree growth to surface temperature and soil moisture does deteriorate the operation of the time-averaged EnKF methodology. Moreover, for the first time we show that this skill loss appears significantly sensitive to the structure of the growth rate function, used to represent the principle of limiting factors (PLF) within the forward model. In general, our experiments showed that the error reduction achieved by assimilating pseudo-TRW chronologies is modulated by the magnitude of the yearly internal variability in the model. This result might help the dendrochronology community to optimize their sampling efforts.
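For readers unfamiliar with the assimilation step, the sketch below implements a basic stochastic (perturbed-observation) ensemble Kalman filter update for a single pseudo tree-ring-width observation. The observation operator is a deliberately crude stand-in (growth proportional to temperature at one grid cell), not the process-based tree-growth forward model used in the study, and all numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n_ens, n_state = 30, 50

# Prior ensemble of annually averaged temperature fields (synthetic).
X = rng.normal(15.0, 1.0, (n_state, n_ens))

def obs_operator(x):
    # Pseudo-proxy: TRW anomaly proportional to temperature at one grid cell.
    return 0.5 * x[10]

y_obs = 8.2          # pseudo-TRW observation
r_obs = 0.2 ** 2     # observation-error variance

# Ensemble statistics.
Hx = np.array([obs_operator(X[:, m]) for m in range(n_ens)])
x_mean, hx_mean = X.mean(axis=1), Hx.mean()
X_pert, Hx_pert = X - x_mean[:, None], Hx - hx_mean

# Kalman gain from ensemble covariances.
cov_xh = X_pert @ Hx_pert / (n_ens - 1)
var_hh = Hx_pert @ Hx_pert / (n_ens - 1)
K = cov_xh / (var_hh + r_obs)

# Stochastic (perturbed-observation) update of every member.
for m in range(n_ens):
    innov = (y_obs + rng.normal(0.0, np.sqrt(r_obs))) - Hx[m]
    X[:, m] += K * innov

print("analysis mean at the proxy cell:", round(float(X[10].mean()), 2))
```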
Du, Fengzhou; Li, Binghang; Yin, Ningbei; Cao, Yilin; Wang, Yongqian
2017-03-01
Knowing the volume of a graft is essential in repairing alveolar bone defects. This study investigates two advanced preoperative volume measurement methods: three-dimensional (3D) printing and computer-aided engineering (CAE). Ten unilateral alveolar cleft patients were enrolled in this study. Their computed tomographic data were sent to 3D printing and CAE software. A simulated graft was used on the 3D-printed model, and the graft volume was measured by water displacement. The volume calculated by CAE software used a mirror-reverse technique. The authors compared the actual volumes of the simulated grafts with the CAE software-derived volumes. The average volume of the simulated bone grafts by 3D-printed models was 1.52 mL, higher than the mean volume of 1.47 mL calculated by CAE software. The difference between the 2 volumes ranged from -0.18 to 0.42 mL. The paired Student t test showed no statistically significant difference between the volumes derived from the 2 methods. This study demonstrated that the mirror-reversed technique by CAE software is as accurate as the simulated operation on 3D-printed models in unilateral alveolar cleft patients. These findings further validate the use of 3D printing and CAE techniques in alveolar defect repair.
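The statistical comparison described above is a paired Student t test on per-patient volume pairs. The sketch below shows the calculation on hypothetical paired data chosen only so the group means match the reported 1.52 mL and 1.47 mL; the individual values and the resulting difference range are not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired volume measurements (mL) for the two methods.
vol_3d_print = np.array([1.61, 1.40, 1.72, 1.35, 1.58, 1.49, 1.66, 1.43, 1.55, 1.41])
vol_cae      = np.array([1.55, 1.38, 1.60, 1.37, 1.52, 1.45, 1.58, 1.46, 1.50, 1.29])

diff = vol_3d_print - vol_cae
t_stat, p_value = stats.ttest_rel(vol_3d_print, vol_cae)

print(f"mean 3D-print volume: {vol_3d_print.mean():.2f} mL")
print(f"mean CAE volume     : {vol_cae.mean():.2f} mL")
print(f"difference range    : {diff.min():.2f} to {diff.max():.2f} mL")
print(f"paired t-test       : t = {t_stat:.2f}, p = {p_value:.3f}")
```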
A non-asymptotic model of dynamics of honeycomb lattice-type plates
NASA Astrophysics Data System (ADS)
Cielecka, Iwona; Jędrysiak, Jarosław
2006-09-01
Lightweight structures, consisting of special composite material systems like sandwich plates, are often used in aerospace or naval engineering. In composite sandwich plates, the intermediate core is usually made of cellular structures, e.g. honeycomb micro-frames, reinforcing the static and dynamic properties of these plates. Here, a new non-asymptotic continuum model of honeycomb lattice-type plates is presented and applied to the analysis of dynamic problems. The general formulation of the model for periodic lattice-type plates of an arbitrary lay-out was presented by Cielecka and Jędrysiak [Journal of Theoretical and Applied Mechanics 40 (2002) 23-46]. This model, partly based on the tolerance averaging method developed for periodic composite solids by Woźniak and Wierzbicki [Averaging techniques in thermomechanics of composite solids, Wydawnictwo Politechniki Częstochowskiej, Częstochowa, 2000], takes into account the effect of the microstructure length size on the dynamic plate behaviour. The method leads to model equations describing this effect for honeycomb lattice-type plates. These equations have a form similar to the equations for isotropic cases. The dynamic analysis of such plates exemplifies this effect, which is significant and cannot be neglected. The physical correctness of the obtained results is also discussed.
NASA Astrophysics Data System (ADS)
Khaki, M.; Forootan, E.; Kuhn, M.; Awange, J.; van Dijk, A. I. J. M.; Schumacher, M.; Sharifi, M. A.
2018-04-01
Groundwater depletion, due to both unsustainable water use and a decrease in precipitation, has been reported in many parts of Iran. In order to analyze these changes during the recent decade, in this study we assimilate Terrestrial Water Storage (TWS) data from the Gravity Recovery And Climate Experiment (GRACE) into the World-Wide Water Resources Assessment (W3RA) model. This assimilation improves model-derived water storage simulations by introducing missing trends and correcting the amplitude and phase of seasonal water storage variations. The Ensemble Square-Root Filter (EnSRF) technique is applied, which showed stable performance in propagating errors during the assimilation period (2002-2012). Our focus is on sub-surface water storage changes, including groundwater and soil moisture variations, within six major drainage divisions covering the whole of Iran: the eastern part (East), Caspian Sea, Centre, Sarakhs, Persian Gulf and Oman Sea, and Lake Urmia. Results indicate an average groundwater storage change of -8.9 mm/year within Iran during the period 2002 to 2012. A similar decrease is also observed in soil moisture storage, especially after 2005. We further apply the canonical correlation analysis (CCA) technique to relate sub-surface water storage changes to climate (e.g., precipitation) and anthropogenic (e.g., farming) impacts. Results indicate an average correlation of 0.81 between rainfall and groundwater variations and also a large impact of anthropogenic activities (mainly irrigation) on Iran's water storage depletion.
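The canonical correlation step referred to above can be illustrated with scikit-learn's CCA on two multivariate time series. The data below are synthetic stand-ins for the rainfall and storage anomaly series (the GRACE/W3RA outputs themselves are not reproduced), so the resulting correlation is not the study's 0.81.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Illustrative canonical correlation analysis between monthly rainfall and
# sub-surface water storage anomalies for several drainage divisions.
rng = np.random.default_rng(3)
n_months, n_basins = 132, 6                     # 2002-2012, six divisions
rain = rng.normal(0.0, 1.0, (n_months, n_basins))
storage = 0.8 * rain + rng.normal(0.0, 0.5, (n_months, n_basins))  # coupled + noise

cca = CCA(n_components=1)
u, v = cca.fit_transform(rain, storage)
r = np.corrcoef(u[:, 0], v[:, 0])[0, 1]
print(f"leading canonical correlation ~ {r:.2f}")
```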
Tone and Broadband Noise Separation from Acoustic Data of a Scale-Model Counter-Rotating Open Rotor
NASA Technical Reports Server (NTRS)
Sree, David; Stephens, David B.
2014-01-01
Renewed interest in contra-rotating open rotor technology for aircraft propulsion application has prompted the development of advanced diagnostic tools for better design and improved acoustical performance. In particular, the determination of tonal and broadband components of open rotor acoustic spectra is essential for properly assessing the noise control parameters and also for validating the open rotor noise simulation codes. The technique of phase averaging has been employed to separate the tone and broadband components from a single rotor, but this method does not work for the two-shaft contra-rotating open rotor. A new signal processing technique was recently developed to process the contra-rotating open rotor acoustic data. The technique was first tested using acoustic data taken of a hobby aircraft open rotor propeller, and reported previously. The intent of the present work is to verify and validate the applicability of the new technique to a realistic one-fifth scale open rotor model which has 12 forward and 10 aft contra-rotating blades operating at realistic forward flight Mach numbers and tip speeds. The results and discussions of that study are presented in this paper.
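As background to the separation problem, the classical single-rotor approach the abstract refers to is phase (synchronous) averaging: blocks of the pressure signal locked to shaft rotation are averaged so that shaft-locked tones survive while broadband noise averages out. The sketch below demonstrates that baseline on a synthetic signal with assumed sampling rate, shaft speed and blade count; it is not the new two-shaft separation technique developed in the paper.

```python
import numpy as np

fs = 51200                     # Hz, assumed sampling rate
shaft_hz = 100.0               # assumed shaft rotation frequency
samples_per_rev = int(fs / shaft_hz)
n_revs = 400

t = np.arange(n_revs * samples_per_rev) / fs
rng = np.random.default_rng(4)
tone = 0.8 * np.sin(2 * np.pi * 12 * shaft_hz * t)      # blade-passing tone (12 blades)
broadband = rng.normal(0.0, 0.5, t.size)
p = tone + broadband                                    # measured pressure signal

# Average blocks locked to shaft phase: tones (locked to rotation) survive,
# broadband noise averages toward zero.
blocks = p.reshape(n_revs, samples_per_rev)
tone_est = blocks.mean(axis=0)

# Subtract the periodic estimate from every revolution to get the broadband part.
broadband_est = (blocks - tone_est).ravel()

print(f"tone power (true/est)      : {np.mean(tone**2):.3f} / "
      f"{np.mean(np.tile(tone_est, n_revs)**2):.3f}")
print(f"broadband power (true/est) : {np.mean(broadband**2):.3f} / "
      f"{np.mean(broadband_est**2):.3f}")
```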
Multiple regression technique for Pth degree polynomials with and without linear cross products
NASA Technical Reports Server (NTRS)
Davis, J. W.
1973-01-01
A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated such that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products, which evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered, and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent error are evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique, showing the output formats and typical plots comparing computer results to each set of input data.
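A minimal re-creation of the idea (not the original programs) is ordinary least squares on a design matrix containing pure powers of each variable up to degree P, optionally augmented with linear cross products, followed by the summary statistics the abstract lists. The model, data and coefficient values below are made up for illustration.

```python
import numpy as np
from itertools import combinations

def design_matrix(X, degree, cross_products=True):
    cols = [np.ones(len(X))]                          # intercept
    for j in range(X.shape[1]):                       # pure powers x_j ... x_j^P
        for p in range(1, degree + 1):
            cols.append(X[:, j] ** p)
    if cross_products:                                # linear cross products x_i * x_j
        for i, j in combinations(range(X.shape[1]), 2):
            cols.append(X[:, i] * X[:, j])
    return np.column_stack(cols)

rng = np.random.default_rng(5)
X = rng.uniform(-1.0, 1.0, (200, 2))
y = 5.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] ** 3 + 1.5 * X[:, 0] * X[:, 1] \
    + rng.normal(0.0, 0.05, 200)

A = design_matrix(X, degree=3, cross_products=True)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef

# Summary statistics of the kind listed in the abstract.
n, p = len(y), A.shape[1] - 1
ss_tot, ss_res = np.sum((y - y.mean()) ** 2), np.sum(resid ** 2)
f_stat = ((ss_tot - ss_res) / p) / (ss_res / (n - p - 1))
print("standard deviation of residuals:", round(float(np.std(resid)), 4))
print("F statistic                    :", round(float(f_stat), 1))
print("max absolute percent error     :", round(float(np.max(np.abs(resid / y))) * 100, 3))
print("avg absolute percent error     :", round(float(np.mean(np.abs(resid / y))) * 100, 3))
```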
Building generic anatomical models using virtual model cutting and iterative registration.
Xiao, Mei; Soh, Jung; Meruvia-Pastor, Oscar; Schmidt, Eric; Hallgrímsson, Benedikt; Sensen, Christoph W
2010-02-08
Using 3D generic models to statistically analyze trends in biological structure changes is an important tool in morphometrics research. Therefore, 3D generic models built for a range of populations are in high demand. However, due to the complexity of biological structures and the limited views of them that medical images can offer, it is still an exceptionally difficult task to quickly and accurately create 3D generic models (a model is a 3D graphical representation of a biological structure) based on medical image stacks (a stack is an ordered collection of 2D images). We show that the creation of a generic model that captures spatial information exploitable in statistical analyses is facilitated by coupling our generalized segmentation method to existing automatic image registration algorithms. The method of creating generic 3D models consists of the following processing steps: (i) scanning subjects to obtain image stacks; (ii) creating individual 3D models from the stacks; (iii) interactively extracting sub-volumes by cutting each model to generate the sub-model of interest; (iv) creating image stacks that contain only the information pertaining to the sub-models; (v) iteratively registering the corresponding new 2D image stacks; (vi) averaging the newly created sub-models based on intensity to produce the generic model from all the individual sub-models. After several registration procedures are applied to the image stacks, we can create averaged image stacks with sharp boundaries. The averaged 3D model created from those image stacks is very close to the average representation of the population. The image registration time varies depending on the image size and the desired accuracy of the registration. Both volumetric data and a surface model for the generic 3D model are created at the final step. Our method is flexible and easy to use, allowing users to create models from image stacks and retrieve sub-regions from them with ease. The Java-based implementation allows our method to be used on various visualization systems including personal computers, workstations, computers equipped with stereo displays, and even virtual reality rooms such as the CAVE Automated Virtual Environment. The technique allows biologists to build generic 3D models of their interest quickly and accurately.
Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas
2014-03-01
GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster-Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty-sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights.
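The Monte Carlo phase of the methodology can be sketched as repeatedly perturbing the criteria weights of a weighted linear combination and summarizing the per-cell spread of the resulting susceptibility scores. The criteria values, base weights and perturbation magnitude below are synthetic assumptions, not the study's AHP/OWA inputs.

```python
import numpy as np

rng = np.random.default_rng(6)
n_cells, n_criteria = 10_000, 5                 # raster cells x criteria (e.g. slope, ...)
criteria = rng.uniform(0.0, 1.0, (n_cells, n_criteria))
base_weights = np.array([0.35, 0.25, 0.20, 0.12, 0.08])    # e.g. AHP-derived (assumed)

n_runs = 500
scores = np.empty((n_runs, n_cells))
for k in range(n_runs):
    w = rng.normal(base_weights, 0.05 * base_weights)       # perturb weights
    w = np.clip(w, 0.0, None)
    w /= w.sum()                                            # keep weights summing to 1
    scores[k] = criteria @ w                                 # weighted linear combination

susceptibility_std = scores.std(axis=0)          # per-cell uncertainty of the map
print("average per-cell std of susceptibility:", round(float(susceptibility_std.mean()), 4))
```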
Transition zone structure beneath Ethiopia from 3-D fast marching pseudo-migration stacking
NASA Astrophysics Data System (ADS)
Benoit, M. H.; Lopez, A.; Levin, V.
2008-12-01
Several models for the origin of the Afar hotspot have been put forth over the last decade, but much ambiguity remains as to whether the hotspot tectonism found there is due to a shallow or deeply seated feature. Additionally, there has been much debate as to whether the hotspot owes its existence to a 'classic' mantle plume feature or if it is part of the African Superplume complex. To further understand the origin of the hotspot, we employ a new receiver function stacking method that incorporates a fast-marching three-dimensional ray tracing algorithm to improve upon existing studies of the mantle transition zone structure. Using teleseismic data from the Ethiopia Broadband Seismic Experiment and the EAGLE (Ethiopia Afar Grand Lithospheric Experiment) experiment, we stack receiver functions using a three-dimensional pseudo-migration technique to examine topography on the 410 and 660 km discontinuities. Previous methods of receiver function pseudo-migration incorporated ray tracing methods that were not able to trace rays through highly complicated 3-D structure, or the ray tracing techniques only produced 3-D time perturbations associated with 1-D rays in a 3-D velocity medium. These previous techniques yielded confusing and incomplete results when applied to the exceedingly complicated mantle structure beneath Ethiopia. Indeed, comparisons of the 1-D versus 3-D ray tracing techniques show that the 1-D technique mislocated structure laterally in the mantle by over 100 km. Preliminary results using our new technique show a shallower than average 410 km discontinuity and a deeper than average 660 km discontinuity over much of the region, suggesting that the hotspot has a deep-seated origin.
Chemical equilibrium modeling of organic acids, pH, aluminum, and iron in Swedish surface waters.
Sjöstedt, Carin S; Gustafsson, Jon Petter; Köhler, Stephan J
2010-11-15
A consistent chemical equilibrium model that calculates pH from charge balance constraints and aluminum and iron speciation in the presence of natural organic matter is presented. The model requires input data for total aluminum, iron, organic carbon, fluoride, sulfate, and charge balance ANC. The model is calibrated to pH measurements (n = 322) by adjusting the fraction of active organic matter only, which results in an error of pH prediction on average below 0.2 pH units. The small systematic discrepancy between the analytical results for the monomeric aluminum fractionation and the model results is corrected for separately for two different fractionation techniques (n = 499) and validated on a large number (n = 3419) of geographically widely spread samples all over Sweden. The resulting average error for inorganic monomeric aluminum is around 1 µM. In its present form the model is the first internally consistent modeling approach for Sweden and may now be used as a tool for environmental quality management. Soil gibbsite with a log *Ks of 8.29 at 25°C together with a pH dependent loading function that uses molar Al/C ratios describes the amount of aluminum in solution in the presence of organic matter if the pH is roughly above 6.0.
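The gibbsite equilibrium quoted at the end of the abstract translates directly into an aluminum activity as a function of pH via the dissolution reaction Al(OH)3 + 3H+ = Al3+ + 3H2O, so that log{Al3+} = log*Ks - 3 pH. The sketch below evaluates this with the reported log*Ks = 8.29; ionic strength corrections and the organic complexation that the full model handles are deliberately ignored.

```python
# Aluminum activity in equilibrium with soil gibbsite (simplified, no activity
# corrections, no organic complexation): log{Al3+} = log*Ks - 3 * pH.
log_ks = 8.29
for ph in (4.5, 5.0, 5.5, 6.0, 6.5):
    log_al = log_ks - 3.0 * ph
    print(f"pH {ph:.1f}: {{Al3+}} ~ 10^{log_al:6.2f} M = {10**log_al:.2e} M")
```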
Computer simulation of surface and film processes
NASA Technical Reports Server (NTRS)
Tiller, W. A.; Halicioglu, M. T.
1984-01-01
All the investigations which were performed employed in one way or another a computer simulation technique based on atomistic level considerations. In general, three types of simulation methods were used for modeling systems with discrete particles that interact via well defined potential functions: molecular dynamics (a general method for solving the classical equations of motion of a model system); Monte Carlo (the use of a Markov chain ensemble averaging technique to model equilibrium properties of a system); and molecular statics (which provides properties of a system at T = 0 K). The effects of three-body forces on the vibrational frequencies of a triatomic cluster were investigated. The multilayer relaxation phenomena for low index planes of an fcc crystal were analyzed also as a function of the three-body interactions. Various surface properties for Si and SiC systems were calculated. Results obtained from static simulation calculations for slip formation were presented. The more elaborate molecular dynamics calculations on the propagation of cracks in two-dimensional systems were outlined.
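Of the three simulation methods listed, the Monte Carlo one is easy to illustrate compactly: a Metropolis Markov chain sampling a Boltzmann distribution and averaging a property over the chain. The sketch below uses a one-dimensional harmonic potential purely for illustration, where the exact answer is known from equipartition; it is not the Si/SiC potentials used in the actual work.

```python
import numpy as np

rng = np.random.default_rng(7)
kT = 0.5                              # reduced temperature
energy = lambda x: 0.5 * x ** 2       # toy potential U(x) = x^2 / 2

x = 0.0
samples = []
for step in range(200_000):
    x_new = x + rng.uniform(-0.5, 0.5)
    # Metropolis acceptance: accept with probability min(1, exp(-dE/kT)).
    if rng.random() < np.exp(-(energy(x_new) - energy(x)) / kT):
        x = x_new
    if step > 20_000:                 # discard equilibration
        samples.append(x * x)

print("<x^2> from MC :", round(float(np.mean(samples)), 3))
print("<x^2> exact   :", kT)          # equipartition: <x^2> = kT for U = x^2/2
```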
NASA Astrophysics Data System (ADS)
Stumpf, Felix; Goebes, Philipp; Schmidt, Karsten; Schindewolf, Marcus; Schönbrodt-Stitt, Sarah; Wadoux, Alexandre; Xiang, Wei; Scholten, Thomas
2017-04-01
Soil erosion by water outlines a major threat to the Three Gorges Reservoir Area in China. A detailed assessment of soil conservation measures requires a tool that spatially identifies sediment reallocations due to rainfall-runoff events in catchments. We applied EROSION 3D as a physically based soil erosion and deposition model in a small mountainous catchment. Generally, we aim to provide a methodological frame that facilitates the model parametrization in a data-scarce environment and to identify sediment sources and deposits. We used digital soil mapping techniques to generate spatially distributed soil property information for parametrization. For model calibration and validation, we continuously monitored the catchment for rainfall, runoff and sediment yield for a period of 12 months. The model performed well for large events (sediment yield > 1 Mg) with an averaged individual model error of 7.5%, while small events showed an average error of 36.2%. We focused on the large events to evaluate reallocation patterns. Erosion occurred in 11.1% of the study area with an average erosion rate of 49.9 Mg ha-1. Erosion mainly occurred on crop rotation areas, with a spatial proportion of 69.2% for 'corn-rapeseed' and 69.1% for 'potato-cabbage'. Deposition occurred on 11.0%. Forested areas (9.7%), infrastructure (41.0%), cropland (corn-rapeseed: 13.6%, potato-cabbage: 11.3%) and grassland (18.4%) were affected by deposition. Because the vast majority of annual sediment yields (80.3%) were associated with a few large erosive events, the modelling approach provides a useful tool to spatially assess soil erosion control and conservation measures.
Lee, Seung-Jong; Kim, Euiseong
2012-08-01
The maintenance of healthy periodontal ligament cells on the root surface of the donor tooth and intimate surface contact between the donor tooth and the recipient bone are the key factors for successful tooth transplantation. In order to achieve these purposes, a duplicated donor tooth model can be utilized to reduce the extra-oral time using the computer-aided rapid prototyping (CARP) technique. Briefly, a three-dimensional digital imaging and communication in medicine (DICOM) image with the real dimensions of the donor tooth was obtained from computed tomography (CT), and a life-sized resin tooth model was fabricated. Dimensional errors between the real tooth, the 3D CT image model and the CARP model were calculated, and extra-oral time was recorded during the autotransplantation of the teeth. The average extra-oral time was 7 min 25 sec, with a range from immediate to 25 min, in cases in which extra-oral root canal treatments were not performed, while it was 9 min 15 sec when extra-oral root canal treatments were performed. The average radiographic distance between the root surface and the alveolar bone was 1.17 mm and 1.35 mm at the mesial cervix and apex; it was 0.98 mm and 1.26 mm at the distal cervix and apex. When the dimensional errors between the real tooth, 3D CT image model and CARP model were measured in cadavers, the average absolute error was 0.291 mm between real teeth and the CARP model. These data indicate that CARP may be of value in minimizing the extra-oral time and the gap between the donor tooth and the recipient alveolar bone in tooth transplantation.
A comparative study of surface waves inversion techniques at strong motion recording sites in Greece
Pelekis, Panagiotis C.; Savvaidis, Alexandros; Kayen, Robert E.; Vlachakis, Vasileios S.; Athanasopoulos, George A.
2015-01-01
The surface wave method was used for the estimation of the Vs versus depth profile at 10 strong motion stations in Greece. The dispersion data were obtained by the SASW method, utilizing a pair of electromechanical harmonic-wave sources (shakers) or a random source (drop weight). In this study, three inversion techniques were used: a) a recently proposed Simplified Inversion Method (SIM), b) an inversion technique based on a neighborhood algorithm (NA), which allows the incorporation of a priori information regarding the subsurface structure parameters, and c) Occam's inversion algorithm. For each site a constant value of Poisson's ratio was assumed (ν = 0.4), since the objective of the current study is the comparison of the three inversion schemes regardless of the uncertainties resulting from the lack of geotechnical data. A penalty function was introduced to quantify the deviations of the derived Vs profiles. The Vs models are compared in terms of Vs(z), Vs30 and EC8 soil category, in order to show the insignificance of the existing variations. The comparison results showed that the average variation of the SIM profiles is 9% and 4.9% compared with the NA and Occam's profiles, respectively, whilst the average difference of Vs30 values obtained from SIM is 7.4% and 5.0% compared with NA and Occam's.
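Since the profiles are compared through Vs30, it may help to recall how that quantity is computed from a layered Vs profile: the time-averaged velocity of the top 30 m, Vs30 = 30 / Σ(h_i / Vs_i). The layered profile below is a hypothetical example, not one of the 10 station profiles.

```python
def vs30(thicknesses, velocities, depth=30.0):
    """Time-averaged shear-wave velocity of the upper `depth` metres."""
    remaining, travel_time = depth, 0.0
    for h, v in zip(thicknesses, velocities):
        use = min(h, remaining)
        travel_time += use / v
        remaining -= use
        if remaining <= 0.0:
            break
    if remaining > 0.0:                     # extend the last layer if the profile < 30 m
        travel_time += remaining / velocities[-1]
    return depth / travel_time

h = [3.0, 7.0, 12.0, 20.0]                  # layer thicknesses, m (hypothetical)
vs = [180.0, 260.0, 420.0, 760.0]           # layer velocities, m/s (hypothetical)
print(f"Vs30 ~ {vs30(h, vs):.0f} m/s")
```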
ERIC Educational Resources Information Center
Pangaribuan, Tagor; Manik, Sondang
2018-01-01
This research was held at SMA HKBP 1 Tarutung, North Sumatra, on the test results of class XI² and XI² students after they received treatment in the teaching of writing recount texts using the buzz group and clustering techniques. The average score (X) was 67.7, and for the buzz group the average score (X) was 77.2 and in…
Improving national-scale invasion maps: Tamarisk in the western United States
Jarnevich, C.S.; Evangelista, P.; Stohlgren, T.J.; Morisette, J.
2011-01-01
New invasions, better field data, and novel spatial-modeling techniques often drive the need to revisit previous maps and models of invasive species. Such is the case with the at least 10 species of Tamarix, which are invading riparian systems in the western United States and expanding their range throughout North America. In 2006, we developed a National Tamarisk Map by using a compilation of presence and absence locations with remotely sensed data and statistical modeling techniques. Since the publication of that work, our database of Tamarix distributions has grown significantly. Using the updated database of species occurrence, new predictor variables, and the maximum entropy (Maxent) model, we have revised our potential Tamarix distribution map for the western United States. Distance-to-water was the strongest predictor in the model (58.1%), while mean temperature of the warmest quarter was the second best predictor (18.4%). Model validation, averaged from 25 model iterations, indicated that our analysis had strong predictive performance (AUC = 0.93) and that the extent of Tamarix distributions is much greater than previously thought. The southwestern United States had the greatest suitable habitat, and this result differed from the 2006 model. Our work highlights the utility of iterative modeling for invasive species habitat modeling as new information becomes available. © 2011.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mestrovic, Ante; Clark, Brenda G.; Department of Medical Physics, British Columbia Cancer Agency, Vancouver, British Columbia
2005-11-01
Purpose: To develop a method of predicting the values of dose distribution parameters of different radiosurgery techniques for treatment of arteriovenous malformation (AVM) based on internal geometric parameters. Methods and Materials: For each of 18 previously treated AVM patients, four treatment plans were created: circular collimator arcs, dynamic conformal arcs, fixed conformal fields, and intensity-modulated radiosurgery. An algorithm was developed to characterize the target and critical structure shape complexity and the position of the critical structures with respect to the target. Multiple regression was employed to establish the correlation between the internal geometric parameters and the dose distribution for different treatment techniques. The results from the model were applied to predict the dosimetric outcomes of different radiosurgery techniques and select the optimal radiosurgery technique for a number of AVM patients. Results: Several internal geometric parameters showing statistically significant correlation (p < 0.05) with the treatment planning results for each technique were identified. The target volume and the average minimum distance between the target and the critical structures were the most effective predictors for normal tissue dose distribution. The structure overlap volume with the target and the mean distance between the target and the critical structure were the most effective predictors for critical structure dose distribution. The predicted values of dose distribution parameters of different radiosurgery techniques were in close agreement with the original data. Conclusions: A statistical model has been described that successfully predicts the values of dose distribution parameters of different radiosurgery techniques and may be used to predetermine the optimal technique on a patient-to-patient basis.
Generation of linear dynamic models from a digital nonlinear simulation
NASA Technical Reports Server (NTRS)
Daniele, C. J.; Krosel, S. M.
1979-01-01
The results and methodology used to derive linear models from a nonlinear simulation are presented. It is shown that averaging positive and negative perturbations of the state variables reduces numerical errors in the finite-difference partial-derivative approximations and, for the control inputs, better approximates the system response in both directions about the operating point. Both explicit and implicit formulations are addressed. Linear models are derived for the F100 engine, and comparisons of transients are made with the nonlinear simulation. The problem of startup transients in the nonlinear simulation in making these comparisons is addressed. Also, reduction of the linear models is investigated using the modal and normal techniques. Reduced-order models of the F100 are derived and compared with the full-state models.
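The averaged positive-and-negative-perturbation idea is essentially a central-difference Jacobian evaluation of the nonlinear state equations about the operating point. The sketch below shows the generic procedure on a toy two-state system; the F100 simulation itself is not reproduced, and the perturbation sizes are assumptions.

```python
import numpy as np

def linearize(f, x0, u0, dx=1e-4, du=1e-4):
    """Derive A = df/dx and B = df/du about an operating point by averaging
    positive and negative perturbations (central differences). f(x, u) returns
    the state derivatives of the nonlinear simulation."""
    n, m = len(x0), len(u0)
    A, B = np.zeros((n, n)), np.zeros((n, m))
    for j in range(n):
        e = np.zeros(n); e[j] = dx
        A[:, j] = (f(x0 + e, u0) - f(x0 - e, u0)) / (2.0 * dx)
    for j in range(m):
        e = np.zeros(m); e[j] = du
        B[:, j] = (f(x0, u0 + e) - f(x0, u0 - e)) / (2.0 * du)
    return A, B

# Toy nonlinear system standing in for the engine simulation.
def f(x, u):
    return np.array([-x[0] ** 2 + u[0],
                     x[0] * x[1] - 2.0 * x[1] + 0.5 * u[1]])

x_op = np.array([1.0, 0.5])
u_op = np.array([1.0, 2.0])
A, B = linearize(f, x_op, u_op)
print("A =\n", A.round(4))
print("B =\n", B.round(4))
```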
2014-01-01
Background: Today it is unclear which technique for delivery of an additional boost after whole breast radiotherapy for patients treated with breast-conserving surgery should be state of the art. We present a dosimetric comparison of different non-invasive treatment techniques for additional boost delivery. Methods: For 10 different tumor bed localizations, 7 different non-invasive treatment plans were made. Dosimetric comparison of PTV-coverage and dose to organs at risk was performed. Results: The Vero system achieved excellent PTV-coverage and at the same time could minimize the dose to the organs at risk, with an average near-maximum dose (D2) to the heart of 0.9 Gy and an average volume of the ipsilateral lung receiving 5 Gy (V5) of 1.5%. The TomoTherapy modalities delivered an average D2 to the heart of 0.9 Gy for the rotational and of 2.3 Gy for the static modality, and an average V5 to the ipsilateral lung of 7.3% and 2.9%, respectively. A rotational technique offers adequate conformity at the cost of more low-dose spread and a larger build-up area. In most cases a 2-field technique showed acceptable PTV-coverage, but poor conformity. Electrons often delivered worse PTV-coverage than photons, with the planning requirements achieved in only 2 patients and with an average D2 to the heart of 2.8 Gy and an average V5 to the ipsilateral lung of 5.8%. Conclusions: We present recommendations which can be used as guidelines for the selection of the best individualized treatment. PMID:24467916
NASA Astrophysics Data System (ADS)
Chaynikov, S.; Porta, G.; Riva, M.; Guadagnini, A.
2012-04-01
We focus on a theoretical analysis of nonreactive solute transport in porous media through the volume averaging technique. Darcy-scale transport models based on continuum formulations typically include large scale dispersive processes which are embedded in a pore-scale advection diffusion equation through a Fickian analogy. This formulation has been extensively questioned in the literature due to its inability to depict observed solute breakthrough curves in diverse settings, ranging from the laboratory to the field scales. The heterogeneity of the pore-scale velocity field is one of the key sources of uncertainties giving rise to anomalous (non-Fickian) dispersion in macro-scale porous systems. Some of the models which are employed to interpret observed non-Fickian solute behavior make use of a continuum formulation of the porous system which assumes a two-region description and includes a bimodal velocity distribution. A first class of these models comprises the so-called ''mobile-immobile'' conceptualization, where convective and dispersive transport mechanisms are considered to dominate within a high velocity region (mobile zone), while convective effects are neglected in a low velocity region (immobile zone). The mass exchange between these two regions is assumed to be controlled by a diffusive process and is macroscopically described by a first-order kinetic. An extension of these ideas is the two equation ''mobile-mobile'' model, where both transport mechanisms are taken into account in each region and a first-order mass exchange between regions is employed. Here, we provide an analytical derivation of two region "mobile-mobile" meso-scale models through a rigorous upscaling of the pore-scale advection diffusion equation. Among the available upscaling methodologies, we employ the Volume Averaging technique. In this approach, the heterogeneous porous medium is supposed to be pseudo-periodic, and can be represented through a (spatially) periodic unit cell. Consistently with the two-region model working hypotheses, we subdivide the pore space into two volumes, which we select according to the features of the local micro-scale velocity field. Assuming separation of the scales, the mathematical development associated with the averaging method in the two volumes leads to a generalized two-equation model. The final (upscaled) formulation includes the standard first order mass exchange term together with additional terms, which we discuss. Our developments allow to identify the assumptions which are usually implicitly embedded in the usual adoption of a two region mobile-mobile model. All macro-scale properties introduced in this model can be determined explicitly from the pore-scale geometry and hydrodynamics through the solution of a set of closure equations. We pursue here an unsteady closure of the problem, leading to the occurrence of nonlocal (in time) terms in the upscaled system of equations. We provide the solution of the closure problems for a simple application documenting the time dependent and the asymptotic behavior of the system.
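For reference, the classical two-region "mobile-immobile" formulation that the abstract builds on can be written as below; this is the standard textbook form with a first-order mass exchange term, not the generalized upscaled equations (with additional, possibly nonlocal, terms) derived in the work.

```latex
\begin{aligned}
\theta_m \frac{\partial c_m}{\partial t} + \theta_{im} \frac{\partial c_{im}}{\partial t}
  &= \nabla \cdot \left( \theta_m \mathbf{D}_m \nabla c_m \right)
     - \theta_m \mathbf{v}_m \cdot \nabla c_m ,\\[4pt]
\theta_{im} \frac{\partial c_{im}}{\partial t} &= \alpha \left( c_m - c_{im} \right),
\end{aligned}
```

where c_m and c_im are the solute concentrations in the mobile and immobile regions, θ_m and θ_im the corresponding volume fractions, v_m and D_m the mobile-zone velocity and dispersion tensor, and α the first-order mass transfer coefficient.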
Precision orbit raising trajectories. [solar electric propulsion orbital transfer program
NASA Technical Reports Server (NTRS)
Flanagan, P. F.; Horsewood, J. L.; Pines, S.
1975-01-01
A precision trajectory program has been developed to serve as a test bed for geocentric orbit raising steering laws. The steering laws to be evaluated have been developed using optimization methods employing averaging techniques. This program provides the capability of testing the steering laws in a precision simulation. The principal system models incorporated in the program are described, including the radiation environment, the solar array model, the thrusters and power processors, the geopotential, and the solar system. Steering and array orientation constraints are discussed, and the impact of these constraints on program design is considered.
Subgrid or Reynolds stress-modeling for three-dimensional turbulence computations
NASA Technical Reports Server (NTRS)
Rubesin, M. W.
1975-01-01
A review is given of recent advances in two distinct computational methods for evaluating turbulence fields, namely, statistical Reynolds stress modeling and turbulence simulation, where large eddies are followed in time. It is shown that evaluation of the mean Reynolds stresses, rather than use of a scalar eddy viscosity, permits an explanation of streamline curvature effects found in several experiments. Turbulence simulation, with a new volume averaging technique and third-order accurate finite-difference computing is shown to predict the decay of isotropic turbulence in incompressible flow with rather modest computer storage requirements, even at Reynolds numbers of aerodynamic interest.
Design and Evaluation of Fusion Approach for Combining Brain and Gaze Inputs for Target Selection
Évain, Andéol; Argelaguet, Ferran; Casiez, Géry; Roussel, Nicolas; Lécuyer, Anatole
2016-01-01
Gaze-based interfaces and Brain-Computer Interfaces (BCIs) allow for hands-free human–computer interaction. In this paper, we investigate the combination of gaze and BCIs. We propose a novel selection technique for 2D target acquisition based on input fusion. This new approach combines the probabilistic models for each input in order to better estimate the intent of the user. We evaluated its performance against the existing gaze and brain–computer interaction techniques. Twelve participants took part in our study, in which they had to search and select 2D targets with each of the evaluated techniques. Our fusion-based hybrid interaction technique was found to be more reliable than the previous gaze and BCI hybrid interaction techniques for 10 of the 12 participants, while being 29% faster on average. However, similarly to what has been observed in hybrid gaze-and-speech interaction, the gaze-only interaction technique still provides the best performance. Our results should encourage the use of input fusion, as opposed to sequential interaction, in order to design better hybrid interfaces. PMID:27774048
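One simple way to realize the probabilistic fusion idea is naive-Bayes-style combination: each modality assigns a probability to every candidate target, and the fused distribution is their normalized product under an independence assumption. The per-target probabilities below are made-up illustrations, not values from the paper, and the paper's actual fusion model may differ.

```python
import numpy as np

targets = ["A", "B", "C", "D"]

# Gaze likelihoods, e.g. derived from distance between the gaze point and each target.
p_gaze = np.array([0.55, 0.25, 0.15, 0.05])

# BCI likelihoods, e.g. per-target scores from a brain-signal classifier.
p_bci = np.array([0.30, 0.45, 0.15, 0.10])

# Fuse by multiplying and renormalizing (independence assumption).
p_fused = p_gaze * p_bci
p_fused /= p_fused.sum()

for name, pg, pb, pf in zip(targets, p_gaze, p_bci, p_fused):
    print(f"target {name}: gaze={pg:.2f}  bci={pb:.2f}  fused={pf:.2f}")
print("selected:", targets[int(np.argmax(p_fused))])
```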
Model-based optimal design of experiments - semidefinite and nonlinear programming formulations
Duarte, Belmiro P.M.; Wong, Weng Kee; Oliveira, Nuno M.C.
2015-01-01
We use mathematical programming tools, such as Semidefinite Programming (SDP) and Nonlinear Programming (NLP)-based formulations to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D–, A– and E–optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D–optimal designs but is computationally less efficient than that required for the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close and so we recommend the SDP formulation in practice. PMID:26949279
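The D-optimality criterion at the core of such formulations can be illustrated on a discretised design space with a simple multiplicative weight update; the quadratic single-factor model and the update rule below are illustrative stand-ins for the SDP/NLP formulations used in the paper.

```python
import numpy as np

# A sketch of the D-optimality criterion on a discretised design space:
# maximise log det of the Fisher information matrix for a quadratic model
# y = b0 + b1*x + b2*x^2 using the classic multiplicative (Fedorov-Wynn
# type) weight update.  The paper itself casts the problem as SDP/NLP.
grid = np.linspace(-1.0, 1.0, 41)            # candidate design points
F = np.column_stack([np.ones_like(grid), grid, grid**2])
w = np.full(grid.size, 1.0 / grid.size)      # start from a uniform design

for _ in range(1000):
    M = F.T @ (w[:, None] * F)               # information matrix
    d = np.einsum('ij,jk,ik->i', F, np.linalg.inv(M), F)  # variance function
    w = w * d                                 # multiplicative update ...
    w /= w.sum()                              # ... then renormalise

top = np.argsort(w)[-3:]
print("heaviest design points:", np.sort(grid[top]))      # ~ -1, 0, +1
print("log det M:", np.linalg.slogdet(F.T @ (w[:, None] * F))[1])
```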
DeSimone, Leslie A.; Walter, Donald A.; Eggleston, John R.; Nimiroski, Mark T.
2002-01-01
Ground water is the primary source of drinking water for towns in the upper Charles River Basin, an area of 105 square miles in eastern Massachusetts that is undergoing rapid growth. The stratified-glacial aquifers in the basin are high yield, but also are thin, discontinuous, and in close hydraulic connection with streams, ponds, and wetlands. Water withdrawals averaged 10.1 million gallons per day in 1989–98 and are likely to increase in response to rapid growth. These withdrawals deplete streamflow and lower pond levels. A study was conducted to develop tools for evaluating water-management alternatives at the regional scale in the basin. Geologic and hydrologic data were compiled and collected to characterize the ground- and surface-water systems. Numerical flow modeling techniques were applied to evaluate the effects of increased withdrawals and altered recharge on ground-water levels, pond levels, and stream base flow. Simulation-optimization methods also were applied to test their efficacy for management of multiple water-supply and water-resource needs. Steady-state and transient ground-water-flow models were developed using the numerical modeling code MODFLOW-2000. The models were calibrated to 1989–98 average annual conditions of water withdrawals, water levels, and stream base flow. Model recharge rates were varied spatially, by land use, surficial geology, and septic-tank return flow. Recharge was changed during model calibration by means of parameter-estimation techniques to better match the estimated average annual base flow; area-weighted rates averaged 22.5 inches per year for the basin. Water withdrawals accounted for about 7 percent of total simulated flows through the stream-aquifer system and were about equal in magnitude to model-calculated rates of ground-water evapotranspiration from wetlands and ponds in aquifer areas. Water withdrawals as percentages of total flow varied spatially and temporally within an average year; maximum values were 12 to 13 percent of total annual flow in some subbasins and of total monthly flow throughout the basin in summer and early fall. Water-management alternatives were evaluated by simulating hypothetical scenarios of increased withdrawals and altered recharge for average 1989–98 conditions with the flow models. Increased withdrawals to maximum State-permitted levels would result in withdrawals of about 15 million gallons per day, or about 50 percent more than current withdrawals. Model-calculated effects of these increased withdrawals included reductions in stream base flow that were greatest (as a percentage of total flow) in late summer and early fall. These reductions ranged from less than 5 percent to more than 60 percent of model-calculated 1989–98 base flow along reaches of the Charles River and major tributaries during low-flow periods. Reductions in base flow generally were comparable to upstream increases in withdrawals, but were slightly less than upstream withdrawals in areas where septic-system return flow was simulated. Increased withdrawals also increased the proportion of wastewater in the Charles River downstream of treatment facilities. The wastewater component increased downstream from a treatment facility in Milford from 80 percent of September base flow under 1989–98 conditions to 90 percent of base flow, and from 18 to 27 percent of September base flow downstream of a treatment facility in Medway.
In another set of hypothetical scenarios, additional recharge equal to the transfer of water out of a typical subbasin by sewers was found to increase model-calculated base flows by about 12 percent. Addition of recharge equal to that available from artificial recharge of residential rooftop runoff had smaller effects, augmenting simulated September base flow by about 3 percent. Simulation-optimization methods were applied to an area near Populatic Pond and the confluence of the Mill and Charles Rivers in Franklin,
Evaluation and optimization of sampling errors for the Monte Carlo Independent Column Approximation
NASA Astrophysics Data System (ADS)
Räisänen, Petri; Barker, W. Howard
2004-07-01
The Monte Carlo Independent Column Approximation (McICA) method for computing domain-average broadband radiative fluxes is unbiased with respect to the full ICA, but its flux estimates contain conditional random noise. McICA's sampling errors are evaluated here using a global climate model (GCM) dataset and a correlated-k distribution (CKD) radiation scheme. Two approaches to reduce McICA's sampling variance are discussed. The first is to simply restrict all of McICA's samples to cloudy regions. This avoids wasting precious few samples on essentially homogeneous clear skies. Clear-sky fluxes need to be computed separately for this approach, but this is usually done in GCMs for diagnostic purposes anyway. Second, accuracy can be improved by repeatedly sampling and averaging those CKD terms with large cloud radiative effects. Although this naturally increases computational costs over the standard CKD model, random errors for fluxes and heating rates are reduced by typically 50% to 60%, for the present radiation code, when the total number of samples is increased by 50%. When both variance reduction techniques are applied simultaneously, globally averaged flux and heating rate random errors are reduced by a factor of about 3.
Liang, Liang; Liu, Minliang; Martin, Caitlin; Sun, Wei
2018-01-01
Structural finite-element analysis (FEA) has been widely used to study the biomechanics of human tissues and organs, as well as tissue-medical device interactions and treatment strategies. However, patient-specific FEA models usually require complex procedures to set up and long computing times to obtain final simulation results, preventing prompt feedback to clinicians in time-sensitive clinical applications. In this study, by using machine learning techniques, we developed a deep learning (DL) model to directly estimate the stress distributions of the aorta. The DL model was designed and trained to take the FEA inputs and directly output the aortic wall stress distributions, bypassing the FEA calculation process. The trained DL model is capable of predicting the stress distributions with average errors of 0.492% and 0.891% in the Von Mises stress distribution and peak Von Mises stress, respectively. To our knowledge, this is the first study to demonstrate the feasibility and great potential of using the DL technique as a fast and accurate surrogate of FEA for stress analysis. © 2018 The Author(s).
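The surrogate idea, learning the map from FEA inputs to the stress field directly, can be sketched with a generic multilayer perceptron on synthetic data; the network architecture, data, and library choice below are assumptions, not the authors' DL model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the FEA data set: each sample maps a small vector
# of shape/load parameters to a discretised "stress field" at mesh nodes.
n_samples, n_params, n_nodes = 500, 8, 200
X = rng.normal(size=(n_samples, n_params))
W = rng.normal(size=(n_params, n_nodes))
Y = np.tanh(X @ W) + 0.01 * rng.normal(size=(n_samples, n_nodes))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# Generic multilayer perceptron used as a surrogate of the FEA solver.
surrogate = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=2000,
                         random_state=0)
surrogate.fit(X_tr, Y_tr)

Y_hat = surrogate.predict(X_te)
rel_err = np.linalg.norm(Y_hat - Y_te) / np.linalg.norm(Y_te)
print(f"relative error on held-out synthetic stress fields: {100 * rel_err:.2f}%")
```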
Recognition of surgical skills using hidden Markov models
NASA Astrophysics Data System (ADS)
Speidel, Stefanie; Zentek, Tom; Sudra, Gunther; Gehrig, Tobias; Müller-Stich, Beat Peter; Gutt, Carsten; Dillmann, Rüdiger
2009-02-01
Minimally invasive surgery is a highly complex medical discipline and can be regarded as a major breakthrough in surgical technique. A minimally invasive intervention requires enhanced motor skills to deal with difficulties like the complex hand-eye coordination and restricted mobility. To alleviate these constraints we propose to enhance the surgeon's capabilities by providing a context-aware assistance using augmented reality techniques. To recognize and analyze the current situation for context-aware assistance, we need intraoperative sensor data and a model of the intervention. Characteristics of a situation are the performed activity, the used instruments, the surgical objects and the anatomical structures. Important information about the surgical activity can be acquired by recognizing the surgical gesture performed. Surgical gestures in minimally invasive surgery like cutting, knot-tying or suturing are here referred to as surgical skills. We use the motion data from the endoscopic instruments to classify and analyze the performed skill and even use it for skill evaluation in a training scenario. The system uses Hidden Markov Models (HMM) to model and recognize a specific surgical skill like knot-tying or suturing with an average recognition rate of 92%.
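A hedged sketch of the classification scheme described above: one Gaussian HMM per skill trained on motion sequences, with a new sequence assigned to the highest-likelihood model. The synthetic "motion data" and hmmlearn settings are illustrative, not the study's configuration.

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(1)

def synthetic_skill(mean, n_seq=20, T=100):
    """Stand-in for endoscopic instrument motion data (e.g. 3D velocities)."""
    return [mean + rng.normal(scale=0.5, size=(T, 3)) for _ in range(n_seq)]

# One Gaussian HMM per surgical skill, trained on that skill's sequences.
train = {"knot_tying": synthetic_skill(np.array([0.0, 0.0, 0.0])),
         "suturing":   synthetic_skill(np.array([1.0, -1.0, 0.5]))}

models = {}
for skill, seqs in train.items():
    X = np.vstack(seqs)
    lengths = [len(s) for s in seqs]
    m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
    m.fit(X, lengths)
    models[skill] = m

# A new sequence is recognised as the skill whose HMM gives it the
# highest log-likelihood.
test_seq = synthetic_skill(np.array([1.0, -1.0, 0.5]), n_seq=1)[0]
scores = {skill: m.score(test_seq) for skill, m in models.items()}
print("recognised skill:", max(scores, key=scores.get), scores)
```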
Alagha, Jawad S; Said, Md Azlin Md; Mogheir, Yunes
2014-01-01
Nitrate concentration in groundwater is influenced by complex and interrelated variables, leading to great difficulty during the modeling process. The objectives of this study are (1) to evaluate the performance of two artificial intelligence (AI) techniques, namely artificial neural networks and support vector machine, in modeling groundwater nitrate concentration using scant input data, as well as (2) to assess the effect of data clustering as a pre-modeling technique on the developed models' performance. The AI models were developed using data from 22 municipal wells of the Gaza coastal aquifer in Palestine from 2000 to 2010. Results indicated high simulation performance, with the correlation coefficient and the mean absolute percentage error of the best model reaching 0.996 and 7 %, respectively. The variables that strongly influenced groundwater nitrate concentration were previous nitrate concentration, groundwater recharge, and on-ground nitrogen load of each land use land cover category in the well's vicinity. The results also demonstrated the merit of performing clustering of input data prior to the application of AI models. With their high performance and simplicity, the developed AI models can be effectively utilized to assess the effects of future management scenarios on groundwater nitrate concentration, leading to more reasonable groundwater resources management and decision-making.
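The cluster-then-model idea can be sketched with generic scikit-learn components (KMeans for the pre-modeling clustering, a support vector regressor per cluster); the synthetic data and hyperparameters are assumptions, not the study's calibrated models.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)

# Synthetic stand-ins for the predictors named above: previous nitrate,
# recharge, and on-ground nitrogen load.
X = rng.uniform(size=(300, 3))
y = 50 * X[:, 0] + 20 * X[:, 1] * X[:, 2] + rng.normal(scale=2.0, size=300)

# Pre-modelling step: cluster the records, then fit one model per cluster.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

models = {}
for c in np.unique(km.labels_):
    idx = km.labels_ == c
    model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.5))
    model.fit(X[idx], y[idx])
    models[c] = model

# A new record is predicted by the model of its nearest cluster.
x_new = np.array([[0.4, 0.6, 0.3]])
c_new = km.predict(x_new)[0]
print("cluster:", c_new, "predicted nitrate:", models[c_new].predict(x_new))
```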
NASA Technical Reports Server (NTRS)
Russell, P. B.; Livingston, J. M.; Hignett, P.; Kinne, S.; Wong, J.; Chien, A.; Bergstrom, R.; Durkee, P.; Hobbs, P. V.
2000-01-01
The Tropospheric Aerosol Radiative Forcing Observational Experiment (TARFOX) measured a variety of aerosol radiative effects (including flux changes) while simultaneously measuring the chemical, physical, and optical properties of the responsible aerosol particles. Here we use TARFOX-determined aerosol and surface properties to compute shortwave radiative flux changes for a variety of aerosol situations, with midvisible optical depths ranging from 0.06 to 0.55. We calculate flux changes by several techniques with varying degrees of sophistication, in part to investigate the sensitivity of results to computational approach. We then compare computed flux changes to those determined from aircraft measurements. Calculations using several approaches yield downward and upward flux changes that agree with measurements. The agreement demonstrates closure (i.e., consistency) among the TARFOX-derived aerosol properties, modeling techniques, and radiative flux measurements. Agreement between calculated and measured downward flux changes is best when the aerosols are modeled as moderately absorbing (midvisible single-scattering albedos between about 0.89 and 0.93), in accord with independent measurements of the TARFOX aerosol. The calculated values for instantaneous daytime upwelling flux changes are in the range +14 to +48 W/sq m for midvisible optical depths between 0.2 and 0.55. These values are about 30 to 100 times the global-average direct forcing expected for the global-average sulfate aerosol optical depth of 0.04. The reasons for the larger flux changes in TARFOX include the relatively large optical depths and the focus on cloud-free, daytime conditions over the dark ocean surface. These are the conditions that produce major aerosol radiative forcing events and contribute to any global-average climate effect.
Naive vs. Sophisticated Methods of Forecasting Public Library Circulations.
ERIC Educational Resources Information Center
Brooks, Terrence A.
1984-01-01
Two sophisticated--autoregressive integrated moving average (ARIMA), straight-line regression--and two naive--simple average, monthly average--forecasting techniques were used to forecast monthly circulation totals of 34 public libraries. Comparisons of forecasts and actual totals revealed that ARIMA and monthly average methods had smallest mean…
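A minimal sketch of this kind of comparison on synthetic data, pitting an ARIMA model against the naive monthly-average forecast; the series, model orders, and error measure below are illustrative, not the study's data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)

# Synthetic monthly circulation series with trend and seasonality
# (a stand-in for the 34 real public-library series).
months = pd.date_range("2010-01", periods=96, freq="MS")
t = np.arange(96)
y = 1000 + 2 * t + 100 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 30, 96)
series = pd.Series(y, index=months)
train, test = series[:-12], series[-12:]

# "Sophisticated": ARIMA fitted by maximum likelihood.
arima_fc = ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=12)

# "Naive": the monthly average of the training years.
monthly_avg = train.groupby(train.index.month).mean()
naive_fc = pd.Series([monthly_avg[m] for m in test.index.month], index=test.index)

def mape(actual, forecast):
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

print(f"ARIMA MAPE: {mape(test, arima_fc):.2f}%")
print(f"monthly-average MAPE: {mape(test, naive_fc):.2f}%")
```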
A hybrid experimental-numerical technique for determining 3D velocity fields from planar 2D PIV data
NASA Astrophysics Data System (ADS)
Eden, A.; Sigurdson, M.; Mezić, I.; Meinhart, C. D.
2016-09-01
Knowledge of 3D, three component velocity fields is central to the understanding and development of effective microfluidic devices for lab-on-chip mixing applications. In this paper we present a hybrid experimental-numerical method for the generation of 3D flow information from 2D particle image velocimetry (PIV) experimental data and finite element simulations of an alternating current electrothermal (ACET) micromixer. A numerical least-squares optimization algorithm is applied to a theory-based 3D multiphysics simulation in conjunction with 2D PIV data to generate an improved estimation of the steady state velocity field. This 3D velocity field can be used to assess mixing phenomena more accurately than would be possible through simulation alone. Our technique can also be used to estimate uncertain quantities in experimental situations by fitting the gathered field data to a simulated physical model. The optimization algorithm reduced the root-mean-squared difference between the experimental and simulated velocity fields in the target region by more than a factor of 4, resulting in an average error less than 12% of the average velocity magnitude.
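The hybrid experimental-numerical step, adjusting uncertain parameters of a physics-based model so that its in-plane velocities match planar measurements in the least-squares sense, can be sketched as follows; the analytic velocity model stands in for the full 3D multiphysics simulation, and all names and values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

# Analytic stand-in for the 3D multiphysics simulation: a parametrised
# planar velocity model whose uncertain coefficients are to be tuned.
def simulated_velocity(params, x, y):
    a, b = params
    u = a * np.sin(np.pi * x) * np.cos(np.pi * y)
    v = -b * np.cos(np.pi * x) * np.sin(np.pi * y)
    return u, v

# Synthetic "measured" planar PIV field generated with the true values.
x, y = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
u_meas, v_meas = simulated_velocity((1.3, 0.8), x, y)
noise = np.random.default_rng(4).normal(0, 0.02, x.shape)
u_meas, v_meas = u_meas + noise, v_meas + noise

def residual(params):
    u, v = simulated_velocity(params, x, y)
    return np.concatenate([(u - u_meas).ravel(), (v - v_meas).ravel()])

fit = least_squares(residual, x0=[1.0, 1.0])
print("recovered model parameters:", fit.x)
```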
NASA Astrophysics Data System (ADS)
Zhang, Yumin; Zhu, Lianqing; Luo, Fei; Dong, Mingli; Ding, Xiangdong; He, Wei
2016-06-01
A metallic packaging technique of fiber Bragg grating (FBG) sensors is developed for measurement of strain and temperature, and it can be simply achieved via one-step ultrasonic welding. The average strain transfer rate of the metal-packaged sensor is theoretically evaluated by a proposed model aiming at surface-bonded metallic packaging FBG. According to analytical results, the metallic packaging shows higher average strain transfer rate compared with traditional adhesive packaging under the same packaging conditions. Strain tests are performed on an elaborate uniform strength beam for both tensile and compressive strains; strain sensitivities of approximately 1.16 and 1.30 pm/μɛ are obtained for the tensile and compressive situations, respectively. Temperature rising and cooling tests are also executed from 50°C to 200°C, and the sensitivity of temperature is 36.59 pm/°C. All the measurements of strain and temperature exhibit good linearity and stability. These results demonstrate that the metal-packaged sensors can be successfully fabricated by one-step welding technique and provide great promise for long-term and high-precision structural health monitoring.
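A small conversion sketch using the sensitivities quoted above; the first-order thermal compensation by simple subtraction (e.g., with a reference grating) is an assumption of this example, not a detail taken from the paper.

```python
# Convert a measured Bragg-wavelength shift to strain using the reported
# sensitivities (1.16 pm per microstrain in tension, 36.59 pm per degC).
K_EPS = 1.16      # pm per microstrain (tensile sensitivity)
K_T = 36.59       # pm per degree Celsius

def microstrain_from_shift(d_lambda_pm, d_temp_c=0.0):
    """Remove the thermal contribution, then convert to microstrain."""
    return (d_lambda_pm - K_T * d_temp_c) / K_EPS

print(microstrain_from_shift(580.0))                 # ~500 microstrain
print(microstrain_from_shift(616.6, d_temp_c=1.0))   # same strain, 1 degC warmer
```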
Controlling Release Kinetics of PLG Microspheres Using a Manufacturing Technique
NASA Astrophysics Data System (ADS)
Berchane, Nader
2005-11-01
Controlled drug delivery offers numerous advantages compared with conventional free dosage forms, in particular improved efficacy and patient compliance. Emulsification is a widely used technique to entrap drugs in biodegradable microspheres for controlled drug delivery. The size of the formed microspheres has a significant influence on drug release kinetics. Despite the advantages of controlled drug delivery, previous attempts to achieve predetermined release rates have seen limited success. This study develops a tool to tailor desired release kinetics by combining microsphere batches of specified mean diameter and size distribution. A fluid-mechanics-based correlation that predicts the average size of Poly(Lactide-co-Glycolide) [PLG] microspheres from the manufacturing technique is constructed and validated by comparison with experimental results. The microspheres produced are accurately represented by the Rosin-Rammler mathematical distribution function. A mathematical model is formulated that incorporates the microsphere distribution function to predict the release kinetics from mono-dispersed and poly-dispersed populations. Through this mathematical model, different release kinetics can be achieved by combining different sized populations in different ratios. The resulting design tool should prove useful for the pharmaceutical industry to achieve designer release kinetics.
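The design idea, predicting the release of a blended batch by weighting single-sphere release curves with a Rosin-Rammler size distribution, can be sketched as follows; the single-sphere kinetics (release time scaling with diameter squared) and all parameter values are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

# Overall release from a polydisperse batch as the mass-weighted average of
# single-sphere release curves, with the size distribution given by a
# Rosin-Rammler function.
def rosin_rammler_pdf(d, d0, n):
    return (n / d0) * (d / d0) ** (n - 1) * np.exp(-((d / d0) ** n))

def single_sphere_release(t, d, tau_ref=10.0, d_ref=50.0):
    tau = tau_ref * (d / d_ref) ** 2        # diffusion-like time scale (assumed)
    return np.minimum(1.0, np.sqrt(t / tau))

def batch_release(t, d0, n):
    d = np.linspace(1.0, 200.0, 400)        # diameters [micrometres]
    w = rosin_rammler_pdf(d, d0, n)
    w /= np.trapz(w, d)
    f = single_sphere_release(t[:, None], d[None, :])
    return np.trapz(w[None, :] * f, d, axis=1)

t = np.linspace(0.01, 50.0, 100)            # time [days]
small = batch_release(t, d0=30.0, n=3.0)    # fine batch releases quickly
large = batch_release(t, d0=90.0, n=3.0)    # coarse batch releases slowly

blend = 0.4 * small + 0.6 * large           # tailor kinetics by blending 40:60
print(np.round(blend[::20], 3))
```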
O'Hara, R P; Palazotto, A N
2012-12-01
To properly model the structural dynamics of the forewing of the Manduca sexta species, it is critical that the material and structural properties of the biological specimen be understood. This paper presents the results of a morphological study that has been conducted to identify the material and structural properties of a sample of male and female Manduca sexta specimens. The average mass, area, shape, size and camber of the wing were evaluated using novel measurement techniques. Further emphasis is placed on studying the critical substructures of the wing: venation and membrane. The venation cross section is measured using detailed pathological techniques over the entire venation of the wing. The elastic modulus of the leading edge veins is experimentally determined using advanced non-contact structural dynamic techniques. The membrane elastic modulus is randomly sampled over the entire wing to determine global material properties for the membrane using nanoindentation. The data gathered from this morphological study form the basis for the replication of future finite element structural models and engineered biomimetic wings for use with flapping wing micro air vehicles.
Forecasting of global solar radiation using ANFIS and ARMAX techniques
NASA Astrophysics Data System (ADS)
Muhammad, Auwal; Gaya, M. S.; Aliyu, Rakiya; Aliyu Abdulkadir, Rabi'u.; Dauda Umar, Ibrahim; Aminu Yusuf, Lukuman; Umar Ali, Mudassir; Khairi, M. T. M.
2018-01-01
The cost of procuring measuring devices, together with maintenance and instrument calibration, contributes to the difficulty of forecasting global solar radiation in underdeveloped countries. Most of the available regression and mathematical models do not capture well the behavior of the global solar radiation. This paper presents a comparison of the Adaptive Neuro Fuzzy Inference System (ANFIS) and the Autoregressive Moving Average with eXogenous term (ARMAX) model in forecasting global solar radiation. Full-scale (experimental) data from the Nigerian Meteorological Agency at Sultan Abubakar III International Airport, Sokoto, were used to validate the models. The simulation results demonstrated that the ANFIS model, having achieved a MAPE of 5.34%, outperformed the ARMAX model. ANFIS could be a valuable tool for forecasting global solar radiation.
Road traffic accidents prediction modelling: An analysis of Anambra State, Nigeria.
Ihueze, Chukwutoo C; Onwurah, Uchendu O
2018-03-01
One of the major problems in the world today is the rate of road traffic crashes and deaths on our roads. The majority of these deaths occur in low- and middle-income countries, including Nigeria. This study analyzed road traffic crashes in Anambra State, Nigeria with the intention of developing accurate predictive models for forecasting crash frequency in the State using autoregressive integrated moving average (ARIMA) and autoregressive integrated moving average with explanatory variables (ARIMAX) modelling techniques. The results showed that the ARIMAX model outperformed the ARIMA (1,1,1) model when their performances were compared using the lower Bayesian information criterion, mean absolute percentage error, root mean square error, and higher coefficient of determination (R-squared) as accuracy measures. The findings of this study reveal that incorporating human, vehicle and environmental related factors in time series analysis of the crash dataset produces a more robust predictive model than solely using aggregated crash counts. This study contributes to the body of knowledge on road traffic safety and provides an approach to forecasting using many human, vehicle and environmental factors. The recommendations made in this study, if applied, will help in reducing the number of road traffic crashes in Nigeria. Copyright © 2017 Elsevier Ltd. All rights reserved.
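A minimal ARIMAX sketch on synthetic data using statsmodels' SARIMAX with exogenous regressors; the exogenous variables and model orders below are illustrative stand-ins for the study's human, vehicle and environmental factors.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(5)

# Synthetic monthly crash counts driven by two exogenous factors.
months = pd.date_range("2008-01", periods=120, freq="MS")
rainfall = rng.gamma(2.0, 50.0, size=120)
traffic = 1000 + 5 * np.arange(120) + rng.normal(0, 50, 120)
crashes = 20 + 0.05 * rainfall + 0.01 * traffic + rng.normal(0, 3, 120)

y = pd.Series(crashes, index=months)
exog = pd.DataFrame({"rainfall": rainfall, "traffic": traffic}, index=months)
y_tr, y_te = y.iloc[:-12], y.iloc[-12:]
X_tr, X_te = exog.iloc[:-12], exog.iloc[-12:]

# ARIMAX = ARIMA(1,1,1) with exogenous regressors.
model = SARIMAX(y_tr, exog=X_tr, order=(1, 1, 1)).fit(disp=False)
forecast = model.forecast(steps=12, exog=X_te)

mape = 100.0 * np.mean(np.abs((y_te - forecast) / y_te))
print(f"ARIMAX 12-month MAPE: {mape:.1f}%")
```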
Simulating multi-scale oceanic processes around Taiwan on unstructured grids
NASA Astrophysics Data System (ADS)
Yu, Hao-Cheng; Zhang, Yinglong J.; Yu, Jason C. S.; Terng, C.; Sun, Weiling; Ye, Fei; Wang, Harry V.; Wang, Zhengui; Huang, Hai
2017-11-01
We validate a 3D unstructured-grid (UG) model for simulating multi-scale processes in the Northwestern Pacific around Taiwan using recently developed techniques (Zhang et al., Ocean Modeling, 102, 64-81, 2016) that require no bathymetry smoothing, even for this region with prevalent steep bottom slopes and many islands. The focus is on short-term forecasts over several months rather than long-term variability. Compared with satellite products, the errors for the simulated Sea-surface Height (SSH) and Sea-surface Temperature (SST) are similar to those of a reference data-assimilated global model. In the nearshore region, comparison with 34 tide gauges located around Taiwan indicates an average RMSE of 13 cm for the tidal elevation. The average RMSE for SST at 6 coastal buoys is 1.2 °C. The mean transport and eddy kinetic energy compare reasonably with previously published values and with the reference model used to provide boundary and initial conditions. The model suggests a ∼2-day interruption of the Kuroshio east of Taiwan during a typhoon period. The effect of tidal mixing is shown to be significant nearshore. The multi-scale model is easily extendable to target regions of interest due to its UG framework and a flexible vertical gridding system, which is shown to be superior to terrain-following coordinates.
Performance prediction using geostatistics and window reservoir simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fontanilla, J.P.; Al-Khalawi, A.A.; Johnson, S.G.
1995-11-01
This paper is the first window model study in the northern area of a large carbonate reservoir in Saudi Arabia. It describes window reservoir simulation with geostatistics to model uneven water encroachment in the southwest producing area of the northern portion of the reservoir. In addition, this paper describes performance predictions that investigate the sweep efficiency of the current peripheral waterflood. A 50 x 50 x 549 (240 m. x 260 m. x 0.15 m. average grid block size) geological model was constructed with geostatistics software. Conditional simulation was used to obtain spatial distributions of porosity and volume of dolomite. Core data transforms were used to obtain horizontal and vertical permeability distributions. Simple averaging techniques were used to convert the 549-layer geological model to a 50 x 50 x 10 (240 m. x 260 m. x 8 m. average grid block size) window reservoir simulation model. Flux injectors and flux producers were assigned to the outermost grid blocks. Historical boundary flux rates were obtained from a coarsely-gridded full-field model. Pressure distribution, water cuts, GORs, and recent flowmeter data were history matched. Permeability correction factors and numerous parameter adjustments were required to obtain the final history match. The permeability correction factors were based on pressure transient permeability-thickness analyses. The prediction phase of the study evaluated the effects of infill drilling, the use of artificial lifts, workovers, horizontal wells, producing rate constraints, and tight zone development to formulate depletion strategies for the development of this area. The window model will also be used to investigate day-to-day reservoir management problems in this area.
Finger muscle attachments for an OpenSim upper-extremity model.
Lee, Jong Hwa; Asakawa, Deanna S; Dennerlein, Jack T; Jindrich, Devin L
2015-01-01
We determined muscle attachment points for the index, middle, ring and little fingers in an OpenSim upper-extremity model. Attachment points were selected to match both experimentally measured locations and mechanical function (moment arms). Although experimental measurements of finger muscle attachments have been made, models differ from specimens in many respects such as bone segment ratio, joint kinematics and coordinate system. Likewise, moment arms are not available for all intrinsic finger muscles. Therefore, it was necessary to scale and translate muscle attachments from one experimental or model environment to another while preserving mechanical function. We used a two-step process. First, we estimated muscle function by calculating moment arms for all intrinsic and extrinsic muscles using the partial velocity method. Second, optimization using Simulated Annealing and Hooke-Jeeves algorithms found muscle-tendon paths that minimized root mean square (RMS) differences between experimental and modeled moment arms. The partial velocity method resulted in variance accounted for (VAF) between measured and calculated moment arms of 75.5% on average (range from 48.5% to 99.5%) for intrinsic and extrinsic index finger muscles where measured data were available. RMS error between experimental and optimized values was within one standard deviation (S.D) of measured moment arm (mean RMS error = 1.5 mm < measured S.D = 2.5 mm). Validation of both steps of the technique allowed for estimation of muscle attachment points for muscles whose moment arms have not been measured. Differences between modeled and experimentally measured muscle attachments, averaged over all finger joints, were less than 4.9 mm (within 7.1% of the average length of the muscle-tendon paths). The resulting non-proprietary musculoskeletal model of the human fingers could be useful for many applications, including better understanding of complex multi-touch and gestural movements.
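A planar toy version of the attachment-fitting step: find an attachment point whose geometric moment arms across joint angles best match measured values in the RMS sense, using simulated annealing. The geometry, the "measured" data, and the optimizer settings are illustrative assumptions, not the OpenSim workflow itself.

```python
import numpy as np
from scipy.optimize import dual_annealing

# Recover a distal attachment point (in the moving segment's frame) whose
# geometric moment arms across joint angles best match "measured" values.
P = np.array([-30.0, 10.0])                  # fixed proximal via point [mm]
angles = np.deg2rad(np.linspace(0.0, 90.0, 10))

def moment_arm(attach_local, theta):
    c, s = np.cos(theta), np.sin(theta)
    A = np.array([c * attach_local[0] - s * attach_local[1],
                  s * attach_local[0] + c * attach_local[1]])   # attachment in ground frame
    u = (P - A) / np.linalg.norm(P - A)       # unit vector along the muscle path
    return A[0] * u[1] - A[1] * u[0]          # planar cross product = moment arm

true_attach = np.array([15.0, -5.0])
measured = np.array([moment_arm(true_attach, th) for th in angles])
measured = measured + np.random.default_rng(6).normal(0.0, 0.3, measured.size)

def rms_error(attach_local):
    model = np.array([moment_arm(attach_local, th) for th in angles])
    return np.sqrt(np.mean((model - measured) ** 2))

result = dual_annealing(rms_error, bounds=[(0.0, 40.0), (-20.0, 20.0)], seed=0)
print("fitted attachment [mm]:", result.x, " RMS error [mm]:", result.fun)
```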
NASA Technical Reports Server (NTRS)
1995-01-01
This report contains the 1995 annual progress reports of the Research Fellows and students of the Center for Turbulence Research (CTR). In 1995 CTR continued its concentration on the development and application of large-eddy simulation to complex flows, development of novel modeling concepts for engineering computations in the Reynolds averaged framework, and turbulent combustion. In large-eddy simulation, a number of numerical and experimental issues have surfaced which are being addressed. The first group of reports in this volume are on large-eddy simulation. A key finding in this area was the revelation of possibly significant numerical errors that may overwhelm the effects of the subgrid-scale model. We also commissioned a new experiment to support the LES validation studies. The remaining articles in this report are concerned with Reynolds averaged modeling, studies of turbulence physics and flow generated sound, combustion, and simulation techniques. Fundamental studies of turbulent combustion using direct numerical simulations which started at CTR will continue to be emphasized. These studies and their counterparts carried out during the summer programs have had a noticeable impact on combustion research world wide.
Laparoscopic radical prostatectomy in the canine model.
Price, D T; Chari, R S; Neighbors, J D; Eubanks, S; Schuessler, W W; Preminger, G M
1996-12-01
The purpose of this study was to determine the feasibility of performing laparoscopic radical prostatectomy in a canine model. Laparoscopic radical prostatectomy was performed on six adult male canines. A new endoscopic needle driver was used to construct a secure vesicourethral anastomosis. Average operative time required to complete the procedure was 304 min (range 270-345 min). Dissection of the prostate gland took an average of 67 min (range 35-90 min), and construction of the vesicourethral anastomosis took 154 min (range 80-240 min). There were no intraoperative complications and only one postoperative complication (anastomotic leak). Five of the six animals recovered uneventfully from the procedure, and their Foley catheters were removed 10-14 days postoperatively after a retrograde cystourethrogram demonstrated an intact vesicourethral anastomosis. Four (80%) of the surviving animals were clinically continent within 10 days after catheter removal. Post mortem examination confirmed that the vesicourethral anastomosis was intact with no evidence of urine extravasation. These data demonstrate the feasibility of laparoscopic radical prostatectomy in a canine model, and suggest that additional work with this technique should be continued to develop its potential clinical application.
Modeling methodology for MLS range navigation system errors using flight test data
NASA Technical Reports Server (NTRS)
Karmali, M. S.; Phatak, A. V.
1982-01-01
Flight test data was used to develop a methodology for modeling MLS range navigation system errors. The data used corresponded to the constant velocity and glideslope approach segment of a helicopter landing trajectory. The MLS range measurement was assumed to consist of low frequency and random high frequency components. The random high frequency component was extracted from the MLS range measurements. This was done by appropriate filtering of the range residual generated from a linearization of the range profile for the final approach segment. This range navigation system error was then modeled as an autoregressive moving average (ARMA) process. Maximum likelihood techniques were used to identify the parameters of the ARMA process.
SToRM: A numerical model for environmental surface flows
Simoes, Francisco J.
2009-01-01
SToRM (System for Transport and River Modeling) is a numerical model developed to simulate free surface flows in complex environmental domains. It is based on the depth-averaged St. Venant equations, which are discretized using unstructured upwind finite volume methods, and contains both steady and unsteady solution techniques. This article provides a brief description of the numerical approach selected to discretize the governing equations in space and time, including important aspects of solving natural environmental flows, such as the wetting and drying algorithm. The presentation is illustrated with several application examples, covering both laboratory and natural river flow cases, which show the model’s ability to solve complex flow phenomena.
Fast method for reactor and feature scale coupling in ALD and CVD
Yanguas-Gil, Angel; Elam, Jeffrey W.
2017-08-08
Transport and surface chemistry of certain deposition techniques are modeled. Methods provide a model of the transport inside nanostructures as a single-particle discrete Markov chain process. This approach decouples the complexity of the surface chemistry from the transport model, thus allowing its application under general surface chemistry conditions, including atomic layer deposition (ALD) and chemical vapor deposition (CVD). Methods provide for the determination of statistical information about the trajectory of individual molecules, such as the average interaction time or the number of wall collisions for molecules entering the nanostructures, as well as for tracking the relative contributions to thin-film growth of different independent reaction pathways at each point of the feature.
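The single-particle Markov chain view can be sketched with an absorbing-chain calculation: wall segments are states, each collision re-emits the molecule to a neighbouring segment, and the expected number of wall collisions before escape follows from the fundamental matrix. The hopping rule below is illustrative, not the method's calibrated transition model.

```python
import numpy as np

# Absorbing Markov chain sketch: states are axial wall segments of a pore;
# each wall collision re-emits the molecule to a neighbouring segment with
# equal probability, and escape occurs through the pore mouth.
N = 50                                    # wall segments along the pore

Q = np.zeros((N, N))                      # transitions among transient states
for i in range(N):
    if i > 0:
        Q[i, i - 1] = 0.5                 # hop toward the pore mouth
    if i < N - 1:
        Q[i, i + 1] = 0.5                 # hop toward the closed bottom
    else:
        Q[i, i] = 0.5                     # closed bottom: reflect in place
# from the first segment (i = 0) the remaining 0.5 is escape through the mouth

fundamental = np.linalg.inv(np.eye(N) - Q)     # expected visits to each segment
print("average wall collisions before escape:", fundamental[0].sum())
```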
NASA Technical Reports Server (NTRS)
Gedeon, D.; Wood, J. G.
1996-01-01
A number of wire mesh and metal felt test samples, with a range of porosities, yield generic correlations for friction factor, Nusselt number, enhanced axial conduction ratio, and overall heat flux ratio. This information is directed primarily toward Stirling cycle regenerator modelers, but will be of use to anyone seeking to better model fluid flow through these porous materials. Behind these results lies an oscillating-flow test rig, which measures pumping dissipation and thermal energy transport in sample matrices, and several stages of data-reduction software, which correlate instantaneous values for the above dimensionless groups. Within the software, a theoretical model reduces instantaneous quantities from cycle-averaged measurables using standard parameter estimation techniques.
Modelling, design and stability analysis of an improved SEPIC converter for renewable energy systems
NASA Astrophysics Data System (ADS)
G, Dileep; Singh, S. N.; Singh, G. K.
2017-09-01
In this paper, a detailed modelling and analysis of a switched inductor (SI)-based improved single-ended primary inductor converter (SEPIC) is presented. To increase the gain of the conventional SEPIC converter, the input and output side inductors are replaced with SI structures. Design and stability analysis for continuous conduction mode operation of the proposed SI-SEPIC converter are also presented in this paper. The state-space averaging technique is used to model the converter and carry out the stability analysis. Performance and stability of the closed-loop configuration are predicted by observing the open-loop behaviour using the Nyquist diagram and Nichols chart. The system was found to be stable and critically damped.
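State-space averaging in miniature: average the two switched-state models with the duty cycle, then read off the DC operating point and eigenvalue stability. The matrices below describe a plain buck-type LC stage, not the SI-SEPIC of the paper, and the component values are illustrative.

```python
import numpy as np

# A PWM converter switching between two linear circuits is replaced by the
# averaged model A_avg = d*A_on + (1-d)*A_off (same for B).  State
# x = [inductor current, capacitor voltage]; values are illustrative.
L_ind, C_out, R_load, V_in = 100e-6, 220e-6, 10.0, 24.0
d = 0.5                                              # duty cycle

A_on = np.array([[0.0, -1.0 / L_ind],
                 [1.0 / C_out, -1.0 / (R_load * C_out)]])
B_on = np.array([[V_in / L_ind], [0.0]])
A_off = A_on.copy()                                  # same topology when off
B_off = np.zeros((2, 1))

A_avg = d * A_on + (1.0 - d) * A_off
B_avg = d * B_on + (1.0 - d) * B_off

x_ss = -np.linalg.solve(A_avg, B_avg).ravel()        # DC operating point
eig = np.linalg.eigvals(A_avg)                       # small-signal poles
print("steady state [I_L, V_C]:", x_ss)
print("eigenvalues:", eig, "stable:", bool(np.all(eig.real < 0)))
```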
SU-E-J-192: Comparative Effect of Different Respiratory Motion Management Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakajima, Y; Kadoya, N; Ito, K
Purpose: Irregular breathing can influence the outcome of four-dimensional computed tomography imaging by causing artifacts. Audio-visual biofeedback systems associated with a patient-specific guiding waveform are known to reduce respiratory irregularities. In Japan, abdomen and chest motion self-control devices (Abches), representing simpler visual coaching techniques without a guiding waveform, are used instead; however, no studies have compared these two systems to date. Here, we evaluate the effectiveness of respiratory coaching to reduce respiratory irregularities by comparing two respiratory management systems. Methods: We collected data from eleven healthy volunteers. Bar and wave models were used as audio-visual biofeedback systems. Abches consisted of a respiratory indicator indicating the end of each expiration and inspiration motion. Respiratory variations were quantified as root mean squared error (RMSE) of displacement and period of breathing cycles. Results: All coaching techniques improved respiratory variation, compared to free breathing. Displacement RMSEs were 1.43 ± 0.84, 1.22 ± 1.13, 1.21 ± 0.86, and 0.98 ± 0.47 mm for free breathing, Abches, bar model, and wave model, respectively. Free breathing and wave model differed significantly (p < 0.05). Period RMSEs were 0.48 ± 0.42, 0.33 ± 0.31, 0.23 ± 0.18, and 0.17 ± 0.05 s for free breathing, Abches, bar model, and wave model, respectively. Free breathing and all coaching techniques differed significantly (p < 0.05). For variation in both displacement and period, the wave model was superior to free breathing, the bar model, and Abches. The average reductions in displacement and period RMSE compared with the wave model were 27% and 47%, respectively. Conclusion: We evaluated the efficacy of audio-visual biofeedback in reducing respiratory irregularity compared with Abches. Our results showed that audio-visual biofeedback combined with a wave model can potentially provide clinical benefits in respiratory management, although all techniques could reduce respiratory irregularities.
A Stochastic Kinematic Model of Class Averaging in Single-Particle Electron Microscopy
Park, Wooram; Midgett, Charles R.; Madden, Dean R.; Chirikjian, Gregory S.
2011-01-01
Single-particle electron microscopy is an experimental technique that is used to determine the 3D structure of biological macromolecules and the complexes that they form. In general, image processing techniques and reconstruction algorithms are applied to micrographs, which are two-dimensional (2D) images taken by electron microscopes. Each of these planar images can be thought of as a projection of the macromolecular structure of interest from an a priori unknown direction. A class is defined as a collection of projection images with a high degree of similarity, presumably resulting from taking projections along similar directions. In practice, micrographs are very noisy and those in each class are aligned and averaged in order to reduce the background noise. Errors in the alignment process are inevitable due to noise in the electron micrographs. This error results in blurry averaged images. In this paper, we investigate how blurring parameters are related to the properties of the background noise in the case when the alignment is achieved by matching the mass centers and the principal axes of the experimental images. We observe that the background noise in micrographs can be treated as Gaussian. Using the mean and variance of the background Gaussian noise, we derive equations for the mean and variance of translational and rotational misalignments in the class averaging process. This defines a Gaussian probability density on the Euclidean motion group of the plane. Our formulation is validated by convolving the derived blurring function representing the stochasticity of the image alignments with the underlying noiseless projection and comparing with the original blurry image. PMID:21660125
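A sketch of the alignment step analysed above: translate each image's mass centre to the frame centre, rotate its principal axis (from the second moments) onto a fixed direction, and then average the aligned copies. The synthetic "particle", the noise level, and the interpolation settings are illustrative.

```python
import numpy as np
from scipy import ndimage

def align_by_moments(img):
    """Translate the mass centre to the frame centre, then rotate the
    principal axis of the second moments onto the column axis
    (up to the usual 180-degree ambiguity)."""
    h, w = img.shape
    centre = np.array([h / 2.0, w / 2.0])
    cm = np.array(ndimage.center_of_mass(img))
    shifted = ndimage.shift(img, centre - cm, order=1)

    rows, cols = np.mgrid[:h, :w]
    yc, xc = rows - centre[0], cols - centre[1]
    m = shifted / shifted.sum()
    mxx, myy, mxy = (m * xc * xc).sum(), (m * yc * yc).sum(), (m * xc * yc).sum()
    phi = 0.5 * np.arctan2(2.0 * mxy, mxx - myy)     # principal-axis angle

    # rotate the content by -phi: output(o) = input(R(phi) @ (o - c) + c)
    R = np.array([[np.cos(phi), np.sin(phi)],
                  [-np.sin(phi), np.cos(phi)]])       # acts on (row, col) vectors
    return ndimage.affine_transform(shifted, R, offset=centre - R @ centre, order=1)

# Class averaging: align each noisy copy, then average to suppress noise.
rng = np.random.default_rng(7)
clean = np.zeros((64, 64))
clean[28:36, 16:48] = 1.0                             # a simple elongated "particle"
noisy = [ndimage.rotate(clean, rng.uniform(-20, 20), reshape=False)
         + rng.normal(0.0, 0.3, clean.shape) for _ in range(50)]
class_average = np.mean([align_by_moments(im) for im in noisy], axis=0)
print("peak of the class average:", round(float(class_average.max()), 3))
```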
Effects of the Tongue-in-Groove Maneuver on Nasal Tip Rotation.
Antunes, Marcelo B; Quatela, Vito C
2018-03-27
Changing nasal tip rotation is a very common maneuver performed during rhinoplasty. Among the many techniques used to achieve this goal is the tongue-in-groove (TIG). This study addresses the long-term effect of the TIG on nasal tip rotation 1 year after rhinoplasty. The authors prospectively identified patients who underwent rhinoplasty with a TIG maneuver over a period of 1 year. The angle of rotation was measured along the nostril axis angle. The data were analyzed using the t-test and a linear regression model. Seventeen patients were included. The average preoperative tip rotation was 93.95° (SD, 3.12°). Immediate postoperative tip rotation averaged 114.47° (SD, 3.79°). At the 1-year follow-up appointment, the tip rotation averaged 106.55° (SD, 3.54°). There was a significant loss of rotation at the 1-year postoperative visit (p<0.0001), with an average loss of 7.9° (SD, 3.25°), which amounted to 6.8%. The preoperative rotation did not affect the amount of loss of rotation (p=0.04). It can be estimated that for every degree of rotation changed at surgery, approximately 0.35 degrees can be expected to be lost over the first year. TIG is a more dependable technique than those that rely on healing and contraction to obtain rotation. Our data demonstrated a significant loss of rotation during the first year. This suggests that the surgeon needs to slightly overcorrect the tip rotation to account for this loss.
Mushkudiani, Nino A; Hukkelhoven, Chantal W P M; Hernández, Adrián V; Murray, Gordon D; Choi, Sung C; Maas, Andrew I R; Steyerberg, Ewout W
2008-04-01
To describe the modeling techniques used for early prediction of outcome in traumatic brain injury (TBI) and to identify aspects for potential improvements. We reviewed key methodological aspects of studies published between 1970 and 2005 that proposed a prognostic model for the Glasgow Outcome Scale of TBI based on admission data. We included 31 papers. Twenty-four were single-center studies, and 22 reported on fewer than 500 patients. The median of the number of initially considered predictors was eight, and on average five of these were selected for the prognostic model, generally including age, Glasgow Coma Score (or only motor score), and pupillary reactivity. The most common statistical technique was logistic regression with stepwise selection of predictors. Model performance was often quantified by accuracy rate rather than by more appropriate measures such as the area under the receiver-operating characteristic curve. Model validity was addressed in 15 studies, but mostly used a simple split-sample approach, and external validation was performed in only four studies. Although most models agree on the three most important predictors, many were developed on small sample sizes within single centers and hence lack generalizability. Modeling strategies have to be improved, and include external validation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Y; Nasehi Tehrani, J; Wang, J
Purpose: To develop a Bio-recon technique by incorporating the biomechanical properties of anatomical structures into the deformation-based CBCT reconstruction process. Methods: Bio-recon reconstructs the CBCT by deforming a prior high-quality CT/CBCT using a deformation-vector-field (DVF). The DVF is solved through two alternating steps: 2D–3D deformation and finite-element-analysis based biomechanical modeling. 2D–3D deformation optimizes the DVF through an 'intensity-driven' approach, which updates the DVF to minimize intensity mismatches between the acquired projections and the simulated projections from the deformed CBCT. In contrast, biomechanical modeling optimizes the DVF through a 'biomechanical-feature-driven' approach, which updates the DVF based on the biophysical properties of anatomical structures. In general, Bio-recon extracts the 2D–3D deformation-optimized DVF at high-contrast structure boundaries, and uses it as the boundary condition to drive biomechanical modeling to optimize the overall DVF, especially at low-contrast regions. The optimized DVF is fed back into the 2D–3D deformation for further optimization, which forms an iterative loop. The efficacy of Bio-recon was evaluated on 11 lung patient cases, each with a prior CT and a new CT. Cone-beam projections were generated from the new CTs to reconstruct CBCTs, which were compared with the original new CTs for evaluation. 872 anatomical landmarks were also manually identified by a clinician on both the prior and new CTs to track the lung motion, which was used to evaluate the DVF accuracy. Results: Using 10 projections for reconstruction, the average (± s.d.) relative errors of reconstructed CBCTs by the clinical FDK technique, the 2D–3D deformation-only technique and Bio-recon were 46.5±5.9%, 12.0±2.3% and 10.4±1.3%, respectively. The average residual errors of DVF-tracked landmark motion by the 2D–3D deformation-only technique and Bio-recon were 5.6±4.3mm and 3.1±2.4mm, respectively. Conclusion: Bio-recon improved accuracy for both the reconstructed CBCT and the DVF. The accurate DVF can benefit multiple clinical practices, such as image-guided adaptive radiotherapy. We acknowledge funding support from the American Cancer Society (RSG-13-326-01-CCE), from the US National Institutes of Health (R01 EB020366), and from the Cancer Prevention and Research Institute of Texas (RP130109).
Vancamp, Tim; Levy, Robert M; Peña, Isaac; Pajuelo, Antonio
2017-10-01
While dorsal root ganglion (DRG) stimulation has been available in Europe and Australia for the past five years and in the United States for the past year, there are no published details concerning the optimal procedures for DRG lead implantation. We describe several techniques that can be applied to implant cylindrical leads over the DRG, highlighting some tips and tricks according to our experience. The focus is mainly on implantations in the lumbar area. We furthermore give some insights into the results we experienced in Spain as well as some worldwide numbers. A 14-gauge needle is placed using a "2-Level Technique (2-LT)" or exceptionally a "1-Level Technique (1-LT)" or a "Primary- or Secondary Technique" at the level of L5. The delivery sheath, loaded with the lead, is advanced toward the targeted neural foramen. The lead is placed over the dorsal aspect of the DRG. A strain relief loop is created in the epidural space. Sheath and needle are retracted and the lead is secured using an anchor or anchorless technique. In Spain, 87.2% (N = 78) of the selected patients have been successfully implanted. Seven (8.9%) had a negative trial and three (4.2%) were explanted. The average VAS score decreased from 8.8 to 3.3 and on average 94.5% of the pain area was covered. In our center's subjects (N = 47 patients, 60.3% of all implanted patients in Spain), VAS scores decreased from an average of 8.8 to 1.7 and pain coverage averaged 96.4%. We used an average of 1.8 electrodes. Worldwide, more than 4000 permanent cases have been successfully performed. We present implantation techniques whereby a percutaneous lead is placed over the DRG through the use of a specially designed delivery sheath. Further investigation of the safety, efficacy, and sustainability of clinical outcomes using these devices is warranted. © 2017 International Neuromodulation Society.
NASA Technical Reports Server (NTRS)
Kimes, D. S.; Kerber, A. G.; Sellers, P. J.
1993-01-01
Spatial averaging errors that may occur when creating hemispherical reflectance maps for different cover types using a direct nadir technique to estimate hemispherical reflectance are assessed by comparing the results with those obtained with a knowledge-based system called VEG (Kimes et al., 1991, 1992). It was found that the hemispherical reflectance errors obtained using VEG are much smaller than those obtained using the direct nadir techniques, depending on conditions. Suggestions are made concerning sampling and averaging strategies for creating hemispherical reflectance maps for photosynthetic, carbon cycle, and climate change studies.
Energy expenditure for massage therapists during performing selected classical massage techniques.
Więcek, Magdalena; Szymura, Jadwiga; Maciejczyk, Marcin; Szyguła, Zbigniew; Cempla, Jerzy; Borkowski, Mateusz
2018-04-11
The aim of the study is to evaluate the intensity of the effort and energy expenditure in the course of performing selected classical massage techniques and to assess the workload of a massage therapist during a work shift. Thirteen massage therapists (age: 21.9±1.9 years old, body mass index: 24.5±2.8 kg·m⁻², maximal oxygen consumption per body mass (VO2 max·BM⁻¹): 42.3±7 ml·kg⁻¹·min⁻¹) were involved in the study. The stress test consisted of performing selected classical massage techniques in the following order: stroking, kneading, shaking, beating, rubbing and direct vibration, during which the cardio-respiratory responses and the subjective rating of perceived exertion (RPE) were assessed. Intensity of exercise during each massage technique was expressed as % VO2 max, % maximal heart rate (HRmax) and % heart rate reserve (HRR). During each massage technique, net energy expenditure (EE) and energy cost of work using metabolic equivalent of task (MET) were determined. The intensity of exercise was 47.2±6.2% as expressed in terms of % VO2 max, and 74.7±3.2% as expressed in terms of % HRmax, while it was 47.8±1.7% on average when expressed in terms of % HRR during the whole procedure. While performing the classical massage, the average EE and MET were 5.6±0.9 kcal·min⁻¹ and 5.6±0.2, respectively. The average RPE calculated for the entire procedure was 12.1±1.4. During the performance of a classical massage technique for a single treatment during the study, the average total EE was 176.5±29.6 kcal, resulting in an energy expenditure of 336.2±56.4 kcal·h⁻¹. In the case of the classical massage technique, rubbing was the highest intensity exercise for the masseur who performed the massage (%VO2 max = 57.4±13.1%, HRmax = 79.6±7.7%, HRR = 58.5±13.1%, MET = 6.7±1.1, EE = 7.1±1.4 kcal·min⁻¹, RPE = 13.4±1.3). In the objective assessment, physical exercise while performing a single classical massage is characterized by hard work. The technique of classical massage during which the masseur performs the highest exercise intensity is rubbing. According to the classification of work intensity based on energy expenditure, the masseur's work is considered heavy during the whole work shift. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.
One- and Two-Equation Models to Simulate Ion Transport in Charged Porous Electrodes
Gabitto, Jorge; Tsouris, Costas
2018-01-19
Energy storage in porous capacitor materials, capacitive deionization (CDI) for water desalination, capacitive energy generation, geophysical applications, and removal of heavy ions from wastewater streams are some examples of processes where understanding of ionic transport processes in charged porous media is very important. In this work, one- and two-equation models are derived to simulate ionic transport processes in heterogeneous porous media comprising two different pore sizes. It is based on a theory for capacitive charging by ideally polarizable porous electrodes without Faradaic reactions or specific adsorption of ions. A two-step volume averaging technique is used to derive the averaged transport equations for multi-ionic systems without any further assumptions, such as thin electrical double layers or Donnan equilibrium. A comparison between both models is presented. The effective transport parameters for isotropic porous media are calculated by solving the corresponding closure problems. An approximate analytical procedure is proposed to solve the closure problems. Numerical and theoretical calculations show that the approximate analytical procedure yields adequate solutions. Lastly, a theoretical analysis shows that the value of interphase pseudo-transport coefficients determines which model to use.
A Two-dimensional Version of the Niblett-Bostick Transformation for Magnetotelluric Interpretations
NASA Astrophysics Data System (ADS)
Esparza, F.
2005-05-01
An imaging technique for two-dimensional magnetotelluric interpretations is developed following the well known Niblett-Bostick transformation for one-dimensional profiles. The algorithm uses a Hopfield artificial neural network to process series and parallel magnetotelluric impedances along with their analytical influence functions. The adaptive, weighted average approximation preserves part of the nonlinearity of the original problem. No initial model in the usual sense is required for the recovery of a functional model. Rather, the built-in relationship between model and data considers automatically, all at the same time, many half spaces whose electrical conductivities vary according to the data. The use of series and parallel impedances, a self-contained pair of invariants of the impedance tensor, avoids the need to decide on best angles of rotation for TE and TM separations. Field data from a given profile can thus be fed directly into the algorithm without much processing. The solutions offered by the Hopfield neural network correspond to spatial averages computed through rectangular windows that can be chosen at will. Applications of the algorithm to simple synthetic models and to the COPROD2 data set illustrate the performance of the approximation.
NASA Technical Reports Server (NTRS)
Franklin, Janet; Simonett, David
1988-01-01
The Li-Strahler reflectance model, driven by LANDSAT Thematic Mapper (TM) data, provided regional estimates of tree size and density within 20 percent of sampled values in two bioclimatic zones in West Africa. This model exploits tree geometry in an inversion technique to predict average tree size and density from reflectance data using a few simple parameters measured in the field (spatial pattern, shape, and size distribution of trees) and in the imagery (spectral signatures of scene components). Trees are treated as simply shaped objects, and multispectral reflectance of a pixel is assumed to be related only to the proportions of tree crown, shadow, and understory in the pixel. These, in turn, are a direct function of the number and size of trees, the solar illumination angle, and the spectral signatures of crown, shadow and understory. Given the variance in reflectance from pixel to pixel within a homogeneous area of woodland, caused by the variation in the number and size of trees, the model can be inverted to give estimates of average tree size and density. Because the inversion is sensitive to correct determination of component signatures, predictions are not accurate for small areas.
Hybrid LES/RANS Simulation of Transverse Sonic Injection into a Mach 2 Flow
NASA Technical Reports Server (NTRS)
Boles, John A.; Edwards, Jack R.; Baurle, Robert A.
2008-01-01
A computational study of transverse sonic injection of air and helium into a Mach 1.98 cross-flow is presented. A hybrid large-eddy simulation / Reynolds-averaged Navier-Stokes (LES/RANS) turbulence model is used, with the two-equation Menter baseline (Menter-BSL) closure for the RANS part of the flow and a Smagorinsky-type model for the LES part of the flow. A time-dependent blending function, dependent on modeled turbulence variables, is used to shift the closure from RANS to LES. Turbulent structures are initiated and sustained through the use of a recycling / rescaling technique. Two higher-order discretizations, the Piecewise Parabolic Method (PPM) of Colella and Woodward, and the SONIC-A ENO scheme of Suresh and Huyhn are used in the study. The results using the hybrid model show reasonably good agreement with time-averaged Mie scattering data and with experimental surface pressure distributions, even though the penetration of the jet into the cross-flow is slightly over-predicted. The LES/RANS results are used to examine the validity of commonly-used assumptions of constant Schmidt and Prandtl numbers in the intense mixing zone downstream of the injection location.
Omanović, Dario; Pižeta, Ivanka; Vukosav, Petra; Kovács, Elza; Frančišković-Bilinski, Stanislav; Tamás, János
2015-04-01
The distribution and speciation of elements along a stream subjected to neutralised acid mine drainage (NAMD) effluent waters (Mátra Mountain, Hungary; Toka stream) were studied by a multi-methodological approach: dissolved and particulate fractions of elements were determined by HR-ICPMS, whereas speciation was carried out by DGT, supported by speciation modelling performed by Visual MINTEQ. Before the NAMD discharge, the Toka is considered as a pristine stream, with averages of dissolved concentrations of elements lower than world averages. A considerable increase of element concentrations caused by effluent water inflow is followed by a sharp or gradual concentration decrease. A large difference between total and dissolved concentrations was found for Fe, Al, Pb, Cu, Zn and As in effluent water and at the first downstream site, with high correlation factors between elements in particulate fraction, indicating their common behaviour, governed by the formation of ferri(hydr)oxides (co)precipitates. In-situ speciation by the DGT technique revealed that Zn, Cd, Ni, Co, Mn and U were predominantly present as a labile, potentially bioavailable fraction (>90%). The formation of strong complexes with dissolved organic matter (DOM) resulted in a relatively low DGT-labile concentration of Cu (42%), while low DGT-labile concentrations of Fe (5%) and Pb (12%) were presumably caused by their existence in colloidal (particulate) fraction which is not accessible to DGT. Except for Fe and Pb, a very good agreement between DGT-labile concentrations and those predicted by the applied speciation model was obtained, with an average correlation factor of 0.96. This study showed that the in-situ DGT technique in combination with model-predicted speciation and classical analysis of samples could provide a reasonable set of data for the assessment of the water quality status (WQS), as well as for the more general study of overall behaviour of the elements in natural waters subjected to high element loads. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kumar, Naveen; Zhao, Cunlu; Klaassen, Aram; van den Ende, Dirk; Mugele, Frieder; Siretanu, Igor
2016-02-01
Most solid surfaces, in particular clay minerals and rock surfaces, acquire a surface charge upon exposure to an aqueous environment due to adsorption and/or desorption of ionic species. Macroscopic techniques such as titration and electrokinetic measurements are commonly used to determine the surface charge and ζ -potential of these surfaces. However, because of the macroscopic averaging character these techniques cannot do justice to the role of local heterogeneities on the surfaces. In this work, we use dynamic atomic force microscopy (AFM) to determine the distribution of surface charge on the two (gibbsite-like and silica-like) basal planes of kaolinite nanoparticles immersed in aqueous electrolyte with a lateral resolution of approximately 30 nm. The surface charge density is extracted from force-distance curves using DLVO theory in combination with surface complexation modeling. While the gibbsite-like and the silica-like facet display on average positive and negative surface charge values as expected, our measurements reveal lateral variations of more than a factor of two on seemingly atomically smooth terraces, even if high resolution AFM images clearly reveal the atomic lattice on the surface. These results suggest that simple surface complexation models of clays that attribute a unique surface chemistry and hence homogeneous surface charge densities to basal planes may miss important aspects of real clay surfaces.
Estimating the vibration level of an L-shaped beam using power flow techniques
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.; Mccollum, M.; Rassineux, J. L.; Gilbert, T.
1986-01-01
The response of one component of an L-shaped beam, with point force excitation on the other component, is estimated using the power flow method. The transmitted power from the source component to the receiver component is expressed in terms of the transfer and input mobilities at the excitation point and the joint. The response is estimated both in narrow frequency bands, using the exact geometry of the beams, and as a frequency averaged response using infinite beam models. The results using this power flow technique are compared to the results obtained using finite element analysis (FEA) of the L-shaped beam for the low frequency response and to results obtained using statistical energy analysis (SEA) for the high frequencies. The agreement between the FEA results and the power flow method results at low frequencies is very good. SEA results are in terms of frequency averaged levels and these are in perfect agreement with the results obtained using the infinite beam models in the power flow method. The narrow frequency band results from the power flow method also converge to the SEA results at high frequencies. The advantage of the power flow method is that detail of the response can be retained while reducing computation time, which will allow the narrow frequency band analysis of the response to be extended to higher frequencies.
NASA Astrophysics Data System (ADS)
Parris, B. A.; Egbert, G. D.; Key, K.; Livelybrooks, D.
2016-12-01
Magnetotellurics (MT) is an electromagnetic technique used to model the inner Earth's electrical conductivity structure. MT data can be analyzed using iterative, linearized inversion techniques to generate models imaging, in particular, conductive partial melts and aqueous fluids that play critical roles in subduction zone processes and volcanism. For example, the Magnetotelluric Observations of Cascadia using a Huge Array (MOCHA) experiment provides amphibious data useful for imaging subducted fluids from trench to mantle wedge corner. When using MOD3DEM (Egbert et al. 2012), a finite difference inversion package, we have encountered problems inverting sea-floor stations in particular, due to strong nearby conductivity gradients. As a work-around, we have found that denser, finer model grids near the land-sea interface produce better inversions, as characterized by reduced data residuals. This is thought to be partly due to our ability to more accurately capture topography and bathymetry. We are experimenting with improved interpolation schemes that more accurately track EM fields across cell boundaries, with an eye to enhancing the accuracy of the simulated responses and, thus, inversion results. We are adapting how MOD3DEM interpolates EM fields in two ways. The first seeks to improve weighting functions for interpolants to better address current continuity across grid boundaries. Electric fields are interpolated using a tri-linear spline technique, where the eight nearest electric field estimates are each given weights determined by the technique, a kind of weighted average. We are modifying these weights to include cross-boundary conductivity ratios to better model current continuity. We are also adapting some of the techniques discussed in Shantsev et al. (2014) to enhance the accuracy of the interpolated fields calculated by our forward solver, as well as to better approximate the sensitivities passed to the software's Jacobian that are used to generate a new forward model during each iteration of the inversion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Zhe Jay; Bongiorni, Paul; Nath, Ravinder
Purpose: Although several dosimetric characterizations using Monte Carlo simulation and thermoluminescent dosimetry (TLD) have been reported for the new Advantage Pd-103 source (IsoAid, LLC, Port Richey, FL), no AAPM consensus value has been established for the dosimetric parameters of the source. The aim of this work was to perform an additional dose-rate constant (Λ) determination using a recently established photon spectrometry technique (PST) that is independent of the published TLD and Monte Carlo techniques. Methods: Three Model IAPD-103A Advantage Pd-103 sources were used in this study. The relative photon energy spectrum emitted by each source along the transverse axis was measured using a high-resolution germanium spectrometer designed for low-energy photons. For each source, the dose-rate constant was determined from its emitted energy spectrum. The PST-determined dose-rate constant (Λ_PST) was then compared to those determined by TLD (Λ_TLD) and Monte Carlo (Λ_MC) techniques. A likely consensus Λ value was estimated as the arithmetic mean of the average Λ values determined by each of the three different techniques. Results: The average Λ_PST value for the three Advantage sources was found to be (0.676 ± 0.026) cGy h⁻¹ U⁻¹. Intersource variation in Λ_PST was less than 0.01%. The Λ_PST was within 2% of the reported Λ_MC values determined by the PTRAN, EGSnrc, and MCNP5 codes. It was 3.4% lower than the reported Λ_TLD. A likely consensus Λ value was estimated to be (0.688 ± 0.026) cGy h⁻¹ U⁻¹, similar to the AAPM consensus values recommended currently for the Theragenics (Buford, GA) Model 200 (0.686 ± 0.033) cGy h⁻¹ U⁻¹, the NASI (Chatsworth, CA) Model MED3633 (0.688 ± 0.033) cGy h⁻¹ U⁻¹, and the Best Medical (Springfield, VA) Model 2335 (0.685 ± 0.033) cGy h⁻¹ U⁻¹ ¹⁰³Pd sources. Conclusions: An independent Λ determination has been performed for the Advantage Pd-103 source. The Λ_PST obtained in this work provides additional information needed for establishing a more accurate consensus Λ value for the Advantage Pd-103 source.
Virtual Sensor of Surface Electromyography in a New Extensive Fault-Tolerant Classification System.
de Moura, Karina de O A; Balbinot, Alexandre
2018-05-01
A few prosthetic control systems in the scientific literature obtain pattern recognition algorithms adapted to changes that occur in the myoelectric signal over time and, frequently, such systems are not natural and intuitive. These are some of the several challenges for myoelectric prostheses for everyday use. The concept of the virtual sensor, which has as its fundamental objective to estimate unavailable measures based on other available measures, is being used in other fields of research. The virtual sensor technique applied to surface electromyography can help to minimize these problems, typically related to the degradation of the myoelectric signal that usually leads to a decrease in the classification accuracy of the movements characterized by computational intelligent systems. This paper presents a virtual sensor in a new extensive fault-tolerant classification system to maintain the classification accuracy after the occurrence of the following contaminants: ECG interference, electrode displacement, movement artifacts, power line interference, and saturation. The Time-Varying Autoregressive Moving Average (TVARMA) and Time-Varying Kalman filter (TVK) models are compared to define the most robust model for the virtual sensor. Movement classification results are presented comparing the usual classification techniques with the method of degraded-signal replacement and classifier retraining. The experimental results were evaluated for these five noise types in 16 surface electromyography (sEMG) channel degradation case studies. Without classifier retraining techniques, the proposed system recovered mean classification accuracy by 4% to 38% for electrode displacement, movement artifacts, and saturation noise. The best mean classification, considering all signal contaminants and channel combinations evaluated, was obtained with the retraining method, replacing the degraded channel by the virtual sensor TVARMA model. This method recovered the classification accuracy after the degradations, reaching an average of 5.7% below the classification of the clean signal, that is, the original signal without contaminants. Moreover, the proposed intelligent technique minimizes the impact on motion classification caused by signal contamination related to degrading events over time. The virtual sensor model and the algorithm optimization still need further development to increase the clinical applicability of myoelectric prostheses, but the approach already presents robust results that enable research with virtual sensors on biological signals with stochastic behavior.
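As a rough illustration of the virtual-sensor idea, the sketch below uses a scalar random-walk Kalman filter to track one sEMG feature channel and substitutes the filter prediction when the channel is flagged as contaminated. This is a minimal stand-in, assuming a simple state model; the full TVK/TVARMA formulations of the paper are not reproduced, and all names and noise variances are illustrative.

```python
import numpy as np

def kalman_virtual_sensor(z, contaminated, q=1e-4, r=1e-2):
    """Track one sEMG feature with a scalar random-walk Kalman filter.

    z            : observed feature sequence (possibly contaminated)
    contaminated : boolean mask; True where the channel is degraded
    q, r         : assumed process and measurement noise variances
    Returns the virtual-sensor estimate used in place of degraded samples.
    """
    x, p = float(z[0]), 1.0       # initial state and error variance
    est = np.empty(len(z))
    for k in range(len(z)):
        p = p + q                 # predict step (random-walk transition)
        if contaminated[k]:
            est[k] = x            # no trustworthy measurement: keep prediction
        else:
            K = p / (p + r)       # Kalman gain
            x = x + K * (z[k] - x)
            p = (1 - K) * p
            est[k] = x
    return est

# toy usage: a slowly varying RMS feature with a saturated stretch
t = np.linspace(0, 10, 500)
z = 0.5 + 0.2 * np.sin(0.6 * t) + 0.02 * np.random.randn(t.size)
mask = (t > 4) & (t < 6)          # pretend this stretch is saturated
z[mask] = 1.0
virtual = kalman_virtual_sensor(z, mask)
```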
Virtual Sensor of Surface Electromyography in a New Extensive Fault-Tolerant Classification System
Balbinot, Alexandre
2018-01-01
A few prosthetic control systems in the scientific literature obtain pattern recognition algorithms adapted to changes that occur in the myoelectric signal over time and, frequently, such systems are not natural and intuitive. These are some of the several challenges for myoelectric prostheses for everyday use. The concept of the virtual sensor, which has as its fundamental objective to estimate unavailable measures based on other available measures, is being used in other fields of research. The virtual sensor technique applied to surface electromyography can help to minimize these problems, typically related to the degradation of the myoelectric signal that usually leads to a decrease in the classification accuracy of the movements characterized by computational intelligent systems. This paper presents a virtual sensor in a new extensive fault-tolerant classification system to maintain the classification accuracy after the occurrence of the following contaminants: ECG interference, electrode displacement, movement artifacts, power line interference, and saturation. The Time-Varying Autoregressive Moving Average (TVARMA) and Time-Varying Kalman filter (TVK) models are compared to define the most robust model for the virtual sensor. Movement classification results are presented comparing the usual classification techniques with the method of degraded-signal replacement and classifier retraining. The experimental results were evaluated for these five noise types in 16 surface electromyography (sEMG) channel degradation case studies. Without classifier retraining techniques, the proposed system recovered mean classification accuracy by 4% to 38% for electrode displacement, movement artifacts, and saturation noise. The best mean classification, considering all signal contaminants and channel combinations evaluated, was obtained with the retraining method, replacing the degraded channel by the virtual sensor TVARMA model. This method recovered the classification accuracy after the degradations, reaching an average of 5.7% below the classification of the clean signal, that is, the original signal without contaminants. Moreover, the proposed intelligent technique minimizes the impact on motion classification caused by signal contamination related to degrading events over time. The virtual sensor model and the algorithm optimization still need further development to increase the clinical applicability of myoelectric prostheses, but the approach already presents robust results that enable research with virtual sensors on biological signals with stochastic behavior. PMID:29723994
Khaki, M; Forootan, E; Kuhn, M; Awange, J; Papa, F; Shum, C K
2018-06-01
Climate change can significantly influence terrestrial water changes around the world particularly in places that have been proven to be more vulnerable such as Bangladesh. In the past few decades, climate impacts, together with those of excessive human water use have changed the country's water availability structure. In this study, we use multi-mission remotely sensed measurements along with a hydrological model to separately analyze groundwater and soil moisture variations for the period 2003-2013, and their interactions with rainfall in Bangladesh. To improve the model's estimates of water storages, terrestrial water storage (TWS) data obtained from the Gravity Recovery And Climate Experiment (GRACE) satellite mission are assimilated into the World-Wide Water Resources Assessment (W3RA) model using the ensemble-based sequential technique of the Square Root Analysis (SQRA) filter. We investigate the capability of the data assimilation approach to use a non-regional hydrological model for a regional case study. Based on these estimates, we investigate relationships between the model derived sub-surface water storage changes and remotely sensed precipitations, as well as altimetry-derived river level variations in Bangladesh by applying the empirical mode decomposition (EMD) method. A larger correlation is found between river level heights and rainfalls (78% on average) in comparison to groundwater storage variations and rainfalls (57% on average). The results indicate a significant decline in groundwater storage (∼32% reduction) for Bangladesh between 2003 and 2013, which is equivalent to an average rate of 8.73 ± 2.45mm/year. Copyright © 2018 Elsevier B.V. All rights reserved.
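The study assimilates GRACE terrestrial water storage into the W3RA model with an ensemble-based square-root analysis (SQRA) filter. The exact SQRA formulation is not given in the abstract; the sketch below shows a generic serial ensemble square-root update for a single scalar observation (in the Whitaker-Hamill style), which conveys the idea of updating the ensemble mean and perturbations without perturbing the observation. Variable names and the observation operator are assumptions.

```python
import numpy as np

def ensrf_update(ensemble, H, y_obs, r):
    """Serial ensemble square-root filter update for one scalar observation.

    ensemble : (n_state, n_members) array of state vectors (e.g., storage compartments)
    H        : (n_state,) observation operator mapping state to the observed TWS
    y_obs    : observed value (e.g., a GRACE TWS anomaly)
    r        : observation-error variance
    """
    xm = ensemble.mean(axis=1, keepdims=True)
    Xp = ensemble - xm                         # ensemble perturbations
    n = ensemble.shape[1]
    hx = H @ ensemble                          # ensemble in observation space
    hxm = hx.mean()
    hxp = hx - hxm
    phT = Xp @ hxp / (n - 1)                   # cov(state, predicted obs)
    hph = hxp @ hxp / (n - 1)                  # variance of predicted obs
    K = phT / (hph + r)                        # Kalman gain
    xm_new = xm + K[:, None] * (y_obs - hxm)   # mean update
    alpha = 1.0 / (1.0 + np.sqrt(r / (hph + r)))
    Xp_new = Xp - alpha * K[:, None] * hxp[None, :]   # square-root perturbation update
    return xm_new + Xp_new
```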
Validity of Postexercise Measurements to Estimate Peak VO2 in 200-m and 400-m Maximal Swims.
Rodríguez, Ferran A; Chaverri, Diego; Iglesias, Xavier; Schuller, Thorsten; Hoffmann, Uwe
2017-06-01
To assess the validity of postexercise measurements to estimate oxygen uptake (V̇O2) during swimming, we compared V̇O2 measured directly during an all-out 200-m swim with measurements estimated during 200-m and 400-m maximal tests using several methods, including a recent heart rate (HR)/V̇O2 modelling procedure. Twenty-five elite swimmers performed a 200-m maximal swim where V̇O2 was measured using a swimming snorkel connected to a gas analyzer. The criterion variable was V̇O2 in the last 20 s of effort, which was compared with the following V̇O2peak estimates: 1) first 20-s average; 2) linear backward extrapolation (BE) of the first 20 and 30 s, 3×20-s, 4×20-s, and 3×20-s or 4×20-s averages; 3) semilogarithmic BE at the same intervals; and 4) predicted V̇O2peak using mathematical modelling of 0-20 s and 5-20 s during recovery. In 2 series of experiments, both of the HR/V̇O2 modelled values most accurately predicted the V̇O2peak (mean Δ = 0.1-1.6%). The BE methods overestimated the criterion values by 4-14%, and the single 20-s measurement technique yielded an underestimation of 3.4%. Our results confirm that the HR/V̇O2 modelling technique, used over a maximal 200-m or 400-m swim, is a valid and accurate procedure for assessing cardiorespiratory and metabolic fitness in competitive swimmers. © Georg Thieme Verlag KG Stuttgart · New York.
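For reference, a minimal sketch of the linear and semilogarithmic backward-extrapolation (BE) estimates compared with a single 20-s average, using hypothetical 20-s-interval recovery V̇O2 values; the HR/V̇O2 modelling procedure itself is not reproduced here, and the numbers are placeholders.

```python
import numpy as np

# hypothetical 20-s averaged recovery VO2 values (L/min), centred at 10, 30, 50, 70 s
t = np.array([10.0, 30.0, 50.0, 70.0])
vo2 = np.array([3.95, 3.60, 3.20, 2.85])

# single 20-s measurement technique: just the first post-exercise average
vo2_first20 = vo2[0]

# linear backward extrapolation over the first 3x20-s intervals to t = 0
slope, intercept = np.polyfit(t[:3], vo2[:3], 1)
vo2_be_3x20 = intercept                     # value of the fitted line at t = 0

# semilogarithmic BE: fit a line to ln(VO2) and exponentiate at t = 0
slope_log, intercept_log = np.polyfit(t[:3], np.log(vo2[:3]), 1)
vo2_semilog_3x20 = np.exp(intercept_log)

print(vo2_first20, vo2_be_3x20, vo2_semilog_3x20)
```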
NASA Astrophysics Data System (ADS)
Herrington, C.; Gonzalez-Pinzon, R.
2014-12-01
Streamflow through the Middle Rio Grande Valley is largely driven by snowmelt pulses and monsoonal precipitation events originating in the mountain highlands of New Mexico (NM) and Colorado. Water managers rely on results from storage/runoff models to distribute this resource statewide and to allocate compact deliveries to Texas under the Rio Grande Compact agreement. Prevalent drought conditions and the added uncertainty of climate change effects in the American southwest have led to a greater call for accuracy in storage model parameter inputs. While precipitation and evapotranspiration measurements are subject to scaling and representativeness errors, streamflow readings remain relatively dependable and allow watershed-average water budget estimates. Our study seeks to show that by "Doing Hydrology Backwards" we can effectively estimate watershed-average precipitation and evapotranspiration fluxes in semi-arid landscapes of NM using fluctuations in streamflow data alone. We tested this method in the Valles Caldera National Preserve (VCNP) in the Jemez Mountains of central NM. This method will be further verified by using existing weather stations and eddy-covariance towers within the VCNP to obtain measured values to compare against our model results. This study contributes to further validate this technique as being successful in humid and semi-arid catchments as the method has already been verified as effective in the former setting.
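A condensed sketch of the "Doing Hydrology Backwards" idea (after Kirchner, 2009), assuming only discharge Q is available: the streamflow sensitivity function g(Q) is estimated from recession periods, when precipitation and evapotranspiration are assumed negligible, and then inverted to back out the watershed-average forcing P - ET. The recession-selection rule and the power-law form of g(Q) are illustrative assumptions, not the study's exact procedure.

```python
import numpy as np

def hydrology_backwards(Q, dt=1.0):
    """Estimate watershed-average P - ET (same units as Q) from discharge alone."""
    dQdt = np.gradient(Q, dt)

    # 1) recession periods: falling limbs, assumed to have negligible P and ET
    rec = dQdt < 0
    # fit log(-dQ/dt) = a + b*log(Q)  =>  g(Q) = -dQ/dt / Q = exp(a) * Q**(b - 1)
    slope, intercept = np.polyfit(np.log(Q[rec]), np.log(-dQdt[rec]), 1)
    g = lambda q: np.exp(intercept) * q ** (slope - 1.0)

    # 2) invert the water balance dQ/dt = g(Q) * (P - ET - Q) at every time step
    return Q + dQdt / g(Q)
```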
Motion patterns in acupuncture needle manipulation.
Seo, Yoonjeong; Lee, In-Seon; Jung, Won-Mo; Ryu, Ho-Sun; Lim, Jinwoong; Ryu, Yeon-Hee; Kang, Jung-Won; Chae, Younbyoung
2014-10-01
In clinical practice, acupuncture manipulation is highly individualised for each practitioner. Before we establish a standard for acupuncture manipulation, it is important to understand completely the manifestations of acupuncture manipulation in the actual clinic. To examine motion patterns during acupuncture manipulation, we generated a fitted model of practitioners' motion patterns and evaluated their consistencies in acupuncture manipulation. Using a motion sensor, we obtained real-time motion data from eight experienced practitioners while they conducted acupuncture manipulation using their own techniques. We calculated the average amplitude and duration of a sampled motion unit for each practitioner and, after normalisation, we generated a true regression curve of motion patterns for each practitioner using a generalised additive mixed modelling (GAMM). We observed significant differences in rotation amplitude and duration in motion samples among practitioners. GAMM showed marked variations in average regression curves of motion patterns among practitioners but there was strong consistency in motion parameters for individual practitioners. The fitted regression model showed that the true regression curve accounted for an average of 50.2% of variance in the motion pattern for each practitioner. Our findings suggest that there is great inter-individual variability between practitioners, but remarkable intra-individual consistency within each practitioner. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Lee, Jonathan K.; Froehlich, David C.
1987-01-01
Published literature on the application of the finite-element method to solving the equations of two-dimensional surface-water flow in the horizontal plane is reviewed in this report. The finite-element method is ideally suited to modeling two-dimensional flow over complex topography with spatially variable resistance. A two-dimensional finite-element surface-water flow model with depth and vertically averaged velocity components as dependent variables allows the user great flexibility in defining geometric features such as the boundaries of a water body, channels, islands, dikes, and embankments. The following topics are reviewed in this report: alternative formulations of the equations of two-dimensional surface-water flow in the horizontal plane; basic concepts of the finite-element method; discretization of the flow domain and representation of the dependent flow variables; treatment of boundary conditions; discretization of the time domain; methods for modeling bottom, surface, and lateral stresses; approaches to solving systems of nonlinear equations; techniques for solving systems of linear equations; finite-element alternatives to Galerkin's method of weighted residuals; techniques of model validation; and preparation of model input data. References are listed in the final chapter.
Prasarn, Mark L; Meyers, Kathleen N; Wilkin, Geoffrey; Wellman, David S; Chan, Daniel B; Ahn, Jaimo; Lorich, Dean G; Helfet, David L
2015-12-01
We sought to evaluate clinical and biomechanical outcomes of dual mini-fragment plate fixation for clavicle fractures. We hypothesized that this technique would produce an anatomical reduction with good clinical outcomes, be well tolerated by patients, and demonstrate equivalent biomechanics to single plating. Dual mini-fragment plating was performed for 17 isolated, displaced midshaft clavicle fractures. Functional outcomes and complications were retrospectively reviewed. A sawbones model compared dual plating biomechanics to a (1) superior 3.5-mm locking reconstruction plate, or (2) antero-inferior 3.5-mm locking reconstruction plate. On biomechanical testing, with anterior loading, dual plating was significantly more rigid than single locked anterior-plating (p = 0.02) but less rigid than single locked superior-plating (p = 0.001). With superior loading, dual plating trended toward higher rigidity versus single locked superior-plating (p = 0.07) but was less rigid than single locked anterior-plating (p = 0.03). No statistically significant differences in axial loading (p = 0.27) or torsion (p = 0.23) were detected. Average patient follow-up was 16.1 months (12-38). Anatomic reduction was achieved and maintained through final healing (average 14.7 weeks). No patient underwent hardware removal. Average 1-year DASH score was 4.0 (completed in 88 %). Displaced midshaft clavicle fractures can be effectively managed with dual mini-fragment plating. This technique results in high union rates and excellent clinical outcomes. Compared to single plating, dual plating is biomechanically equivalent in axial loading and torsion, yet offers better multi-planar bending stiffness despite the use of smaller plates. This technique may decrease the need for secondary surgery due to implant prominence and may aid in fracture reduction by buttressing butterfly fragments in two planes.
Luo, Wenbin; Huang, Lanfeng; Liu, He; Qu, Wenrui; Zhao, Xin; Wang, Chenyu; Li, Chen; Yu, Tao; Han, Qing; Wang, Jincheng; Qin, Yanguo
2017-04-07
BACKGROUND We explored the application of 3-dimensional (3D) printing technology in treating giant cell tumors (GCT) of the proximal tibia. A tibia block was designed and produced through 3D printing technology. We expected that this 3D-printed block would fill the bone defect after en-bloc resection. Importantly, the block, combined with a standard knee joint prosthesis, provided attachments for collateral ligaments of the knee, which can maintain knee stability. MATERIAL AND METHODS A computed tomography (CT) scan was taken of both knee joints in 4 patients with GCT of the proximal tibia. We developed a novel technique - the real-size 3D-printed proximal tibia model - to design preoperative treatment plans. Hence, with the application of 3D printing technology, a customized proximal tibia block could be designed for each patient individually, which fixed the bone defect, combined with standard knee prosthesis. RESULTS In all 4 cases, the 3D-printed block fitted the bone defect precisely. The motion range of the affected knee was 90 degrees on average, and the soft tissue balance and stability of the knee were good. After an average 7-month follow-up, the MSTS score was 19 on average. No sign of prosthesis fracture, loosening, or other relevant complications were detected. CONCLUSIONS This technique can be used to treat GCT of the proximal tibia when it is hard to achieve soft tissue balance after tumor resection. 3D printing technology simplified the design and manufacturing progress of custom-made orthopedic medical instruments. This new surgical technique could be much more widely applied because of 3D printing technology.
Jeon, Young-Chan; Jeong, Chang-Mo
2017-01-01
PURPOSE The purpose of this study was to compare the fit of cast gold crowns fabricated from the conventional and the digital impression technique. MATERIALS AND METHODS Artificial tooth in a master model and abutment teeth in ten patients were restored with cast gold crowns fabricated from the digital and the conventional impression technique. The forty silicone replicas were cut in three sections; each section was evaluated in nine points. The measurement was carried out by using a measuring microscope and I-Soultion. Data from the silicone replica were analyzed and all tests were performed with α-level of 0.05. RESULTS 1. The average gaps of cast gold crowns fabricated from the digital impression technique were larger than those of the conventional impression technique significantly. 2. In marginal and internal axial gap of cast gold crowns, no statistical differences were found between the two impression techniques. 3. The internal occlusal gaps of cast gold crowns fabricated from the digital impression technique were larger than those of the conventional impression technique significantly. CONCLUSION Both prostheses presented clinically acceptable results with comparing the fit. The prostheses fabricated from the digital impression technique showed more gaps, in respect of occlusal surface. PMID:28243386
Local TEC modelling and forecasting using neural networks
NASA Astrophysics Data System (ADS)
Tebabal, A.; Radicella, S. M.; Nigussie, M.; Damtie, B.; Nava, B.; Yizengaw, E.
2018-07-01
Modelling the Earth's ionospheric characteristics is the focal task for the ionospheric community to mitigate its effect on the radio communication, and satellite navigation. However, several aspects of modelling are still challenging, for example, the storm time characteristics. This paper presents modelling efforts of TEC taking into account solar and geomagnetic activity, time of the day and day of the year using neural networks (NNs) modelling technique. The NNs have been designed with GPS-TEC measured data from low and mid-latitude GPS stations. The training was conducted using the data obtained for the period from 2011 to 2014. The model prediction accuracy was evaluated using data of year 2015. The model results show that diurnal and seasonal trend of the GPS-TEC is well reproduced by the model for the two stations. The seasonal characteristics of GPS-TEC is compared with NN and NeQuick 2 models prediction when the latter one is driven by the monthly average value of solar flux. It is found that NN model performs better than the corresponding NeQuick 2 model for low latitude region. For the mid-latitude both NN and NeQuick 2 models reproduce the average characteristics of TEC variability quite successfully. An attempt of one day ahead forecast of TEC at the two locations has been made by introducing as drivers previous day solar flux and geomagnetic index values. The results show that a reasonable day ahead forecast of local TEC can be achieved.
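A minimal sketch of the kind of neural-network TEC model described, using scikit-learn's MLPRegressor; the network size and the encoding of the drivers (day of year, local time, solar flux, geomagnetic index) are assumptions, with the cyclic inputs encoded as sine/cosine pairs, and the training data below are synthetic placeholders rather than GPS-TEC records.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def build_features(doy, hour, f107, kp):
    """Encode drivers: cyclic day-of-year and hour plus solar/geomagnetic indices."""
    return np.column_stack([
        np.sin(2 * np.pi * doy / 365.25), np.cos(2 * np.pi * doy / 365.25),
        np.sin(2 * np.pi * hour / 24.0),  np.cos(2 * np.pi * hour / 24.0),
        f107, kp,
    ])

# synthetic stand-in for the 2011-2014 training set (real GPS-TEC data would go here)
rng = np.random.default_rng(0)
doy, hour = rng.uniform(1, 366, 5000), rng.uniform(0, 24, 5000)
f107, kp = rng.uniform(70, 200, 5000), rng.uniform(0, 6, 5000)
tec = 10 + 0.1 * f107 + 8 * np.sin(2 * np.pi * hour / 24)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                                   random_state=0))
model.fit(build_features(doy, hour, f107, kp), tec)
tec_pred = model.predict(build_features(doy, hour, f107, kp))
```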
Local TEC Modelling and Forecasting using Neural Networks
NASA Astrophysics Data System (ADS)
Tebabal, A.; Radicella, S. M.; Nigussie, M.; Damtie, B.; Nava, B.; Yizengaw, E.
2017-12-01
Abstract Modelling the Earth's ionospheric characteristics is the focal task for the ionospheric community to mitigate its effect on the radio communication, satellite navigation and technologies. However, several aspects of modelling are still challenging, for example, the storm time characteristics. This paper presents modelling efforts of TEC taking into account solar and geomagnetic activity, time of the day and day of the year using neural networks (NNs) modelling technique. The NNs have been designed with GPS-TEC measured data from low and mid-latitude GPS stations. The training was conducted using the data obtained for the period from 2011 to 2014. The model prediction accuracy was evaluated using data of year 2015. The model results show that diurnal and seasonal trend of the GPS-TEC is well reproduced by the model for the two stations. The seasonal characteristics of GPS-TEC is compared with NN and NeQuick 2 models prediction when the latter one is driven by the monthly average value of solar flux. It is found that NN model performs better than the corresponding NeQuick 2 model for low latitude region. For the mid-latitude both NN and NeQuick 2 models reproduce the average characteristics of TEC variability quite successfully. An attempt of one day ahead forecast of TEC at the two locations has been made by introducing as driver previous day solar flux and geomagnetic index values. The results show that a reasonable day ahead forecast of local TEC can be achieved.
E-Learning Development Process for Operating System Course in Vocational School
NASA Astrophysics Data System (ADS)
Tuna, J. R.; Manoppo, C. T. M.; Kaparang, D. R.; Mewengkang, A.
2018-02-01
This development research aims to produce learning media in the form of E-Learning media using Edmodo that is interesting, efficient, and effective for the operating system subject for students of class X TKJ in SMKN 3 Manado. The development model used was developed by S. Thiagarajan et al., often known as the Four-D model, but this research uses only three stages (define, design, and develop). The product was trialled twice (limited and wide). The experimental design used was a before-after design. Data collection techniques were interviews, questionnaires, and tests. The analytical technique used in this development research is descriptive qualitative, covering analyses of the attractiveness, efficiency, and effectiveness of the E-Learning media using Edmodo. The media attractiveness test was measured using a student response questionnaire. The media efficiency test was based on interviews between the researchers and the operating system subject teachers and students of class X TKJ 1 at SMKN 3 Manado, while the media effectiveness test was based on student learning outcomes before and after applying the E-Learning media using Edmodo, tested with a paired sample t-test. After the media was piloted in the limited and broad trials, the results show that the E-Learning media using Edmodo is interesting, efficient, and effective. This is shown by an average student response score of 88.15%, interpreted as very interesting, while the average student learning outcome increased from 76.33 to 82.93. The paired sample t-test gave t = 11.217 ≥ t_table = 2.045 with a significance value of 0.000 < α = 0.050, showing that the E-Learning media using Edmodo is effective.
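The effectiveness test described is a paired-sample t-test on learning outcomes before and after using the Edmodo media. A small illustration of the same test with SciPy; the scores below are placeholders, not the study data.

```python
import numpy as np
from scipy import stats

before = np.array([70, 75, 80, 72, 78, 74, 79, 81, 76, 73])   # hypothetical pre-test scores
after  = np.array([78, 82, 86, 80, 85, 81, 84, 88, 83, 82])   # hypothetical post-test scores

t_stat, p_value = stats.ttest_rel(after, before)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# Reject the null hypothesis of no improvement when p < 0.05 (alpha = 0.050).
```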
Numerical analysis of the wake of a 10kW HAWT
NASA Astrophysics Data System (ADS)
Gong, S. G.; Deng, Y. B.; Xie, G. L.; Zhang, J. P.
2017-01-01
With the rise of the wind power industry and the ever-growing scale of wind farms, research on the wake performance of wind turbines provides important guidance for the overall arrangement of turbines in large wind farms. A wake simulation model of a 10 kW horizontal-axis wind turbine is presented on the basis of the Reynolds-Averaged Navier-Stokes (RANS) equations and the RNG k-ε turbulence model, applied to the rotational fluid flow. The sliding mesh technique in the ANSYS CFX software is used to solve the coupled velocity and pressure equations. The characteristics of the average velocity in the wake zone at the rated inlet wind speed and different rotor rotational speeds have been investigated. Based on the analysis results, it is proposed that the horizontal spacing between wind turbines be kept within two rotor radii and the longitudinal spacing within five rotor radii. Other results of importance for large wind farms have also been obtained.
Improvements in sub-grid, microphysics averages using quadrature based approaches
NASA Astrophysics Data System (ADS)
Chowdhary, K.; Debusschere, B.; Larson, V. E.
2013-12-01
Sub-grid variability in microphysical processes plays a critical role in atmospheric climate models. In order to account for this sub-grid variability, Larson and Schanen (2013) propose placing a probability density function on the sub-grid cloud microphysics quantities, e.g. autoconversion rate, essentially interpreting the cloud microphysics quantities as a random variable in each grid box. Random sampling techniques, e.g. Monte Carlo and Latin Hypercube, can be used to calculate statistics, e.g. averages, on the microphysics quantities, which then feed back into the model dynamics on the coarse scale. We propose an alternate approach using numerical quadrature methods based on deterministic sampling points to compute the statistical moments of microphysics quantities in each grid box. We have performed a preliminary test on the Kessler autoconversion formula, and, upon comparison with Latin Hypercube sampling, our approach shows an increased level of accuracy with a reduction in sample size by almost two orders of magnitude. Application to other microphysics processes is the subject of ongoing research.
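As a minimal illustration of the quadrature idea, the sketch below computes the grid-box average of a Kessler-type autoconversion rate over an assumed normal distribution of cloud water using Gauss-Hermite quadrature, and compares it with Monte Carlo sampling. The distribution, threshold, and rate constant are illustrative assumptions, not values from the paper.

```python
import numpy as np

def kessler_autoconversion(qc, k=1e-3, qc_crit=0.5e-3):
    """Kessler-type autoconversion rate: linear above a cloud-water threshold."""
    return k * np.maximum(qc - qc_crit, 0.0)

# assumed sub-grid PDF of cloud-water mixing ratio: Normal(mu, sigma)
mu, sigma = 0.6e-3, 0.3e-3

# Gauss-Hermite quadrature for E[f(X)], X ~ N(mu, sigma^2), with 8 deterministic points
nodes, weights = np.polynomial.hermite.hermgauss(8)
gh_mean = np.sum(weights * kessler_autoconversion(mu + sigma * np.sqrt(2) * nodes)) / np.sqrt(np.pi)

# Monte Carlo reference with far more samples
samples = np.random.default_rng(1).normal(mu, sigma, 100_000)
mc_mean = kessler_autoconversion(samples).mean()

print(gh_mean, mc_mean)   # the 8-point quadrature is already close to the MC value
```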
NASA Astrophysics Data System (ADS)
Hussain, Azham; Mkpojiogu, Emmanuel O. C.; Yusof, Muhammad Mat
2016-08-01
This paper reports the effect of proposed software products features on the satisfaction and dissatisfaction of potential customers of proposed software products. Kano model's functional and dysfunctional technique was used along with Berger et al.'s customer satisfaction coefficients. The result shows that only two features performed the most in influencing the satisfaction and dissatisfaction of would-be customers of the proposed software product. Attractive and one-dimensional features had the highest impact on the satisfaction and dissatisfaction of customers. This result will benefit requirements analysts, developers, designers, projects and sales managers in preparing for proposed products. Additional analysis showed that the Kano model's satisfaction and dissatisfaction scores were highly related to the Park et al.'s average satisfaction coefficient (r=96%), implying that these variables can be used interchangeably or in place of one another to elicit customer satisfaction. Furthermore, average satisfaction coefficients and satisfaction and dissatisfaction indexes were all positively and linearly correlated.
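For reference, a small sketch of how the Berger et al. customer satisfaction (CS) and dissatisfaction (DS) coefficients are typically computed from Kano functional/dysfunctional category counts; the counts below are placeholders, not the study's survey data.

```python
def kano_coefficients(counts):
    """counts: dict of Kano category frequencies for one proposed feature.
    A = attractive, O = one-dimensional, M = must-be, I = indifferent."""
    A, O, M, I = counts["A"], counts["O"], counts["M"], counts["I"]
    total = A + O + M + I
    cs = (A + O) / total          # satisfaction coefficient, 0..1
    ds = -(O + M) / total         # dissatisfaction coefficient, -1..0
    return cs, ds

# hypothetical classification of 40 respondents for one feature
print(kano_coefficients({"A": 18, "O": 12, "M": 6, "I": 4}))   # -> (0.75, -0.45)
```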
NASA Technical Reports Server (NTRS)
Shih, T. I. P.; Yang, S. L.; Schock, H. J.
1986-01-01
A numerical study was performed to investigate the unsteady, multidimensional flow inside the combustion chambers of an idealized, two-dimensional, rotary engine under motored conditions. The numerical study was based on the time-dependent, two-dimensional, density-weighted, ensemble-averaged conservation equations of mass, species, momentum, and total energy valid for two-component ideal gas mixtures. The ensemble-averaged conservation equations were closed by a K-epsilon model of turbulence. This K-epsilon model of turbulence was modified to account for some of the effects of compressibility, streamline curvature, low-Reynolds number, and preferential stress dissipation. Numerical solutions to the conservation equations were obtained by the highly efficient implicit-factored method of Beam and Warming. The grid system needed to obtain solutions were generated by an algebraic grid generation technique based on transfinite interpolation. Results of the numerical study are presented in graphical form illustrating the flow patterns during intake, compression, gaseous fuel injection, expansion, and exhaust.
NASA Technical Reports Server (NTRS)
Shih, T. I-P.; Yang, S. L.; Schock, H. J.
1986-01-01
A numerical study was performed to investigate the unsteady, multidimensional flow inside the combustion chambers of an idealized, two-dimensional, rotary engine under motored conditions. The numerical study was based on the time-dependent, two-dimensional, density-weighted, ensemble-averaged conservation equations of mass, species, momentum, and total energy valid for two-component ideal gas mixtures. The ensemble-averaged conservation equations were closed by a K-epsilon model of turbulence. This K-epsilon model of turbulence was modified to account for some of the effects of compressibility, streamline curvature, low-Reynolds number, and preferential stress dissipation. Numerical solutions to the conservation equations were obtained by the highly efficient implicit-factored method of Beam and Warming. The grid system needed to obtain solutions were generated by an algebraic grid generation technique based on transfinite interpolation. Results of the numerical study are presented in graphical form illustrating the flow patterns during intake, compression, gaseous fuel injection, expansion, and exhaust.
Downward Atmospheric Longwave Radiation in the City of Sao Paulo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbaro, Eduardo W.; Oliveira, Amauri P.; Soares, Jacyra
2009-03-11
This work objectively evaluates the consistency and quality of a 9-year dataset of 5-minute average values of downward longwave atmospheric (LW) emission, shortwave radiation, temperature, and relative humidity. All these parameters were observed simultaneously and continuously from 1997 to 2006 at the IAG micrometeorological platform, located at the top of the IAG-USP building. The pyrgeometer dome emission effect was removed using a neural network technique, reducing the downward longwave atmospheric emission error to 3.5%. The comparison between the monthly average values of LW emission observed in Sao Paulo and satellite estimates from the SRB-NASA project indicated a very good agreement. Furthermore, this work investigates the performance of 10 empirical expressions to estimate the LW emission at the surface. The comparison between the models indicates that Brunt's expression gives the best results, with the smallest MBE and RMSE and the largest "d" index of agreement; therefore, Brunt's is the most suitable model to estimate LW emission under clear-sky conditions in the city of Sao Paulo.
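For context, a Brunt-type parameterization estimates clear-sky atmospheric emissivity from screen-level vapour pressure and obtains downward LW with the Stefan-Boltzmann law. A minimal sketch with commonly quoted, illustrative Brunt coefficients (not the values fitted for Sao Paulo):

```python
import numpy as np

SIGMA = 5.670374419e-8          # Stefan-Boltzmann constant, W m-2 K-4

def brunt_downward_lw(t_air_k, e_hpa, a=0.52, b=0.065):
    """Clear-sky downward longwave (W m-2) from Brunt's emissivity formula.
    a, b are illustrative textbook coefficients; site-specific fits differ."""
    emissivity = a + b * np.sqrt(e_hpa)
    return emissivity * SIGMA * t_air_k ** 4

# example: 22 degC air temperature, 18 hPa vapour pressure
print(brunt_downward_lw(295.15, 18.0))   # roughly 340 W m-2
```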
McGinitie, Teague M; Ebrahimi-Najafabadi, Heshmatollah; Harynuk, James J
2014-01-17
A new method for estimating the thermodynamic parameters ΔH(T0), ΔS(T0), and ΔCp for use in thermodynamic modeling of GC×GC separations has been developed. The method is an alternative to the traditional isothermal separations required to fit a three-parameter thermodynamic model to retention data. Herein, a non-linear optimization technique is used to estimate the parameters from a series of temperature-programmed separations using the Nelder-Mead simplex algorithm. With this method, the time required to obtain estimates of the thermodynamic parameters for a series of analytes is significantly reduced. The new method allows precise predictions of retention time, with an average error of only 0.2 s for 1D separations. Predictions for GC×GC separations were also in agreement with experimental measurements, with an average relative error of 0.37% for ¹tr and 2.1% for ²tr. Copyright © 2013 Elsevier B.V. All rights reserved.
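A condensed sketch of the estimation idea: retention times under a temperature program are predicted by numerically integrating the retention factor k(T), with k(T) built from a three-parameter model in ΔH(T0), ΔS(T0), and ΔCp, and a Nelder-Mead search minimizes the misfit to measured retention times. The hold-up time, phase ratio, ramp definitions, starting values, and the measured times below are illustrative assumptions, not the paper's data or exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

R, T0 = 8.314, 373.15            # gas constant, reference temperature (K)
beta, t_hold = 250.0, 60.0       # assumed phase ratio and column hold-up time (s)

def k_of_T(T, dH, dS, dCp):
    """Retention factor from the three-parameter thermodynamic model."""
    dG = dH + dCp * (T - T0) - T * (dS + dCp * np.log(T / T0))
    return np.exp(-dG / (R * T)) / beta

def predict_tr(params, T_start, ramp, dt=0.5):
    """Integrate elution: the solute elutes when sum(dt / (t_hold*(1+k))) reaches 1."""
    dH, dS, dCp = params
    t, frac = 0.0, 0.0
    while frac < 1.0 and t < 7200.0:           # safety cap on integration time
        T = T_start + ramp * t / 60.0          # linear program, ramp in K/min
        frac += dt / (t_hold * (1.0 + k_of_T(T, dH, dS, dCp)))
        t += dt
    return t

# measured retention times (s) for one analyte under three ramps (placeholders)
ramps = [5.0, 10.0, 20.0]
tr_meas = np.array([820.0, 560.0, 380.0])

def objective(p):
    tr_pred = np.array([predict_tr(p, 323.15, r) for r in ramps])
    return np.sum((tr_pred - tr_meas) ** 2)

res = minimize(objective, x0=[-35e3, -60.0, 50.0], method="Nelder-Mead")
print(res.x)        # estimated dH(T0) [J/mol], dS(T0) [J/mol/K], dCp [J/mol/K]
```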
Forecasting air quality time series using deep learning.
Freeman, Brian S; Taylor, Graham; Gharabaghi, Bahram; Thé, Jesse
2018-04-13
This paper presents one of the first applications of deep learning (DL) techniques to predict air pollution time series. Air quality management relies extensively on time series data captured at air monitoring stations as the basis of identifying population exposure to airborne pollutants and determining compliance with local ambient air standards. In this paper, 8-h averaged surface ozone (O3) concentrations were predicted using deep learning consisting of a recurrent neural network (RNN) with long short-term memory (LSTM). Hourly air quality and meteorological data were used to train and forecast values up to 72 hours with low error rates. The LSTM was able to forecast the duration of continuous O3 exceedances as well. Prior to training the network, the dataset was reviewed for missing data and outliers. Missing data were imputed using a novel technique that averaged gaps less than eight time steps with incremental steps based on first-order differences of neighboring time periods. Data were then used to train decision trees to evaluate input feature importance over different time prediction horizons. The number of features used to train the LSTM model was reduced from 25 features to 5 features, resulting in improved accuracy as measured by Mean Absolute Error (MAE). Parameter sensitivity analysis showed that the look-back nodes associated with the RNN were a significant source of error if not aligned with the prediction horizon. Overall, MAEs less than 2 were calculated for predictions out to 72 hours. Novel deep learning techniques were used to train an 8-hour averaged ozone forecast model. Missing data and outliers within the captured data set were replaced using a new imputation method that generated calculated values closer to the expected value based on the time and season. Decision trees were used to identify input variables with the greatest importance. The methods presented in this paper allow air managers to forecast long-range air pollution concentrations while only monitoring key parameters and without transforming the data set in its entirety, thus allowing real-time inputs and continuous prediction.
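A minimal sketch of a univariate LSTM forecaster of 8-h averaged O3 built with Keras, using a sliding look-back window aligned with the prediction horizon; the layer sizes, look-back length, and the five retained input features are assumptions, and the data array is a random placeholder for the hourly monitoring records.

```python
import numpy as np
import tensorflow as tf

def make_windows(series, lookback, horizon):
    """Build (samples, lookback, features) inputs and targets `horizon` steps ahead."""
    X, y = [], []
    for i in range(len(series) - lookback - horizon):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback + horizon - 1, 0])   # column 0 = ozone
    return np.array(X), np.array(y)

# placeholder hourly data: [ozone, temperature, wind speed, NO2, humidity]
data = np.random.rand(5000, 5).astype("float32")
X, y = make_windows(data, lookback=24, horizon=8)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(24, 5)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")     # MAE, as used to score the forecasts
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
forecast = model.predict(X[-1:])                # next 8-h averaged O3 value
```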
NASA Astrophysics Data System (ADS)
Seo, Young Wook; Yoon, Seung Chul; Park, Bosoon; Hinton, Arthur; Windham, William R.; Lawrence, Kurt C.
2013-05-01
Salmonella is a major cause of foodborne disease outbreaks resulting from the consumption of contaminated food products in the United States. This paper reports the development of a hyperspectral imaging technique for detecting and differentiating two of the most common Salmonella serotypes, Salmonella Enteritidis (SE) and Salmonella Typhimurium (ST), from background microflora that are often found in poultry carcass rinse. Presumptive positive screening of colonies with a traditional direct plating method is a labor intensive and time consuming task. Thus, this paper is concerned with the detection of differences in spectral characteristics among the pure SE, ST, and background microflora grown on brilliant green sulfa (BGS) and xylose lysine tergitol 4 (XLT4) agar media with a spread plating technique. Visible near-infrared hyperspectral imaging, providing the spectral and spatial information unique to each microorganism, was utilized to differentiate SE and ST from the background microflora. A total of 10 classification models, including five machine learning algorithms, each without and with principal component analysis (PCA), were validated and compared to find the best model in classification accuracy. The five machine learning (classification) algorithms used in this study were Mahalanobis distance (MD), k-nearest neighbor (kNN), linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and support vector machine (SVM). The average classification accuracy of all 10 models on a calibration (or training) set of the pure cultures on BGS agar plates was 98% (Kappa coefficient = 0.95) in determining the presence of SE and/or ST although it was difficult to differentiate between SE and ST. The average classification accuracy of all 10 models on a training set for ST detection on XLT4 agar was over 99% (Kappa coefficient = 0.99) although SE colonies on XLT4 agar were difficult to differentiate from background microflora. The average classification accuracy of all 10 models on a validation set of chicken carcass rinses spiked with SE or ST and incubated on BGS agar plates was 94.45% and 83.73%, without and with PCA for classification, respectively. The best performing classification model on the validation set was QDA without PCA by achieving the classification accuracy of 98.65% (Kappa coefficient=0.98). The overall best performing classification model regardless of using PCA was MD with the classification accuracy of 94.84% (Kappa coefficient=0.88) on the validation set.
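A condensed sketch of the model-comparison step: the same training spectra are fed to several classifiers with and without a PCA pre-step, and cross-validated accuracy is compared. The Mahalanobis-distance classifier is not included here (LDA/QDA are its closest scikit-learn relatives), and the component counts, classifier settings, and data are placeholder assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# placeholder hyperspectral data: rows = colony spectra, columns = spectral bands
X = np.random.rand(300, 40)
y = np.random.randint(0, 3, 300)        # 0 = SE, 1 = ST, 2 = background microflora

classifiers = {
    "kNN": KNeighborsClassifier(5),
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "SVM": SVC(kernel="rbf"),
}

for name, clf in classifiers.items():
    for use_pca in (False, True):
        steps = [StandardScaler()] + ([PCA(n_components=10)] if use_pca else []) + [clf]
        acc = cross_val_score(make_pipeline(*steps), X, y, cv=5).mean()
        print(f"{name:>3} {'with PCA' if use_pca else 'no PCA  '}: {acc:.3f}")
```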
NASA Technical Reports Server (NTRS)
Tiwari, Anil
1995-01-01
Research effort was directed towards developing a near real-time, acousto-ultrasonic (AU), nondestructive evaluation (NDE) tool to study the failure mechanisms of ceramic composites. Progression of damage is monitored in real-time by observing the changes in the received AU signal during the actual test. During the real-time AU test, the AU signals are generated and received by the AU transducers attached to the specimen while it is being subjected to increasing quasi-static loads or cyclic loads (10 Hz, R = 1.0). The received AU signals for 64 successive pulses were gated in the time domain (T = 40.96 micro sec) and then averaged every second over ten load cycles and stored in a computer file during fatigue tests. These averaged gated signals are representative of the damage state of the specimen at that point of its fatigue life. This is also the first major attempt in the development and application of real-time AU for continuously monitoring damage accumulation during fatigue without interrupting the test. The present work has verified the capability of the AU technique to assess the damage state in silicon carbide/calcium aluminosilicate (SiC/CAS) and silicon carbide/ magnesium aluminosilicate (SiC/MAS) ceramic composites. Continuous monitoring of damage initiation and progression under quasi-static ramp loading in tension to failure of unidirectional and cross-ply SiC/CAS and quasi-isotropic SiC/MAS ceramic composite specimens at room temperature was accomplished using near real-time AU parameters. The AU technique was shown to be able to detect the stress levels for the onset and saturation of matrix cracks, respectively. The critical cracking stress level is used as a design stress for brittle matrix composites operating at elevated temperatures. The AU technique has found that the critical cracking stress level is 10-15% below the level presently obtained for design purposes from analytical models. An acousto-ultrasonic stress-strain response (AUSSR) model for unidirectional and cross-ply ceramic composites was formulated. The AUSSR model predicts the strain response to increasing stress levels using real-time AU data and classical laminated plate theory. The Weibull parameters of the AUSSR model are used to calculate the design stress for thermo-structural applications. Real-time AU together with the AUSSR model was used to study the failure mechanisms of SiC/CAS ceramic composites under static and fatigue loading. An S-N curve was generated for a cross-ply SiC/CAS ceramic composite material. The AU results are corroborated and complemented by other NDE techniques, namely, in-situ optical microscope video recordings and edge replication.
Almeida, Diogo F; Ruben, Rui B; Folgado, João; Fernandes, Paulo R; Audenaert, Emmanuel; Verhegghe, Benedict; De Beule, Matthieu
2016-12-01
Femur segmentation can be an important tool in orthopedic surgical planning. However, in order to overcome the need of an experienced user with extensive knowledge on the techniques, segmentation should be fully automatic. In this paper a new fully automatic femur segmentation method for CT images is presented. This method is also able to define automatically the medullary canal and performs well even in low resolution CT scans. Fully automatic femoral segmentation was performed adapting a template mesh of the femoral volume to medical images. In order to achieve this, an adaptation of the active shape model (ASM) technique based on the statistical shape model (SSM) and local appearance model (LAM) of the femur with a novel initialization method was used, to drive the template mesh deformation in order to fit the in-image femoral shape in a time effective approach. With the proposed method a 98% convergence rate was achieved. For high resolution CT images group the average error is less than 1mm. For the low resolution image group the results are also accurate and the average error is less than 1.5mm. The proposed segmentation pipeline is accurate, robust and completely user free. The method is robust to patient orientation, image artifacts and poorly defined edges. The results excelled even in CT images with a significant slice thickness, i.e., above 5mm. Medullary canal segmentation increases the geometric information that can be used in orthopedic surgical planning or in finite element analysis. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
Modeling Patterns of Total Dissolved Solids Release from Central Appalachia, USA, Mine Spoils.
Clark, Elyse V; Zipper, Carl E; Daniels, W Lee; Orndorff, Zenah W; Keefe, Matthew J
2017-01-01
Surface mining in the central Appalachian coalfields (USA) influences water quality because the interaction of infiltrated waters and O2 with freshly exposed mine spoils releases elevated levels of total dissolved solids (TDS) to streams. Modeling and predicting the short- and long-term TDS release potentials of mine spoils can aid in the management of current and future mining-influenced watersheds and landscapes. In this study, the specific conductance (SC, a proxy variable for TDS) patterns of 39 mine spoils during a sequence of 40 leaching events were modeled using a five-parameter nonlinear regression. Estimated parameter values were compared to six rapid spoil assessment techniques (RSATs) to assess predictive relationships between model parameters and RSATs. Spoil leachates reached maximum values, 1108 ± 161 μS cm⁻¹ on average, within the first three leaching events, then declined exponentially to a breakpoint at the 16th leaching event on average. After the breakpoint, SC release remained linear, with most spoil samples exhibiting declines in SC release with successive leaching events. The SC asymptote averaged 276 ± 25 μS cm⁻¹. Only three samples had SCs >500 μS cm⁻¹ at the end of the 40 leaching events. Model parameters varied with mine spoil rock and weathering type, and RSATs were predictive of four model parameters. Unweathered samples released higher SCs throughout the leaching period relative to weathered samples, and rock type influenced the rate of SC release. The RSATs for SC, total S, and neutralization potential may best predict certain phases of mine spoil TDS release. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
3D turbulence measurements in inhomogeneous boundary layers with three wind LiDARs
NASA Astrophysics Data System (ADS)
Carbajo Fuertes, Fernando; Valerio Iungo, Giacomo; Porté-Agel, Fernando
2014-05-01
One of the most challenging tasks in atmospheric anemometry is obtaining reliable turbulence measurements of inhomogeneous boundary layers at heights or in locations where it is not possible or convenient to install tower-based measurement systems, e.g. mountainous terrain, cities, wind farms, etc. Wind LiDARs are being extensively used for the measurement of averaged vertical wind profiles, but they can only successfully accomplish this task under the limiting conditions of flat terrain and horizontally homogeneous flow. Moreover, it has been shown that common scanning strategies introduce large systematic errors in turbulence measurements, regardless of the characteristics of the flow addressed. From the point of view of research, there exists a variety of techniques and scanning strategies to estimate different turbulence quantities, but most of them rely on the combination of raw measurements with atmospheric models. Most of those models are only valid under the assumption of horizontal homogeneity. The limitations stated above can be overcome by a new triple LiDAR technique which uses simultaneous measurements from three intersecting Doppler wind LiDARs. It allows for the reconstruction of the three-dimensional velocity vector in time as well as local velocity gradients without the need of any turbulence model and with minimal assumptions [EGU2013-9670]. The triple LiDAR technique has been applied to the study of the flow over the campus of EPFL in Lausanne (Switzerland). The results show the potential of the technique for the measurement of turbulence in highly complex boundary layer flows. The technique is particularly useful for micrometeorology and wind engineering studies.
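The core of the triple-LiDAR retrieval is geometric: each LiDAR measures the radial (line-of-sight) velocity along its own beam, and the three simultaneous radial velocities at an intersection point define a 3x3 linear system for the full velocity vector. A minimal sketch with illustrative beam geometry (the actual scanning configuration of the EPFL campaign is not reproduced):

```python
import numpy as np

def reconstruct_velocity(beam_dirs, v_radial):
    """Solve e_i . u = vr_i for the 3-D velocity u at the beams' intersection point.

    beam_dirs : (3, 3) array, rows are unit vectors along each LiDAR line of sight
    v_radial  : (3,) radial velocities measured by the three LiDARs
    """
    return np.linalg.solve(beam_dirs, v_radial)

# illustrative geometry: three non-coplanar beam vectors, normalized to unit length
e = np.array([[0.80, 0.00, 0.60],
              [0.00, 0.80, 0.60],
              [0.57, 0.57, 0.59]])
e /= np.linalg.norm(e, axis=1, keepdims=True)

u_true = np.array([6.0, 2.0, 0.5])          # a hypothetical wind vector (m/s)
vr = e @ u_true                              # what each LiDAR would measure
print(reconstruct_velocity(e, vr))           # recovers [6.0, 2.0, 0.5]
```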
Vishwakarma, Aruna Prashanth; Bondarde, Prashant Arjun; Patil, Sudha Bhimangouda; Dodamani, Arun Suresh; Vishwakarma, Prashanth Yachrappa; Mujawar, Shoeb A
2017-01-01
Dental fear is a common, essential, and inevitable emotion that appears in response to stressful situations; it raises children's anxiety levels, resulting in reduced demand for pediatric dental care. (1) To compare and evaluate the effectiveness of the customized tell-play-do (TPD) technique with live modeling for behavior management of children. (2) To compare the behavioral modification techniques in managing children during their dental visits. Ninety-eight children aged 5-7 years were enrolled in the study and randomly allocated into two groups. Phase I: first visit. Group I - children were conditioned to receive various dental procedures using live modeling followed by oral prophylaxis. Group II - the TPD technique was introduced with customized playing dental objects followed by oral prophylaxis. Phase II: second visit. After a 7-day interval, all study subjects underwent rotary restorative treatment. Heart rate, the Facial Image Scale (FIS), and the Venham 6-point index were used before intervention, after intervention, and during the dental procedure to quantify anxious behavior. All 98 children underwent oral prophylaxis after intervention on the first visit and rotary restorative treatment on the second visit. The average pulse rate, FIS, and Venham scale scores were significantly lower among children who received the TPD intervention than among those who received the live modeling intervention. An unpaired t-test at the 5% level of significance was used to assess statistical significance. TPD is effective in reducing children's fear and anxiety about dental treatment, and children enjoyed playing with the customized dental objects. Thus, to promote adaptive behavior, TPD could be an alternative behavioral modification technique in pediatric dentistry.
Machine Learning Predictions of a Multiresolution Climate Model Ensemble
NASA Astrophysics Data System (ADS)
Anderson, Gemma J.; Lucas, Donald D.
2018-05-01
Statistical models of high-resolution climate models are useful for many purposes, including sensitivity and uncertainty analyses, but building them can be computationally prohibitive. We generated a unique multiresolution perturbed parameter ensemble of a global climate model. We use a novel application of a machine learning technique known as random forests to train a statistical model on the ensemble to make high-resolution model predictions of two important quantities: global mean top-of-atmosphere energy flux and precipitation. The random forests leverage cheaper low-resolution simulations, greatly reducing the number of high-resolution simulations required to train the statistical model. We demonstrate that high-resolution predictions of these quantities can be obtained by training on an ensemble that includes only a small number of high-resolution simulations. We also find that global annually averaged precipitation is more sensitive to resolution changes than to any of the model parameters considered.
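A minimal sketch of the idea described above, using scikit-learn random forests on hypothetical arrays: perturbed parameter values plus a resolution indicator serve as features, and a scalar output (standing in for, e.g., global-mean top-of-atmosphere flux) is the target, so that many cheap low-resolution runs inform predictions at high resolution. All data, shapes, and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n_lo, n_hi, n_params = 200, 20, 5             # many low-res runs, few high-res runs (assumed)

params = rng.uniform(size=(n_lo + n_hi, n_params))       # perturbed model parameters
resolution = np.r_[np.zeros(n_lo), np.ones(n_hi)]        # 0 = low-res, 1 = high-res
X = np.column_stack([params, resolution])

# Synthetic target standing in for a global-mean climate quantity
y = params @ rng.uniform(1, 3, n_params) + 0.5 * resolution + rng.normal(0, 0.1, n_lo + n_hi)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)

# Predict the high-resolution response for new parameter settings
X_new = np.column_stack([rng.uniform(size=(3, n_params)), np.ones(3)])
print(np.round(rf.predict(X_new), 3))
```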
ARMA models for earthquake ground motions. Seismic safety margins research program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, M. K.; Kwiatkowski, J. W.; Nau, R. F.
1981-02-01
Four major California earthquake records were analyzed by use of a class of discrete linear time-domain processes commonly referred to as ARMA (Autoregressive/Moving-Average) models. It was possible to analyze these different earthquakes, identify the order of the appropriate ARMA model(s), estimate parameters, and test the residuals generated by these models. It was also possible to show the connections, similarities, and differences between the traditional continuous models (with parameter estimates based on spectral analyses) and the discrete models with parameters estimated by various maximum-likelihood techniques applied to digitized acceleration data in the time domain. The methodology proposed is suitable for simulating earthquake ground motions in the time domain, and appears to be easily adapted to serve as inputs for nonlinear discrete time models of structural motions. 60 references, 19 figures, 9 tables.
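A hedged sketch of fitting and residual-checking an ARMA model on a digitized acceleration series using statsmodels; the series and the (2, 1) order below are placeholders, not values from the study.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(0)
accel = rng.standard_normal(2000)      # stand-in for a digitized acceleration record

# ARMA(p, q) is ARIMA(p, 0, q); parameters are estimated by maximum likelihood
res = ARIMA(accel, order=(2, 0, 1)).fit()
print(res.summary())

# Residual check: residuals of an adequate model should resemble white noise
print(acorr_ljungbox(res.resid, lags=[10, 20]))
```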
NASA Astrophysics Data System (ADS)
Bilalic, Rusmir
A novel application of support vector machines (SVMs), artificial neural networks (ANNs), and Gaussian processes (GPs) for machine learning (GPML) to model microcontroller unit (MCU) upset due to intentional electromagnetic interference (IEMI) is presented. In this approach, an MCU performs a counting operation (0-7) while electromagnetic interference in the form of a radio frequency (RF) pulse is direct-injected into the MCU clock line. Injection times with respect to the clock signal are the clock low, clock rising edge, clock high, and clock falling edge periods in the clock window during which the MCU is performing initialization and executing the counting procedure. The intent is to cause disruption in the counting operation and model the probability of effect (PoE) using machine learning tools. Five experiments were executed as part of this research, each of which contained a set of 38,300 training points and 38,300 test points, for a total of 383,000 points, with the following experiment variables: injection times with respect to the clock signal, injected RF power, injected RF pulse width, and injected RF frequency. For the 191,500 training points, the average training error was 12.47%, while for the 191,500 test points the average test error was 14.85%, meaning that on average, the machine was able to predict MCU upset with an 85.15% accuracy. Leaving out the results for the worst-performing model (SVM with a linear kernel), the test prediction accuracy for the remaining machines is almost 89%. All three machine learning methods (ANNs, SVMs, and GPML) showed excellent and consistent results in their ability to model and predict the PoE on an MCU due to IEMI. The GP approach performed best during training with a 7.43% average training error, while the ANN technique was most accurate during the test with a 10.80% error.
NASA Astrophysics Data System (ADS)
Rampidis, I.; Nikolopoulos, A.; Koukouzas, N.; Grammelis, P.; Kakaras, E.
2007-09-01
This work aims to present a pure 3-D CFD model, accurate and efficient, for the simulation of pilot-scale CFB hydrodynamics. The accuracy of the model was investigated as a function of the numerical parameters in order to derive an optimum model setup with respect to computational cost. The necessity of an in-depth examination of hydrodynamics emerges from the trend to scale up CFBCs. This scale-up brings forward numerous design problems and uncertainties, which can be successfully elucidated by CFD techniques. Deriving guidelines for setting up a computationally efficient model is important as the scale of CFBs grows fast, while computational power is limited. However, the question of optimum efficiency has not been investigated thoroughly in the literature, as authors have been more concerned with the accuracy and validity of their models. The objective of this work is to investigate the parameters that influence the efficiency and accuracy of CFB computational fluid dynamics models, find the optimum set of these parameters, and thus establish this technique as a competitive method for the simulation and design of industrial, large-scale beds, where the computational cost is otherwise prohibitive. In the tests performed in this work, the influence of the turbulence modeling approach, temporal and spatial grid densities, and discretization schemes was investigated on a 1.2 MWth CFB test rig. Using Fourier analysis, dominant frequencies were extracted in order to estimate an adequate time period for the averaging of all instantaneous values. The agreement with the experimental measurements was very good. The basic differences between the predictions arising from the various model setups were pointed out and analyzed. The results showed that a model with high-order space discretization schemes applied on a coarse grid, with averaging of the instantaneous scalar values over a 20 s period, adequately described the transient hydrodynamic behaviour of a pilot CFB while the computational cost was kept low. Flow patterns inside the bed, such as the core-annulus flow and the transport of clusters, were at least qualitatively captured.
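A small sketch of the Fourier step described above: extract the dominant frequency of an instantaneous signal (e.g., pressure drop or solids holdup at a monitoring point) and use a number of its periods to set the time-averaging window. The signal, sampling rate, and choice of 15 periods are assumptions for illustration only.

```python
import numpy as np

fs = 100.0                                   # sampling frequency, Hz (assumed)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(2)
signal = np.sin(2 * np.pi * 0.8 * t) + 0.3 * rng.standard_normal(t.size)  # toy probe signal

spec = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
f_dom = freqs[np.argmax(spec[1:]) + 1]       # skip the zero-frequency bin

n_periods = 15                               # assumed: average over ~15 dominant periods
t_avg = n_periods / f_dom
print(f"dominant frequency = {f_dom:.2f} Hz, averaging window ~ {t_avg:.1f} s")
```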
A nonlinear propagation model-based phase calibration technique for membrane hydrophones.
Cooling, Martin P; Humphrey, Victor F
2008-01-01
A technique for the phase calibration of membrane hydrophones in the frequency range up to 80 MHz is described. This is achieved by comparing measurements and numerical simulation of a nonlinearly distorted test field. The field prediction is obtained using a finite-difference model that solves the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation in the frequency domain. The measurements are made in the far field of a 3.5 MHz focusing circular transducer, for which it is demonstrated that, at the high drive level used, spatial averaging effects due to the hydrophone's finite receive area are negligible. The method provides a phase calibration of the hydrophone under test without the need for a device serving as a phase response reference, but it requires prior knowledge of the amplitude sensitivity at the fundamental frequency. The technique is demonstrated using a 50-μm-thick bilaminar membrane hydrophone, for which the results obtained show functional agreement with predictions of a hydrophone response model. Further validation of the results is obtained by applying the response to the measurement of the high-amplitude waveforms generated by a modern biomedical ultrasonic imaging system. It is demonstrated that full deconvolution of the calculated complex frequency response of a nonideal hydrophone results in physically realistic measurements of the transmitted waveforms.
NASA Astrophysics Data System (ADS)
Lovejoy, McKenna R.; Wickert, Mark A.
2017-05-01
A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current and amplifier mismatch as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear or piecewise-linear models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise-linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second-order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher-order polynomial NUC algorithms feasible. This study comprehensively tests higher-order polynomial NUC algorithms targeted at short-wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods, including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating-point results show 30% less non-uniformity in post-corrected data when using a third-order polynomial correction algorithm rather than a second-order algorithm. To maximize overall performance, a trade-off analysis on polynomial order and coefficient precision is performed. Comprehensive testing, across multiple data sets, provides next-generation model validation and performance benchmarks for higher-order polynomial NUC methods.
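A minimal per-pixel polynomial NUC sketch under stated assumptions: from calibration frames at several known uniform irradiance levels, fit a third-order polynomial per pixel that maps raw counts to the frame-average (reference) response; the array sizes, simulated responses, and the choice of the frame mean as reference are all assumptions, not the study's setup.

```python
import numpy as np

rng = np.random.default_rng(3)
h, w, n_levels, order = 64, 64, 8, 3          # toy SWIR array, 8 flat-field levels (assumed)

# Simulated calibration stack: each pixel has its own slightly nonlinear response
irr = np.linspace(0.1, 1.0, n_levels)[:, None, None]        # normalized irradiance levels
gain = 1 + 0.1 * rng.standard_normal((h, w))
quad = 0.05 * rng.standard_normal((h, w))
raw = gain * irr + quad * irr**2 + 0.01 * rng.standard_normal((n_levels, h, w))

# Reference response at each level: the frame mean (a common NUC target choice)
ref = raw.mean(axis=(1, 2))

# Fit, per pixel, a third-order polynomial mapping raw counts -> reference response
flat = raw.reshape(n_levels, -1)                             # (levels, pixels)
coeffs = np.stack([np.polyfit(flat[:, p], ref, order) for p in range(h * w)])

# Apply the correction to a new, uncorrected frame (irradiance 0.6 here)
frame = gain * 0.6 + quad * 0.36
corrected = np.array([np.polyval(coeffs[p], v)
                      for p, v in enumerate(frame.ravel())]).reshape(h, w)
print("residual non-uniformity:", float(corrected.std() / corrected.mean()))
```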
Bah, Mamadou T; Nair, Prasanth B; Browne, Martin
2009-12-01
Finite element (FE) analysis of the effect of implant positioning on the performance of cementless total hip replacements (THRs) requires the generation of multiple meshes to account for positioning variability. This process can be labour-intensive and time-consuming, as CAD operations are needed each time a specific orientation is to be analysed. In the present work, a mesh morphing technique is developed to automate the model generation process. The volume mesh of a baseline femur with the implant in a nominal position is deformed as the prosthesis location is varied. A virtual deformation field, obtained by solving a linear elasticity problem with appropriate boundary conditions, is applied. The effectiveness of the technique is evaluated using two metrics: the percentages of morphed elements exceeding an aspect ratio of 20 and an angle of 165 degrees between the adjacent edges of each tetrahedron. Results show that for 100 different implant positions, the first and second metrics never exceed 3% and 3.5%, respectively. To further validate the proposed technique, FE contact analyses are conducted using three selected morphed models to predict the strain distribution in the bone and the implant micromotion under joint and muscle loading. The entire bone strain distribution is well captured, and both the percentage of bone volume with strain exceeding 0.7% and the average bone strains are accurately computed. The results generated from the morphed mesh models correlate well with those for models generated from scratch, increasing confidence in the methodology. This morphing technique forms an accurate and efficient basis for FE-based implant orientation and stability analysis of cementless hip replacements.
Ridenour, Ty A; Pineo, Thomas Z; Maldonado Molina, Mildred M; Hassmiller Lich, Kristen
2013-06-01
Psychosocial prevention research lacks evidence from intensive within-person lines of research to understand idiographic processes related to development and response to intervention. Such data could be used to fill gaps in the literature and expand the study design options for prevention researchers, including lower-cost yet rigorous studies (e.g., for program evaluations), pilot studies, designs to test programs for low prevalence outcomes, selective/indicated/adaptive intervention research, and understanding of differential response to programs. This study compared three competing analytic strategies designed for this type of research: autoregressive moving average, mixed model trajectory analysis, and P-technique. Illustrative time series data were from a pilot study of an intervention for nursing home residents with diabetes (N = 4) designed to improve control of blood glucose. A within-person, intermittent baseline design was used. Intervention effects were detected using each strategy for the aggregated sample and for individual patients. The P-technique model most closely replicated observed glucose levels. ARIMA and P-technique models were most similar in terms of estimated intervention effects and modeled glucose levels. However, ARIMA and P-technique also were more sensitive to missing data, outliers and number of observations. Statistical testing suggested that results generalize both to other persons as well as to idiographic, longitudinal processes. This study demonstrated the potential contributions of idiographic research in prevention science as well as the need for simulation studies to delineate the research circumstances when each analytic approach is optimal for deriving the correct parameter estimates.
Tan, Germaine Xin Yi; Jamil, Muhammad; Tee, Nicole Gui Zhen; Zhong, Liang; Yap, Choon Hwai
2015-11-01
Recent animal studies have provided evidence that prenatal blood flow fluid mechanics may play a role in the pathogenesis of congenital cardiovascular malformations. To further this research, it is important to have an imaging technique for small animal embryos with sufficient resolution to support computational fluid dynamics studies, one that is also non-invasive and non-destructive to allow for subject-specific, longitudinal studies. In the current study, we developed such a technique based on ultrasound biomicroscopy scans of chick embryos. Our technique included a motion cancelation algorithm to negate embryonic body motion, a temporal averaging algorithm to differentiate blood spaces from tissue spaces, and 3D reconstruction of blood volumes in the embryo. The accuracy of the reconstructed models was validated with direct stereoscopic measurements. A computational fluid dynamics simulation was performed to model fluid flow in the generated construct of a Hamburger-Hamilton (HH) stage 27 embryo. Simulation results showed that there were divergent streamlines and a low-shear region at the carotid duct, which may be linked to the carotid duct's eventual regression and disappearance by HH stage 34. We show that our technique has sufficient resolution to produce accurate geometries for computational fluid dynamics simulations to quantify embryonic cardiovascular fluid mechanics.
Gao, Fei; Wang, Guo-Bao; Xiang, Zhan-Wang; Yang, Bin; Xue, Jing-Bing; Mo, Zhi-Qiang; Zhong, Zhi-Hui; Zhang, Tao; Zhang, Fu-Jun; Fan, Wei-Jun
2016-05-03
This study sought to prospectively evaluate the feasibility and safety of a preoperative mathematical model for computed tomography (CT)-guided microwave (MW) ablation treatment of hepatic dome tumors. The mathematical model was a regular cylinder quantifying appropriate puncture routes from the bottom up. A total of 103 patients with hepatic dome tumors were enrolled and randomly divided into two groups based on whether the model was used: Group A (using the model; n = 43) versus Group B (not using the model; n = 60). All tumors were treated by CT-guided MW ablation, and follow-up contrast CT scans were reviewed. The average number of puncture attempts needed for success, the average ablation time, and the incidence of right shoulder pain were lower in Group A than in Group B (1.4 vs. 2.5, P = 0.001; 8.8 vs. 11.1 minutes, P = 0.003; and 4.7% vs. 20%, P = 0.039). The technical success rate was higher in Group A than in Group B (97.7% vs. 85.0%, P = 0.032). There were no significant differences between the two groups in primary and secondary technique efficacy rates (97.7% vs. 88.3%, P = 0.081; 90.0% vs. 72.7%, P = 0.314). No major complications occurred in either group. The mathematical model of a regular cylinder is feasible and safe for CT-guided MW ablation in treating hepatic dome tumors.
Fang, Xin; Li, Runkui; Kan, Haidong; Bottai, Matteo; Fang, Fang; Cao, Yang
2016-08-16
To demonstrate an application of Bayesian model averaging (BMA) with generalised additive mixed models (GAMM) and provide a novel modelling technique to assess the association between inhalable coarse particles (PM10) and respiratory mortality in time-series studies. A time-series study using a regional death registry between 2009 and 2010. 8 districts in a large metropolitan area in Northern China. 9559 permanent residents of the 8 districts who died of respiratory diseases between 2009 and 2010. Per cent increase in daily respiratory mortality rate (MR) per interquartile range (IQR) increase of PM10 concentration and corresponding 95% confidence interval (CI) in single-pollutant and multipollutant (including NOx, CO) models. The Bayesian model averaged GAMM (GAMM+BMA) and the optimal GAMM of PM10, multipollutants and principal components (PCs) of multipollutants showed comparable results for the effect of PM10 on daily respiratory MR, that is, one IQR increase in PM10 concentration corresponded to 1.38% vs 1.39%, 1.81% vs 1.83% and 0.87% vs 0.88% increases, respectively, in daily respiratory MR. However, GAMM+BMA gave slightly but noticeably wider CIs for the single-pollutant model (-1.09 to 4.28 vs -1.08 to 3.93) and the PCs-based model (-2.23 to 4.07 vs -2.03 to 3.88). The CIs of the multipollutant model from the two methods were similar, that is, -1.12 to 4.85 versus -1.11 to 4.83. The BMA method may represent a useful tool for modelling uncertainty in time-series studies when evaluating the effect of air pollution on fatal health outcomes. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
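The abstract does not spell out the weighting scheme; the sketch below shows one common way BMA weights are formed from information criteria (posterior model probabilities proportional to exp(-ΔBIC/2)) and used to average point estimates while inflating the variance by the between-model spread. The BIC values, predictions, and variances are illustrative placeholders, not results from the study.

```python
import numpy as np

# Illustrative BICs and point predictions from three candidate GAMMs
# (single-pollutant, multipollutant, PC-based) -- values are made up.
bic = np.array([1520.3, 1518.9, 1523.4])
pred = np.array([1.38, 1.81, 0.87])           # % increase in MR per IQR of PM10
var_within = np.array([0.9, 1.1, 1.0])        # each model's own prediction variance

# Approximate posterior model probabilities from BIC differences
delta = bic - bic.min()
w = np.exp(-0.5 * delta)
w /= w.sum()

# BMA point estimate and total variance (within-model + between-model spread)
mu_bma = np.sum(w * pred)
var_bma = np.sum(w * (var_within + (pred - mu_bma) ** 2))
print(f"weights = {np.round(w, 3)}, BMA estimate = {mu_bma:.2f}, BMA sd = {np.sqrt(var_bma):.2f}")
```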
NASA Astrophysics Data System (ADS)
Ghamarian, Iman
Nanocrystalline metallic materials have the potential to exhibit outstanding performance, which leads to their use in challenging applications such as coatings and biomedical implant devices. To optimize the performance of nanocrystalline metallic materials for the desired applications, it is important to have a sound understanding of the structure, processing and properties of these materials. Various efforts have been made to correlate the microstructure and properties of nanocrystalline metallic materials. Based on these research activities, it is recognized that microstructure and defects (e.g., dislocations and grain boundaries) play a key role in the behavior of these materials. Therefore, it is of great importance to establish methods to quantitatively study microstructures, defects and their interactions in nanocrystalline metallic materials. Since the mechanisms controlling the properties of nanocrystalline metallic materials operate at a very small length scale, they are difficult to study, and most of the characterization techniques used to explore these materials do not offer the spatial resolution required. For instance, by applying complex profile-fitting algorithms to X-ray diffraction patterns, it is possible to obtain an estimate of the average grain size and the average dislocation density within a relatively large area. However, these average values are not sufficient for developing meticulous phenomenological models able to correlate the microstructure and properties of nanocrystalline metallic materials. As another example, the electron backscatter diffraction technique also cannot be used widely in the characterization of these materials because of its relatively poor spatial resolution (about 90 nm) and the degradation of Kikuchi diffraction patterns in severely deformed, nano-sized-grain metallic materials. In this study, ASTAR(TM)/precession electron diffraction is introduced as a relatively new orientation microscopy technique to characterize defects (e.g., geometrically necessary dislocations and grain boundaries) in challenging nanocrystalline metallic materials. The capability of this characterization technique to quantitatively determine the dislocation density distributions of geometrically necessary dislocations in severely deformed metallic materials is assessed. Based on the developed method, it is possible to determine the distributions and accumulations of dislocations with respect to the nearest grain boundaries and triple junctions. The suitability of this technique for studying the grain boundary character distributions of nanocrystalline metallic materials is also presented.
O'Connell, Dylan; Shaverdian, Narek; Kishan, Amar U; Thomas, David H; Dou, Tai H; Lewis, John H; Lamb, James M; Cao, Minsong; Tenn, Stephen; Percy, Lee P; Low, Daniel A
To compare lung tumor motion measured with a model-based technique to commercial 4-dimensional computed tomography (4DCT) scans and describe a workflow for using model-based 4DCT as a clinical simulation protocol. Twenty patients were imaged using a model-based technique and commercial 4DCT. Tumor motion was measured on each commercial 4DCT dataset and was calculated on model-based datasets for 3 breathing amplitude percentile intervals: 5th to 85th, 5th to 95th, and 0th to 100th. Internal target volumes (ITVs) were defined on the 4DCT and 5th to 85th interval datasets and compared using Dice similarity. Images were evaluated for noise and rated by 2 radiation oncologists for artifacts. Mean differences in tumor motion magnitude between commercial and model-based images were 0.47 ± 3.0, 1.63 ± 3.17, and 5.16 ± 4.90 mm for the 5th to 85th, 5th to 95th, and 0th to 100th amplitude intervals, respectively. Dice coefficients between ITVs defined on commercial and 5th to 85th model-based images had a mean value of 0.77 ± 0.09. Single standard deviation image noise was 11.6 ± 9.6 HU in the liver and 6.8 ± 4.7 HU in the aorta for the model-based images compared with 57.7 ± 30 and 33.7 ± 15.4 for commercial 4DCT. Mean model error within the ITV regions was 1.71 ± 0.81 mm. Model-based images exhibited reduced presence of artifacts at the tumor compared with commercial images. Tumor motion measured with the model-based technique using the 5th to 85th percentile breathing amplitude interval corresponded more closely to commercial 4DCT than the 5th to 95th or 0th to 100th intervals, which showed greater motion on average. The model-based technique tended to display increased tumor motion when breathing amplitude intervals wider than 5th to 85th were used because of the influence of unusually deep inhalations. These results suggest that care must be taken in selecting the appropriate interval during image generation when using model-based 4DCT methods. Copyright © 2017 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
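The Dice similarity comparison between ITVs mentioned above reduces to an overlap measure on two binary voxel masks; a small sketch follows, with toy masks standing in for the contoured volumes.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks."""
    a = a.astype(bool); b = b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy ITV masks on a coarse grid (illustration only)
itv_commercial = np.zeros((32, 32, 32), dtype=bool); itv_commercial[10:20, 10:20, 10:20] = True
itv_model      = np.zeros((32, 32, 32), dtype=bool); itv_model[11:21, 10:20, 10:19] = True
print("Dice =", round(dice(itv_commercial, itv_model), 3))
```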
Wijenayake, Udaya; Park, Soon-Yong
2017-01-01
Accurate tracking and modeling of internal and external respiratory motion in the thoracic and abdominal regions of the human body is a widely discussed topic in external beam radiotherapy treatment. Errors in target/normal tissue delineation and dose calculation, and the increased exposure of healthy tissues to high radiation doses, are some of the undesirable problems caused by inaccurate tracking of respiratory motion. Many related works have been introduced for respiratory motion modeling, but a majority of them depend heavily on radiography/fluoroscopy imaging, wearable markers or surgical node implanting techniques. We, in this article, propose a new respiratory motion tracking approach by exploiting the advantages of an RGB-D camera. First, we create a patient-specific respiratory motion model using principal component analysis (PCA), removing the spatial and temporal noise of the input depth data. Then, this model is utilized for real-time external respiratory motion measurement with high accuracy. Additionally, we introduce a marker-based depth frame registration technique to limit the measuring area to an anatomically consistent region, which helps to handle patient movements during the treatment. We achieved a correlation of 0.97 compared with a spirometer and an average error of 0.53 mm when considering a laser line scanning result as the ground truth. As future work, we will use this accurate measurement of external respiratory motion to generate a correlated motion model that describes the movements of internal tumors. PMID:28792468
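A hedged sketch of the PCA step: stack depth frames as rows, keep the few leading components that capture the breathing motion, and reconstruct to suppress spatial and temporal noise. The frame size, number of components, and synthetic depth sequence are assumptions, not the paper's data.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
n_frames, h, w = 300, 40, 40
t = np.linspace(0, 30, n_frames)

# Toy depth sequence: a breathing-like surface displacement plus sensor noise
base = np.outer(np.sin(2 * np.pi * 0.25 * t), rng.uniform(0.5, 1.0, h * w))
depth = base + 0.2 * rng.standard_normal((n_frames, h * w))

pca = PCA(n_components=3)                      # a few components capture respiration
scores = pca.fit_transform(depth)
denoised = pca.inverse_transform(scores)       # reconstructed, noise-reduced frames

# The first principal component score can serve as a 1D respiratory signal
resp_signal = scores[:, 0]
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
```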
Diffuse Scattering from Lead-Containing Ferroelectric Perovskite Oxides
Goossens, D. J.
2013-01-01
Ferroelectric materials rely on some type of non-centrosymmetric displacement correlations to give rise to a macroscopic polarisation. These displacements can show short-range order (SRO) that is reflective of the local chemistry, and so studying it reveals important information about how the structure gives rise to the technologically useful properties. A key means of exploring this SRO is diffuse scattering. Conventional structural studies use Bragg peak intensities to determine the average structure. In a single crystal diffuse scattering (SCDS) experiment, the coherent scattered intensity is measured at non-integer Miller indices, and can be used to examine the population of local configurations. This is because the diffuse scattering is sensitive to two-body averages, whereas the Bragg intensity gives single-body averages. This review outlines key results of SCDS studies on several materials and explores the similarities and differences in their diffuse scattering. Random strains are considered, as are models based on a phonon-like picture or a more local-chemistry oriented picture. Limitations of the technique are discussed.
NASA Astrophysics Data System (ADS)
Findlay, R. P.; Dimbylow, P. J.
2008-05-01
If an electromagnetic field is incident normally onto a perfectly conducting ground plane, the field is reflected back into the domain. This produces a standing wave above the ground plane. If a person is present within the domain, absorption of the field in the body may cause problems regarding compliance with electromagnetic guidelines. To investigate this, the whole-body averaged specific energy absorption rate (SAR), localised SAR and ankle currents in the voxel model NORMAN have been calculated for a variety of these exposures under grounded conditions. The results were normalised to the spatially averaged field, a technique used to determine a mean value for comparison with guidelines when the field varies along the height of the body. Additionally, the external field values required to produce basic restrictions for whole-body averaged SAR have been calculated. It was found that in all configurations studied, the ICNIRP reference levels and IEEE MPEs provided a conservative estimate of these restrictions.
Nakajima, Yujiro; Kadoya, Noriyuki; Kanai, Takayuki; Ito, Kengo; Sato, Kiyokazu; Dobashi, Suguru; Yamamoto, Takaya; Ishikawa, Yojiro; Matsushita, Haruo; Takeda, Ken; Jingu, Keiichi
2016-07-01
Irregular breathing can influence the outcome of 4D computed tomography imaging and cause artifacts. Visual biofeedback systems associated with a patient-specific guiding waveform are known to reduce respiratory irregularities. In Japan, abdomen and chest motion self-control devices (Abches), representing a simpler visual coaching technique without a guiding waveform, are used instead; however, no studies have compared these two systems to date. Here, we evaluate the effectiveness of respiratory coaching in reducing respiratory irregularities by comparing the two respiratory management systems. We collected data from 11 healthy volunteers. Bar and wave models were used as visual biofeedback systems. Abches consists of a respiratory indicator marking the end of each expiration and inspiration motion. Respiratory variations were quantified as the root mean squared error (RMSE) of the displacement and period of the breathing cycles. All coaching techniques improved respiratory variation compared with free-breathing. Displacement RMSEs were 1.43 ± 0.84, 1.22 ± 1.13, 1.21 ± 0.86 and 0.98 ± 0.47 mm for free-breathing, Abches, the bar model and the wave model, respectively. Period RMSEs were 0.48 ± 0.42, 0.33 ± 0.31, 0.23 ± 0.18 and 0.17 ± 0.05 s for free-breathing, Abches, the bar model and the wave model, respectively. The average reductions in displacement and period RMSE with the wave model were 27% and 47%, respectively. For variation in both displacement and period, the wave model was superior to the other techniques. Our results showed that visual biofeedback combined with a wave model could potentially provide clinical benefits in respiratory management, although all techniques were able to reduce respiratory irregularities. © The Author 2016. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
The study of PDF turbulence models in combustion
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1991-01-01
The accurate prediction of turbulent combustion is still beyond the reach of today's computational techniques. It is the consensus of the combustion profession that predictions of chemically reacting flows are poor when conventional turbulence models are used. The main difficulty lies in the fact that the reaction rate is highly nonlinear, so the use of averaged temperature, pressure, and density produces excessively large errors. The probability density function (PDF) method is the only alternative at the present time that uses local instantaneous values of the temperature, density, etc. in predicting the chemical reaction rate, and thus it is the only viable approach for turbulent combustion calculations.
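A tiny numeric illustration of the nonlinearity argument: the mean of an Arrhenius-type rate evaluated over fluctuating temperatures differs substantially from the rate evaluated at the mean temperature. The temperature distribution and activation temperature below are arbitrary, chosen only to make the point.

```python
import numpy as np

rng = np.random.default_rng(5)
T = rng.normal(1500.0, 200.0, 100_000)        # fluctuating local temperature, K (arbitrary)
Ta = 15000.0                                  # activation temperature, K (arbitrary)

rate_of_mean = np.exp(-Ta / T.mean())         # rate computed from the averaged temperature
mean_of_rate = np.exp(-Ta / T).mean()         # average of the instantaneous rates
print(f"ratio (true mean rate / rate at mean T) = {mean_of_rate / rate_of_mean:.1f}")
```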
Uncertainty Quantification for Robust Control of Wind Turbines using Sliding Mode Observer
NASA Astrophysics Data System (ADS)
Schulte, Horst
2016-09-01
A new method for quantifying uncertain models for robust wind turbine control using sliding-mode techniques is presented, with the objective of improving active load mitigation. The approach is based on the so-called equivalent output injection signal, which corresponds to the average behavior of the discontinuous switching term that establishes and maintains motion on a sliding surface. The injection signal is evaluated directly to obtain estimates of the uncertainty bounds of external disturbances and parameter uncertainties. The applicability of the proposed method is illustrated by the quantification of a four degree-of-freedom model of the NREL 5MW reference turbine containing uncertainties.
Design of Particulate-Reinforced Composite Materials
Muc, Aleksander; Barski, Marek
2018-01-01
A microstructure-based model is developed to study the effective anisotropic properties (magnetic, dielectric or thermal) of two-phase particle-filled composites. The Green’s function technique and the effective field method are used to theoretically derive the homogenized (averaged) properties for a representative volume element containing isolated inclusion and infinite, chain-structured particles. Those results are compared with the finite element approximations conducted for the assumed representative volume element. In addition, the Maxwell–Garnett model is retrieved as a special case when particle interactions are not considered. We also give some information on the optimal design of the effective anisotropic properties taking into account the shape of magnetic particles. PMID:29401678
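As a point of reference for the non-interacting limit mentioned above, here is a hedged sketch of the classical Maxwell–Garnett mixing rule for spherical inclusions; the permittivity values and volume fraction are arbitrary, and the real effective-field calculation in the paper goes beyond this special case.

```python
def maxwell_garnett(eps_m: complex, eps_i: complex, f: float) -> complex:
    """Effective permittivity of spherical inclusions (eps_i, volume fraction f)
    embedded in a matrix (eps_m), neglecting particle-particle interactions."""
    beta = (eps_i - eps_m) / (eps_i + 2 * eps_m)
    return eps_m * (1 + 2 * f * beta) / (1 - f * beta)

# Example: dilute dispersion of high-permittivity spheres in a low-permittivity matrix
print(maxwell_garnett(eps_m=2.0, eps_i=10.0, f=0.15))
```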
Study of Periodical Flow Heat Transfer in an Internal Combustion Engine
NASA Astrophysics Data System (ADS)
Luo, Xi
In-cylinder heat transfer is one of the most critical physical behaviors with a direct influence on engine-out emissions and thermal efficiency in IC engines. The in-cylinder wall temperature has to be precisely controlled to achieve high efficiency and low emissions, but this cannot be done without knowing the gas-to-wall heat flux. This study reports on the development of a technique suitable for engine in-cylinder surface temperature measurement, a quantity that is hard to reach with traditional methods. A laser-induced phosphorescence technique was used to study in-cylinder wall temperature effects on engine-out unburned hydrocarbons during the engine transitional (warm-up) period. A linear correlation was found between the cylinder wall surface temperature and the unburned hydrocarbons at medium and high charge densities. At low charge density, no clear correlation was observed because of misfire events. A new auto-background-correction infrared (IR) diagnostic was developed to measure the instantaneous in-cylinder surface temperature at 0.1 CAD resolution. A numerical mechanism was designed to suppress relatively low-frequency background noise and provide accurate in-cylinder surface temperature measurements with an error of less than 1.4% inside the IC engine. In addition, a proposed optical coating reduced time-delay errors by 50% compared with more conventional thermocouple techniques. A new cycle-averaged Res number was developed for the IC engine to capture the characteristics of engine flow. Comparison and scaling between different engine flow parameters are possible by matching the averaged Res number. From the experimental results, the engine flow motion was classified as intermittently turbulent, which differs from the fully developed turbulence assumption that has previously been used in almost all engine simulations. The intermittent turbulence could have a great impact on engine heat transfer because of the transitional turbulence effect. A 3D engine CFD model further supports the existence of transitional turbulent flow. A new multi-zone heat transfer model is proposed specifically for IC engines. The model includes pressure-work effects and improved heat transfer prediction compared with the standard law-of-the-wall model.
A short-term ensemble wind speed forecasting system for wind power applications
NASA Astrophysics Data System (ADS)
Baidya Roy, S.; Traiteur, J. J.; Callicutt, D.; Smith, M.
2011-12-01
This study develops an adaptive, blended forecasting system to provide accurate wind speed forecasts 1 hour ahead of time for wind power applications. The system consists of an ensemble of 21 forecasts with different configurations of the Weather Research and Forecasting Single Column Model (WRFSCM) and a persistence model. The ensemble is calibrated against observations for a 2 month period (June-July, 2008) at a potential wind farm site in Illinois using the Bayesian Model Averaging (BMA) technique. The forecasting system is evaluated against observations for August 2008 at the same site. The calibrated ensemble forecasts significantly outperform the forecasts from the uncalibrated ensemble while significantly reducing forecast uncertainty under all environmental stability conditions. The system also generates significantly better forecasts than persistence, autoregressive (AR) and autoregressive moving average (ARMA) models during the morning transition and the diurnal convective regimes. This forecasting system is computationally more efficient than traditional numerical weather prediction models and can generate a calibrated forecast, including model runs and calibration, in approximately 1 minute. Currently, hour-ahead wind speed forecasts are almost exclusively produced using statistical models. However, numerical models have several distinct advantages over statistical models including the potential to provide turbulence forecasts. Hence, there is an urgent need to explore the role of numerical models in short-term wind speed forecasting. This work is a step in that direction and is likely to trigger a debate within the wind speed forecasting community.
NASA Technical Reports Server (NTRS)
Baker, J. R. (Principal Investigator)
1979-01-01
The author has identified the following significant results. Least squares techniques were applied for parameter estimation of functions to predict winter wheat phenological stage with daily maximum temperature, minimum temperature, daylength, and precipitation as independent variables. After parameter estimation, tests were conducted using independent data. It may generally be concluded that exponential functions have little advantage over polynomials. Precipitation was not found to significantly affect the fits. The Robertson triquadratic form, in general use for spring wheat, yielded good results, but special techniques and care are required. In most instances, equations with nonlinear effects were found to yield erratic results when utilized with averaged daily environmental values as independent variables.
Liu, Xin
2014-01-01
This study describes a deterministic method for simulating the first-order scattering in a medical computed tomography scanner. The method was developed based on a physics model of x-ray photon interactions with matter and a ray tracing technique. The results from simulated scattering were compared to the ones from an actual scattering measurement. Two phantoms with homogeneous and heterogeneous material distributions were used in the scattering simulation and measurement. It was found that the simulated scatter profile was in agreement with the measurement result, with an average difference of 25% or less. Finally, tomographic images with artifacts caused by scatter were corrected based on the simulated scatter profiles. The image quality improved significantly.
Large-eddy simulation of flow in a plane, asymmetric diffuser
NASA Technical Reports Server (NTRS)
Kaltenbach, Hans-Jakob
1993-01-01
Recent improvements in subgrid-scale modeling as well as increases in computer power make it feasible to investigate flows using large-eddy simulation (LES) which have been traditionally studied with techniques based on Reynolds averaging. However, LES has not yet been applied to many flows of immediate technical interest. Preliminary results from LES of a plane diffuser flow are described. The long term goal of this work is to investigate flow separation as well as separation control in ducts and ramp-like geometries.
First-Principles Study of Interfacial Boundaries in Ni-Ni3AL (Postprint)
2014-05-01
[1,2] and extensions thereof. The experimental technique is difficult, as accurate measurements of average particle size over time are challenging…[8]. There is significant scatter in the measured values of r, and the result is strongly dependent on what model is used to describe the particle size…binary Ni–Al alloys. This study focused on the evolution of particle size and IFB width during annealing at two temperatures (823 and 873 K) for…
SVD analysis of Aura TES spectral residuals
NASA Technical Reports Server (NTRS)
Beer, Reinhard; Kulawik, Susan S.; Rodgers, Clive D.; Bowman, Kevin W.
2005-01-01
Singular Value Decomposition (SVD) analysis is both a powerful diagnostic tool and an effective method of noise filtering. We present the results of an SVD analysis of an ensemble of spectral residuals acquired in September 2004 from a 16-orbit Aura Tropospheric Emission Spectrometer (TES) Global Survey and compare them to alternative methods such as zonal averages. In particular, the technique highlights issues such as the orbital variation of instrument response and incompletely modeled effects of surface emissivity and atmospheric composition.
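A short sketch of the SVD filtering idea on a matrix of spectral residuals (rows = observations, columns = spectral channels): keep only the leading singular vectors and reconstruct, so coherent systematic patterns are retained while uncorrelated noise is suppressed. The matrix and retained rank are placeholders, not TES data.

```python
import numpy as np

rng = np.random.default_rng(6)
n_obs, n_chan = 500, 200

# Toy residual matrix: two coherent "systematic" patterns buried in noise
patterns = rng.standard_normal((2, n_chan))
amps = rng.standard_normal((n_obs, 2))
residuals = amps @ patterns + 0.5 * rng.standard_normal((n_obs, n_chan))

U, s, Vt = np.linalg.svd(residuals, full_matrices=False)
k = 2                                           # retain the dominant singular vectors
filtered = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]   # low-rank, noise-filtered reconstruction

print("leading singular values:", np.round(s[:5], 1))
print("RMS before/after filtering:",
      round(float(residuals.std()), 3), round(float(filtered.std()), 3))
```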
Detonation Reaction Zones in Condensed Explosives
NASA Astrophysics Data System (ADS)
Tarver, Craig M.
2006-07-01
Experimental measurements using nanosecond time-resolved embedded gauges and laser interferometric techniques, combined with Non-Equilibrium Zeldovich-von Neumann-Döring (NEZND) theory and Ignition and Growth reactive flow hydrodynamic modeling, have revealed the average pressure/particle velocity states attained in the reaction zones of self-sustaining detonation waves in several solid and liquid explosives. The time durations of these reaction zone processes are discussed for explosives based on pentaerythritol tetranitrate (PETN), nitromethane, octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX), triaminotrinitrobenzene (TATB) and trinitrotoluene (TNT).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stratz, S. Adam; Jones, Steven J.; Mullen, Austin D.
Newly established adsorption enthalpy and entropy values of 12 lanthanide hexafluoroacetylacetonates, denoted Ln[hfac]4, along with the experimental and theoretical methodology used to obtain these values, are presented for the first time. The results of this work can be used in conjunction with theoretical modeling techniques to optimize a large-scale gas-phase separation experiment using isothermal chromatography. The results to date indicate average adsorption enthalpy and entropy values of the 12 Ln[hfac]4 complexes ranging from -33 to -139 kJ/mol and -299 to -557 J/(mol K), respectively.
Flow Mapping in a Gas-Solid Riser via Computer Automated Radioactive Particle Tracking (CARPT)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muthanna Al-Dahhan; Milorad P. Dudukovic; Satish Bhusarapu
2005-06-04
Statement of the Problem: Developing and disseminating a general and experimentally validated model for turbulent multiphase fluid dynamics suitable for engineering design purposes in industrial-scale applications of riser reactors and pneumatic conveying requires collecting reliable data on solids trajectories, velocities (averaged and instantaneous), solids holdup distribution and solids fluxes in the riser as a function of operating conditions. Such data are currently not available on the same system. The Multiphase Fluid Dynamics Research Consortium (MFDRC) was established to address these issues on a chosen example of a circulating fluidized bed (CFB) reactor, which is widely used in the petroleum and chemical industry, including coal combustion. This project addresses the lack of reliable data needed to advance CFB technology. Project Objectives: The objective of this project is to advance the understanding of the solids flow pattern and mixing in a well-developed flow region of a gas-solid riser, operated at different gas flow rates and solids loadings, using state-of-the-art non-intrusive measurements. This work creates insight and a reliable database for local solids fluid-dynamic quantities in a pilot-plant-scale CFB, which can then be used to validate/develop phenomenological models for the riser. This study also attempts to provide benchmark data for validation of Computational Fluid Dynamics (CFD) codes and their current closures. Technical Approach: The non-invasive Computer Automated Radioactive Particle Tracking (CARPT) technique provides the complete Eulerian solids flow field (time-averaged velocity map and various turbulence parameters such as the Reynolds stresses, turbulent kinetic energy, and eddy diffusivities). It also directly gives the Lagrangian information on solids flow and yields the true solids residence time distribution (RTD). Another radiation-based technique, Computed Tomography (CT), yields detailed time-averaged local holdup profiles at various planes. Together, these two techniques can provide the needed local solids flow dynamic information for the same setup under identical operating conditions, and the data obtained can be used as a benchmark for the development and refinement of appropriate riser models. For the above reasons these two techniques were implemented in this study on a fully developed section of the riser. To derive the global mixing information in the riser, an accurate solids RTD is needed and was obtained by monitoring the entry and exit of a single radioactive tracer. Other global parameters such as the Cycle Time Distribution (CTD), overall solids holdup in the riser, and solids recycle percentage at the bottom section of the riser were evaluated from different solids travel time distributions. In addition, a novel method was applied to measure the overall solids mass flux accurately and in situ.
VizieR Online Data Catalog: HARPS timeseries data for HD41248 (Jenkins+, 2014)
NASA Astrophysics Data System (ADS)
Jenkins, J. S.; Tuomi, M.
2017-05-01
We modeled the HARPS radial velocities of HD 41248 by adopting the analysis techniques and the statistical model applied in Tuomi et al. (2014, arXiv:1405.2016). This model contains Keplerian signals, a linear trend, a moving average component with exponential smoothing, and linear correlations with activity indices, namely, BIS, FWHM, and the chromospheric activity S index. We applied our statistical model outlined above to the full data set of radial velocities for HD 41248, combining the previously published data in Jenkins et al. (2013ApJ...771...41J) with the newly published data in Santos et al. (2014, J/A+A/566/A35), giving rise to a total time series of 223 HARPS (Mayor et al. 2003Msngr.114...20M) velocities. (1 data file).
Nonparametric autocovariance estimation from censored time series by Gaussian imputation.
Park, Jung Wook; Genton, Marc G; Ghosh, Sujit K
2009-02-01
One of the most frequently used methods to model the autocovariance function of a second-order stationary time series is to use the parametric framework of autoregressive and moving average models developed by Box and Jenkins. However, such parametric models, though very flexible, may not always be adequate to model autocovariance functions with sharp changes. Furthermore, if the data do not follow the parametric model and are censored at a certain value, the estimation results may not be reliable. We develop a Gaussian imputation method to estimate an autocovariance structure via nonparametric estimation of the autocovariance function in order to address both censoring and incorrect model specification. We demonstrate the effectiveness of the technique in terms of bias and efficiency with simulations under various rates of censoring and underlying models. We describe its application to a time series of silicon concentrations in the Arctic.
David, Ingrid; Bouvier, Frédéric; Ricard, Edmond; Ruesche, Julien; Weisbecker, Jean-Louis
2013-09-30
The pre-weaning growth of lambs, an important component of meat production, depends on maternal and direct effects. These effects cannot be observed directly and models used to study pre-weaning growth assume that they are additive. However, it is reasonable to suggest that the influence of direct effects on growth may differ depending on the value of maternal effects i.e. an interaction may exist between the two components. To test this hypothesis, an experiment was carried out in Romane sheep in order to obtain observations of maternal phenotypic effects (milk yield and milk quality) and pre-weaning growth of the lambs. The experiment consisted of mating ewes that had markedly different maternal genetic effects with rams that contributed very different genetic effects in four replicates of a 3 × 2 factorial plan. Milk yield was measured using the lamb suckling weight differential technique and milk composition (fat and protein contents) was determined by infrared spectroscopy at 15, 21 and 35 days after lambing. Lambs were weighed at birth and then at 15, 21 and 35 days. An interaction between genotype (of the lamb) and environment (milk yield and quality) for average daily gain was tested using a restricted likelihood ratio test, comparing a linear reaction norm model (interaction model) to a classical additive model (no interaction model). A total of 1284 weights of 442 lambs born from 166 different ewes were analysed. On average, the ewes produced 2.3 ± 0.8 L milk per day. The average protein and fat contents were 50 ± 4 g/L and 60 ± 18 g/L, respectively. The mean 0-35 day average daily gain was 207 ± 46 g/d. Results of the restricted likelihood ratio tests did not highlight any significant interactions between the genotype of the lambs and milk production of the ewe. Our results support the hypothesis of additivity of maternal and direct effects on growth that is currently applied in genetic evaluation models.
NASA Technical Reports Server (NTRS)
Hong, Jaesub; Allen, Branden; Grindlay, Jonathan; Barthelmy, Scott D.
2016-01-01
Wide-field (≳100 deg²) hard X-ray coded-aperture telescopes with high angular resolution (≲2 arcmin) will enable a wide range of time domain astrophysics. For instance, transient sources such as gamma-ray bursts can be precisely localized without the assistance of secondary focusing X-ray telescopes, enabling rapid follow-up studies. On the other hand, high angular resolution in coded-aperture imaging introduces a new challenge in handling the systematic uncertainty: the average photon count per pixel is often too small to establish a proper background pattern or to model the systematic uncertainty on a timescale over which the model remains invariant. We introduce two new techniques to improve detection sensitivity, which are designed for, but not limited to, a high-resolution coded-aperture system: a self-background modeling scheme which utilizes continuous scan or dithering operations, and a Poisson-statistics based probabilistic approach to evaluate the significance of source detection without background subtraction. We illustrate these new imaging analysis techniques for a high-resolution coded-aperture telescope using the data acquired by the wide-field hard X-ray telescope ProtoEXIST2 during a high-altitude balloon flight in fall 2012. We review the imaging sensitivity of ProtoEXIST2 during the flight and demonstrate the performance of the new techniques using our balloon flight data in comparison with a simulated ideal Poisson background.
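A hedged sketch of the Poisson-probability view of detection significance: rather than subtracting a background estimate, evaluate how improbable the observed counts in a source aperture are under the modeled background expectation. The counts and expectation below are invented for illustration.

```python
from scipy.stats import norm, poisson

observed_counts = 38          # counts in the source aperture (illustrative)
expected_background = 21.5    # modeled background expectation for the same aperture

# Probability of observing >= observed_counts from background alone
p_value = poisson.sf(observed_counts - 1, expected_background)

# Express as an equivalent one-sided Gaussian significance
sigma = norm.isf(p_value)
print(f"p = {p_value:.2e}, significance ~ {sigma:.1f} sigma")
```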
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burger, D.E.
1979-11-01
The extraction of morphological parameters from biological cells by analysis of light-scatter patterns is described. A light-scattering measurement system has been designed and constructed that allows one to visually examine and photographically record biological cells or cell models and measure the light-scatter pattern of an individual cell or cell model. Using a laser or conventional illumination, the imaging system consists of a modified microscope with a 35 mm camera attached to record the cell image or light-scatter pattern. Models of biological cells were fabricated. The dynamic range and angular distributions of light scattered from these models were compared to calculated distributions. Spectrum analysis techniques applied to the light-scatter data give the sought-after morphological cell parameters. These results compared favorably to the shape parameters of the fabricated cell models, confirming the mathematical modeling procedure. For nucleated biological material, correct nuclear and cell eccentricity as well as the nuclear and cytoplasmic diameters were determined. A method for comparing the flow equivalent of nuclear and cytoplasmic size to the actual dimensions is shown. This light-scattering experiment provides baseline information for automated cytology. In its present application, it involves correlating average size as measured in flow cytology to the actual dimensions determined from this technique. (ERB)
A model describing vestibular detection of body sway motion.
NASA Technical Reports Server (NTRS)
Nashner, L. M.
1971-01-01
An experimental technique was developed which facilitated the formulation of a quantitative model describing vestibular detection of body sway motion in a postural response mode. All cues, except vestibular ones, which gave a subject an indication that he was beginning to sway, were eliminated using a specially designed two-degree-of-freedom platform; body sway was then induced and resulting compensatory responses at the ankle joints measured. Hybrid simulation compared the experimental results with models of the semicircular canals and utricular otolith receptors. Dynamic characteristics of the resulting canal model compared closely with characteristics of models which describe eye movement and subjective responses to body rotational motions. The average threshold level, in the postural response mode, however, was considerably lower. Analysis indicated that the otoliths probably play no role in the initial detection of body sway motion.
About the coupling of turbulence closure models with averaged Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Vandromme, D.; Ha Minh, H.
1986-01-01
The MacCormack implicit predictor-corrector model (1981) for numerical solution of the coupled Navier-Stokes equations for turbulent flows is extended to nonconservative multiequation turbulence models, as well as the inclusion of second-order Reynolds stress turbulence closure. A scalar effective pressure turbulent contribution to the pressure field is defined to approximate the effects of the Reynolds stress in strongly sheared flows. The Jacobian matrices of the transport equations are diagonalized to reduce the required computer memory and run time. Techniques are defined for including turbulence in the diagonalization. Application of the method is demonstrated with solutions generated for transonic nozzle flow and for the interaction between a supersonic flat plate boundary layer and a 12 deg compression-expansion ramp.
Aerodynamic Characteristics of High Speed Trains under Cross Wind Conditions
NASA Astrophysics Data System (ADS)
Chen, W.; Wu, S. P.; Zhang, Y.
2011-09-01
Numerical simulations of two train models in cross wind were carried out in this paper. The three-dimensional compressible Reynolds-averaged Navier-Stokes equations (RANS), combined with the standard k-ɛ turbulence model, were solved on multi-block hybrid grids using a second-order upwind finite-volume technique. The impact of the fairing on the aerodynamic characteristics of the train models was analyzed. It is shown that the flow separates on the fairing and a strong vortex is generated; the pressure on the upper middle car decreases dramatically, which leads to a large lift force. The fairing changes the basic flow patterns around the trains. In addition, formulas for the aerodynamic force coefficients at yaw angles up to 24° were derived.
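As a worked illustration of the non-dimensionalization behind such force-coefficient formulas, the sketch below converts an assumed side-force value into a coefficient for a given cross-wind case; the density, reference area, speeds and force are illustrative placeholders, not quantities from the study.

```python
import numpy as np

RHO = 1.225          # air density (kg/m^3), assumed standard value

def force_coefficient(force_n, wind_speed, ref_area):
    """C_F = F / (0.5 * rho * U^2 * A)."""
    q = 0.5 * RHO * wind_speed**2          # dynamic pressure
    return force_n / (q * ref_area)

def resultant_wind(train_speed, cross_wind):
    """Resultant relative wind speed and yaw angle (deg) for a cross-wind case."""
    u = np.hypot(train_speed, cross_wind)
    yaw = np.degrees(np.arctan2(cross_wind, train_speed))
    return u, yaw

# Illustrative case: ~300 km/h train with a 20 m/s cross wind
u_rel, yaw = resultant_wind(train_speed=83.3, cross_wind=20.0)
c_side = force_coefficient(force_n=45_000.0, wind_speed=u_rel, ref_area=10.0)
print(f"yaw = {yaw:.1f} deg, side-force coefficient = {c_side:.2f}")
```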
Upscaling soil saturated hydraulic conductivity from pore throat characteristics
NASA Astrophysics Data System (ADS)
Ghanbarian, Behzad; Hunt, Allen G.; Skaggs, Todd H.; Jarvis, Nicholas
2017-06-01
Upscaling and/or estimating saturated hydraulic conductivity Ksat at the core scale from microscopic/macroscopic soil characteristics has been actively under investigation in the hydrology and soil physics communities for several decades. Numerous models have been developed based on different approaches, such as the bundle of capillary tubes model, pedotransfer functions, etc. In this study, we apply concepts from critical path analysis, an upscaling technique first developed in the physics literature, to estimate saturated hydraulic conductivity at the core scale from microscopic pore throat characteristics reflected in capillary pressure data. With this new model, we find Ksat estimations to be within a factor of 3 of the average measured saturated hydraulic conductivities reported by Rawls et al. (1982) for the eleven USDA soil texture classes.
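A minimal sketch of the critical path idea described above: the capillary pressure curve is used to locate a critical pore radius via the Young-Laplace relation, and Ksat is scaled with the square of that radius. The critical saturation threshold and the proportionality prefactor are assumed placeholders, not the calibrated quantities of the cited model.

```python
import numpy as np

GAMMA = 0.0728      # surface tension of water at 20 C (N/m)
THETA = 0.0         # contact angle (rad), assumed fully wetting
SAT_CRIT = 0.15     # assumed critical (percolation) saturation threshold

def critical_radius(pc_kpa, saturation, sat_crit=SAT_CRIT):
    """Pore radius (m) at the critical saturation via the Young-Laplace law."""
    pc_pa = np.asarray(pc_kpa) * 1e3
    # interpolate capillary pressure at the critical saturation (xp must increase)
    pc_crit = np.interp(sat_crit, saturation[::-1], pc_pa[::-1])
    return 2.0 * GAMMA * np.cos(THETA) / pc_crit

def ksat_cpa(pc_kpa, saturation, prefactor=1.0e6):
    """Ksat (m/s) ~ prefactor * r_c**2; the prefactor is a placeholder constant."""
    r_c = critical_radius(pc_kpa, saturation)
    return prefactor * r_c**2

# Example water-retention data (wet to dry): saturation vs. capillary pressure (kPa)
sat = np.array([1.0, 0.8, 0.6, 0.4, 0.2, 0.1])
pc = np.array([0.5, 2.0, 5.0, 12.0, 40.0, 100.0])
print(f"Ksat estimate: {ksat_cpa(pc, sat):.2e} m/s")
```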
Parameter estimation of an ARMA model for river flow forecasting using goal programming
NASA Astrophysics Data System (ADS)
Mohammadi, Kourosh; Eslami, H. R.; Kahawita, Rene
2006-11-01
River flow forecasting constitutes one of the most important applications in hydrology. Several methods have been developed for this purpose, and one of the most well-known techniques is the autoregressive moving average (ARMA) model. In the research reported here, the goal was to minimize the error for a specific season of the year as well as for the complete series. Goal programming (GP) was used to estimate the ARMA model parameters. Shaloo Bridge station on the Karun River, with 68 years of observed stream flow data, was selected to evaluate the performance of the proposed method. Compared with the usual method of maximum likelihood estimation, the results favored the newly proposed algorithm.
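The sketch below illustrates, under stated assumptions, how a goal-programming-flavored objective can weight the errors of a chosen season more heavily when estimating autoregressive parameters; the AR(2) form, the seasonal weight and the synthetic flow series are illustrative and do not reproduce the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_years, months = 30, 12
flow = 50 + 20*np.sin(2*np.pi*np.arange(n_years*months)/12) + rng.normal(0, 5, n_years*months)
season = np.tile(np.arange(months), n_years)

def residuals(phi, y):
    """One-step-ahead residuals of an AR(2) model y_t = c + phi1*y_{t-1} + phi2*y_{t-2}."""
    c, p1, p2 = phi
    pred = c + p1*y[1:-1] + p2*y[:-2]
    return y[2:] - pred

def goal_objective(phi, y, season, target_season=6, w_target=5.0):
    """Weighted absolute deviations, with the target season penalized more heavily."""
    r = residuals(phi, y)
    w = np.where(season[2:] == target_season, w_target, 1.0)
    return np.sum(w * np.abs(r))

res = minimize(goal_objective, x0=[1.0, 0.5, 0.0], args=(flow, season), method="Nelder-Mead")
print("fitted parameters (c, phi1, phi2):", np.round(res.x, 3))
```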
Intrinsic coincident linear polarimetry using stacked organic photovoltaics.
Roy, S Gupta; Awartani, O M; Sen, P; O'Connor, B T; Kudenov, M W
2016-06-27
Polarimetry has widespread applications within atmospheric sensing, telecommunications, biomedical imaging, and target detection. Several existing methods of imaging polarimetry trade off the sensor's spatial resolution for polarimetric resolution, and often have some form of spatial registration error. To mitigate these issues, we have developed a system using oriented polymer-based organic photovoltaics (OPVs) that can preferentially absorb linearly polarized light. Additionally, the OPV cells can be made semitransparent, enabling multiple detectors to be cascaded along the same optical axis. Since each device performs a partial polarization measurement of the same incident beam, high temporal resolution is maintained with the potential for inherent spatial registration. In this paper, a Mueller matrix model of the stacked OPV design is provided. Based on this model, a calibration technique is developed and presented. This calibration technique and model are validated with experimental data, taken with a cascaded three cell OPV Stokes polarimeter, capable of measuring incident linear polarization states. Our results indicate polarization measurement error of 1.2% RMS and an average absolute radiometric accuracy of 2.2% for the demonstrated polarimeter.
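A minimal sketch of the linear-Stokes recovery underlying such a stacked polarimeter: each cell is modeled as a partial linear diattenuator contributing one row of a measurement matrix, and the incident Stokes vector is recovered by least-squares inversion. The analyzer orientations and diattenuation efficiency are assumed values, not the calibrated Mueller-matrix parameters of the paper.

```python
import numpy as np

def analyzer_row(theta_deg, efficiency=0.5):
    """Row of the measurement matrix for a partial linear diattenuator at angle theta."""
    t = np.deg2rad(theta_deg)
    return 0.5*np.array([1.0, efficiency*np.cos(2*t), efficiency*np.sin(2*t)])

W = np.vstack([analyzer_row(a) for a in (0.0, 60.0, 120.0)])  # three stacked cells

def reconstruct_stokes(signals):
    """Least-squares inversion: S = pinv(W) @ measured signals."""
    return np.linalg.pinv(W) @ np.asarray(signals)

# Simulate a beam fully linearly polarized at 30 degrees, then invert
S_true = np.array([1.0, np.cos(np.deg2rad(60)), np.sin(np.deg2rad(60))])
signals = W @ S_true
print("recovered Stokes vector:", np.round(reconstruct_stokes(signals), 3))
```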
Numerical and Experimental Study of Wake Redirection Techniques in a Boundary Layer Wind Tunnel
NASA Astrophysics Data System (ADS)
Wang, J.; Foley, S.; Nanos, E. M.; Yu, T.; Campagnolo, F.; Bottasso, C. L.; Zanotti, A.; Croce, A.
2017-05-01
The aim of the present paper is to validate a wind farm LES framework in the context of two distinct wake redirection techniques: yaw misalignment and individual cyclic pitch control. A test campaign was conducted using scaled wind turbine models in a boundary layer wind tunnel, where both particle image velocimetry and hot-wire thermo anemometers were used to obtain high quality measurements of the downstream flow. A LiDAR system was also employed to determine the non-uniformity of the inflow velocity field. A high-fidelity large-eddy simulation lifting-line model was used to simulate the aerodynamic behavior of the system, including the geometry of the wind turbine nacelle and tower. A tuning-free Lagrangian scale-dependent dynamic approach was adopted to improve the sub-grid scale modeling. Comparisons with experimental measurements are used to systematically validate the simulations. The LES results are in good agreement with the PIV and hot-wire data in terms of time-averaged wake profiles, turbulence intensity and Reynolds shear stresses. Discrepancies are also highlighted, to guide future improvements.
Robust model-based 3D/3D fusion using sparse matching for minimally invasive surgery.
Neumann, Dominik; Grbic, Sasa; John, Matthias; Navab, Nassir; Hornegger, Joachim; Ionasec, Razvan
2013-01-01
Classical surgery is being disrupted by minimally invasive and transcatheter procedures. As there is no direct view or access to the affected anatomy, advanced imaging techniques such as 3D C-arm CT and C-arm fluoroscopy are routinely used for intra-operative guidance. However, intra-operative modalities have limited image quality of the soft tissue and a reliable assessment of the cardiac anatomy can only be made by injecting contrast agent, which is harmful to the patient and requires complex acquisition protocols. We propose a novel sparse matching approach for fusing high quality pre-operative CT and non-contrasted, non-gated intra-operative C-arm CT by utilizing robust machine learning and numerical optimization techniques. Thus, high-quality patient-specific models can be extracted from the pre-operative CT and mapped to the intra-operative imaging environment to guide minimally invasive procedures. Extensive quantitative experiments demonstrate that our model-based fusion approach has an average execution time of 2.9 s, while the accuracy lies within expert user confidence intervals.
NASA Technical Reports Server (NTRS)
Landahl, Marten T.
1988-01-01
Experiments on wall-bounded shear flows (channel flows and boundary layers) have indicated that the turbulence in the region close to the wall exhibits a characteristic intermittently formed pattern of coherent structures. For a quantitative study of coherent structures it is necessary to make use of conditional sampling. One particularly successful sampling technique is the Variable Interval Time Averaging (VITA) technique, first explored by Blackwelder and Kaplan (1976). In this, an event is assumed to occur when the short-time variance exceeds a certain threshold multiple of the mean-square signal. The analysis presented removes some assumptions in the earlier models in that the effects of pressure and viscosity are taken into account in an approximation based on the assumption that the near-wall structures are highly elongated in the streamwise direction. The appropriateness of this is suggested by the observations and is also self-consistent with the results of the model, which show that the streamwise dimension of the structure grows with time, so that the approximation should improve with the age of the structure.
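A minimal sketch of VITA-style event detection as described above: a sample is flagged as an event when the short-time variance of the velocity signal exceeds a threshold multiple of its long-time mean-square value. The window length and threshold factor are illustrative choices.

```python
import numpy as np

def vita_events(u, window, k=1.2):
    """Return a boolean array marking samples where the VITA criterion is met."""
    u = np.asarray(u, dtype=float)
    u_fluct = u - u.mean()                       # fluctuating part of the signal
    kernel = np.ones(window) / window
    mean_u = np.convolve(u_fluct, kernel, mode="same")
    mean_u2 = np.convolve(u_fluct**2, kernel, mode="same")
    short_var = mean_u2 - mean_u**2              # short-time (windowed) variance
    return short_var > k * np.mean(u_fluct**2)   # threshold on long-time mean square

# Example: synthetic velocity signal with an embedded burst
rng = np.random.default_rng(1)
u = rng.normal(0.0, 1.0, 2000)
u[1000:1040] += 4.0 * np.sin(np.linspace(0, np.pi, 40))
events = vita_events(u, window=50)
print("first detected event samples:", np.flatnonzero(events)[:5], "...")
```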
Interference Mitigation Effects on Synthetic Aperture Radar Coherent Data Products
DOE Office of Scientific and Technical Information (OSTI.GOV)
Musgrove, Cameron
For synthetic aperture radars, radio frequency interference from sources external to the radar system, and the techniques used to mitigate that interference, can degrade the quality of the image products. Usually the radar system designer will try to balance the amount of mitigation against an acceptable amount of interference to optimize the image quality. This dissertation examines the effect of interference mitigation upon coherent data products of fine-resolution, high-frequency synthetic aperture radars using stretch processing. Novel interference mitigation techniques are introduced that operate on single or multiple apertures of data and increase average coherence compared to existing techniques. New metrics are applied to evaluate multiple mitigation techniques for image quality and average coherence. The underlying mechanism by which interference mitigation techniques affect coherence is revealed.
Pulverman, Carey S; Hixon, J Gregory; Meston, Cindy M
2015-10-01
Based on analytic techniques that collapse data into a single average value, it has been reported that women lack category specificity and show genital sexual arousal to a large range of sexual stimuli including those that both match and do not match their self-reported sexual interests. These findings may be a methodological artifact of the way in which data are analyzed. This study examined whether using an analytic technique that models data over time would yield different results. Across two studies, heterosexual (N = 19) and lesbian (N = 14) women viewed erotic films featuring heterosexual, lesbian, and gay male couples, respectively, as their physiological sexual arousal was assessed with vaginal photoplethysmography. Data analysis with traditional methods comparing average genital arousal between films failed to detect specificity of genital arousal for either group. When data were analyzed with smoothing regression splines and a within-subjects approach, both heterosexual and lesbian women demonstrated different patterns of genital sexual arousal to the different types of erotic films, suggesting that sophisticated statistical techniques may be necessary to more fully understand women's genital sexual arousal response. Heterosexual women showed category-specific genital sexual arousal. Lesbian women showed higher arousal to the heterosexual film than the other films. However, within subjects, lesbian women showed significantly different arousal responses suggesting that lesbian women's genital arousal discriminates between different categories of stimuli at the individual level. Implications for the future use of vaginal photoplethysmography as a diagnostic tool of sexual preferences in clinical and forensic settings are discussed.
NASA Astrophysics Data System (ADS)
Kadoura, Ahmad; Sun, Shuyu; Salama, Amgad
2014-08-01
Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters but yet better predictive capability; however, it is well known that molecular simulation is very CPU expensive compared to equation of state approaches. We have recently introduced an efficient thermodynamically consistent technique to rapidly regenerate Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability to predict ensemble averages of primary quantities at thermodynamic conditions neighboring those of the originally simulated MCMCs. Moreover, this extrapolation technique is extended to predict second-derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in the canonical ensemble for Lennard-Jones particles. In this paper, the system's potential energy, pressure, isochoric heat capacity and isothermal compressibility were extrapolated along isochors, isotherms and paths of changing temperature and density from the originally simulated points. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single-site models was proposed for methane, nitrogen and carbon monoxide.
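The sketch below shows the basic reweighting step behind such extrapolation in the canonical ensemble: configurations sampled at one inverse temperature are reweighted with Boltzmann factors to estimate an ensemble average at a neighboring state point. It is a generic single-state reweighting example and does not reproduce the chain-regeneration scheme of the paper; the synthetic energies are illustrative.

```python
import numpy as np

def reweighted_average(observable, energy, beta_sim, beta_new):
    """<A>_new = sum_i A_i w_i / sum_i w_i with w_i = exp(-(beta_new - beta_sim) E_i)."""
    energy = np.asarray(energy, dtype=float)
    log_w = -(beta_new - beta_sim) * energy
    log_w -= log_w.max()                 # subtract max exponent for numerical stability
    w = np.exp(log_w)
    return np.sum(np.asarray(observable) * w) / np.sum(w)

# Example with synthetic potential energies (arbitrary reduced units),
# valid only for small changes in inverse temperature
rng = np.random.default_rng(2)
E = rng.normal(-500.0, 20.0, 10_000)     # energies sampled at beta_sim
print("reweighted <E>:", reweighted_average(E, E, beta_sim=1.0, beta_new=1.05))
```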
Estimation of Uncertainties in the Global Distance Test (GDT_TS) for CASP Models.
Li, Wenlin; Schaeffer, R Dustin; Otwinowski, Zbyszek; Grishin, Nick V
2016-01-01
The Critical Assessment of techniques for protein Structure Prediction (or CASP) is a community-wide blind test experiment to reveal the best accomplishments of structure modeling. Assessors have been using the Global Distance Test (GDT_TS) measure to quantify prediction performance since CASP3 in 1998. However, identifying significant score differences between close models is difficult because of the lack of uncertainty estimations for this measure. Here, we utilized the atomic fluctuations caused by structure flexibility to estimate the uncertainty of GDT_TS scores. Structures determined by nuclear magnetic resonance are deposited as ensembles of alternative conformers that reflect the structural flexibility, whereas standard X-ray refinement produces the static structure averaged over time and space for the dynamic ensembles. To recapitulate the structurally heterogeneous ensemble in the crystal lattice, we performed time-averaged refinement for X-ray datasets to generate structural ensembles for our GDT_TS uncertainty analysis. Using those generated ensembles, our study demonstrates that the time-averaged refinements produced structure ensembles in better agreement with the experimental datasets than the averaged X-ray structures with B-factors. The uncertainty of the GDT_TS scores, quantified by their standard deviations (SDs), increases for scores lower than 50 and 70, with maximum SDs of 0.3 and 1.23 for X-ray and NMR structures, respectively. We also applied our procedure to the high-accuracy version of the GDT-based score and produced similar results with slightly higher SDs. To facilitate score comparisons by the community, we developed a user-friendly web server that produces structure ensembles for NMR and X-ray structures and is accessible at http://prodata.swmed.edu/SEnCS. Our work helps to identify the significance of GDT_TS score differences, as well as to provide structure ensembles for estimating the SDs of any scores.
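A minimal sketch of the score and its ensemble-based uncertainty, assuming the model and reference coordinates are already optimally superposed (the full GDT procedure searches over superpositions, which is omitted here); the toy coordinates are synthetic.

```python
import numpy as np

def gdt_ts(model_xyz, ref_xyz, cutoffs=(1.0, 2.0, 4.0, 8.0)):
    """GDT_TS: mean, over the four cutoffs, of the fraction of residues within cutoff (in %)."""
    d = np.linalg.norm(np.asarray(model_xyz) - np.asarray(ref_xyz), axis=1)
    return 100.0 * np.mean([(d <= c).mean() for c in cutoffs])

def gdt_ts_uncertainty(model_xyz, ensemble_xyz):
    """Mean and standard deviation of GDT_TS over an ensemble of reference conformers."""
    scores = np.array([gdt_ts(model_xyz, ref) for ref in ensemble_xyz])
    return scores.mean(), scores.std(ddof=1)

# Toy example: a 100-residue model scored against 20 jittered reference conformers
rng = np.random.default_rng(3)
model = rng.uniform(0, 50, (100, 3))
ensemble = model + rng.normal(0, 1.5, (20, 100, 3))
mean_score, sd = gdt_ts_uncertainty(model, ensemble)
print(f"GDT_TS = {mean_score:.1f} +/- {sd:.2f}")
```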
Deblurring of Class-Averaged Images in Single-Particle Electron Microscopy.
Park, Wooram; Madden, Dean R; Rockmore, Daniel N; Chirikjian, Gregory S
2010-03-01
This paper proposes a method for deblurring of class-averaged images in single-particle electron microscopy (EM). Since EM images of biological samples are very noisy, the images which are nominally identical projection images are often grouped, aligned and averaged in order to cancel or reduce the background noise. However, the noise in the individual EM images generates errors in the alignment process, which creates an inherent limit on the accuracy of the resulting class averages. This inaccurate class average due to the alignment errors can be viewed as the result of a convolution of an underlying clear image with a blurring function. In this work, we develop a deconvolution method that gives an estimate for the underlying clear image from a blurred class-averaged image using precomputed statistics of misalignment. Since this convolution is over the group of rigid body motions of the plane, SE(2), we use the Fourier transform for SE(2) in order to convert the convolution into a matrix multiplication in the corresponding Fourier space. For practical implementation we use a Hermite-function-based image modeling technique, because Hermite expansions enable lossless Cartesian-polar coordinate conversion using the Laguerre-Fourier expansions, and Hermite expansion and Laguerre-Fourier expansion retain their structures under the Fourier transform. Based on these mathematical properties, we can obtain the deconvolution of the blurred class average using simple matrix multiplication. Tests of the proposed deconvolution method using synthetic and experimental EM images confirm the performance of our method.
Using queuing theory and simulation model to optimize hospital pharmacy performance.
Bahadori, Mohammadkarim; Mohammadnejhad, Seyed Mohsen; Ravangard, Ramin; Teymourzadeh, Ehsan
2014-03-01
Hospital pharmacy is responsible for controlling and monitoring the medication use process and ensures the timely access to safe, effective and economical use of drugs and medicines for patients and hospital staff. This study aimed to optimize the management of studied outpatient pharmacy by developing suitable queuing theory and simulation technique. A descriptive-analytical study conducted in a military hospital in Iran, Tehran in 2013. A sample of 220 patients referred to the outpatient pharmacy of the hospital in two shifts, morning and evening, was selected to collect the necessary data to determine the arrival rate, service rate, and other data needed to calculate the patients flow and queuing network performance variables. After the initial analysis of collected data using the software SPSS 18, the pharmacy queuing network performance indicators were calculated for both shifts. Then, based on collected data and to provide appropriate solutions, the queuing system of current situation for both shifts was modeled and simulated using the software ARENA 12 and 4 scenarios were explored. Results showed that the queue characteristics of the studied pharmacy during the situation analysis were very undesirable in both morning and evening shifts. The average numbers of patients in the pharmacy were 19.21 and 14.66 in the morning and evening, respectively. The average times spent in the system by clients were 39 minutes in the morning and 35 minutes in the evening. The system utilization in the morning and evening were, respectively, 25% and 21%. The simulation results showed that reducing the staff in the morning from 2 to 1 in the receiving prescriptions stage didn't change the queue performance indicators. Increasing one staff in filling prescription drugs could cause a decrease of 10 persons in the average queue length and 18 minutes and 14 seconds in the average waiting time. On the other hand, simulation results showed that in the evening, decreasing the staff from 2 to 1 in the delivery of prescription drugs, changed the queue performance indicators very little. Increasing a staff to fill prescription drugs could cause a decrease of 5 persons in the average queue length and 8 minutes and 44 seconds in the average waiting time. The patients' waiting times and the number of patients waiting to receive services in both shifts could be reduced by using multitasking persons and reallocating them to the time-consuming stage of filling prescriptions, using queuing theory and simulation techniques.
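For readers unfamiliar with the queuing quantities reported above, the sketch below computes utilization, average queue length and average waiting time for a single M/M/c station using the standard Erlang C formulas; the arrival rate, service rate and server counts are illustrative, not the study's measured values.

```python
import math

def mmc_metrics(lam, mu, c):
    """Utilization, average queue length Lq and waiting time Wq for an M/M/c queue."""
    rho = lam / (c * mu)
    if rho >= 1.0:
        raise ValueError("unstable queue: utilization must be below 1")
    a = lam / mu
    p0 = 1.0 / (sum(a**n / math.factorial(n) for n in range(c))
                + a**c / (math.factorial(c) * (1 - rho)))
    lq = p0 * a**c * rho / (math.factorial(c) * (1 - rho) ** 2)
    wq = lq / lam
    return rho, lq, wq

# e.g. 40 patients/hour arriving, each prescription-filling server handles 15/hour
for servers in (3, 4):
    rho, lq, wq = mmc_metrics(lam=40, mu=15, c=servers)
    print(f"c={servers}: utilization={rho:.2f}, Lq={lq:.1f} patients, Wq={60*wq:.1f} min")
```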
NASA Technical Reports Server (NTRS)
Koenig, S. C.; Reister, C. A.; Schaub, J.; Swope, R. D.; Ewert, D.; Fanton, J. W.; Convertino, V. A. (Principal Investigator)
1996-01-01
The Physiology Research Branch at Brooks AFB conducts both human and nonhuman primate experiments to determine the effects of microgravity and hypergravity on the cardiovascular system and to identify the particular mechanisms that invoke these responses. Primary investigative efforts in our nonhuman primate model require the determination of total peripheral resistance, systemic arterial compliance, and pressure-volume loop characteristics. These calculations require beat-to-beat measurement of aortic flow. This study evaluated accuracy, linearity, biocompatibility, and anatomical features of commercially available electromagnetic (EMF) and transit-time flow measurement techniques. Five rhesus monkeys were instrumented with either EMF (3 subjects) or transit-time (2 subjects) flow sensors encircling the proximal ascending aorta. Cardiac outputs computed from these transducers taken over ranges of 0.5 to 2.0 L/min were compared to values obtained using thermodilution. In vivo experiments demonstrated that the EMF probe produced an average error of 15% (r = .896) and 8.6% average linearity per reading, and the transit-time flow probe produced an average error of 6% (r = .955) and 5.3% average linearity per reading. Postoperative performance and biocompatibility of the probes were maintained throughout the study. The transit-time sensors provided the advantages of greater accuracy, smaller size, and lighter weight than the EMF probes. In conclusion, the characteristic features and performance of the transit-time sensors were superior to those of the EMF sensors in this study.
H2RM: A Hybrid Rough Set Reasoning Model for Prediction and Management of Diabetes Mellitus.
Ali, Rahman; Hussain, Jamil; Siddiqi, Muhammad Hameed; Hussain, Maqbool; Lee, Sungyoung
2015-07-03
Diabetes is a chronic disease characterized by a high blood glucose level that results either from a deficiency of insulin produced by the body, or the body's resistance to the effects of insulin. Accurate and precise reasoning and prediction models greatly help physicians to improve diagnosis, prognosis and treatment procedures for different diseases. Though numerous models have been proposed to address the diagnosis and management of diabetes, they have the following drawbacks: (1) restricted to one type of diabetes; (2) lack of understandability and explanatory power of the techniques and decisions; (3) limited either to prediction or to management over structured contents; and (4) limited ability to handle the dimensionality and vagueness of patients' data. To overcome these issues, this paper proposes a novel hybrid rough set reasoning model (H2RM) that resolves problems of inaccurate prediction and management of type-1 diabetes mellitus (T1DM) and type-2 diabetes mellitus (T2DM). For verification of the proposed model, experimental data from fifty patients, acquired from a local hospital in semi-structured format, are used. First, the data are transformed into structured format and then used for mining prediction rules. Rough set theory (RST) based techniques and algorithms are used to mine the prediction rules. During the online execution phase of the model, these rules are used to predict T1DM and T2DM for new patients. Furthermore, the proposed model assists physicians to manage diabetes using knowledge extracted from online diabetes guidelines. Correlation-based trend analysis techniques are used to manage diabetic observations. Experimental results demonstrate that the proposed model outperforms the existing methods with 95.9% average and balanced accuracies.
Neuro-fuzzy and neural network techniques for forecasting sea level in Darwin Harbor, Australia
NASA Astrophysics Data System (ADS)
Karimi, Sepideh; Kisi, Ozgur; Shiri, Jalal; Makarynskyy, Oleg
2013-03-01
Accurate predictions of sea level with different forecast horizons are important for coastal and ocean engineering applications, as well as in land drainage and reclamation studies. The methodology of tidal harmonic analysis, which is generally used for obtaining a mathematical description of the tides, is data demanding, requiring the processing of tidal observations collected over several years. In the present study, hourly sea levels for Darwin Harbor, Australia were predicted using two different data-driven techniques, the adaptive neuro-fuzzy inference system (ANFIS) and the artificial neural network (ANN). Multiple linear regression (MLR) was used for selecting the optimal input combinations (lag times) of hourly sea level. The optimal input combination was found to comprise the current sea level and the five previous hourly values. For the ANFIS models, five different membership functions, namely triangular, trapezoidal, generalized bell, Gaussian and two-sided Gaussian, were tested and employed for predicting sea level for the next 1 h, 24 h, 48 h and 72 h. The ANN models were trained using three different algorithms, namely Levenberg-Marquardt, conjugate gradient and gradient descent. Predictions of the optimal ANFIS and ANN models were compared with those of the optimal auto-regressive moving average (ARMA) models. The coefficient of determination, root mean square error and variance-accounted-for statistics were used as comparison criteria. The obtained results indicated that the triangular membership function was optimal for predictions with the ANFIS models, while an adaptive learning rate and Levenberg-Marquardt were most suitable for training the ANN models. Consequently, the ANFIS and ANN models gave similar forecasts and performed better than the ARMA models developed for the same purpose for all the prediction intervals.
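A minimal sketch of the lagged-input construction and a simple regression baseline evaluated with RMSE and the coefficient of determination; the synthetic tidal series, the one-hour horizon and the plain least-squares model are illustrative stand-ins for the MLR step, not the ANFIS or ANN models themselves.

```python
import numpy as np

def make_lagged(series, n_lags=5, horizon=1):
    """X holds [y_t, y_{t-1}, ..., y_{t-n_lags}], target is y_{t+horizon}."""
    X, y = [], []
    for t in range(n_lags, len(series) - horizon):
        X.append(series[t - n_lags:t + 1][::-1])
        y.append(series[t + horizon])
    return np.array(X), np.array(y)

# Synthetic hourly "sea level" with two tidal constituents plus noise
rng = np.random.default_rng(4)
hours = np.arange(5000)
level = (1.5*np.sin(2*np.pi*hours/12.42) + 0.3*np.sin(2*np.pi*hours/24)
         + rng.normal(0, 0.05, hours.size))

X, y = make_lagged(level, n_lags=5, horizon=1)
split = int(0.8 * len(y))
A = np.column_stack([np.ones(split), X[:split]])           # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, y[:split], rcond=None)
pred = np.column_stack([np.ones(len(y) - split), X[split:]]) @ coef

rmse = np.sqrt(np.mean((y[split:] - pred) ** 2))
r2 = 1 - np.sum((y[split:] - pred) ** 2) / np.sum((y[split:] - y[split:].mean()) ** 2)
print(f"1-hour-ahead MLR baseline: RMSE={rmse:.3f} m, R^2={r2:.3f}")
```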
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duan, J
Purpose: To investigate the potential utility of the in-line phase-contrast imaging (ILPCI) technique with synchrotron radiation in detecting early hepatocellular carcinoma and cavernous hemangioma of the liver using an in vitro model system. Methods: Without contrast agents, three typical early hepatocellular carcinoma specimens and three typical cavernous hemangioma of the liver specimens were imaged using ILPCI. To quantitatively discriminate early hepatocellular carcinoma tissues from cavernous hemangioma tissues, texture features of the projection images based on the gray-level co-occurrence matrix (GLCM) were extracted. The texture parameters of energy, inertia, entropy, correlation, sum average, sum entropy, difference average, difference entropy and inverse difference moment were obtained. Results: In the ILPCI planar images of early hepatocellular carcinoma specimens, vessel trees were clearly visualized on the micrometer scale. Obvious distortion and deformation were present, and the vessels mostly appeared as 'dry sticks'. Liver textures did not appear regular. In the ILPCI planar images of cavernous hemangioma specimens, typical vessels were not found, in contrast with the early hepatocellular carcinoma planar images. The planar images of cavernous hemangioma specimens clearly displayed the dilated hepatic sinusoids with diameters of less than 100 microns, but all of them overlapped with each other. The texture parameters of energy, inertia, entropy, correlation, sum average, sum entropy and difference average showed statistically significant differences between the two types of specimen images (P<0.01), except for the texture parameters of difference entropy and inverse difference moment (P>0.01). Conclusion: The results indicate that there are obvious changes at the morphological level, including vessel structures and liver textures. The study shows that this imaging technique has potential value in evaluating early hepatocellular carcinoma and cavernous hemangioma of the liver.
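A minimal sketch of GLCM texture extraction of the kind described above, using scikit-image (the graycomatrix/graycoprops names assume scikit-image 0.19 or later). Entropy is computed directly from the normalized matrix since it is not among the built-in properties, and the sum/difference statistics are omitted; the quantization level and offsets are illustrative choices.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(image, distances=(1,), angles=(0, np.pi/2), levels=64):
    """GLCM-based texture features: energy, contrast (inertia), correlation,
    homogeneity (inverse difference moment analog) and entropy."""
    img = np.clip((image / image.max()) * (levels - 1), 0, levels - 1).astype(np.uint8)
    glcm = graycomatrix(img, distances=distances, angles=angles,
                        levels=levels, symmetric=True, normed=True)
    feats = {p: graycoprops(glcm, p).mean()
             for p in ("energy", "contrast", "correlation", "homogeneity")}
    p = glcm / glcm.sum(axis=(0, 1), keepdims=True)
    feats["entropy"] = float(-np.sum(p * np.log2(p + 1e-12)) / (p.shape[2] * p.shape[3]))
    return feats

# Example on a synthetic image patch
rng = np.random.default_rng(5)
patch = rng.integers(0, 255, (128, 128)).astype(float)
print(glcm_features(patch))
```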
Sharma, Ity; Kaminski, George A.
2012-01-01
We have computed pKa values for eleven substituted phenol compounds using the continuum Fuzzy-Border (FB) solvation model. Hydration energies for 40 other compounds, including alkanes, alkenes, alkynes, ketones, amines, alcohols, ethers, aromatics, amides, heterocycles, thiols, sulfides and acids have been calculated. The overall average unsigned error in the calculated acidity constant values was equal to 0.41 pH units and the average error in the solvation energies was 0.076 kcal/mol. We have also reproduced pKa values of propanoic and butanoic acids within ca. 0.1 pH units from the experimental values by fitting the solvation parameters for carboxylate ion carbon and oxygen atoms. The FB model combines two distinguishing features. First, it limits the amount of noise which is common in numerical treatment of continuum solvation models by using fixed-position grid points. Second, it employs either second- or first-order approximation for the solvent polarization, depending on a particular implementation. These approximations are similar to those used for solute and explicit solvent fast polarization treatment which we developed previously. This article describes results of employing the first-order technique. This approximation places the presented methodology between the Generalized Born and Poisson-Boltzmann continuum solvation models with respect to their accuracy of reproducing the many-body effects in modeling a continuum solvent. PMID:22815192
Kandari, Tushar; Aswal, Sunita; Prasad, Mukesh; Pant, Preeti; Bourai, A A; Ramola, R C
2016-10-01
In the present study, measurements of indoor radon, thoron and their progeny concentrations have been carried out in the Rajpur region of Uttarakhand, Himalaya, India, by using LR-115 solid-state nuclear track detector-based time-integrated techniques. The gas concentrations have been measured by the single-entry pin-hole dosemeter technique, while for the progeny concentrations the deposition-based Direct Thoron and Radon Progeny Sensor technique has been used. The radiation doses due to the inhalation of radon, thoron and their progeny have also been determined from the measured concentrations in the study area. The average radon concentration varies from 75 to 123 Bq m^-3, with an overall average of 89 Bq m^-3. The average thoron concentration varies from 29 to 55 Bq m^-3, with an overall average of 38 Bq m^-3. The total annual effective dose received due to radon, thoron and their progeny varies from 2.4 to 4.1 mSv y^-1, with an average of 2.9 mSv y^-1. The average equilibrium factor for radon and its progeny was found to be 0.39, while for thoron and its progeny it was 0.06.
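A short worked example of the inhalation dose calculation typically used with such survey data; the 7000 h annual indoor occupancy and the UNSCEAR-style dose conversion factors are assumed standard values, which the paper may apply with slightly different constants.

```python
OCCUPANCY_H = 7000          # assumed hours spent indoors per year (0.8 occupancy factor)
DCF_RADON = 9e-6            # assumed mSv per (Bq h m^-3) of radon progeny exposure
DCF_THORON = 40e-6          # assumed mSv per (Bq h m^-3) of thoron progeny exposure

def annual_effective_dose(c_radon, f_radon, c_thoron, f_thoron):
    """Annual inhalation dose (mSv/y) from mean gas concentrations (Bq m^-3) and
    equilibrium factors, via progeny equilibrium-equivalent concentrations."""
    dose_radon = c_radon * f_radon * OCCUPANCY_H * DCF_RADON
    dose_thoron = c_thoron * f_thoron * OCCUPANCY_H * DCF_THORON
    return dose_radon + dose_thoron

# Using the reported averages (89 and 38 Bq m^-3, F = 0.39 and 0.06)
print(f"{annual_effective_dose(89, 0.39, 38, 0.06):.1f} mSv/y")  # ~2.8, close to the reported 2.9
```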
NASA Astrophysics Data System (ADS)
Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann
2009-02-01
Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
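A minimal sketch of the weighted Beer-Lambert scaling step described above: a zero-absorption time-resolved reflectance curve is rescaled using the fractions of time a photon is assumed to spend in each layer. The layer fractions, absorption coefficients, refractive index and placeholder R0(t) are illustrative, not values from the study.

```python
import numpy as np

C_VACUUM = 3e10          # speed of light in vacuum (cm/s)
N_TISSUE = 1.4           # assumed tissue refractive index

def scale_reflectance(t_s, r0, layer_fractions, mu_a_per_cm):
    """R(t) = R0(t) * exp(-v * t * sum_i f_i * mu_a_i)."""
    v = C_VACUUM / N_TISSUE
    mu_eff = np.dot(layer_fractions, mu_a_per_cm)   # path-weighted absorption
    return r0 * np.exp(-mu_eff * v * np.asarray(t_s))

# Example: a decaying zero-absorption curve scaled for a two-layer model
t = np.linspace(0, 2e-9, 200)                        # 0-2 ns
r0 = np.exp(-t / 5e-10)                              # placeholder zero-absorption R0(t)
r = scale_reflectance(t, r0, layer_fractions=[0.3, 0.7], mu_a_per_cm=[0.1, 0.3])
print("attenuation factor at the latest time bin:", (r / r0)[-1])
```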
NASA Astrophysics Data System (ADS)
Zhou, Di; Lu, Zhiliang; Guo, Tongqing; Shen, Ennan
2016-06-01
In this paper, two types of unsteady flow problems in turbomachinery, blade flutter and rotor-stator interaction, are investigated by means of numerical simulation. For the former, the energy method is often used to predict aeroelastic stability by calculating the aerodynamic work per vibration cycle. The inter-blade phase angle (IBPA) is an important parameter in the computation and may have significant effects on aeroelastic behavior. For the latter, the numbers of blades in each row are usually not equal and the unsteady rotor-stator interactions can be strong. An effective way to perform multi-row calculations is the domain scaling method (DSM). These two cases share the common feature that, considering their respective characteristics, the computational domain has to be extended to multiple passages (MP). The present work is aimed at modeling these two issues with the developed MP model. Computational fluid dynamics (CFD) techniques are applied to resolve the unsteady Reynolds-averaged Navier-Stokes (RANS) equations and simulate the flow fields. With the parallel technique, the additional time cost due to modeling more passages can be largely decreased. Results are presented for two test cases including a vibrating rotor blade and a turbine stage.
Development of an automated energy audit protocol for office buildings
NASA Astrophysics Data System (ADS)
Deb, Chirag
This study aims to enhance the building energy audit process and to reduce the time and cost required to conduct a full physical audit. For this, a total of 5 Energy Service Companies in Singapore have collaborated and provided energy audit reports for 62 office buildings. Several statistical techniques are adopted to analyse these reports. These techniques comprise cluster analysis and the development of models to predict energy savings for buildings. The cluster analysis shows that there are 3 clusters of buildings experiencing different levels of energy savings. To understand the effect of building variables on the change in energy use intensity (EUI), a robust iterative process for selecting the appropriate variables is developed. The results show that the 4 variables of gross floor area (GFA), non-air-conditioning energy consumption, average chiller plant efficiency and installed capacity of chillers should be taken for clustering. This analysis is extended to the development of prediction models using linear regression and artificial neural networks (ANN). An exhaustive variable selection algorithm is developed to select the input variables for the two energy saving prediction models. The results show that the ANN prediction model can predict the energy saving potential of a given building with an accuracy of +/-14.8%.
NASA Astrophysics Data System (ADS)
Tian, D.; Medina, H.
2017-12-01
Post-processing of medium-range reference evapotranspiration (ETo) forecasts based on numerical weather prediction (NWP) models has the potential to improve the quality and utility of these forecasts. This work compares the performance of several post-processing methods for correcting ETo forecasts over the continental U.S. generated from The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) database using data from Europe (EC), the United Kingdom (MO), and the United States (NCEP). The post-processing techniques considered are: simple bias correction, the use of multimodels, Ensemble Model Output Statistics (EMOS, Gneiting et al., 2005) and Bayesian Model Averaging (BMA, Raftery et al., 2005). ETo estimates based on quality-controlled U.S. Regional Climate Reference Network measurements, computed with the FAO-56 Penman-Monteith equation, are adopted as the baseline. EMOS and BMA are generally the most efficient post-processing techniques for the ETo forecasts. Nevertheless, a simple bias correction of the best model is often much more rewarding than using raw multimodel forecasts. Our results demonstrate the potential of different forecasting and post-processing frameworks in operational evapotranspiration and irrigation advisory systems at the national scale.
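As a minimal illustration of the simplest strategies named above, the sketch below applies an additive bias correction to a single model and an equal-weight multimodel mean, then compares RMSE on synthetic data; EMOS and BMA, which fit predictive-distribution parameters and model weights, are not reproduced here.

```python
import numpy as np

def bias_correct(forecast, obs_train, fc_train):
    """Subtract the mean training-period bias from new forecasts."""
    return np.asarray(forecast) - (np.mean(fc_train) - np.mean(obs_train))

def multimodel_mean(*forecasts):
    """Equal-weight multimodel average."""
    return np.mean(np.vstack(forecasts), axis=0)

# Synthetic example: two imperfect models against "observed" ETo (mm/day)
rng = np.random.default_rng(6)
obs = 4 + rng.normal(0, 0.5, 200)
model_a = obs + 0.8 + rng.normal(0, 0.4, 200)     # warm-biased model
model_b = obs - 0.3 + rng.normal(0, 0.6, 200)

corrected = bias_correct(model_a[100:], obs[:100], model_a[:100])
blend = multimodel_mean(model_a[100:], model_b[100:])
rmse = lambda f: np.sqrt(np.mean((f - obs[100:]) ** 2))
print(f"raw={rmse(model_a[100:]):.2f}  bias-corrected={rmse(corrected):.2f}  multimodel={rmse(blend):.2f}")
```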