NASA Technical Reports Server (NTRS)
Ragan, R. M.; Jackson, T. J.; Fitch, W. N.; Shubinski, R. P.
1976-01-01
Models designed to support the hydrologic studies associated with urban water resources planning require input parameters that are defined in terms of land cover. Estimating the land cover is a difficult and expensive task when drainage areas larger than a few sq. km are involved. Conventional and LANDSAT-based methods for estimating the land-cover-based input parameters required by hydrologic planning models were compared in a case study of the 50.5 sq. km (19.5 sq. mi) Four Mile Run Watershed in Virginia. Results of the study indicate that the LANDSAT-based approach is highly cost effective for planning model studies. The conventional approach to defining inputs, based on 1:3600 aerial photos, required 110 man-days and a total cost of $14,000. The LANDSAT-based approach required 6.9 man-days and cost $2,350. The conventional and LANDSAT-based models gave similar results for discharges and for the estimated annual damages expected under the no-flood-control, channelization, and detention-storage alternatives.
Comparison between a model-based and a conventional pyramid sensor reconstructor.
Korkiakoski, Visa; Vérinaud, Christophe; Le Louarn, Miska; Conan, Rodolphe
2007-08-20
A model of a non-modulated pyramid wavefront sensor (P-WFS) based on Fourier optics is presented. Linearizations of the model, represented as Jacobian matrices, are used to improve the P-WFS phase estimates. Simulations show that a linear approximation of the P-WFS is sufficient in closed-loop adaptive optics. A method to compute model-based synthetic P-WFS command matrices is also presented, and its performance is compared with the conventional calibration. In poor visibility, the new calibration was observed to outperform the conventional one.
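As a rough illustration of the linearized reconstruction step described above, the sketch below builds a synthetic Jacobian (interaction) matrix and inverts it with a regularized pseudo-inverse to recover modal phase coefficients from sensor signals. All dimensions, noise levels and the random Jacobian are placeholders, not the Fourier-optics P-WFS model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper): 40 modal coefficients, 200 sensor signals.
n_modes, n_meas = 40, 200

# Hypothetical Jacobian of the P-WFS response about a flat wavefront,
# e.g. obtained by perturbing each mode in a Fourier-optics sensor model.
J = rng.standard_normal((n_meas, n_modes))

# "True" modal phase coefficients and the (noisy) linearized sensor signal.
a_true = 0.1 * rng.standard_normal(n_modes)
s = J @ a_true + 1e-3 * rng.standard_normal(n_meas)

# Command/reconstruction matrix: regularized pseudo-inverse of the Jacobian.
R = np.linalg.pinv(J, rcond=1e-3)
a_hat = R @ s

print("rms phase estimation error:", np.sqrt(np.mean((a_hat - a_true) ** 2)))
```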
Baad-Hansen, Thomas; Kold, Søren; Kaptein, Bart L; Søballe, Kjeld
2007-08-01
In RSA, tantalum markers attached to metal-backed acetabular cups are often difficult to detect on stereo radiographs due to the high density of the metal shell. This results in occlusion of the prosthesis markers and may lead to inconclusive migration results. Within the last few years, new software systems have been developed to solve this problem. We compared the precision of 3 RSA systems in migration analysis of the acetabular component. A hemispherical and a non-hemispherical acetabular component were mounted in a phantom. Both acetabular components underwent migration analyses with 3 different RSA systems: conventional RSA using tantalum markers, an RSA system using a hemispherical cup algorithm, and a novel model-based RSA system. We found narrow confidence intervals, indicating high precision of the conventional marker system and model-based RSA with regard to migration and rotation. The confidence intervals of conventional RSA and model-based RSA were narrower than those of the hemispherical cup algorithm-based system regarding cup migration and rotation. The model-based RSA software combines the precision of the conventional RSA software with the convenience of the hemispherical cup algorithm-based system. Based on our findings, we believe that these new tools offer an improvement in the measurement of acetabular component migration.
ERIC Educational Resources Information Center
Srikoon, Sanit; Bunterm, Tassanee; Nethanomsak, Teerachai; Ngang, Tang Keow
2017-01-01
Purpose: The attention, working memory, and mood of learners are the most important abilities in the learning process. This study was concerned with the comparison of contextualized attention, working memory, and mood through a neurocognitive-based model (5P) and a conventional model (5E). It sought to examine the significant change in attention,…
NASA Astrophysics Data System (ADS)
Darma, I. K.
2018-01-01
This research is aimed at determining: 1) the differences in mathematical problem-solving ability between students facilitated with the problem-based learning model and the conventional learning model, 2) the differences in mathematical problem-solving ability between students facilitated with the authentic and the conventional assessment model, and 3) the interaction effect between the learning model and the assessment model on mathematical problem solving. The research was conducted in Bali State Polytechnic, using a 2x2 factorial experimental design. The sample comprised 110 students. The data were collected using a theoretically and empirically validated test. Instruments were validated using Aiken's content-validity technique and item analysis, and the data were then analyzed using analysis of variance (ANOVA). The result of the analysis shows that the students facilitated with the problem-based learning and authentic assessment models obtained the highest average scores compared with the other students, both in concept understanding and in mathematical problem solving. The hypothesis tests show that, significantly: 1) there is a difference in mathematical problem-solving ability between students facilitated with the problem-based learning model and the conventional learning model, 2) there is a difference in mathematical problem-solving ability between students facilitated with the authentic assessment model and the conventional assessment model, and 3) there is an interaction effect between the learning model and the assessment model on mathematical problem solving. In order to improve the effectiveness of mathematics learning, combining the problem-based learning model with the authentic assessment model can be considered as one of the learning approaches in class.
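The 2x2 factorial analysis mentioned above can be reproduced in outline as follows: the sketch runs a two-way ANOVA with interaction on synthetic scores. Group sizes and effect sizes are invented, and the statsmodels-based workflow is only one possible way to carry out such an analysis, not the study's own code.

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)

# Synthetic scores for a 2x2 design: learning model x assessment model.
# Group sizes and effect sizes are illustrative, not the study's data.
rows = []
for learning in ["problem_based", "conventional"]:
    for assessment in ["authentic", "conventional"]:
        mean = 60 + 8 * (learning == "problem_based") + 5 * (assessment == "authentic")
        for score in rng.normal(mean, 10, size=27):
            rows.append({"learning": learning, "assessment": assessment, "score": score})
df = pd.DataFrame(rows)

# Two-way ANOVA with interaction, as in a 2x2 factorial experiment.
model = ols("score ~ C(learning) * C(assessment)", data=df).fit()
print(anova_lm(model, typ=2))
```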
Novais, J L; Titchener-Hooker, N J; Hoare, M
2001-10-20
Time to market, cost effectiveness, and flexibility are key issues in today's biopharmaceutical market. Bioprocessing plants based on fully disposable, presterilized, and prevalidated components appear to be an attractive alternative to conventional stainless steel plants, potentially allowing for shorter implementation times, smaller initial investments, and increased flexibility. To evaluate the economic case for such an alternative, it was necessary to develop an appropriate costing model that allows an economic comparison between conventional and disposables-based engineering to be made. The production of an antibody fragment from an E. coli fermentation was used as a case study for both routes. The conventional bioprocessing option was costed using available models, which were then modified to account for the intrinsic differences of a disposables-based option. The outcome of the analysis indicates that the capital investment required for a disposables-based option is substantially reduced, at less than 60% of that for a conventional option. The disposables-based running costs were evaluated as being 70% higher than those of the conventional equivalent. Despite this higher value, the net present value (NPV) of the disposables-based plant is positive and within 25% of that for the conventional plant. Sensitivity analysis performed on key variables indicated the robustness of the economic analysis presented. In particular, a 9-month reduction in time to market arising from the adoption of a disposables-based approach results in an NPV identical to that of the conventional option. Finally, the effect of any possible loss in yield resulting from the use of disposables was also examined. This had only a limited impact on the NPV: for example, a 50% lower yield in the disposable chromatography step results in a 10% reduction of the disposables-based NPV. The results provide the necessary framework for the economic comparison of disposables-based and conventional bioprocessing technologies.
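As a schematic of the kind of NPV comparison described above, the sketch below discounts yearly cash flows for a conventional and a disposables-based plant. The capital, running-cost and revenue figures are placeholders chosen only to mirror the reported relative differences, not the paper's case-study data.

```python
import numpy as np

def npv(cash_flows, rate):
    """Net present value of yearly cash flows (year 0 first)."""
    years = np.arange(len(cash_flows))
    return float(np.sum(np.asarray(cash_flows) / (1.0 + rate) ** years))

discount_rate = 0.10          # hypothetical cost of capital
horizon_years = 10

# Illustrative numbers only (millions of dollars), not the paper's case-study values:
# disposables cut the capital investment but raise the running costs.
capital_conventional, capital_disposable = 30.0, 17.0      # ~60% of conventional
running_conventional, running_disposable = 4.0, 6.8        # ~70% higher
annual_revenue = 14.0

conv_flows = [-capital_conventional] + [annual_revenue - running_conventional] * horizon_years
disp_flows = [-capital_disposable] + [annual_revenue - running_disposable] * horizon_years

print("NPV conventional :", round(npv(conv_flows, discount_rate), 1))
print("NPV disposables  :", round(npv(disp_flows, discount_rate), 1))
```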
Research on ionospheric tomography based on variable pixel height
NASA Astrophysics Data System (ADS)
Zheng, Dunyong; Li, Peiqing; He, Jie; Hu, Wusheng; Li, Chaokui
2016-05-01
A novel ionospheric tomography technique based on variable pixel height was developed for the tomographic reconstruction of the ionospheric electron density distribution. The method considers the height of each pixel as an unknown variable, which is retrieved during the inversion process together with the electron density values. In contrast to conventional computerized ionospheric tomography (CIT), which parameterizes the model with a fixed pixel height, the variable-pixel-height computerized ionospheric tomography (VHCIT) model applies a disturbance to the height of each pixel. In comparison with conventional CIT models, the VHCIT technique achieved superior results in a numerical simulation. A careful validation of the reliability and superiority of VHCIT was performed. According to the statistical analysis of the average root mean square errors, the proposed model offers an improvement of 15% over conventional CIT models.
NASA Astrophysics Data System (ADS)
Arumugam, S.; Ramakrishna, P.; Sangavi, S.
2018-02-01
Improvements in solar-energy heating technology are gaining attention, especially solar parabolic collectors. Solar heating in conventional parabolic collectors is achieved by concentrating radiation on receiver tubes. Conventional receiver tubes are open to the atmosphere and lose heat to ambient air currents. In order to reduce the convection losses and also to improve the aperture area, we designed a tube with a cavity. This study compares the performance of the conventional tube and the cavity-model tube. The performance formulae for the cavity model were derived from those of the conventional model. A reduction in the overall heat loss coefficient was observed for the cavity model, although the collector heat removal factor and collector efficiency were nearly the same for both models. An improvement in efficiency was also observed in the cavity model's performance. The approach of using a cavity-model tube as the receiver tube in solar parabolic collectors gave improved results and proved to be a promising design choice.
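To make the role of the overall heat loss coefficient concrete, the following sketch evaluates a Hottel-Whillier-type useful heat gain for two assumed loss coefficients. The operating point and coefficient values are illustrative assumptions, not the formulae derived in the study.

```python
def useful_heat_gain(F_R, area_m2, absorbed_flux_W_m2, U_L, T_in_C, T_amb_C):
    """Hottel-Whillier-type useful gain: Qu = F_R * A * (S - U_L * (T_in - T_amb))."""
    return F_R * area_m2 * (absorbed_flux_W_m2 - U_L * (T_in_C - T_amb_C))

# Illustrative operating point (not from the paper).
F_R, area, irradiance, optical_eff = 0.85, 10.0, 800.0, 0.75
absorbed = optical_eff * irradiance
T_in, T_amb = 120.0, 30.0

for label, U_L in [("conventional open tube", 8.0), ("cavity receiver tube", 5.5)]:
    Qu = useful_heat_gain(F_R, area, absorbed, U_L, T_in, T_amb)
    efficiency = Qu / (area * irradiance)
    print(f"{label}: Qu = {Qu:.0f} W, collector efficiency = {efficiency:.2f}")
```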
Network-Based Community Brings forth Sustainable Society
NASA Astrophysics Data System (ADS)
Kikuchi, Toshiko
It has already been shown that an artificial society based on the three relations of social configuration (market, communal, and obligatory relations), functioning in balance with each other, forms a sustainable society in which social reproduction is possible. In this artificial society model, communal relations exist in a network-based community with alternating members rather than a conventional community with the cooperative mutual assistance practiced in some agricultural communities. In this paper, the significance of a network-based community is considered by comparing network-based communities with alternating members and conventional communities with fixed members. In concrete terms, the differences in the appearance rate of sustainable societies, in economic activity, and in asset inequality between network-based communities and conventional communities are analyzed. The appearance rate of sustainable societies is higher for network-based communities than for conventional communities. Moreover, most network-based communities had a larger total trade volume than conventional communities. However, the Gini coefficient of the conventional community is smaller than that of the network-based community. These results show that communal relations based on a network-based community are significant for social reproduction and economic efficiency, but that in such an artificial society, equality is sacrificed.
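The asset-inequality comparison above rests on the Gini coefficient; a minimal sketch of its computation on two hypothetical asset distributions (standing in for the two community types) is given below.

```python
import numpy as np

def gini(assets):
    """Gini coefficient from mean absolute differences: 0 = perfect equality."""
    x = np.sort(np.asarray(assets, dtype=float))
    n = x.size
    # G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean)
    diffs = np.abs(x[:, None] - x[None, :]).sum()
    return diffs / (2.0 * n * n * x.mean())

rng = np.random.default_rng(2)
# Hypothetical agent asset distributions for the two community types.
network_based = rng.lognormal(mean=1.0, sigma=0.9, size=500)   # more dispersed
conventional  = rng.lognormal(mean=1.0, sigma=0.5, size=500)   # more equal

print("Gini, network-based community:", round(gini(network_based), 3))
print("Gini, conventional community :", round(gini(conventional), 3))
```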
Water-Based Pressure-Sensitive Paints
NASA Technical Reports Server (NTRS)
Jordan, Jeffrey D.; Watkins, A. Neal; Oglesby, Donald M.; Ingram, JoAnne L.
2006-01-01
Water-based pressure-sensitive paints (PSPs) have been invented as alternatives to conventional organic-solvent-based pressure-sensitive paints, which are used primarily for indicating distributions of air pressure on wind-tunnel models. Typically, PSPs are sprayed onto aerodynamic models after they have been mounted in wind tunnels. When conventional organic-solvent-based PSPs are used, this practice creates a problem of removing toxic fumes from inside the wind tunnels. The use of water-based PSPs eliminates this problem. The water-based PSPs offer high performance as pressure indicators, plus all the advantages of common water-based paints (low toxicity, low concentrations of volatile organic compounds, and easy cleanup by use of water).
Long-Boyle, Janel R; Savic, Rada; Yan, Shirley; Bartelink, Imke; Musick, Lisa; French, Deborah; Law, Jason; Horn, Biljana; Cowan, Morton J; Dvorak, Christopher C
2015-04-01
Population pharmacokinetic (PK) studies of busulfan in children have shown that individualized model-based algorithms provide improved targeted busulfan therapy when compared with conventional dose guidelines. The adoption of population PK models into routine clinical practice has been hampered by the tendency of pharmacologists to develop complex models too impractical for clinicians to use. The authors aimed to develop a population PK model for busulfan in children that can reliably achieve therapeutic exposure (concentration at steady state) and implement a simple model-based tool for the initial dosing of busulfan in children undergoing hematopoietic cell transplantation. Model development was conducted using retrospective data available in 90 pediatric and young adult patients who had undergone hematopoietic cell transplantation with busulfan conditioning. Busulfan drug levels and potential covariates influencing drug exposure were analyzed using the nonlinear mixed effects modeling software, NONMEM. The final population PK model was implemented into a clinician-friendly Microsoft Excel-based tool and used to recommend initial doses of busulfan in a group of 21 pediatric patients prospectively dosed based on the population PK model. Modeling of busulfan time-concentration data indicates that busulfan clearance displays nonlinearity in children, decreasing up to approximately 20% between the concentrations of 250-2000 ng/mL. Important patient-specific covariates found to significantly impact busulfan clearance were actual body weight and age. The percentage of individuals achieving a therapeutic concentration at steady state was significantly higher in subjects receiving initial doses based on the population PK model (81%) than in historical controls dosed on conventional guidelines (52%) (P = 0.02). When compared with the conventional dosing guidelines, the model-based algorithm demonstrates significant improvement for providing targeted busulfan therapy in children and young adults.
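A heavily simplified sketch of the model-based initial-dosing idea is shown below: a toy clearance model with weight and age covariates converts a target Css into a dose per interval. The parameter values and functional form are placeholders and do not reproduce the published busulfan population PK model; the sketch is illustrative only and not for clinical use.

```python
def predicted_clearance(weight_kg, age_yr, cl_per_kg=0.20, maturation_age=2.0):
    """Toy allometric/maturation clearance model (L/h). Placeholder parameters,
    NOT the published busulfan population PK model."""
    maturation = age_yr / (age_yr + maturation_age)    # simple saturating age effect
    return cl_per_kg * weight_kg * (0.6 + 0.4 * maturation)

def initial_dose_mg(target_css_ng_ml, weight_kg, age_yr, interval_h=6.0):
    """Dose per interval so that the average concentration ~= target Css:
       dose = Css * CL * tau (unit note: ng/mL == ug/L)."""
    cl_l_h = predicted_clearance(weight_kg, age_yr)
    dose_ug = target_css_ng_ml * cl_l_h * interval_h
    return dose_ug / 1000.0

# Example: hypothetical 15 kg, 3-year-old patient, target Css of 900 ng/mL.
print(round(initial_dose_mg(900.0, weight_kg=15.0, age_yr=3.0), 1), "mg every 6 h")
```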
Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling
2014-01-01
Objective: In this study, a novel grey self-memory coupling model was developed to forecast the incidence rates of two notifiable infectious diseases (dysentery and gonorrhea); the effectiveness and applicability of this model was assessed based on its ability to predict the epidemiological trend of infectious diseases in China. Methods: The linear model, the conventional GM(1,1) model and the GM(1,1) model with self-memory principle (SMGM(1,1) model) were used to predict the incidence rates of the two notifiable infectious diseases based on statistical incidence data. Both simulation accuracy and prediction accuracy were assessed to compare the predictive performances of the three models. The best-fit model was applied to predict future incidence rates. Results: Simulation results show that the SMGM(1,1) model can take full advantage of the systematic multi-time historical data and possesses superior predictive performance compared with the linear model and the conventional GM(1,1) model. By applying the novel SMGM(1,1) model, we obtained the possible incidence rates of the two representative notifiable infectious diseases in China. Conclusion: The disadvantages of the conventional grey prediction model, such as sensitivity to initial value, can be overcome by the self-memory principle. The novel grey self-memory coupling model can predict the incidence rates of infectious diseases more accurately than the conventional model, and may provide useful references for making decisions involving infectious disease prevention and control. PMID:25546054
Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling
2014-01-01
In this study, a novel grey self-memory coupling model was developed to forecast the incidence rates of two notifiable infectious diseases (dysentery and gonorrhea); the effectiveness and applicability of this model was assessed based on its ability to predict the epidemiological trend of infectious diseases in China. The linear model, the conventional GM(1,1) model and the GM(1,1) model with self-memory principle (SMGM(1,1) model) were used to predict the incidence rates of the two notifiable infectious diseases based on statistical incidence data. Both simulation accuracy and prediction accuracy were assessed to compare the predictive performances of the three models. The best-fit model was applied to predict future incidence rates. Simulation results show that the SMGM(1,1) model can take full advantage of the systematic multi-time historical data and possesses superior predictive performance compared with the linear model and the conventional GM(1,1) model. By applying the novel SMGM(1,1) model, we obtained the possible incidence rates of the two representative notifiable infectious diseases in China. The disadvantages of the conventional grey prediction model, such as sensitivity to initial value, can be overcome by the self-memory principle. The novel grey self-memory coupling model can predict the incidence rates of infectious diseases more accurately than the conventional model, and may provide useful references for making decisions involving infectious disease prevention and control.
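For reference, a minimal implementation of the conventional GM(1,1) model that the self-memory coupling builds on is sketched below; the incidence series is invented, and the self-memory step itself is not reproduced.

```python
import numpy as np

def gm11_fit_predict(x0, n_forecast=3):
    """Conventional GM(1,1) grey model: fit on series x0, return fitted values
    plus n_forecast future values. (The self-memory coupling of the paper is
    not included here.)"""
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    x1 = np.cumsum(x0)                              # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                   # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    Y = x0[1:]
    (a, b), *_ = np.linalg.lstsq(B, Y, rcond=None)  # development coefficient a, grey input b
    k = np.arange(n + n_forecast)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])   # inverse AGO
    return x0_hat[:n], x0_hat[n:]

# Illustrative incidence-rate series (per 100 000), not the paper's data.
rates = [13.2, 12.5, 11.9, 11.1, 10.6, 9.8, 9.3]
fitted, forecast = gm11_fit_predict(rates, n_forecast=3)
print("fitted  :", np.round(fitted, 2))
print("forecast:", np.round(forecast, 2))
```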
Nakarmi, Ukash; Wang, Yanhua; Lyu, Jingyuan; Liang, Dong; Ying, Leslie
2017-11-01
While many low rank and sparsity-based approaches have been developed for accelerated dynamic magnetic resonance imaging (dMRI), they all use low rankness or sparsity in input space, overlooking the intrinsic nonlinear correlation in most dMRI data. In this paper, we propose a kernel-based framework to allow nonlinear manifold models in reconstruction from sub-Nyquist data. Within this framework, many existing algorithms can be extended to kernel framework with nonlinear models. In particular, we have developed a novel algorithm with a kernel-based low-rank model generalizing the conventional low rank formulation. The algorithm consists of manifold learning using kernel, low rank enforcement in feature space, and preimaging with data consistency. Extensive simulation and experiment results show that the proposed method surpasses the conventional low-rank-modeled approaches for dMRI.
Pérard, Marion; Mittring, Nadine; Schweiger, David; Kummer, Christopher; Witt, Claudia M
2015-06-09
Today, the increasing demand for complementary medicine encourages health care providers to adapt and create integrative medicine departments or services within clinics. However, because of their differing philosophies, historical development, and settings, merging the partners (conventional and complementary medicine) is often difficult. It is necessary to understand the similarities and differences in both cultures to support a successful and sustainable integration. The aim of this project was to develop a theoretical model and practical steps that are based on theories from mergers in business to facilitate the implementation of an integrative medicine department. Based on a literature search and expert discussions, the cultures were described and model domains were developed. These were applied to two case studies to develop the final model. Furthermore, a checklist with practical steps was devised. Conventional medicine and complementary medicine have developed different corporate cultures. The final model, which should help to foster integration by bridging between these cultures, is based on four overall aspects: culture, strategy, organizational tools and outcomes. Each culture is represented by three dimensions in the model: corporate philosophy (core and identity of the medicine and the clinic), patient (all characteristics of the professional team's contact with the patient), and professional team (the characteristics of the interactions within the professional team). Overall, corporate culture differs between conventional and complementary medicine; when planning the implementation of an integrative medicine department, the developed model and the checklist can support better integration.
Marschollek, Michael; Rehwald, Anja; Wolf, Klaus-Hendrik; Gietzelt, Matthias; Nemitz, Gerhard; zu Schwabedissen, Hubertus Meyer; Schulze, Mareike
2011-06-28
Fall events contribute significantly to mortality, morbidity and costs in our ageing population. In order to identify persons at risk and to target preventive measures, many scores and assessment tools have been developed. These often require expertise and are costly to implement. Recent research investigates the use of wearable inertial sensors to provide objective data on motion features which can be used to assess individual fall risk automatically. So far it is unknown how well this new method performs in comparison with conventional fall risk assessment tools. The aim of our research is to compare the predictive performance of our new sensor-based method with conventional and established methods, based on prospective data. In a first study phase, 119 inpatients of a geriatric clinic took part in motion measurements using a wireless triaxial accelerometer during a Timed Up&Go (TUG) test and a 20 m walk. Furthermore, the St. Thomas Risk Assessment Tool in Falling Elderly Inpatients (STRATIFY) was performed, and the multidisciplinary geriatric care team estimated the patients' fall risk. In a second follow-up phase of the study, 46 of the participants were interviewed after one year, including a fall and activity assessment. The predictive performances of the TUG, the STRATIFY and team scores are compared. Furthermore, two automatically induced logistic regression models based on conventional clinical and assessment data (CONV) as well as sensor data (SENSOR) are matched. Among the risk assessment scores, the geriatric team score (sensitivity 56%, specificity 80%) outperforms STRATIFY and TUG. The induced logistic regression models CONV and SENSOR achieve similar performance values (sensitivity 68%/58%, specificity 74%/78%, AUC 0.74/0.72, +LR 2.64/2.61). Both models are able to identify more persons at risk than the simple scores. Sensor-based objective measurements of motion parameters in geriatric patients can be used to assess individual fall risk, and our prediction model's performance matches that of a model based on conventional clinical and assessment data. Sensor-based measurements using a small wearable device may contribute significant information to conventional methods and are feasible in an unsupervised setting. More prospective research is needed to assess the cost-benefit relation of our approach.
2011-01-01
Background: Fall events contribute significantly to mortality, morbidity and costs in our ageing population. In order to identify persons at risk and to target preventive measures, many scores and assessment tools have been developed. These often require expertise and are costly to implement. Recent research investigates the use of wearable inertial sensors to provide objective data on motion features which can be used to assess individual fall risk automatically. So far it is unknown how well this new method performs in comparison with conventional fall risk assessment tools. The aim of our research is to compare the predictive performance of our new sensor-based method with conventional and established methods, based on prospective data. Methods: In a first study phase, 119 inpatients of a geriatric clinic took part in motion measurements using a wireless triaxial accelerometer during a Timed Up&Go (TUG) test and a 20 m walk. Furthermore, the St. Thomas Risk Assessment Tool in Falling Elderly Inpatients (STRATIFY) was performed, and the multidisciplinary geriatric care team estimated the patients' fall risk. In a second follow-up phase of the study, 46 of the participants were interviewed after one year, including a fall and activity assessment. The predictive performances of the TUG, the STRATIFY and team scores are compared. Furthermore, two automatically induced logistic regression models based on conventional clinical and assessment data (CONV) as well as sensor data (SENSOR) are matched. Results: Among the risk assessment scores, the geriatric team score (sensitivity 56%, specificity 80%) outperforms STRATIFY and TUG. The induced logistic regression models CONV and SENSOR achieve similar performance values (sensitivity 68%/58%, specificity 74%/78%, AUC 0.74/0.72, +LR 2.64/2.61). Both models are able to identify more persons at risk than the simple scores. Conclusions: Sensor-based objective measurements of motion parameters in geriatric patients can be used to assess individual fall risk, and our prediction model's performance matches that of a model based on conventional clinical and assessment data. Sensor-based measurements using a small wearable device may contribute significant information to conventional methods and are feasible in an unsupervised setting. More prospective research is needed to assess the cost-benefit relation of our approach. PMID:21711504
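A compact sketch of inducing and comparing two logistic regression models (clinical/assessment features versus sensor-derived features) is given below. The data are synthetic stand-ins, the evaluation is in-sample, and the feature definitions are assumptions rather than the study's variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 300

# Synthetic stand-ins: CONV = clinical/assessment scores, SENSOR = accelerometer-derived
# gait features (e.g. TUG duration, step-time variability). Not the study's data.
conv = rng.standard_normal((n, 3))
sensor = rng.standard_normal((n, 4))
risk = 0.9 * conv[:, 0] + 0.8 * sensor[:, 1] + rng.standard_normal(n)
fell = (risk > np.quantile(risk, 0.7)).astype(int)        # ~30% fallers

def evaluate(X, y, label):
    model = LogisticRegression().fit(X, y)
    p = model.predict_proba(X)[:, 1]
    pred = (p >= 0.5).astype(int)
    sens = np.mean(pred[y == 1] == 1)
    spec = np.mean(pred[y == 0] == 0)
    print(f"{label}: sensitivity {sens:.2f}, specificity {spec:.2f}, "
          f"AUC {roc_auc_score(y, p):.2f}")

evaluate(conv, fell, "CONV  ")
evaluate(sensor, fell, "SENSOR")
```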
Doubova, Svetlana V; Ramírez-Sánchez, Claudine; Figueroa-Lara, Alejandro; Pérez-Cuevas, Ricardo
2013-12-01
To estimate the human resource (HR) requirements of two models of care for diabetes patients: the conventional model and a specific model, also called DiabetIMSS, both provided in primary care clinics of the Mexican Institute of Social Security (IMSS). An evaluative study was conducted. An expert group identified the HR activities and the time required to provide healthcare consistent with the best clinical practices for diabetic patients. HR requirements were estimated using the evidence-based adjusted service target approach for health workforce planning; comparisons between existing and estimated HRs were then made. To provide healthcare in accordance with the patients' metabolic control, the conventional model required increasing the number of family doctors (1.2 times), nutritionists (4.2 times) and social workers (4.1 times). The DiabetIMSS model requires a greater increase than the conventional model. Increasing HR is required to provide evidence-based healthcare to diabetes patients.
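The service-target style arithmetic behind such workforce estimates can be illustrated as follows; all panel sizes, visit frequencies and visit durations below are hypothetical placeholders, not the study's figures.

```python
def required_fte(n_patients, visits_per_patient_year, minutes_per_visit,
                 productive_hours_per_fte_year=1600.0):
    """Service-target style estimate of full-time equivalents (FTEs) needed."""
    demand_minutes = n_patients * visits_per_patient_year * minutes_per_visit
    return demand_minutes / (productive_hours_per_fte_year * 60.0)

# Hypothetical clinic panel of diabetes patients (all numbers are placeholders).
patients = 2500
print("family doctors :", round(required_fte(patients, 12, 20), 1))
print("nutritionists  :", round(required_fte(patients, 4, 30), 1))
print("social workers :", round(required_fte(patients, 2, 30), 1))
```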
NASA Astrophysics Data System (ADS)
Angel, Erin
Advances in Computed Tomography (CT) technology have led to an increase in the modality's diagnostic capabilities and therefore its utilization, which has in turn led to an increase in radiation exposure to the patient population. As a result, CT imaging currently constitutes approximately half of the collective exposure to ionizing radiation from medical procedures. In order to understand the radiation risk, it is necessary to estimate the radiation doses absorbed by patients undergoing CT imaging. The most widely accepted risk models are based on radiosensitive organ dose as opposed to whole-body dose. In this research, radiosensitive organ dose was estimated using Monte Carlo based simulations incorporating detailed multidetector CT (MDCT) scanner models and specific scan protocols, and using patient models based on accurate patient anatomy and representing a range of patient sizes. Organ doses were estimated for clinical MDCT exam protocols that pose a specific concern for radiosensitive organs or regions. These estimates include fetal dose for pregnant patients undergoing abdomen/pelvis CT exams or exams to diagnose pulmonary embolism and venous thromboembolism. Breast and lung dose were estimated for patients undergoing coronary CTA imaging, conventional fixed tube current chest CT, and conventional tube current modulated (TCM) chest CT exams. The correlation of organ dose with patient size was quantified for pregnant patients undergoing abdomen/pelvis exams and for all breast and lung dose estimates presented. Novel dose reduction techniques were developed that incorporate organ location and are specifically designed to reduce dose to radiosensitive organs during CT acquisition. A generalizable model was created for simulating conventional and novel attenuation-based TCM algorithms, which can be used in simulations estimating organ dose for any patient model. The generalizable model is a significant contribution of this work, as it lays the foundation for future simulation of TCM using Monte Carlo methods. As a result of this research, organ dose can be estimated for individual patients undergoing specific conventional MDCT exams. This research also brings understanding to conventional and novel dose reduction techniques in CT and their effect on organ dose.
NASA Astrophysics Data System (ADS)
Xu, Zhicheng; Yuan, Bo; Zhang, Fuqiang
2018-06-01
In this paper, a power supply optimization model is proposed. The model takes minimum fossil energy consumption as the objective, considering the output characteristics of the conventional power supply and the renewable power supply. The optimal wind-solar capacity ratio in the power supply is calculated under various constraints, and the interrelation between conventional power sources and renewable energy is analyzed for a system with a high proportion of integrated renewable energy. Using the model, scientific guidance can be provided for the coordinated and orderly development of renewable energy and conventional power sources.
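A toy version of this optimization can be posed as a small linear program: choose wind and solar capacities (within a hypothetical total-capacity budget) and hourly conventional generation so that demand is met while total fossil energy is minimized. The demand profile, capacity factors and limits below are invented for illustration and are not the paper's model.

```python
import numpy as np
from scipy.optimize import linprog

# Representative hourly demand (MW) and wind/solar capacity factors -- illustrative only.
demand = np.array([900, 850, 800, 950, 1100, 1200, 1150, 1000], dtype=float)
cf_wind = np.array([0.45, 0.50, 0.40, 0.35, 0.30, 0.25, 0.35, 0.40])
cf_solar = np.array([0.00, 0.00, 0.10, 0.40, 0.60, 0.55, 0.30, 0.05])
T = demand.size

# Variables x = [wind_capacity, solar_capacity, fossil_gen_1..T]; minimize total fossil energy.
c = np.concatenate([[0.0, 0.0], np.ones(T)])

# Meet demand each hour (wind*cf_w + solar*cf_s + fossil_t >= demand_t, written as <=),
# plus a hypothetical budget on total renewable capacity so the wind-solar split matters.
A_ub = np.zeros((T + 1, 2 + T))
A_ub[:T, 0] = -cf_wind
A_ub[:T, 1] = -cf_solar
A_ub[:T, 2:] = -np.eye(T)
A_ub[T, 0] = 1.0
A_ub[T, 1] = 1.0
b_ub = np.concatenate([-demand, [1500.0]])

bounds = [(0, None), (0, None)] + [(0, 1300)] * T   # hypothetical conventional capacity limit

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
wind_cap, solar_cap = res.x[:2]
print(f"wind {wind_cap:.0f} MW, solar {solar_cap:.0f} MW, "
      f"fossil energy {res.x[2:].sum():.0f} MWh over the sample hours")
```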
Model for large magnetoresistance effect in p–n junctions
NASA Astrophysics Data System (ADS)
Cao, Yang; Yang, Dezheng; Si, Mingsu; Shi, Huigang; Xue, Desheng
2018-06-01
We present a simple model based on the classic Shockley model to explain magnetotransport in nonmagnetic p–n junctions. Under a magnetic field, the redistribution of carriers that compensates the Lorentz force establishes the necessary space-charge region distribution. The calculated current–voltage (I–V) characteristics under various magnetic fields demonstrate that a conventional nonmagnetic p–n junction can exhibit an extremely large magnetoresistance effect, even larger than that in magnetic materials. Because the large magnetoresistance effect that we discuss is based on a conventional p–n junction device, our model provides new insight into the development of semiconductor magnetoelectronics.
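As a purely illustrative companion to the Shockley-based picture, the sketch below evaluates the classic Shockley diode law with an assumed geometric-magnetoresistance-style suppression factor and reports the resulting magnetoresistance at fixed bias. The suppression factor is a stand-in assumption for illustration, not the space-charge mechanism developed in the paper.

```python
import numpy as np

kT_over_q = 0.02585          # thermal voltage at room temperature (V)

def diode_current(V, B, I_s=1e-12, n=1.0, mobility=0.3):
    """Shockley diode law with a purely illustrative geometric-magnetoresistance
    factor 1/(1 + (mu*B)^2) suppressing the current under a transverse field B.
    This stand-in factor is an assumption, not the model of the paper."""
    suppression = 1.0 / (1.0 + (mobility * B) ** 2)
    return suppression * I_s * (np.exp(V / (n * kT_over_q)) - 1.0)

V_bias = 0.5                                  # fixed forward bias (V)
I0 = diode_current(V_bias, B=0.0)
for B in [0.5, 1.0, 2.0]:                     # magnetic field in tesla
    I_B = diode_current(V_bias, B)
    mr = (V_bias / I_B - V_bias / I0) / (V_bias / I0)   # MR from the change in resistance
    print(f"B = {B:.1f} T: MR = {100 * mr:.0f}%")
```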
An adaptive signal-processing approach to online adaptive tutoring.
Bergeron, Bryan; Cline, Andrew
2011-01-01
Conventional intelligent or adaptive tutoring online systems rely on domain-specific models of learner behavior based on rules, deep domain knowledge, and other resource-intensive methods. We have developed and studied a domain-independent methodology of adaptive tutoring based on domain-independent signal-processing approaches that obviate the need for the construction of explicit expert and student models. A key advantage of our method over conventional approaches is a lower barrier to entry for educators who want to develop adaptive online learning materials.
Hydrograph synthesis using LANDSAT remote sensing and the SCS models
NASA Technical Reports Server (NTRS)
Ragan, R. M.; Jackson, T. J.
1976-01-01
The land cover requirements of the Soil Conservation Service (SCS) Model used for hydrograph synthesis in urban areas were modified to be LANDSAT compatible. The Curve Numbers obtained with these alternate land cover categories compare well with those obtained in published example problems using the conventional categories. Emergency spillway hydrographs and synthetic flood frequency flows computed for a 21.1 sq. mi. test area showed excellent agreement between the conventional aerial photo-based and the Landsat-based SCS approaches.
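The Curve Numbers mentioned above feed the standard SCS runoff relation; a minimal sketch of that relation, comparing two hypothetical curve numbers as might come from aerial-photo versus Landsat land-cover classifications, is given below. The CN values and storm depth are placeholders, not the report's data.

```python
def scs_runoff_depth(rainfall_in, curve_number):
    """SCS curve-number method: S = 1000/CN - 10, Q = (P - 0.2*S)^2 / (P + 0.8*S),
    with rainfall P and runoff Q in inches."""
    S = 1000.0 / curve_number - 10.0          # potential maximum retention
    Ia = 0.2 * S                              # initial abstraction
    if rainfall_in <= Ia:
        return 0.0
    return (rainfall_in - Ia) ** 2 / (rainfall_in + 0.8 * S)

# Illustrative comparison of curve numbers from two land-cover sources (placeholders).
storm_in = 3.0
for source, cn in [("aerial-photo CN", 78), ("Landsat CN", 80)]:
    print(f"{source:16s}: runoff = {scs_runoff_depth(storm_in, cn):.2f} in")
```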
NASA Astrophysics Data System (ADS)
Yang, Lei; Yan, Hongyong; Liu, Hong
2017-03-01
The implicit staggered-grid finite-difference (ISFD) scheme is competitive for its great accuracy and stability, whereas its coefficients are conventionally determined by the Taylor-series expansion (TE) method, leading to a loss in numerical precision. In this paper, we modify the TE method using the minimax approximation (MA), and propose a new optimal ISFD scheme based on the modified TE (MTE) with MA method. The new ISFD scheme takes advantage of the TE method, which guarantees great accuracy at small wavenumbers, and retains the property of the MA method that the numerical errors are kept within a limited bound. Thus, it leads to great accuracy for numerical solution of the wave equations. We derive the optimal ISFD coefficients by applying the new method to the construction of the objective function and using a Remez algorithm to minimize its maximum. Numerical analysis in comparison with the conventional TE-based ISFD scheme indicates that the MTE-based ISFD scheme with appropriate parameters can widen the wavenumber range with high accuracy and achieve greater precision than the conventional ISFD scheme. The numerical modeling results also demonstrate that the MTE-based ISFD scheme performs well in elastic wave simulation and is more efficient than the conventional ISFD scheme for elastic modeling.
Control algorithms and applications of the wavefront sensorless adaptive optics
NASA Astrophysics Data System (ADS)
Ma, Liang; Wang, Bin; Zhou, Yuanshen; Yang, Huizhen
2017-10-01
Compared with the conventional adaptive optics (AO) system, the wavefront sensorless (WFSless) AO system does not need to measure and reconstruct the wavefront. It is simpler than conventional AO in system architecture and can be applied under complex conditions. Based on an analysis of the principle and system model of the WFSless AO system, wavefront correction methods for the WFSless AO system are divided into two categories: model-free and model-based control algorithms. A WFSless AO system based on model-free control algorithms typically treats the performance metric as a function of the control parameters and then uses a control algorithm to improve that metric. The model-based control algorithms include modal control algorithms, nonlinear control algorithms and control algorithms based on geometrical optics. After a brief description of these typical control algorithms, hybrid methods combining a model-free control algorithm with a model-based control algorithm are summarized. Additionally, the characteristics of the various control algorithms are compared and analyzed. We also discuss the extensive applications of WFSless AO systems in free space optical communication (FSO), retinal imaging in the human eye, confocal microscopy, coherent beam combination (CBC) techniques and extended objects.
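A minimal sketch of the model-free, metric-optimization idea is shown below using a stochastic parallel gradient descent (SPGD) loop on a toy Strehl-like metric. The "optical system" is a synthetic stand-in (the metric peaks when a fixed aberration is cancelled), and the gain, perturbation size and iteration count are illustrative choices, not values from the review.

```python
import numpy as np

rng = np.random.default_rng(4)
n_act = 32                                    # number of deformable-mirror actuators

# Toy stand-in for the optical system: the metric peaks when the DM command u
# cancels a fixed aberration phi (a real system would return, e.g., focal-spot intensity).
phi = rng.standard_normal(n_act)
def metric(u):
    return np.exp(-np.sum((u + phi) ** 2) / n_act)

u = np.zeros(n_act)
gain, perturb = 5.0, 0.2
for _ in range(3000):
    delta = perturb * rng.choice([-1.0, 1.0], size=n_act)   # random bipolar perturbation
    dJ = metric(u + delta) - metric(u - delta)               # two-sided metric difference
    u += gain * dJ * delta                                   # SPGD update
print("final metric:", round(metric(u), 3), "(1.0 = aberration fully corrected)")
```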
Optimization and Validation of Rotating Current Excitation with GMR Array Sensors for Riveted
2016-09-16
…distribution. Simulation results, using both an optimized coil and a conventional coil, are generated using the finite element method (FEM) model. The signal magnitude for an optimized coil is seen to be… 4. Model Based Performance Analysis: A 3D finite element model (FEM) is used to analyze the performance of the optimized coil and…
TSARINA: A Computer Model for Assessing Conventional and Chemical Attacks on Airbases
1990-09-01
…IV, and has been updated to FORTRAN 77; it has been adapted to various computer systems, as was the widely used AIDA model and the previous versions of… conventional and chemical attacks on sortie generation. In the first version of TSARINA [1, 2], several key additions were made to the AIDA model so that (1… various on-base resources, in addition to the estimates of hits and facility damage that are generated by the original AIDA model. The second version…
Examination of simplified travel demand model. [Internal volume forecasting model]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, R.L. Jr.; McFarlane, W.J.
1978-01-01
A simplified travel demand model, the Internal Volume Forecasting (IVF) model, proposed by Low in 1972, is evaluated as an alternative to the conventional urban travel demand modeling process. Calibration of the IVF model for a county-level study area in Central Wisconsin results in what appears to be a reasonable model; however, analysis of the structure of the model reveals two primary mis-specifications. Correction of the mis-specifications leads to a simplified gravity model version of the conventional urban travel demand models. Application of the original IVF model to "forecast" 1960 traffic volumes based on the model calibrated for 1970 produces accurate estimates. Shortcut and ad hoc models may appear to provide reasonable results in both the base and horizon years; however, as shown by the IVF model, such models will not always provide a reliable basis for transportation planning and investment decisions.
Students concept understanding of fluid static based on the types of teaching
NASA Astrophysics Data System (ADS)
Rahmawati, I. D.; Suparmi; Sunarno, W.
2018-03-01
This research aims to compare the concept understanding of students taught with guided-inquiry-based learning and with conventional learning. The subjects in this study were two classes of high school students, each consisting of 32 students; both classes were homogeneous. The data were collected with a conceptual test in multiple-choice form in which students gave arguments for their answers. The data were analyzed using a qualitative descriptive method. The results of the study showed that the average score of the class using guided-inquiry-based learning was 78.44, while that of the class using conventional learning was 65.16. Based on these data, the guided inquiry model is an effective learning model for improving students' concept understanding.
Long-Boyle, Janel; Savic, Rada; Yan, Shirley; Bartelink, Imke; Musick, Lisa; French, Deborah; Law, Jason; Horn, Biljana; Cowan, Morton J.; Dvorak, Christopher C.
2014-01-01
Background: Population pharmacokinetic (PK) studies of busulfan in children have shown that individualized model-based algorithms provide improved targeted busulfan therapy when compared to conventional dosing. The adoption of population PK models into routine clinical practice has been hampered by the tendency of pharmacologists to develop complex models too impractical for clinicians to use. The authors aimed to develop a population PK model for busulfan in children that can reliably achieve therapeutic exposure (concentration-at-steady-state, Css) and implement a simple, model-based tool for the initial dosing of busulfan in children undergoing HCT. Patients and Methods: Model development was conducted using retrospective data available in 90 pediatric and young adult patients who had undergone HCT with busulfan conditioning. Busulfan drug levels and potential covariates influencing drug exposure were analyzed using the non-linear mixed effects modeling software, NONMEM. The final population PK model was implemented into a clinician-friendly, Microsoft Excel-based tool and used to recommend initial doses of busulfan in a group of 21 pediatric patients prospectively dosed based on the population PK model. Results: Modeling of busulfan time-concentration data indicates busulfan CL displays non-linearity in children, decreasing up to approximately 20% between the concentrations of 250–2000 ng/mL. Important patient-specific covariates found to significantly impact busulfan CL were actual body weight and age. The percentage of individuals achieving a therapeutic Css was significantly higher in subjects receiving initial doses based on the population PK model (81%) versus historical controls dosed on conventional guidelines (52%) (p = 0.02). Conclusion: When compared to the conventional dosing guidelines, the model-based algorithm demonstrates significant improvement for providing targeted busulfan therapy in children and young adults. PMID:25162216
ERIC Educational Resources Information Center
Hovelja, Tomaž; Vavpotic, Damjan; Žvanut, Boštjan
2016-01-01
The evaluation of e-learning and conventional pedagogical activities in nursing programmes has focused either on a single pedagogical activity or the entire curriculum, and only on students' or teachers' perspective. The goal of this study was to design and test a novel approach for evaluation of e-learning and conventional pedagogical activities…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andreasen, Daniel, E-mail: dana@dtu.dk; Van Leemput, Koen; Hansen, Rasmus H.
Purpose: In radiotherapy (RT) based on magnetic resonance imaging (MRI) as the only modality, the information on electron density must be derived from the MRI scan by creating a so-called pseudo computed tomography (pCT). This is a nontrivial task, since the voxel-intensities in an MRI scan are not uniquely related to electron density. To solve the task, voxel-based or atlas-based models have typically been used. The voxel-based models require a specialized dual ultrashort echo time MRI sequence for bone visualization and the atlas-based models require deformable registrations of conventional MRI scans. In this study, we investigate the potential of a patch-based method for creating a pCT based on conventional T1-weighted MRI scans without using deformable registrations. We compare this method against two state-of-the-art methods within the voxel-based and atlas-based categories. Methods: The data consisted of CT and MRI scans of five cranial RT patients. To compare the performance of the different methods, a nested cross validation was done to find optimal model parameters for all the methods. Voxel-wise and geometric evaluations of the pCTs were done. Furthermore, a radiologic evaluation based on water equivalent path lengths was carried out, comparing the upper hemisphere of the head in the pCT and the real CT. Finally, the dosimetric accuracy was tested and compared for a photon treatment plan. Results: The pCTs produced with the patch-based method had the best voxel-wise, geometric, and radiologic agreement with the real CT, closely followed by the atlas-based method. In terms of the dosimetric accuracy, the patch-based method had average deviations of less than 0.5% in measures related to target coverage. Conclusions: We showed that a patch-based method could generate an accurate pCT based on conventional T1-weighted MRI sequences and without deformable registrations. In our evaluations, the method performed better than existing voxel-based and atlas-based methods and showed a promising potential for RT of the brain based only on MRI.
NASA Astrophysics Data System (ADS)
Hassell, David; Gregory, Jonathan; Blower, Jon; Lawrence, Bryan N.; Taylor, Karl E.
2017-12-01
The CF (Climate and Forecast) metadata conventions are designed to promote the creation, processing, and sharing of climate and forecasting data using Network Common Data Form (netCDF) files and libraries. The CF conventions provide a description of the physical meaning of data and of their spatial and temporal properties, but they depend on the netCDF file encoding which can currently only be fully understood and interpreted by someone familiar with the rules and relationships specified in the conventions documentation. To aid in development of CF-compliant software and to capture with a minimal set of elements all of the information contained in the CF conventions, we propose a formal data model for CF which is independent of netCDF and describes all possible CF-compliant data. Because such data will often be analysed and visualised using software based on other data models, we compare our CF data model with the ISO 19123 coverage model, the Open Geospatial Consortium CF netCDF standard, and the Unidata Common Data Model. To demonstrate that this CF data model can in fact be implemented, we present cf-python, a Python software library that conforms to the model and can manipulate any CF-compliant dataset.
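To make the encoding side of the conventions concrete, the sketch below writes a minimal CF-style netCDF file with the netCDF4 library (not cf-python itself); the variable names, attribute values and convention version string are illustrative choices, not prescribed by the paper.

```python
import numpy as np
from netCDF4 import Dataset

# Minimal CF-style encoding of a 2-D temperature field with the netCDF4 library.
# (cf-python would read such a file into its field construct; this sketch only
# shows the netCDF encoding side that the CF conventions describe.)
with Dataset("example_cf.nc", "w") as nc:
    nc.Conventions = "CF-1.7"
    nc.createDimension("lat", 3)
    nc.createDimension("lon", 4)

    lat = nc.createVariable("lat", "f4", ("lat",))
    lat.standard_name, lat.units = "latitude", "degrees_north"
    lat[:] = [-30.0, 0.0, 30.0]

    lon = nc.createVariable("lon", "f4", ("lon",))
    lon.standard_name, lon.units = "longitude", "degrees_east"
    lon[:] = [0.0, 90.0, 180.0, 270.0]

    tas = nc.createVariable("tas", "f4", ("lat", "lon"))
    tas.standard_name, tas.units = "air_temperature", "K"
    tas.cell_methods = "time: mean"            # physical meaning carried by attributes
    tas[:] = 280.0 + 5.0 * np.random.rand(3, 4)
```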
Coding conventions and principles for a National Land-Change Modeling Framework
Donato, David I.
2017-07-14
This report establishes specific rules for writing computer source code for use with the National Land-Change Modeling Framework (NLCMF). These specific rules consist of conventions and principles for writing code primarily in the C and C++ programming languages. Collectively, these coding conventions and coding principles create an NLCMF programming style. In addition to detailed naming conventions, this report provides general coding conventions and principles intended to facilitate the development of high-performance software implemented with code that is extensible, flexible, and interoperable. Conventions for developing modular code are explained in general terms and also enabled and demonstrated through the appended templates for C++ base source-code and header files. The NLCMF limited-extern approach to module structure, code inclusion, and cross-module access to data is both explained in the text and then illustrated through the module templates. Advice on the use of global variables is provided.
Viewing Knowledge Bases as Qualitative Models.
ERIC Educational Resources Information Center
Clancey, William J.
The concept of a qualitative model provides a unifying perspective for understanding how expert systems differ from conventional programs. Knowledge bases contain qualitative models of systems in the world, that is, primarily non-numeric descriptions that provide a basis for explaining and predicting behavior and formulating action plans. The…
Schwarz-Christoffel Conformal Mapping based Grid Generation for Global Oceanic Circulation Models
NASA Astrophysics Data System (ADS)
Xu, Shiming
2015-04-01
We propose new grid generation algorithms for global ocean general circulation models (OGCMs). In contrast to conventional dipolar or tripolar grids based on analytical forms, the new algorithms are based on Schwarz-Christoffel (SC) conformal mapping with prescribed boundary information. While dealing with the conventional grid design problem of pole relocation, they also address more advanced issues of computational efficiency and the new requirements on OGCM grids arising from the recent trend of high-resolution and multi-scale modeling. The proposed grid generation algorithms could potentially achieve the alignment of grid lines to coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the generated grids are still orthogonal curvilinear, they can be readily utilized in existing Bryan-Cox-Semtner type ocean models. The proposed methodology can also be applied to the grid generation task for regional ocean modeling when a complex land-ocean distribution is present.
Mixture of autoregressive modeling orders and its implication on single trial EEG classification
Atyabi, Adham; Shic, Frederick; Naples, Adam
2016-01-01
Autoregressive (AR) models are among the feature types commonly utilized in electroencephalogram (EEG) studies, since they offer better resolution and smoother spectra and are applicable to short segments of data. Identifying the correct AR modeling order is an open challenge: lower model orders poorly represent the signal, while higher orders increase noise. Conventional methods for estimating the modeling order include the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Final Prediction Error (FPE). This article assesses the hypothesis that an appropriate mixture of multiple AR orders is likely to represent the true signal better than any single order. Better spectral representation of underlying EEG patterns can increase the utility of AR features in Brain Computer Interface (BCI) systems by making such systems respond more quickly and accurately to the operator's thoughts. Two mechanisms, evolutionary-based fusion and ensemble-based mixture, are utilized to identify such an appropriate mixture of modeling orders. The classification performance of the resultant AR mixtures is assessed against several conventional methods utilized by the community, including 1) a well-known set of commonly used orders suggested by the literature, 2) conventional order estimation approaches (e.g., AIC, BIC and FPE), and 3) a blind mixture of AR features originating from a range of well-known orders. Five datasets from BCI competition III that contain 2, 3 and 4 motor imagery tasks are considered for the assessment. The results indicate the superiority of the ensemble-based modeling order mixture and evolutionary-based order fusion methods on all datasets. PMID:28740331
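A minimal sketch of conventional order selection, and of a naive ensemble over candidate orders, is given below. The AR fitting is plain least squares on a synthetic segment, and the uniform ensemble average is only a stand-in for the evolutionary and ensemble weightings studied in the paper.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model; returns coefficients and residual variance."""
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coeffs
    return coeffs, np.mean(resid ** 2)

rng = np.random.default_rng(5)
# Synthetic EEG-like segment: a true AR(4) process plus noise (illustrative only).
true = [0.5, -0.2, 0.15, -0.1]
x = np.zeros(600)
for t in range(4, 600):
    x[t] = np.dot(true, x[t - 4 : t][::-1]) + rng.standard_normal()

candidate_orders = [2, 4, 6, 8, 10, 12]
n = len(x)
models = {}
for p in candidate_orders:
    coeffs, sigma2 = fit_ar(x, p)
    aic = n * np.log(sigma2) + 2 * p          # standard AIC for an AR(p) fit
    models[p] = (coeffs, aic)
    print(f"AR({p:2d}): AIC = {aic:8.1f}")

best = min(models, key=lambda p: models[p][1])
print("order selected by AIC:", best)

# Simple ensemble stand-in for an order mixture: average the one-step predictions
# of all candidate orders (the paper's evolutionary/ensemble weighting is not reproduced).
preds = [np.dot(models[p][0], x[-1 : -p - 1 : -1]) for p in candidate_orders]
print("ensemble one-step forecast:", round(float(np.mean(preds)), 3))
```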
Automatic mathematical modeling for real time simulation system
NASA Technical Reports Server (NTRS)
Wang, Caroline; Purinton, Steve
1988-01-01
A methodology for automatic mathematical modeling and generation of simulation models is described. The models are verified by running in a test environment using standard profiles, with the results compared against known results. The major objective is to create a user friendly environment for engineers to design, maintain, and verify their models and also to automatically convert the mathematical model into conventional code for conventional computation. A demonstration program was designed for modeling the Space Shuttle Main Engine simulation. It is written in LISP and MACSYMA and runs on a Symbolics 3670 Lisp machine. The program provides a very friendly and well organized environment for engineers to build a knowledge base for base equations and general information. It contains an initial set of component process elements for the Space Shuttle Main Engine simulation and a questionnaire that allows the engineer to answer a set of questions to specify a particular model. The system is then able to automatically generate the model and FORTRAN code. A future goal, currently under development, is to download the FORTRAN code to a VAX/VMS system for conventional computation. The SSME mathematical model will be verified in a test environment and the solution compared with the real data profile. The use of artificial intelligence techniques has shown that the simulation modeling process can be simplified.
Practice Makes Perfect: Using a Computer-Based Business Simulation in Entrepreneurship Education
ERIC Educational Resources Information Center
Armer, Gina R. M.
2011-01-01
This article explains the use of a specific computer-based simulation program as a successful experiential learning model and as a way to increase student motivation while augmenting conventional methods of business instruction. This model is based on established adult learning principles.
On macromolecular refinement at subatomic resolution with interatomic scatterers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afonine, Pavel V.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.
2007-11-09
A study of the accurate electron density distribution in molecular crystals at subatomic resolution, better than ∼1.0 Å, requires more detailed models than those based on independent spherical atoms. A tool conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8-1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark datasets gave results comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several datasets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.
On macromolecular refinement at subatomic resolution with interatomic scatterers
Afonine, Pavel V.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.; Lunin, Vladimir Y.; Urzhumtsev, Alexandre
2007-01-01
A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than ∼1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8–1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package. PMID:18007035
On macromolecular refinement at subatomic resolution with interatomic scatterers.
Afonine, Pavel V; Grosse-Kunstleve, Ralf W; Adams, Paul D; Lunin, Vladimir Y; Urzhumtsev, Alexandre
2007-11-01
A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than approximately 1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8-1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.
Automatic mathematical modeling for space application
NASA Technical Reports Server (NTRS)
Wang, Caroline K.
1987-01-01
A methodology for automatic mathematical modeling is described. The major objective is to create a very friendly environment for engineers to design, maintain and verify their model and also automatically convert the mathematical model into FORTRAN code for conventional computation. A demonstration program was designed for modeling the Space Shuttle Main Engine simulation mathematical model called Propulsion System Automatic Modeling (PSAM). PSAM provides a very friendly and well organized environment for engineers to build a knowledge base for base equations and general information. PSAM contains an initial set of component process elements for the Space Shuttle Main Engine simulation and a questionnaire that allows the engineer to answer a set of questions to specify a particular model. PSAM is then able to automatically generate the model and the FORTRAN code. A future goal is to download the FORTRAN code to the VAX/VMS system for conventional computation.
Computational Labs Using VPython Complement Conventional Labs in Online and Regular Physics Classes
NASA Astrophysics Data System (ADS)
Bachlechner, Martina E.
2009-03-01
Fairmont State University has developed online physics classes for the high-school teaching certificate based on the textbook Matter and Interactions by Chabay and Sherwood. This led to using computational VPython labs in the traditional classroom setting as well, to complement conventional labs. The computational modeling process has proven to provide an excellent basis for the subsequent conventional lab and allows for a concrete experience of the difference between behavior according to a model and realistic behavior. Observations in the regular classroom setting feed back into the development of the online classes.
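A minimal example of the kind of VPython computational lab referred to above is sketched below (assuming the vpython package is installed); it runs the Matter-and-Interactions-style momentum-update loop for a ball under gravity, with illustrative parameter values.

```python
from vpython import sphere, vector, color, rate

# Minimal momentum-update loop: a ball launched with an initial velocity falls
# under gravity. Masses, velocities and time step are illustrative lab parameters.
ball = sphere(pos=vector(0, 1, 0), radius=0.05, color=color.red, make_trail=True)
m = 0.1                                  # mass (kg)
p = m * vector(2.0, 4.0, 0)              # initial momentum (kg m/s)
g = vector(0, -9.8, 0)
dt = 0.01

while ball.pos.y > 0:
    rate(100)                            # limit animation speed
    F = m * g                            # net force: gravity only
    p = p + F * dt                       # momentum principle: dp = F dt
    ball.pos = ball.pos + (p / m) * dt   # position update

print("range:", ball.pos.x, "m")
```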
Gene-expression programming for flip-bucket spillway scour.
Guven, Aytac; Azamathulla, H Md
2012-01-01
During the last two decades, researchers have noticed that the use of soft computing techniques as an alternative to conventional statistical methods based on controlled laboratory or field data gives significantly better results. Gene-expression programming (GEP), which is an extension of genetic programming (GP), has recently attracted the attention of researchers for the prediction of hydraulic data. This study presents GEP as an alternative tool in the prediction of scour downstream of a flip-bucket spillway. Actual field measurements were used to develop the GEP models. The proposed GEP models are compared with the earlier conventional GP results of others (Azamathulla et al. 2008b; RMSE = 2.347, δ = 0.377, R = 0.842) and with those of commonly used regression-based formulae. The GEP predictions were in close agreement with the measured values, and considerably better than those of conventional GP and the regression-based formulae. The results are reported in terms of statistical error measures (GEP1: RMSE = 1.596, δ = 0.109, R = 0.917) and illustrated via scatter plots.
Least-squares model-based halftoning
NASA Astrophysics Data System (ADS)
Pappas, Thrasyvoulos N.; Neuhoff, David L.
1992-08-01
A least-squares model-based approach to digital halftoning is proposed. It exploits both a printer model and a model for visual perception. It attempts to produce an 'optimal' halftoned reproduction by minimizing the squared error between the response of the cascade of the printer and visual models to the binary image and the response of the visual model to the original gray-scale image. Conventional methods, such as clustered ordered dither, use the properties of the eye only implicitly, and resist printer distortions at the expense of spatial and gray-scale resolution. In previous work we showed that our printer model can be used to modify error diffusion to account for printer distortions. The modified error diffusion algorithm has better spatial and gray-scale resolution than conventional techniques, but produces some well-known artifacts and asymmetries because it does not make use of an explicit eye model. Least-squares model-based halftoning uses explicit eye models and relies on printer models that predict distortions and exploit them to increase, rather than decrease, both spatial and gray-scale resolution. We have shown that the one-dimensional least-squares problem, in which each row or column of the image is halftoned independently, can be implemented with the Viterbi algorithm. Unfortunately, no closed-form solution can be found in two dimensions. The two-dimensional least-squares solution is obtained by iterative techniques. Experiments show that least-squares model-based halftoning produces more gray levels and better spatial resolution than conventional techniques. We also show that the least-squares approach eliminates the problems associated with error diffusion. Model-based halftoning can be especially useful in transmission of high-quality documents using high-fidelity gray-scale image encoders. As we have shown, in such cases halftoning can be performed at the receiver, just before printing. Apart from coding efficiency, this approach permits the halftoner to be tuned to the individual printer, whose characteristics may vary considerably from those of other printers, for example, write-black vs. write-white laser printers.
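For readers who want to experiment, the least-squares objective described above can be prototyped in a few lines. The sketch below is only illustrative: it stands in a Gaussian low-pass filter for the visual model, ignores the printer model, and uses a greedy pixel-toggling search instead of the Viterbi or iterative schemes discussed in the abstract; all function names and parameter values are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

def halftone_least_squares(gray, sigma=1.5, n_sweeps=5):
    # Greedy least-squares halftoning: toggle pixels whenever the squared error
    # between the blurred (eye-model) halftone and the blurred gray image drops.
    target = gaussian_filter(gray, sigma)            # visual-model response to the original
    half = (gray > 0.5).astype(float)                # start from simple thresholding
    for _ in range(n_sweeps):
        changed = False
        for i in range(gray.shape[0]):
            for j in range(gray.shape[1]):
                err_old = np.sum((gaussian_filter(half, sigma) - target) ** 2)
                trial = half.copy()
                trial[i, j] = 1.0 - trial[i, j]      # toggle this dot
                err_new = np.sum((gaussian_filter(trial, sigma) - target) ** 2)
                if err_new < err_old:
                    half, changed = trial, True
        if not changed:
            break
    return half

gray = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))   # toy 32x32 gradient in [0, 1]
binary = halftone_least_squares(gray)

This brute-force search is affordable only for a toy image, which is exactly why the authors resort to the Viterbi algorithm in one dimension and iterative techniques in two.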
Modeling method of time sequence model based grey system theory and application proceedings
NASA Astrophysics Data System (ADS)
Wei, Xuexia; Luo, Yaling; Zhang, Shiqiang
2015-12-01
This article presents a modeling method for the grey system GM(1,1) model based on information reuse and grey system theory. The method not only greatly enhances the fitting and predicting accuracy of the GM(1,1) model, but also retains the conventional approach's merit of simple computation. On this basis, we present a syphilis trend forecasting method built on information reuse and the grey system GM(1,1) model.
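For orientation, the conventional GM(1,1) construction that the authors build on can be sketched as follows: accumulated generating operation, least-squares fit of the development coefficient a and grey input b, then the time-response forecast. This is the textbook model only, not the authors' information-reuse variant, and the sample series is made up.

import numpy as np

def gm11_forecast(x0, n_ahead=3):
    # Conventional GM(1,1): fit on the series x0 and forecast n_ahead steps.
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                  # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # development coefficient, grey input
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time-response function
    x0_hat = np.diff(x1_hat)                            # inverse AGO
    return x0_hat[len(x0) - 1:]                         # the n_ahead forecast values

print(gm11_forecast([2.87, 3.28, 3.34, 3.72, 3.86]))    # e.g. yearly case counts (made up)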
Wavelet-based spectral finite element dynamic analysis for an axially moving Timoshenko beam
NASA Astrophysics Data System (ADS)
Mokhtari, Ali; Mirdamadi, Hamid Reza; Ghayour, Mostafa
2017-08-01
In this article, a wavelet-based spectral finite element (WSFE) model is formulated for time-domain and wave-domain dynamic analysis of an axially moving Timoshenko beam subjected to axial pretension. The formulation is similar to the conventional FFT-based spectral finite element (SFE) model except that Daubechies wavelet basis functions are used for temporal discretization of the governing partial differential equations into a set of ordinary differential equations. The localized nature of the Daubechies wavelet basis functions helps to rule out the problems of the SFE model caused by the periodicity assumption, especially during inverse Fourier transformation back to the time domain. The high accuracy of the WSFE model is then evaluated by comparing its results with conventional finite element and SFE results. The effects of moving-beam speed and axial tensile force on vibration and wave characteristics, and on the static and dynamic stabilities of the moving beam, are investigated.
Time-dependent oral absorption models
NASA Technical Reports Server (NTRS)
Higaki, K.; Yamashita, S.; Amidon, G. L.
2001-01-01
The plasma concentration-time profiles following oral administration of drugs are often irregular and cannot be interpreted easily with conventional models based on first- or zero-order absorption kinetics and lag time. Six new models were developed using a time-dependent absorption rate coefficient, ka(t), wherein the time dependency was varied to account for dynamic processes in the gastrointestinal tract such as changes with time in fluid absorption or secretion, in absorption surface area, and in motility. In the present study, the plasma concentration profiles of propranolol obtained in human subjects following oral dosing were analyzed using the newly derived mass-balance-based models and compared with the conventional models. Nonlinear regression analysis indicated that the conventional compartment model including lag time (CLAG model) could not predict the rapid initial increase in plasma concentration after dosing, and the predicted Cmax values were much lower than those observed. On the other hand, all models with the time-dependent absorption rate coefficient, ka(t), were superior to the CLAG model in predicting plasma concentration profiles. Based on Akaike's Information Criterion (AIC), the fluid absorption model without lag time (FA model) exhibited the best overall fit to the data. The two-phase model including lag time (TPLAG model) was also found to be a good model judging from the sum-of-squares values. This model also described the irregular profiles of plasma concentration with time and frequently predicted Cmax values satisfactorily. A comparison of the absorption rate profiles also suggested that the TPLAG model is better at predicting irregular absorption kinetics than the FA model. In conclusion, the incorporation of a time-dependent absorption rate coefficient ka(t) allows the prediction of nonlinear absorption characteristics in a more reliable manner.
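A minimal numerical illustration of a time-dependent absorption rate coefficient is given below. The exponential form chosen for ka(t) and every parameter value are assumptions for illustration only; they are not the FA or TPLAG formulations fitted in the study.

import numpy as np
from scipy.integrate import odeint

def oral_model(y, t, ka0, kd, ke):
    # One-compartment oral model in which ka(t) = ka0*exp(-kd*t) decays with time.
    a_gut, a_plasma = y
    ka_t = ka0 * np.exp(-kd * t)
    return [-ka_t * a_gut, ka_t * a_gut - ke * a_plasma]

t = np.linspace(0.0, 24.0, 200)                       # hours
dose, volume = 80.0, 40.0                             # mg and L (illustrative)
amounts = odeint(oral_model, [dose, 0.0], t, args=(1.2, 0.3, 0.25))
conc = amounts[:, 1] / volume                         # plasma concentration profile

Compared with a constant ka, a decaying ka(t) front-loads absorption and reproduces the rapid initial rise in concentration that a lag-time compartment model tends to miss.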
Fukuda, Haruhisa; Kuroki, Manabu
2016-03-01
To develop and internally validate a surgical site infection (SSI) prediction model for Japan. Retrospective observational cohort study. We analyzed surveillance data submitted to the Japan Nosocomial Infections Surveillance system for patients who had undergone target surgical procedures from January 1, 2010, through December 31, 2012. Logistic regression analyses were used to develop statistical models for predicting SSIs. An SSI prediction model was constructed for each of the procedure categories by statistically selecting the appropriate risk factors from among the collected surveillance data and determining their optimal categorization. Standard bootstrapping techniques were applied to assess potential overfitting. The C-index was used to compare the predictive performances of the new statistical models with those of models based on conventional risk index variables. The study sample comprised 349,987 cases from 428 participant hospitals throughout Japan, and the overall SSI incidence was 7.0%. The C-indices of the new statistical models were significantly higher than those of the conventional risk index models in 21 (67.7%) of the 31 procedure categories (P<.05). No significant overfitting was detected. Japan-specific SSI prediction models were shown to generally have higher accuracy than conventional risk index models. These new models may have applications in assessing hospital performance and identifying high-risk patients in specific procedure categories.
Cost effectiveness of conventional versus LANDSAT use data for hydrologic modeling
NASA Technical Reports Server (NTRS)
George, T. S.; Taylor, R. S.
1982-01-01
Six case studies were analyzed to investigate the cost effectiveness of using land use data obtained from LANDSAT as opposed to conventionally obtained data. A procedure was developed to determine the relative effectiveness of the two alternative means of acquiring data for hydrological modelling. The cost of conventionally acquired data ranged between $3,000 and $16,000 for the six test basins. Information based on LANDSAT imagery cost between $2,000 and $5,000. Results of the effectiveness analysis show that the differences between the two methods are insignificant. From the cost comparison and the fact that each method, conventional and LANDSAT, is shown to be equally effective in developing land use data for hydrologic studies, the cost effectiveness of the conventional or LANDSAT method is found to be a function of basin size for the six test watersheds analyzed. The LANDSAT approach is cost effective for areas larger than 10 square miles.
A hybrid algorithm for clustering of time series data based on affinity search technique.
Aghabozorgi, Saeed; Ying Wah, Teh; Herawan, Tutut; Jalab, Hamid A; Shaygan, Mohammad Amin; Jalali, Alireza
2014-01-01
Time series clustering is an important solution to various problems in numerous fields of research, including business, medical science, and finance. However, conventional clustering algorithms are not practical for time series data because they are essentially designed for static data. This impracticality results in poor clustering accuracy in several systems. In this paper, a new hybrid clustering algorithm is proposed based on the similarity in shape of time series data. Time series data are first grouped as subclusters based on similarity in time. The subclusters are then merged using the k-Medoids algorithm based on similarity in shape. This model has two contributions: (1) it is more accurate than other conventional and hybrid approaches and (2) it determines the similarity in shape among time series data with a low complexity. To evaluate the accuracy of the proposed model, the model is tested extensively using synthetic and real-world time series datasets.
A Hybrid Algorithm for Clustering of Time Series Data Based on Affinity Search Technique
Aghabozorgi, Saeed; Ying Wah, Teh; Herawan, Tutut; Jalab, Hamid A.; Shaygan, Mohammad Amin; Jalali, Alireza
2014-01-01
Time series clustering is an important solution to various problems in numerous fields of research, including business, medical science, and finance. However, conventional clustering algorithms are not practical for time series data because they are essentially designed for static data. This impracticality results in poor clustering accuracy in several systems. In this paper, a new hybrid clustering algorithm is proposed based on the similarity in shape of time series data. Time series data are first grouped as subclusters based on similarity in time. The subclusters are then merged using the k-Medoids algorithm based on similarity in shape. This model has two contributions: (1) it is more accurate than other conventional and hybrid approaches and (2) it determines the similarity in shape among time series data with a low complexity. To evaluate the accuracy of the proposed model, the model is tested extensively using synthetic and real-world time series datasets. PMID:24982966
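The two-stage scheme described in these two records can be caricatured in a short script: a coarse grouping by similarity in time (plain Euclidean distance), followed by a k-Medoids merge of the subcluster prototypes using a shape distance. The distance definition, cluster counts, and function names here are illustrative assumptions, not the authors' exact algorithm.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist, squareform

def shape_distance(a, b):
    # 1 - Pearson correlation of z-normalised series: small when the shapes match.
    az = (a - a.mean()) / a.std()
    bz = (b - b.mean()) / b.std()
    return 1.0 - np.corrcoef(az, bz)[0, 1]

def hybrid_cluster(series, n_sub=4, k=3, iters=20):
    # Stage 1: subclusters by similarity in time.
    sub = fcluster(linkage(series, method="ward"), n_sub, criterion="maxclust")
    prototypes = np.array([series[sub == c].mean(axis=0) for c in np.unique(sub)])
    # Stage 2: merge the prototypes with k-Medoids on similarity in shape.
    d = squareform(pdist(prototypes, metric=shape_distance))
    medoids = np.arange(k)
    for _ in range(iters):
        labels = np.argmin(d[:, medoids], axis=1)
        new = []
        for j in range(k):
            members = np.where(labels == j)[0]
            new.append(members[d[np.ix_(members, members)].sum(axis=1).argmin()])
        new = np.array(new)
        if np.array_equal(new, medoids):
            break
        medoids = new
    return sub, labels, medoids

rng = np.random.default_rng(0)
base = np.sin(np.linspace(0, 4 * np.pi, 60))
series = np.vstack([np.roll(base, s) + rng.normal(0, 0.1, 60) for s in (0, 1, 2, 20, 21, 40, 41, 42)])
sub, labels, medoids = hybrid_cluster(series)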
A Gestalt Model for Improving Convention and Conventional Relationships
ERIC Educational Resources Information Center
Coven, Arnold B.; And Others
1978-01-01
This article presents a brief overview of Gestalt theory, the group interventions utilized to experiment with interpersonal contact in a conference workshop along with their theory base, an evaluation of the workshop, and some experimental ideas and recommended activities that group leaders may want to incorporate into similar interpersonal…
Data Intensive Systems (DIS) Benchmark Performance Summary
2003-08-01
models assumed by today's conventional architectures. Such applications include model-based Automatic Target Recognition (ATR), synthetic aperture...radar (SAR) codes, large scale dynamic databases/battlefield integration, dynamic sensor-based processing, high-speed cryptanalysis, high speed...distributed interactive and data intensive simulations, data-oriented problems characterized by pointer-based and other highly irregular data structures
Systematic methods for the design of a class of fuzzy logic controllers
NASA Astrophysics Data System (ADS)
Yasin, Saad Yaser
2002-09-01
Fuzzy logic control, a relatively new branch of control, can be used effectively whenever conventional control techniques become inapplicable or impractical. Various attempts have been made to create a generalized fuzzy control system and to formulate an analytically based fuzzy control law. In this study, two methods, the left and right parameterization method and the normalized spline-based membership function method, were utilized for formulating analytical fuzzy control laws in important practical control applications. The first model was used to design an idle speed controller, while the second was applied to an inverted control problem. The results of both showed that a fuzzy logic control system based on the developed models could be used effectively to control highly nonlinear and complex systems. This study also investigated the application of fuzzy control in areas not fully utilizing fuzzy logic control. Three important practical applications pertaining to the automotive industry were studied. The first automotive-related application was the idle speed of spark ignition engines, using two fuzzy control methods: (1) left and right parameterization, and (2) fuzzy clustering techniques and experimental data. The simulation and experimental results showed that a fuzzy controller with performance comparable to a conventional controller could be designed based only on experimental data and intuitive knowledge of the system. In the second application, the automotive cruise control problem, a fuzzy control model was developed using a parameter-adaptive Proportional plus Integral plus Derivative (PID)-type fuzzy logic controller. Results were comparable to those using linearized conventional PID and linear quadratic regulator (LQR) controllers and, in certain cases and conditions, the developed controller outperformed the conventional PID and LQR controllers. The third application involved the air/fuel ratio control problem, using fuzzy clustering techniques, experimental data, and a conversion algorithm, to develop a fuzzy-based control algorithm. Results were similar to those obtained by recently published conventional control based studies. The influence of the fuzzy inference operators and parameters on the performance and stability of the fuzzy logic controller was also studied. Results indicated that the selection of certain parameters, or combinations of parameters, greatly affects the performance and stability of the fuzzy controller. Diagnostic guidelines for tuning or changing certain factors or parameters to improve controller performance were developed based on knowledge gained from conventional control methods and from the experimental and simulation results of this study.
CF Metadata Conventions: Founding Principles, Governance, and Future Directions
NASA Astrophysics Data System (ADS)
Taylor, K. E.
2016-12-01
The CF Metadata Conventions define attributes that promote sharing of climate and forecasting data and facilitate automated processing by computers. The development, maintenance, and evolution of the conventions have mainly been provided by voluntary community contributions. Nevertheless, an organizational framework has been established, which relies on established rules and web-based discussion to ensure smooth (but relatively efficient) evolution of the standard to accommodate new types of data. The CF standard has been essential to the success of high-profile internationally-coordinated modeling activities (e.g, the Coupled Model Intercomparison Project). A summary of CF's founding principles and the prospects for its future evolution will be discussed.
Oosting, Ellen; Hoogeboom, Thomas J; Appelman-de Vries, Suzan A; Swets, Adam; Dronkers, Jaap J; van Meeteren, Nico L U
2016-01-01
The aim of this study was to evaluate the value of conventional factors, the Risk Assessment and Predictor Tool (RAPT) and performance-based functional tests as predictors of delayed recovery after total hip arthroplasty (THA). A prospective cohort study was conducted in a regional hospital in the Netherlands with 315 patients attending for THA in 2012. The dependent variable, recovery of function, was assessed with the Modified Iowa Levels of Assistance scale. Delayed recovery was defined as taking more than 3 days to walk independently. Independent variables were age, sex, BMI, Charnley score, RAPT score and scores for four performance-based tests [2-minute walk test, timed up and go test (TUG), 10-meter walking test (10 mW) and hand grip strength]. Regression analysis with all variables identified older age (>70 years), Charnley score C, slow walking speed (10 mW >10.0 s) and poor functional mobility (TUG >10.5 s) as the best predictors of delayed recovery of function. This model (AUC 0.85, 95% CI 0.79-0.91) performed better than a model with conventional factors and RAPT scores, and significantly better (p = 0.04) than a model with only conventional factors (AUC 0.81, 95% CI 0.74-0.87). The combination of performance-based tests and conventional factors predicted inpatient functional recovery after THA. Two simple functional performance-based tests have a significant added value over a more conventional screening with age and comorbidities to predict recovery of functioning immediately after total hip surgery. Patients over 70 years old, with comorbidities, with a TUG score >10.5 s and a walking speed <1.0 m/s are at risk for delayed recovery of functioning. Those high-risk patients need an accurate discharge plan and could benefit from targeted pre- and postoperative therapeutic exercise programs.
Comparing field- and model-based standing dead tree carbon stock estimates across forests of the US
Chistopher W. Woodall; Grant M. Domke; David W. MacFarlane; Christopher M. Oswalt
2012-01-01
As signatories to the United Nations Framework Convention on Climate Change, the US has been estimating standing dead tree (SDT) carbon (C) stocks using a model based on live tree attributes. The USDA Forest Service began sampling SDTs nationwide in 1999. With comprehensive field data now available, the objective of this study was to compare field- and model-based...
Model based control of dynamic atomic force microscope.
Lee, Chibum; Salapaka, Srinivasa M
2015-04-01
A model-based robust control approach is proposed that significantly improves imaging bandwidth for dynamic-mode atomic force microscopy. A model for cantilever oscillation amplitude and phase dynamics is derived and used for the control design. In particular, the control design is based on a linearized model and robust H(∞) control theory. This design yields a significant improvement compared with conventional proportional-integral designs, as verified by experiments.
Insights from intercomparison of microbial and conventional soil models
NASA Astrophysics Data System (ADS)
Allison, S. D.; Li, J.; Luo, Y.; Mayes, M. A.; Wang, G.
2014-12-01
Changing the structure of soil biogeochemical models to represent coupling between microbial biomass and carbon substrate pools could improve predictions of carbon-climate feedbacks. So-called "microbial models" with this structure make very different predictions from conventional models based on first-order decay of carbon substrate pools. Still, the value of microbial models is uncertain because microbial physiological parameters are poorly constrained and model behaviors have not been fully explored. To address these issues, we developed an approach for inter-comparing microbial and conventional models. We initially focused on soil carbon responses to microbial carbon use efficiency (CUE) and temperature. Three scenarios were implemented in all models at a common reference temperature (20°C): constant CUE (held at 0.31), varied CUE (−0.016 °C⁻¹), and 50% acclimated CUE (−0.008 °C⁻¹). Whereas the conventional model always showed soil carbon losses with increasing temperature, the microbial models each predicted a temperature threshold above which warming led to soil carbon gain. The location of this threshold depended on CUE scenario, with higher temperature thresholds under the acclimated and constant scenarios. This result suggests that the temperature sensitivity of CUE and the structure of the soil carbon model together regulate the long-term soil carbon response to warming. Compared to the conventional model, all microbial models showed oscillatory behavior in response to perturbations and were much less sensitive to changing inputs. Oscillations were weakest in the most complex model with explicit enzyme pools, suggesting that multi-pool coupling might be a more realistic representation of the soil system. This study suggests that model structure and CUE parameterization should be carefully evaluated when scaling up microbial models to ecosystems and the globe.
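The structural contrast at the heart of the intercomparison can be written down compactly: first-order decay of a substrate pool versus a substrate pool coupled to microbial biomass through Michaelis-Menten uptake, with CUE declining with temperature. The sketch below uses placeholder parameter values and a crude temperature scaling; it is not any of the specific models compared in the study.

import numpy as np
from scipy.integrate import odeint

def conventional(y, t, inputs, k):
    return [inputs - k * y[0]]                         # dC/dt = I - k*C

def microbial(y, t, inputs, vmax, km, cue, kb):
    c, b = y                                           # substrate carbon, microbial biomass
    uptake = vmax * b * c / (km + c)
    return [inputs - uptake, cue * uptake - kb * b]

def cue_at(temp, cue_ref=0.31, slope=-0.016, t_ref=20.0):
    return cue_ref + slope * (temp - t_ref)            # the 'varied CUE' scenario

t = np.linspace(0.0, 500.0, 1000)
for temp in (20.0, 25.0):
    q10 = 1.5 ** ((temp - 20.0) / 10.0)                # assumed rate scaling with warming
    c_conv = odeint(conventional, [100.0], t, args=(2.0, 0.02 * q10))[-1, 0]
    c_micro = odeint(microbial, [100.0, 2.0], t, args=(2.0, 8.0 * q10, 300.0, cue_at(temp), 0.4))[-1, 0]
    print(temp, round(c_conv, 1), round(c_micro, 1))

For this toy microbial structure the steady-state substrate is Km·kb/(CUE·Vmax − kb), so warming can raise or lower soil carbon depending on whether the drop in CUE or the rise in Vmax dominates, which is the threshold behaviour described above; the conventional steady state I/k can only fall as k rises.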
Hwang, Hee Sang; Yoon, Dok Hyun; Suh, Cheolwon; Huh, Jooryung
2016-08-01
Extranodal involvement is a well-known prognostic factor in patients with diffuse large B-cell lymphomas (DLBCL). Nevertheless, the prognostic impact of the extranodal scoring system included in the conventional international prognostic index (IPI) has been questioned in an era where rituximab treatment has become widespread. We investigated the prognostic impacts of individual sites of extranodal involvement in 761 patients with DLBCL who received rituximab-based chemoimmunotherapy. Subsequently, we established a new extranodal scoring system based on extranodal sites, showing significant prognostic correlation, and compared this system with conventional scoring systems, such as the IPI and the National Comprehensive Cancer Network-IPI (NCCN-IPI). An internal validation procedure, using bootstrapped samples, was also performed for both univariate and multivariate models. Using multivariate analysis with a backward variable selection, we found nine extranodal sites (the liver, lung, spleen, central nervous system, bone marrow, kidney, skin, adrenal glands, and peritoneum) that remained significant for use in the final model. Our newly established extranodal scoring system, based on these sites, was better correlated with patient survival than standard scoring systems, such as the IPI and the NCCN-IPI. Internal validation by bootstrapping demonstrated an improvement in model performance of our modified extranodal scoring system. Our new extranodal scoring system, based on the prognostically relevant sites, may improve the performance of conventional prognostic models of DLBCL in the rituximab era and warrants further external validation using large study populations.
Research on the output bit error rate of 2DPSK signal based on stochastic resonance theory
NASA Astrophysics Data System (ADS)
Yan, Daqin; Wang, Fuzhong; Wang, Shuo
2017-12-01
Binary differential phase-shift keying (2DPSK) signals are mainly used for high-speed data transmission. However, the bit error rate of a digital signal receiver is high in poor channel environments. In view of this situation, a novel method based on stochastic resonance (SR) is proposed, which aims to reduce the bit error rate of 2DPSK signals received by coherent demodulation. According to SR theory, a nonlinear receiver model is established, which is used to receive 2DPSK signals under small signal-to-noise ratio (SNR) conditions (between -15 dB and 5 dB), and is compared with the conventional demodulation method. The experimental results demonstrate that when the input SNR is in the range of -15 dB to 5 dB, the output bit error rate of the SR-based nonlinear system model declines significantly compared to the conventional model; it is reduced by 86.15% when the input SNR equals -7 dB. Meanwhile, the peak value of the output signal spectrum is 4.25 times that of the conventional model. Consequently, the output signal of the system is more likely to be detected and the accuracy can be greatly improved.
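For intuition, a bistable stochastic-resonance stage can be simulated by Euler-Maruyama integration of dx/dt = a·x − b·x³ + s(t) plus noise; a weak keyed waveform then drives hops between the two stable states ±√(a/b). The parameters, the square-wave stand-in for the demodulated 2DPSK symbols, and the function names below are all illustrative assumptions, not the receiver model of the paper.

import numpy as np

def bistable_sr(signal, dt, a=1.0, b=1.0, noise_std=0.5, seed=0):
    # Euler-Maruyama integration of the overdamped bistable system driven by signal + noise.
    rng = np.random.default_rng(seed)
    x = np.zeros(len(signal))
    for k in range(1, len(signal)):
        drift = a * x[k - 1] - b * x[k - 1] ** 3 + signal[k - 1]
        x[k] = x[k - 1] + drift * dt + noise_std * np.sqrt(dt) * rng.standard_normal()
    return x

fs, f0 = 1000.0, 5.0
t = np.arange(0.0, 2.0, 1.0 / fs)
symbols = 0.3 * np.sign(np.sin(2.0 * np.pi * f0 * t))      # weak keyed waveform (stand-in)
noisy = symbols + np.random.default_rng(1).standard_normal(len(t))
out = bistable_sr(noisy, dt=1.0 / fs)                       # state hops track the symbol sign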
The KATE shell: An implementation of model-based control, monitor and diagnosis
NASA Technical Reports Server (NTRS)
Cornell, Matthew
1987-01-01
The conventional control and monitor software currently used by the Space Center for Space Shuttle processing has many limitations, such as high maintenance costs and limited diagnostic and simulation capabilities. These limitations motivated the development of a knowledge-based (or model-based) shell to generically control and monitor electro-mechanical systems. The knowledge base describes the system's structure and function and is used by a software shell to do real-time constraint checking, low-level control of components, diagnosis of detected faults, sensor validation, automatic generation of schematic diagrams and automatic recovery from failures. This approach is more versatile and more powerful than the conventional hard-coded approach and offers many advantages over it, although, for systems which require high-speed reaction times or are not well understood, knowledge-based control and monitor systems may not be appropriate.
ERIC Educational Resources Information Center
Sripongwiwat, Supathida; Bunterm, Tassanee; Srisawat, Niwat; Tang, Keow Ngang
2016-01-01
The aim of this study was to examine the effect, after intervention on both experimental and control groups, of constructionism and neurocognitive-based teaching model, and conventional teaching model, on the science learning outcomes and creative thinking of Grade 11 students. The researchers developed a constructionism and neurocognitive-based…
Rate-Based Model Predictive Control of Turbofan Engine Clearance
NASA Technical Reports Server (NTRS)
DeCastro, Jonathan A.
2006-01-01
An innovative model predictive control strategy is developed for control of nonlinear aircraft propulsion systems and sub-systems. At the heart of the controller is a rate-based linear parameter-varying model that propagates the state derivatives across the prediction horizon, extending prediction fidelity to transient regimes where conventional models begin to lose validity. The new control law is applied to a demanding active clearance control application, where the objectives are to tightly regulate blade tip clearances and also anticipate and avoid detrimental blade-shroud rub occurrences by optimally maintaining a predefined minimum clearance. Simulation results verify that the rate-based controller is capable of satisfying the objectives during realistic flight scenarios where both a conventional Jacobian-based model predictive control law and an unconstrained linear-quadratic optimal controller are incapable of doing so. The controller is evaluated using a variety of different actuators, illustrating the efficacy and versatility of the control approach. It is concluded that the new strategy has promise for this and other nonlinear aerospace applications that place high importance on the attainment of control objectives during transient regimes.
Osteoporosis risk prediction using machine learning and conventional methods.
Kim, Sung Kean; Yoo, Tae Keun; Oh, Ein; Kim, Deok Won
2013-01-01
A number of clinical decision tools for osteoporosis risk assessment have been developed to select postmenopausal women for the measurement of bone mineral density. We developed and validated machine learning models with the aim of more accurately identifying the risk of osteoporosis in postmenopausal women, and compared with the ability of a conventional clinical decision tool, osteoporosis self-assessment tool (OST). We collected medical records from Korean postmenopausal women based on the Korea National Health and Nutrition Surveys (KNHANES V-1). The training data set was used to construct models based on popular machine learning algorithms such as support vector machines (SVM), random forests (RF), artificial neural networks (ANN), and logistic regression (LR) based on various predictors associated with low bone density. The learning models were compared with OST. SVM had significantly better area under the curve (AUC) of the receiver operating characteristic (ROC) than ANN, LR, and OST. Validation on the test set showed that SVM predicted osteoporosis risk with an AUC of 0.827, accuracy of 76.7%, sensitivity of 77.8%, and specificity of 76.0%. We were the first to perform comparisons of the performance of osteoporosis prediction between the machine learning and conventional methods using population-based epidemiological data. The machine learning methods may be effective tools for identifying postmenopausal women at high risk for osteoporosis.
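The comparison described above, machine learning classifiers versus a simpler rule scored by ROC AUC, can be reproduced in outline with scikit-learn. The KNHANES data are not reproduced here, so the snippet below runs on a synthetic stand-in dataset; every variable name and number is an assumption.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the clinical predictors (age, weight, height, ...).
X, y = make_classification(n_samples=2000, n_features=8, weights=[0.85], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True, random_state=0)),
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)                                   # fit on the training split
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(name, round(auc, 3))                              # compare held-out AUCs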
Corwin, John; Silberschatz, Avi; Miller, Perry L; Marenco, Luis
2007-01-01
Data sparsity and schema evolution issues affecting clinical informatics and bioinformatics communities have led to the adoption of vertical or object-attribute-value-based database schemas to overcome limitations posed when using conventional relational database technology. This paper explores these issues and discusses why biomedical data are difficult to model using conventional relational techniques. The authors propose a solution to these obstacles based on a relational database engine using a sparse, column-store architecture. The authors provide benchmarks comparing the performance of queries and schema-modification operations using three different strategies: (1) the standard conventional relational design; (2) past approaches used by biomedical informatics researchers; and (3) their sparse, column-store architecture. The performance results show that their architecture is a promising technique for storing and processing many types of data that are not handled well by the other two semantic data models.
NASA Astrophysics Data System (ADS)
Simpson, Mike; Ives, Matthew; Hall, Jim
2016-04-01
There is an increasing body of evidence in support of the use of nature-based solutions as a strategy to mitigate drought. Restored or constructed wetlands, grasslands and in some cases forests have been used with success in numerous case studies. Such solutions remain underused in the UK, where they are not considered as part of long-term plans for supply by water companies. An important step is the translation of knowledge on the benefits of nature-based solutions at the upland/catchment scale into a model of the impact of these solutions on national water resource planning in terms of financial costs, carbon benefits and robustness to drought. Our project, 'A National Scale Model of Green Infrastructure for Water Resources', addresses this issue through development of a model that can show the costs and benefits associated with a broad roll-out of nature-based solutions for water supply. We have developed generalised models of both the hydrological effects of various classes and implementations of nature-based approaches and their economic impacts in terms of construction costs, running costs, time to maturity, land use and carbon benefits. Our next step will be to compare this work with our recent evaluation of conventional water infrastructure, allowing a case to be made in financial terms and in terms of security of water supply. By demonstrating the benefits of nature-based solutions under multiple possible climate and population scenarios we aim to demonstrate the potential value of using nature-based solutions as a component of future long-term water resource plans. Strategies for decision making regarding the selection of nature-based and conventional approaches, developed through discussion with government and industry, will be applied to the final model. Our focus is on keeping our work relevant to the requirements of decision-makers involved in conventional water planning. We propose to present the outcomes of our model for the evaluation of nature-based solutions at catchment scale and ongoing results of our national-scale model.
Toward a better understanding of helicopter stability derivatives
NASA Technical Reports Server (NTRS)
Hansen, R. S.
1982-01-01
An amended six degree of freedom helicopter stability and control derivative model was developed in which body acceleration and control rate derivatives were included in the Taylor series expansion. These additional derivatives were derived from consideration of the effects of the higher order rotor flapping dynamics, which are known to be inadequately represented in the conventional six degree of freedom, quasistatic stability derivative model. The amended model was a substantial improvement over the conventional model, effectively doubling the usable bandwidth and providing a more accurate representation of the short-period and cross-axis characteristics. Further investigations assessed the applicability of the two stability derivative model structures for flight test parameter identification. Parameters were identified using simulation data generated from a higher order baseline model having sixth order rotor tip path plane dynamics. Three lower order models were identified: one using the conventional stability derivative model structure, a second using the amended six degree of freedom model structure, and a third model having eight degrees of freedom that included a simplified rotor tip path plane tilt representation.
Boeddinghaus, Moritz; Breloer, Eva Sabina; Rehmann, Peter; Wöstmann, Bernd
2015-11-01
The purpose of this clinical study was to compare the marginal fit of dental crowns based on three different digital intraoral impression methods and one conventional impression method. Forty-nine teeth of altogether 24 patients were prepared to be treated with full-coverage restorations. Digital impressions were made using three intraoral scanners: Sirona CEREC AC Omnicam (OCam), Heraeus Cara TRIOS and 3M Lava True Definition (TDef). Furthermore, a gypsum model based on a conventional impression (EXA'lence, GC, Tokyo, Japan) was scanned with a standard laboratory scanner (3Shape D700). Based on the datasets obtained, four zirconia copings per tooth were produced. The marginal fit of the copings in the patient's mouth was assessed employing a replica technique. Overall, seven measurement copings did not fit and, therefore, could not be assessed. The marginal gap was 88 μm (68-136 μm) [median/interquartile range] for the TDef, 112 μm (94-149 μm) for the Cara TRIOS, 113 μm (81-157 μm) for the laboratory scanner and 149 μm (114-218 μm) for the OCam. There was a statistically significant difference between the OCam and the other groups (p < 0.05). Within the limitations of this study, it can be concluded that zirconia copings based on intraoral scans and on laboratory scans of a conventional model are comparable to one another with regard to their marginal fit. Regarding the results of this study, a digital intraoral impression can be considered an alternative to a conventional impression with a consecutive digital workflow when the finish line is clearly visible and can be kept dry.
Tres, A; van der Veer, G; Perez-Marin, M D; van Ruth, S M; Garrido-Varo, A
2012-08-22
Organic products tend to retail at a higher price than their conventional counterparts, which makes them susceptible to fraud. In this study we evaluate the application of near-infrared spectroscopy (NIRS) as a rapid, cost-effective method to verify the organic identity of feed for laying hens. For this purpose a total of 36 organic and 60 conventional feed samples from The Netherlands were measured by NIRS. A binary classification model (organic vs conventional feed) was developed using partial least squares discriminant analysis. Models were developed using five different data preprocessing techniques, which were externally validated by a stratified random resampling strategy using 1000 realizations. Spectral regions related to the protein and fat content were among the most important ones for the classification model. The models based on data preprocessed using direct orthogonal signal correction (DOSC), standard normal variate (SNV), and first and second derivatives provided the most successful results in terms of median sensitivity (0.91 in external validation) and median specificity (1.00 for external validation of SNV models and 0.94 for DOSC and first and second derivative models). A previously developed model, which was based on fatty acid fingerprinting of the same set of feed samples, provided a higher sensitivity (1.00). This shows that the NIRS-based approach provides a rapid and low-cost screening tool, whereas the fatty acid fingerprinting model can be used for further confirmation of the organic identity of feed samples for laying hens. These methods provide additional assurance to the administrative controls currently conducted in the organic feed sector.
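A rough outline of the NIRS workflow described above, standard normal variate (SNV) pretreatment followed by a PLS-DA classifier, is sketched below using scikit-learn's PLSRegression on dummy-coded class labels. The spectra are simulated (a single Gaussian "band" separates the classes), so the numbers mean nothing; only the pipeline shape is the point.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

def snv(spectra):
    # Standard normal variate: centre and scale each spectrum individually.
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

rng = np.random.default_rng(0)
wl = np.arange(200)
band = 0.5 * np.exp(-((wl - 120) ** 2) / 200.0)            # illustrative absorption band
organic = rng.normal(0.0, 0.2, (36, wl.size)) + band
conventional = rng.normal(0.0, 0.2, (60, wl.size))
X = snv(np.vstack([organic, conventional]))
y = np.array([1] * 36 + [0] * 60)                           # 1 = organic, 0 = conventional
pls = PLSRegression(n_components=5).fit(X, y)
y_hat = (pls.predict(X).ravel() > 0.5).astype(int)          # PLS-DA decision rule
print("sensitivity", (y_hat[y == 1] == 1).mean(), "specificity", (y_hat[y == 0] == 0).mean())

In practice the threshold and number of latent variables would be tuned by cross-validation, and performance would be reported on externally resampled validation sets as in the abstract, not on the training data.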
NASA Astrophysics Data System (ADS)
Werner, K.; Liu, F. M.; Ostapchenko, S.; Pierog, T.
2004-11-01
After discussing conceptual problems with the conventional string model, we present a new approach, based on a theoretically consistent multiple scattering formalism. First results for proton-proton scattering at 158 GeV are discussed.
Confounder summary scores when comparing the effects of multiple drug exposures.
Cadarette, Suzanne M; Gagne, Joshua J; Solomon, Daniel H; Katz, Jeffrey N; Stürmer, Til
2010-01-01
Little information is available comparing methods to adjust for confounding when considering multiple drug exposures. We compared three analytic strategies to control for confounding based on measured variables: conventional multivariable adjustment, exposure propensity scores (EPS), and disease risk scores (DRS). Each method was applied to a dataset (2000-2006) recently used to examine the comparative effectiveness of four drugs. The relative effectiveness of risedronate, nasal calcitonin, and raloxifene in preventing non-vertebral fracture was in each case compared to alendronate. EPSs were derived both by using multinomial logistic regression (single model EPS) and by three separate logistic regression models (separate model EPS). DRSs were derived and event rates compared using Cox proportional hazard models. DRSs derived among the entire cohort (full cohort DRS) were compared to DRSs derived only among the referent alendronate users (unexposed cohort DRS). Less than 8% deviation from the base estimate (conventional multivariable) was observed when applying the single model EPS, separate model EPS or full cohort DRS. Applying the unexposed cohort DRS when background risk for fracture differed between comparison drug exposure cohorts resulted in -7% to +13% deviation from our base estimate. With sufficient numbers of exposed subjects and outcomes, either conventional multivariable adjustment, EPS or full cohort DRS may be used to adjust for confounding to compare the effects of multiple drug exposures. However, our data also suggest that the unexposed cohort DRS may be problematic when background risks differ between referent and exposed groups. Further empirical and simulation studies will help to clarify the generalizability of our findings.
Hong, X; Harris, C J
2000-01-01
This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bézier-Bernstein polynomial functions. The approach is generalized in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. The construction algorithm also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bézier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and Delaunay input space partitioning, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. This new modeling network is based on an additive decomposition approach together with two separate basis function formation approaches for the univariate and bivariate Bézier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data based modeling approach.
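The two properties singled out above, nonnegativity and unity of support, are easy to verify for the univariate Bernstein basis B_{i,n}(x) = C(n,i) x^i (1-x)^(n-i); the weights in the final lines are invented purely to show how a one-dimensional neurofuzzy output would be formed.

import numpy as np
from math import comb

def bernstein_basis(n, x):
    # The n+1 Bernstein polynomials B_{i,n}(x) on [0, 1].
    x = np.asarray(x, dtype=float)
    return np.stack([comb(n, i) * x**i * (1.0 - x) ** (n - i) for i in range(n + 1)])

x = np.linspace(0.0, 1.0, 101)
B = bernstein_basis(4, x)
assert np.all(B >= 0.0)                        # nonnegativity: valid fuzzy membership functions
assert np.allclose(B.sum(axis=0), 1.0)         # unity of support (partition of unity)
weights = np.array([0.0, 0.2, 0.9, 0.4, 0.1])  # illustrative rule consequents
y = weights @ B                                # network output; weights would be fitted by least squares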
Li, Jing; Kim, Seongho; Shields, Anthony F; Douglas, Kirk A; McHugh, Christopher I; Lawhorn-Crews, Jawana M; Wu, Jianmei; Mangner, Thomas J; LoRusso, Patricia M
2016-11-01
FAU, a pyrimidine nucleotide analogue, is a prodrug bioactivated by intracellular thymidylate synthase to form FMAU, which is incorporated into DNA, causing cell death. This study presents a model-based approach to integrating dynamic positron emission tomography (PET) and conventional plasma pharmacokinetic studies to characterize the plasma and tissue pharmacokinetics of FAU and FMAU. Twelve cancer patients were enrolled into a phase 1 study, where conventional plasma pharmacokinetic evaluation of therapeutic FAU (50–1600 mg/m²) and dynamic PET assessment of ¹⁸F-FAU were performed. A parent-metabolite population pharmacokinetic model was developed to simultaneously fit PET-derived tissue data and conventional plasma pharmacokinetic data. The developed model enabled separation of PET-derived total tissue concentrations into the parent drug and metabolite components. The model provides quantitative, mechanistic insights into the bioactivation of FAU and retention of FMAU in normal and tumor tissues and has potential utility to predict tumor responsiveness to FAU treatment. © 2016, The American College of Clinical Pharmacology.
Esfahanian, Mehri; Shokuhi Rad, Ali; Khoshhal, Saeed; Najafpour, Ghasem; Asghari, Behnam
2016-07-01
In this paper, a genetic algorithm was used to investigate mathematical modeling of ethanol fermentation in a continuous conventional bioreactor (CCBR) and a continuous membrane bioreactor (CMBR) with an ethanol-permselective polydimethylsiloxane (PDMS) membrane. A lab-scale CMBR with a medium glucose concentration of 100 g L⁻¹ and the microorganism Saccharomyces cerevisiae was designed and fabricated. At a dilution rate of 0.14 h⁻¹, a maximum specific cell growth rate of 0.27 h⁻¹ and a productivity of 6.49 g L⁻¹ h⁻¹ were found in the CMBR. However, at very high dilution rates, the performance of the CMBR was quite similar to conventional fermentation on account of insufficient incubation time. In both systems, genetic algorithm modeling of cell growth, ethanol production and glucose concentration was conducted based on the Monod and Moser kinetic models during each retention time at unsteady condition. The results showed that the Moser kinetic model was more satisfactory and desirable than the Monod model. Copyright © 2016 Elsevier Ltd. All rights reserved.
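The Monod variant of the kinetics can be written down directly for a continuous bioreactor at dilution rate D; the parameter values below are placeholders rather than the constants fitted by the genetic algorithm in the paper, and the Moser form simply replaces S with S**n in the rate expression.

import numpy as np
from scipy.integrate import odeint

def chemostat(y, t, D, s_in, mu_max, ks, yxs, ypx):
    # Monod kinetics in a continuous bioreactor: biomass X, glucose S, ethanol P (g/L).
    X, S, P = y
    mu = mu_max * S / (ks + S)            # Moser would use S**n / (ks + S**n)
    dX = (mu - D) * X
    dS = D * (s_in - S) - mu * X / yxs
    dP = ypx * mu * X - D * P
    return [dX, dS, dP]

t = np.linspace(0.0, 100.0, 500)          # hours
traj = odeint(chemostat, [0.5, 100.0, 0.0], t, args=(0.14, 100.0, 0.27, 1.5, 0.1, 1.8))
print("steady state (X, S, P):", np.round(traj[-1], 2))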
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khan, Yasin; Khare, Vaibhav Rai; Mathur, Jyotirmay
The paper describes a parametric study developed to estimate the energy savings potential of a radiant cooling system installed in a commercial building in India. The study is based on numerical modeling of a radiant cooling system installed in an Information Technology (IT) office building sited in the composite climate of Hyderabad. To evaluate thermal performance and energy consumption, simulations were carried out using the ANSYS FLUENT and EnergyPlus software, respectively. The building model was calibrated using the measured data for the installed radiant system. This calibrated model was then used to simulate the energy consumption of a building using a conventional all-air system to determine the proportional energy savings. For proper handling of the latent load, a dedicated outside air system (DOAS) was used as an alternative to a Fan Coil Unit (FCU). A comparison of energy consumption showed that the radiant system was 17.5% more efficient than a conventional all-air system and that a 30% savings was achieved by using a DOAS system compared with a conventional system. Computational Fluid Dynamics (CFD) simulation was performed to evaluate indoor air quality and thermal comfort. It was found that a radiant system offers more uniform temperatures, as well as a better mean air temperature range, than a conventional system. To further enhance the energy savings of the radiant system, different operational strategies were analyzed based on thermal analysis using EnergyPlus. Lastly, the energy savings achieved in this parametric run were more than 10% compared with a conventional all-air system.
ERIC Educational Resources Information Center
Turnip, Betty; Wahyuni, Ida; Tanjung, Yul Ifda
2016-01-01
One of the factors that can support successful learning activity is the use of learning models according to the objectives to be achieved. This study aimed to analyze the differences in problem-solving ability Physics student learning model Inquiry Training based on Just In Time Teaching [JITT] and conventional learning taught by cooperative model…
ERIC Educational Resources Information Center
Huh, Seonmin
2016-01-01
This article explores the general patterns of interactions between the teacher and students during the different instructional steps when the teacher attempted to incorporate both conventional skill-based reading and critical literacy in an English as a foreign language (EFL) literacy class in a Korean university. There has been a paucity of EFL…
Reflection full-waveform inversion using a modified phase misfit function
NASA Astrophysics Data System (ADS)
Cui, Chao; Huang, Jian-Ping; Li, Zhen-Chun; Liao, Wen-Yuan; Guan, Zhe
2017-09-01
Reflection full-waveform inversion (RFWI) updates the low- and high-wavenumber components and yields more accurate initial models compared with conventional full-waveform inversion (FWI). However, there is strong nonlinearity in conventional RFWI because of the lack of low-frequency data and the complexity of the amplitude information. Separating the phase and amplitude information makes RFWI more linear. Traditional phase-calculation methods suffer from severe phase wrapping. To solve this problem, we propose a modified phase-calculation method that uses the phase-envelope data to obtain pseudo-phase information. We then establish a pseudo-phase-information-based objective function for RFWI, with the corresponding source and gradient terms. Numerical tests verify that the proposed calculation method using the phase-envelope data guarantees the stability and accuracy of the phase information and the convergence of the objective function. An application to a portion of the Sigsbee2A model, and a comparison with inversion results of the improved RFWI and conventional FWI methods, verify that the pseudo-phase-based RFWI produces a highly accurate and efficient velocity model. Moreover, the proposed method is robust to noise and high frequency.
Enzymatic corn wet milling: engineering process and cost model
Ramírez, Edna C; Johnston, David B; McAloon, Andrew J; Singh, Vijay
2009-01-01
Background: Enzymatic corn wet milling (E-milling) is a process derived from conventional wet milling for the recovery and purification of starch and co-products, using proteases to eliminate the need for sulfites and decrease the steeping time. In 2006, the total starch production in the USA by conventional wet milling equaled 23 billion kilograms, including modified starches and starches used for sweeteners and ethanol production [1]. Process engineering and cost models for an E-milling process have been developed for a processing plant with a capacity of 2.54 million kg of corn per day (100,000 bu/day). These models are based on the previously published models for a traditional wet milling plant with the same capacity. The E-milling process includes grain cleaning, pretreatment, enzymatic treatment, germ separation and recovery, fiber separation and recovery, gluten separation and recovery and starch separation. Information for the development of the conventional models was obtained from a variety of technical sources including commercial wet milling companies, industry experts and equipment suppliers. Additional information for the present models was obtained from our own experience with the development of the E-milling process and trials in the laboratory and at the pilot plant scale. The models were developed using process and cost simulation software (SuperPro Designer®) and include processing information such as composition and flow rates of the various process streams, descriptions of the various unit operations and detailed breakdowns of the operating and capital cost of the facility.
Results: Based on the information from the model, we can estimate the cost of production per kilogram of starch using the input prices for corn, enzyme and other wet milling co-products. The work presented here describes the E-milling process and compares the process, the operation and costs with the conventional process.
Conclusion: The E-milling process was found to be cost competitive with the conventional process during periods of high corn feedstock costs, since the enzymatic process enhances the yields of the products in a corn wet milling process. This model is available upon request from the authors for educational, research and non-commercial uses. PMID:19154623
A 3D generic inverse dynamic method using wrench notation and quaternion algebra.
Dumas, R; Aissaoui, R; de Guise, J A
2004-06-01
In the literature, conventional 3D inverse dynamic models are limited in three aspects related to inverse dynamic notation, body segment parameters and kinematic formalism. First, conventional notation yields separate computations of the forces and moments with successive coordinate system transformations. Secondly, the way conventional body segment parameters are defined is based on the assumption that the inertia tensor is principal and the centre of mass is located between the proximal and distal ends. Thirdly, the conventional kinematic formalism uses Euler or Cardanic angles that are sequence-dependent and suffer from singularities. In order to overcome these limitations, this paper presents a new generic method for inverse dynamics. This generic method is based on wrench notation for inverse dynamics, a general definition of body segment parameters and quaternion algebra for the kinematic formalism.
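The kinematic ingredients mentioned above are compact to implement. The sketch below shows only the quaternion rotation of a vector (no Euler/Cardan sequence, hence no singularities) and the stacking of a force-moment pair into a wrench, not the recursive inverse-dynamics pass itself; all names are illustrative.

import numpy as np

def quat_mul(q, r):
    # Hamilton product of quaternions stored as (w, x, y, z).
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rotate(q, v):
    # Rotate vector v by unit quaternion q: q * (0, v) * conj(q).
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, np.concatenate(([0.0], v))), q_conj)[1:]

def wrench(force, moment):
    # A wrench stacks the resultant force and the moment about the same point.
    return np.concatenate([force, moment])

q = np.array([np.cos(np.pi / 8.0), 0.0, 0.0, np.sin(np.pi / 8.0)])   # 45 deg about z
print(rotate(q, np.array([1.0, 0.0, 0.0])))                           # ~[0.707, 0.707, 0]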
On macromolecular refinement at subatomic resolution with interatomic scatterers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afonine, Pavel V., E-mail: pafonine@lbl.gov; Grosse-Kunstleve, Ralf W.; Adams, Paul D.
2007-11-01
Modelling deformation electron density using interatomic scatters is simpler than multipolar methods, produces comparable results at subatomic resolution and can easily be applied to macromolecules. A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than ∼1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8–1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.
NASA Astrophysics Data System (ADS)
Madhulatha, A.; Rajeevan, M.; Bhowmik, S. K. Roy; Das, A. K.
2018-01-01
The primary goal of the present study is to investigate the impact of assimilating conventional and satellite radiance observations on simulating a mesoscale convective system (MCS) formed over southeast India. An assimilation methodology based on the Weather Research and Forecasting model three-dimensional variational data assimilation is considered. A few numerical experiments were carried out to examine the individual and combined impact of conventional and non-conventional (satellite radiance) observations. After the successful inclusion of the additional observations, strong analysis increments of the temperature and moisture fields are noticed, contributing to significant improvement in the model's initial fields. The resulting model simulations are able to successfully reproduce the prominent synoptic features responsible for the initiation of the MCS. Among all the experiments, the final experiment, in which both conventional and satellite radiance observations are assimilated, showed considerable impact on the prediction of the MCS. The location, genesis, intensity, propagation and development of rain bands associated with the MCS are simulated reasonably well. The biases of simulated temperature, moisture and wind fields at the surface and at different pressure levels are reduced. The thermodynamic, dynamic and vertical structure of convective cells associated with the passage of the MCS is well captured. The spatial distribution of rainfall is fairly reproduced and comparable to TRMM observations. It is demonstrated that incorporation of conventional and satellite radiance observations improved the local and synoptic representation of temperature and moisture fields from the surface to different levels of the atmosphere. This study highlights the importance of assimilating conventional and satellite radiances in improving the model's initial conditions and the simulation of the MCS.
Parameterising User Uptake in Economic Evaluations: The role of discrete choice experiments.
Terris-Prestholt, Fern; Quaife, Matthew; Vickerman, Peter
2016-02-01
Model-based economic evaluations of new interventions have shown that user behaviour (uptake) is a critical driver of overall impact achieved. However, early economic evaluations, prior to introduction, often rely on assumed levels of uptake based on expert opinion or uptake of similar interventions. In addition to the likely uncertainty surrounding these uptake assumptions, they also do not allow for uptake to be a function of product, intervention, or user characteristics. This letter proposes using uptake projections from discrete choice experiments (DCE) to better parameterize uptake and substitution in cost-effectiveness models. A simple impact model is developed and illustrated using an example from the HIV prevention field in South Africa. Comparison between the conventional approach and the DCE-based approach shows that, in our example, DCE-based impact predictions varied by up to 50% from conventional estimates and provided far more nuanced projections. In the absence of observed uptake data and to model the effect of variations in intervention characteristics, DCE-based uptake predictions are likely to greatly improve models parameterizing uptake solely based on expert opinion. This is particularly important for global and national level decision making around introducing new and probably more expensive interventions, particularly where resources are most constrained. © 2016 The Authors. Health Economics published by John Wiley & Sons Ltd.
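Mechanically, turning DCE estimates into an uptake parameter usually amounts to a conditional-logit share calculation. The attribute levels, coefficients, and alternative names below are hypothetical; the point is only that predicted uptake becomes a function of product and user characteristics rather than a fixed expert guess.

import numpy as np

def choice_shares(attributes, betas):
    # Conditional logit: P_j = exp(V_j) / sum_k exp(V_k), with V = X @ beta.
    v = attributes @ betas
    expv = np.exp(v - v.max())                 # subtract the max for numerical stability
    return expv / expv.sum()

alternatives = np.array([[1.0, 0.9, 1.0],      # hypothetical new intervention: price, efficacy, novelty
                         [0.5, 0.7, 0.0],      # existing product
                         [0.0, 0.0, 0.0]])     # opt out
betas = np.array([-0.8, 2.0, 0.3])             # hypothetical DCE coefficient estimates
uptake = choice_shares(alternatives, betas)
print(dict(zip(["new", "existing", "none"], np.round(uptake, 2))))

The first share would then feed the uptake parameter (and the second the substitution parameter) of the impact model, and can be recomputed for any change in price or product characteristics.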
Chung, Yun Won; Kwon, Jae Kyun; Park, Suwon
2014-01-01
One of the key technologies to support mobility of mobile station (MS) in mobile communication systems is location management which consists of location update and paging. In this paper, an improved movement-based location management scheme with two movement thresholds is proposed, considering bursty data traffic characteristics of packet-switched (PS) services. The analytical modeling for location update and paging signaling loads of the proposed scheme is developed thoroughly and the performance of the proposed scheme is compared with that of the conventional scheme. We show that the proposed scheme outperforms the conventional scheme in terms of total signaling load with an appropriate selection of movement thresholds.
Vibration Noise Modeling for Measurement While Drilling System Based on FOGs
Zhang, Chunxi; Wang, Lu; Gao, Shuang; Lin, Tie; Li, Xianmu
2017-01-01
Aiming to improve the long-term survey accuracy of Measurement While Drilling (MWD) based on Fiber Optic Gyroscopes (FOGs), external aiding sources are fused into the inertial navigation by the Kalman filter (KF) method. The KF method needs to model the inertial sensors' noise as the system noise model. Conventionally, the system noise is modeled as white Gaussian noise. However, because of the vibration while drilling, the noise in the gyros is no longer white Gaussian noise. Moreover, an incorrect noise model will degrade the accuracy of the KF. This paper develops a new approach for noise modeling on the basis of the dynamic Allan variance (DAVAR). In contrast to conventional white noise models, the new noise model contains both white noise and colored noise. With this new noise model, the KF for the MWD system was designed. Finally, two vibration experiments were performed. Experimental results showed that the proposed vibration noise modeling approach significantly improved the estimated accuracies of the inertial sensor drifts. Comparing navigation results based on the different noise models, with the DAVAR noise model the position error and the toolface angle error are reduced by more than 90%, the velocity error is reduced by more than 65%, and the azimuth error is reduced by more than 50%. PMID:29039815
Evaluation of Generation Alternation Models in Evolutionary Robotics
NASA Astrophysics Data System (ADS)
Oiso, Masashi; Matsumura, Yoshiyuki; Yasuda, Toshiyuki; Ohkura, Kazuhiro
For efficient implementation of Evolutionary Algorithms (EA) in a desktop grid computing environment, we propose a new generation alternation model called Grid-Oriented-Deletion (GOD), developed through comparison with conventional techniques. In previous research, generation alternation models have generally been evaluated using test functions; their exploration performance on real problems such as Evolutionary Robotics (ER) has not yet been clarified. We therefore investigate the relationship between the exploration performance of an EA on an ER problem and its generation alternation model. We applied four generation alternation models to an Evolutionary Multi-Robotics (EMR) task, a package-pushing problem, to investigate their exploration performance. The results show that GOD is more effective than the other conventional models.
Bottom, William P
2009-01-01
Conventional history of the predominant, research-based model of business education (RBM) traces its origins to programs initiated by the Ford Foundation after World War II. This paper maps the elite network responsible for developing behavioral science and the Ford Foundation agenda. Archival records of the actions taken by central nodes in the network permit identification of the original vision statement for the model. Analysis also permits tracking progress toward realizing that vision over several decades. Behavioral science was married to business education from the earliest stages of development. The RBM was a fundamental promise made by advocates for social science funding. Appraisals of the model and recommendations for reform must address its full history, not the partial, distorted view that is the conventional account. Implications of this more complete history for business education and for behavioral theory are considered.
Austin, Peter C; Lee, Douglas S; Steyerberg, Ewout W; Tu, Jack V
2012-01-01
In biomedical research, the logistic regression model is the most commonly used method for predicting the probability of a binary outcome. While many clinical researchers have expressed an enthusiasm for regression trees, this method may have limited accuracy for predicting health outcomes. We aimed to evaluate the improvement that is achieved by using ensemble-based methods, including bootstrap aggregation (bagging) of regression trees, random forests, and boosted regression trees. We analyzed 30-day mortality in two large cohorts of patients hospitalized with either acute myocardial infarction (N = 16,230) or congestive heart failure (N = 15,848) in two distinct eras (1999–2001 and 2004–2005). We found that both the in-sample and out-of-sample prediction of ensemble methods offered substantial improvement in predicting cardiovascular mortality compared to conventional regression trees. However, conventional logistic regression models that incorporated restricted cubic smoothing splines had even better performance. We conclude that ensemble methods from the data mining and machine learning literature increase the predictive performance of regression trees, but may not lead to clear advantages over conventional logistic regression models for predicting short-term mortality in population-based samples of subjects with cardiovascular disease. PMID:22777999
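The sketch below illustrates the kind of comparison the study describes, using synthetic data rather than the AMI/CHF cohorts: a single regression tree, a random forest ensemble, and a logistic regression with spline-expanded predictors, compared by out-of-sample c-statistic. The scikit-learn SplineTransformer stands in for the restricted cubic splines used in the paper; outcome prevalence, features and sample size are assumed.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer
from sklearn.tree import DecisionTreeClassifier

# Synthetic cohort with roughly 10% "mortality" (all values assumed).
X, y = make_classification(n_samples=5000, n_features=10, n_informative=6,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "single regression tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "random forest (ensemble)": RandomForestClassifier(n_estimators=300, random_state=0),
    "logistic regression + splines": make_pipeline(
        SplineTransformer(degree=3, n_knots=5),
        LogisticRegression(max_iter=2000)),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name:30s} out-of-sample c-statistic = {auc:.3f}")
```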
Articulatory speech synthesis and speech production modelling
NASA Astrophysics Data System (ADS)
Huang, Jun
This dissertation addresses the problem of speech synthesis and speech production modelling based on the fundamental principles of human speech production. Unlike the conventional source-filter model, which assumes the independence of the excitation and the acoustic filter, we treat the entire vocal apparatus as one system consisting of a fluid dynamic aspect and a mechanical part. We model the vocal tract by a three-dimensional moving geometry. We also model the sound propagation inside the vocal apparatus as a three-dimensional nonplane-wave propagation inside a viscous fluid described by the Navier-Stokes equations. In our work, we first propose a combined minimum energy and minimum jerk criterion to estimate the dynamic vocal tract movements during speech production. Both theoretical error bound analysis and experimental results show that this method can achieve a very close match at the target points and at the same time avoid abrupt changes in the articulatory trajectory. Second, a mechanical vocal fold model is used to compute the excitation signal of the vocal tract. The advantage of this model is that it is closely coupled with the vocal tract system based on fundamental aerodynamics. As a result, we can obtain an excitation signal with much more detail than the conventional parametric vocal fold excitation model. Furthermore, strong evidence of source-tract interaction is observed. Finally, we propose a computational model of the fricative and stop types of sounds based on the physical principles of speech production. The advantage of this model is that it uses an exogenous process to model the additional nonsteady and nonlinear effects due to the flow mode, which are ignored by the conventional source-filter speech production model. A recursive algorithm is used to estimate the model parameters. Experimental results show that this model is able to synthesize good quality fricative and stop types of sounds. Based on our dissertation work, we carefully argue that the articulatory speech production model has the potential to flexibly synthesize natural-quality speech sounds and to provide a compact computational model for speech production that can be beneficial to a wide range of areas in speech signal processing.
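To illustrate the jerk-minimizing part of the criterion, the sketch below evaluates the classical closed-form minimum-jerk trajectory between two point targets with zero boundary velocity and acceleration. The combined minimum-energy term of the dissertation is not reproduced, and the target values and duration are hypothetical.

```python
import numpy as np

def min_jerk(x0, xf, duration, n=101):
    """Closed-form minimum-jerk trajectory x(t) on [0, duration] (fifth-order polynomial)."""
    t = np.linspace(0.0, duration, n)
    s = t / duration
    x = x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)
    return t, x

if __name__ == "__main__":
    # Hypothetical articulatory coordinate moving between two targets in 0.2 s.
    t, x = min_jerk(x0=0.0, xf=1.0, duration=0.2)
    print("start/end values:", x[0], x[-1])             # hits both targets exactly
    print("peak velocity  :", np.max(np.gradient(x, t)))
```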
An experiment-based comparative study of fuzzy logic control
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Chen, Yung-Yaw; Lee, Chuen-Chein; Murugesan, S.; Jang, Jyh-Shing
1989-01-01
An approach is presented to the control of a dynamic physical system through the use of approximate reasoning. The approach has been implemented in a program named POLE, and the authors have successfully built a prototype hardware system to solve the cartpole balancing problem in real-time. The approach provides a complementary alternative to the conventional analytical control methodology and is of substantial use when a precise mathematical model of the process being controlled is not available. A set of criteria for comparing controllers based on approximate reasoning and those based on conventional control schemes is furnished.
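For illustration, the sketch below shows a tiny fuzzy force computation for pole balancing, not the rule base of the POLE program: two inputs with three triangular sets each, a 3x3 rule table with crisp consequents, and weighted-average defuzzification. All membership parameters, scalings and rule consequents are assumptions.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with vertices a <= b <= c."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

SETS = {"N": (-1.0, -0.5, 0.0), "Z": (-0.5, 0.0, 0.5), "P": (0.0, 0.5, 1.0)}  # normalized universes
RULES = {("N", "N"): -10.0, ("N", "Z"): -5.0, ("N", "P"): 0.0,   # consequent force [N], assumed
         ("Z", "N"): -5.0,  ("Z", "Z"): 0.0,  ("Z", "P"): 5.0,
         ("P", "N"): 0.0,   ("P", "Z"): 5.0,  ("P", "P"): 10.0}

def fuzzy_force(angle, angle_rate):
    """Fire every rule with AND = min and defuzzify by weighted average of consequents."""
    num = den = 0.0
    for (a_set, r_set), force in RULES.items():
        w = min(tri(angle, *SETS[a_set]), tri(angle_rate, *SETS[r_set]))
        num += w * force
        den += w
    return num / den if den > 0 else 0.0

if __name__ == "__main__":
    # Inputs assumed to be pre-scaled to [-1, 1].
    print("control force:", fuzzy_force(angle=0.3, angle_rate=-0.1))
```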
Khan, Yasin; Khare, Vaibhav Rai; Mathur, Jyotirmay; ...
2015-03-26
The paper describes a parametric study developed to estimate the energy savings potential of a radiant cooling system installed in a commercial building in India. The study is based on numerical modeling of a radiant cooling system installed in an Information Technology (IT) office building sited in the composite climate of Hyderabad. To evaluate thermal performance and energy consumption, simulations were carried out using the ANSYS FLUENT and EnergyPlus software packages, respectively. The building model was calibrated using the measured data for the installed radiant system. This calibrated model was then used to simulate the energy consumption of a building using a conventional all-air system to determine the proportional energy savings. For proper handling of the latent load, a dedicated outside air system (DOAS) was used as an alternative to a Fan Coil Unit (FCU). A comparison of energy consumption showed that the radiant system was 17.5% more efficient than a conventional all-air system and that a 30% savings was achieved by using a DOAS system compared with a conventional system. Computational Fluid Dynamics (CFD) simulation was performed to evaluate indoor air quality and thermal comfort. It was found that a radiant system offers more uniform temperatures, as well as a better mean air temperature range, than a conventional system. To further enhance the energy savings in the radiant system, different operational strategies were analyzed based on thermal analysis using EnergyPlus. Lastly, the energy savings achieved in this parametric run were more than 10% compared with a conventional all-air system.
Espinosa, Gabriela; Annapragada, Ananth
2013-10-01
We evaluated three diagnostic strategies with the objective of comparing the current standard of care for individuals presenting acute chest pain and no history of coronary artery disease (CAD) with a novel diagnostic strategy using an emerging technology (blood-pool contrast agent [BPCA]) to identify the potential benefits and cost reductions. A decision analytic model of diagnostic strategies and outcomes using a BPCA and a conventional agent for CT angiography (CTA) in patients with acute chest pain was built. The model was used to evaluate three diagnostic strategies: CTA using a BPCA followed by invasive coronary angiography (ICA), CTA using a conventional agent followed by ICA, and ICA alone. The use of the two CTA-based triage tests before ICA in a population with a CAD prevalence of less than 47% was predicted to be more cost-effective than ICA alone. Using the base-case values and a cost premium for BPCA over the conventional CT agent (cost of BPCA ≈ 5× that of a conventional agent) showed that CTA with a BPCA before ICA resulted in the most cost-effective strategy; the other strategies were ruled out by simple dominance. The model strongly depends on the rates of complications from the diagnostic tests included in the model. In a population with an elevated risk of contrast-induced nephropathy (CIN), a significant premium cost per BPCA dose still resulted in the alternative whereby CTA using BPCA was more cost-effective than CTA using a conventional agent. A similar effect was observed for potential complications resulting from the BPCA injection. Conversely, in the presence of a similar complication rate from BPCA, the diagnostic strategy of CTA using a conventional agent would be the optimal alternative. BPCAs could have a significant impact in the diagnosis of acute chest pain, in particular for populations with high incidences of CIN. In addition, a BPCA strategy could garner further savings if currently excluded phenomena including renal disease and incidental findings were included in the decision model.
2007-03-01
Column experiments were used to obtain model parameters. Cost data used in the model were based on conventional GAC installations, as modified to…
ERIC Educational Resources Information Center
King, D.; And Others
1994-01-01
Discusses the computational problems of automating paper-based spatial information. A new relational structure for soil science information, based on the main concepts used during conventional cartographic work, is proposed. This model is a computerized framework for coherent description of the geographical variability of soils, combined…
Model-based Acceleration Control of Turbofan Engines with a Hammerstein-Wiener Representation
NASA Astrophysics Data System (ADS)
Wang, Jiqiang; Ye, Zhifeng; Hu, Zhongzhi; Wu, Xin; Dimirovsky, Georgi; Yue, Hong
2017-05-01
Acceleration control of turbofan engines is conventionally designed through either a schedule-based or an acceleration-based approach. With the widespread acceptance of model-based design in the aviation industry, it becomes necessary to investigate the issues associated with model-based design for acceleration control. In this paper, the challenges of implementing model-based acceleration control are explained; a novel Hammerstein-Wiener representation of engine models is introduced; and, based on the Hammerstein-Wiener model, a nonlinear generalized minimum variance type of optimal control law is derived. A feature of the proposed approach is that it does not require the inversion operation that usually complicates nonlinear control techniques. The effectiveness of the proposed control design method is validated through a detailed numerical study.
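The sketch below shows only the generic Hammerstein-Wiener structure: a static input nonlinearity, a discrete-time linear dynamic block, and a static output nonlinearity. The polynomial nonlinearities and first-order linear dynamics are assumptions, not the turbofan engine model identified in the paper.

```python
import numpy as np

f = lambda u: np.tanh(2.0 * u)     # static input (Hammerstein) nonlinearity, assumed
g = lambda x: x + 0.1 * x**3       # static output (Wiener) nonlinearity, assumed
a, b = 0.9, 0.1                    # linear block x[k+1] = a*x[k] + b*v[k], assumed

def simulate(u_seq):
    """Propagate an input sequence through the Hammerstein-Wiener chain."""
    x, y = 0.0, []
    for u in u_seq:
        v = f(u)                   # input nonlinearity
        x = a * x + b * v          # linear dynamics
        y.append(g(x))             # output nonlinearity
    return np.array(y)

if __name__ == "__main__":
    u = np.concatenate([np.zeros(20), np.ones(80)])   # step in the command, assumed
    y = simulate(u)
    print("steady-state response:", y[-1])
```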
ERIC Educational Resources Information Center
Burczyk, Krystyna
2011-01-01
In this article, the author discusses the creativity of origami and describes a model she designed in May 2008 in Freiburg at the 20th International Origami Convention of Origami Deutschland. The model resulted from her investigation of a geometric model that exposes the centre part of a square paper sheet. The base model of the series called…
Spectroscopic evidence for Davydov-like solitons in acetanilide
NASA Astrophysics Data System (ADS)
Careri, G.; Buontempo, U.; Galluzzi, F.; Scott, A. C.; Gratton, E.; Shyamsunder, E.
1984-10-01
Detailed measurements of infrared absorption and Raman scattering on crystalline acetanilide [(CH3CONHC6H5)x] at low temperature show a new band close to the conventional amide I band. Equilibrium properties and spectroscopic data rule out explanations based on a conventional assignment, crystal defects, Fermi resonance, or frozen kinetics between two different subsystems. Thus we cannot account for this band using the concepts of conventional molecular spectroscopy, but a soliton model, similar to that proposed by Davydov for the α-helix in protein, is in satisfactory agreement with the experimental data.
NASA Astrophysics Data System (ADS)
Ward-Garrison, C.; May, R.; Davis, E.; Arms, S. C.
2016-12-01
NetCDF is a set of software libraries and self-describing, machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. The Climate and Forecast (CF) metadata conventions for netCDF foster the ability to work with netCDF files in general and useful ways. These conventions include metadata attributes for physical units, standard names, and spatial coordinate systems. While these conventions have been successful in easing work with netCDF-formatted output from climate and forecast models, their use for point-based observation data has been less successful. Unidata has prototyped using the discrete sampling geometry (DSG) CF conventions to serve, via the THREDDS Data Server, the real-time point observation data flowing across the Internet Data Distribution (IDD). These data originate as text-format reports for individual stations (e.g., METAR surface data or TEMP upper-air data) and are converted and stored in netCDF files in real time. This work discusses the experiences and challenges of using the current CF DSG conventions for storing such real-time data. We also test how parts of netCDF's extended data model can address these challenges, in order to inform decisions for a future version of CF (CF 2.0) that would take advantage of features of the netCDF enhanced data model.
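The sketch below writes a tiny CF DSG "timeSeries" file with netCDF4-python, using the orthogonal multidimensional representation for a couple of surface stations. The station identifiers, variable set and values are placeholders, not the actual IDD/METAR feed or Unidata's converter.

```python
import numpy as np
from datetime import datetime, timedelta
from netCDF4 import Dataset, date2num

with Dataset("surface_obs.nc", "w", format="NETCDF4") as ds:
    ds.Conventions = "CF-1.8"
    ds.featureType = "timeSeries"                        # CF DSG feature type

    ds.createDimension("station", 2)
    ds.createDimension("time", 3)

    name = ds.createVariable("station_name", str, ("station",))
    name.cf_role = "timeseries_id"
    lat = ds.createVariable("lat", "f4", ("station",)); lat.units = "degrees_north"
    lon = ds.createVariable("lon", "f4", ("station",)); lon.units = "degrees_east"
    time = ds.createVariable("time", "f8", ("time",))
    time.units = "hours since 2016-01-01 00:00:00"
    temp = ds.createVariable("air_temperature", "f4", ("station", "time"))
    temp.units = "K"
    temp.standard_name = "air_temperature"
    temp.coordinates = "lat lon"

    name[0], name[1] = "KDEN", "KBOS"                    # example station identifiers (assumed)
    lat[:], lon[:] = [39.86, 42.36], [-104.67, -71.01]
    base = datetime(2016, 1, 1)
    time[:] = date2num([base + timedelta(hours=h) for h in range(3)], time.units)
    temp[:, :] = 273.15 + np.array([[1.0, 1.5, 2.0], [-3.0, -2.5, -2.0]])
```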
Model-based sensor-less wavefront aberration correction in optical coherence tomography.
Verstraete, Hans R G W; Wahls, Sander; Kalkman, Jeroen; Verhaegen, Michel
2015-12-15
Several sensor-less wavefront aberration correction methods that correct nonlinear wavefront aberrations by maximizing the optical coherence tomography (OCT) signal are tested on an OCT setup. A conventional coordinate search method is compared to two model-based optimization methods. The first model-based method takes advantage of the well-known NEWUOA optimization algorithm and utilizes a quadratic model. The second model-based method (DONE) is new and utilizes a random multidimensional Fourier-basis expansion. The model-based algorithms achieve lower wavefront errors with up to ten times fewer measurements. Furthermore, the newly proposed DONE method outperforms the NEWUOA method significantly. The DONE algorithm is also tested on OCT images and shows significantly improved image quality.
A novel energy recovery system for parallel hybrid hydraulic excavator.
Li, Wei; Cao, Baoyu; Zhu, Zhencai; Chen, Guoan
2014-01-01
Energy saving in hydraulic excavators is important for relieving resource shortages and protecting the environment. This paper mainly discusses energy saving for the hybrid hydraulic excavator. By analyzing the excess energy of the three hydraulic cylinders in a conventional hydraulic excavator, a new boom potential energy recovery system is proposed. Mathematical models of the main components, including the boom cylinder, hydraulic motor, and hydraulic accumulator, are built. The natural frequency of the proposed energy recovery system is calculated based on the mathematical models. Meanwhile, simulation models of the proposed system and a conventional energy recovery system are built with the AMESim software. The results show that the proposed system is more effective than the conventional energy saving system. Finally, the main components of the proposed energy recovery system, including the accumulator and hydraulic motor, are analyzed with a view to improving the energy recovery efficiency, and measures to improve the energy recovery efficiency of the proposed system are presented.
Improved Virtual Planning for Bimaxillary Orthognathic Surgery.
Hatamleh, Muhanad; Turner, Catherine; Bhamrah, Gurprit; Mack, Gavin; Osher, Jonas
2016-09-01
Conventional model surgery planning for bimaxillary orthognathic surgery can be laborious and time-consuming and may contain potential errors; hence three-dimensional (3D) virtual orthognathic planning has proven to be an efficient, reliable, and cost-effective alternative. In this report, 3D planning is described for a patient presenting with a Class III incisor relationship on a Skeletal III base with pan-facial asymmetry complicated by reverse overjet and anterior open bite. Combined scan data from direct cone beam computed tomography and an indirect dental scan were used in the planning. Additionally, a new method of establishing optimum intercuspation, by scanning the dental casts in final occlusion and positioning them in the composite-scan model, is shown. Furthermore, conventional model surgery planning was carried out following the in-house protocol. Intermediate and final intermaxillary splints were produced following the conventional method and by 3D printing. Three-dimensional planning showed high accuracy and a good treatment outcome and reduced laboratory time in comparison with the conventional method. Establishing the final dental occlusion on casts and integrating it into the final 3D planning enabled us to achieve the best possible intercuspation.
Whole body acid-base modeling revisited.
Ring, Troels; Nielsen, Søren
2017-04-01
The textbook account of whole body acid-base balance in terms of endogenous acid production, renal net acid excretion, and gastrointestinal alkali absorption, which is the only comprehensive model around, has never been applied in clinical practice or been formally validated. To improve understanding of acid-base modeling, we managed to write up this conventional model as an expression solely in terms of urine chemistry. Renal net acid excretion and endogenous acid production were already formulated in terms of urine chemistry, and from the literature we could also express gastrointestinal alkali absorption in terms of urine excretions. With a few assumptions it was possible to see that this expression of net acid balance was arithmetically identical to minus the urine charge, whereby, as acidosis develops, urine is predicted to acquire a net negative charge. The literature already mentions unexplained negative urine charges, so we scrutinized a series of seminal papers and confirmed empirically the theoretical prediction that observed urine charge did become negative as acidosis developed. Hence, we conclude that the conventional model is problematic, since it predicts what is physiologically impossible. Therefore, we need a new model for whole body acid-base balance that does not have impossible implications. Furthermore, new experimental studies are needed to account for the charge imbalance in urine during the development of acidosis. Copyright © 2017 the American Physiological Society.
Eze, Valentine C; Phan, Anh N; Harvey, Adam P
2014-03-01
A more robust kinetic model of base-catalysed transesterification than the conventional reaction scheme has been developed. All the relevant reactions in the base-catalysed transesterification of rapeseed oil (RSO) to fatty acid methyl ester (FAME) were investigated experimentally, and validated numerically in a model implemented using MATLAB. It was found that including the saponification of RSO and FAME side reactions and hydroxide-methoxide equilibrium data explained various effects that are not captured by simpler conventional models. Both the experiment and modelling showed that the "biodiesel reaction" can reach the desired level of conversion (>95%) in less than 2min. Given the right set of conditions, the transesterification can reach over 95% conversion, before the saponification losses become significant. This means that the reaction must be performed in a reactor exhibiting good mixing and good control of residence time, and the reaction mixture must be quenched rapidly as it leaves the reactor. Copyright © 2014 Elsevier Ltd. All rights reserved.
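The sketch below integrates a deliberately lumped version of this kinetics: the three consecutive transesterification steps are collapsed into one irreversible step TG + 3 MeOH -> 3 FAME + glycerol, and FAME saponification (FAME + OH- -> soap + MeOH) is the only side reaction. Rate constants and initial concentrations are hypothetical, not the fitted values of the validated model.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_trans = 0.01    # L mol^-1 s^-1, assumed lumped transesterification rate constant
k_sap = 0.002     # L mol^-1 s^-1, assumed FAME saponification rate constant

def rhs(t, c):
    tg, meoh, fame, oh, soap = c
    r1 = k_trans * tg * meoh          # lumped transesterification rate
    r2 = k_sap * fame * oh            # FAME saponification rate (consumes catalyst)
    return [-r1, -3 * r1 + r2, 3 * r1 - r2, -r2, r2]

c0 = [0.9, 5.4, 0.0, 0.05, 0.0]       # mol/L: TG, MeOH (6:1), FAME, OH-, soap (assumed)
sol = solve_ivp(rhs, (0.0, 600.0), c0, dense_output=True)

for t in (60, 120, 600):
    tg, soap = sol.sol(t)[0], sol.sol(t)[4]
    print(f"t = {t:4d} s   TG conversion = {100 * (1 - tg / c0[0]):5.1f} %   soap = {soap:.4f} mol/L")
```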
Jeng, J T; Lee, T T
2000-01-01
A Chebyshev polynomial-based unified model (CPBUM) neural network is introduced and applied to the control of a magnetic bearing system. First, we show that the CPBUM neural network not only has the same universal approximation capability as, but also learns faster than, a conventional feedforward/recurrent neural network. It turns out that the CPBUM neural network is more suitable for controller design than the conventional feedforward/recurrent neural network. Second, we propose an inverse system method, based on CPBUM neural networks, to control a magnetic bearing system. The proposed controller has two structures, namely, off-line and on-line learning structures. We derive a new learning algorithm for each proposed structure. The experimental results show that the proposed neural network architecture provides greater flexibility and better performance in controlling magnetic bearing systems.
PSO-based PID Speed Control of Traveling Wave Ultrasonic Motor under Temperature Disturbance
NASA Astrophysics Data System (ADS)
Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Azmi, Nur Iffah Mohamed; Romlay, Fadhlur Rahman Mohd
2018-03-01
Traveling wave ultrasonic motors (TWUSMs) have time-varying dynamic characteristics. Temperature rise in TWUSMs remains a problem, particularly in sustaining optimum speed performance. In this study, a PID controller is used to control the speed of a TWUSM under temperature disturbance. Prior to developing the controller, a linear approximation model relating the speed to the temperature is developed based on the experimental data. Two tuning methods are used to determine the PID parameters: conventional Ziegler-Nichols (ZN) and particle swarm optimization (PSO). A comparison of speed control performance between PSO-PID and ZN-PID is presented. Modelling, simulation and experimental work are carried out using the Fukoku-Shinsei USR60 as the chosen TWUSM. The results of the analyses and experimental work reveal that PID tuning using PSO-based optimization has an advantage over the conventional Ziegler-Nichols method.
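For illustration, the sketch below tunes PID gains with a basic particle swarm search that minimizes the integral of squared error of a step response on a first-order plant standing in for the motor's speed dynamics. The plant parameters, gain bounds and PSO settings are all assumptions, not the USR60 model or the settings of the paper.

```python
import numpy as np

def simulate_ise(gains, dt=0.001, t_end=0.5, setpoint=100.0):
    """Integral of squared error of a PID step response on an assumed first-order plant."""
    kp, ki, kd = gains
    K, tau = 1.2, 0.05                       # assumed motor gain and time constant
    y, integ, prev_err, ise = 0.0, 0.0, setpoint, 0.0
    for _ in range(int(t_end / dt)):
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + K * u) / tau         # Euler step of tau*y' + y = K*u
        if not np.isfinite(y) or abs(y) > 1e6:
            return 1e9                       # penalize unstable gain sets
        ise += err * err * dt
        prev_err = err
    return ise

def pso(cost, bounds, n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic global-best particle swarm optimization within box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

if __name__ == "__main__":
    bounds = np.array([[0.1, 50.0], [0.0, 200.0], [0.0, 0.5]])   # kp, ki, kd search ranges (assumed)
    gains, ise = pso(simulate_ise, bounds)
    print("PSO-tuned gains (kp, ki, kd):", np.round(gains, 3), " ISE:", round(ise, 2))
```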
Omar, Hani; Hoang, Van Hai; Liu, Duen-Ren
2016-01-01
Enhancing sales and operations planning through forecasting analysis and business intelligence is demanded in many industries and enterprises. Publishing industries usually pick attractive titles and headlines for their stories to increase sales, since popular article titles and headlines can attract readers to buy magazines. In this paper, information retrieval techniques are adopted to extract words from article titles. The popularity measures of article titles are then analyzed by using the search indexes obtained from Google search engine. Backpropagation Neural Networks (BPNNs) have successfully been used to develop prediction models for sales forecasting. In this study, we propose a novel hybrid neural network model for sales forecasting based on the prediction result of time series forecasting and the popularity of article titles. The proposed model uses the historical sales data, popularity of article titles, and the prediction result of a time series, Autoregressive Integrated Moving Average (ARIMA) forecasting method to learn a BPNN-based forecasting model. Our proposed forecasting model is experimentally evaluated by comparing with conventional sales prediction techniques. The experimental result shows that our proposed forecasting method outperforms conventional techniques which do not consider the popularity of title words.
Performance of Ultra Wideband On-Body Communication Based on Statistical Channel Model
NASA Astrophysics Data System (ADS)
Wang, Qiong; Wang, Jianqing
Ultra wideband (UWB) on-body communication is attracting much attention in biomedical applications. In this paper, the performance of UWB on-body communication is investigated based on a statistically extracted on-body channel model, which provides detailed characteristics of the multi-path-affected channel with an emphasis on various body postures and body movement. The achievable data rate, the achievable communication distance, and the bit error rate (BER) performance are clarified via computer simulation. It is found that the conventional correlation receiver performs poorly in the multi-path-affected on-body channel, while the RAKE receiver outperforms the conventional correlation receiver at the cost of structural complexity. Different RAKE receiver structures are compared to show the improvement in BER performance.
Modelling of thick composites using a layerwise laminate theory
NASA Technical Reports Server (NTRS)
Robbins, D. H., Jr.; Reddy, J. N.
1993-01-01
The layerwise laminate theory of Reddy (1987) is used to develop a layerwise, two-dimensional, displacement-based, finite element model of laminated composite plates that assumes a piecewise continuous distribution of the transverse strains through the laminate thickness. The resulting layerwise finite element model is capable of computing interlaminar stresses and other localized effects with the same level of accuracy as a conventional 3D finite element model. Although the total numbers of degrees of freedom are comparable in both models, the layerwise model maintains a 2D-type data structure that provides several advantages over a conventional 3D finite element model, e.g., simplified input data, ease of mesh alteration, and faster element stiffness matrix formulation. Two sample problems are provided to illustrate the accuracy of the present model in computing interlaminar stresses for laminates in bending and extension.
An alternative low-loss stack topology for vanadium redox flow battery: Comparative assessment
NASA Astrophysics Data System (ADS)
Moro, Federico; Trovò, Andrea; Bortolin, Stefano; Del Col, Davide; Guarnieri, Massimo
2017-02-01
Two vanadium redox flow battery topologies have been compared. In the conventional series stack, bipolar plates connect cells electrically in series and hydraulically in parallel. The alternative topology consists of cells connected in parallel inside stacks by means of monopolar plates in order to reduce shunt currents along channels and manifolds. Channelled and flat current collectors interposed between cells were considered in both topologies. In order to compute the stack losses, an equivalent circuit model of a VRFB cell was built from a 2D FEM multiphysics numerical model based on Comsol®, accounting for coupled electrical, electrochemical, and charge and mass transport phenomena. Shunt currents were computed inside the cells with 3D FEM models and in the piping and manifolds by means of equivalent circuits solved with Matlab®. Hydraulic losses were computed with analytical models in the piping and manifolds and with 3D numerical analyses based on ANSYS Fluent® in the cell porous electrodes. Total losses in the alternative topology were one order of magnitude lower than in an equivalent conventional battery. The alternative topology with channelled current collectors exhibits the lowest shunt currents and hydraulic losses, with round-trip efficiency higher by about 10% compared to the conventional topology.
Apostol, Izydor; Kelner, Drew; Jiang, Xinzhao Grace; Huang, Gang; Wypych, Jette; Zhang, Xin; Gastwirt, Jessica; Chen, Kenneth; Fodor, Szilan; Hapuarachchi, Suminda; Meriage, Dave; Ye, Frank; Poppe, Leszek; Szpankowski, Wojciech
2012-12-01
Our objective was to predict the precision and other performance characteristics of chromatographic purity methods, which represent the most widely used form of analysis in the biopharmaceutical industry. We conducted a comprehensive survey of purity methods and show that all performance characteristics fall within narrow measurement ranges. This observation was used to develop a model called Uncertainty Based on Current Information (UBCI), which expresses these performance characteristics as a function of the signal and noise levels, hardware specifications, and software settings. We applied the UBCI model to assess the uncertainty of purity measurements and compared the results to those from conventional qualification. We demonstrated that the UBCI model is suitable for dynamically assessing method performance characteristics based on information extracted from individual chromatograms. The model provides an opportunity for streamlining qualification and validation studies by implementing a "live validation" of test results, utilizing UBCI as a concurrent assessment of measurement uncertainty. Therefore, UBCI can potentially mitigate the challenges associated with laborious conventional method validation and facilitate the introduction of more advanced analytical technologies during the method lifecycle.
NASA Astrophysics Data System (ADS)
Afkhamipour, Morteza; Mofarahi, Masoud; Borhani, Tohid Nejad Ghaffar; Zanganeh, Masoud
2018-03-01
In this study, artificial neural network (ANN) and thermodynamic models were developed for prediction of the heat capacity (C_P) of amine-based solvents. For the ANN model, independent variables such as the concentration, temperature, molecular weight and CO2 loading of the amine were selected as the inputs of the model. The significance of the input variables of the ANN model on the C_P values was investigated statistically by analyzing the correlation matrix. A thermodynamic model based on the Redlich-Kister equation was used to correlate the excess molar heat capacity (C_P^E) data as a function of temperature. In addition, the effects of temperature and CO2 loading at different concentrations of conventional amines on the C_P values were investigated. Both models were validated against experimental data, and very good agreement was obtained between the two models and the experimental C_P data collected from various literature sources. The AARD between the ANN model results and the experimental C_P data for the 47 amine-based solvent systems studied was 4.3%. For conventional amines, the AARD for the ANN model and the thermodynamic model in comparison with experimental data were 0.59% and 0.57%, respectively. The results showed that both the ANN and Redlich-Kister models can be used as practical tools for the simulation and design of CO2 removal processes using amine solutions.
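The sketch below evaluates the Redlich-Kister form of the thermodynamic model for a binary mixture, C_P^E = x1*x2*sum_k A_k(T)*(x1 - x2)^k, with the mixture heat capacity recovered as the mole-fraction average of the pure components plus C_P^E. The coefficients, their temperature dependence and the pure-component values are hypothetical, not the fitted values of the study.

```python
import numpy as np

A = [(12.0, -0.03), (-4.0, 0.01), (1.5, 0.0)]     # A_k(T) = a_k + b_k*T, assumed coefficients

def cp_excess(x1, T):
    """Excess molar heat capacity [J mol^-1 K^-1] of a binary mixture (Redlich-Kister)."""
    x2 = 1.0 - x1
    return x1 * x2 * sum((a + b * T) * (x1 - x2) ** k for k, (a, b) in enumerate(A))

def cp_mixture(x1, T, cp1=165.0, cp2=75.3):       # pure amine / water values, assumed
    return x1 * cp1 + (1.0 - x1) * cp2 + cp_excess(x1, T)

if __name__ == "__main__":
    for x1 in np.linspace(0.1, 0.9, 5):
        print(f"x1={x1:.2f}  C_P={cp_mixture(x1, T=313.15):7.2f} J/(mol K)")
```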
Comparison of Conventional and ANN Models for River Flow Forecasting
NASA Astrophysics Data System (ADS)
Jain, A.; Ganti, R.
2011-12-01
Hydrological models are useful in many water resources applications such as flood control, irrigation and drainage, hydro power generation, water supply, erosion and sediment control, etc. Estimates of runoff are needed in many water resources planning, design, development, operation and maintenance activities. River flow is generally estimated using time series or rainfall-runoff models. Recently, soft artificial intelligence tools such as Artificial Neural Networks (ANNs) have become popular for research purposes but have not been extensively adopted in operational hydrological forecasting. There is a strong need to develop ANN models based on real catchment data and compare them with the conventional models. In this paper, a comparative study has been carried out for river flow forecasting using conventional and ANN models. Among the conventional models, multiple linear and nonlinear regression models and time series models of the autoregressive (AR) type have been developed. A feed-forward neural network structure trained using the backpropagation algorithm, a gradient search method, was adopted. Daily river flow data from the Godavari Basin at Polavaram, Andhra Pradesh, India were employed to develop all the models included here. Two inputs, the flows at the two past time steps Q(t-1) and Q(t-2), were selected using partial autocorrelation analysis for forecasting the flow at time t, Q(t). A wide range of error statistics has been used to evaluate the performance of all the models developed in this study. It has been found that the regression and AR models performed comparably, and the ANN model performed the best amongst all the models investigated in this study. It is concluded that ANN models should be adopted in real catchments for hydrological modeling and forecasting.
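For illustration, the sketch below fits an AR-type linear regression and a small feed-forward network on the same two inputs, Q(t-1) and Q(t-2), and compares out-of-sample RMSE. The flow series is synthetic, not the Godavari data, and the network size and split are assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 2000
q = np.zeros(n)
q[:2] = 100.0
for t in range(2, n):                            # synthetic nonlinear AR process (placeholder data)
    q[t] = 50 + 0.7 * q[t-1] - 0.2 * q[t-2] + 5 * np.sin(q[t-1] / 40) + rng.normal(0, 5)

X = np.column_stack([q[1:-1], q[:-2]])           # [Q(t-1), Q(t-2)]
y = q[2:]                                        # Q(t)
split = int(0.8 * len(y))

ar = LinearRegression().fit(X[:split], y[:split])
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
ann.fit(X[:split], y[:split])

for name, model in [("AR-type regression", ar), ("feed-forward ANN", ann)]:
    rmse = mean_squared_error(y[split:], model.predict(X[split:])) ** 0.5
    print(f"{name:20s} out-of-sample RMSE = {rmse:.2f}")
```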
Kim, Dae-Seung; Woo, Sang-Yoon; Yang, Hoon Joo; Huh, Kyung-Hoe; Lee, Sam-Sun; Heo, Min-Suk; Choi, Soon-Chul; Hwang, Soon Jung; Yi, Won-Jin
2014-12-01
Accurate surgical planning and transfer of the planning in orthognathic surgery are very important in achieving a successful surgical outcome with appropriate improvement. Conventionally, the paper surgery is performed based on a 2D cephalometric radiograph, and the results are expressed using cast models and an articulator. We developed an integrated orthognathic surgery system with 3D virtual planning and image-guided transfer. The maxillary surgery of orthognathic patients was planned virtually, and the planning results were transferred to the cast model by image guidance. During virtual planning, the displacement of the reference points was confirmed by the displacement from conventional paper surgery at each procedure. The results of virtual surgery were transferred to the physical cast models directly through image guidance. The root mean square (RMS) difference between virtual surgery and conventional model surgery was 0.75 ± 0.51 mm for 12 patients. The RMS difference between virtual surgery and image-guidance results was 0.78 ± 0.52 mm, which showed no significant difference from the difference of conventional model surgery. The image-guided orthognathic surgery system integrated with virtual planning will replace physical model surgical planning and enable transfer of the virtual planning directly without the need for an intermediate splint. Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Cooke, Valerie; Arling, Greg; Lewis, Teresa; Abrahamson, Kathleen A.; Mueller, Christine; Edstrom, Lisa
2010-01-01
Purpose: Minnesota's Nursing Facility Performance-Based Incentive Payment Program (PIPP) supports provider-initiated projects aimed at improving care quality and efficiency. PIPP moves beyond conventional pay for performance. It seeks to promote implementation of evidence-based practices, encourage innovation and risk taking, foster collaboration…
NASA Astrophysics Data System (ADS)
Tang, Xiangyang; Yang, Yi; Tang, Shaojie
2013-03-01
Under the framework of a model observer with signal and background exactly known (SKE/BKE), we investigate the detectability of differential phase contrast CT compared with that of conventional attenuation-based CT. Using the channelized Hotelling observer and the radially symmetric difference-of-Gaussians channel template, we investigate the detectability index and its variation over the dimensions of the object and detector cells. The preliminary data show that differential phase contrast CT outperforms conventional attenuation-based CT significantly in the detectability index when both the object to be detected and the detector cell used for data acquisition are relatively small. However, the dominance of differential phase contrast CT in the detectability index diminishes with increasing dimension of either the object or the detector cell, and virtually disappears when the dimension of the object or detector cell approaches a respective threshold. It is hoped that the preliminary data reported in this paper may provide an insightful understanding of the characteristics of differential phase contrast CT in the detectability index and its comparison with conventional attenuation-based CT.
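A generic channelized Hotelling observer computation under SKE/BKE is sketched below: a small Gaussian signal on a flat background with white Gaussian noise and a few radially symmetric difference-of-Gaussians channels, with detectability d'^2 = dv^T S^-1 dv. Object size, noise level and channel widths are assumptions, not the CT-specific modeling of the paper.

```python
import numpy as np

N = 64
yy, xx = np.mgrid[:N, :N]
r = np.hypot(xx - N / 2, yy - N / 2)

# Radially symmetric DOG channels with octave-spaced widths (assumed).
widths = [2.0, 4.0, 8.0, 16.0]
channels = []
for s_in, s_out in zip(widths[:-1], widths[1:]):
    c = np.exp(-r**2 / (2 * s_out**2)) - np.exp(-r**2 / (2 * s_in**2))
    channels.append(c.ravel() / np.linalg.norm(c))
U = np.column_stack(channels)                         # pixels x channels

signal = 5.0 * np.exp(-r**2 / (2 * 1.5**2)).ravel()   # small Gaussian object, assumed
sigma_noise = 10.0                                    # white-noise std per pixel, assumed

dv = U.T @ signal                                     # mean channel-output difference
S = sigma_noise**2 * (U.T @ U)                        # channel noise covariance for white noise
d2 = dv @ np.linalg.solve(S, dv)
print(f"CHO detectability index d' = {np.sqrt(d2):.3f}")
```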
Introducing the VRT gas turbine combustor
NASA Technical Reports Server (NTRS)
Melconian, Jerry O.; Mostafa, Abdu A.; Nguyen, Hung Lee
1990-01-01
An innovative annular combustor configuration is being developed for aircraft and other gas turbine engines. This design has the potential of permitting higher turbine inlet temperatures by reducing the pattern factor and providing a major reduction in NO(x) emission. The design concept is based on a Variable Residence Time (VRT) technique which allows large fuel particles adequate time to completely burn in the circumferentially mixed primary zone. High durability of the combustor is achieved by dual function use of the incoming air. The feasibility of the concept was demonstrated by water analogue tests and 3-D computer modeling. The computer model predicted a 50 percent reduction in pattern factor when compared to a state of the art conventional combustor. The VRT combustor uses only half the number of fuel nozzles of the conventional configuration. The results of the chemical kinetics model require further investigation, as the NO(x) predictions did not correlate with the available experimental and analytical data base.
Statistical analysis of modeling error in structural dynamic systems
NASA Technical Reports Server (NTRS)
Hasselman, T. K.; Chrostowski, J. D.
1990-01-01
The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.
Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling.
Zhao, Bo; Setsompop, Kawin; Adalsteinsson, Elfar; Gagoski, Borjan; Ye, Huihui; Ma, Dan; Jiang, Yun; Ellen Grant, P; Griswold, Mark A; Wald, Lawrence L
2018-02-01
This article introduces a constrained imaging method based on low-rank and subspace modeling to improve the accuracy and speed of MR fingerprinting (MRF). A new model-based imaging method is developed for MRF to reconstruct high-quality time-series images and accurate tissue parameter maps (e.g., T1, T2, and spin density maps). Specifically, the proposed method exploits low-rank approximations of MRF time-series images, and further enforces temporal subspace constraints to capture magnetization dynamics. This allows the time-series image reconstruction problem to be formulated as a simple linear least-squares problem, which enables efficient computation. After image reconstruction, tissue parameter maps are estimated via dictionary-based pattern matching, as in the conventional approach. The effectiveness of the proposed method was evaluated with in vivo experiments. Compared with the conventional MRF reconstruction, the proposed method reconstructs time-series images with significantly reduced aliasing artifacts and noise contamination. Although the conventional approach exhibits some robustness to these corruptions, the improved time-series image reconstruction in turn provides more accurate tissue parameter maps. The improvement is pronounced especially when the acquisition time becomes short. The proposed method significantly improves the accuracy of MRF, and also reduces data acquisition time. Magn Reson Med 79:933-942, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
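The sketch below illustrates only the subspace-modeling idea: a temporal subspace is taken from the SVD of a dictionary of simulated signal evolutions, and a noisy evolution is reconstructed as x = U*c by linear least squares. The exponential "dictionary" is a toy stand-in for Bloch-simulated MRF signals, and no k-space undersampling or image reconstruction is modeled.

```python
import numpy as np

rng = np.random.default_rng(0)
n_t = 200                                     # time points in the acquisition (assumed)
t = np.arange(n_t)

# Toy dictionary: decaying-oscillating evolutions over a grid of "tissue" parameters.
taus = np.linspace(20, 150, 60)
D = np.array([np.exp(-t / tau) * np.cos(2 * np.pi * t / (4 * tau)) for tau in taus]).T

U, s, _ = np.linalg.svd(D, full_matrices=False)
K = 5                                         # subspace rank (assumed)
Uk = U[:, :K]

x_true = D[:, 25]                             # one dictionary atom as the "true" evolution
y = x_true + 0.05 * rng.standard_normal(n_t)  # noisy measurement

c, *_ = np.linalg.lstsq(Uk, y, rcond=None)    # subspace coefficients via linear least squares
x_hat = Uk @ c

err = lambda v: np.linalg.norm(v - x_true) / np.linalg.norm(x_true)
print("relative error, raw noisy data  :", round(err(y), 4))
print("relative error, rank-%d estimate :" % K, round(err(x_hat), 4))
```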
ERIC Educational Resources Information Center
Smith, Douglas L.
1997-01-01
Describes a model for team developmental assessment of high-risk infants using a fiber-optic "distance learning" televideo network in south-central New York. An arena-style transdisciplinary play-based assessment model was adapted for use across the televideo connection, and close simulation of conventional assessment procedures was…
Long-Term Evaluation of Ocean Tidal Variation Models of Polar Motion and UT1
NASA Astrophysics Data System (ADS)
Karbon, Maria; Balidakis, Kyriakos; Belda, Santiago; Nilsson, Tobias; Hagedoorn, Jan; Schuh, Harald
2018-04-01
Recent improvements in the development of VLBI (very long baseline interferometry) and other space geodetic techniques such as the global navigation satellite systems (GNSS) require very precise a priori information on short-period (daily and sub-daily) Earth rotation variations. One significant contribution to Earth rotation is caused by the diurnal and semi-diurnal ocean tides. Within this work, we developed a new model for the short-period ocean tidal variations in Earth rotation, in which the ocean tidal angular momentum model and the Earth rotation variation have been set up jointly. Besides the model of the short-period variation of the Earth's rotation parameters (ERP) based on the empirical ocean tide model EOT11a, we also developed ERP models based on the hydrodynamic ocean tide models FES2012 and HAMTIDE. Furthermore, we have assessed the effect of uncertainties in the elastic Earth model on the resulting ERP models. Our proposed alternative to the IERS 2010 conventional ERP model considers the elastic model PREM and 260 partial tides. The choice of the ocean tide model and the determination of the tidal velocities have been identified as the main sources of uncertainty. However, in the VLBI analysis all models perform at the same level of accuracy. From these findings, we conclude that the models presented here, which are based on a re-examined theoretical description and on long-term satellite altimetry observations only, are an alternative to the IERS conventional model but do not improve the geodetic results.
Analysis of Aluminum-Nitride SOI for High-Temperature Electronics
NASA Technical Reports Server (NTRS)
Biegel, Bryan A.; Osman, Mohamed A.; Yu, Zhiping
2000-01-01
We use numerical simulation to investigate the high-temperature (up to 500 K) operation of SOI MOSFETs with aluminum-nitride (AlN) buried insulators, rather than the conventional silicon dioxide (SiO2). Because the thermal conductivity of AlN is about 100 times that of SiO2, AlN SOI should greatly reduce the often severe self-heating problem of conventional SOI, making SOI potentially suitable for high-temperature applications. A detailed electrothermal transport model is used in the simulations and solved with a PDE solver called PROPHET. In this work, we compare the performance of AlN-based SOI with that of SiO2-based SOI and conventional MOSFETs. We find that AlN SOI does indeed remove the self-heating penalty of SOI. However, several device design trade-offs remain, which our simulations highlight.
CNN based approach for activity recognition using a wrist-worn accelerometer.
Panwar, Madhuri; Dyuthi, S Ram; Chandra Prakash, K; Biswas, Dwaipayan; Acharyya, Amit; Maharatna, Koushik; Gautam, Arvind; Naik, Ganesh R
2017-07-01
In recent years, significant advancements have taken place in human activity recognition using various machine learning approaches. However, conventional methods have been dominated by feature engineering, involving the difficult process of optimal feature selection. This problem has been mitigated by using a novel methodology based on a deep learning framework, which automatically extracts the useful features and reduces the computational cost. As a proof of concept, we have attempted to design a generalized model for recognition of three fundamental movements of the human forearm performed in daily life, where data are collected from four different subjects using a single wrist-worn accelerometer sensor. The validation of the proposed model is done under different pre-processing and noisy data conditions, which are evaluated using three possible methods. The results show that our proposed methodology achieves an average recognition rate of 99.8%, as opposed to conventional methods based on K-means clustering, linear discriminant analysis and support vector machines.
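A small 1D convolutional network of the kind used for such windowed accelerometer classification is sketched below in PyTorch; it is not the architecture of the paper. The window length, layer sizes, use of tri-axial input and the random training batch are all assumptions.

```python
import torch
import torch.nn as nn

class AccelCNN(nn.Module):
    """Two 1D conv blocks followed by a linear classifier over three movement classes."""
    def __init__(self, n_classes=3, window=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * (window // 4), n_classes)

    def forward(self, x):                      # x: (batch, 3 axes, window samples)
        z = self.features(x)
        return self.classifier(z.flatten(1))

if __name__ == "__main__":
    model = AccelCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    x = torch.randn(32, 3, 128)                # random placeholder windows, not real sensor data
    y = torch.randint(0, 3, (32,))
    for _ in range(5):                         # a few dummy training steps
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print("final dummy loss:", float(loss))
```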
Triangular model integrating clinical teaching and assessment
Abdelaziz, Adel; Koshak, Emad
2014-01-01
Structuring clinical teaching is a challenge facing medical education curriculum designers. A variety of instructional methods addressing different domains of learning are indicated to accommodate different learning styles. Conventional methods of clinical teaching, like training in ambulatory care settings, are subject to the element of chance in the variety of patient presentations encountered. Accordingly, alternative methods of instruction are indicated to compensate for the deficiencies of these conventional methods. This paper presents an initiative that can be used to design a checklist as a blueprint to guide appropriate selection and implementation of teaching/learning and assessment methods in each of the educational courses and modules based on educational objectives. Three categories of instructional methods were identified, and within each a variety of methods were included. These categories are classroom-type settings, health services-based settings, and community service-based settings. Such categories have framed our triangular model of clinical teaching and assessment. PMID:24624002
Feedback loops and temporal misalignment in component-based hydrologic modeling
NASA Astrophysics Data System (ADS)
Elag, Mostafa M.; Goodall, Jonathan L.; Castronova, Anthony M.
2011-12-01
In component-based modeling, a complex system is represented as a series of loosely integrated components with defined interfaces and data exchanges that allow the components to be coupled together through shared boundary conditions. Although the component-based paradigm is commonly used in software engineering, it has only recently been applied for modeling hydrologic and earth systems. As a result, research is needed to test and verify the applicability of the approach for modeling hydrologic systems. The objective of this work was therefore to investigate two aspects of using component-based software architecture for hydrologic modeling: (1) simulation of feedback loops between components that share a boundary condition and (2) data transfers between temporally misaligned model components. We investigated these topics using a simple case study where diffusion of mass is modeled across a water-sediment interface. We simulated the multimedia system using two model components, one for the water and one for the sediment, coupled using the Open Modeling Interface (OpenMI) standard. The results were compared with a more conventional numerical approach for solving the system where the domain is represented by a single multidimensional array. Results showed that the component-based approach was able to produce the same results obtained with the more conventional numerical approach. When the two components were temporally misaligned, we explored the use of different interpolation schemes to minimize mass balance error within the coupled system. The outcome of this work provides evidence that component-based modeling can be used to simulate complicated feedback loops between systems and guidance as to how different interpolation schemes minimize mass balance error introduced when components are temporally misaligned.
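The sketch below mirrors the case-study idea without any OpenMI code: a water column and a sediment column exchange mass across their interface with flux k*(Cw - Cs), the sediment component runs on a coarser time step, and the water component linearly interpolates the sediment boundary value between exchange times. All parameters are assumed, and the printed drift shows the mass-balance error the interpolation introduces.

```python
k = 0.05                 # interface exchange coefficient [1/h], assumed
dt_w, dt_s = 1.0, 5.0    # water / sediment time steps [h], deliberately misaligned
T = 100.0

cw = 10.0                            # water concentration (assumed units)
cs_prev = cs_next = 0.0              # sediment values bracketing the current water time
t_prev = t_next = 0.0
t = 0.0

while t < T:
    if t >= t_next:                  # advance the sediment component when its step is due
        cs_prev, t_prev = cs_next, t_next
        cs_next = cs_next + k * (cw - cs_next) * dt_s   # water value held constant over the step
        t_next += dt_s
    # Linearly interpolate the sediment boundary value to the water time level.
    w = 0.0 if t_next == t_prev else (t - t_prev) / (t_next - t_prev)
    cs_interp = (1.0 - w) * cs_prev + w * cs_next
    cw -= k * (cw - cs_interp) * dt_w                   # advance the water component
    t += dt_w

print(f"final water / sediment concentration: {cw:.3f} / {cs_next:.3f}")
print(f"mass-balance drift from interpolation: {cw + cs_next - 10.0:+.4f}")
```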
Is the authoritative parenting model effective in changing oral hygiene behavior in adolescents?
Brukienė, Vilma; Aleksejūnienė, Jolanta
2012-12-01
This study examined whether the authoritative parenting model (APM) is more effective than conventional approaches for changing adolescent oral hygiene behavior. A total of 247 adolescents were recruited using a cluster random-sampling method. Subject groups were randomly allocated into an intervention group (APM-based interventions), a Control Group 1 (conventional dental education and behavior modification) or a Control Group 2 (conventional behavior modification). The results were assessed after 3 and 12 months. Oral hygiene level was assessed as percent dental plaque and the ratio of plaque percent change (RPC). At the 3-month follow-up, there were significant differences among the groups; the APM group had the largest decrease in plaque levels (24.5%), Control Group 1 showed a decrease in plaque levels of 15.4% and Control Group 2 showed an increase in plaque levels of 2.8%. At the 12-month follow-up, an improvement was observed in all groups, but there were no statistically significant differences among the groups. In the short term, the intervention based on the APM was more effective in changing adolescent oral hygiene behavior compared with the conventional approaches. The reasons for long-term positive change after discontinued interventions in control groups need to be explored in future studies.
A Physically Based Distributed Hydrologic Model with a Non-Conventional Terrain Analysis
NASA Astrophysics Data System (ADS)
Rulli, M.; Menduni, G.; Rosso, R.
2003-12-01
A physically based distributed hydrological model is presented. Starting from a contour-based terrain analysis, the model makes a non-conventional discretization of the terrain. From the maximum-slope lines, obtained using the principles of minimum distance and orthogonality, the model obtains a stream-tube structure. The implemented model can automatically identify the morphological characteristics of the terrain, e.g. peaks and saddles, and deal with them in a way that respects the stream flow. Using this type of discretization, the model divides the elements in which the water flows into two classes: cells, which are mixtilinear polygons where the overland flow is modelled as a sheet flow, and channels, obtained from the intersection of two or more stream tubes, where any surface runoff is channelised. The permanent drainage paths can be calculated using one of the most common methods: threshold area, variable threshold area or curvature. The subsurface flow is modelled using the Simplified Bucket Model. The model considers three types of overland flow, depending on how the flow is produced: infiltration excess, saturation of the superficial layer of the soil, and exfiltration of sub-surface flow from upstream. The surface flow and the subsurface flow across an element are routed according to the one-dimensional kinematic wave equation. The model also considers the spatial variability of the channel geometry with the flow. The channels have a rectangular section, with the base length decreasing with the distance from the outlet and depending on a power of the flow. The model was tested on the Rio Gallina and Missiaga catchments, and the results showed good model performance.
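The routing equation used by the model is illustrated below for a single plane: dh/dt + dq/dx = r with q = alpha*h^m from Manning's relation, solved with a first-order upwind scheme. The plane geometry, roughness, slope and rainfall-excess pulse are assumptions, not the stream-tube discretization or the Rio Gallina/Missiaga setups.

```python
import numpy as np

L, dx = 100.0, 2.0          # plane length and grid spacing [m], assumed
dt, t_end = 2.0, 3600.0     # time step and duration [s]
slope, n_man = 0.01, 0.03   # bed slope and Manning roughness, assumed
alpha, m = np.sqrt(slope) / n_man, 5.0 / 3.0

def rain_excess(t):
    """Rainfall-excess intensity [m/s]: 30 mm/h for the first 30 minutes (assumed)."""
    return 30e-3 / 3600.0 if t < 1800.0 else 0.0

nx = int(L / dx)
h = np.zeros(nx)            # flow depth [m]
hydrograph = []

for step in range(int(t_end / dt)):
    t = step * dt
    q = alpha * h**m                                    # unit-width discharge [m^2/s]
    dqdx = np.diff(np.concatenate(([0.0], q))) / dx     # upwind difference, zero inflow upstream
    h = np.maximum(h + dt * (rain_excess(t) - dqdx), 0.0)
    hydrograph.append(alpha * h[-1]**m)                 # outlet discharge per unit width

print(f"peak outlet discharge: {max(hydrograph):.4e} m^2/s")
```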
A Sparse Bayesian Approach for Forward-Looking Superresolution Radar Imaging
Zhang, Yin; Zhang, Yongchao; Huang, Yulin; Yang, Jianyu
2017-01-01
This paper presents a sparse superresolution approach for high cross-range resolution imaging of forward-looking scanning radar based on the Bayesian criterion. First, a novel forward-looking signal model is established as the product of the measurement matrix and the cross-range target distribution, which is more accurate than the conventional convolution model. Then, based on the Bayesian criterion, the widely-used sparse regularization is considered as the penalty term to recover the target distribution. The derivation of the cost function is described, and finally, an iterative expression for minimizing this function is presented. The paper also discusses how to estimate the single parameter of the Gaussian noise. With the advantage of a more accurate model, the proposed sparse Bayesian approach enjoys a lower model error. Meanwhile, when compared with the conventional superresolution methods, the proposed approach shows high cross-range resolution and small location error. The superresolution results for the simulated point target, scene data, and real measured data are presented to demonstrate the superior performance of the proposed approach. PMID:28604583
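The sparse-recovery step can be sketched with a generic iterative soft-thresholding (ISTA) solver for y = Hx + n, one standard way to minimize a least-squares term plus an l1 (sparsity) penalty. The measurement matrix below is a toy scanning-beam blur built only for illustration; it is not the authors' forward-looking signal model or their Bayesian derivation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy measurement matrix H: each row is a shifted antenna-beam pattern, so
# y = H @ x blurs a sparse cross-range scene (illustrative stand-in for the
# forward-looking scanning-radar model in the abstract).
n = 128
beam = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
H = np.zeros((n, n))
for i in range(n):
    lo, hi = max(0, i - 10), min(n, i + 11)
    H[i, lo:hi] = beam[lo - (i - 10): hi - (i - 10)]

x_true = np.zeros(n)
x_true[[30, 34, 80]] = [1.0, 0.7, 0.5]          # sparse point targets
y = H @ x_true + 0.01 * rng.standard_normal(n)  # noisy blurred measurement

# ISTA: x <- soft(x + (1/L) H^T (y - H x), lam / L), with L >= ||H||_2^2
L_const = np.linalg.norm(H, 2) ** 2
lam = 0.05
x = np.zeros(n)
for _ in range(500):
    grad = H.T @ (y - H @ x)
    z = x + grad / L_const
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L_const, 0.0)

print("largest recovered components at indices:", np.sort(np.argsort(x)[-3:]))
```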
Dynamic modeling and motion simulation for a winged hybrid-driven underwater glider
NASA Astrophysics Data System (ADS)
Wang, Shu-Xin; Sun, Xiu-Jun; Wang, Yan-Hui; Wu, Jian-Guo; Wang, Xiao-Ming
2011-03-01
PETREL, a winged hybrid-driven underwater glider is a novel and practical marine survey platform which combines the features of legacy underwater glider and conventional AUV (autonomous underwater vehicle). It can be treated as a multi-rigid-body system with a floating base and a particular hydrodynamic profile. In this paper, theorems on linear and angular momentum are used to establish the dynamic equations of motion of each rigid body and the effect of translational and rotational motion of internal masses on the attitude control are taken into consideration. In addition, due to the unique external shape with fixed wings and deflectable rudders and the dual-drive operation in thrust and glide modes, the approaches of building dynamic model of conventional AUV and hydrodynamic model of submarine are introduced, and the tailored dynamic equations of the hybrid glider are formulated. Moreover, the behaviors of motion in glide and thrust operation are analyzed based on the simulation and the feasibility of the dynamic model is validated by data from lake field trials.
Refugees in Conflict: Creating a Bridge Between Traditional and Conventional Health Belief Models.
Ben-Arye, Eran; Bonucci, Massimo; Daher, Michel; Kebudi, Rejin; Saad, Bashar; Breitkreuz, Thomas; Rassouli, Maryam; Rossi, Elio; Gafer, Nahla; Nimri, Omar; Hablas, Mohamed; Kienle, Gunver Sophia; Samuels, Noah; Silbermann, Michael
2018-06-01
The recent wave of migration from Middle Eastern countries to Europe presents significant challenges to the European health profession. These include the inevitable communication gap created by differences in health care beliefs between European oncologists, health care practitioners, and refugee patients. This article presents the conclusions of a workshop attended by a group of clinicians and researchers affiliated with the Middle East Cancer Consortium, as well as four European-based health-related organizations. Workshop participants included leading clinicians and medical educators from the field of integrative medicine and supportive cancer care from Italy, Germany, Turkey, Israel, Palestine, Iran, Lebanon, Jordan, Egypt, and Sudan. The workshop illustrated the need for creating a dialogue between European health care professionals and the refugee population in order to overcome the communication barriers and create a healing process. The affinity for complementary and traditional medicine (CTM) among many refugee populations was also addressed, directing participants to the mediating role that integrative medicine serves between CTM and conventional medicine health belief models. This is especially relevant to the use of herbal medicine among oncology patients, for whom an open and nonjudgmental (yet evidence-based) dialogue is of utmost importance. The workshop concluded with a recommendation for the creation of a comprehensive health care model, to include bio-psycho-social and cultural-spiritual elements, addressing both acute and chronic medical conditions. These models need to be codesigned by European and Middle Eastern clinicians and researchers, internalizing a culturally sensitive approach and ethical commitment to the refugee population, as well as indigenous groups originating from Middle Eastern and north African countries. European oncologists face a communication gap with refugee patients who have recently immigrated from Middle Eastern and northern African countries, with their different health belief models and affinity for traditional and herbal medicine. A culturally sensitive approach to care will foster doctor-refugee communication, through the integration of evidence-based medicine within a nonjudgmental, bio-psycho-social-cultural-spiritual agenda, addressing patients' expectations within a supportive and palliative care context. Integrative physicians, who are conventional doctors trained in traditional/complementary medicine, can mediate between conventional and traditional/herbal paradigms of care, facilitating doctor-patient communication through education and by providing clinical consultations within conventional oncology centers. © AlphaMed Press 2017.
Virtual reality simulation training for health professions trainees in gastrointestinal endoscopy.
Walsh, Catharine M; Sherlock, Mary E; Ling, Simon C; Carnahan, Heather
2012-06-13
Traditionally, training in gastrointestinal endoscopy has been based upon an apprenticeship model, with novice endoscopists learning basic skills under the supervision of experienced preceptors in the clinical setting. Over the last two decades, however, the growing awareness of the need for patient safety has brought the issue of simulation-based training to the forefront. While the use of simulation-based training may have important educational and societal advantages, the effectiveness of virtual reality gastrointestinal endoscopy simulators has yet to be clearly demonstrated. To determine whether virtual reality simulation training can supplement and/or replace early conventional endoscopy training (apprenticeship model) in diagnostic oesophagogastroduodenoscopy, colonoscopy and/or sigmoidoscopy for health professions trainees with limited or no prior endoscopic experience. Health professions, educational and computer databases were searched until November 2011 including The Cochrane Central Register of Controlled Trials, MEDLINE, EMBASE, Scopus, Web of Science, Biosis Previews, CINAHL, Allied and Complementary Medicine Database, ERIC, Education Full Text, CBCA Education, Career and Technical Education @ Scholars Portal, Education Abstracts @ Scholars Portal, Expanded Academic ASAP @ Scholars Portal, ACM Digital Library, IEEE Xplore, Abstracts in New Technologies and Engineering and Computer & Information Systems Abstracts. The grey literature until November 2011 was also searched. Randomised and quasi-randomised clinical trials comparing virtual reality endoscopy (oesophagogastroduodenoscopy, colonoscopy and sigmoidoscopy) simulation training versus any other method of endoscopy training including conventional patient-based training, in-job training, training using another form of endoscopy simulation (e.g. low-fidelity simulator), or no training (however defined by authors) were included. Trials comparing one method of virtual reality training versus another method of virtual reality training (e.g. comparison of two different virtual reality simulators) were also included. Only trials measuring outcomes on humans in the clinical setting (as opposed to animals or simulators) were included. Two authors (CMS, MES) independently assessed the eligibility and methodological quality of trials, and extracted data on the trial characteristics and outcomes. Due to significant clinical and methodological heterogeneity it was not possible to pool study data in order to perform a meta-analysis. Where data were available for each continuous outcome we calculated standardized mean difference with 95% confidence intervals based on intention-to-treat analysis. Where data were available for dichotomous outcomes we calculated relative risk with 95% confidence intervals based on intention-to-treat-analysis. Thirteen trials, with 278 participants, met the inclusion criteria. Four trials compared simulation-based training with conventional patient-based endoscopy training (apprenticeship model) whereas nine trials compared simulation-based training with no training. Only three trials were at low risk of bias. Simulation-based training, as compared with no training, generally appears to provide participants with some advantage over their untrained peers as measured by composite score of competency, independent procedure completion, performance time, independent insertion depth, overall rating of performance or competency error rate and mucosal visualization. 
In contrast, there was no conclusive evidence that simulation-based training was superior to conventional patient-based training, although data were limited. The results of this systematic review indicate that virtual reality endoscopy training can be used to effectively supplement early conventional endoscopy training (apprenticeship model) in diagnostic oesophagogastroduodenoscopy, colonoscopy and/or sigmoidoscopy for health professions trainees with limited or no prior endoscopic experience. However, there remains insufficient evidence to advise for or against the use of virtual reality simulation-based training as a replacement for early conventional endoscopy training (apprenticeship model) for health professions trainees with limited or no prior endoscopic experience. There is a great need for the development of a reliable and valid measure of endoscopic performance prior to the completion of further randomised clinical trials with high methodological quality.
ERIC Educational Resources Information Center
Layland, Judy
2010-01-01
Recent models relating to the affordance of children's participation rights, based on articles 12 and 13 of the United Nations Convention on the Rights Of the Child (1989), have focused on the role of and strategies used by the adults working with children ("Children and Society" 10, 2001: 107-117; "Children and Society" 20,…
ERIC Educational Resources Information Center
Fuad, Nur Miftahul; Zubaidah, Siti; Mahanal, Susriyati; Suarsini, Endang
2017-01-01
The aims of this study were (1) to find out the differences in critical thinking skills among students who were given three different learning models: differentiated science inquiry combined with mind map, differentiated science inquiry model, and conventional model, (2) to find out the differences of critical thinking skills among male and female…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jensen, A.L.; Spigarelli, J.A.; Thommes, M.M.
1982-01-01
Two conventional fishery stock assessment models, the surplus-production model and the dynamic-pool model, were applied to assess the impacts of water withdrawals by electricity-generating plants, industries, and municipalities on the standing stocks and yields of alewife Alosa pseudoharengus, rainbow smelt Osmerus mordax, and yellow perch Perca flavescens in Lake Michigan. Impingement and entrainment estimates were based on data collected at 15 power plants. The surplus-production model was fitted to the three populations with catch and effort data from the commercial fisheries. Dynamic-pool model parameters were estimated from published data. The numbers entrained and impinged are large, but the proportions of the standing stocks impinged and the proportions of the eggs and larvae entrained are small. The reductions in biomass of the stocks and in maximum sustainable yields are larger than the proportions impinged. The reductions in biomass, based on 1975 data and an assumed full water withdrawal, are 2.86% for alewife, 0.76% for rainbow smelt, and 0.28% for yellow perch. Fishery models are an economical means of impact assessment in situations where catch and effort data are available for estimation of model parameters.
van IJsseldijk, E A; Valstar, E R; Stoel, B C; Nelissen, R G H H; Baka, N; Van't Klooster, R; Kaptein, B L
2016-08-01
An important measure for the diagnosis and monitoring of knee osteoarthritis is the minimum joint space width (mJSW). This requires accurate alignment of the x-ray beam with the tibial plateau, which may not be accomplished in practice. We investigate the feasibility of a new mJSW measurement method from stereo radiographs using 3D statistical shape models (SSM) and evaluate its sensitivity to changes in the mJSW and its robustness to variations in patient positioning and bone geometry. A validation study was performed using five cadaver specimens. The actual mJSW was varied and images were acquired with variation in the cadaver positioning. For comparison purposes, the mJSW was also assessed from plain radiographs. To study the influence of SSM model accuracy, the 3D mJSW measurement was repeated with models from the actual bones, obtained from CT scans. The SSM-based measurement method was more robust (consistent output for a wide range of input data and under varying measurement circumstances) than the conventional 2D method, showing that the 3D reconstruction indeed reduces the influence of patient positioning. However, the SSM-based method showed comparable sensitivity to changes in the mJSW with respect to the conventional method. The CT-based measurement was more accurate than the SSM-based measurement (smallest detectable differences 0.55 mm versus 0.82 mm, respectively). The proposed measurement method is not a substitute for the conventional 2D measurement due to limitations in the SSM model accuracy. However, further improvement of the model accuracy and optimisation technique can be obtained. Combined with the promising options for applications using quantitative information on bone morphology, SSM based 3D reconstructions of natural knees are attractive for further development. Cite this article: E. A. van IJsseldijk, E. R. Valstar, B. C. Stoel, R. G. H. H. Nelissen, N. Baka, R. van't Klooster, B. L. Kaptein. Three dimensional measurement of minimum joint space width in the knee from stereo radiographs using statistical shape models. Bone Joint Res 2016;320-327. DOI: 10.1302/2046-3758.58.2000626. © 2016 van IJsseldijk et al.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gang, G; Stayman, J; Ouadah, S
2015-06-15
Purpose: This work introduces a task-driven imaging framework that utilizes a patient-specific anatomical model, mathematical definition of the imaging task, and a model of the imaging system to prospectively design acquisition and reconstruction techniques that maximize task-based imaging performance. Utility of the framework is demonstrated in the joint optimization of tube current modulation and view-dependent reconstruction kernel in filtered-backprojection reconstruction and non-circular orbit design in model-based reconstruction. Methods: The system model is based on a cascaded systems analysis of cone-beam CT capable of predicting the spatially varying noise and resolution characteristics as a function of the anatomical model and a wide range of imaging parameters. Detectability index for a non-prewhitening observer model is used as the objective function in a task-driven optimization. The combination of tube current and reconstruction kernel modulation profiles were identified through an alternating optimization algorithm where tube current was updated analytically followed by a gradient-based optimization of reconstruction kernel. The non-circular orbit is first parameterized as a linear combination of basis functions and the coefficients were then optimized using an evolutionary algorithm. The task-driven strategy was compared with conventional acquisitions without modulation, using automatic exposure control, and in a circular orbit. Results: The task-driven strategy outperformed conventional techniques in all tasks investigated, improving the detectability of a spherical lesion detection task by an average of 50% in the interior of a pelvis phantom. The non-circular orbit design successfully mitigated photon starvation effects arising from a dense embolization coil in a head phantom, improving the conspicuity of an intracranial hemorrhage proximal to the coil. Conclusion: The task-driven imaging framework leverages a knowledge of the imaging task within a patient-specific anatomical model to optimize image acquisition and reconstruction techniques, thereby improving imaging performance beyond that achievable with conventional approaches. 2R01-CA-112163; R01-EB-017226; U01-EB-018758; Siemens Healthcare (Forcheim, Germany)
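The objective function used in the task-driven optimization can be illustrated numerically. The sketch below evaluates a non-prewhitening (NPW) detectability index of the form d′² = [Σ (MTF·W_task)²]² / Σ NPS·(MTF·W_task)², using simple illustrative forms for the MTF, NPS, and task function rather than the cascaded-systems predictions described in the abstract.

```python
import numpy as np

# Non-prewhitening observer detectability index on a 2D frequency grid.
# The MTF, NPS, and task function are simple assumed forms (Gaussian MTF,
# smooth NPS, difference-of-Gaussians mid-frequency task), standing in for
# the cascaded-systems models referenced in the abstract.
f = np.fft.fftfreq(256, d=0.5)              # spatial frequencies (mm^-1), 0.5 mm pixels
fx, fy = np.meshgrid(f, f)
fr = np.hypot(fx, fy)

mtf = np.exp(-(fr / 0.6) ** 2)                                  # system MTF (assumed)
nps = 1e-6 * (0.2 + fr) / (1.0 + (fr / 0.8) ** 3)               # NPS (assumed)
w_task = np.exp(-(fr / 0.4) ** 2) - np.exp(-(fr / 0.2) ** 2)    # mid-frequency task

df = (f[1] - f[0]) ** 2                      # area element in frequency space
num = (np.sum((mtf * w_task) ** 2) * df) ** 2
den = np.sum(nps * (mtf * w_task) ** 2) * df
d_prime = np.sqrt(num / den)
print(f"NPW detectability index d' = {d_prime:.2f}")
```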
Alternative Strategies in Assessing Special Education Needs
ERIC Educational Resources Information Center
Dykeman, Bruce F.
2006-01-01
The conventional use of standardized testing within a discrepancy analysis model is reviewed. The Response-to-Intervention (RTI) process is explained, along with descriptions of assessment procedures within RTI: functional assessment, authentic assessment, curriculum-based measurement, and play-based assessment. Psychometric issues relevant to RTI…
NASA Astrophysics Data System (ADS)
Rizvi, Imran; Bulin, Anne-Laure; Anbil, Sriram R.; Briars, Emma A.; Vecchio, Daniela; Celli, Jonathan P.; Broekgaarden, Mans; Hasan, Tayyaba
2017-02-01
Targeting the molecular and cellular cues that influence treatment resistance in tumors is critical to effectively treating unresponsive populations of stubborn disease. The informed design of mechanism-based combinations is emerging as increasingly important to targeting resistance and improving the efficacy of conventional treatments, while minimizing toxicity. Photodynamic therapy (PDT) has been shown to synergize with conventional agents and to overcome the evasion pathways that cause resistance. Increasing evidence shows that PDT-based combinations cooperate mechanistically with, and improve the therapeutic index of, traditional chemotherapies. These and other findings emphasize the importance of including PDT as part of comprehensive treatment plans for cancer, particularly in complex disease sites. Identifying effective combinations requires a multi-faceted approach that includes the development of bioengineered cancer models and corresponding image analysis tools. The molecular and phenotypic basis of verteporfin-mediated PDT-based enhancement of chemotherapeutic efficacy and predictability in complex 3D models for ovarian cancer will be presented.
Bacheler, N.M.; Buckel, J.A.; Hightower, J.E.; Paramore, L.M.; Pollock, K.H.
2009-01-01
A joint analysis of tag return and telemetry data should improve estimates of mortality rates for exploited fishes; however, the combined approach has thus far only been tested in terrestrial systems. We tagged subadult red drum (Sciaenops ocellatus) with conventional tags and ultrasonic transmitters over 3 years in coastal North Carolina, USA, to test the efficacy of the combined telemetry - tag return approach. There was a strong seasonal pattern to monthly fishing mortality rate (F) estimates from both conventional and telemetry tags; highest F values occurred in fall months and lowest levels occurred during winter. Although monthly F values were similar in pattern and magnitude between conventional tagging and telemetry, information on F in the combined model came primarily from conventional tags. The estimated natural mortality rate (M) in the combined model was low (estimated annual rate ± standard error: 0.04 ± 0.04) and was based primarily upon the telemetry approach. Using high-reward tagging, we estimated different tag reporting rates for state agency and university tagging programs. The combined telemetry - tag return approach can be an effective approach for estimating F and M as long as several key assumptions of the model are met.
Barlow, Brian T; McLawhorn, Alexander S; Westrich, Geoffrey H
2017-05-03
Dislocation remains a clinically important problem following primary total hip arthroplasty, and it is a common reason for revision total hip arthroplasty. Dual mobility (DM) implants decrease the risk of dislocation but can be more expensive than conventional implants and have idiosyncratic failure mechanisms. The purpose of this study was to investigate the cost-effectiveness of DM implants compared with conventional bearings for primary total hip arthroplasty. Markov model analysis was conducted from the societal perspective with use of direct and indirect costs. Costs, expressed in 2013 U.S. dollars, were derived from the literature, the National Inpatient Sample, and the Centers for Medicare & Medicaid Services. Effectiveness was expressed in quality-adjusted life years (QALYs). The model was populated with health state utilities and state transition probabilities derived from previously published literature. The analysis was performed for a patient's lifetime, and costs and effectiveness were discounted at 3% annually. The principal outcome was the incremental cost-effectiveness ratio (ICER), with a willingness-to-pay threshold of $100,000/QALY. Sensitivity analyses were performed to explore relevant uncertainty. In the base case, DM total hip arthroplasty showed absolute dominance over conventional total hip arthroplasty, with lower accrued costs ($39,008 versus $40,031 U.S. dollars) and higher accrued utility (13.18 versus 13.13 QALYs) indicating cost-savings. DM total hip arthroplasty ceased being cost-saving when its implant costs exceeded those of conventional total hip arthroplasty by $1,023, and the cost-effectiveness threshold for DM implants was $5,287 greater than that for conventional implants. DM was not cost-effective when the annualized incremental probability of revision from any unforeseen failure mechanism or mechanisms exceeded 0.29%. The probability of intraprosthetic dislocation exerted the most influence on model results. This model determined that, compared with conventional bearings, DM implants can be cost-saving for routine primary total hip arthroplasty, from the societal perspective, if newer-generation DM implants meet specific economic and clinical benchmarks. The differences between these thresholds and the performance of other contemporary bearings were frequently quite narrow. The results have potential application to the postmarket surveillance of newer-generation DM components. Economic and decision analysis Level III. See Instructions for Authors for a complete description of levels of evidence.
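The structure of such a cost-effectiveness comparison can be sketched with a highly simplified three-state Markov cohort model. All transition probabilities, costs, and utilities below are placeholders chosen for illustration; they are not the values used in the study, which drew its inputs from the literature, the National Inpatient Sample, and CMS.

```python
# Simplified Markov cohort model comparing two hip-implant strategies.
# States: well (primary THA), revised, dead.  All inputs are placeholders.
def run_cohort(p_revise, implant_cost, years=40, disc=0.03):
    p_die, p_die_revised = 0.02, 0.03          # annual mortality (assumed)
    cost_revision = 40000.0                    # cost of a revision (assumed)
    u_well, u_revised = 0.85, 0.75             # annual utilities (assumed)
    well, revised, dead = 1.0, 0.0, 0.0        # cohort proportions
    cost, qaly = implant_cost, 0.0
    for t in range(1, years + 1):
        d = 1.0 / (1.0 + disc) ** t            # discount factor
        new_rev = well * p_revise
        new_dead = well * p_die + revised * p_die_revised
        well -= well * (p_revise + p_die)
        revised += new_rev - revised * p_die_revised
        dead += new_dead
        cost += d * new_rev * cost_revision
        qaly += d * (well * u_well + revised * u_revised)
    return cost, qaly

# Dual mobility: pricier implant, lower annual revision probability (assumed).
c_dm, q_dm = run_cohort(p_revise=0.005, implant_cost=6000.0)
c_cv, q_cv = run_cohort(p_revise=0.010, implant_cost=5000.0)

if c_dm <= c_cv and q_dm >= q_cv:
    print("dual mobility dominates (cheaper and more effective)")
else:
    icer = (c_dm - c_cv) / (q_dm - q_cv)
    print(f"ICER = ${icer:,.0f} per QALY")
```

With these placeholder inputs the dual-mobility arm accrues lower lifetime cost and higher QALYs, mirroring the "absolute dominance" pattern reported in the base case; changing the implant cost or revision probabilities flips the comparison to an ICER, which is how the thresholds in the abstract arise.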
DOT National Transportation Integrated Search
2011-01-01
Travel demand modeling plays a key role in the transportation system planning and evaluation process. The four-step sequential travel demand model is the most widely used technique in practice. Traffic assignment is the key step in the conventional f...
Applications of discrete element method in modeling of grain postharvest operations
USDA-ARS?s Scientific Manuscript database
Grain kernels are finite and discrete materials. Although flowing grain can behave like a continuum fluid at times, the discontinuous behavior exhibited by grain kernels cannot be simulated solely with conventional continuum-based computer modeling such as finite-element or finite-difference methods...
Payn, Robert A.; Hall, Robert O Jr.; Kennedy, Theodore A.; Poole, Geoff C; Marshall, Lucy A.
2017-01-01
Conventional methods for estimating whole-stream metabolic rates from measured dissolved oxygen dynamics do not account for the variation in solute transport times created by dynamic flow conditions. Changes in flow at hourly time scales are common downstream of hydroelectric dams (i.e. hydropeaking), and hydrologic limitations of conventional metabolic models have resulted in a poor understanding of the controls on biological production in these highly managed river ecosystems. To overcome these limitations, we coupled a two-station metabolic model of dissolved oxygen dynamics with a hydrologic river routing model. We designed calibration and parameter estimation tools to infer values for hydrologic and metabolic parameters based on time series of water quality data, achieving the ultimate goal of estimating whole-river gross primary production and ecosystem respiration during dynamic flow conditions. Our case study data for model design and calibration were collected in the tailwater of Glen Canyon Dam (Arizona, USA), a large hydropower facility where the mean discharge was 325 m3 s-1 and the average daily coefficient of variation of flow was 0.17 (i.e. the hydropeaking index averaged from 2006 to 2016). We demonstrate the coupled model's conceptual consistency with conventional models during steady flow conditions, and illustrate the potential bias in metabolism estimates with conventional models during unsteady flow conditions. This effort contributes an approach to solute transport modeling and parameter estimation that allows study of whole-ecosystem metabolic regimes across a more diverse range of hydrologic conditions commonly encountered in streams and rivers.
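A minimal forward version of a two-station oxygen budget under steady flow is sketched below to show the quantities being estimated (GPP, ER, reaeration, travel time); the coupled flow-routing formulation for unsteady, hydropeaking conditions described in the abstract is considerably more involved, and all parameter values here are illustrative.

```python
# Minimal two-station dissolved-oxygen budget under steady flow (illustrative
# values only): a parcel observed at the upstream station arrives downstream
# one travel time later, modified by GPP, ER, and reaeration during transit.
tau_d = 1.0 / 24.0              # travel time through the reach (days), assumed
z = 2.0                         # mean depth (m), assumed
GPP, ER = 6.0, 4.0              # daily rates (g O2 m^-2 d^-1), assumed
K = 5.0                         # reaeration coefficient (d^-1), assumed
O_sat = 9.0                     # saturation DO (mg L^-1), assumed

def downstream_do(o_up, light_frac):
    """DO at the downstream station for a parcel entering with o_up mg/L.

    light_frac is the fraction of the day's light the parcel sees in transit.
    """
    o = o_up
    o += GPP * light_frac / z               # photosynthesis during transit
    o -= ER * tau_d / z                     # respiration during transit
    o += K * tau_d * (O_sat - o)            # reaeration toward saturation
    return o

print(f"midday parcel:   {downstream_do(8.5, light_frac=0.10):.2f} mg/L")
print(f"midnight parcel: {downstream_do(8.2, light_frac=0.00):.2f} mg/L")
```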
Saadati, Farzaneh; Ahmad Tarmizi, Rohani; Mohd Ayub, Ahmad Fauzi; Abu Bakar, Kamariah
2015-01-01
Because students' ability to use statistics, which is mathematical in nature, is one of the concerns of educators, embedding the pedagogical characteristics of learning within an e-learning system adds value by facilitating the conventional method of learning mathematics. Many researchers emphasize the effectiveness of cognitive apprenticeship in learning and problem solving in the workplace. In a cognitive apprenticeship learning model, skills are learned within a community of practitioners through observation of modelling and then practice plus coaching. This study utilized an internet-based Cognitive Apprenticeship Model (i-CAM) in three phases and evaluated its effectiveness for improving statistics problem-solving performance among postgraduate students. The results showed that, when compared to the conventional mathematics learning model, the i-CAM could significantly promote students' problem-solving performance at the end of each phase. In addition, the differences in students' test scores were statistically significant after controlling for the pre-test scores. The findings conveyed in this paper confirmed the considerable value of i-CAM in the improvement of statistics learning for non-specialized postgraduate students.
Functional Fault Modeling Conventions and Practices for Real-Time Fault Isolation
NASA Technical Reports Server (NTRS)
Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara
2010-01-01
The purpose of this paper is to present the conventions, best practices, and processes that were established based on the prototype development of a Functional Fault Model (FFM) for a Cryogenic System that would be used for real-time Fault Isolation in a Fault Detection, Isolation, and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown by using a suite of complementary software tools that alert operators to anomalies and failures in real-time. The FFMs were created offline but would eventually be used by a real-time reasoner to isolate faults in a Cryogenic System. Through their development and review, a set of modeling conventions and best practices were established. The prototype FFM development also provided a pathfinder for future FFM development processes. This paper documents the rationale and considerations for robust FFMs that can easily be transitioned to a real-time operating environment.
DTMs Assessment to the Definition of Shallow Landslides Prone Areas
NASA Astrophysics Data System (ADS)
Martins, Tiago D.; Oka-Fiori, Chisato; Carvalho Vieira, Bianca; Montgomery, David R.
2017-04-01
Predictive methods have been developed, especially since the 1990s, to identify landslide prone areas. One example is the physically based model SHALSTAB (Shallow Landsliding Stability Model), which calculates the potential instability for shallow landslides based on topography and physical soil properties. Normally, in such applications in Brazil, the Digital Terrain Model (DTM) is obtained mainly from conventional contour lines. However, the LiDAR (Light Detection and Ranging) system has recently been widely used in Brazil. Thus, this study aimed to evaluate different DTMs, generated from conventional data and from LiDAR, and their influence on shallow landslide susceptibility maps produced with the SHALSTAB model. To that end, we analyzed the physical properties of the soil, the response of the model when the DTM was generated from conventional topographic data and from LiDAR data, and the shallow landslide susceptibility maps based on the different topographic data. The selected area is in the urban perimeter of the municipality of Antonina (PR), affected by widespread landslides in March 2011. Different LiDAR data interpolation methods were evaluated using GIS tools, among which Triangulation/Natural Neighbor performed best. It was also found that for one of the evaluation indexes (Scars Concentration) the LiDAR-derived DTM performed better than the one derived from contour lines, whereas the Landslide Potential index showed only a small increase. Overall, the assessment showed that the LiDAR-derived DTM improved the certainty percentage only slightly. A gap also remains in Brazilian research on the use of LiDAR-derived products in geomorphological analysis.
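For reference, the stability criterion evaluated by SHALSTAB on each DTM cell can be written, for a cohesionless soil, as a critical steady-state recharge q_cr = T·sinθ·(b/a)·(ρ_s/ρ_w)·(1 − tanθ/tanφ). The sketch below evaluates that expression for one hypothetical cell; the parameter values are chosen only for illustration and are not from the Antonina study.

```python
import math

def shalstab_qcr(a, b, theta_deg, phi_deg, T, rho_s=1600.0, rho_w=1000.0):
    """Critical steady-state recharge (m/day) for a cohesionless infinite slope.

    a: upslope contributing area (m^2), b: contour length of the cell (m),
    theta: slope angle, phi: friction angle, T: soil transmissivity (m^2/day).
    """
    theta = math.radians(theta_deg)
    phi = math.radians(phi_deg)
    wetness_limit = (rho_s / rho_w) * (1.0 - math.tan(theta) / math.tan(phi))
    if wetness_limit <= 0.0:
        return 0.0            # unconditionally unstable: fails even when dry
    if wetness_limit >= 1.0:
        return float("inf")   # unconditionally stable for this slope/soil pair
    return T * math.sin(theta) * (b / a) * wetness_limit

# Hypothetical cell: steep slope with a large contributing area.
q = shalstab_qcr(a=500.0, b=10.0, theta_deg=30.0, phi_deg=35.0, T=65.0)
print(f"critical recharge: {q * 1000:.1f} mm/day")  # lower q_cr = more susceptible
```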
Stürmer, Til; Joshi, Manisha; Glynn, Robert J.; Avorn, Jerry; Rothman, Kenneth J.; Schneeweiss, Sebastian
2006-01-01
Objective Propensity score analyses attempt to control for confounding in non-experimental studies by adjusting for the likelihood that a given patient is exposed. Such analyses have been proposed to address confounding by indication, but there is little empirical evidence that they achieve better control than conventional multivariate outcome modeling. Study design and methods Using PubMed and Science Citation Index, we assessed the use of propensity scores over time and critically evaluated studies published through 2003. Results Use of propensity scores increased from a total of 8 papers before 1998 to 71 in 2003. Most of the 177 published studies abstracted assessed medications (N=60) or surgical interventions (N=51), mainly in cardiology and cardiac surgery (N=90). Whether PS methods or conventional outcome models were used to control for confounding had little effect on results in those studies in which such comparison was possible. Only 9 out of 69 studies (13%) had an effect estimate that differed by more than 20% from that obtained with a conventional outcome model in all PS analyses presented. Conclusions Publication of results based on propensity score methods has increased dramatically, but there is little evidence that these methods yield substantially different estimates compared with conventional multivariable methods. PMID:16632131
Decisionmaking in practice: The dynamics of muddling through.
Flach, John M; Feufel, Markus A; Reynolds, Peter L; Parker, Sarah Henrickson; Kellogg, Kathryn M
2017-09-01
An alternative to conventional models that treat decisions as open-loop independent choices is presented. The alternative model is based on observations of work situations such as healthcare, where decisionmaking is more typically a closed-loop, dynamic, problem-solving process. The article suggests five important distinctions between the processes assumed by conventional models and the reality of decisionmaking in practice. It is suggested that the logic of abduction in the form of an adaptive, muddling through process is more consistent with the realities of practice in domains such as healthcare. The practical implication is that the design goal should not be to improve consistency with normative models of rationality, but to tune the representations guiding the muddling process to increase functional perspicacity. Copyright © 2017 Elsevier Ltd. All rights reserved.
Semi-Infinite Geology Modeling Algorithm (SIGMA): a Modular Approach to 3D Gravity
NASA Astrophysics Data System (ADS)
Chang, J. C.; Crain, K.
2015-12-01
Conventional 3D gravity computations can take days, weeks, or even months, depending on the size and resolution of the data being modeled. Additional modeling runs, due to technical malfunctions or additional data modifications, only compound computation times even further. We propose a new modeling algorithm that utilizes vertical line elements to approximate mass, and non-gridded (point) gravity observations. This algorithm is (1) orders of magnitude faster than conventional methods, (2) accurate to less than 0.1% error, and (3) modular. The modularity of this methodology means that researchers can modify their geology/terrain or gravity data, and only the modified component needs to be re-run. Additionally, land-, sea-, and air-based platforms can be modeled at their observation point, without having to filter data into a synthesized grid.
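The closed-form attraction of a vertical line element makes the claimed speed-up plausible: the vertical gravity at a surface point from a buried vertical line of mass extending between depths z1 and z2 at horizontal distance r is g_z = Gλ(1/√(r²+z1²) − 1/√(r²+z2²)). The sketch below sums that expression over a small set of soil columns; it is a generic illustration of the idea, not the SIGMA code itself.

```python
import numpy as np

G = 6.674e-11  # gravitational constant (m^3 kg^-1 s^-2)

def gz_line_element(obs_xy, elem_xy, z_top, z_bot, line_density):
    """Vertical gravity (m/s^2) at a surface point from a buried vertical line.

    line_density is mass per unit length (kg/m); depths are positive downward.
    Closed form: g_z = G*lambda*(1/sqrt(r^2+z_top^2) - 1/sqrt(r^2+z_bot^2)).
    """
    r2 = (obs_xy[0] - elem_xy[0]) ** 2 + (obs_xy[1] - elem_xy[1]) ** 2
    return G * line_density * (1.0 / np.sqrt(r2 + z_top ** 2)
                               - 1.0 / np.sqrt(r2 + z_bot ** 2))

# Approximate a 100 m x 100 m x 200 m dense block (density contrast 300 kg/m^3)
# by a 10 x 10 grid of vertical line elements, one per 10 m x 10 m column.
# Geometry and densities are hypothetical.
drho, dx = 300.0, 10.0
lam = drho * dx * dx                     # mass per unit length of each column
xs = np.arange(-45.0, 55.0, dx)          # column centers (m)
obs = (0.0, 0.0)                         # non-gridded observation point

gz = sum(gz_line_element(obs, (x, y), z_top=100.0, z_bot=300.0, line_density=lam)
         for x in xs for y in xs)
print(f"gravity anomaly at the observation point: {gz * 1e5:.3f} mGal")
```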
Construction of dynamic stochastic simulation models using knowledge-based techniques
NASA Technical Reports Server (NTRS)
Williams, M. Douglas; Shiva, Sajjan G.
1990-01-01
Over the past three decades, computer-based simulation models have proven themselves to be cost-effective alternatives to the more structured deterministic methods of systems analysis. During this time, many techniques, tools and languages for constructing computer-based simulation models have been developed. More recently, advances in knowledge-based system technology have led many researchers to note the similarities between knowledge-based programming and simulation technologies and to investigate the potential application of knowledge-based programming techniques to simulation modeling. The integration of conventional simulation techniques with knowledge-based programming techniques is discussed to provide a development environment for constructing knowledge-based simulation models. A comparison of the techniques used in the construction of dynamic stochastic simulation models and those used in the construction of knowledge-based systems provides the requirements for the environment. This leads to the design and implementation of a knowledge-based simulation development environment. These techniques were used in the construction of several knowledge-based simulation models including the Advanced Launch System Model (ALSYM).
THE EARTH SYSTEM PREDICTION SUITE: Toward a Coordinated U.S. Modeling Capability
Theurich, Gerhard; DeLuca, C.; Campbell, T.; Liu, F.; Saint, K.; Vertenstein, M.; Chen, J.; Oehmke, R.; Doyle, J.; Whitcomb, T.; Wallcraft, A.; Iredell, M.; Black, T.; da Silva, AM; Clune, T.; Ferraro, R.; Li, P.; Kelley, M.; Aleinov, I.; Balaji, V.; Zadeh, N.; Jacob, R.; Kirtman, B.; Giraldo, F.; McCarren, D.; Sandgathe, S.; Peckham, S.; Dunlap, R.
2017-01-01
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS®); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model. PMID:29568125
Study of Burn Scar Extraction Automatically Based on Level Set Method using Remote Sensing Data
Liu, Yang; Dai, Qin; Liu, JianBo; Liu, ShiBin; Yang, Jin
2014-01-01
Burn scar extraction using remote sensing data is an efficient way to precisely evaluate burn area and measure vegetation recovery. Traditional burn scar extraction methodologies perform poorly on burn scar images with blurred and irregular edges. To address these issues, this paper proposes an automatic method to extract burn scars based on the Level Set Method (LSM). The method exploits the different features available in remote sensing images while meeting the practical need to extract the burn scar rapidly and automatically. The approach integrates Change Vector Analysis (CVA), the Normalized Difference Vegetation Index (NDVI) and the Normalized Burn Ratio (NBR) to obtain a difference image, and modifies the conventional Chan-Vese (C-V) level set model with a new initial curve derived from a binary image obtained by applying the K-means method to the fitting errors of two near-infrared band images. Landsat 5 TM and Landsat 8 OLI data sets are used to validate the proposed method. Comparisons with the conventional C-V model, the Otsu algorithm and the Fuzzy C-means (FCM) algorithm show that the proposed approach can extract the outline curve of the fire burn scar effectively and exactly. The method has higher extraction accuracy and lower algorithm complexity than the conventional C-V model. PMID:24503563
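The difference-image ingredients named above have simple band-ratio definitions, e.g. NDVI = (NIR − Red)/(NIR + Red) and NBR = (NIR − SWIR)/(NIR + SWIR), with the burn signal usually taken from their pre-/post-fire change. The sketch below computes them for synthetic Landsat-like reflectance arrays; it illustrates only the index algebra, not the level-set segmentation itself, and the threshold value is an assumption.

```python
import numpy as np

def normalized_diff(a, b, eps=1e-6):
    """Generic normalized difference (a - b) / (a + b), guarded against 0/0."""
    return (a - b) / np.maximum(a + b, eps)

# Synthetic pre- and post-fire reflectance tiles (values in 0-1); a burned
# patch in the lower-right loses NIR and gains SWIR reflectance.
shape = (100, 100)
nir_pre = np.full(shape, 0.40); swir_pre = np.full(shape, 0.15)
nir_post = nir_pre.copy();      swir_post = swir_pre.copy()
nir_post[60:, 60:] = 0.15
swir_post[60:, 60:] = 0.35

nbr_pre = normalized_diff(nir_pre, swir_pre)     # Normalized Burn Ratio, pre-fire
nbr_post = normalized_diff(nir_post, swir_post)  # Normalized Burn Ratio, post-fire
dnbr = nbr_pre - nbr_post                        # change image; high values = burn

burn_mask = dnbr > 0.27        # example dNBR threshold (assumption, not from the paper)
print(f"burned fraction: {burn_mask.mean():.2%}")
```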
Enhancing Students' Communication Skills through Treffinger Teaching Model
ERIC Educational Resources Information Center
Alhaddad, Idrus; Kusumah, Yaya S.; Sabandar, Jozua; Dahlan, Jarnawi A.
2015-01-01
This research aims to investigate, compare, and describe the achievement and enhancement of students' mathematical communication skills (MCS). It based on the prior mathematical knowledge (PMK) category (high, medium and low) by using Treffinger models (TM) and conventional learning (CL). This research is an experimental study with the population…
Benchmarking and Modeling of a Conventional Mid-Size Car Using ALPHA (SAE Paper 2015-01-1140)
The Advanced Light-Duty Powertrain and Hybrid Analysis (ALPHA) modeling tool was created by EPA to estimate greenhouse gas (GHG) emissions of light-duty vehicles. ALPHA is a physics-based, forward-looking, full vehicle computer simulation capable of analyzing various vehicle type...
On the new metrics for IMRT QA verification.
Garcia-Romero, Alejandro; Hernandez-Vitoria, Araceli; Millan-Cebrian, Esther; Alba-Escorihuela, Veronica; Serrano-Zabaleta, Sonia; Ortega-Pardina, Pablo
2016-11-01
The aim of this work is to search for new metrics that could give more reliable acceptance/rejection criteria on the IMRT verification process and to offer solutions to the discrepancies found among different conventional metrics. Therefore, besides conventional metrics, new ones are proposed and evaluated with new tools to find correlations among them. These new metrics are based on the processing of the dose-volume histogram information, evaluating the absorbed dose differences, the dose constraint fulfillment, or modified biomathematical treatment outcome models such as tumor control probability (TCP) and normal tissue complication probability (NTCP). An additional purpose is to establish whether the new metrics yield the same acceptance/rejection plan distribution as the conventional ones. Fifty eight treatment plans concerning several patient locations are analyzed. All of them were verified prior to the treatment, using conventional metrics, and retrospectively after the treatment with the new metrics. These new metrics include the definition of three continuous functions, based on dose-volume histograms resulting from measurements evaluated with a reconstructed dose system and also with a Monte Carlo redundant calculation. The 3D gamma function for every volume of interest is also calculated. The information is also processed to obtain ΔTCP or ΔNTCP for the considered volumes of interest. These biomathematical treatment outcome models have been modified to increase their sensitivity to dose changes. A robustness index from a radiobiological point of view is defined to classify plans in robustness against dose changes. Dose difference metrics can be condensed in a single parameter: the dose difference global function, with an optimal cutoff that can be determined from a receiver operating characteristics (ROC) analysis of the metric. It is not always possible to correlate differences in biomathematical treatment outcome models with dose difference metrics. This is due to the fact that the dose constraint is often far from the dose that has an actual impact on the radiobiological model, and therefore, biomathematical treatment outcome models are insensitive to big dose differences between the verification system and the treatment planning system. As an alternative, the use of modified radiobiological models which provides a better correlation is proposed. In any case, it is better to choose robust plans from a radiobiological point of view. The robustness index defined in this work is a good predictor of the plan rejection probability according to metrics derived from modified radiobiological models. The global 3D gamma-based metric calculated for each plan volume shows a good correlation with the dose difference metrics and presents a good performance in the acceptance/rejection process. Some discrepancies have been found in dose reconstruction depending on the algorithm employed. Significant and unavoidable discrepancies were found between the conventional metrics and the new ones. The dose difference global function and the 3D gamma for each plan volume are good classifiers regarding dose difference metrics. ROC analysis is useful to evaluate the predictive power of the new metrics. The correlation between biomathematical treatment outcome models and the dose difference-based metrics is enhanced by using modified TCP and NTCP functions that take into account the dose constraints for each plan. The robustness index is useful to evaluate if a plan is likely to be rejected. 
Conventional verification should be replaced by the new metrics, which are clinically more relevant.
ERIC Educational Resources Information Center
Sahhyar; Nst, Febriani Hastini
2017-01-01
The purpose of this research was to analyze whether the physics cognitive competence and science process skills of students taught with a scientific inquiry learning model based on conceptual change were better than those of students taught with conventional learning. The research type was a quasi-experiment, and a two-group pretest-posttest design was used in this study. The sample were Class…
Vortex Wakes of Conventional Aircraft
1975-05-01
Research Laboratories, Wright-Patterson Air Force Base, Ohio 45433, USA. This work was prepared at the request of the Fluid Dynamics Panel of AGARD. Two models have been developed to describe the inviscid structure of the vortex wake. The first model was due to Prandtl [10] and is based on the
NASA Astrophysics Data System (ADS)
Gang, Grace J.; Siewerdsen, Jeffrey H.; Webster Stayman, J.
2017-06-01
Tube current modulation (TCM) is routinely adopted on diagnostic CT scanners for dose reduction. Conventional TCM strategies are generally designed for filtered-backprojection (FBP) reconstruction to satisfy simple image quality requirements based on noise. This work investigates TCM designs for model-based iterative reconstruction (MBIR) to achieve optimal imaging performance as determined by a task-based image quality metric. Additionally, regularization is an important aspect of MBIR that is jointly optimized with TCM, and includes both the regularization strength that controls overall smoothness as well as directional weights that permit control of the isotropy/anisotropy of the local noise and resolution properties. Initial investigations focus on a known imaging task at a single location in the image volume. The framework adopts Fourier and analytical approximations for fast estimation of the local noise power spectrum (NPS) and modulation transfer function (MTF), each carrying dependencies on TCM and regularization. For the single location optimization, the local detectability index (d′) of the specific task was directly adopted as the objective function. A covariance matrix adaptation evolution strategy (CMA-ES) algorithm was employed to identify the optimal combination of imaging parameters. Evaluations of both conventional and task-driven approaches were performed in an abdomen phantom for a mid-frequency discrimination task in the kidney. Among the conventional strategies, the TCM pattern optimal for FBP using a minimum variance criterion yielded a worse task-based performance compared to an unmodulated strategy when applied to MBIR. Moreover, task-driven TCM designs for MBIR were found to have the opposite behavior from conventional designs for FBP, with greater fluence assigned to the less attenuating views of the abdomen and less fluence to the more attenuating lateral views. Such TCM patterns exaggerate the intrinsic anisotropy of the MTF and NPS as a result of the data weighting in MBIR. Directional penalty design was found to reinforce the same trend. The task-driven approaches outperform conventional approaches, with the maximum improvement in d′ of 13% given by the joint optimization of TCM and regularization. This work demonstrates that the TCM optimal for MBIR is distinct from conventional strategies proposed for FBP reconstruction and strategies optimal for FBP are suboptimal and may even reduce performance when applied to MBIR. The task-driven imaging framework offers a promising approach for optimizing acquisition and reconstruction for MBIR that can improve imaging performance and/or dose utilization beyond conventional imaging strategies.
Xiao, Yongling; Abrahamowicz, Michal
2010-03-30
We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs, and type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid serious variance under-estimation by conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of cluster event times.
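The cluster-bootstrap idea is generic and can be sketched independently of any particular survival package: resample whole clusters with replacement, refit the model on each resample, and take the standard deviation of the estimates as the SE. In the sketch below, `fit_model` is a placeholder for whatever routine returns the coefficient of interest (for example a Cox fit); the data frame layout, column names, and demo statistic are assumptions made for illustration.

```python
import numpy as np
import pandas as pd

def cluster_bootstrap_se(df, cluster_col, fit_model, n_boot=500, seed=1):
    """SE of a coefficient via the cluster bootstrap (resample clusters only).

    fit_model(df) must return the scalar estimate of interest, e.g. a log
    hazard ratio from a Cox model fit to the resampled data.
    """
    rng = np.random.default_rng(seed)
    clusters = df[cluster_col].unique()
    groups = {c: g for c, g in df.groupby(cluster_col)}
    estimates = []
    for _ in range(n_boot):
        drawn = rng.choice(clusters, size=len(clusters), replace=True)
        # concatenate whole clusters; duplicated clusters are kept as copies
        boot_df = pd.concat([groups[c] for c in drawn], ignore_index=True)
        estimates.append(fit_model(boot_df))
    return float(np.std(estimates, ddof=1))

# Demo with a deliberately trivial "model": the exposed-vs-unexposed
# difference in mean follow-up time (a stand-in for a Cox coefficient).
rng = np.random.default_rng(0)
demo = pd.DataFrame({
    "cluster": np.repeat(np.arange(20), 10),
    "exposed": np.tile([0, 1], 100),
    "time": rng.exponential(5.0, 200) + np.repeat(rng.normal(0, 1.0, 20), 10),
})
diff = lambda d: (d.loc[d.exposed == 1, "time"].mean()
                  - d.loc[d.exposed == 0, "time"].mean())
print(f"cluster-bootstrap SE: {cluster_bootstrap_se(demo, 'cluster', diff):.3f}")
```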
Experimental comparison of conventional and nonlinear model-based control of a mixing tank
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haeggblom, K.E.
1993-11-01
In this case study concerning control of a laboratory-scale mixing tank, conventional multiloop single-input single-output (SISO) control is compared with "model-based" control where the nonlinearity and multivariable characteristics of the process are explicitly taken into account. It is shown, especially if the operating range of the process is large, that the two outputs (level and temperature) cannot be adequately controlled by multiloop SISO control even if gain scheduling is used. By nonlinear multiple-input multiple-output (MIMO) control, on the other hand, very good control performance is obtained. The basic approach to nonlinear control used in this study is first to transform the process into a globally linear and decoupled system, and then to design controllers for this system. Because of the properties of the resulting MIMO system, the controller design is very easy. Two nonlinear control system designs based on a steady-state and a dynamic model, respectively, are considered. In the dynamic case, both setpoint tracking and disturbance rejection can be addressed separately.
Ceramic matrix composite behavior -- Computational simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chamis, C.C.; Murthy, P.L.N.; Mital, S.K.
Development of analytical modeling and computational capabilities for the prediction of high temperature ceramic matrix composite behavior has been an ongoing research activity at NASA-Lewis Research Center. These research activities have resulted in the development of micromechanics based methodologies to evaluate different aspects of ceramic matrix composite behavior. The basis of the approach is micromechanics together with a unique fiber substructuring concept. In this new concept the conventional unit cell (the smallest representative volume element of the composite) of the micromechanics approach has been modified by substructuring the unit cell into several slices and developing the micromechanics based equations at the slice level. The main advantage of this technique is that it can provide much greater detail in the response of composite behavior as compared to a conventional micromechanics based analysis and still maintains a very high computational efficiency. This methodology has recently been extended to model plain weave ceramic composites. The objective of the present paper is to describe the important features of the modeling and simulation and illustrate with select examples of laminated as well as woven composites.
Solving coupled groundwater flow systems using a Jacobian Free Newton Krylov method
NASA Astrophysics Data System (ADS)
Mehl, S.
2012-12-01
Jacobian Free Newton Krylov (JFNK) methods can have several advantages for simulating coupled groundwater flow processes versus conventional methods. Conventional methods are defined here as those based on an iterative coupling (rather than a direct coupling) and/or that use Picard iteration rather than Newton iteration. In an iterative coupling, the systems are solved separately, coupling information is updated and exchanged between the systems, and the systems are re-solved, etc., until convergence is achieved. Trusted simulators, such as Modflow, are based on these conventional methods of coupling and work well in many cases. An advantage of the JFNK method is that it only requires calculation of the residual vector of the system of equations and thus can make use of existing simulators regardless of how the equations are formulated. This opens the possibility of coupling different process models via augmentation of a residual vector by each separate process, which often requires substantially fewer changes to the existing source code than if the processes were directly coupled. However, appropriate perturbation sizes need to be determined for accurate approximations of the Frechet derivative, which is not always straightforward. Furthermore, preconditioning is necessary for reasonable convergence of the linear solution required at each Krylov iteration. Existing preconditioners can be used and applied separately to each process, which maximizes use of existing code and robust preconditioners. In this work, iteratively coupled parent-child local grid refinement models of groundwater flow and groundwater flow models with nonlinear exchanges to streams are used to demonstrate the utility of the JFNK approach for Modflow models. Use of incomplete Cholesky preconditioners with various levels of fill are examined on a suite of nonlinear and linear models to analyze the effect of the preconditioner. Comparisons of convergence and computer simulation time are made using conventional iteratively coupled methods and those based on Picard iteration to those formulated with JFNK to gain insights on the types of nonlinearities and system features that make one approach advantageous. Results indicate that nonlinearities associated with stream/aquifer exchanges are more problematic than those resulting from unconfined flow.
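The core of JFNK, the matrix-free Jacobian-vector product, is easy to show in isolation: J(u)·v is approximated by a one-sided difference of the residual function, and the resulting linear operator is handed to a Krylov solver such as GMRES. The sketch below applies this to a small, made-up nonlinear system; the residual, the perturbation-size rule, and the unpreconditioned inner solve are simplifications relative to the coupled groundwater application described in the abstract.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def residual(u):
    """Toy nonlinear residual F(u) = 0 (a 1D diffusion problem with a
    nonlinear source term), standing in for a coupled-process residual."""
    F = np.empty(u.size)
    F[0] = u[0] - 1.0                      # Dirichlet boundary
    F[-1] = u[-1] - 0.0
    F[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2] - 0.05 * np.exp(u[1:-1])
    return F

def jfnk_solve(u0, tol=1e-6, max_newton=20):
    u = u0.copy()
    for _ in range(max_newton):
        F = residual(u)
        if np.linalg.norm(F) < tol:
            break
        eps = 1e-7 * (1.0 + np.linalg.norm(u))   # simple perturbation-size rule

        def Jv(v):
            # finite-difference Frechet derivative: J(u) v ~ (F(u+eps v) - F(u)) / eps
            return (residual(u + eps * v) - F) / eps

        J = LinearOperator((u.size, u.size), matvec=Jv, dtype=float)
        du, info = gmres(J, -F)              # inner Krylov solve (info==0: converged)
        u += du
    return u

u = jfnk_solve(np.linspace(1.0, 0.0, 50))
print(f"residual norm after JFNK: {np.linalg.norm(residual(u)):.2e}")
```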
The early maximum likelihood estimation model of audiovisual integration in speech perception.
Andersen, Tobias S
2015-05-01
Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk-MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but also has been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual integration to speech perception along with three model variations. In early MLE, integration is based on a continuous internal representation before categorization, which can make the model more parsimonious by imposing constraints that reflect experimental designs. The study also shows that cross-validation can evaluate models of audiovisual integration based on typical data sets taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures favored more complex models. This difference between conventional error measures and cross-validation was found to be indicative of over-fitting in more complex models such as the FLMP.
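For readers unfamiliar with MLE cue combination, the following sketch illustrates the "early" integration idea: the two cues are fused on a continuous internal representation, weighted by their inverse variances, and categorization is applied only afterwards. It is a textbook inverse-variance weighting example rather than the published model or its fitting procedure, and the cue means, standard deviations, and category boundary are hypothetical.

```python
import numpy as np

# Hypothetical internal representations: auditory and visual cues place the
# stimulus on a continuous feature axis with different reliabilities.
audio_estimate, audio_sd = 0.2, 0.8    # noisy auditory percept
visual_estimate, visual_sd = 1.0, 0.4  # sharper visual (lip-read) percept

# Early MLE: fuse the continuous representations before categorization,
# weighting each cue by its inverse variance.
w_a = 1.0 / audio_sd**2
w_v = 1.0 / visual_sd**2
fused = (w_a * audio_estimate + w_v * visual_estimate) / (w_a + w_v)
fused_sd = np.sqrt(1.0 / (w_a + w_v))

# Categorization happens only after integration, e.g. against a boundary at 0.5.
percept = "da" if fused > 0.5 else "ba"
print(fused, fused_sd, percept)
```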
[Evaluation of estimation of prevalence ratio using bayesian log-binomial regression model].
Gao, W L; Lin, H; Liu, X N; Ren, X W; Li, J S; Shen, X P; Zhu, S L
2017-03-10
To evaluate the estimation of the prevalence ratio (PR) using a Bayesian log-binomial regression model and its application, we estimated the PR of medical care-seeking prevalence with respect to caregivers' recognition of risk signs of diarrhea in their infants by using a Bayesian log-binomial regression model in OpenBUGS software. The results showed that caregivers' recognition of an infant's risk signs of diarrhea was significantly associated with a 13% increase in medical care-seeking. Meanwhile, we compared the differences in the point and interval estimates of the PR of medical care-seeking prevalence with respect to caregivers' recognition of risk signs of diarrhea, and the convergence of three models (model 1: not adjusting for covariates; model 2: adjusting for duration of caregivers' education; model 3: adjusting for distance between village and township and child age in months, based on model 2), between the Bayesian log-binomial regression model and the conventional log-binomial regression model. The results showed that all three Bayesian log-binomial regression models converged and the estimated PRs were 1.130 (95%CI: 1.005-1.265), 1.128 (95%CI: 1.001-1.264) and 1.132 (95%CI: 1.004-1.267), respectively. Conventional log-binomial regression models 1 and 2 converged and their PRs were 1.130 (95%CI: 1.055-1.206) and 1.126 (95%CI: 1.051-1.203), respectively, but model 3 failed to converge, so the COPY method was used to estimate the PR, which was 1.125 (95%CI: 1.051-1.200). In addition, the point and interval estimates of the PRs from the three Bayesian log-binomial regression models differed slightly from those of the conventional log-binomial regression models, but they showed good consistency in estimating the PR. Therefore, the Bayesian log-binomial regression model can effectively estimate the PR with less non-convergence and has advantages in application compared with the conventional log-binomial regression model.
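A Bayesian log-binomial model of this kind can be sketched without specialized software. The following minimal example, which is not the authors' OpenBUGS code, fits a one-covariate log-binomial regression with a random-walk Metropolis sampler and reports the posterior PR; the simulated data, priors, and tuning constants are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: x = caregiver recognised risk signs (1) or not (0),
# y = sought medical care (1) or not (0).
n = 500
x = rng.integers(0, 2, n)
p_true = np.exp(np.log(0.6) + np.log(1.13) * x)   # true PR = 1.13
y = rng.random(n) < p_true

def log_post(beta):
    """Log-posterior of a log-binomial model with vague normal priors."""
    p = np.exp(beta[0] + beta[1] * x)
    if np.any(p >= 1.0):                  # log link can push p past 1: reject
        return -np.inf
    loglik = np.sum(np.where(y, np.log(p), np.log1p(-p)))
    logprior = -0.5 * np.sum(beta**2) / 100.0   # N(0, 10^2) priors
    return loglik + logprior

# Random-walk Metropolis sampler (a BUGS-style engine would do this internally).
beta = np.array([np.log(0.5), 0.0])
samples = []
for it in range(20000):
    prop = beta + rng.normal(scale=0.05, size=2)
    if np.log(rng.random()) < log_post(prop) - log_post(beta):
        beta = prop
    if it >= 5000:                        # discard burn-in
        samples.append(beta[1])

pr = np.exp(np.array(samples))
print("PR:", pr.mean(), "95% interval:", np.percentile(pr, [2.5, 97.5]))
```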
Fuzzy logic based robotic controller
NASA Technical Reports Server (NTRS)
Attia, F.; Upadhyaya, M.
1994-01-01
Existing Proportional-Integral-Derivative (PID) robotic controllers rely on an inverse kinematic model to convert user-specified cartesian trajectory coordinates to joint variables. These joints experience friction, stiction, and gear backlash effects. Because these effects are not properly linearized, modern control theory based on state-space methods cannot provide adequate control for robotic systems. In the presence of loads, the dynamic behavior of robotic systems is complex and nonlinear, especially when mathematical models must be evaluated in real time. Fuzzy Logic Control is a fast-emerging alternative to conventional control systems in situations where it may not be feasible to formulate an analytical model of the complex system. Fuzzy logic techniques track a user-defined trajectory without requiring the host computer to explicitly solve the nonlinear inverse kinematic equations. The goal is to provide a rule-based approach, which is closer to human reasoning. The approach used expresses end-point error, location of manipulator joints, and proximity to obstacles as fuzzy variables. The resulting decisions are based upon linguistic and non-numerical information. This paper presents an alternative to the conventional robot controller that is independent of computationally intensive kinematic equations. Computer simulation results of this approach as obtained from a software implementation are also discussed.
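The rule-based idea can be made concrete with a toy controller. The sketch below is only loosely inspired by the paper and uses hypothetical membership functions, ranges, and rule consequents: it fuzzifies a single end-point error into three linguistic terms and defuzzifies the rule outputs into a joint-rate command, with no inverse kinematics involved.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with peak at b and feet at a and c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_joint_command(endpoint_error):
    """Map a Cartesian end-point error (hypothetical units) to a joint-rate
    command with three linguistic rules, without solving inverse kinematics."""
    # Fuzzification
    neg  = tri(endpoint_error, -2.0, -1.0, 0.0)
    zero = tri(endpoint_error, -1.0,  0.0, 1.0)
    pos  = tri(endpoint_error,  0.0,  1.0, 2.0)
    # Rule base: IF error is negative THEN rate is -0.5 rad/s, etc.
    strengths = np.array([neg, zero, pos])
    rate_singletons = np.array([-0.5, 0.0, 0.5])
    # Defuzzification by weighted average of the rule consequents
    if strengths.sum() == 0.0:
        return 0.0
    return float(strengths @ rate_singletons / strengths.sum())

print(fuzzy_joint_command(0.3))   # small positive error -> small positive rate
```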
Sobral, Guilherme Caiado; Vedovello, Mário; Degan, Viviane Veroni; Santamaria, Milton
2014-01-01
OBJECTIVE: By means of a photoelastic model, this study analyzed the stress produced by conventional and self-ligating brackets with expanded arch wires. METHOD: Standard brackets were bonded to artificial teeth and a photoelastic model was prepared using the Interlandi 19/12 diagram as a base. Successive activations were made with 0.014-in and 0.018-in round cross-section nickel-titanium (NiTi) wires and 0.019 x 0.025-in rectangular stainless steel wires, all made on the 22/14 Interlandi diagram. The model was observed in a plane polariscope - in a dark-field configuration - and photographed at each exchange of wire. The brackets were then replaced by self-ligating brackets and the process was repeated. The analysis was qualitative and considered the stress location and pattern in both models. CONCLUSIONS: Results identified greater stress in the region of the apex of the premolars in both models. Upon comparing the stress between models, a greater amount of stress was found in the model with conventional brackets for all wires. Therefore, the present pilot study revealed that alignment with wires in self-ligating brackets produced lower stress in the periodontal tissues under expansive mechanics. PMID:25715719
CNN: a speaker recognition system using a cascaded neural network.
Zaki, M; Ghalwash, A; Elkouny, A A
1996-05-01
The main emphasis of this paper is to present an approach for combining supervised and unsupervised neural network models for speaker recognition. To enhance the overall operation and performance of recognition, the proposed strategy integrates the two techniques, forming one global model called the cascaded model. We first present a simple conventional technique based on the distance measured between a test vector and a reference vector for different speakers in the population. This particular distance metric has the property of weighting down the components in those directions along which the intraspeaker variance is large. The reason for presenting this method is to clarify the discrepancy in performance between the conventional and neural network approaches. We then introduce the idea of using an unsupervised learning technique, represented by the winner-take-all model, as a means of recognition. Based on several tests that were conducted, and in order to enhance the performance of this model when dealing with noisy patterns, we preceded it with a supervised learning model--the pattern association model--which acts as a filtration stage. This work includes the design and implementation of both conventional and neural network approaches to recognize the speakers' templates, which are introduced to the system via a voice master card and preprocessed before extracting the features used in the recognition. The conclusion indicates that the system performance in the case of the neural network is better than that of the conventional one, achieving smoother degradation with noisy patterns and higher performance with noise-free patterns.
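The conventional baseline described above, a distance that down-weights feature directions with large intraspeaker variance, can be sketched as follows; the templates, feature dimension, and variance values are hypothetical and the neural network stages are not reproduced.

```python
import numpy as np

# Hypothetical feature templates: one reference vector per speaker.
rng = np.random.default_rng(1)
references = rng.normal(size=(5, 12))            # 5 speakers, 12 features
intraspeaker_var = rng.uniform(0.5, 2.0, 12)     # per-dimension variability

def weighted_distance(test_vec, ref_vec):
    """Distance that down-weights directions with large intraspeaker variance,
    as in the conventional baseline described above."""
    diff = test_vec - ref_vec
    return float(np.sum(diff**2 / intraspeaker_var))

test = references[2] + rng.normal(scale=0.3, size=12)   # noisy sample of speaker 2
scores = [weighted_distance(test, r) for r in references]
print("recognised speaker:", int(np.argmin(scores)))
```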
Modeling high signal-to-noise ratio in a novel silicon MEMS microphone with comb readout
NASA Astrophysics Data System (ADS)
Manz, Johannes; Dehe, Alfons; Schrag, Gabriele
2017-05-01
Strong competition within the consumer market urges companies to constantly improve the quality of their devices. For silicon microphones, excellent sound quality is the key feature in this respect, which means that improving the signal-to-noise ratio (SNR), which is strongly correlated with sound quality, is a major task in fulfilling the growing demands of the market. MEMS microphones with conventional capacitive readout suffer from noise caused by viscous damping losses arising from perforations in the backplate [1]. Therefore, we conceived a novel microphone design based on capacitive read-out via comb structures, which is expected to show a reduction in fluidic damping compared to conventional MEMS microphones. In order to evaluate the potential of the proposed design, we developed a fully energy-coupled, modular system-level model taking into account the mechanical motion, the slide-film damping between the comb fingers, the acoustic impact of the package and the capacitive read-out. All submodels are physically based and scale with all relevant design parameters. We carried out noise analyses and, owing to the modular and physics-based character of the model, were able to discriminate the noise contributions of different parts of the microphone. This enables us to identify design variants of this concept which exhibit an SNR of up to 73 dB(A). This is superior to conventional MEMS microphones and at least comparable to high-performance variants of the current state-of-the-art devices [2].
Experimental study of geotextile as plinth beam in a pile group-supported modeled building frame
NASA Astrophysics Data System (ADS)
Ravi Kumar Reddy, C.; Gunneswara Rao, T. D.
2017-12-01
This paper presents the experimental results of static vertical load tests on a model building frame with geotextile as the plinth beam, supported by pile groups embedded in cohesionless soil (sand). The experimental results have been compared with those obtained from nonlinear FEA and the conventional method of analysis. The results revealed that, for the frame with geotextile as the plinth beam, the conventional method of analysis gives a shear force about 53% higher, a bending moment at the top of the column about 17% higher, and a bending moment at the base of the column about 50-98% higher than those given by the nonlinear FEA.
Tang, Yadong; Huang, Boxin; Dong, Yuqin; Wang, Wenlong; Zheng, Xi; Zhou, Wei; Zhang, Kun; Du, Zhiyun
2017-10-01
In vitro cell-based assays are widely applied to evaluate anti-cancer drug efficacy. However, the conventional approaches are mostly based on two-dimensional (2D) culture systems, making it difficult to recapitulate the in vivo tumor scenario because of spatial limitations. Here, we develop an in vitro three-dimensional (3D) prostate tumor model based on a hyaluronic acid (HA)-alginate hybrid hydrogel to bridge the gap between in vitro and in vivo anticancer drug evaluations. In situ encapsulation of PCa cells was achieved by mixing HA and alginate aqueous solutions in the presence of cells and then crosslinking with calcium ions. Unlike in 2D culture, cells were found to aggregate into spheroids in a 3D matrix. The expression of epithelial to mesenchyme transition (EMT) biomarkers was found to be largely enhanced, indicating an increased invasion and metastasis potential in the hydrogel matrix. A significant up-regulation of proangiogenic growth factors (IL-8, VEGF) and matrix metalloproteinases (MMPs) was observed in 3D-cultured PCa cells. The results of anti-cancer drug evaluation suggested a higher drug tolerance within the 3D tumor model compared to conventional 2D-cultured cells. Finally, we found that the drug effect within the in vitro 3D cancer model based on HA-alginate matrix exhibited better predictability for in vivo drug efficacy.
An efficient temporal database design method based on EER
NASA Astrophysics Data System (ADS)
Liu, Zhi; Huang, Jiping; Miao, Hua
2007-12-01
Many existing methods of modeling temporal information are based on the logical model, which makes relational schema optimization more difficult and more complicated. In this paper, based on the conventional EER model, the authors attempt to analyse and abstract temporal information in the conceptual modelling phase according to the concrete requirements for historical information. A temporal data model named BTEER is then presented. BTEER not only retains all the design ideas and methods of EER, which gives it good upward compatibility, but also effectively supports the modelling of valid time and transaction time. In addition, BTEER can be transformed to EER easily and automatically. Practice has shown that this method can model temporal information well.
Kwon, Deukwoo; Hoffman, F Owen; Moroz, Brian E; Simon, Steven L
2016-02-10
Most conventional risk analysis methods rely on a single best estimate of exposure per person, which does not allow for adjustment for exposure-related uncertainty. Here, we propose a Bayesian model averaging method to properly quantify the relationship between radiation dose and disease outcomes by accounting for shared and unshared uncertainty in estimated dose. Our Bayesian risk analysis method utilizes multiple realizations of sets (vectors) of doses generated by a two-dimensional Monte Carlo simulation method that properly separates shared and unshared errors in dose estimation. The exposure model used in this work is taken from a study of the risk of thyroid nodules among a cohort of 2376 subjects who were exposed to fallout from nuclear testing in Kazakhstan. We assessed the performance of our method through an extensive series of simulations and comparisons against conventional regression risk analysis methods. When the estimated doses contain relatively small amounts of uncertainty, the Bayesian method using multiple a priori plausible draws of dose vectors gave similar results to the conventional regression-based methods of dose-response analysis. However, when large and complex mixtures of shared and unshared uncertainties are present, the Bayesian method using multiple dose vectors had significantly lower relative bias than conventional regression-based risk analysis methods and better coverage, that is, a markedly increased capability to include the true risk coefficient within the 95% credible interval of the Bayesian-based risk estimate. An evaluation of the dose-response using our method is presented for an epidemiological study of thyroid disease following radiation exposure. Copyright © 2015 John Wiley & Sons, Ltd.
Autonomous control systems - Architecture and fundamental issues
NASA Technical Reports Server (NTRS)
Antsaklis, P. J.; Passino, K. M.; Wang, S. J.
1988-01-01
A hierarchical functional autonomous controller architecture is introduced. In particular, the architecture for the control of future space vehicles is described in detail; it is designed to ensure the autonomous operation of the control system and it allows interaction with the pilot and crew/ground station, and with the systems on board the autonomous vehicle. The fundamental issues in autonomous control system modeling and analysis are discussed. It is proposed to utilize a hybrid approach to modeling and analysis of autonomous systems. This will incorporate conventional control methods based on differential equations and techniques for the analysis of systems described with a symbolic formalism. In this way, the theory of conventional control can be fully utilized. It is stressed that autonomy is the design requirement and that intelligent control methods appear, at present, to offer some of the necessary tools to achieve autonomy. A conventional approach may evolve and replace some or all of the `intelligent' functions. It is shown that in addition to conventional controllers, the autonomous control system incorporates planning, learning, and FDI (fault detection and identification).
Lee, Hyung-Min; Howell, Bryan; Grill, Warren M; Ghovanloo, Maysam
2018-05-01
The purpose of this study was to test the feasibility of using a switched-capacitor discharge stimulation (SCDS) system for electrical stimulation, and, subsequently, determine the overall energy saved compared to a conventional stimulator. We have constructed a computational model by pairing an image-based volume conductor model of the cat head with cable models of corticospinal tract (CST) axons and quantified the theoretical stimulation efficiency of rectangular and decaying exponential waveforms, produced by conventional and SCDS systems, respectively. Subsequently, the model predictions were tested in vivo by activating axons in the posterior internal capsule and recording evoked electromyography (EMG) in the contralateral upper arm muscles. Compared to rectangular waveforms, decaying exponential waveforms with time constants >500 μs were predicted to require 2%-4% less stimulus energy to directly activate models of CST axons and 0.4%-2% less stimulus energy to evoke EMG activity in vivo. Using the calculated wireless input energy of the stimulation system and the measured stimulus energies required to evoke EMG activity, we predict that an SCDS implantable pulse generator (IPG) will require 40% less input energy than a conventional IPG to activate target neural elements. A wireless SCDS IPG that is more energy efficient than a conventional IPG will reduce the size of an implant, require that less wireless energy be transmitted through the skin, and extend the lifetime of the battery in the external power transmitter.
Predicting species distributions from checklist data using site-occupancy models
Kery, M.; Gardner, B.; Monnerat, C.
2010-01-01
Aim: (1) To increase awareness of the challenges induced by imperfect detection, which is a fundamental issue in species distribution modelling; (2) to emphasize the value of replicate observations for species distribution modelling; and (3) to show how 'cheap' checklist data in faunal/floral databases may be used for the rigorous modelling of distributions by site-occupancy models. Location: Switzerland. Methods: We used checklist data collected by volunteers during 1999 and 2000 to analyse the distribution of the blue hawker, Aeshna cyanea (Odonata, Aeshnidae), a common dragonfly in Switzerland. We used data from repeated visits to 1-ha pixels to derive 'detection histories' and apply site-occupancy models to estimate the 'true' species distribution, i.e. corrected for imperfect detection. We modelled blue hawker distribution as a function of elevation and year, and its detection probability as a function of elevation, year and season. Results: The best model contained cubic polynomial elevation effects for distribution and quadratic effects of elevation and season for detectability. We compared the site-occupancy model with a conventional distribution model based on a generalized linear model, which assumes perfect detectability (p = 1). The conventional distribution map looked very different from the distribution map obtained using site-occupancy models that accounted for the imperfect detection. The conventional model underestimated the species distribution by 60%, and the slope parameters of the occurrence-elevation relationship were also underestimated when assuming p = 1. Elevation was not only an important predictor of blue hawker occurrence, but also of the detection probability, with a bell-shaped relationship. Furthermore, detectability increased over the season. The average detection probability was estimated at only 0.19 per survey. Main conclusions: Conventional species distribution models do not model species distributions per se but rather the apparent distribution, i.e. an unknown proportion of species distributions. That unknown proportion is equivalent to detectability. Imperfect detection in conventional species distribution models yields underestimates of the extent of distributions and covariate effects that are biased towards zero. In addition, patterns in detectability will erroneously be ascribed to species distributions. In contrast, site-occupancy models applied to replicated detection/non-detection data offer a powerful framework for making inferences about species distributions corrected for imperfect detection. The use of 'cheap' checklist data greatly enhances the scope of applications of this useful class of models. © 2010 Blackwell Publishing Ltd.
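The core of a site-occupancy model is a likelihood that mixes an occupancy probability with a visit-level detection probability. The sketch below fits the simplest constant-parameter version to simulated detection histories; the data are hypothetical and the elevation and season covariates of the actual analysis are omitted.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Hypothetical detection histories: rows are 1-ha pixels, columns are repeat
# visits; 1 = species detected, 0 = not detected.
rng = np.random.default_rng(2)
n_sites, n_visits = 200, 3
psi_true, p_true = 0.6, 0.19
z = rng.random(n_sites) < psi_true                       # true (latent) occupancy
y = (rng.random((n_sites, n_visits)) < p_true) & z[:, None]

def neg_log_lik(params):
    """Constant-psi, constant-p occupancy model (covariates omitted)."""
    psi, p = expit(params)                                # keep both in (0, 1)
    det = y.sum(axis=1)
    lik_if_occupied = p**det * (1 - p)**(n_visits - det)
    lik = psi * lik_if_occupied + (1 - psi) * (det == 0)  # never-detected term
    return -np.sum(np.log(lik))

fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
print("estimated psi, p:", expit(fit.x))
```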
Optimization model of conventional missile maneuvering route based on improved Floyd algorithm
NASA Astrophysics Data System (ADS)
Wu, Runping; Liu, Weidong
2018-04-01
Missile combat plays a crucial role in the victory of war under high-tech conditions. According to the characteristics of the maneuver tasks of conventional missile units in combat operations, the factors influencing road maneuvering are analyzed, including road distance, road conflicts, launching device speed, position requirements, launch device deployment, concealment, and so on. A shortest-time optimization model was built to discuss the situation of road conflict and the strategy of conflict resolution. The results suggest that, in the process of resolving road conflicts, waiting at a node is more effective than detouring by another route. In this study, we analyzed the deficiency of the traditional Floyd algorithm, which may limit the optimal way of resolving road conflicts, put forward an improved Floyd algorithm, and designed the algorithm flow, which performs better than the traditional Floyd algorithm. Finally, through a numerical example, the model and the algorithm were shown to be reliable and effective.
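For reference, the standard Floyd-Warshall recursion that the paper builds on computes all-pairs shortest travel times and, with a successor table, the corresponding routes. The sketch below is the textbook algorithm only; the conflict-resolution and node-waiting logic of the improved version is not shown, and the road network and travel times are hypothetical.

```python
import math

def floyd_warshall(dist):
    """All-pairs shortest travel times; dist[i][j] = math.inf if no road."""
    n = len(dist)
    d = [row[:] for row in dist]
    nxt = [[j if d[i][j] < math.inf else None for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
                    nxt[i][j] = nxt[i][k]          # keep the route, not just the cost
    return d, nxt

def route(nxt, i, j):
    """Reconstruct the node sequence of the shortest maneuver route."""
    if nxt[i][j] is None:
        return []
    path = [i]
    while i != j:
        i = nxt[i][j]
        path.append(i)
    return path

# Hypothetical road network travel times (minutes) between four positions.
INF = math.inf
times = [[0, 10, INF, 30],
         [10, 0, 5, INF],
         [INF, 5, 0, 8],
         [30, INF, 8, 0]]
d, nxt = floyd_warshall(times)
print(d[0][3], route(nxt, 0, 3))   # 23 minutes via nodes 0 -> 1 -> 2 -> 3
```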
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1979-10-01
Volume IV of the ISTUM documentation gives information on the individual technology specifications, but relates closely to Chapter II of Volume I. The emphasis in that chapter is on providing an overview of where each technology fits into the general model logic. Volume IV presents the actual cost structure and specification of every technology modeled in ISTUM. The first chapter presents a general overview of the ISTUM technology data base. It includes an explanation of the data base printouts and how the separate cost building blocks are combined to derive an aggregate technology cost. The remaining chapters are devoted to documenting the specific technology cost specifications. Technologies included are: conventional technologies (boiler and non-boiler conventional technologies); fossil-energy technologies (atmospheric fluidized bed combustion, low Btu coal and medium Btu coal gasification); cogeneration (steam, machine drive, and electrolytic service sectors); solar and geothermal technologies (solar steam, solar space heat, and geothermal steam technologies); and conservation technologies.
Combining global and local approximations
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
1991-01-01
A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
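The GLA idea can be illustrated with two scalar models standing in for the crude and refined analyses. In the sketch below, which uses hypothetical response functions and a finite-difference derivative rather than the analytical sensitivities of the paper, the scaling factor between the two models is extrapolated linearly away from the reference point; freezing the factor at its reference value would recover the conventional scaled approximation.

```python
import numpy as np

# Hypothetical stand-ins for a crude and a refined structural response
# (e.g. tip deflection of a beam as a function of a design variable x).
def f_crude(x):   return 1.0 + 0.5 * x
def f_refined(x): return 1.0 + 0.55 * x + 0.05 * x**2

x0, h = 1.0, 1e-6
beta0 = f_refined(x0) / f_crude(x0)
# Derivative of the scaling factor at x0 by finite differences.
dbeta = (f_refined(x0 + h) / f_crude(x0 + h) - beta0) / h

def gla_approx(x):
    """Refined-model estimate from the crude model and a linearly varying
    scaling factor (the GLA idea)."""
    return (beta0 + dbeta * (x - x0)) * f_crude(x)

for x in (1.0, 1.5, 2.0):
    print(x, gla_approx(x), f_refined(x))
```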
Performance Model of Intercity Ground Passenger Transportation Systems
DOT National Transportation Integrated Search
1975-08-01
A preliminary examination of the problems associated with mixed-traffic operations - conventional freight and high speed passenger trains - is presented. Approaches based upon a modest upgrading of existing signal systems are described. Potential cos...
ERIC Educational Resources Information Center
Darabi, Aubteen; Nelson, David W.; Meeker, Richard; Liang, Xinya; Boulware, Wilma
2010-01-01
In a diagnostic problem solving operation of a computer-simulated chemical plant, chemical engineering students were randomly assigned to two groups: one studying product-oriented worked examples, the other practicing conventional problem solving. Effects of these instructional strategies on the progression of learners' mental models were examined…
Large-eddy simulation of turbulent flow with a surface-mounted two-dimensional obstacle
NASA Technical Reports Server (NTRS)
Yang, Kyung-Soo; Ferziger, Joel H.
1993-01-01
In this paper, we perform a large eddy simulation (LES) of turbulent flow in a channel containing a two-dimensional obstacle on one wall using a dynamic subgrid-scale model (DSGSM) at Re = 3210, based on the bulk velocity above the obstacle and the obstacle height; the wall layers are fully resolved. The low Re enables us to perform a DNS (Case 1) against which to validate the LES results. The LES with the DSGSM is designated Case 2. In addition, an LES with the conventional fixed model constant (Case 3) is conducted to allow identification of improvements due to the DSGSM. We also include an LES at Re = 82,000 (Case 4) using the conventional Smagorinsky subgrid-scale model and a wall-layer model. The results will be compared with the experiment of Dimaczek et al.
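As background for the subgrid-scale modeling mentioned above, the sketch below evaluates the fixed-constant Smagorinsky eddy viscosity on a hypothetical 2-D resolved velocity field; the dynamic model of Case 2 would instead compute the constant from a test filter (Germano identity), which is not shown.

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, dy, cs=0.1):
    """Eddy viscosity nu_t = (Cs*Delta)^2 * |S| for a 2-D resolved velocity
    field on a uniform grid, with a fixed Smagorinsky constant cs."""
    dudx, dudy = np.gradient(u, dx, dy)
    dvdx, dvdy = np.gradient(v, dx, dy)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    strain_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    delta = np.sqrt(dx * dy)                       # filter width
    return (cs * delta)**2 * strain_mag

# Hypothetical resolved field: a shear layer, loosely evoking flow over a wall.
y = np.linspace(0.0, 1.0, 64)
x = np.linspace(0.0, 2.0, 128)
X, Y = np.meshgrid(x, y, indexing="ij")
u = np.tanh(10.0 * (Y - 0.5))
v = np.zeros_like(u)
print(smagorinsky_nu_t(u, v, x[1] - x[0], y[1] - y[0]).max())
```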
Development of a Lumped Element Circuit Model for Approximation of Dielectric Barrier Discharges
2011-08-01
...species for pulsed direct current (DC) dielectric barrier discharge (DBD) plasmas. Based on experimental observations, it is assumed that nanosecond pulsed DBDs, which have been proposed... Given the fundamental differences between the novel pulsed discharge approach and the more conventional momentum-based approaches...
Variable Density Effects in Stochastic Lagrangian Models for Turbulent Combustion
2016-07-20
PDF methods in dealing with chemical reaction and convection are preserved irrespective of density variation. Since the density variation in a typical...combustion process may be as large as a factor of seven, including variable-density effects in PDF methods is of significance. Conventionally, the...strategy of modelling variable-density flows in PDF methods is similar to that used for second-moment closure models (SMCM): models are developed based on
Gong, Tong; Brew, Bronwyn; Sjölander, Arvid; Almqvist, Catarina
2017-07-01
Various epidemiological designs have been applied to investigate the causes and consequences of fetal growth restriction in register-based observational studies. This review seeks to provide an overview of several conventional designs, including cohort, case-control and more recently applied non-conventional designs such as family-based designs. We also discuss some practical points regarding the application and interpretation of family-based designs. Definitions of each design, the study population, the exposure and the outcome measures are briefly summarised. Examples of study designs are taken from the field of low birth-weight research for illustrative purposes. Also examined are relative advantages and disadvantages of each design in terms of assumptions, potential selection and information bias, confounding and generalisability. Kinship data linkage, statistical models and result interpretation are discussed specific to family-based designs. When all information is retrieved from registers, there is no evident preference of the case-control design over the cohort design to estimate odds ratios. All conventional designs included in the review are prone to bias, particularly due to residual confounding. Family-based designs are able to reduce such bias and strengthen causal inference. In the field of low birth-weight research, family-based designs have been able to confirm a negative association not confounded by genetic or shared environmental factors between low birth weight and the risk of asthma. We conclude that there is a broader need for family-based design in observational research as evidenced by the meaningful contributions to the understanding of the potential causal association between low birth weight and subsequent outcomes.
Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan
2017-02-20
In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in an active sonar detection system. Building on the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. The computer simulation and lake experiment results indicate that this method can realize azimuth angle estimation with high precision using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, the proposed method does not require complex operations in the frequency domain and achieves a reduction in computational complexity.
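The conventional average-acoustic-intensity estimate that the paper takes as its baseline can be sketched in a few lines: the azimuth follows from the time-averaged products of the pressure channel with the two orthogonal velocity channels. The signal, noise level, and true bearing below are hypothetical, and the matched-filtering stage of the proposed active method is not included.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, f0, noise_amp = 10_000, 1_000, 0.5
t = np.arange(0, 0.1, 1.0 / fs)
true_azimuth = np.deg2rad(40.0)

# Hypothetical AVS data: a pressure channel plus two orthogonal velocity
# channels, each buried in independent noise.
s = np.cos(2 * np.pi * f0 * t)
p  = s + rng.normal(scale=noise_amp, size=t.size)
vx = np.cos(true_azimuth) * s + rng.normal(scale=noise_amp, size=t.size)
vy = np.sin(true_azimuth) * s + rng.normal(scale=noise_amp, size=t.size)

# Conventional average-acoustic-intensity estimate of the azimuth angle.
azimuth = np.arctan2(np.mean(p * vy), np.mean(p * vx))
print(np.rad2deg(azimuth))
```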
Adler, Philipp; Hugen, Thorsten; Wiewiora, Marzena; Kunz, Benno
2011-03-07
An unstructured model for an integrated fermentation/membrane extraction process for the production of the aroma compounds 2-phenylethanol and 2-phenylethylacetate by Kluyveromyces marxianus CBS 600 was developed. The extent to which this model, based only on data from the conventional fermentation and separation processes, provided an estimation of the integrated process was evaluated. The effect of product inhibition on specific growth rate and on biomass yield by both aroma compounds was approximated by multivariate regression. Simulations of the respective submodels for fermentation and the separation process matched well with experimental results. With respect to the in situ product removal (ISPR) process, the effect of reduced product inhibition due to product removal on specific growth rate and biomass yield was predicted adequately by the model simulations. Overall product yields were increased considerably in this process (4.0 g/L 2-PE+2-PEA vs. 1.4 g/L in conventional fermentation) and were even higher than predicted by the model. To describe the effect of product concentration on product formation itself, the model was extended using results from the conventional and the ISPR processes, and the agreement between model and experimental data thus improved notably. Therefore, this model can be a useful tool for the development and optimization of an efficient integrated bioprocess. Copyright © 2010 Elsevier Inc. All rights reserved.
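An unstructured model of this type can be sketched as a small ODE system in which product inhibition reduces the specific growth rate and in situ removal drains the product pool. The parameter values, inhibition form, and removal rate below are hypothetical and are not those of the published model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters for a product-inhibited unstructured model
# (1/h, g/L, g/g, g/g, g/L); the published model's structure and values differ.
mu_max, Ks, Yxs, Ypx, P_max = 0.4, 0.5, 0.5, 0.3, 3.0

def fermentation(t, y, removal_rate):
    X, S, P = y
    mu = mu_max * S / (Ks + S) * max(1.0 - P / P_max, 0.0)   # product inhibition
    dX = mu * X
    dS = -dX / Yxs
    dP = Ypx * dX - removal_rate * P      # ISPR removes product from the broth
    return [dX, dS, dP]

y0 = [0.1, 20.0, 0.0]                      # biomass, substrate, product (g/L)
conv = solve_ivp(fermentation, (0.0, 16.0), y0, args=(0.0,), max_step=0.25)
ispr = solve_ivp(fermentation, (0.0, 16.0), y0, args=(0.2,), max_step=0.25)
print("biomass after 16 h, conventional vs ISPR:", conv.y[0, -1], ispr.y[0, -1])
print("broth product after 16 h, conventional vs ISPR:", conv.y[2, -1], ispr.y[2, -1])
```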
Reliability of emerging bonded interface materials for large-area attachments
Paret, Paul P.; DeVoto, Douglas J.; Narumanchi, Sreekant
2015-12-30
Conventional thermal interface materials (TIMs), such as greases, gels, and phase change materials, pose bottlenecks to heat removal and have long caused reliability issues in automotive power electronics packages. Bonded interface materials (BIMs) with superior thermal performance have the potential to be a replacement for the conventional TIMs. However, due to coefficient of thermal expansion mismatches between different components in a package and the resultant thermomechanical stresses, fractures or delamination could occur, causing serious reliability concerns. These defects manifest themselves in increased thermal resistance in the package. In this paper, the results of a reliability evaluation of emerging BIMs for large-area attachments in power electronics packaging are reported. Thermoplastic (polyamide) adhesive with embedded near-vertical-aligned carbon fibers, sintered silver, and conventional lead solder (Sn 63Pb 37) materials were bonded between 50.8 mm x 50.8 mm cross-sectional footprint silicon nitride substrates and copper base plate samples, and were subjected to accelerated thermal cycling until failure or 2500 cycles. Damage in the BIMs was monitored every 100 cycles by scanning acoustic microscopy. Thermoplastic with embedded carbon fibers performed the best with no defects, whereas sintered silver and lead solder failed at 2300 and 1400 thermal cycles, respectively. Besides thermal cycling, additional lead solder samples were subjected to thermal shock and thermal cycling with extended dwell periods. A finite element method (FEM)-based model was developed to simulate the behavior of lead solder under thermomechanical loading. Strain energy density per cycle results were calculated from the FEM simulations. A predictive lifetime model was formulated for lead solder by correlating strain energy density results extracted from modeling with cycles-to-failure obtained from experimental accelerated tests. A power-law-based approach was used to formulate the predictive lifetime model.
Pearson-Stuttard, Jonathan; Guzman-Castillo, Maria; Penalvo, Jose L.; Rehm, Colin D.; Afshin, Ashkan; Danaei, Goodarz; Kypridemos, Chris; Gaziano, Tom; Mozaffarian, Dariush; Capewell, Simon; O’Flaherty, Martin
2016-01-01
Background Accurate forecasting of cardiovascular disease (CVD) mortality is crucial to guide policy and programming efforts. Prior forecasts have often not incorporated past trends in rates of reduction in CVD mortality. This creates uncertainties about future trends in CVD mortality and disparities. Methods and Results To forecast US CVD mortality and disparities to 2030, we developed a hierarchical Bayesian model to determine and incorporate prior age, period and cohort (APC) effects from 1979–2012, stratified by age, gender and race, which we combined with expected demographic shifts to 2030. Data sources included the National Vital Statistics System, SEER single year population estimates, and US Bureau of Statistics 2012 National Population projections. We projected coronary disease and stroke deaths to 2030, first based on constant APC effects at 2012 values, as most commonly done (conventional); and then using more rigorous projections incorporating expected trends in APC effects (trend-based). We primarily evaluated absolute mortality. The conventional model projected total coronary and stroke deaths by 2030 to increase by approximately 18% (67,000 additional coronary deaths/year) and 50% (64,000 additional stroke deaths/year). Conversely, the trend-based model projected that coronary mortality would fall by 2030 by approximately 27% (79,000 fewer deaths/year); and stroke mortality would remain unchanged (200 fewer deaths/year). Health disparities will be improved in stroke deaths, but not coronary deaths. Conclusions After accounting for prior mortality trends and expected demographic shifts, total US coronary deaths are expected to decline, while stroke mortality will remain relatively constant. Health disparities in stroke, but not coronary, deaths will be improved but not eliminated. These APC approaches offer more plausible predictions than conventional estimates. PMID:26846769
The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability
Theurich, Gerhard; DeLuca, C.; Campbell, T.; ...
2016-08-22
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open-source terms or to credentialed users. Furthermore, the ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the United States. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. Our shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multiagency development of coupled modeling systems; controlled experimentation and testing; and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NAVGEM), the Hybrid Coordinate Ocean Model (HYCOM), and the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and the Goddard Earth Observing System Model, version 5 (GEOS-5), atmospheric general circulation model.
Time-optimal excitation of maximum quantum coherence: Physical limits and pulse sequences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Köcher, S. S.; Institute of Energy and Climate Research; Heydenreich, T.
Here we study the optimum efficiency of the excitation of maximum quantum (MaxQ) coherence using analytical and numerical methods based on optimal control theory. The theoretical limit of the achievable MaxQ amplitude and the minimum time to achieve this limit are explored for a set of model systems consisting of up to five coupled spins. In addition to arbitrary pulse shapes, two simple pulse sequence families of practical interest are considered in the optimizations. Compared to conventional approaches, substantial gains were found both in terms of the achieved MaxQ amplitude and in pulse sequence durations. For a model system, theoretically predicted gains of a factor of three compared to the conventional pulse sequence were experimentally demonstrated. Motivated by the numerical results, two novel analytical transfer schemes were also found: compared to conventional approaches based on non-selective pulses and delays, double-quantum coherence in two-spin systems can be created twice as fast using isotropic mixing and hard spin-selective pulses. It is also proved that in a chain of three weakly coupled spins with the same coupling constants, triple-quantum coherence can be created in a time-optimal fashion using so-called geodesic pulses.
A holistic calibration method with iterative distortion compensation for stereo deflectometry
NASA Astrophysics Data System (ADS)
Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian
2018-07-01
This paper presents a novel holistic calibration method for a stereo deflectometry system to improve the system measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface. The calculation accuracy of the normal data is strongly influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error into the system due to an inaccurate imaging model and inaccurate distortion elimination. The proposed calibration method compensates system distortion based on an iterative algorithm instead of the conventional mathematical distortion model. The initial values of the system parameters are calculated from the fringe patterns displayed on the system's LCD screen through a reflection off a markless flat mirror. An iterative algorithm is proposed to compensate system distortion and to optimize the camera imaging parameters and the system geometrical relation parameters based on a cost function. Both simulation work and experimental results show the proposed calibration method can significantly improve the calibration and measurement accuracy of a stereo deflectometry system. The PV (peak value) of the measurement error of a flat mirror can be reduced to 69.7 nm by applying the proposed method, from 282 nm obtained with the conventional calibration approach.
Sibbitt, Wilmer; Sibbitt, Randy R; Michael, Adrian A; Fu, Druce I; Draeger, Hilda T; Twining, Jon M; Bankhurst, Arthur D
2006-04-01
To evaluate physician control of needle and syringe during aspiration-injection syringe procedures by comparing the new reciprocating procedure syringe to a traditional conventional syringe. Twenty-six physicians were tested for their individual ability to control the reciprocating and conventional syringes in typical aspiration-injection procedures using a novel quantitative needle-based displacement procedure model. Subsequently, the physicians performed 48 clinical aspiration-injection (arthrocentesis) procedures on 32 subjects randomized to the reciprocating or conventional syringes. Clinical outcomes included procedure time, patient pain, and operator satisfaction. Multivariate modeling methods were used to determine the experimental variables in the syringe control model most predictive of clinical outcome measures. In the model system, the reciprocating syringe significantly improved physician control of the syringe and needle, with a 66% reduction in unintended forward penetration (p < 0.001) and a 68% reduction in unintended retraction (p < 0.001). In clinical arthrocentesis, improvements were also noted: 30% reduction in procedure time (p < 0.03), 57% reduction in patient pain (p < 0.001), and a 79% increase in physician satisfaction (p < 0.001). The variables in the experimental system--unintended forward penetration, unintended retraction, and operator satisfaction--independently predicted the outcomes of procedure time, patient pain, and physician satisfaction in the clinical study (p < or = 0.001). The reciprocating syringe reduces procedure time and patient pain and improves operator satisfaction with the procedure syringe. The reciprocating syringe improves physician performance in both the validated quantitative needle-based displacement model and in real aspiration-injection syringe procedures, including arthrocentesis.
Trend-Residual Dual Modeling for Detection of Outliers in Low-Cost GPS Trajectories.
Chen, Xiaojian; Cui, Tingting; Fu, Jianhong; Peng, Jianwei; Shan, Jie
2016-12-01
The low-cost GPS receiver has become a ubiquitous and integral part of our daily life. Despite noticeable advantages such as being cheap, small, light, and easy to use, its limited positioning accuracy devalues and hampers its wide application in reliable mapping and analysis. Two conventional techniques to remove outliers from a GPS trajectory are thresholding and Kalman-based methods, for which it is difficult to select appropriate thresholds and to model the trajectories. Moreover, they are insensitive to medium and small outliers, especially for low-sample-rate trajectories. This paper proposes a model-based GPS trajectory cleaner. Rather than examining speed and acceleration or assuming a pre-determined trajectory model, we first use a cubic smoothing spline to adaptively model the trend of the trajectory. The residuals, i.e., the differences between the trend and the GPS measurements, are then further modeled by a time series method. Outliers are detected by scoring the residuals at every GPS trajectory point. Compared to the conventional procedures, the trend-residual dual modeling approach has the following features: (a) it is able to model trajectories and detect outliers adaptively; (b) only one critical value for the outlier scores needs to be set; (c) it is able to robustly detect unapparent outliers; and (d) it is effective in cleaning outliers from GPS trajectories with low sample rates. Tests are carried out on three real-world GPS trajectory datasets. The evaluation demonstrates an average of 9.27 times better performance in outlier detection for GPS trajectories than thresholding and Kalman-based techniques.
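The trend-then-residual idea can be sketched as follows. A least-squares cubic spline with a few fixed knots stands in here for the paper's adaptive smoothing spline, and a robust z-score stands in for its time-series residual model; the trajectory, noise level, and injected outliers are hypothetical.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(4)

# Hypothetical low-sample-rate trajectory: one coordinate (easting, metres)
# sampled every 10 s, with three injected outliers.
t = np.arange(0.0, 600.0, 10.0)
truth = 5.0 * t + 200.0 * np.sin(t / 120.0)
obs = truth + rng.normal(scale=3.0, size=t.size)
obs[[12, 30, 45]] += [40.0, -35.0, 25.0]

# Step 1: trend model (cubic spline with fixed interior knots).
knots = np.arange(60.0, 590.0, 60.0)
trend = LSQUnivariateSpline(t, obs, knots, k=3)(t)

# Step 2: residual model and outlier scoring.
resid = obs - trend
med = np.median(resid)
mad = np.median(np.abs(resid - med))
scores = np.abs(resid - med) / (1.4826 * mad)

print("flagged points:", np.where(scores > 3.5)[0])   # one critical value only
```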
Heuts, Samuel; Maessen, Jos G; Sardari Nia, Peyman
2016-05-01
With the emergence of a new concept aimed at individualization of patient care, the focus will shift from whether a minimally invasive procedure is better than conventional treatment, to the question of which patients will benefit most from which technique. The superiority of minimally invasive valve surgery (MIVS) has not yet been proved. We believe that through better patient selection advantages of this technique can become more pronounced. In our current study, we evaluate the feasibility of 3D computed tomography (CT) imaging reconstruction in the preoperative planning of patients referred for MIVS. We retrospectively analysed all consecutive patients who were referred for minimally invasive mitral valve surgery (MIMVS) and minimally invasive aortic valve replacement (MIAVR) to a single surgeon in a tertiary referral centre for MIVS between March 2014 and 2015. Prospective preoperative planning was done for all patients and was based on evaluations by a multidisciplinary heart-team, echocardiography, conventional CT images and 3D CT reconstruction models. A total of 39 patients were included in our study; 16 for mitral valve surgery (MVS) and 23 patients for aortic valve replacement (AVR). Eleven patients (69%) within the MVS group underwent MIMVS. Five patients (31%) underwent conventional MVS. Findings leading to exclusion for MIMVS were a tortuous or slender femoro-iliac tract, calcification of the aortic bifurcation, aortic elongation and pericardial calcifications. Furthermore, 2 patients had a change of operative strategy based on preoperative planning. Seventeen (74%) patients in the AVR group underwent MIAVR. Six patients (26%) underwent conventional AVR. Indications for conventional AVR instead of MIAVR were an elongated ascending aorta, ascending aortic calcification and ascending aortic dilatation. One patient (6%) in the MIAVR group was converted to a sternotomy due to excessive intraoperative bleeding. Two mortalities were reported during conventional MVS. There were no mortalities reported in the MIMVS, MIAVR or conventional AVR group. Preoperative planning of minimally invasive left-sided valve surgery with 3D CT reconstruction models is a useful and feasible method to determine operative strategy and exclude patients ineligible for a minimally invasive approach, thus potentially preventing complications. © The Author 2016. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Comparing the engineering program feeders from SiF and conventional models
NASA Astrophysics Data System (ADS)
Roongruangsri, Warawaran; Moonpa, Niwat; Vuthijumnonk, Janyawat; Sangsuwan, Kampanart
2018-01-01
This research aims to compare two types of engineering program feeder models within the technical education system of Rajamangala University of Technology Lanna (RMUTL), Chiangmai, Thailand. To illustrate, the paper refers to two typologies of feeder models: the conventional model and the school-in-factory (SiF) model. The new SiF model is developed through a collaborative educational process between the sectors of industry, government and academia, using work-integrated learning. The research methodology compared features of the SiF model with the conventional model in terms of learning outcomes, funding required for the study, and the advantages and disadvantages from the point of view of students, professors, the university, government and industrial partners. The results of this research indicate that the developed SiF feeder model is the more pertinent one, as it meets the requirements of the university, the government and industry. The SiF feeder model showed the ability to yield positive learning outcomes with low expenditure per student for both the family and the university. In parallel, the sharing of knowledge between university and industry became increasingly important in the process, which resulted in the improvement of industrial skills for professors and an increase in industry-based research for the university. The SiF feeder model meets the public policy demand of supporting a skilled workforce for industry and could be an effective tool for the triple-helix educational model of Thailand.
Kalf-Scholte, Sonja M; van Amerongen, Willem E; Smith, Albert J E; van Haastrecht, Harry J A
2003-01-01
This study compares the quality of class I restorations made with the atraumatic restorative treatment (ART) technique and conventional class I amalgam restorations. The study was carried out among secondary school students in Mzuzu, Malawi. First-year students in 1987 who needed at least two class I restorations were selected. Based on a split-mouth design, each participant received both ART and conventional restorations. The 89 pairs of class I cavities were divided randomly into two groups, since two different cermet ionomer cement (CIC) filling materials were used. Impressions of the restorations and subsequent models were made shortly after restoration, after six months, one year, two years, and three years. The quality of the restorations was determined on the models following the US Public Health Service criteria. Bulk fracture, contour, marginal integrity, and surface texture of the restorations were recorded and evaluated separately. Survival rates were determined by the resultant score of all criteria. Though conventional amalgam restorations performed better on all criteria, this difference was significant only for the contour criterion. The survival rates of ART restorations after three years (81.0%) were lower than those of amalgam restorations (90.4%) (P=.067). The quality of ART class I restorations is competitive with that of conventional amalgam restorations.
Estimating and validating harvesting system production through computer simulation
John E. Baumgras; Curt C. Hassler; Chris B. LeDoux
1993-01-01
A Ground Based Harvesting System Simulation model (GB-SIM) has been developed to estimate stump-to-truck production rates and multiproduct yields for conventional ground-based timber harvesting systems in Appalachian hardwood stands. Simulation results reflect inputs that define harvest site and timber stand attributes, wood utilization options, and key attributes of...
Strategic Industrial Alliances in Paper Industry: XML- vs Ontology-Based Integration Platforms
ERIC Educational Resources Information Center
Naumenko, Anton; Nikitin, Sergiy; Terziyan, Vagan; Zharko, Andriy
2005-01-01
Purpose: To identify cases related to design of ICT platforms for industrial alliances, where the use of Ontology-driven architectures based on Semantic web standards is more advantageous than application of conventional modeling together with XML standards. Design/methodology/approach: A comparative analysis of the two latest and the most obvious…
Conceptual Complexity and the Bias/Variance Tradeoff
ERIC Educational Resources Information Center
Briscoe, Erica; Feldman, Jacob
2011-01-01
In this paper we propose that the conventional dichotomy between exemplar-based and prototype-based models of concept learning is helpfully viewed as an instance of what is known in the statistical learning literature as the "bias/variance tradeoff". The bias/variance tradeoff can be thought of as a sliding scale that modulates how closely any…
NASA Astrophysics Data System (ADS)
Madani, K.; Dinar, A.
2013-12-01
Tragedy of the commons is generally recognized as one of the possible destinies for common pool resources (CPRs). To avoid the tragedy of the commons and prolong the life of CPRs, users may show different behavioral characteristics and use different rationales for CPR planning and management. Furthermore, regulators may adopt different strategies for sustainable management of CPRs. The effectiveness of different regulatory exogenous management institutions cannot be evaluated through conventional CPR models, since they assume either that users base their behavior on individual rationality and adopt a selfish (Nash) behavior, or that users seek the system's optimal solution without giving priority to their own interests. Therefore, conventional models fail to reliably predict the outcome of CPR problems in which parties may have a range of behavioral characteristics, putting them somewhere in between the two types of behaviors traditionally considered. This work examines the effectiveness of different regulatory exogenous CPR management institutions through a user-based model (as opposed to a system-based model). The new modeling framework allows for consideration of the sensitivity of the results to different behavioral characteristics of interacting CPR users. The suggested modeling approach is applied to a benchmark groundwater management problem. Results indicate that some well-known exogenous management institutions (e.g. taxing) are ineffective for sustainable management of CPRs in most cases. Bankruptcy-based management can be helpful, but determination of the fair level of cutbacks remains challenging under this type of institution. Furthermore, some bankruptcy rules, such as the Constrained Equal Award (CEA) method, are more beneficial to wealthier users, failing to establish social justice. Quota-based and CPR status-based management perform as the most promising and robust regulatory exogenous institutions in prolonging the CPR's life and increasing the long-term benefits to its users.
NASA Astrophysics Data System (ADS)
Nugrahani, F.; Jazaldi, F.; Noerhadi, N. A. I.
2017-08-01
The field of orthodontics is always evolving, and this includes the use of innovative technology. One type of orthodontic technology is the development of three-dimensional (3D) digital study models that replace conventional study models made of stone. This study aims to compare mesio-distal tooth width, intercanine width, and intermolar width measurements between a 3D digital study model and a conventional study model. Twelve sets of upper arch dental impressions were taken from subjects with non-crowded teeth. The impressions were taken twice, once with alginate and once with polyvinylsiloxane. The alginate impressions were used for the conventional study models, and the polyvinylsiloxane impressions were scanned to obtain the 3D digital study models. Scanning was performed using a laser triangulation scanner device assembled by the School of Electrical Engineering and Informatics at the Institut Teknologi Bandung and David Laser Scan software. For the conventional model, the mesio-distal width, intercanine width, and intermolar width were measured using digital calipers; in the 3D digital study model they were measured using software. There were no significant differences in the mesio-distal width, intercanine width, and intermolar width measurements between the conventional and 3D digital study models (p>0.05). Thus, measurements obtained from 3D digital study models are as accurate as those obtained from conventional study models.
Model-based spectral estimation of Doppler signals using parallel genetic algorithms.
Solano González, J; Rodríguez Vázquez, K; García Nocetti, D F
2000-05-01
Conventional spectral analysis methods use a fast Fourier transform (FFT) on consecutive or overlapping windowed data segments. For Doppler ultrasound signals, this approach suffers from inadequate frequency resolution due to the time segment duration and the non-stationary characteristics of the signals. Parametric or model-based estimators can give significant improvements in the time-frequency resolution at the expense of higher computational complexity. This work describes an approach which implements in real time a parametric spectral estimation method using genetic algorithms (GAs) to find the optimum set of parameters for the adaptive filter that minimises the error function. The aim is to reduce the computational complexity of the conventional algorithm by exploiting the simplicity associated with GAs and their parallel characteristics. This allows the implementation of higher-order filters, increasing the spectral resolution, and opens greater scope for using more complex methods.
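For illustration, the following sketch shows one way a genetic algorithm can search the coefficients of a low-order autoregressive (AR) model for a windowed signal segment and then evaluate its parametric spectrum. It is a minimal stand-in for the parallel GA estimator described above; the population size, operators, model order, and the synthetic test signal are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def prediction_error(a, x):
    """Mean squared forward-prediction error of AR coefficients a on signal x."""
    p = len(a)
    pred = np.zeros(len(x) - p)
    for k in range(p):
        pred += a[k] * x[p - 1 - k: len(x) - 1 - k]
    return np.mean((x[p:] - pred) ** 2)

def ga_fit_ar(x, p=6, pop=40, gens=60, sigma=0.05):
    """Evolve AR(p) coefficients that minimise the prediction error."""
    population = rng.normal(0.0, 0.5, size=(pop, p))
    for _ in range(gens):
        fitness = np.array([prediction_error(ind, x) for ind in population])
        parents = population[np.argsort(fitness)[: pop // 2]]      # truncation selection
        mates = parents[rng.permutation(len(parents))]
        alpha = rng.random((len(parents), 1))
        children = alpha * parents + (1 - alpha) * mates           # arithmetic crossover
        children += rng.normal(0.0, sigma, children.shape)         # Gaussian mutation
        population = np.vstack([parents, children])
    fitness = np.array([prediction_error(ind, x) for ind in population])
    return population[np.argmin(fitness)]

def ar_spectrum(a, n_freq=256):
    """Parametric power spectrum of the fitted AR model."""
    w = np.linspace(0.0, np.pi, n_freq)
    denom = np.abs(1.0 - sum(a[k] * np.exp(-1j * w * (k + 1)) for k in range(len(a))))
    return 1.0 / denom ** 2

# synthetic narrow-band segment standing in for one windowed Doppler segment
t = np.arange(512)
segment = np.sin(0.3 * t) + 0.2 * rng.standard_normal(t.size)
spectrum = ar_spectrum(ga_fit_ar(segment))
```

Because each individual's fitness evaluation is independent, the population loop is the natural place to exploit the parallelism mentioned in the abstract.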
Improved fuzzy PID controller design using predictive functional control structure.
Wang, Yuzhong; Jin, Qibing; Zhang, Ridong
2017-11-01
In the conventional PID scheme, the overall control performance may be unsatisfactory due to limited degrees of freedom under various kinds of uncertainty. To overcome this disadvantage, a novel PID control method that inherits the advantages of fuzzy PID control and predictive functional control (PFC) is presented and further verified on the temperature model of a coke furnace. Based on the framework of PFC, the prediction of the future process behavior is first obtained using the current process input signal. Then, fuzzy PID control based on the multi-step prediction is introduced to acquire the optimal control law. Finally, the case study on a temperature model of a coke furnace shows the effectiveness of the fuzzy PID control scheme when compared with conventional PID control and fuzzy self-adaptive PID control. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Armstrong, Hannah; Boese, Matthew; Carmichael, Cody; Dimich, Hannah; Seay, Dylan; Sheppard, Nathan; Beekman, Matt
2017-01-01
Maximum thermoelectric energy conversion efficiencies are calculated using the conventional "constant property" model and the recently proposed "cumulative/average property" model (Kim et al. in Proc Natl Acad Sci USA 112:8205, 2015) for 18 high-performance thermoelectric materials. We find that the constant property model generally predicts higher energy conversion efficiency for nearly all materials and temperature differences studied. Although significant deviations are observed in some cases, on average the constant property model predicts an efficiency that is a factor of 1.16 larger than that predicted by the average property model, with even lower deviations for temperature differences typical of energy harvesting applications. Based on our analysis, we conclude that the conventional dimensionless figure of merit ZT obtained from the constant property model, while not applicable for some materials with strongly temperature-dependent thermoelectric properties, remains a simple yet useful metric for initial evaluation and/or comparison of thermoelectric materials, provided the ZT at the average temperature of projected operation, not the peak ZT, is used.
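For reference, the constant property figure of merit enters the maximum conversion efficiency through the standard textbook expression, quoted here for context rather than taken from the paper's dataset:

```latex
\eta_{\max} \;=\; \frac{T_h - T_c}{T_h}\cdot
\frac{\sqrt{1 + Z\bar{T}} - 1}{\sqrt{1 + Z\bar{T}} + T_c/T_h},
\qquad \bar{T} \;=\; \frac{T_h + T_c}{2},
```

where $T_h$ and $T_c$ are the hot- and cold-side temperatures and $Z$ is evaluated with the assumed temperature-independent material properties; the cumulative/average property model replaces these constant properties with quantities averaged over the actual temperature span.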
Programming model for distributed intelligent systems
NASA Technical Reports Server (NTRS)
Sztipanovits, J.; Biegl, C.; Karsai, G.; Bogunovic, N.; Purves, B.; Williams, R.; Christiansen, T.
1988-01-01
A programming model and architecture which was developed for the design and implementation of complex, heterogeneous measurement and control systems is described. The Multigraph Architecture integrates artificial intelligence techniques with conventional software technologies, offers a unified framework for distributed and shared memory based parallel computational models and supports multiple programming paradigms. The system can be implemented on different hardware architectures and can be adapted to strongly different applications.
2016-07-27
is a common requirement for aircraft, rockets, and hypersonic vehicles. The Aerospace Fuels Quality Test and Model Development (AFQTMoDev) project ... was initiated to mature fuel quality assurance practices for rocket grade kerosene, thereby ensuring operational readiness of conventional and ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S; Kang, S; Eom, J
Purpose: Photon-counting detectors (PCDs) allow multi-energy X-ray imaging without additional exposures or spectral overlap. This capability improves the accuracy of material decomposition for dual-energy X-ray imaging and reduces radiation dose. In this study, PCD-based contrast-enhanced dual-energy mammography (CEDM) was compared with conventional CEDM in terms of radiation dose, image quality and accuracy of material decomposition. Methods: A dual-energy model was designed using Beer-Lambert's law and a rational inverse fitting function for decomposing materials from a polychromatic X-ray source. A cadmium zinc telluride (CZT)-based PCD, which has five energy thresholds, and iodine solutions included in a 3D half-cylindrical phantom, which was composed of 50% glandular and 50% adipose tissue, were simulated using a Monte Carlo simulation tool. The low- and high-energy images were obtained in accordance with the clinical exposure conditions for conventional CEDM. Energy bins of 20–33 and 34–50 keV were defined from X-ray energy spectra simulated at 50 kVp with different dose levels for implementing the PCD-based CEDM. The dual-energy mammographic techniques were compared by means of absorbed dose, noise properties and normalized root-mean-square error (NRMSE). Results: Compared to the conventional CEDM, the iodine solutions were clearly decomposed for the PCD-based CEDM. Although the radiation dose for the PCD-based CEDM was lower than that for the conventional CEDM, the PCD-based CEDM improved the noise properties and the accuracy of the decomposition images. Conclusion: This study demonstrates that the PCD-based CEDM allows quantitative material decomposition and reduces radiation dose in comparison with conventional CEDM. Therefore, the PCD-based CEDM is able to provide useful information for detecting breast tumors and enhancing diagnostic accuracy in mammography.
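As a toy illustration of the decomposition step (not the study's polychromatic model or its rational inverse fitting function), the sketch below applies the Beer-Lambert law with two monoenergetic bins and solves a 2x2 system for the areal densities of two basis materials; all attenuation coefficients are placeholder values chosen for illustration only.

```python
import numpy as np

# mass attenuation coefficients [cm^2/g] of the basis materials (tissue, iodine)
# at the low- and high-energy bins -- placeholder numbers, not the study's values
mu = np.array([[0.25, 4.0],    # low-energy bin:  [tissue, iodine]
               [0.20, 1.5]])   # high-energy bin: [tissue, iodine]

def decompose(I_low, I_high, I0_low, I0_high):
    """Solve -ln(I/I0) = mu @ (areal densities) for the two basis materials."""
    line_integrals = np.array([-np.log(I_low / I0_low),
                               -np.log(I_high / I0_high)])
    return np.linalg.solve(mu, line_integrals)   # [g/cm^2] of tissue and iodine

# forward-simulate a pixel with 4 g/cm^2 tissue and 0.01 g/cm^2 iodine, then invert
true_density = np.array([4.0, 0.01])
I0 = 1e5
I_low, I_high = I0 * np.exp(-mu @ true_density)
print(decompose(I_low, I_high, I0, I0))          # recovers [4.0, 0.01]
```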
Iorio, Alfonso; Krishnan, Sangeeta; Myrén, Karl-Johan; Lethagen, Stefan; McCormick, Nora; Yermakov, Sander; Karner, Paul
2017-04-01
Continuous prophylaxis for patients with hemophilia B requires frequent injections that are burdensome and that may lead to suboptimal adherence and outcomes. Hence, therapies requiring less-frequent injections are needed. In the absence of head-to-head comparisons, this study compared the first extended half-life recombinant factor IX (rFIX) product, recombinant factor IX Fc fusion protein (rFIXFc), with conventional rFIX products based on annualized bleed rates (ABRs) and factor consumption reported in studies of continuous prophylaxis. This study compared ABRs and weekly factor consumption rates in clinical studies of continuous prophylaxis treatment with rFIXFc and conventional rFIX products (identified by systematic literature review) in previously treated adolescents and adults with moderate-to-severe hemophilia B. Meta-analysis was used to pool ABRs reported for conventional rFIX products for comparison. Comparisons of weekly factor consumption were based on the mean, reported or estimated from the mean dose per injection. Five conventional rFIX studies (injections 1 to >3 times/week) met the criteria for comparison with once-weekly rFIXFc reported by the B-LONG study. The pooled mean ABR for conventional rFIX was slightly higher than, but comparable to, that for rFIXFc (difference = 0.71; p = 0.210). Weekly factor consumption was significantly lower with rFIXFc than in conventional rFIX studies (difference in means = 42.8-74.5 IU/kg/week [93-161%], p < 0.001). Comparisons of clinical study results suggest weekly injections with rFIXFc result in similar bleeding rates and significantly lower weekly factor consumption compared with more frequently injected conventional rFIX products. The real-world effectiveness of rFIXFc may be higher based on results from a model of the impact of simulated differences in adherence.
A Model Based Approach to Increase the Part Accuracy in Robot Based Incremental Sheet Metal Forming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meier, Horst; Laurischkat, Roman; Zhu Junhong
One main influence on the dimensional accuracy in robot-based incremental sheet metal forming results from the compliance of the involved robot structures. Compared to conventional machine tools, the low stiffness of the robot's kinematics results in a significant deviation from the planned tool path and therefore in a shape of insufficient quality. To predict and compensate for these deviations offline, a model-based approach has been developed, consisting of a finite element model to simulate the sheet forming and a multi-body system to model the compliant robot structure. This paper describes the implementation and experimental verification of the multi-body system model and its compensation method.
Methodological Developments in Geophysical Assimilation Modeling
NASA Astrophysics Data System (ADS)
Christakos, George
2005-06-01
This work presents recent methodological developments in geophysical assimilation research. We revisit the meaning of the term "solution" of a mathematical model representing a geophysical system, and we examine its operational formulations. We argue that an assimilation solution based on epistemic cognition (which assumes that the model describes incomplete knowledge about nature and focuses on conceptual mechanisms of scientific thinking) could lead to more realistic representations of the geophysical situation than a conventional ontologic assimilation solution (which assumes that the model describes nature as is and focuses on form manipulations). Conceptually, the two approaches are fundamentally different. Unlike the reasoning structure of conventional assimilation modeling that is based mainly on ad hoc technical schemes, the epistemic cognition approach is based on teleologic criteria and stochastic adaptation principles. In this way some key ideas are introduced that could open new areas of geophysical assimilation to detailed understanding in an integrated manner. A knowledge synthesis framework can provide the rational means for assimilating a variety of knowledge bases (general and site specific) that are relevant to the geophysical system of interest. Epistemic cognition-based assimilation techniques can produce a realistic representation of the geophysical system, provide a rigorous assessment of the uncertainty sources, and generate informative predictions across space-time. The mathematics of epistemic assimilation involves a powerful and versatile spatiotemporal random field theory that imposes no restriction on the shape of the probability distributions or the form of the predictors (non-Gaussian distributions, multiple-point statistics, and nonlinear models are automatically incorporated) and accounts rigorously for the uncertainty features of the geophysical system. In the epistemic cognition context the assimilation concept may be used to investigate critical issues related to knowledge reliability, such as uncertainty due to model structure error (conceptual uncertainty).
NASA Technical Reports Server (NTRS)
Lichtenstein, J. H.
1975-01-01
Power-spectral-density calculations were made of the lateral responses to atmospheric turbulence for several conventional and short take-off and landing (STOL) airplanes. The turbulence was modeled as three orthogonal velocity components, which were uncorrelated, and each was represented with a one-dimensional power spectrum. Power spectral densities were computed for displacements, rates, and accelerations in roll, yaw, and sideslip. In addition, the power spectral density of the transverse acceleration was computed. Evaluation of ride quality based on a specific ride quality criterion was also made. The results show that the STOL airplanes generally had larger values for the rate and acceleration power spectra (and, consequently, larger corresponding root-mean-square values) than the conventional airplanes. The ride quality criterion gave poorer ratings to the STOL airplanes than to the conventional airplanes.
NASA Astrophysics Data System (ADS)
Sarif; Kurauchi, Shinya; Yoshii, Toshio
2017-06-01
In conventional travel behavior models such as logit and probit, decision makers are assumed to evaluate the attributes of the choice alternatives in absolute terms. On the other hand, many researchers in cognitive psychology and marketing science have suggested that perceptions of attributes are characterized by benchmarks called “reference points” and that relative evaluations based on them are often employed in various choice situations. Therefore, this study developed a travel behavior model based on mental accounting theory in which internal reference points are explicitly considered. A questionnaire survey about shopping trips to the CBD of Matsuyama city was conducted, and the roles of reference points in travel mode choice contexts were investigated. The results showed that the goodness-of-fit of the developed model was higher than that of the conventional model, indicating that internal reference points may play a major role in the choice of travel mode. The results also showed that respondents seem to use various reference points: some tend to adopt the lowest fuel price they have experienced, while others rely on the fare level they perceive as typical when evaluating travel cost.
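A minimal sketch of the reference-point idea in a binary logit setting is given below; the alternatives, attributes and parameter values are hypothetical and only illustrate how gains and losses around an internal reference cost can be weighted asymmetrically, in the spirit of mental accounting.

```python
import numpy as np

def reference_dependent_utility(cost, time, ref_cost,
                                beta_time=-0.05, beta_gain=0.02, beta_loss=-0.08):
    """Deterministic utility with asymmetric evaluation of cost around ref_cost."""
    gain = max(ref_cost - cost, 0.0)   # paying less than the reference
    loss = max(cost - ref_cost, 0.0)   # paying more than the reference
    return beta_time * time + beta_gain * gain + beta_loss * loss

def choice_probability(attrs_car, attrs_bus, ref_cost):
    """Logit probability of choosing the car alternative over the bus."""
    v_car = reference_dependent_utility(*attrs_car, ref_cost)
    v_bus = reference_dependent_utility(*attrs_bus, ref_cost)
    return np.exp(v_car) / (np.exp(v_car) + np.exp(v_bus))

# hypothetical trip: car costs 600 (20 min), bus costs 400 (35 min), reference cost 500
print(choice_probability((600.0, 20.0), (400.0, 35.0), ref_cost=500.0))
```

Setting the loss coefficient larger in magnitude than the gain coefficient reproduces the loss-aversion pattern the abstract alludes to.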
Saadati, Farzaneh; Ahmad Tarmizi, Rohani
2015-01-01
Because students' ability to use statistics, which is mathematical in nature, is one of the concerns of educators, embedding the pedagogical characteristics of learning within an e-learning system is 'value added' because it facilitates the conventional method of learning mathematics. Many researchers emphasize the effectiveness of cognitive apprenticeship in learning and problem solving in the workplace. In a cognitive apprenticeship learning model, skills are learned within a community of practitioners through observation of modelling and then practice plus coaching. This study utilized an internet-based Cognitive Apprenticeship Model (i-CAM) in three phases and evaluated its effectiveness for improving statistics problem-solving performance among postgraduate students. The results showed that, when compared to the conventional mathematics learning model, the i-CAM could significantly promote students' problem-solving performance at the end of each phase. In addition, the combined differences in students' test scores were statistically significant after controlling for the pre-test scores. The findings conveyed in this paper confirm the considerable value of i-CAM in improving statistics learning for non-specialized postgraduate students. PMID:26132553
Kim, Wooseong; Hendricks, Gabriel Lambert; Lee, Kiho; Mylonakis, Eleftherios
2017-06-01
The emergence of antibiotic-resistant and -tolerant bacteria is a major threat to human health. Although efforts for drug discovery are ongoing, conventional bacteria-centered screening strategies have thus far failed to yield new classes of effective antibiotics. Therefore, new paradigms for discovering novel antibiotics are of critical importance. Caenorhabditis elegans, a model organism used for in vivo studies, offers a promising solution for the identification of anti-infective compounds. Areas covered: This review examines the advantages of C. elegans-based high-throughput screening over conventional, bacteria-centered in vitro screens. It discusses major anti-infective compounds identified from large-scale C. elegans-based screens and presents first the clinically approved drugs, then known bioactive compounds, and finally novel small molecules. Expert opinion: There are clear advantages to using a C. elegans infection-based screening method. A C. elegans-based screen produces an enriched pool of non-toxic, efficacious, potential anti-infectives, covering conventional antimicrobial agents, immunomodulators, and anti-virulence agents. Although C. elegans-based screens do not reveal the mode of action of hit compounds, this can be elucidated in secondary studies by comparing the results to target-based screens, or by conducting subsequent target-based screens, including the genetic knock-down of host or bacterial genes.
Soft sensor modeling based on variable partition ensemble method for nonlinear batch processes
NASA Astrophysics Data System (ADS)
Wang, Li; Chen, Xiangguang; Yang, Kai; Jin, Huaiping
2017-01-01
Batch processes are typically characterized by nonlinearity and system uncertainty; therefore, a conventional single model may be ill-suited. A local-learning soft sensor based on a variable partition ensemble method is developed for quality prediction in nonlinear and non-Gaussian batch processes. A set of input variable subsets is obtained by bootstrapping and the PMI criterion. Then, multiple local GPR models are developed, one for each local input variable subset. When a new test sample arrives, the posterior probability of each best-performing local model is estimated based on Bayesian inference and used to combine these local GPR models to obtain the final prediction. The proposed soft sensor is demonstrated by application to an industrial fed-batch chlortetracycline fermentation process.
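The sketch below illustrates the ensemble idea under simplifying assumptions, using scikit-learn: local GPR models trained on different input-variable subsets are combined with likelihood-based weights computed against a recent reference measurement, a simple stand-in for the Bayesian posterior weighting described in the abstract; the data, the variable subsets and the weighting rule are all illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(200, 4))                 # four candidate input variables
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(200)

variable_sets = [[0, 1], [0, 2], [1, 3]]              # e.g. chosen by bootstrapping + PMI
models = []
for subset in variable_sets:
    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gpr.fit(X[:, subset], y)
    models.append(gpr)

def ensemble_predict(x_new, y_ref):
    """Combine local GPR predictions with likelihood-based weights.

    y_ref is a recent reference measurement used to score each local model."""
    means, stds = [], []
    for subset, gpr in zip(variable_sets, models):
        m, s = gpr.predict(x_new[subset][None, :], return_std=True)
        means.append(m[0]); stds.append(max(s[0], 1e-6))
    means, stds = np.array(means), np.array(stds)
    lik = np.exp(-0.5 * ((y_ref - means) / stds) ** 2) / stds
    w = lik / lik.sum()
    return float(np.dot(w, means))

x_test = rng.uniform(-2, 2, size=4)
print(ensemble_predict(x_test, y_ref=np.sin(x_test[0]) + 0.5 * x_test[1] ** 2))
```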
Simulation-based sensitivity analysis for non-ignorably missing data.
Yin, Peng; Shi, Jian Q
2017-01-01
Sensitivity analysis is popular for dealing with missing data problems, particularly for non-ignorable missingness, where the full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) may depend on assumptions or parameters (input) about the missing data, i.e. the missing data mechanism. We call models subject to this uncertainty sensitivity models. To make conventional sensitivity analysis more useful in practice, we need to define simple and interpretable statistical quantities to assess the sensitivity models and support evidence-based analysis. In this paper we propose a novel approach that investigates the plausibility of each missing data mechanism assumption by comparing simulated datasets from various MNAR models with the observed data non-parametrically, using K-nearest-neighbour distances. Some asymptotic theory is also provided. A key step of this method is a plausibility evaluation system for each sensitivity parameter, used to select plausible values and reject unlikely ones, instead of considering all proposed values of the sensitivity parameters as in conventional sensitivity analysis. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, the analysis of incomplete longitudinal data, and mean estimation with non-ignorable missing data.
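A toy sketch of the approach is shown below (not the authors' implementation): for each candidate sensitivity parameter of a simple MNAR selection model, data are simulated, and the parameter is scored by the average K-nearest-neighbour distance between observed and simulated samples, keeping only the most plausible values. The data-generating model and parameter grid are invented for illustration.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)

def simulate_observed(delta, n=2000):
    """Toy MNAR mechanism: the chance of missingness depends on the value itself."""
    y = rng.normal(0.0, 1.0, n)
    p_missing = 1.0 / (1.0 + np.exp(-(delta * y)))
    return y[rng.random(n) > p_missing]              # the retained (observed) values

y_obs = simulate_observed(delta=-1.0)                # "observed" data, true delta = -1

def knn_discrepancy(y_sim, y_obs, k=5):
    """Average distance from each observed point to its k-th simulated neighbour."""
    nn = NearestNeighbors(n_neighbors=k).fit(y_sim.reshape(-1, 1))
    dist, _ = nn.kneighbors(y_obs.reshape(-1, 1))
    return dist[:, -1].mean()

# sweep candidate sensitivity parameters and keep the most plausible ones
candidates = np.linspace(-2.0, 2.0, 9)
scores = {d: knn_discrepancy(simulate_observed(d), y_obs) for d in candidates}
plausible = sorted(scores, key=scores.get)[:3]
print(plausible)
```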
A new algorithm for modeling friction in dynamic mechanical systems
NASA Technical Reports Server (NTRS)
Hill, R. E.
1988-01-01
A method of modeling friction forces that impede the motion of parts of dynamic mechanical systems is described. Conventional methods in which the friction effect is assumed a constant force, or torque, in a direction opposite to the relative motion, are applicable only to those cases where applied forces are large in comparison to the friction, and where there is little interest in system behavior close to the times of transitions through zero velocity. An algorithm is described that provides accurate determination of friction forces over a wide range of applied force and velocity conditions. The method avoids the simulation errors resulting from a finite integration interval used in connection with a conventional friction model, as is the case in many digital computer-based simulations. The algorithm incorporates a predictive calculation based on initial conditions of motion, externally applied forces, inertia, and integration step size. The predictive calculation in connection with an external integration process provides an accurate determination of both static and Coulomb friction forces and resulting motions in dynamic simulations. Accuracy of the results is improved over that obtained with conventional methods and a relatively large integration step size is permitted. A function block for incorporation in a specific simulation program is described. The general form of the algorithm facilitates implementation with various programming languages such as FORTRAN or C, as well as with other simulation programs.
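The sketch below captures the predictive idea in its simplest form for a single mass with Coulomb-plus-static friction and a fixed integration step; it is illustrative only and not the specific algorithm of the report.

```python
import math

def friction_step(v, f_applied, mass, dt, f_coulomb, f_static):
    """Advance velocity one step with a predictive static/Coulomb friction check."""
    v_pred = v + (f_applied / mass) * dt              # velocity the applied force alone would give
    if v == 0.0 or v * v_pred <= 0.0:                 # at rest, or a zero crossing is predicted
        if abs(f_applied) <= f_static:
            return 0.0                                # static friction holds the body at rest
        f_net = f_applied - math.copysign(f_coulomb, f_applied)   # breakaway
        return (f_net / mass) * dt
    f_net = f_applied - math.copysign(f_coulomb, v)   # sliding: kinetic friction opposes motion
    return v + (f_net / mass) * dt

# a slowly reversing applied force: the velocity sticks at zero near each reversal
v, history = 0.0, []
for k in range(2000):
    f = 2.0 * math.sin(2.0 * math.pi * k / 1000.0)
    v = friction_step(v, f, mass=1.0, dt=0.001, f_coulomb=1.0, f_static=1.2)
    history.append(v)
```

Checking the predicted zero crossing before integrating is what removes the chatter that a fixed-step simulation of a constant opposing force produces around zero velocity.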
Composite Structure Modeling and Analysis of Advanced Aircraft Fuselage Concepts
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek; Sorokach, Michael R.
2015-01-01
NASA's Environmentally Responsible Aviation (ERA) project and the Boeing Company are collaborating to advance the unitized damage-arresting composite airframe technology with application to the Hybrid-Wing-Body (HWB) aircraft. The testing of a HWB fuselage section with Pultruded Rod Stitched Efficient Unitized Structure (PRSEUS) construction is presently being conducted at NASA Langley. Based on lessons learned from previous HWB structural design studies, improved finite-element models (FEM) of the HWB multi-bay and bulkhead assembly are developed to evaluate the performance of the PRSEUS construction. In order to assess the comparative weight reduction benefits of the PRSEUS technology, conventional skin-stringer-frame models of a cylindrical and a double-bubble section fuselage concept are developed. Stress analysis with the design cabin-pressure load and scenario-based case studies are conducted for design improvement in each case. Alternate analyses with stitched composite hat-stringers and C-frames are also presented, in addition to the foam-core sandwich frame and pultruded rod-stringer construction. The FEM structural stresses, strains and weights are computed and compared for relative weight/strength benefit assessment. The structural analysis and specific weight comparison of these stitched composite advanced aircraft fuselage concepts demonstrated that the pressurized HWB fuselage section assembly can be structurally as efficient as the conventional cylindrical fuselage section with composite stringer-frame and PRSEUS construction, and significantly better than the conventional aluminum construction and the double-bubble section concept.
Gonda, T; Ikebe, K; Ono, T; Nokubi, T
2004-10-01
Recently, a newly developed magnetic attachment with a stress breaker has been used as a retentive component in overdentures. Excessive lateral stress has a more harmful effect on natural teeth than axial stress, and the magnetic attachment with stress breaker is expected to reduce lateral forces on abutment teeth and protect them from excessive stress. However, the properties of this retainer have not yet been determined experimentally. This study compares the lateral forces on abutment teeth for three retainers under loading on the denture base in a model study. A mandibular simulation model is constructed to measure lateral stress. Three types of retentive devices are attached to the canine root: the conventional root coping, the conventional magnetic attachment, and the new magnetic attachment with stress breaker. For each retentive device, load is generated on the occlusal table of the model overdenture, and the lateral stress on the canine root and the displacement of the overdenture are measured. The magnetic attachment with stress breaker does not displace the denture and exhibits lower lateral stress in the canine root than the conventional root coping and magnetic attachment.
Synchronous response modelling and control of an annular momentum control device
NASA Astrophysics Data System (ADS)
Hockney, Richard; Johnson, Bruce G.; Misovec, Kathleen
1988-08-01
Research on the synchronous response modelling and control of an advanced Annular Momentum Control Device (AMCD) used to control the attitude of a spacecraft is described. For the flexible rotor AMCD, two sources of synchronous vibrations were identified. One source, which corresponds to the mass unbalance problem of rigid rotors suspended in conventional bearings, is caused by measurement errors of the rotor center-of-mass position. The other source of synchronous vibrations is misalignment between the hub and flywheel masses of the AMCD. Four different control algorithms were examined: lead-lag compensators that mimic conventional bearing dynamics, tracking notch filters used in the feedback loop, tracking differential-notch filters, and model-based compensators. The tracking differential-notch filters were shown to have a number of advantages over more conventional approaches for both rigid-body rotor applications and flexible rotor applications such as the AMCD. Hardware implementation schemes for the tracking differential-notch filter were investigated. A simple design was developed that can be implemented with analog multipliers and low-bandwidth digital hardware.
Yang, Yu-Chiao; Wei, Ming-Chi
2018-06-30
This study compared the use of ultrasound-assisted supercritical CO2 (USC-CO2) extraction to obtain apigenin-rich extracts from Scutellaria barbata D. Don with that of conventional supercritical CO2 (SC-CO2) extraction and heat-reflux extraction (HRE), conducted in parallel. This green procedure yielded 20.1% and 31.6% more apigenin than conventional SC-CO2 extraction and HRE, respectively. Moreover, the extraction time required by the USC-CO2 procedure, which used milder conditions, was approximately 1.9 times and 2.4 times shorter than that required by conventional SC-CO2 extraction and HRE, respectively. Furthermore, the theoretical solubility of apigenin in the supercritical fluid system was obtained from the USC-CO2 dynamic extraction curves and was in good agreement with the calculated values for the three empirical density-based models. The second-order kinetics model was further applied to evaluate the kinetics of USC-CO2 extraction. The results demonstrated that the selected model allowed the evaluation of the extraction rate and extent of USC-CO2 extraction. Copyright © 2017 Elsevier Ltd. All rights reserved.
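The second-order kinetic model mentioned above is commonly written in the following standard form, quoted here for context; the paper's fitted parameter values are not reproduced:

```latex
\frac{dC_t}{dt} = k\,(C_s - C_t)^2
\quad\Longrightarrow\quad
C_t = \frac{C_s^{2}\,k\,t}{1 + C_s k t}
\quad\Longleftrightarrow\quad
\frac{t}{C_t} = \frac{1}{k C_s^{2}} + \frac{t}{C_s},
```

where $C_t$ is the extract concentration at time $t$, $C_s$ the saturation (equilibrium) concentration, and $k$ the second-order rate constant; plotting $t/C_t$ against $t$ gives a straight line from which $k$ and $C_s$ can be estimated.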
Synchronous response modelling and control of an annular momentum control device
NASA Technical Reports Server (NTRS)
Hockney, Richard; Johnson, Bruce G.; Misovec, Kathleen
1988-01-01
Research on the synchronous response modelling and control of an advanced Annular Momentum Control Device (AMCD) used to control the attitude of a spacecraft is described. For the flexible rotor AMCD, two sources of synchronous vibrations were identified. One source, which corresponds to the mass unbalance problem of rigid rotors suspended in conventional bearings, is caused by measurement errors of the rotor center-of-mass position. The other source of synchronous vibrations is misalignment between the hub and flywheel masses of the AMCD. Four different control algorithms were examined: lead-lag compensators that mimic conventional bearing dynamics, tracking notch filters used in the feedback loop, tracking differential-notch filters, and model-based compensators. The tracking differential-notch filters were shown to have a number of advantages over more conventional approaches for both rigid-body rotor applications and flexible rotor applications such as the AMCD. Hardware implementation schemes for the tracking differential-notch filter were investigated. A simple design was developed that can be implemented with analog multipliers and low-bandwidth digital hardware.
2014-01-01
Gold price forecasting has been a hot issue in economics recently. In this work, a wavelet neural network (WNN) combined with a novel artificial bee colony (ABC) algorithm is proposed for gold price forecasting. In this improved algorithm, the conventional roulette selection strategy is discarded. In addition, the convergence status in a previous cycle of iteration is fully utilized as feedback to adjust the searching intensity in the subsequent cycle. Experimental results confirm that the new algorithm converges faster than the conventional ABC when tested on classical benchmark functions and is effective in improving the modeling capacity of the WNN for the gold price forecasting scheme. PMID:24744773
ERIC Educational Resources Information Center
Lu, Yi
2016-01-01
To model students' math growth trajectories, three conventional growth curve models and three growth mixture models are applied to the Early Childhood Longitudinal Study Kindergarten-Fifth grade (ECLS K-5) dataset in this study. The results of the conventional growth curve models show gender differences in math IRT scores. When holding socio-economic…
Fu, Zhiqiang; Chen, Jingwen; Li, Xuehua; Wang, Ya'nan; Yu, Haiying
2016-04-01
The octanol-air partition coefficient (KOA) is needed for assessing the multimedia transport and bioaccumulation potential of organic chemicals in the environment. As experimental determination of KOA for various chemicals is costly and laborious, development of KOA estimation methods is necessary. We investigated three methods for KOA prediction: conventional quantitative structure-activity relationship (QSAR) models based on molecular structural descriptors, group contribution models based on atom-centered fragments, and a novel model that predicts KOA via the solvation free energy from the air to the octanol phase (ΔGO(0)), with a collection of 939 experimental KOA values for 379 compounds at different temperatures (263.15-323.15 K) as validation or training sets. The developed models were evaluated following the OECD guidelines on QSAR model validation and applicability domain (AD) description. Results showed that although the ΔGO(0) model is theoretically sound and has a broad AD, its prediction accuracy is the poorest. The QSAR models perform better than the group contribution models, and have predictability and accuracy similar to the conventional method that estimates KOA from the octanol-water partition coefficient and Henry's law constant. One QSAR model, which can predict KOA at different temperatures, was recommended for assessing the long-range transport potential of chemicals. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Bougherara, Salim; Golea, Amar; Benchouia, M. Toufik
2018-05-01
This paper presents a comparative study of the vector control of a three-phase induction motor based on two mathematical models. The first is the conventional model, based on the assumption that saturation and iron losses can be neglected; the second model fully accounts for both the fundamental iron loss and main flux saturation, with and without compensation. A rotor resistance identifier is developed so that the compensation of its variation is achieved. The induction motor is fed through a three-level inverter. The simulation results show the performance of the vector control based on both models.
High pressure common rail injection system modeling and control.
Wang, H P; Zheng, D; Tian, Y
2016-07-01
In this paper, modeling and common-rail pressure control of a high pressure common rail injection system (HPCRIS) are presented. The proposed mathematical model of the HPCRIS, which contains three sub-models (high pressure pump, common rail and injector), is a relatively complicated nonlinear system. The mathematical model is validated using Matlab and a detailed virtual simulation environment. For the considered HPCRIS, an effective model-free controller, called the Extended State Observer-based intelligent Proportional-Integral (ESO-based iPI) controller, is designed. The proposed method is composed mainly of the ESO and a time-delay-estimation-based iPI controller. Finally, to demonstrate the performance of the proposed controller, the ESO-based iPI controller is compared with a conventional PID controller and ADRC. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
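A minimal sketch of the control idea on a toy first-order plant is shown below; it treats the plant as dy/dt = F(t) + b*u, estimates y and the lumped term F with a linear extended state observer, and cancels the estimate inside a PI law. The plant, gains and set-point are illustrative and are not the paper's HPCRIS model.

```python
import numpy as np

dt, b = 1e-3, 5.0
beta1, beta2 = 200.0, 1.0e4          # observer gains
kp, ki = 60.0, 400.0                 # PI gains

def plant(y, u, t):
    """Toy nonlinear plant standing in for the rail-pressure dynamics."""
    return -2.0 * y + 0.5 * np.sin(5.0 * t) + b * u

y, z1, z2, integ = 0.0, 0.0, 0.0, 0.0
log = []
for k in range(5000):
    t = k * dt
    y_ref = 1.0 if t > 0.1 else 0.0              # step in the pressure set-point
    e = y_ref - y
    integ += e * dt
    u = (-z2 + kp * e + ki * integ) / b          # iPI law using the ESO estimate of F

    y += plant(y, u, t) * dt                     # plant update (Euler)
    err = y - z1                                 # linear ESO update
    z1 += (z2 + b * u + beta1 * err) * dt
    z2 += (beta2 * err) * dt
    log.append((t, y_ref, y))
```

The observer state z2 converges to the lumped unknown dynamics, so the PI term only has to handle the residual tracking error, which is what gives this family of controllers its model-free character.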
Energy Productivity of the High Velocity Algae Raceway Integrated Design (ARID-HV)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Attalah, Said; Waller, Peter M.; Khawam, George
The original Algae Raceway Integrated Design (ARID) raceway was an effective method to increase algae culture temperature in open raceways. However, the energy input was high and flow mixing was poor. Thus, the High Velocity Algae Raceway Integrated Design (ARID-HV) raceway was developed to reduce energy input requirements and improve flow mixing in a serpentine flow path. A prototype ARID-HV system was installed in Tucson, Arizona. Based on algae growth simulation and hydraulic analysis, an optimal ARID-HV raceway was designed, and the electrical energy input requirement (kWh ha-1 d-1) was calculated. An algae growth model was used to compare the productivity of ARID-HV and conventional raceways. The model uses a pond surface energy balance to calculate water temperature as a function of environmental parameters. Algae growth and biomass loss are calculated based on rate constants during the day and night, respectively. A 10-year simulation of DOE strain 1412 (Chlorella sorokiniana) showed that the ARID-HV raceway had significantly higher production than a conventional raceway for all months of the year in Tucson, Arizona. It should be noted that this difference is species and climate specific and is not observed in other climates and with other algae species. The algae growth model results and electrical energy input evaluation were used to compare the energy productivity (algae production rate/energy input) of the ARID-HV and conventional raceways for Chlorella sorokiniana in Tucson, Arizona. The energy productivity of the ARID-HV raceway was significantly greater than that of a conventional raceway for all months of the year.
Kovanis, Michail; Trinquart, Ludovic; Ravaud, Philippe; Porcher, Raphaël
2017-01-01
The debate on whether the peer-review system is in crisis has been heated recently. A variety of alternative systems have been proposed to improve the system and make it sustainable. However, we lack sufficient evidence and data related to these issues. Here we used a previously developed agent-based model of the scientific publication and peer-review system calibrated with empirical data to compare the efficiency of five alternative peer-review systems with the conventional system. We modelled two systems of immediate publication, with and without online reviews (crowdsourcing), a system with only one round of reviews and revisions allowed (re-review opt-out) and two review-sharing systems in which rejected manuscripts are resubmitted along with their past reviews to any other journal (portable) or to only those of the same publisher but of lower impact factor (cascade). The review-sharing systems outperformed or matched the performance of the conventional one in all peer-review efficiency, reviewer effort and scientific dissemination metrics we used. The systems especially showed a large decrease in total time of the peer-review process and total time devoted by reviewers to complete all reports in a year. The two systems with immediate publication released more scientific information than the conventional one but provided almost no other benefit. Re-review opt-out decreased the time reviewers devoted to peer review but had lower performance on screening papers that should not be published and relative increase in intrinsic quality of papers due to peer review than the conventional system. Sensitivity analyses showed consistent findings to those from our main simulations. We recommend prioritizing a system of review-sharing to create a sustainable scientific publication and peer-review system.
Roemer, R B; Booth, D; Bhavsar, A A; Walter, G H; Terry, L I
2012-12-21
A mathematical model based on conservation of energy has been developed and used to simulate the temperature responses of cones of the Australian cycads Macrozamia lucida and Macrozamia macleayi during their daily thermogenic cycle. These cones generate diel midday thermogenic temperature increases as large as 12 °C above ambient during their approximately two-week pollination period. The cone temperature response model is shown to accurately predict the cones' temperatures over multiple days, based on simulations of experimental results from 28 thermogenic events from 3 different cones, each simulated for either 9 or 10 sequential days. The verified model is then used as the foundation of a new, parameter estimation based technique (termed inverse calorimetry) that estimates the cones' daily metabolic heating rates from temperature measurements alone. The inverse calorimetry technique's predictions of the major features of the cones' thermogenic metabolism compare favorably with the estimates from conventional respirometry (indirect calorimetry). Because the new technique uses only temperature measurements, and does not require measurements of oxygen consumption, it provides a simple, inexpensive and portable complement to conventional respirometry for estimating metabolic heating rates. It thus provides an additional tool to facilitate field and laboratory investigations of the bio-physics of thermogenic plants. Copyright © 2012 Elsevier Ltd. All rights reserved.
Bao, Wei; Hu, Frank B.; Rong, Shuang; Rong, Ying; Bowers, Katherine; Schisterman, Enrique F.; Liu, Liegang; Zhang, Cuilin
2013-01-01
This study aimed to evaluate the predictive performance of genetic risk models based on risk loci identified and/or confirmed in genome-wide association studies for type 2 diabetes mellitus. A systematic literature search was conducted in the PubMed/MEDLINE and EMBASE databases through April 13, 2012, and published data relevant to the prediction of type 2 diabetes based on genome-wide association marker–based risk models (GRMs) were included. Of the 1,234 potentially relevant articles, 21 articles representing 23 studies were eligible for inclusion. The median area under the receiver operating characteristic curve (AUC) among eligible studies was 0.60 (range, 0.55–0.68), which did not differ appreciably by study design, sample size, participants’ race/ethnicity, or the number of genetic markers included in the GRMs. In addition, the AUCs for type 2 diabetes did not improve appreciably with the addition of genetic markers into conventional risk factor–based models (median AUC, 0.79 (range, 0.63–0.91) vs. median AUC, 0.78 (range, 0.63–0.90), respectively). A limited number of included studies used reclassification measures and yielded inconsistent results. In conclusion, GRMs showed a low predictive performance for risk of type 2 diabetes, irrespective of study design, participants’ race/ethnicity, and the number of genetic markers included. Moreover, the addition of genome-wide association markers into conventional risk models produced little improvement in predictive performance. PMID:24008910
Adaptive Statistical Language Modeling: A Maximum Entropy Approach
1994-04-19
models exploit the immediate past only. To extract information from further back in the document's history, I use trigger pairs as the basic information... [Contents excerpt: 2.2 Context-Free Estimation (Unigram); 2.3 Short-Term History (Conventional N-gram); 2.4 Short-Term Class History (Class-Based N-gram); 2.5 Intermediate Distance]
The Emergence of Open-Source Software in North America
ERIC Educational Resources Information Center
Pan, Guohua; Bonk, Curtis J.
2007-01-01
Unlike conventional models of software development, the open source model is based on the collaborative efforts of users who are also co-developers of the software. Interest in open source software has grown exponentially in recent years. A "Google" search for the phrase open source in early 2005 returned 28.8 million webpage hits, while…
Fuzzy classifier based support vector regression framework for Poisson ratio determination
NASA Astrophysics Data System (ADS)
Asoodeh, Mojtaba; Bagheripour, Parisa
2013-09-01
Poisson ratio is considered one of the most important rock mechanical properties of hydrocarbon reservoirs. Determination of this parameter through laboratory measurement is time-, cost- and labor-intensive. Furthermore, laboratory measurements do not provide continuous data along the reservoir intervals. Hence, a fast, accurate, and inexpensive way of determining Poisson ratio that produces continuous data over the whole reservoir interval is desirable. For this purpose, the support vector regression (SVR) method based on statistical learning theory (SLT) was employed as a supervised learning algorithm to estimate Poisson ratio from conventional well log data. SVR is capable of accurately extracting the implicit knowledge contained in conventional well logs and converting the gained knowledge into Poisson ratio data. The structural risk minimization (SRM) principle, which is embedded in the SVR structure in addition to the empirical risk minimization (ERM) principle, provides a robust model for finding a quantitative formulation between conventional well log data and Poisson ratio. Although satisfactory results were obtained from an individual SVR model, it tended to overestimate low Poisson ratios and underestimate high ones. These errors were eliminated through implementation of a fuzzy classifier based SVR (FCBSVR). The FCBSVR significantly improved the accuracy of the final prediction. This strategy was successfully applied to data from carbonate reservoir rocks of an Iranian oil field. Results indicate that the SVR-predicted Poisson ratio values are in good agreement with measured values.
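The sketch below illustrates a classifier-gated SVR in the spirit of FCBSVR, using scikit-learn; the fuzzy memberships are approximated by softmax class probabilities over low/medium/high Poisson-ratio ranges, and the well-log inputs, thresholds and hyperparameters are synthetic placeholders rather than the paper's field data or exact formulation.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4))                       # stand-ins for conventional well logs
poisson = 0.25 + 0.05 * np.tanh(X[:, 0]) + 0.02 * X[:, 1] + 0.01 * rng.standard_normal(500)

scaler = StandardScaler().fit(X)
Xs = scaler.transform(X)
classes = np.digitize(poisson, [0.22, 0.28])        # 0: low, 1: medium, 2: high range

gate = LogisticRegression(max_iter=1000).fit(Xs, classes)   # soft "fuzzy" gate
experts = []
for c in range(3):
    mask = classes == c
    experts.append(SVR(C=10.0, epsilon=0.005).fit(Xs[mask], poisson[mask]))

def predict_poisson(x_logs):
    """Membership-weighted combination of the range-specific SVR experts."""
    xs = scaler.transform(np.atleast_2d(x_logs))
    weights = gate.predict_proba(xs)[0]
    return float(sum(w * e.predict(xs)[0] for w, e in zip(weights, experts)))

print(predict_poisson(X[0]), poisson[0])
```

Gating the regression this way is one simple route to curing the systematic over- and under-estimation at the extremes that a single global model tends to show.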
Superpixel-based segmentation of glottal area from videolaryngoscopy images
NASA Astrophysics Data System (ADS)
Turkmen, H. Irem; Albayrak, Abdulkadir; Karsligil, M. Elif; Kocak, Ismail
2017-11-01
Segmentation of the glottal area with high accuracy is one of the major challenges in the development of systems for computer-aided diagnosis of vocal-fold disorders. We propose a hybrid model combining conventional methods with a superpixel-based segmentation approach. We first employed a superpixel algorithm to reveal the glottal area by eliminating the local pixel variance caused by bleeding, blood vessels, and light reflections from the mucosa. Then, the glottal area was detected by a seeded region-growing algorithm in a fully automatic manner. The experiments were conducted on videolaryngoscopy images obtained from patients with pathologic vocal folds as well as from healthy subjects. Finally, the proposed hybrid approach was compared with conventional region-growing and active-contour model-based glottal area segmentation algorithms. The performance of the proposed method was evaluated in terms of segmentation accuracy and elapsed time. The F-measure, true negative rate, and Dice coefficient of the hybrid method were 82%, 93%, and 82%, respectively, which are superior to those of state-of-the-art glottal-area segmentation methods. The proposed hybrid model achieved high success rates and robustness, making it suitable for developing a computer-aided diagnosis system that can be used in clinical routine.
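A minimal sketch of the hybrid pipeline using scikit-image is given below; the input file name, seed location and tolerance are hypothetical, since the published method chooses the seed automatically, and the superpixel averaging here is only a simple proxy for the variance-suppression step described in the abstract.

```python
import numpy as np
from skimage import io, color
from skimage.segmentation import slic, flood

frame = io.imread("videolaryngoscopy_frame.png")        # hypothetical input frame
segments = slic(frame, n_segments=400, compactness=10)  # superpixel labels

# replace every pixel by the mean grey level of its superpixel to suppress
# local variation from vessels, bleeding and specular reflections
gray = color.rgb2gray(frame)
smoothed = np.zeros_like(gray)
for label in np.unique(segments):
    mask = segments == label
    smoothed[mask] = gray[mask].mean()

# grow the glottal region from a seed assumed to lie inside the dark glottis
seed = (frame.shape[0] // 2, frame.shape[1] // 2)
glottal_mask = flood(smoothed, seed, tolerance=0.05)
```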
Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming
2014-10-24
An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method could effectively avoid friction torque and additional inertial moment existing in conventional approaches. Curved surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space. The comparison with conventional modeling method shows that it helps to improve the model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, and thus the rotor orientation can be computed from the measured results and analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation and the value of magnetic flux density. The experimental results show that the proposed method can measure the rotor orientation precisely, and the measurement accuracy could be improved by the novel 3D magnet array. The study result could be used for real-time motion control of PM spherical actuators.
A novel multi-model neuro-fuzzy-based MPPT for three-phase grid-connected photovoltaic system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaouachi, Aymen; Kamel, Rashad M.; Nagasaka, Ken
This paper presents a novel methodology for Maximum Power Point Tracking (MPPT) of a grid-connected 20 kW photovoltaic (PV) system using a neuro-fuzzy network. The proposed method predicts the reference PV voltage guaranteeing optimal power transfer between the PV generator and the main utility grid. The neuro-fuzzy network is composed of a fuzzy rule-based classifier and three multi-layered feed-forward Artificial Neural Networks (ANNs). Inputs of the network (irradiance and temperature) are classified before they are fed into the appropriate ANN for either training or estimation, while the output is the reference voltage. The main advantage of the proposed methodology, compared to a conventional single neural network-based approach, is its distinct generalization ability with regard to the nonlinear and dynamic behavior of a PV generator. In fact, the neuro-fuzzy network is a neural network-based multi-model machine learning scheme that defines a set of local models emulating the complex and nonlinear behavior of a PV generator under a wide range of operating conditions. Simulation results under several rapid irradiance variations proved that the proposed MPPT method achieves the highest efficiency compared to a conventional single neural network and the Perturb and Observe (P and O) algorithm.
A Discrete-Time Average Model Based Predictive Control for Quasi-Z-Source Inverter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yushan; Abu-Rub, Haitham; Xue, Yaosuo
A discrete-time average model-based predictive control (DTA-MPC) is proposed for a quasi-Z-source inverter (qZSI). As a single-stage inverter topology, the qZSI regulates the dc-link voltage and the ac output voltage through the shoot-through (ST) duty cycle and the modulation index. Several feedback strategies have been dedicated to producing these two control variables, among which the most popular are the proportional–integral (PI)-based control and the conventional model-predictive control (MPC). However, in the former, there are tradeoffs between fast response and stability; the latter is robust, but at the cost of a high calculation burden and variable switching frequency. Moreover, they require an elaborate design or fine tuning of controller parameters. The proposed DTA-MPC predicts future behaviors of the ST duty cycle and modulation signals, based on the established discrete-time average model of the quasi-Z-source (qZS) inductor current, the qZS capacitor voltage, and load currents. The prediction actions are applied to the qZSI modulator in the next sampling instant, without the need to design other controller parameters. A constant switching frequency and significantly reduced computations are achieved with high performance. Transient responses and steady-state accuracy of the qZSI system under the proposed DTA-MPC are investigated and compared with the PI-based control and the conventional MPC. Simulation and experimental results verify the effectiveness of the proposed approach for the qZSI.
A Discrete-Time Average Model Based Predictive Control for Quasi-Z-Source Inverter
Liu, Yushan; Abu-Rub, Haitham; Xue, Yaosuo; ...
2017-12-25
A discrete-time average model-based predictive control (DTA-MPC) is proposed for a quasi-Z-source inverter (qZSI). As a single-stage inverter topology, the qZSI regulates the dc-link voltage and the ac output voltage through the shoot-through (ST) duty cycle and the modulation index. Several feedback strategies have been dedicated to producing these two control variables, among which the most popular are the proportional–integral (PI)-based control and the conventional model-predictive control (MPC). However, in the former, there are tradeoffs between fast response and stability; the latter is robust, but at the cost of a high calculation burden and variable switching frequency. Moreover, they require an elaborate design or fine tuning of controller parameters. The proposed DTA-MPC predicts future behaviors of the ST duty cycle and modulation signals, based on the established discrete-time average model of the quasi-Z-source (qZS) inductor current, the qZS capacitor voltage, and load currents. The prediction actions are applied to the qZSI modulator in the next sampling instant, without the need to design other controller parameters. A constant switching frequency and significantly reduced computations are achieved with high performance. Transient responses and steady-state accuracy of the qZSI system under the proposed DTA-MPC are investigated and compared with the PI-based control and the conventional MPC. Simulation and experimental results verify the effectiveness of the proposed approach for the qZSI.
NASA Astrophysics Data System (ADS)
Yang, Huizhen; Ma, Liang; Wang, Bin
2018-01-01
In contrast to the conventional adaptive optics (AO) system, the wavefront sensorless (WFSless) AO system does not need a WFS to measure the wavefront aberrations. It is simpler than conventional AO in system architecture and can be applied in complex conditions. The model-based WFSless system has great potential for real-time correction applications because of its fast convergence. The control algorithm of the model-based WFSless system rests on an important theoretical result: the linear relation between the Mean-Square Gradient (MSG) magnitude of the wavefront aberration and the second moment of the masked intensity distribution in the focal plane (also called the Masked Detector Signal, MDS). The linear dependence between MSG and MDS for point-source imaging with a CCD sensor is discussed from theory and simulation in this paper. The theoretical relationship between MSG and MDS is given based on our previous work. To verify the linear relation for the point source, we set up an imaging model under atmospheric turbulence. Additionally, the value of the MDS will deviate from the theoretical value because of detector noise, and this deviation will affect the correction performance. The theoretical results under noise are obtained through derivation, and the linear relation between MSG and MDS under noise is then examined using the imaging model. Results show that the linear relation between MSG and MDS under noise is maintained well, which provides theoretical support for applications of the model-based WFSless system.
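The masked second moment discussed above can be computed numerically as in the sketch below, where a circular pupil carrying a toy phase aberration is propagated to the focal plane by an FFT; the grid size, aberration, mask radius and normalisation are illustrative choices, not those of the paper.

```python
import numpy as np

n = 256
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
r = np.hypot(xx, yy)
aperture = (r <= n // 4).astype(float)                      # circular pupil

phase = 0.5 * np.sin(2 * np.pi * xx / 64) * aperture        # toy wavefront aberration
field = aperture * np.exp(1j * phase)

focal = np.fft.fftshift(np.fft.fft2(field))                 # far-field (focal plane)
intensity = np.abs(focal) ** 2

mask = (r <= 20).astype(float)                              # detector mask
masked = intensity * mask
mds = np.sum(masked * (xx ** 2 + yy ** 2)) / np.sum(masked) # masked second moment
print(mds)
```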
NASA Astrophysics Data System (ADS)
Kurnia, H.; Noerhadi, N. A. I.
2017-08-01
Three-dimensional digital study models were introduced following advances in digital technology. This study was carried out to assess the reliability of digital study models scanned by a newly assembled laser scanning device. The aim of this study was to compare the digital study models with conventional models. Twelve sets of dental impressions were taken from patients with mild-to-moderate crowding. The impressions were taken twice, one with alginate and the other with polyvinylsiloxane. The alginate impressions were made into conventional models, and the polyvinylsiloxane impressions were scanned to produce digital models. The mesiodistal tooth width and Little's irregularity index (LII) were measured manually with digital calipers on the conventional models and digitally on the digital study models. Bolton analysis was performed on each set of study models. Each method was carried out twice to check for intra-observer variability. The reproducibility (comparison of the methods) was assessed using independent-sample t-tests. The mesiodistal tooth widths of the conventional and digital models did not differ significantly (p > 0.05). Independent-sample t-tests did not identify statistically significant differences for the Bolton analysis and LII (p = 0.603 for Bolton and p = 0.894 for LII). The measurements of the digital study models are as accurate as those of the conventional models.
Kin Wong, Kenny; Chiu, Rose; Tang, Betty; Mak, Donald; Liu, Joanne; Chiu, Siu Ning
2008-01-01
Supported employment is an evidence-based practice that has proved to be consistently more effective than conventional vocational rehabilitation in helping people with severe mental illness find and sustain competitive employment. Most research on the effectiveness of supported employment comes from the United States. This study examined the effectiveness and applicability of a supported employment program based on the individual placement and support model in a Hong Kong setting. Ninety-two unemployed individuals with long-term mental illness who desired competitive employment were randomly assigned to either a supported employment program or a conventional vocational rehabilitation program and followed up for 18 months. Both vocational and nonvocational outcomes were measured. Over the 18-month study period, compared with participants in the conventional vocational rehabilitation program, those in the supported employment group were more likely to work competitively (70% versus 29%; odds ratio=5.63, 95% confidence interval=2.28-13.84), held a greater number of competitive jobs, earned more income, worked more days, and sustained longer job tenures. Repeated-measures analysis of variance found no substantive differences between participants in the two groups and no significant change from baseline over time for psychiatric symptoms and self-perceived quality of life. Consistent with previous research findings in the United States, the supported employment program was more effective than the conventional vocational rehabilitation program in helping individuals with long-term mental illness find and sustain competitive employment in a Hong Kong setting. The supported employment program based on the individual placement and support model can thus be recommended for wider use in local mental health practice.
NASA Astrophysics Data System (ADS)
Saleh, H.; Suryadi, D.; Dahlan, J. A.
2018-01-01
The aim of this research was to find out whether the 7E learning cycle under a hypnoteaching model can enhance students' mathematical problem-solving skill. This research was a quasi-experimental study. The design of this study was a pretest-posttest control group design. Two groups of participants were used in the study. The experimental group was given the 7E learning cycle under the hypnoteaching model, while the control group was given the conventional model. The population of this study was students of a mathematics education program at a university in Tangerang. The statistical analyses used to test the hypotheses of this study were the t-test and the Mann-Whitney U test. The results of this study show that: (1) the mathematical problem-solving achievement of students who received the 7E learning cycle under the hypnoteaching model is higher than that of students who received the conventional model; (2) there are differences in students' enhancement of mathematical problem-solving skill based on their prior mathematical knowledge (PMK) category (high, middle, and low).
Control of Systems With Slow Actuators Using Time Scale Separation
NASA Technical Reports Server (NTRS)
Stepanyan, Vehram; Nguyen, Nhan
2009-01-01
This paper addresses the problem of controlling a nonlinear plant with a slow actuator using the singular perturbation method. For the known plant-actuator cascaded system, the proposed scheme achieves tracking of a given reference model with considerably less control demand than would otherwise result when using conventional design techniques. This is the consequence of excluding the small parameter from the actuator dynamics via time scale separation. The resulting tracking error is within the order of this small parameter. For the unknown system, the adaptive counterpart is developed based on a prediction model, which is driven towards the reference model by the control design. It is proven that the prediction model tracks the reference model with an error proportional to the small parameter, while the prediction error converges to zero. The resulting closed-loop system with all prediction models and adaptive laws remains stable. The benefits of the approach are demonstrated in simulation studies and compared to conventional control approaches.
NASA Astrophysics Data System (ADS)
Jain, Rishabh
In this thesis, line protection elements and their supervisory elements are analyzed in the context of Type 3 (doubly fed induction generator based) grid-integrated wind turbine systems. The underlying converter and controller design algorithms and topologies are discussed. A detailed controller for the Type 3 wind turbine system is designed and integrated into the grid using the RTDS. An alternative to the conventional PLL for tracking the rotor frequency is designed and implemented. A comparative analysis of the performance of an averaged model and the corresponding switching model is presented. After completing the WT model design, the averaged model is used to build an aggregate 10-generator equivalent model tied to a 230 kV grid via a 22 kV collector. This model is a valuable asset for understanding the dynamics and the unfaulted and faulted behavior of aggregated and single-turbine Type 3 WT systems. The model is then used to analyze the response of conventional protection schemes (line current differential and mho distance elements) and their respective supervisory elements in modern commercial protection relays in real time by hardware-in-the-loop simulation using the RTDS. Differences in the behavior of these elements compared to conventional power systems are noted. Faults are analyzed from the relay's perspective and the reasons for the observed behavior are presented. Challenges associated with sequence components and relay sensitivity are discussed, and alternate practices to circumvent these issues are recommended.
NASA Astrophysics Data System (ADS)
Calitri, Francesca; Necpalova, Magdalena; Lee, Juhwan; Zaccone, Claudio; Spiess, Ernst; Herrera, Juan; Six, Johan
2016-04-01
Organic cropping systems have been promoted as a sustainable alternative to minimize the environmental impacts of conventional practices. Relatively little is known about the potential to reduce NO3-N leaching through the large-scale adoption of organic practices. Moreover, the potential to mitigate NO3-N leaching, and thus N pollution, under future climate change through organic farming remains unknown and highly uncertain. Here, we compared regional NO3-N leaching from organic and conventional cropping systems in Switzerland using the terrestrial biogeochemical process-based model DayCent. The objectives of this study are 1) to calibrate and evaluate the model for NO3-N leaching measured under various management practices in three experiments at two sites in Switzerland; 2) to estimate regional NO3-N leaching patterns and their spatial uncertainty in conventional and organic cropping systems (with and without cover crops) for the future climate change scenario A1B; 3) to explore the sensitivity of NO3-N leaching to changes in soil and climate variables; and 4) to assess the nitrogen use efficiency of conventional and organic cropping systems with and without cover crops under climate change. The data for model calibration/evaluation were derived from field experiments conducted in Liebefeld (canton Bern) and Eschikon (canton Zürich). These experiments evaluated the effects of various cover crops and N fertilizer inputs on NO3-N leaching. The preliminary results suggest that the model was able to explain 50 to 83% of the inter-annual variability in the measured soil drainage (RMSE from 12.32 to 16.89 cm y⁻¹). The annual NO3-N leaching was also simulated satisfactorily (RMSE = 3.94 to 6.38 g N m⁻² y⁻¹), although the model had difficulty reproducing the inter-annual variability in the NO3-N leaching losses correctly (R² = 0.11 to 0.35). Future climate datasets (2010-2099) from 10 regional climate models (RCMs) were used in the simulations. Regional NO3-N leaching predictions for a conventional cropping system with a three-year rotation (silage maize, potatoes and winter wheat) in the Zurich and Bern cantons varied from 6.30 to 16.89 g N m⁻² y⁻¹ over a 30-year period. Further simulations and analyses will follow to provide insight into the driving variables and patterns of N losses by leaching in response to a shift from conventional to organic cropping systems and to climate change.
Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan
2017-01-01
In this paper, an improved azimuth angle estimation method using a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in active sonar detection systems. Building on the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. Computer simulations and lake experiment results indicate that the method can estimate the azimuth angle with high precision using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, it does not require complex operations in the frequency domain, reducing the computational complexity. PMID:28230763
Summarization as the base for text assessment
NASA Astrophysics Data System (ADS)
Karanikolas, Nikitas N.
2015-02-01
We present a model that applies shallow text summarization as a cheap (in resources needed) process for Automatic (machine-based) free-text answer Assessment (AA). The evaluation of the proposed method leads to the inference that Conventional Assessment (CA, human assessment of free-text answers) does not have an obvious mechanical replacement. However, this remains a research challenge.
NASA Astrophysics Data System (ADS)
Yeni, N.; Suryabayu, E. P.; Handayani, T.
2017-02-01
A survey showed that mathematics teachers still dominate the teaching and learning process: learning is centered on the teacher, while students only work from instructions provided by the teacher, without creativity or activities that stimulate them to explore their potential. Recognizing this problem, the writer was interested in finding a solution by applying the ‘Learning Cycles 5E’ teaching model. The purpose of this research is to determine whether the ‘Learning Cycles 5E’ teaching model is better than conventional teaching for mathematics. The research is a quasi-experiment with a randomized control group only design. The population was all grade X students, and the sample was chosen randomly after tests of normality, homogeneity, and average level of student achievement. Class X.7 served as the experimental class, taught with the Learning Cycles 5E model, and class X.8 served as the control class, taught conventionally. The results showed that student achievement in the class that used the ‘Learning Cycles 5E’ teaching model was better than in the class that did not use the model.
Application of Fracture Distribution Prediction Model in Xihu Depression of East China Sea
NASA Astrophysics Data System (ADS)
Yan, Weifeng; Duan, Feifei; Zhang, Le; Li, Ming
2018-02-01
Each type of logging data responds differently to changes in formation characteristics, and outliers are caused by the presence of fractures. For this reason, the development of fractures in a formation can be characterized by fine analysis of the logging curves. Well logs such as resistivity, sonic transit time, density, neutron porosity, and gamma ray, which are classified as conventional well logs, are sensitive to formation fractures. Because the traditional fracture prediction model, which uses a simple weighted average of different logging data to calculate a comprehensive fracture index, is susceptible to subjective factors and exhibits large deviations, a statistical method is introduced instead. Combining the responses of conventional logging data to the development of formation fractures, a prediction model based on membership functions is established; its essence is to analyse the logging data with fuzzy mathematics theory. Fracture predictions for a well formation in the NX block of the Xihu depression obtained with the two models are compared with imaging logging, which shows that the accuracy of the membership-function-based model is better than that of the traditional model. Furthermore, its predictions are highly consistent with the imaging logs and better reflect the development of fractures. The model can provide a reference for engineering practice.
Chen, R S; Nadkarni, P; Marenco, L; Levin, F; Erdos, J; Miller, P L
2000-01-01
The entity-attribute-value representation with classes and relationships (EAV/CR) provides a flexible and simple database schema to store heterogeneous biomedical data. In certain circumstances, however, the EAV/CR model is known to retrieve data less efficiently than conventionally based database schemas. To perform a pilot study that systematically quantifies performance differences for database queries directed at real-world microbiology data modeled with EAV/CR and conventional representations, and to explore the relative merits of different EAV/CR query implementation strategies. Clinical microbiology data obtained over a ten-year period were stored using both database models. Query execution times were compared for four clinically oriented attribute-centered and entity-centered queries operating under varying conditions of database size and system memory. The performance characteristics of three different EAV/CR query strategies were also examined. Performance was similar for entity-centered queries in the two database models. Performance in the EAV/CR model was approximately three to five times less efficient than its conventional counterpart for attribute-centered queries. The differences in query efficiency became slightly greater as database size increased, although they were reduced with the addition of system memory. The authors found that EAV/CR queries formulated using multiple, simple SQL statements executed in batch were more efficient than single, large SQL statements. This paper describes a pilot project to explore issues in and compare query performance for EAV/CR and conventional database representations. Although attribute-centered queries were less efficient in the EAV/CR model, these inefficiencies may be addressable, at least in part, by the use of more powerful hardware or more memory, or both.
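To make the attribute-centered versus entity-centered distinction concrete, here is a minimal, hypothetical sketch (the schema, table, and column names are invented, and in-memory SQLite stands in for the production database) of the same attribute-centered query against a conventional table and an EAV table:

```python
# Minimal sketch (hypothetical schema/names) contrasting an attribute-centered
# query in a conventional "one column per attribute" table with the same query
# against an entity-attribute-value (EAV) table, using in-memory SQLite.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Conventional representation: one row per culture, one column per attribute.
cur.execute("CREATE TABLE culture_conv (id INTEGER PRIMARY KEY, organism TEXT, site TEXT)")
cur.executemany("INSERT INTO culture_conv VALUES (?, ?, ?)",
                [(1, "E. coli", "urine"), (2, "S. aureus", "blood")])

# EAV representation: one row per (entity, attribute, value) triple.
cur.execute("CREATE TABLE culture_eav (entity INTEGER, attribute TEXT, value TEXT)")
cur.executemany("INSERT INTO culture_eav VALUES (?, ?, ?)",
                [(1, "organism", "E. coli"), (1, "site", "urine"),
                 (2, "organism", "S. aureus"), (2, "site", "blood")])

# Attribute-centered query: "all entities where organism = 'E. coli'".
print(cur.execute("SELECT id FROM culture_conv WHERE organism = 'E. coli'").fetchall())
print(cur.execute("SELECT entity FROM culture_eav "
                  "WHERE attribute = 'organism' AND value = 'E. coli'").fetchall())
# The EAV form needs one self-join (or subquery) per additional attribute
# requested, which is one source of the slowdown reported above.
```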
Enhancement of optical polarization degree of AlGaN quantum wells by using staggered structure.
Wang, Weiying; Lu, Huimin; Fu, Lei; He, Chenguang; Wang, Mingxing; Tang, Ning; Xu, Fujun; Yu, Tongjun; Ge, Weikun; Shen, Bo
2016-08-08
Staggered AlGaN quantum wells (QWs) are designed to enhance the transverse-electric (TE) polarized optical emission in deep ultraviolet (DUV) light-emitting diodes (LEDs). The optical polarization properties of conventional and staggered AlGaN QWs are investigated with a theoretical model based on the k·p method as well as polarized photoluminescence (PL) measurements. Based on an analysis of the valence subbands and momentum matrix elements, it is found that AlGaN QWs with a step-function-like Al content offer much stronger TE polarized emission than conventional AlGaN QWs. Experimental results show that the degree of PL polarization at room temperature is enhanced from 20.8% for conventional AlGaN QWs to 40.2% for staggered AlGaN QWs grown by MOCVD, in good agreement with the theoretical simulation. This suggests that polarization band engineering via staggered AlGaN QWs can be applied to high-efficiency AlGaN-based DUV LEDs.
Azuma, Masaki; Yanagawa, Toru; Ishibashi-Kanno, Naomi; Uchida, Fumihiko; Ito, Takaaki; Yamagata, Kenji; Hasegawa, Shogo; Sasaki, Kaoru; Adachi, Koji; Tabuchi, Katsuhiko; Sekido, Mitsuru; Bukawa, Hiroki
2014-10-23
Recently, medical rapid prototyping (MRP) models, fabricated with computer-aided design and computer-aided manufacture (CAD/CAM) techniques, have been applied to reconstructive surgery in the treatment of head and neck cancers. Here, we tested the use of preoperatively manufactured reconstruction plates, which were produced using MRP models. The clinical efficacy and esthetic outcome of using these products in mandibular reconstruction was evaluated. A series of 28 patients with malignant oral tumors underwent unilateral segmental resection of the mandible and simultaneous mandibular reconstruction. Twelve patients were treated with prebent reconstruction plates that were molded to MRP mandibular models designed with CAD/CAM techniques and fabricated on a combined powder bed and inkjet head three-dimensional printer. The remaining 16 patients were treated using conventional reconstruction methods. The surgical and esthetic outcomes of the two groups were compared by imaging analysis using post-operative panoramic tomography. The mandibular symmetry in patients receiving the MRP-model-based prebent plates was significantly better than that in patients receiving conventional reconstructive surgery. Patients with head and neck cancer undergoing reconstructive surgery using a prebent reconstruction plate fabricated according to an MRP mandibular model showed improved mandibular contour compared to patients undergoing conventional mandibular reconstruction. Thus, use of this new technology for mandibular reconstruction results in an improved esthetic outcome with the potential for improved quality of life for patients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Guang; Fan, Jiwen; Xu, Kuan-Man
2015-06-01
Arakawa and Wu (2013, hereafter referred to as AW13) recently developed a formal approach to a unified parameterization of atmospheric convection for high-resolution numerical models. The work is based on ideas formulated by Arakawa et al. (2011) and lays the foundation for a new parameterization pathway in the era of high-resolution numerical modeling of the atmosphere. The key parameter in this approach is the convective cloud fraction σ. In conventional parameterization it is assumed that σ << 1. This assumption is no longer valid when the horizontal resolution of numerical models approaches a few to a few tens of kilometers, since in such situations the convective cloud fraction can be comparable to unity. Therefore, they argue that the conventional approach to parameterizing convective transport must include a factor 1 - σ in order to unify the parameterization for the full range of model resolutions so that it is scale-aware and valid for large convective cloud fractions. While AW13’s approach provides important guidance for future convective parameterization development, in this note we intend to show that the conventional approach already has this scale-awareness factor 1 - σ built in, although this has not been recognized for the last forty years. Therefore, it should work well even in situations of large convective cloud fractions in high-resolution numerical models.
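A schematic way to see the argument (notation is mine and not necessarily AW13's exact symbols): for a top-hat decomposition of a grid cell into a convective part (subscript c) occupying area fraction σ and an environment (subscript e), the subgrid eddy flux of a quantity ψ satisfies the identity

$$ \overline{w'\psi'} \;=\; \sigma\,(1-\sigma)\,\bigl(w_c - w_e\bigr)\bigl(\psi_c - \psi_e\bigr), $$

so the conventional σ << 1 limit retains only the factor σ, while the factor 1 - σ drives the parameterized flux to zero as the cell becomes fully convective.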
Makeyev, Oleksandr; Ding, Quan; Martínez-Juárez, Iris E; Gaitanis, John; Kay, Steven M; Besio, Walter G
2013-01-01
As epilepsy affects approximately one percent of the world population, electrical stimulation of the brain has recently shown potential for additive seizure control therapy. Closed-loop systems that apply electrical stimulation when seizure onset is automatically detected require high accuracy of automatic seizure detection based on electrographic brain activity. To improve this accuracy, we propose to use noninvasive tripolar concentric ring electrodes, which have been shown to have significantly better signal-to-noise ratio, spatial selectivity, and mutual information compared to conventional disc electrodes. The proposed detection methodology is based on integration of multiple sensors using the exponentially embedded family (EEF). In this preliminary study it is validated on over 26.3 hours of data collected concurrently with tripolar concentric ring and conventional disc electrodes from 7 human patients with epilepsy, including five seizures. For a cross-validation-based group model, EEF correctly detected 100% and 80% of seizures, with fewer than 0.76 and 1.56 false positive detections per hour, for the two electrode modalities respectively. These results clearly suggest the potential of seizure onset detection based on data from tripolar concentric ring electrodes.
Bazavov, A; Ding, H-T; Hegde, P; Kaczmarek, O; Karsch, F; Laermann, E; Maezawa, Y; Mukherjee, Swagato; Ohno, H; Petreczky, P; Schmidt, C; Sharma, S; Soeldner, W; Wagner, M
2014-08-15
We compare lattice QCD results for appropriate combinations of net strangeness fluctuations and their correlations with net baryon number fluctuations with predictions from two hadron resonance gas (HRG) models having different strange hadron content. The conventionally used HRG model based on experimentally established strange hadrons fails to describe the lattice QCD results in the hadronic phase close to the QCD crossover. Supplementing the conventional HRG with additional, experimentally uncharted strange hadrons predicted by quark model calculations and observed in lattice QCD spectrum calculations leads to good descriptions of strange hadron thermodynamics below the QCD crossover. We show that the thermodynamic presence of these additional states gets imprinted in the yields of the ground-state strange hadrons leading to a systematic 5-8 MeV decrease of the chemical freeze-out temperatures of ground-state strange baryons.
MRAC Revisited: Guaranteed Performance with Reference Model Modification
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmaje
2010-01-01
This paper presents a modification of the conventional model reference adaptive control (MRAC) architecture in order to achieve guaranteed transient performance in both the output and input signals of an uncertain system. The proposed modification is based on feeding the tracking error back to the reference model. It is shown that the approach guarantees tracking of a given command and of the ideal control signal (the one that would be designed if the system were known) not only asymptotically but also in the transient, by a proper selection of the error feedback gain. The method prevents the generation of high-frequency oscillations that are unavoidable in conventional MRAC systems for large adaptation rates. The provided design guideline makes it possible to track a reference command of any magnitude from any initial position without re-tuning. The benefits of the method are demonstrated in simulations.
Separated transonic airfoil flow calculations with a nonequilibrium turbulence model
NASA Technical Reports Server (NTRS)
King, L. S.; Johnson, D. A.
1985-01-01
Navier-Stokes transonic airfoil calculations based on a recently developed nonequilibrium, turbulence closure model are presented for a supercritical airfoil section at transonic cruise conditions and for a conventional airfoil section at shock-induced stall conditions. Comparisons with experimental data are presented which show that this nonequilibrium closure model performs significantly better than the popular Baldwin-Lomax and Cebeci-Smith equilibrium algebraic models when there is boundary-layer separation that results from the inviscid-viscous interactions.
NASA Astrophysics Data System (ADS)
Ahmed, Asif; Ferdous, Imam Ul.; Saha, Sumon
2017-06-01
In the present study, three-dimensional numerical simulations of two shell-and-tube heat exchangers (STHXs), one with conventional segmental baffles (STHXsSB) and one with a continuous helical baffle (STHXsHB), are carried out and a comparative study is performed based on the simulation results. Both STHXs contain 37 tubes inside a 500 mm long and 200 mm diameter shell, and the mass flow rate of the shell-side fluid is varied from 0.5 kg/s to 2 kg/s. First, physical and mathematical models are developed and numerically simulated using the finite element method (FEM). For validation of the computational model, the shell-side average Nusselt number (Nus) is calculated from the simulation results and compared with available experimental results. The comparative study shows that the STHXsHB has a 72-127% higher heat transfer coefficient per unit pressure drop than the conventional STHXsSB for the same shell-side mass flow rate. Moreover, the STHXsHB has a 59-63% lower shell-side pressure drop than the STHXsSB.
Yeung, Joanne Chung Yan; de Lannoy, Inés; Gien, Brad; Vuckovic, Dajana; Yang, Yingbo; Bojko, Barbara; Pawliszyn, Janusz
2012-09-12
In vivo solid-phase microextraction (SPME) can be used to sample the circulating blood of animals without the need to withdraw a representative blood sample. In this study, in vivo SPME in combination with liquid-chromatography tandem mass spectrometry (LC-MS/MS) was used to determine the pharmacokinetics of two drug analytes, R,R-fenoterol and R,R-methoxyfenoterol, administered as 5 mg kg⁻¹ i.v. bolus doses to groups of 5 rats. This research illustrates, for the first time, the feasibility of the diffusion-based calibration interface model for in vivo SPME studies. To provide a constant sampling rate as required for the diffusion-based interface model, partial automation of the SPME sampling of the analytes from the circulating blood was accomplished using an automated blood sampling system. The use of the blood sampling system allowed automation of all SPME sampling steps in vivo, except for the insertion and removal of the SPME probe from the sampling interface. The results from in vivo SPME were compared to the conventional method based on blood withdrawal and sample clean up by plasma protein precipitation. Both whole blood and plasma concentrations were determined by the conventional method. The concentrations of methoxyfenoterol and fenoterol obtained by SPME generally concur with the whole blood concentrations determined by the conventional method indicating the utility of the proposed method. The proposed diffusion-based interface model has several advantages over other kinetic calibration models for in vivo SPME sampling including (i) it does not require the addition of a standard into the sample matrix during in vivo studies, (ii) it is simple and rapid and eliminates the need to pre-load appropriate standard onto the SPME extraction phase and (iii) the calibration constant for SPME can be calculated based on the diffusion coefficient, extraction time, fiber length and radius, and size of the boundary layer. In the current study, the experimental calibration constants of 338.9±30 mm⁻³ and 298.5±25 mm⁻³ are in excellent agreement with the theoretical calibration constants of 307.9 mm⁻³ and 316.0 mm⁻³ for fenoterol and methoxyfenoterol respectively. Copyright © 2012 Elsevier B.V. All rights reserved.
JEDI Conventional Hydropower Model | Jobs and Economic Development Impact
Economic Development Impacts (JEDI) Conventional Hydropower Model allows users to estimate economic development impacts from conventional hydropower projects and includes default information that can be
NASA Astrophysics Data System (ADS)
Ge, Xinmin; Fan, Yiren; Liu, Jianyu; Zhang, Li; Han, Yujiao; Xing, Donghui
2017-10-01
Permeability is an important parameter in formation evaluation since it controls fluid transport in porous rocks. However, it is challenging to compute the permeability of bioclastic limestone reservoirs with conventional methods linking petrophysical and geophysical data, due to the complex pore distributions. A new method is presented to estimate permeability based on laboratory and downhole nuclear magnetic resonance (NMR) measurements. We divide the pore space into four intervals by the inflection points between the pore radius and the transverse relaxation time. Relationships between permeability and the percentages of the different pore intervals are investigated to identify the factors influencing fluid transport. Furthermore, an empirical model that takes the pore size distribution into account is presented to compute the permeability. Results from 212 core samples show that the accuracy of the permeability calculation is improved from 0.542 (SDR model), 0.507 (TIM model), and 0.455 (conventional porosity-permeability regressions) to 0.803. To enhance the precision of downhole application of the new model, we developed a fluid correction algorithm to construct the water spectrum from in-situ NMR data, aiming to eliminate the influence of oil on the magnetization. The results reveal that permeability is positively correlated with the percentages of mega-pores and macro-pores, but negatively correlated with the percentage of micro-pores. Poor correlation is observed between permeability and the percentage of meso-pores. NMR magnetizations and T2 spectra after the fluid correction agree well with laboratory results for samples saturated with water. Field application indicates that the improved method performs better than conventional models such as the Schlumberger-Doll Research equation, the Timur-Coates equation, and porosity-permeability regressions.
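For context, the two conventional NMR permeability estimators named above are commonly written in the generic textbook forms sketched below; the coefficients and example inputs are illustrative assumptions, not the values calibrated in this study.

```python
# Hedged sketch of the two conventional NMR permeability estimators mentioned
# above (SDR and Timur-Coates); coefficient values are illustrative defaults,
# not the ones calibrated against the 212 core samples.
def sdr_permeability(phi, t2_logmean_ms, a=4.0):
    """SDR-type model: k ~ a * phi^4 * T2lm^2 (k in mD with a suitable coefficient)."""
    return a * phi**4 * t2_logmean_ms**2

def timur_coates_permeability(phi, ffi, bvi, c=10.0):
    """Timur-Coates-type model: k ~ (phi[%]/c)^4 * (FFI/BVI)^2."""
    return (100.0 * phi / c)**4 * (ffi / bvi)**2

# Example inputs: porosity as a fraction, T2 log-mean in ms, and free-fluid /
# bound-fluid volume fractions taken from a T2 spectrum cutoff.
print(sdr_permeability(phi=0.20, t2_logmean_ms=80.0))
print(timur_coates_permeability(phi=0.20, ffi=0.15, bvi=0.05))
```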
Trend-Residual Dual Modeling for Detection of Outliers in Low-Cost GPS Trajectories
Chen, Xiaojian; Cui, Tingting; Fu, Jianhong; Peng, Jianwei; Shan, Jie
2016-01-01
Low-cost GPS (receiver) has become a ubiquitous and integral part of our daily life. Despite noticeable advantages such as being cheap, small, light, and easy to use, its limited positioning accuracy devalues and hampers its wide application for reliable mapping and analysis. Two conventional techniques to remove outliers in a GPS trajectory are thresholding and Kalman-based methods, which suffer from the difficulty of selecting appropriate thresholds and modeling the trajectories. Moreover, they are insensitive to medium and small outliers, especially for low-sample-rate trajectories. This paper proposes a model-based GPS trajectory cleaner. Rather than examining speed and acceleration or assuming a pre-determined trajectory model, we first use a cubic smoothing spline to adaptively model the trend of the trajectory. The residuals, i.e., the differences between the trend and the GPS measurements, are then further modeled by a time series method. Outliers are detected by scoring the residuals at every GPS trajectory point. Compared to the conventional procedures, the trend-residual dual modeling approach has the following features: (a) it is able to model trajectories and detect outliers adaptively; (b) only one critical value for outlier scores needs to be set; (c) it is able to robustly detect less apparent outliers; and (d) it is effective in cleaning outliers for GPS trajectories with low sample rates. Tests are carried out on three real-world GPS trajectory datasets. The evaluation demonstrates an average of 9.27 times better performance in outlier detection for GPS trajectories than thresholding and Kalman-based techniques. PMID:27916944
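A minimal sketch of the trend-residual idea, assuming a smoothing spline for the trend and a simple z-score in place of the paper's time-series residual model; the trajectory data, smoothing factor, and critical value are hypothetical.

```python
# Illustrative sketch (not the authors' implementation) of trend-residual dual
# modeling: fit a smoothing spline to one coordinate of a GPS trajectory, then
# score the residuals and flag points whose score exceeds a single critical value.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
t = np.linspace(0.0, 100.0, 200)                                   # timestamps (s)
x = 0.5 * t + 3.0 * np.sin(0.1 * t) + rng.normal(0, 0.3, t.size)   # easting (m)
x[50] += 8.0                                                       # inject one outlier

trend = UnivariateSpline(t, x, s=t.size * 0.3)                     # adaptive trend model
residual = x - trend(t)
score = np.abs(residual - residual.mean()) / residual.std()        # simplified residual score

outliers = np.where(score > 3.0)[0]                                # one critical value
print("flagged points:", outliers)
```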
NASA Astrophysics Data System (ADS)
Grujicic, M.; Bell, W. C.; Arakere, G.; He, T.; Xie, X.; Cheeseman, B. A.
2010-02-01
A meso-scale ballistic material model for a prototypical plain-woven single-ply flexible armor is developed and implemented in a material user subroutine for the use in commercial explicit finite element programs. The main intent of the model is to attain computational efficiency when calculating the mechanical response of the multi-ply fabric-based flexible-armor material during its impact with various projectiles without significantly sacrificing the key physical aspects of the fabric microstructure, architecture, and behavior. To validate the new model, a comparative finite element method analysis is carried out in which: (a) the plain-woven single-ply fabric is modeled using conventional shell elements and weaving is done in an explicit manner by snaking the yarns through the fabric and (b) the fabric is treated as a planar continuum surface composed of conventional shell elements to which the new meso-scale unit-cell based material model is assigned. The results obtained show that the material model provides a reasonably good description for the fabric deformation and fracture behavior under different combinations of fixed and free boundary conditions. Finally, the model is used in an investigation of the ability of a multi-ply soft-body armor vest to protect the wearer from impact by a 9-mm round nose projectile. The effects of inter-ply friction, projectile/yarn friction, and the far-field boundary conditions are revealed and the results explained using simple wave mechanics principles, high-deformation rate material behavior, and the role of various energy-absorbing mechanisms in the fabric-based armor systems.
Zhen, Chen; Brissette, Ian F.; Ruff, Ryan R.
2014-01-01
The obesity epidemic and excessive consumption of sugar-sweetened beverages have led to proposals of economics-based interventions to promote healthy eating in the United States. Targeted food and beverage taxes and subsidies are prominent examples of such potential intervention strategies. This paper examines the differential effects of taxing sugar-sweetened beverages by calories and by ounces on beverage demand. To properly measure the extent of substitution and complementarity between beverage products, we developed a fully modified distance metric model of differentiated product demand that endogenizes the cross-price effects. We illustrated the proposed methodology in a linear approximate almost ideal demand system, although other flexible demand systems can also be used. In the empirical application using supermarket scanner data, the product-level demand model consists of 178 beverage products with combined market share of over 90%. The novel demand model outperformed the conventional distance metric model in non-nested model comparison tests and in terms of the economic significance of model predictions. In the fully modified model, a calorie-based beverage tax was estimated to cost $1.40 less in compensating variation than an ounce-based tax per 3,500 beverage calories reduced. This difference in welfare cost estimates between two tax strategies is more than three times as much as the difference estimated by the conventional distance metric model. If applied to products purchased from all sources, a 0.04-cent per kcal tax on sugar-sweetened beverages is predicted to reduce annual per capita beverage intake by 5,800 kcal. PMID:25414517
Anovulation and ovulation induction
Katsikis, I; Kita, M; Karkanaki, A; Prapas, N; Panidis, D
2006-01-01
Conventional treatment of normogonadotropic anovulatory infertility is ovulation induction using the antiestrogen clomiphene citrate, followed by follicle-stimulating hormone. Multiple follicle development, associated with ovarian hyperstimulation, and multiple pregnancy remain the major complications. Cumulative singleton and multiple pregnancy rate data after different induction treatments are needed. Newer ovulation induction interventions, such as insulin-sensitizing drugs, aromatase inhibitors and laparoscopic ovarian electrocoagulation, should be compared with conventional treatments. Ovulation induction efficiency might improve if patient subgroups with altered chances for success or complications with new or conventional techniques could be identified, using multivariate prediction models based on initial screening characteristics. This would make ovulation induction more cost-effective, safe and convenient, enabling doctors to advise patients on the most effective and patient-tailored treatment strategy. PMID:20351807
Approaching mathematical model of the immune network based DNA Strand Displacement system.
Mardian, Rizki; Sekiyama, Kosuke; Fukuda, Toshio
2013-12-01
One of the biggest obstacles in molecular programming is that there is still no direct method to compile an existing mathematical model into biochemical reactions in order to solve a computational problem. In this paper, the implementation of a DNA Strand Displacement system based on nature-inspired computation is examined. Using Immune Network Theory and a Chemical Reaction Network, the compilation of the DNA-based operation is defined and the formulation of its mathematical model is derived. Furthermore, the implementation of this system is compared with a conventional implementation using silicon-based programming. From the obtained results, we can see a positive correlation between the two. One possible application of this DNA-based model is as a decision-making scheme for an intelligent computer or molecular robot. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
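As a generic illustration of the chemical-reaction-network abstraction mentioned above (not the authors' immune-network model), the sketch below integrates the mass-action ODEs of a toy bimolecular reaction of the kind that DNA strand displacement systems are compiled from; the species and rate constant are hypothetical.

```python
# Illustrative sketch only: integrating the mass-action ODEs of a toy chemical
# reaction network (A + B -> C). Species and rate constant are made up.
import numpy as np
from scipy.integrate import odeint

k = 0.5  # assumed bimolecular rate constant

def crn(y, t):
    a, b, c = y
    rate = k * a * b          # mass-action kinetics for A + B -> C
    return [-rate, -rate, rate]

t = np.linspace(0, 20, 200)
trajectory = odeint(crn, [1.0, 0.8, 0.0], t)
print(trajectory[-1])         # near-final concentrations of A, B, C
```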
ERIC Educational Resources Information Center
Sisco, Melissa M.; Figueredo, Aurelio Jose
2008-01-01
Surveys and focus groups were administered to two samples of US university undergraduates to compare sexual aggression prevalence as assessed based on the Power-Assertion model (n = 139) versus the Confluence model (n = 318). Men were more likely to commit all illegal acts, especially conventional rape. Women also committed illegal acts,…
NASA Astrophysics Data System (ADS)
Jian, Wang; Xiaohong, Meng; Hong, Liu; Wanqiu, Zheng; Yaning, Liu; Sheng, Gui; Zhiyang, Wang
2017-03-01
Full waveform inversion and reverse time migration are active research areas in seismic exploration. Forward modeling in the time domain determines the precision of the results, and finite-difference numerical solutions have been widely adopted as an important mathematical tool for forward modeling. In this article, an optimal combination of window functions is designed for the finite difference operator using a truncated approximation of the spatial convolution series in pseudo-spectral space, to normalize the outcomes of existing window functions for different orders. The proposed combined window functions not only inherit the characteristics of the individual window functions, providing better truncation results, but also allow the truncation error of the finite difference operator to be controlled manually and visually by adjusting the combinations and analyzing the characteristics of the main and side lobes of the amplitude response. The error level and elastic forward modeling results under the proposed combined scheme were compared with outcomes from conventional window functions and modified binomial windows. Numerical dispersion is significantly suppressed compared with the modified binomial window and conventional finite-difference schemes. Numerical simulation verifies the reliability of the proposed method.
Wave Attenuation and Gas Exchange Velocity in Marginal Sea Ice Zone
NASA Astrophysics Data System (ADS)
Bigdeli, A.; Hara, T.; Loose, B.; Nguyen, A. T.
2018-03-01
The gas transfer velocity in marginal sea ice zones exerts a strong control on the input of anthropogenic gases into the ocean interior. In this study, a sea state-dependent gas exchange parametric model is developed based on the turbulent kinetic energy dissipation rate. The model is tuned to match the conventional gas exchange parametrization in fetch-unlimited, fully developed seas. Next, fetch limitation is introduced in the model and results are compared to fetch limited experiments in lakes, showing that the model captures the effects of finite fetch on gas exchange with good fidelity. Having validated the results in fetch limited waters such as lakes, the model is next applied in sea ice zones using an empirical relation between the sea ice cover and the effective fetch, while accounting for the sea ice motion effect that is unique to sea ice zones. The model results compare favorably with the available field measurements. Applying this parametric model to a regional Arctic numerical model, it is shown that, under the present conditions, gas flux into the Arctic Ocean may be overestimated by 10% if a conventional parameterization is used.
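For orientation, one widely used dissipation-based form of the gas transfer velocity (a plausible reading of the parametrization described above, not necessarily the authors' exact expression) is

$$ k \;=\; A\,\left(\varepsilon\,\nu\right)^{1/4}\,Sc^{-1/2}, $$

where ε is the near-surface turbulent kinetic energy dissipation rate, ν the kinematic viscosity of seawater, Sc the Schmidt number of the gas, and A an empirical constant of order 0.4; wave attenuation and reduced fetch under sea ice lower ε and hence k.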
Consideration of VT5 etch-based OPC modeling
NASA Astrophysics Data System (ADS)
Lim, ChinTeong; Temchenko, Vlad; Kaiser, Dieter; Meusel, Ingo; Schmidt, Sebastian; Schneider, Jens; Niehoff, Martin
2008-03-01
Including etch-based empirical data during OPC model calibration is a desired yet controversial decision for OPC modeling, especially for processes with a large litho-to-etch bias. While many OPC software tools now provide this functionality, few have been implemented in manufacturing because of various risk considerations, such as compromises in resist and optical effects prediction, etch model accuracy, or even runtime concerns. The conventional method of applying rule-based corrections alongside a resist model is popular but requires a lot of lengthy code generation to provide a leaner OPC input. This work discusses the risk factors and their considerations, together with an introduction of techniques used within Mentor Calibre VT5 etch-based modeling at sub-90 nm technology nodes. Various strategies are discussed with the aim of better handling large etch bias offsets without adding complexity to the final OPC package. Finally, results are presented to assess the advantages and limitations of the final method chosen.
Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William
2014-01-01
Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies. PMID:24992657
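A hedged sketch of the tree-based approach on synthetic data; the covariates, sample sizes, and the train/test split standing in for sampled versus non-sampled areas are all invented for illustration.

```python
# Hedged sketch (synthetic data, generic covariates) of the tree-based approach
# described above: fit a random forest to predict population density in areas
# with no direct samples from area-level covariates.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 500
# Hypothetical area-level covariates: night-time lights, built-up fraction, road density.
X = rng.uniform(0, 1, size=(n, 3))
density = 2000 * X[:, 0] + 1500 * X[:, 1] ** 2 + 300 * X[:, 2] + rng.normal(0, 100, n)

train, test = slice(0, 400), slice(400, n)   # "sampled" vs "non-sampled" areas
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[train], density[train])
pred = model.predict(X[test])
print("RMSE:", round(float(np.sqrt(np.mean((pred - density[test]) ** 2))), 1))
```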
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang Xiangyang; Yang Yi; Tang Shaojie
Purpose: Differential phase contrast CT (DPC-CT) is emerging as a new technology to improve the contrast sensitivity of conventional attenuation-based CT. The noise equivalent quanta as a function of spatial frequency, i.e., the spectrum of noise equivalent quanta NEQ(k), is a decisive indicator of the signal and noise transfer properties of an imaging system. In this work, we derive the functional form of NEQ(k) in DPC-CT. Via system modeling, analysis, and computer simulation, we evaluate and verify the derived NEQ(k) and compare it with that of the conventional attenuation-based CT. Methods: The DPC-CT is implemented with an x-ray tube and gratings. The x-ray propagation and data acquisition are modeled and simulated through Fresnel and Fourier analysis. A monochromatic x-ray source (30 keV) is assumed to exclude any system imperfection and interference caused by scatter and beam hardening, while a 360° full scan is carried out in data acquisition to avoid any weighting scheme that may disrupt noise randomness. Adequate upsampling is implemented to simulate the x-ray beam's propagation through the gratings G1 and G2 with periods 8 and 4 μm, respectively, while the inter-grating distance is 193.6 mm (1/16 of the Talbot distance). The dimensions of the detector cell for data acquisition are 32 × 32, 64 × 64, 96 × 96, and 128 × 128 μm², respectively, corresponding to a 40.96 × 40.96 mm² field of view in data acquisition. An air phantom is employed to obtain the noise power spectrum NPS(k), spectrum of noise equivalent quanta NEQ(k), and detective quantum efficiency DQE(k). A cylindrical water phantom of 5.1 mm diameter and complex refraction coefficient n = 1 - δ + iβ = 1 - 2.5604 × 10⁻⁷ + i·1.2353 × 10⁻¹⁰ is placed in air to measure the edge transfer function, line spread function, and then the modulation transfer function MTF(k), of both DPC-CT and the conventional attenuation-based CT. The x-ray flux is set at 5 × 10⁶ photons/cm² per projection and follows the Poisson distribution, which is consistent with that of a micro-CT for preclinical applications. Approximately 360 regions, each a 128 × 128 matrix, are used to calculate the NPS(k) via 2D Fourier transform, in which adequate zero padding is carried out to avoid aliasing in noise. Results: The preliminary data show that the DPC-CT possesses a signal transfer property [MTF(k)] comparable to that of the conventional attenuation-based CT. Meanwhile, though there is a radical difference in their noise power spectra NPS(k) (scaling as 1/|k| in DPC-CT but as |k| in the conventional attenuation-based CT), the NEQ(k) and DQE(k) of DPC-CT and the conventional attenuation-based CT are in principle identical. Conclusions: Under the framework of an ideal observer study, the joint signal and noise transfer property NEQ(k) and detective quantum efficiency DQE(k) of DPC-CT are essentially the same as those of the conventional attenuation-based CT. The findings reported in this paper may provide insightful guidelines on the research, development, and performance optimization of DPC-CT for extensive preclinical and clinical applications in the future.
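For reference, in the usual linear-systems convention that this abstract appears to follow (notation mine), the quantities compared above are related by

$$ \mathrm{NEQ}(k) \;=\; \frac{\bar{S}^{2}\,\mathrm{MTF}^{2}(k)}{\mathrm{NPS}(k)}, \qquad \mathrm{DQE}(k) \;=\; \frac{\mathrm{NEQ}(k)}{\bar{q}}, $$

where S̄ is the large-area signal and q̄ the incident photon fluence. This makes the stated result plausible: DPC-CT and attenuation CT can share essentially the same NEQ(k) even though their NPS(k) shapes (1/|k| versus |k|) differ, provided the signal transfer compensates.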
NASA Astrophysics Data System (ADS)
Laqua, Henryk; Kussmann, Jörg; Ochsenfeld, Christian
2018-03-01
The correct description of multi-reference electronic ground states within Kohn-Sham density functional theory (DFT) requires an ensemble-state representation, employing fractionally occupied orbitals. However, the use of fractional orbital occupation leads to non-normalized exact-exchange holes, resulting in large fractional-spin errors for conventional approximate density functionals. In this communication, we present a simple approach to directly include the exact-exchange-hole normalization into DFT. Compared to conventional functionals, our model strongly improves the description of multi-reference systems, while preserving the accuracy in the single-reference case. We analyze the performance of our proposed method using the example of spin-averaged atoms and spin-restricted bond dissociation energy surfaces.
Study on transfer optimization of urban rail transit and conventional public transport
NASA Astrophysics Data System (ADS)
Wang, Jie; Sun, Quan Xin; Mao, Bao Hua
2018-04-01
This paper mainly studies the timing optimization of the feeder connection between rail transit and conventional bus service at a shopping center. To connect rail transit effectively and optimize the coordination between the two modes, the departure intervals must be optimized, passenger transfer times shortened, and the service level of public transit improved. An optimization model for the timing of connecting bus departures is established with the objective of minimizing the total passenger waiting time and the number of bus departures. The model includes constraints such as transfer time, load factor, and the spacing of the public transport network, and is solved using genetic algorithms. A toy sketch of the kind of objective involved is given below.
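The headways, costs, demand figures, and the grid search used here in place of the paper's genetic algorithm are all illustrative assumptions, not the paper's model.

```python
# Toy sketch of the kind of timetable optimization described above: choose a
# feeder-bus headway that minimizes total passenger waiting time plus an
# operating cost per departure, subject to a load-factor ceiling.
import numpy as np

demand = 600.0              # transferring passengers per hour (assumed)
bus_capacity = 60.0         # passengers per bus (assumed)
max_load_factor = 0.9       # ceiling on average load factor
cost_per_departure = 50.0   # operator cost weight per bus run (assumed)
value_of_time = 0.5         # cost weight per passenger-minute of waiting (assumed)

best = None
for headway in np.arange(2.0, 20.0, 0.5):            # candidate headways (min)
    departures = 60.0 / headway                       # buses per hour
    load_factor = demand / (departures * bus_capacity)
    if load_factor > max_load_factor:                 # infeasible: buses too full
        continue
    waiting = demand * headway / 2.0                  # random arrivals: mean wait = H/2
    total_cost = value_of_time * waiting + cost_per_departure * departures
    if best is None or total_cost < best[1]:
        best = (headway, total_cost)

print(f"chosen headway: {best[0]:.1f} min, cost: {best[1]:.1f}")
```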
XFEM-based modeling of successive resections for preoperative image updating
NASA Astrophysics Data System (ADS)
Vigneron, Lara M.; Robe, Pierre A.; Warfield, Simon K.; Verly, Jacques G.
2006-03-01
We present a new method for modeling organ deformations due to successive resections. We use a biomechanical model of the organ and compute its volume-displacement solution based on the eXtended Finite Element Method (XFEM). The key feature of XFEM is that material discontinuities induced by every new resection can be handled without remeshing or mesh adaptation, as would be required by the conventional Finite Element Method (FEM). We focus on the application of preoperative image updating for image-guided surgery. Proof-of-concept demonstrations are shown for synthetic and real data in the context of neurosurgery.
Security analysis of quadratic phase based cryptography
NASA Astrophysics Data System (ADS)
Muniraj, Inbarasan; Guo, Changliang; Malallah, Ra'ed; Healy, John J.; Sheridan, John T.
2016-09-01
The linear canonical transform (LCT) is essential in modeling coherent light field propagation through first-order optical systems. Recently, a generic optical system, known as a Quadratic Phase Encoding System (QPES), for encrypting a two-dimensional (2D) image has been reported, in which the individual LCT parameters, together with two phase keys, serve as keys of the cryptosystem. However, it is important that such encryption systems also satisfy certain dynamic security properties. Therefore, in this work, we examine cryptographic evaluation methods, such as the Avalanche Criterion and Bit Independence, which indicate the degree of security of the cryptographic algorithms applied to QPES. We compare our simulation results with the conventional Fourier and Fresnel transform based DRPE systems. The results show that the LCT-based DRPE has better avalanche and bit independence characteristics than the conventional Fourier- and Fresnel-based encryption systems.
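A generic sketch of the avalanche test itself follows; SHA-256 stands in for the optical encryption operator purely so the example runs, since the LCT/DRPE numerical pipeline is not reproduced here.

```python
# Generic avalanche test: flip a single input bit and count how many output
# bits change. An ideal cipher changes about half of the output bits.
import hashlib

def bits(data: bytes) -> str:
    return "".join(f"{b:08b}" for b in data)

def avalanche(message: bytes, bit_index: int) -> float:
    flipped = bytearray(message)
    flipped[bit_index // 8] ^= 1 << (bit_index % 8)   # flip one input bit
    out1 = bits(hashlib.sha256(message).digest())
    out2 = bits(hashlib.sha256(bytes(flipped)).digest())
    changed = sum(a != b for a, b in zip(out1, out2))
    return changed / len(out1)                        # ideal value is ~0.5

print(avalanche(b"plaintext image block", bit_index=3))
```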
A Lagrangian mixing frequency model for transported PDF modeling
NASA Astrophysics Data System (ADS)
Turkeri, Hasret; Zhao, Xinyu
2017-11-01
In this study, a Lagrangian mixing frequency model is proposed for molecular mixing models within the framework of transported probability density function (PDF) methods. The model is based on the dissipations of mixture fraction and progress variables obtained from Lagrangian particles in PDF methods. The new model is proposed as a remedy to the difficulty in choosing the optimal model constant parameters when using conventional mixing frequency models. The model is implemented in combination with the Interaction by exchange with the mean (IEM) mixing model. The performance of the new model is examined by performing simulations of Sandia Flame D and the turbulent premixed flame from the Cambridge stratified flame series. The simulations are performed using the pdfFOAM solver which is a LES/PDF solver developed entirely in OpenFOAM. A 16-species reduced mechanism is used to represent methane/air combustion, and in situ adaptive tabulation is employed to accelerate the finite-rate chemistry calculations. The results are compared with experimental measurements as well as with the results obtained using conventional mixing frequency models. Dynamic mixing frequencies are predicted using the new model without solving additional transport equations, and good agreement with experimental data is observed.
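As context for the terms above, the standard IEM mixing model and one plausible Lagrangian estimate of the mixing frequency from the mixture-fraction dissipation can be written as (notation mine; the constant and the exact definition used by the authors may differ)

$$ \frac{d\phi^{*}}{dt} \;=\; -\tfrac{1}{2}\,C_{\phi}\,\omega\,\bigl(\phi^{*}-\widetilde{\phi}\bigr), \qquad \omega \;\approx\; \frac{\widetilde{\chi}_{Z}}{\widetilde{Z''^{2}}}, $$

where φ* is a particle's composition, φ̃ the local mean, χ̃_Z the mixture-fraction dissipation rate, and Z̃''² the mixture-fraction variance; the proposal above replaces a fixed model constant for ω with dissipations evaluated from the Lagrangian particles themselves.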
ERIC Educational Resources Information Center
Park, Sung Youl; Kim, Soo-Wook; Cha, Seung-Bong; Nam, Min-Woo
2014-01-01
This study investigated the effectiveness of e-learning by comparing the learning outcomes in conventional face-to-face lectures and e-learning methods. Two video-based e-learning contents were developed based on the rapid prototyping model and loaded onto the learning management system (LMS), which was available at http://www.greenehrd.com.…
Puncturing the myths of acupuncture.
Mallory, Molly J; Do, Alexander; Bublitz, Sara E; Veleber, Susan J; Bauer, Brent A; Bhagra, Anjali
2016-09-01
Acupuncture is a widely practiced system of medicine that has been in place for thousands of years. Consumer interest and use of acupuncture are becoming increasingly popular in the United States, as it is used to treat a multitude of symptoms and disease processes as well as to maintain health and prevent illness. A growing body of evidence increasingly validates the practice of acupuncture. Further developing scientific data will play an important role in the future of acupuncture and other complementary and alternative medicines in public health. Acupuncture is commonly used concurrently with conventional medicine. Although acupuncture is embraced by consumers and medical professionals, misconceptions abound. We have explored and dispelled ten misconceptions common to the practice of acupuncture, utilizing an evidence-based approach. As the trend of merging conventional medical care with acupuncture treatment grows, it is important to develop a conceptual model of integrative medicine. Using a scientific evidence approach will create a structure from which to begin and grow confidence among conventional medical providers. Acupuncture is a safe and effective modality when performed properly by trained professionals. Educating both the consumer and medical community is important to enable appropriate and evidence-based applications of acupuncture and integration with conventional medicine for high-quality patient care.
A High Performance Impedance-based Platform for Evaporation Rate Detection.
Chou, Wei-Lung; Lee, Pee-Yew; Chen, Cheng-You; Lin, Yu-Hsin; Lin, Yung-Sheng
2016-10-17
This paper describes the method of a novel impedance-based platform for the detection of the evaporation rate. The model compound hyaluronic acid was employed here for demonstration purposes. Multiple evaporation tests on the model compound as a humectant with various concentrations in solutions were conducted for comparison purposes. A conventional weight loss approach is known as the most straightforward, but time-consuming, measurement technique for evaporation rate detection. Yet, a clear disadvantage is that a large volume of sample is required and multiple sample tests cannot be conducted at the same time. For the first time in literature, an electrical impedance sensing chip is successfully applied to a real-time evaporation investigation in a time sharing, continuous and automatic manner. Moreover, as little as 0.5 ml of test samples is required in this impedance-based apparatus, and a large impedance variation is demonstrated among various dilute solutions. The proposed high-sensitivity and fast-response impedance sensing system is found to outperform a conventional weight loss approach in terms of evaporation rate detection.
K, Jalal Deen; R, Ganesan; A, Merline
2017-01-01
Objective: Accurate segmentation of abnormal and healthy lungs is crucial for reliable computer-aided disease diagnostics. Methods: For this purpose a stack of chest CT scans is processed. In this paper, novel methods are proposed for segmentation of multimodal grayscale lung CT scans. In the conventional methods, the required regions of interest (ROIs) are identified using a Markov–Gibbs Random Field (MGRF) model. Result: The results of the proposed FCM- and CNN-based process are compared with the results obtained from the conventional method using the MGRF model. The results illustrate that the proposed method is able to segment various kinds of complex multimodal medical images precisely. Conclusion: In this paper, to obtain an exact boundary of the regions, every empirical dispersion of the image is computed by Fuzzy C-Means clustering segmentation. A classification process based on a Convolutional Neural Network (CNN) classifier is used to distinguish normal tissue from abnormal tissue. The experimental evaluation is done using the Interstitial Lung Disease (ILD) database. PMID:28749127
End-to-End ASR-Free Keyword Search From Speech
NASA Astrophysics Data System (ADS)
Audhkhasi, Kartik; Rosenberg, Andrew; Sethy, Abhinav; Ramabhadran, Bhuvana; Kingsbury, Brian
2017-12-01
End-to-end (E2E) systems have achieved competitive results compared to conventional hybrid hidden Markov model (HMM)-deep neural network based automatic speech recognition (ASR) systems. Such E2E systems are attractive due to the lack of dependence on alignments between input acoustic and output grapheme or HMM state sequence during training. This paper explores the design of an ASR-free end-to-end system for text query-based keyword search (KWS) from speech trained with minimal supervision. Our E2E KWS system consists of three sub-systems. The first sub-system is a recurrent neural network (RNN)-based acoustic auto-encoder trained to reconstruct the audio through a finite-dimensional representation. The second sub-system is a character-level RNN language model using embeddings learned from a convolutional neural network. Since the acoustic and text query embeddings occupy different representation spaces, they are input to a third feed-forward neural network that predicts whether the query occurs in the acoustic utterance or not. This E2E ASR-free KWS system performs respectably despite lacking a conventional ASR system and trains much faster.
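A minimal PyTorch sketch of the overall architecture described above follows; the dimensions, the plain GRU encoders used here in place of the paper's acoustic auto-encoder and CNN-fed character language model, and the absence of any training loop are all simplifications rather than the authors' exact setup.

```python
# Simplified sketch: an acoustic encoder, a character-level query encoder, and a
# feed-forward network that predicts whether the query occurs in the utterance.
import torch
import torch.nn as nn

class ASRFreeKWS(nn.Module):
    def __init__(self, n_mel=40, n_chars=30, acoustic_dim=128, query_dim=64):
        super().__init__()
        self.acoustic_enc = nn.GRU(n_mel, acoustic_dim, batch_first=True)
        self.char_emb = nn.Embedding(n_chars, 32)
        self.query_enc = nn.GRU(32, query_dim, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(acoustic_dim + query_dim, 128), nn.ReLU(),
            nn.Linear(128, 1))

    def forward(self, features, query_ids):
        _, h_acoustic = self.acoustic_enc(features)        # fixed-size utterance embedding
        _, h_query = self.query_enc(self.char_emb(query_ids))
        joint = torch.cat([h_acoustic[-1], h_query[-1]], dim=-1)
        return torch.sigmoid(self.classifier(joint))       # P(query occurs in utterance)

model = ASRFreeKWS()
score = model(torch.randn(2, 300, 40), torch.randint(0, 30, (2, 12)))
print(score.shape)   # torch.Size([2, 1])
```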
Bayesian Image Segmentations by Potts Prior and Loopy Belief Propagation
NASA Astrophysics Data System (ADS)
Tanaka, Kazuyuki; Kataoka, Shun; Yasuda, Muneki; Waizumi, Yuji; Hsu, Chiou-Ting
2014-12-01
This paper presents a Bayesian image segmentation model based on a Potts prior and loopy belief propagation. The proposed Bayesian model involves several terms, including the pairwise interactions of Potts models, and the mean vectors and covariance matrices of Gaussian distributions in color image modeling. These terms are often referred to as hyperparameters in statistical machine learning theory. In order to determine these hyperparameters, we propose a new scheme for hyperparameter estimation based on conditional maximization of entropy in the Potts prior. The algorithm is derived using loopy belief propagation. In addition, we compare our conditional maximum entropy framework with the conventional maximum likelihood framework, and also clarify how the first-order phase transitions in loopy belief propagation for Potts models influence our hyperparameter estimation procedures.
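A minimal sketch of sum-product loopy belief propagation under a Potts prior, of the kind referred to above, is shown below. The coupling value, grid size and random likelihood terms are placeholders, and the conditional-maximum-entropy hyperparameter estimation itself is not reproduced here.

```python
import numpy as np

def potts_loopy_bp(unary, beta=1.0, n_iter=30):
    """
    Minimal sum-product loopy BP for a Potts prior on a 4-connected grid.
    unary: (H, W, q) per-pixel likelihoods (e.g. Gaussian colour terms).
    beta:  Potts coupling (a hyperparameter of the prior).
    Returns approximate per-pixel marginals (beliefs), shape (H, W, q).
    """
    H, W, q = unary.shape
    psi = np.exp(beta * np.eye(q))                   # pairwise Potts potential
    # msgs[d, i, j, :] = message arriving at pixel (i, j) from direction d
    # d: 0 = from left, 1 = from right, 2 = from above, 3 = from below
    msgs = np.ones((4, H, W, q)) / q
    for _ in range(n_iter):
        bel = unary * msgs.prod(axis=0)              # pseudo-beliefs at every pixel
        new = np.ones_like(msgs)
        # a message sent rightwards lands as "from left" at the right neighbour
        new[0, :, 1:] = (bel[:, :-1] / msgs[1, :, :-1]) @ psi
        new[1, :, :-1] = (bel[:, 1:] / msgs[0, :, 1:]) @ psi
        new[2, 1:, :] = (bel[:-1, :] / msgs[3, :-1, :]) @ psi
        new[3, :-1, :] = (bel[1:, :] / msgs[2, 1:, :]) @ psi
        msgs = new / new.sum(axis=-1, keepdims=True)
    bel = unary * msgs.prod(axis=0)
    return bel / bel.sum(axis=-1, keepdims=True)

# Hypothetical usage on random likelihoods for a 3-label segmentation
beliefs = potts_loopy_bp(np.random.rand(32, 32, 3), beta=0.8)
labels = beliefs.argmax(axis=-1)
```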
10 CFR 429.23 - Conventional cooking tops, conventional ovens, microwave ovens.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 3 2013-01-01 2013-01-01 false Conventional cooking tops, conventional ovens, microwave... Conventional cooking tops, conventional ovens, microwave ovens. (a) Sampling plan for selection of units for... and microwave ovens; and (2) For each basic model of conventional cooking tops, conventional ovens and...
10 CFR 429.23 - Conventional cooking tops, conventional ovens, microwave ovens.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 3 2014-01-01 2014-01-01 false Conventional cooking tops, conventional ovens, microwave... Conventional cooking tops, conventional ovens, microwave ovens. (a) Sampling plan for selection of units for... and microwave ovens; and (2) For each basic model of conventional cooking tops, conventional ovens and...
10 CFR 429.23 - Conventional cooking tops, conventional ovens, microwave ovens.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 3 2012-01-01 2012-01-01 false Conventional cooking tops, conventional ovens, microwave... Conventional cooking tops, conventional ovens, microwave ovens. (a) Sampling plan for selection of units for... and microwave ovens; and (2) For each basic model of conventional cooking tops, conventional ovens and...
Family history and risk of breast cancer: an analysis accounting for family structure.
Brewer, Hannah R; Jones, Michael E; Schoemaker, Minouk J; Ashworth, Alan; Swerdlow, Anthony J
2017-08-01
Family history is an important risk factor for breast cancer incidence, but the parameters conventionally used to categorize it are based solely on numbers and/or ages of breast cancer cases in the family and take no account of the size and age-structure of the woman's family. Using data from the Generations Study, a cohort of over 113,000 women from the general UK population, we analyzed breast cancer risk in relation to first-degree family history using a family history score (FHS) that takes account of the expected number of family cases based on the family's age-structure and national cancer incidence rates. Breast cancer risk increased significantly (P trend < 0.0001) with greater FHS. There was a 3.5-fold (95% CI 2.56-4.79) range of risk between the lowest and highest FHS groups, whereas women who had two or more relatives with breast cancer, the strongest conventional familial risk factor, had a 2.5-fold (95% CI 1.83-3.47) increase in risk. Using likelihood ratio tests, the best model for determining breast cancer risk due to family history was that combining FHS and age of relative at diagnosis. A family history score based on expected as well as observed breast cancers in a family can give greater risk discrimination on breast cancer incidence than conventional parameters based solely on cases in affected relatives. Our modeling suggests that a yet stronger predictor of risk might be a combination of this score and age at diagnosis in relatives.
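The description above suggests a score of observed versus expected familial cases; a hedged sketch of such a computation is given below. The ratio form, the age bands and the incidence values are all illustrative assumptions, and the published FHS definition may differ in detail.

```python
import numpy as np

# Hypothetical 5-year age-band incidence rates (cases per woman-year); placeholder
# values only, not real UK rates.
incidence = {(30, 35): 0.0003, (35, 40): 0.0006, (40, 45): 0.0012,
             (45, 50): 0.0019, (50, 55): 0.0024, (55, 60): 0.0028}

def expected_cases(attained_ages):
    """Expected breast-cancer cases among first-degree relatives, summing the
    age-band incidence over each relative's years lived within that band."""
    total = 0.0
    for age in attained_ages:
        for (lo, hi), rate in incidence.items():
            years = max(0.0, min(age, hi) - lo)
            total += rate * years
    return total

def family_history_score(observed_cases, attained_ages):
    # Assumed form: observed relative to expected cases in the family.
    return observed_cases / max(expected_cases(attained_ages), 1e-9)

# Example: two affected relatives in a family of four female first-degree relatives
print(family_history_score(2, attained_ages=[62, 58, 55, 40]))
```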
Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A.
2016-01-01
Abstract. The use of a channelization mechanism on model observers not only makes mimicking human visual behavior possible, but also reduces the amount of image data needed to estimate the model observer parameters. The channelized Hotelling observer (CHO) and channelized scanning linear observer (CSLO) have recently been used to assess CT image quality for detection tasks and combined detection/estimation tasks, respectively. Although the use of channels substantially reduces the amount of data required to compute image quality, the number of scans required for CT imaging is still not practical for routine use. It is our desire to further reduce the number of scans required to make CHO or CSLO an image quality tool for routine and frequent system validations and evaluations. This work explores different data-reduction schemes and designs an approach that requires only a few CT scans. Three different kinds of approaches are included in this study: a conventional CHO/CSLO technique with a large sample size, a conventional CHO/CSLO technique with fewer samples, and an approach that we will show requires fewer samples to mimic conventional performance with a large sample size. The mean value and standard deviation of areas under ROC/EROC curve were estimated using the well-validated shuffle approach. The results indicate that an 80% data reduction can be achieved without loss of accuracy. This substantial data reduction is a step toward a practical tool for routine-task-based QA/QC CT system assessment. PMID:27493982
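For readers unfamiliar with the CHO, the following sketch shows the basic channelized Hotelling computation (channel outputs, Hotelling template, non-parametric AUC). The random channel matrix and toy images are placeholders; the CSLO, the EROC analysis and the shuffle estimator used in the study are not reproduced.

```python
import numpy as np

def cho_auc(signal_imgs, noise_imgs, channels):
    """
    Minimal channelized Hotelling observer.
    signal_imgs, noise_imgs: (n_images, n_pixels) flattened ROIs with/without lesion.
    channels: (n_pixels, n_channels) channel matrix (e.g. Gabor or Laguerre-Gauss).
    """
    vs = signal_imgs @ channels                      # channel outputs, signal present
    vn = noise_imgs @ channels                       # channel outputs, signal absent
    s_mean, n_mean = vs.mean(axis=0), vn.mean(axis=0)
    cov = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    w = np.linalg.solve(cov, s_mean - n_mean)        # Hotelling template in channel space
    ts, tn = vs @ w, vn @ w                          # decision variables
    # non-parametric AUC: probability a signal case outranks a noise case
    return (ts[:, None] > tn[None, :]).mean()

# Hypothetical usage with random data standing in for CT ROIs
rng = np.random.default_rng(0)
channels = rng.normal(size=(64, 10))                 # stand-in for real channels
noise_rois = rng.normal(size=(200, 64))
signal_rois = rng.normal(size=(200, 64)) + 0.3       # crude additive "lesion"
print(cho_auc(signal_rois, noise_rois, channels))
```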
General framework for dynamic large deformation contact problems based on phantom-node X-FEM
NASA Astrophysics Data System (ADS)
Broumand, P.; Khoei, A. R.
2018-04-01
This paper presents a general framework for modeling dynamic large deformation contact-impact problems based on the phantom-node extended finite element method. The large sliding penalty contact formulation is presented based on a master-slave approach which is implemented within the phantom-node X-FEM and an explicit central difference scheme is used to model the inertial effects. The method is compared with conventional contact X-FEM; advantages, limitations and implementational aspects are also addressed. Several numerical examples are presented to show the robustness and accuracy of the proposed method.
Del Prado, A; Misselbrook, T; Chadwick, D; Hopkins, A; Dewhurst, R J; Davison, P; Butler, A; Schröder, J; Scholefield, D
2011-09-01
Multiple demands are placed on farming systems today. Society, national legislation and market forces seek what could be seen as conflicting outcomes from our agricultural systems, e.g. food quality, affordable prices, a healthy environment, consideration of animal welfare, biodiversity, etc. Many of these demands, or desirable outcomes, are interrelated, so reaching one goal may often compromise another and, importantly, pose a risk to the economic viability of the farm. SIMS(DAIRY), a farm-scale model, was used to explore this complexity for dairy farm systems. SIMS(DAIRY) integrates existing approaches to simulate the effect of interactions between farm management, climate and soil characteristics on losses of nitrogen, phosphorus and carbon. The effects on farm profitability and attributes of biodiversity, milk quality, soil quality and animal welfare are also included. SIMS(DAIRY) can also be used to optimise fertiliser N. In this paper we discuss some limitations and strengths of using SIMS(DAIRY) compared to other modelling approaches and propose some potential improvements. Using the model, we evaluated the sustainability of organic dairy systems compared with conventional dairy farms under non-optimised and optimised fertiliser N use. Model outputs showed, for example, that organic dairy systems based on grass-clover swards and maize silage resulted in much smaller total GHG emissions per l of milk and slightly smaller losses from NO(3) leaching and NO(x) emissions per l of milk compared with the grassland/maize-based conventional systems. These differences arose essentially because the conventional systems rely on indirect energy use for 'fixing' N, compared with biological N fixation in the organic systems. SIMS(DAIRY) runs also showed some other potential benefits of the organic systems compared with conventional systems in terms of financial performance and soil quality and biodiversity scores. Optimisation of fertiliser N timings and rates showed considerable scope to reduce GHG emissions per l of milk as well. Copyright © 2011 Elsevier B.V. All rights reserved.
Betts, Keith A; Griffith, Jenny; Ganguli, Arijit; Li, Nanxin; Douglas, Kevin; Wu, Eric Q
2016-05-01
To assess the economic outcomes and treatment patterns among patients with rheumatoid arthritis (RA) who used 1, 2, or 3 or more conventional synthetic disease-modifying antirheumatic drugs (DMARDs) before receiving a biologic therapy. Adult patients with ≥2 RA diagnoses (International Classification of Diseases, Ninth Revision, Clinical Modification [ICD-9-CM] codes 714.xx) on different dates, ≥1 claim for a conventional synthetic DMARD, and ≥1 claim for a biologic DMARD were identified from a large commercial claims database. The initiation date of the first biologic DMARD was defined as the index date. Based on the number of distinct conventional synthetic DMARDs initiated between the first RA diagnosis and the index date, patients were classified into 3 cohorts: those who used 1, 2, or 3 or more conventional synthetic DMARDs. Baseline characteristics were measured 6 months preindex date and compared between the 3 cohorts. All-cause health care costs (in 2014 US$) were compared during the follow-up period (12 months postbiologic initiation) using multivariable gamma models adjusting for baseline characteristics. Time to discontinuation of the index biologic DMARD and time to switching to a new DMARD were compared using multivariable Cox proportional hazards models. The 1, 2, and 3 or more conventional synthetic DMARD cohorts included 6215; 3227; and 976 patients, respectively. At baseline, patients in the 3 or more conventional synthetic DMARD cohort had the least severe RA, as indicated by the lowest claims-based index for RA severity score (1 vs 2 vs 3 or more = 6.1 vs 5.9 vs 5.8). During the study period, there was a significant association between number of conventional synthetic DMARDs and higher all-cause total health care costs (adjusted mean difference, 1 vs 2: $772; P < 0.001; 2 vs 3 or more: $2390; P < 0.001). The all-cause medical and pharmacy costs were also significantly higher with the increasing number of conventional synthetic DMARDs. Patients who cycled more conventional synthetic DMARDs were also more likely to switch treatment after biologic initiation (1 vs 2: adjusted hazard ratio = 0.89; P = 0.005; 2 vs 3 or more: adjusted hazard ratio = 0.89; P = 0.087). There were no differences in index biologic discontinuation between the 3 cohorts. Patients with RA who cycled more conventional synthetic DMARDs had increased economic burden in the 12 months following biologic initiation and were more likely to switch therapy. These results highlight the importance of timely switching to biologic DMARDs for the treatment of RA. Copyright © 2016 Elsevier HS Journals, Inc. All rights reserved.
Haworth, Annette; Mears, Christopher; Betts, John M; Reynolds, Hayley M; Tack, Guido; Leo, Kevin; Williams, Scott; Ebert, Martin A
2016-01-07
Treatment plans for ten patients, initially treated with a conventional approach to low dose-rate brachytherapy (LDR, 145 Gy to entire prostate), were compared with plans for the same patients created with an inverse-optimisation planning process utilising a biologically-based objective. The 'biological optimisation' considered a non-uniform distribution of tumour cell density through the prostate based on known and expected locations of the tumour. Using dose planning-objectives derived from our previous biological-model validation study, the volume of the urethra receiving 125% of the conventional prescription (145 Gy) was reduced from a median value of 64% to less than 8% whilst maintaining high values of TCP. On average, the number of planned seeds was reduced from 85 to less than 75. The robustness of plans to random seed displacements needs to be carefully considered when using contemporary seed placement techniques. We conclude that an inverse planning approach to LDR treatments, based on a biological objective, has the potential to maintain high rates of tumour control whilst minimising dose to healthy tissue. In future, the radiobiological model will be informed using multi-parametric MRI to provide a personalised medicine approach.
4D-PET reconstruction using a spline-residue model with spatial and temporal roughness penalties
NASA Astrophysics Data System (ADS)
Ralli, George P.; Chappell, Michael A.; McGowan, Daniel R.; Sharma, Ricky A.; Higgins, Geoff S.; Fenwick, John D.
2018-05-01
4D reconstruction of dynamic positron emission tomography (dPET) data can improve the signal-to-noise ratio in reconstructed image sequences by fitting smooth temporal functions to the voxel time-activity-curves (TACs) during the reconstruction, though the optimal choice of function remains an open question. We propose a spline-residue model, which describes TACs as weighted sums of convolutions of the arterial input function with cubic B-spline basis functions. Convolution with the input function constrains the spline-residue model at early time-points, potentially enhancing noise suppression in early time-frames, while still allowing a wide range of TAC descriptions over the entire imaged time-course, thus limiting bias. Spline-residue based 4D-reconstruction is compared to that of a conventional (non-4D) maximum a posteriori (MAP) algorithm, and to 4D-reconstructions based on adaptive-knot cubic B-splines, the spectral model and an irreversible two-tissue compartment (‘2C3K’) model. 4D reconstructions were carried out using a nested-MAP algorithm including spatial and temporal roughness penalties. The algorithms were tested using Monte-Carlo simulated scanner data, generated for a digital thoracic phantom with uptake kinetics based on a dynamic [18F]-fluoromisonidazole scan of a non-small cell lung cancer patient. For every algorithm, parametric maps were calculated by fitting each voxel TAC within a sub-region of the reconstructed images with the 2C3K model. Compared to conventional MAP reconstruction, spline-residue-based 4D reconstruction achieved >50% improvements, in terms of the bias and noise measures used, for five of the eight combinations of the four kinetic parameters for which parametric maps were created, and produced better results for 5/8 combinations than any of the other reconstruction algorithms studied, while spectral model-based 4D reconstruction produced the best results for 2/8. 2C3K model-based 4D reconstruction generated the most biased parametric maps. Inclusion of a temporal roughness penalty function improved the performance of 4D reconstruction based on the cubic B-spline, spectral and spline-residue models.
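A rough numerical sketch of the spline-residue TAC model described above is given below: the residue is a weighted sum of cubic B-spline basis functions and the TAC is its convolution with the arterial input function. The knot vector, weights and toy AIF are illustrative assumptions; the reconstruction algorithm itself is not shown.

```python
import numpy as np
from scipy.interpolate import BSpline

def spline_residue_tac(t, aif, knots, weights):
    """
    Spline-residue TAC model (a sketch of the general form described above).
    t:       uniformly spaced time grid (same grid as aif)
    aif:     sampled arterial input function on t
    knots:   knot vector for the cubic B-splines
    weights: one weight per B-spline basis function
    """
    dt = t[1] - t[0]
    n_basis = len(knots) - 4                         # cubic splines: degree 3
    residue = np.zeros_like(t)
    for j in range(n_basis):
        coeffs = np.zeros(n_basis)
        coeffs[j] = 1.0
        residue += weights[j] * BSpline(knots, coeffs, 3)(t)
    return np.convolve(aif, residue)[: len(t)] * dt  # causal convolution with the AIF

# Hypothetical usage with a toy AIF and arbitrary weights
t = np.linspace(0, 60, 241)
aif = t * np.exp(-t / 2.0)
knots = np.concatenate(([0, 0, 0], np.linspace(0, 60, 8), [60, 60, 60]))
tac = spline_residue_tac(t, aif, knots, weights=np.linspace(1, 0.2, len(knots) - 4))
```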
NASA Astrophysics Data System (ADS)
Murtazina, M. Sh; Avdeenko, T. V.
2018-05-01
The state of the art and progress in the application of semantic technologies to scientific and research activity are analyzed. Even an elementary empirical comparison shows that semantic search engines are superior in all respects to conventional search technologies. However, semantic information technologies are insufficiently used in scientific and research activity in Russia. In the present paper, an approach to constructing an ontological model of a knowledge base is proposed. The ontological model is based on an upper-level ontology and the RDF mechanism for linking several domain ontologies. The ontological model is implemented in the Protégé environment.
Category-theoretic models of algebraic computer systems
NASA Astrophysics Data System (ADS)
Kovalyov, S. P.
2016-01-01
A computer system is said to be algebraic if it contains nodes that implement unconventional computation paradigms based on universal algebra. A category-based approach to modeling such systems that provides a theoretical basis for mapping tasks to these systems' architecture is proposed. The construction of algebraic models of general-purpose computations involving conditional statements and overflow control is formally described by a reflector in an appropriate category of algebras. It is proved that this reflector takes the modulo ring whose operations are implemented in the conventional arithmetic processors to the Łukasiewicz logic matrix. Enrichments of the set of ring operations that form bases in the Łukasiewicz logic matrix are found.
NASA Astrophysics Data System (ADS)
Jaiswal, Neeru; Kishtawal, C. M.; Bhomia, Swati
2018-04-01
The southwest (SW) monsoon season (June, July, August and September) is the major period of rainfall over the Indian region. The present study focuses on the development of a new multi-model ensemble approach based on a similarity criterion (SMME) for the prediction of SW monsoon rainfall in the extended range. This approach is based on the assumption that training on days with similar conditions may provide better forecasts than the sequential training used in conventional MME approaches. In this approach, the training dataset was selected by matching the present-day conditions to the archived dataset; the days with the most similar conditions were identified and used for training the model. The coefficients thus generated were used for the rainfall prediction. The precipitation forecasts from four general circulation models (GCMs), viz. European Centre for Medium-Range Weather Forecasts (ECMWF), United Kingdom Meteorological Office (UKMO), National Centre for Environment Prediction (NCEP) and China Meteorological Administration (CMA), have been used for developing the SMME forecasts. Forecasts for 1-5, 6-10 and 11-15 days were generated using the newly developed approach for each pentad of June-September during the years 2008-2013, and the skill of the model was analysed using verification scores, viz. equitable threat score (ETS), mean absolute error (MAE), Pearson's correlation coefficient and the Nash-Sutcliffe model efficiency index. Statistical analysis of the SMME forecasts shows superior forecast skill compared to the conventional MME and the individual models for all three forecast ranges, viz. 1-5, 6-10 and 11-15 days.
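A toy version of the similarity-based ensemble idea is sketched below: archived days most similar to the current conditions are selected, and a least-squares combination of the member forecasts is fitted on those analogs. The Euclidean similarity metric, the regression form and the synthetic data are assumptions for illustration only.

```python
import numpy as np

def smme_forecast(gcm_today, gcm_archive, obs_archive, n_analogs=50):
    """
    Sketch of a similarity-based multi-model ensemble (SMME).
    gcm_today:   (n_models,) forecasts from the member GCMs for the target period
    gcm_archive: (n_days, n_models) archived member forecasts
    obs_archive: (n_days,) observed rainfall matching the archive
    """
    dist = np.linalg.norm(gcm_archive - gcm_today, axis=1)
    analogs = np.argsort(dist)[:n_analogs]           # most similar archived days
    X = np.column_stack([np.ones(n_analogs), gcm_archive[analogs]])
    coef, *_ = np.linalg.lstsq(X, obs_archive[analogs], rcond=None)
    return coef @ np.concatenate(([1.0], gcm_today))

# Hypothetical usage with synthetic forecasts from 4 GCMs
rng = np.random.default_rng(1)
archive = rng.gamma(2.0, 3.0, size=(800, 4))
obs = archive.mean(axis=1) + rng.normal(0, 1, 800)
print(smme_forecast(archive[-1], archive[:-1], obs[:-1]))
```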
Neural correlates of conventional and harm/welfare-based moral decision-making.
White, Stuart F; Zhao, Hui; Leong, Kelly Kimiko; Smetana, Judith G; Nucci, Larry P; Blair, R James R
2017-12-01
The degree to which social norms are processed by a unitary system or dissociable systems remains debated. Much research on children's social-cognitive judgments has supported the distinction between "moral" (harm/welfare-based) and "conventional" norms. However, the extent to which these norms are processed by dissociable neural systems remains unclear. To address this issue, 23 healthy participants were scanned with functional magnetic resonance imaging (fMRI) while they rated the wrongness of harm/welfare-based and conventional transgressions and neutral vignettes. Activation significantly greater than the neutral vignette baseline was observed in regions implicated in decision-making regions including rostral/ventral medial frontal, anterior insula and dorsomedial frontal cortices when evaluating both harm/welfare-based and social-conventional transgressions. Greater activation when rating harm/welfare-based relative to social-conventional transgressions was seen through much of ACC and bilateral inferior frontal gyrus. Greater activation was observed in superior temporal gyrus, bilateral middle temporal gyrus, left PCC, and temporal-parietal junction when rating social-conventional transgressions relative to harm/welfare-based transgressions. These data suggest that decisions regarding the wrongness of actions, irrespective of whether they involve care/harm-based or conventional transgressions, recruit regions generally implicated in affect-based decision-making. However, there is neural differentiation between harm/welfare-based and conventional transgressions. This may reflect the particular importance of processing the intent of transgressors of conventional norms and perhaps the greater emotional content or salience of harm/welfare-based transgressions.
Alternative refrigerants and refrigeration cycles for domestic refrigerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sand, J.R.; Rice, C.L.; Vineyard, E.A.
1992-12-01
This project initially focused on using nonazeotropic refrigerant mixtures (NARMs) in a two-evaporator refrigerator-freezer design using two stages of liquid refrigerant subcooling. This concept was proposed and tested in 1975. The work suggested that the concept was 20% more efficient than the conventional one-evaporator refrigerator-freezer (RF) design. After considerable planning and system modeling based on using a NARM in a Lorenz-Meutzner (L-M) RF, the program scope was broadened to include investigation of a 'dual-loop' concept where energy savings result from exploiting the less stringent operating conditions needed to satisfy cooling of the fresh food section. A steady-state computer model (CYCLE-Z) capable of simulating conventional, dual-loop, and L-M refrigeration cycles was developed. This model was used to rank the performance of 20 ozone-safe NARMs in the L-M refrigeration cycle while key system parameters were systematically varied. The results indicated that the steady-state efficiency of the L-M design was up to 25% greater than that of a conventional cycle. This model was also used to calculate the performance of other pure refrigerants relative to that of dichlorodifluoromethane, R-12, in conventional and dual-loop RF designs. Projected efficiency gains for these cycles were more modest, ranging from 0 to 10%. Individual compressor calorimeter tests at nine combinations of evaporator and condenser temperatures usually used to map RF compressor performance were carried out with R-12 and two candidate L-M NARMs in several compressors. Several models of a commercially produced two-evaporator RF were obtained as test units. Two dual-loop RF designs were built and tested as part of this project.
NASA Astrophysics Data System (ADS)
Zarifakis, Marios; Coffey, William T.; Kalmykov, Yuri P.; Titov, Sergei V.
2017-06-01
An ever-increasing requirement exists to integrate greater amounts of electrical energy from renewable sources, especially from wind turbines and solar photovoltaic installations, and recent experience on the island of Ireland demonstrates that this requirement influences the behaviour of conventional generating stations. One observation is the change in the electrical power output of synchronous generators following a transient disturbance, in particular their oscillatory behaviour accompanied by similar oscillatory behaviour of the grid frequency, both becoming more pronounced with reducing grid inertia. This behaviour cannot be reproduced with existing mathematical models, indicating that understanding the behaviour of synchronous generators subjected to various disturbances, especially in a system with low inertia, requires a new modelling technique. Thus, two models of a generating station based on a double pendulum, described by a system of coupled nonlinear differential equations and suitable for stability analysis, are presented for the cases of infinite and finite grid inertia. Formal analytic solutions of the equations of motion are given and compared with numerical solutions. In particular, the new finite-grid model will allow one to identify limitations to the operational range of the synchronous generators used in conventional power generation and also to identify limits such as the allowable Rate of Change of Frequency, which is currently set to ± 0.5 Hz/s and is a major factor in describing the volatility of a grid, as well as identifying the total inertia required, which is currently provided by conventional power generators only, thus allowing one to maximise the usage of grid-connected non-synchronous generators, e.g., wind turbines and solar photovoltaic installations.
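As a hedged illustration of the double-pendulum idea, the sketch below integrates a generic two-mass swing-equation analogue of a generator coupled elastically to a finite-inertia grid. The equations, parameter values and the rate-of-change-of-frequency proxy are assumptions and do not reproduce the paper's model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative double-pendulum analogue of a generator coupled to a finite grid.
# All parameters and the damping/coupling form are assumptions, not the paper's.
J_gen, J_grid = 1.0, 5.0         # inertias of generator rotor and aggregate grid
K = 2.0                          # elastic (synchronising) coupling
D_gen, D_grid = 0.1, 0.3         # damping coefficients
P_mech, P_load = 0.8, 0.8        # mechanical input and grid load (per unit)

def swing(t, y):
    th1, w1, th2, w2 = y         # angles and angular speeds of the two "pendula"
    dw1 = (P_mech - D_gen * w1 - K * np.sin(th1 - th2)) / J_gen
    dw2 = (K * np.sin(th1 - th2) - P_load - D_grid * w2) / J_grid
    return [w1, dw1, w2, dw2]

# Transient disturbance: start with the generator angle displaced by 0.5 rad
sol = solve_ivp(swing, (0.0, 20.0), [0.5, 0.0, 0.0, 0.0], max_step=0.01)
rocof = np.gradient(sol.y[3], sol.t)   # crude rate-of-change-of-frequency proxy
```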
The Value of Circulating Biomarkers in Bicuspid Aortic Valve-Associated Aortopathy.
Naito, Shiho; Hillebrand, Mathias; Bernhardt, Alexander Martin Justus; Jagodzinski, Annika; Conradi, Lenard; Detter, Christian; Sydow, Karsten; Reichenspurner, Hermann; Kodolitsch, Yskert von; Girdauskas, Evaldas
2018-06-01
Traditional risk stratification model of bicuspid aortic valve (BAV) aortopathy is based on measurement of maximal cross-sectional aortic diameter, definition of proximal aortic shape, and aortic stiffness/elasticity parameters. However, conventional imaging-based criteria are unable to provide reliable information regarding the risk stratification in BAV aortopathy, especially considering the heterogeneous nature of BAV disease. Given those limitations of conventional imaging, there is a growing clinical interest to use circulating biomarkers in the screening process for thoracic aortic aneurysms as well as in the risk-assessment algorithms. We aimed to systematically review currently available biomarkers, which may be of value to predict the natural evolution of aortopathy in individuals with BAV. Georg Thieme Verlag KG Stuttgart · New York.
Tóth-Nagy, Csaba; Conley, John J; Jarrett, Ronald P; Clark, Nigel N
2006-07-01
With the advent of hybrid electric vehicles, computer-based vehicle simulation becomes more useful to the engineer and designer trying to optimize the complex combination of control strategy, power plant, drive train, vehicle, and driving conditions. With the desire to incorporate emissions as a design criterion, researchers at West Virginia University have developed artificial neural network (ANN) models for predicting emissions from heavy-duty vehicles. The ANN models were trained on engine and exhaust emissions data collected from transient dynamometer tests of heavy-duty diesel engines, and then used to predict emissions based on engine speed and torque data from simulated operation of a tractor truck and a hybrid electric bus. Simulated vehicle operation was performed with the ADVISOR software package. Predicted emissions (carbon dioxide [CO2] and oxides of nitrogen [NO(x)]) were then compared with actual emissions data collected from chassis dynamometer tests of similar vehicles. This paper expands on previous research to include different driving cycles for the hybrid electric bus and varying weights of the conventional truck. Results showed that different hybrid control strategies had a significant effect on engine behavior (and, thus, emissions) and may affect emissions during different driving cycles. The ANN models underpredicted emissions of CO2 and NO(x) in the case of a class-8 truck but were more accurate as the truck weight increased.
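A minimal sketch of the modelling idea, mapping engine speed and torque to emissions with a small neural network, is given below using scikit-learn. The synthetic data and network size are placeholders; the original work trained on transient dynamometer measurements and fed the networks with speed/torque traces from ADVISOR.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical stand-in for dynamometer data: inputs are engine speed and torque
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(600, 2100, 5000),        # engine speed (rpm)
                     rng.uniform(0, 1500, 5000)])         # torque (N*m)
power = X[:, 0] * X[:, 1] / 9549.0                        # rough kW conversion
y = np.column_stack([0.7 * power + rng.normal(0, 5, 5000),      # synthetic "CO2"
                     0.01 * power + rng.normal(0, 0.2, 5000)])  # synthetic "NOx"

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500))
model.fit(X[:4000], y[:4000])
print(model.score(X[4000:], y[4000:]))    # R^2 on held-out "dynamometer" records
```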
Design of novel dual-port tapered waveguide plasma apparatus by numerical analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, D.; Zhou, R.; Yang, X. Q., E-mail: yyxxqq-mail@163.com
Microwave plasma apparatus is often of particular interest due to its low cost, freedom from electrode contamination, and suitability for industrial production. However, conventional single-port waveguide apparatus suffers from unstable plasma and low electron density, owing to the low strength and non-uniformity of the microwave field. This study proposes a novel dual-port tapered waveguide plasma apparatus based on a power-combining technique to improve the strength and uniformity of the microwave field for plasma applications. A 3D model of microwave-induced plasma (field frequency 2.45 GHz) in argon at atmospheric pressure is presented. On the condition that the total input power is 500 W, simulations indicate that coherent power-combining maximizes the electric-field strength at 3.32 × 10^5 V/m and improves the uniformity of the distributed microwave field, improvements of 36.7% and 47.2%, respectively, compared to a conventional single-port waveguide apparatus. To study the optimum conditions for industrial application, a 2D argon fluid model based on the above structure is presented. It demonstrates that relatively uniform and high-density plasma is obtained at an argon flow rate of 200 ml/min. The contrasting results for electric-field distribution, electron density, and gas temperature are also valid and clearly prove the superiority of the coherent power-combining technique over the conventional one in the flow field.
Review and evaluation of models that produce trip tables from ground counts : interim report.
DOT National Transportation Integrated Search
1996-01-01
This research effort was motivated by the desires of planning agencies to seek alternative methods of deriving current or base year Origin-Destination (O-D) trip tables without adopting conventional O-D surveys that are expensive, time consuming and ...
INFLUENCE OF STRATIGRAPHY ON A DIVING MTBE PLUME AND ITS CHARACTERIZATION: A CASE STUDY
Conventional conceptual models applied at petroleum release sites are often based on assumptions of vertical contaminant migration through the vadose zone followed by horizontal, downgradient transport at the water table with limited, if any, additional downward migration. Howev...
IMPLEMENTATION OF GREEN ROOF SUSTAINABILITY IN ARID CONDITIONS
We successfully designed and fabricated accurately scaled prototypes of a green roof and a conventional white roof and began testing in simulated conditions of 115-70°F with relative humidity of 13%. The design parameters were based on analytical models created through ver...
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. Zhou; H. Huang; M. Deo
Log and seismic data indicate that most shale formations have strong heterogeneity. Conventional analytical and semi-analytical fracture models are not sufficient to simulate complex fracture propagation in these highly heterogeneous formations. Without considering the intrinsic heterogeneity, the predicted morphology of a hydraulic fracture may be biased and misleading when optimizing the completion strategy. In this paper, a fully coupled fluid-flow and geomechanics hydraulic fracture simulator based on a dual-lattice Discrete Element Method (DEM) is used to predict hydraulic fracture propagation in heterogeneous reservoirs. The heterogeneity of the rock is simulated by assigning different material force constants and critical strains to different particles and is adjusted by conditioning to the measured data and observed geological features. Based on the proposed model, the effects of heterogeneity at different scales on micromechanical behavior and induced macroscopic fractures are examined. The numerical results show that microcracks are more inclined to form at weaker grain interfaces. A conventional simulator with a homogeneity assumption is not applicable to highly heterogeneous shale formations.
Introducing the VRT gas turbine combustor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melconian, J.O.; Mostafa, A.A.; Nguyen, H.L.
An innovative annular combustor configuration is being developed for aircraft and other gas turbine engines. This design has the potential to permit higher turbine inlet temperatures by reducing the pattern factor and providing a major reduction in NO(x) emission. The design concept is based on a Variable Residence Time (VRT) technique, which allows large fuel particles adequate time to burn completely in the circumferentially mixed primary zone. High durability of the combustor is achieved by dual-function use of the incoming air. The feasibility of the concept was demonstrated by water analogue tests and 3-D computer modeling. The computer model predicted a 50 percent reduction in pattern factor when compared to a state-of-the-art conventional combustor. The VRT combustor uses only half the number of fuel nozzles of the conventional configuration. The results of the chemical kinetics model require further investigation, as the NO(x) predictions did not correlate with the available experimental and analytical data base.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huynh, E; Coroller, T; Narayan, V
Purpose: Stereotactic body radiation therapy (SBRT) is the standard of care for medically inoperable non-small cell lung cancer (NSCLC) patients and has demonstrated excellent local control and survival. However, some patients still develop distant metastases and local recurrence, and therefore, there is a clinical need to identify patients at high risk of disease recurrence. The aim of the current study is to use a radiomics approach to identify imaging biomarkers, based on tumor phenotype, for clinical outcomes in SBRT patients. Methods: Radiomic features were extracted from free-breathing computed tomography (CT) images of 113 Stage I-II NSCLC patients treated with SBRT. Their association with and prognostic performance for distant metastasis (DM), locoregional recurrence (LRR) and survival was assessed and compared with conventional features (tumor volume and diameter) and clinical parameters (e.g. performance status, overall stage). The prognostic performance was evaluated using the concordance index (CI). Multivariate model performance was evaluated using cross-validation. All p-values were corrected for multiple testing using the false discovery rate. Results: Radiomic features were associated with DM (one feature), LRR (one feature) and survival (four features). Conventional features were only associated with survival, and one clinical parameter was associated with LRR and survival. One radiomic feature was significantly prognostic for DM (CI=0.670, p<0.1 from random), while none of the conventional and clinical parameters were significant for DM. The multivariate radiomic model had a higher median CI (0.671) for DM than the conventional (0.618) and clinical models (0.617). Conclusion: Radiomic features have potential to be imaging biomarkers for clinical outcomes that conventional imaging metrics and clinical parameters cannot predict in SBRT patients, such as distant metastasis. Development of a radiomics biomarker that can identify patients at high risk of recurrence could facilitate personalization of their treatment regimen for an optimized clinical outcome. R.M. had a consulting interest with Amgen (ended in 2015).
Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming
2014-01-01
An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method could effectively avoid friction torque and additional inertial moment existing in conventional approaches. Curved surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space. The comparison with conventional modeling method shows that it helps to improve the model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, and thus the rotor orientation can be computed from the measured results and analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation and the value of magnetic flux density. The experimental results show that the proposed method can measure the rotor orientation precisely, and the measurement accuracy could be improved by the novel 3D magnet array. The study result could be used for real-time motion control of PM spherical actuators. PMID:25342000
Social Image Tag Ranking by Two-View Learning
NASA Astrophysics Data System (ADS)
Zhuang, Jinfeng; Hoi, Steven C. H.
Tags play a central role in text-based social image retrieval and browsing. However, the tags annotated by web users could be noisy, irrelevant, and often incomplete for describing the image contents, which may severely deteriorate the performance of text-based image retrieval models. In order to solve this problem, researchers have proposed techniques to rank the annotated tags of a social image according to their relevance to the visual content of the image. In this paper, we aim to overcome the challenge of social image tag ranking for a corpus of social images with rich user-generated tags by proposing a novel two-view learning approach. It can effectively exploit both textual and visual contents of social images to discover the complicated relationship between tags and images. Unlike the conventional learning approaches that usually assumes some parametric models, our method is completely data-driven and makes no assumption about the underlying models, making the proposed solution practically more effective. We formulate our method as an optimization task and present an efficient algorithm to solve it. To evaluate the efficacy of our method, we conducted an extensive set of experiments by applying our technique to both text-based social image retrieval and automatic image annotation tasks. Our empirical results showed that the proposed method can be more effective than the conventional approaches.
Safety factor profiles from spectral motional Stark effect for ITER applications
NASA Astrophysics Data System (ADS)
Ko, Jinseok; Chung, Jinil; Wi, Han Min
2017-10-01
Depositions on the first mirror and multiple reflections on the other mirrors in the labyrinth of the optical system in the motional Stark effect (MSE) diagnostic for ITER are regarded as one of the main obstacles to overcome. One of the alternatives to the present-day conventional photoelastic-modulation-based MSE principles is the spectroscopic analyses on the motional Stark emissions where either the ratios among individual Stark multiplets or the amount of the Stark split are measured based on precise and accurate atomic data and models to ultimately provide the critical internal constraints in the magnetic equilibrium reconstruction. Equipped with the PEM-based conventional MSE hardware since 2015, the KSTAR MSE diagnostic system is capable of investigating the feasibility of the spectroscopic MSE approach particularly via comparative studies with the PEM approach. Available atomic data and models are used to analyze the beam emission spectra with a high-spectral-resolution spectrometer with a patent-pending dispersion calibration technology. Experimental validation on the atomic data and models is discussed in association with the effect of the existence of mirrors, the Faraday rotation in the relay optics media, and the background polarized light on the measured spectra. Work supported by the Ministry of Science, ICT and Future Planning, Korea.
Design principles for shift current photovoltaics
Cook, Ashley M.; M. Fregoso, Benjamin; de Juan, Fernando; ...
2017-01-25
While the basic principles of conventional solar cells are well understood, little attention has gone towards maximizing the efficiency of photovoltaic devices based on shift currents. By analysing effective models, here we outline simple design principles for the optimization of shift currents for frequencies near the band gap. This method allows us to express the band edge shift current in terms of a few model parameters and to show it depends explicitly on wavefunctions in addition to standard band structure. We use our approach to identify two classes of shift current photovoltaics, ferroelectric polymer films and single-layer orthorhombic monochalcogenides such as GeS, which display the largest band edge responsivities reported so far. Moreover, exploring the parameter space of the tight-binding models that describe them, we find photoresponsivities that can exceed 100 mA W−1. Our results illustrate the great potential of shift current photovoltaics to compete with conventional solar cells.
Design principles for shift current photovoltaics
Cook, Ashley M.; M. Fregoso, Benjamin; de Juan, Fernando; Coh, Sinisa; Moore, Joel E.
2017-01-01
While the basic principles of conventional solar cells are well understood, little attention has gone towards maximizing the efficiency of photovoltaic devices based on shift currents. By analysing effective models, here we outline simple design principles for the optimization of shift currents for frequencies near the band gap. Our method allows us to express the band edge shift current in terms of a few model parameters and to show it depends explicitly on wavefunctions in addition to standard band structure. We use our approach to identify two classes of shift current photovoltaics, ferroelectric polymer films and single-layer orthorhombic monochalcogenides such as GeS, which display the largest band edge responsivities reported so far. Moreover, exploring the parameter space of the tight-binding models that describe them we find photoresponsivities that can exceed 100 mA W−1. Our results illustrate the great potential of shift current photovoltaics to compete with conventional solar cells. PMID:28120823
Dissipation in microwave quantum circuits with hybrid nanowire Josephson elements
NASA Astrophysics Data System (ADS)
Mugnai, D.; Ranfagni, A.; Agresti, A.
2017-04-01
Recent experiments on hybrid Josephson junctions have made this a topical subject. However, a quantity which remains unknown is the tunneling (or response) time, which is strictly connected to the role that dissipation plays in the dynamics of the complete system. A simple way of evaluating dissipation in microwave circuits, previously developed for describing the dynamics of conventional Josephson junctions, is now presented as suitable for application even to non-conventional junctions. The method is based on a stochastic model, as derived from the telegrapher's equation, and is particularly suited to the case of junctions loaded by real transmission lines. When the load is constituted by lumped-constant circuits, a connection with the stochastic model is also maintained. The theoretical model demonstrated its ability to analyze both classically allowed and forbidden processes, and has found a wide field of applicability, namely in all cases in which dissipative effects cannot be ignored.
An implementation of 7E Learning Cycle Model to Improve Student Self-esteem
NASA Astrophysics Data System (ADS)
Firdaus, F.; Priatna, N.; Suhendra, S.
2017-09-01
One of the affective factors that affect student learning outcomes is student self-esteem in mathematics; learning achievement and self-esteem influence each other. The purpose of this research is to determine whether the self-esteem of students taught with the 7E learning cycle model is better than that of students receiving conventional learning. The research used a non-control group design. The data were found to be normal and homogeneous, so a t test was applied; the results showed significant differences in self-esteem between students taught with the 7E learning cycle model and students receiving conventional learning. The implication of these results is that students should be required to engage in extensive discussion, presentation and evaluation of classroom activities, as these learning stages can improve students' self-esteem, especially pride in the results achieved.
Kim, Yusung; Tomé, Wolfgang A
2008-01-01
Voxel-based iso-Tumor Control Probability (TCP) maps and iso-Complication maps are proposed as a plan-review tool, especially for functional image-guided intensity-modulated radiotherapy (IMRT) strategies such as selective boosting (dose painting) and conformal avoidance IMRT. The maps employ voxel-based phenomenological biological dose-response models for target volumes and normal organs. Two IMRT strategies for prostate cancer, namely conventional uniform IMRT delivering an EUD (equivalent uniform dose) of 84 Gy to the entire PTV and selective boosting delivering an EUD of 82 Gy to the entire PTV, are investigated to illustrate the advantages of this approach over isodose maps. Conventional uniform IMRT yielded a more uniform isodose map over the entire PTV, while selective boosting resulted in a nonuniform isodose map. However, when employing voxel-based iso-TCP maps, selective boosting exhibited a more uniform tumor control probability map than could be achieved using conventional uniform IMRT, which showed TCP cold spots in high-risk tumor subvolumes despite delivering a higher EUD to the entire PTV. Voxel-based iso-Complication maps are presented for rectum and bladder, and their utilization for selective avoidance IMRT strategies is discussed. We believe that as the need for functional image-guided treatment planning grows, voxel-based iso-TCP and iso-Complication maps will become an important tool to assess the integrity of such treatment plans.
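As an illustration of what a voxel-based TCP map involves, the sketch below uses a Poisson linear-quadratic response with a non-uniform clonogen density. The radiosensitivity parameters, densities and doses are placeholders, standing in for the phenomenological models actually used.

```python
import numpy as np

def voxel_tcp_map(dose, clonogen_density, voxel_volume, alpha=0.15, beta=0.05, n_frac=40):
    """
    Poisson/linear-quadratic voxel TCP map (an illustrative stand-in for the
    phenomenological dose-response models referred to above).
    dose:              (n_voxels,) total physical dose in Gy
    clonogen_density:  (n_voxels,) clonogens per cm^3 (can be non-uniform)
    voxel_volume:      voxel volume in cm^3
    """
    d = dose / n_frac                                 # dose per fraction
    surviving = np.exp(-(alpha * dose + beta * d * dose))
    tcp_voxel = np.exp(-clonogen_density * voxel_volume * surviving)
    return tcp_voxel, tcp_voxel.prod()                # voxel map and whole-target TCP

# Hypothetical usage: a boosted high-risk subvolume inside an 80 Gy background
dose = np.full(1000, 80.0)
dose[:100] = 90.0                                     # selectively boosted voxels
density = np.full(1000, 1e6)
density[:100] = 1e7                                   # denser tumour subvolume
tcp_map, tcp_total = voxel_tcp_map(dose, density, voxel_volume=0.027)
```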
Wavelet based free-form deformations for nonrigid registration
NASA Astrophysics Data System (ADS)
Sun, Wei; Niessen, Wiro J.; Klein, Stefan
2014-03-01
In nonrigid registration, deformations may take place on the coarse and fine scales. For the conventional B-splines based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components. Potentially, this could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-splines based wavelet, as defined by Cai and Wang [1]. This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems [2], but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-splines based wavelet model is that the space of allowable deformations is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet-based registration leads to smoother deformation fields than traditional B-splines based registration, while achieving better accuracy.
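For orientation, the sketch below evaluates a conventional 1-D cubic B-spline FFD displacement field; since the wavelet model above is a linear reparameterization of the same deformation space, the evaluation carries over with wavelet coefficients in place of B-spline coefficients. The grid spacing and coefficient values are arbitrary examples.

```python
import numpy as np

def cubic_bspline(u):
    """Cubic B-spline kernel B3(u), nonzero on |u| < 2."""
    u = np.abs(u)
    out = np.zeros_like(u)
    m1 = u < 1
    m2 = (u >= 1) & (u < 2)
    out[m1] = (4 - 6 * u[m1] ** 2 + 3 * u[m1] ** 3) / 6
    out[m2] = (2 - u[m2]) ** 3 / 6
    return out

def ffd_displacement(x, coeffs, spacing):
    """
    1-D free-form deformation: displacement at points x as a weighted sum of
    cubic B-spline basis functions on a control grid with the given spacing.
    """
    disp = np.zeros_like(x, dtype=float)
    for k, c in enumerate(coeffs):
        disp += c * cubic_bspline(x / spacing - k)
    return disp

# Hypothetical usage: a coarse control grid deforming a 1-D coordinate axis
x = np.linspace(0, 100, 201)
coeffs = np.array([0.0, 2.0, -1.5, 0.5, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
deformed = x + ffd_displacement(x, coeffs, spacing=10.0)
```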
NASA Astrophysics Data System (ADS)
Fajkus, Marcel; Nedoma, Jan; Martinek, Radek; Vasinek, Vladimir
2017-10-01
In this article, we describe an innovative non-invasive method of Fetal Phonocardiography (fPCG) using fiber-optic sensors and an adaptive algorithm for the measurement of fetal heart rate (fHR). Conventional PCG is based on non-invasive scanning of acoustic signals by means of a microphone placed on the thorax; for fPCG, the microphone is placed on the maternal abdomen. Our solution is based on patent-pending non-invasive scanning of acoustic signals by means of a fiber-optic interferometer. Fiber-optic sensors are resistant to technical artifacts such as electromagnetic interference (EMI), so they can be used in situations where it is impossible to use conventional EFM methods, e.g. during Magnetic Resonance Imaging (MRI) examination or in case of delivery in water. The adaptive evaluation system is based on the Recursive Least Squares (RLS) algorithm. Based on real measurements performed on five volunteers with their written consent, we created a simplified dynamic signal model of the distribution of heartbeat sounds (HS) through the human body. The model allows us to verify the proposed adaptive RLS-based system. The functionality of the proposed non-invasive adaptive system was verified using objective parameters such as Sensitivity (S+) and Signal-to-Noise Ratio (SNR).
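The RLS update referred to above follows the standard textbook recursion; a self-contained sketch is given below. The filter order, forgetting factor and the synthetic maternal/fetal mixture are illustrative assumptions, not the parameters or signal model of the actual fPCG system.

```python
import numpy as np

def rls_filter(d, x, order=8, lam=0.99, delta=1e2):
    """
    Standard recursive least squares (RLS) adaptive filter.
    d: desired/primary signal (e.g. abdominal sensor), x: reference signal
    (e.g. a maternal-dominated channel). Returns the error signal e, in which
    the component correlated with the reference is suppressed.
    """
    n = len(d)
    w = np.zeros(order)                  # filter weights
    P = np.eye(order) * delta            # inverse correlation matrix estimate
    e = np.zeros(n)
    for i in range(order, n):
        u = x[i - order:i][::-1]         # most recent reference samples
        k = P @ u / (lam + u @ P @ u)    # gain vector
        e[i] = d[i] - w @ u              # a-priori error
        w = w + k * e[i]
        P = (P - np.outer(k, u @ P)) / lam
    return e

# Hypothetical usage with synthetic maternal + fetal heart-sound mixtures
fs = 1000
t = np.arange(0, 10, 1 / fs)
maternal = np.sin(2 * np.pi * 1.2 * t)                # ~72 bpm component
fetal = 0.3 * np.sin(2 * np.pi * 2.3 * t)             # ~140 bpm component
primary = maternal + fetal + 0.05 * np.random.randn(len(t))
reference = np.roll(maternal, 5) + 0.05 * np.random.randn(len(t))
fetal_estimate = rls_filter(primary, reference)
```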
Analysis of rocket engine injection combustion processes
NASA Technical Reports Server (NTRS)
Salmon, J. W.
1976-01-01
A critique is given of the JANNAF sub-critical propellant injection/combustion process analysis computer models and application of the models to correlation of well documented hot fire engine data bases. These programs are the distributed energy release (DER) model for conventional liquid propellants injectors and the coaxial injection combustion model (CICM) for gaseous annulus/liquid core coaxial injectors. The critique identifies model inconsistencies while the computer analyses provide quantitative data on predictive accuracy. The program is comprised of three tasks: (1) computer program review and operations; (2) analysis and data correlations; and (3) documentation.
Vibration control of beams using stand-off layer damping: finite element modeling and experiments
NASA Astrophysics Data System (ADS)
Chaudry, A.; Baz, A.
2006-03-01
Damping treatments with a stand-off layer (SOL) have been widely accepted as an attractive alternative to conventional constrained layer damping (CLD) treatments. This acceptance stems from the fact that the SOL, which is simply a slotted spacer layer sandwiched between the viscoelastic layer and the base structure, acts as a strain magnifier that considerably amplifies the shear strain and hence the energy dissipation characteristics of the viscoelastic layer. Accordingly, more effective vibration suppression can be achieved by using a SOL as compared to employing conventional CLD. In this paper, a comprehensive finite element model of the stand-off layer constrained damping treatment is developed. The model accounts for the geometrical and physical parameters of the slotted SOL, the viscoelastic layer, the constraining layer, and the base structure. The predictions of the model are validated against the predictions of a distributed transfer function model and a model built using a commercial finite element code (ANSYS). Furthermore, the theoretical predictions are validated experimentally for passive SOL treatments of different configurations. The obtained results indicate close agreement between theory and experiments, and demonstrate the effectiveness of CLD with SOL in enhancing energy dissipation as compared to conventional CLD. Extension of the proposed one-dimensional CLD with SOL to more complex structures is a natural extension of the present study.
Bashir, Mustafa R; Weber, Paul W; Husarik, Daniela B; Howle, Laurens E; Nelson, Rendon C
2012-08-01
To assess whether a scan triggering technique based on the slope of the time-attenuation curve combined with table speed optimization may improve arterial enhancement in aortic CT angiography compared to conventional threshold-based triggering techniques. Measurements of arterial enhancement were performed in a physiologic flow phantom over a range of simulated cardiac outputs (2.2-8.1 L/min) using contrast media boluses of 80 and 150 mL injected at 4 mL/s. These measurements were used to construct computer models of aortic attenuation in CT angiography, using cardiac output, aortic diameter, and CT table speed as input parameters. In-plane enhancement was calculated for normal and aneurysmal aortic diameters. Calculated arterial enhancement was poor (<150 HU) along most of the scan length using the threshold-based triggering technique for low cardiac outputs and the aneurysmal aorta model. Implementation of the slope-based triggering technique with table speed optimization improved enhancement in all scenarios and yielded good- (>200 HU; 13/16 scenarios) to excellent-quality (>300 HU; 3/16 scenarios) enhancement in all cases. Slope-based triggering with table speed optimization may improve the technical quality of aortic CT angiography over conventional threshold-based techniques, and may reduce technical failures related to low cardiac output and slow flow through an aneurysmal aorta.
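A hedged sketch of slope-based triggering is shown below: the scan is triggered when the fitted slope of the monitored time-attenuation curve exceeds a threshold, rather than when an absolute HU level is reached. The threshold, window length and sigmoid bolus curve are illustrative, not the calibrated values from the study.

```python
import numpy as np

def slope_trigger(times, attenuation, slope_threshold=10.0, window=3):
    """
    Trigger scanning when the slope of the time-attenuation curve (HU/s),
    estimated over a short sliding window of monitoring samples, exceeds a
    threshold, instead of waiting for an absolute HU value.
    """
    for i in range(window, len(times)):
        t = times[i - window:i + 1]
        a = attenuation[i - window:i + 1]
        slope = np.polyfit(t, a, 1)[0]               # least-squares slope in HU/s
        if slope >= slope_threshold:
            return times[i], slope                   # trigger time and observed slope
    return None, None

# Hypothetical monitoring samples acquired every 2 s in the aorta
times = np.arange(0, 40, 2.0)
attenuation = 40 + 260 / (1 + np.exp(-(times - 18) / 3.0))   # sigmoid bolus arrival
print(slope_trigger(times, attenuation))
```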
Simulating lifetime outcomes associated with complications for people with type 1 diabetes.
Lung, Tom W C; Clarke, Philip M; Hayes, Alison J; Stevens, Richard J; Farmer, Andrew
2013-06-01
The aim of this study was to develop a discrete-time simulation model for people with type 1 diabetes mellitus, to estimate and compare mean life expectancy and quality-adjusted life-years (QALYs) over a lifetime between intensive and conventional blood glucose treatment groups. We synthesized evidence on type 1 diabetes patients using several published sources. The simulation model was based on 13 equations to estimate risks of events and mortality. Cardiovascular disease (CVD) risk was obtained from results of the DCCT (diabetes control and complications trial). Mortality post-CVD event was based on a study using linked administrative data on people with diabetes from Western Australia. Information on incidence of renal disease and the progression to CVD was obtained from studies in Finland and Italy. Lower-extremity amputation (LEA) risk was based on the type 1 diabetes Swedish inpatient registry, and the risk of blindness was obtained from results of a German-based study. Where diabetes-specific data were unavailable, information from other populations was used. We examine the degree and source of parameter uncertainty and illustrate an application of the model in estimating lifetime outcomes of using intensive and conventional treatments for blood glucose control. From 15 years of age, male and female patients had an estimated life expectancy of 47.2 (95 % CI 35.2-59.2) and 52.7 (95 % CI 41.7-63.6) years in the intensive treatment group. The model produced estimates of the lifetime benefits of intensive treatment for blood glucose from the DCCT of 4.0 (95 % CI 1.2-6.8) QALYs for women and 4.6 (95 % CI 2.7-6.9) QALYs for men. Absolute risk per 1,000 person-years for fatal CVD events was simulated to be 1.37 and 2.51 in intensive and conventional treatment groups, respectively. The model incorporates diabetic complications risk data from a type 1 diabetes population and synthesizes other type 1-specific data to estimate long-term outcomes of CVD, end-stage renal disease, LEA and risk of blindness, along with life expectancy and QALYs. External validation was carried out using life expectancy and absolute risk for fatal CVD events. Because of the flexible and transparent nature of the model, it has many potential future applications.
Audio-frequency analysis of inductive voltage dividers based on structural models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avramov, S.; Oldham, N.M.; Koffman, A.D.
1994-12-31
A Binary Inductive Voltage Divider (BIVD) is compared with a Decade Inductive Voltage Divider (DIVD) in an automatic IVD bridge. New detection and injection circuitry was designed and used to evaluate the IVDs with either the input or output tied to ground potential. In the audio frequency range the DIVD and BIVD error patterns are characterized for both in-phase and quadrature components. Differences between results obtained using a new error decomposition scheme based on structural modeling, and measurements using conventional IVD standards are reported.
Single particle analysis based on Zernike phase contrast transmission electron microscopy.
Danev, Radostin; Nagayama, Kuniaki
2008-02-01
We present the first application of Zernike phase-contrast transmission electron microscopy to single-particle 3D reconstruction of a protein, using GroEL chaperonin as the test specimen. We evaluated the performance of the technique by comparing 3D models derived from Zernike phase contrast imaging with models from conventional underfocus phase contrast imaging. The same resolution, about 12 Å, was achieved by both imaging methods. The reconstruction based on Zernike phase contrast data required about 30% fewer particles. The advantages and prospects of each technique are discussed.
Chen, Branson; Lee, Jong Bok; Kang, Hyeonjeong; Minden, Mark D; Zhang, Li
2018-04-24
While conventional chemotherapy is effective at eliminating the bulk of leukemic cells, chemotherapy resistance in acute myeloid leukemia (AML) is a prevalent problem that hinders conventional therapies and contributes to disease relapse, and ultimately patient death. We have recently shown that allogeneic double negative T cells (DNTs) are able to target the majority of primary AML blasts in vitro and in patient-derived xenograft models. However, some primary AML blast samples are resistant to DNT cell therapy. Given the differences in the modes of action of DNTs and chemotherapy, we hypothesize that DNT therapy can be used in combination with conventional chemotherapy to further improve their anti-leukemic effects and to target chemotherapy-resistant disease. Drug titration assays and flow-based cytotoxicity assays using ex vivo expanded allogeneic DNTs were performed on multiple AML cell lines to identify therapy-resistance. Primary AML samples were also tested to validate our in vitro findings. Further, a xenograft model was employed to demonstrate the feasibility of combining conventional chemotherapy and adoptive DNT therapy to target therapy-resistant AML. Lastly, blocking assays with neutralizing antibodies were employed to determine the mechanism by which chemotherapy increases the susceptibility of AML to DNT-mediated cytotoxicity. Here, we demonstrate that KG1a, a stem-like AML cell line that is resistant to DNTs and chemotherapy, and chemotherapy-resistant primary AML samples both became more susceptible to DNT-mediated cytotoxicity in vitro following pre-treatment with daunorubicin. Moreover, chemotherapy treatment followed by adoptive DNT cell therapy significantly decreased bone marrow engraftment of KG1a in a xenograft model. Mechanistically, daunorubicin increased the expression of NKG2D and DNAM-1 ligands on KG1a; blocking of these pathways attenuated DNT-mediated cytotoxicity. Our results demonstrate the feasibility and benefit of using DNTs as an immunotherapy after the administration of conventional chemotherapy.
Modeling and testing of a tube-in-tube separation mechanism of bodies in space
NASA Astrophysics Data System (ADS)
Michaels, Dan; Gany, Alon
2016-12-01
A tube-in-tube concept for separation of bodies in space was investigated theoretically and experimentally. The separation system is based on generating high-pressure gas by combustion of solid propellant and restricting the expansion of the gas to ejecting the two bodies in opposite directions, in a fashion that maximizes the generated impulse. An interior ballistics model was developed in order to investigate the potential benefits of the separation system for a large range of space body masses and for different design parameters such as geometry and propellant. The model takes into account solid propellant combustion, heat losses, and gas phase chemical reactions. The model shows that for large bodies (above 100 kg) and typical separation velocities of 5 m/s, the proposed separation mechanism may be characterized by a specific impulse of 25,000 s, two orders of magnitude larger than that of conventional solid rockets. This means that the proposed separation system requires only 1% of the propellant mass that would be needed for a conventional rocket for the same mission. Since many existing launch vehicles obtain such separation velocities by using conventional solid rocket motors (retro-rockets), the implementation of the new separation system design can dramatically reduce the mass of the separation system and increase safety. A dedicated experimental setup was built in order to demonstrate the concept and validate the model. The experimental results revealed specific impulse values of up to 27,000 s and showed good correspondence with the model.
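The claimed propellant saving follows directly from the specific impulse definition I_sp = I/(m_p g_0). The short worked check below assumes two equal 100 kg bodies, a 5 m/s relative separation velocity, and a typical solid-motor I_sp of about 250 s for the conventional case; these assumed figures are not taken from the paper.

```python
g0 = 9.81                  # m/s^2
m_body = 100.0             # kg, each of the two separated bodies
v_rel = 5.0                # m/s relative separation velocity

# Each body receives half the relative velocity; total impulse over both bodies.
impulse = 2 * m_body * (v_rel / 2)                       # = 500 N*s

for name, isp in [("tube-in-tube", 25_000.0), ("conventional retro-rocket", 250.0)]:
    m_prop = impulse / (isp * g0)                        # propellant mass, kg
    print(f"{name}: propellant mass ≈ {m_prop * 1000:.1f} g")

# The mass ratio equals Isp_conventional / Isp_tube-in-tube ≈ 1%, matching the abstract.
```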
Finite element analysis of container ship's cargo hold using ANSYS and POSEIDON software
NASA Astrophysics Data System (ADS)
Tanny, Tania Tamiz; Akter, Naznin; Amin, Osman Md.
2017-12-01
Nowadays, ship structural analysis has become an integral part of preliminary ship design, providing further support for the development and detailed design of ship structures. Structural analyses of container ships' cargo holds are carried out to balance safety and capacity, as those ships are exposed to a high risk of structural damage during a voyage. Two different design methodologies have been considered for the structural analysis of a container ship's cargo hold: a rule-based methodology and a more conventional software-based analysis. The rule-based analysis is done with DNV-GL's software POSEIDON, and the conventional package-based analysis is done with the ANSYS structural module. Both methods have been applied to analyze mechanical properties of the model such as total deformation, stress-strain distribution, von Mises stress, and fatigue, following different design bases and approaches, to provide guidance for further improvements in ship structural design.
Chaos in a dynamic model of traffic flows in an origin-destination network.
Zhang, Xiaoyan; Jarrett, David F.
1998-06-01
In this paper we investigate the dynamic behavior of road traffic flows in an area represented by an origin-destination (O-D) network. Probably the most widely used model for estimating the distribution of O-D flows is the gravity model [J. de D. Ortuzar and L. G. Willumsen, Modelling Transport (Wiley, New York, 1990)], which originated from an analogy with Newton's gravitational law. The conventional gravity model, however, is static. The investigation in this paper is based on a dynamic version of the gravity model, proposed by Dendrinos and Sonis as a modification of the conventional model [D. S. Dendrinos and M. Sonis, Chaos and Social-Spatial Dynamics (Springer-Verlag, Berlin, 1990)]. The dynamic model describes the variation of O-D flows over discrete-time periods, such as each day, each week, and so on. It is shown that when the dimension of the system is one or two, the O-D flow pattern either approaches an equilibrium or oscillates. When the dimension is higher, the behavior found in the model includes equilibria, oscillations, period doubling, and chaos. Chaotic attractors are characterized by (positive) Liapunov exponents and fractal dimensions. (c) 1998 American Institute of Physics.
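The chaos diagnostic mentioned above can be computed numerically. The sketch below estimates the largest Lyapunov exponent of a one-dimensional map as the orbit average of log|f'(x)|; the logistic map is used only as a stand-in, since the Dendrinos-Sonis O-D flow map itself is not reproduced here.

```python
import math

def largest_lyapunov(f, dfdx, x0, n_iter=10_000, n_transient=1_000):
    """Estimate the largest Lyapunov exponent of a 1-D map x_{t+1} = f(x_t)
    as the average of log|f'(x_t)| along an orbit."""
    x = x0
    for _ in range(n_transient):          # discard the transient
        x = f(x)
    acc = 0.0
    for _ in range(n_iter):
        acc += math.log(abs(dfdx(x)))
        x = f(x)
    return acc / n_iter

# Logistic map as an illustrative 1-D map (not the O-D gravity dynamics itself).
r = 3.9
lam = largest_lyapunov(lambda x: r * x * (1 - x), lambda x: r * (1 - 2 * x), 0.3)
print(f"largest Lyapunov exponent ≈ {lam:.3f} (positive indicates chaos)")
```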
Gain degradation and amplitude scintillation due to tropospheric turbulence
NASA Technical Reports Server (NTRS)
Theobold, D. M.; Hodge, D. B.
1978-01-01
It is shown that a simple physical model is adequate for the prediction of the long term statistics of both the reduced signal levels and increased peak-to-peak fluctuations. The model is based on conventional atmospheric turbulence theory and incorporates both amplitude and angle of arrival fluctuations. This model predicts the average variance of signals observed under clear air conditions at low elevation angles on earth-space paths at 2, 7.3, 20 and 30 GHz. Design curves based on this model for gain degradation, realizable gain, amplitude fluctuation as a function of antenna aperture size, frequency, and either terrestrial path length or earth-space path elevation angle are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khan, Yasin; Mathur, Jyotirmay; Bhandari, Mahabir S
2016-01-01
The paper describes a case study of an information technology office building with a radiant cooling system and a conventional variable air volume (VAV) system installed side by side so that their performance can be compared. First, a 3D model of the building covering architecture, occupancy, and HVAC operation was developed in EnergyPlus, a simulation tool. Second, a different calibration methodology was applied to develop the base case for assessing the energy saving potential. This paper details the calibration of the whole-building energy model down to the component level, including lighting, equipment, and HVAC components such as chillers, pumps, cooling towers, and fans. A new methodology for the systematic selection of influence parameters was also developed for calibrating simulation models that require long execution times. The error at the whole-building level, measured as the mean bias error (MBE), is 0.2%, and the coefficient of variation of the root mean square error (CvRMSE) is 3.2%. The total errors in HVAC at the hourly level are MBE = 8.7% and CvRMSE = 23.9%, which meet the criteria of ASHRAE Guideline 14 (2002) for hourly calibration. Suggestions are made for generalizing the energy savings of the radiant cooling system to existing building systems. A base case model was therefore developed from the calibrated model to quantify the energy saving potential of the radiant cooling system. It was found that a base case radiant cooling system integrated with a DOAS can save 28% energy compared with the conventional VAV system.
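The calibration criteria quoted above can be computed as in the sketch below. The normalizations follow one common reading of the ASHRAE Guideline 14 definitions (bias normalized by the total measured value, RMSE by the measured mean); the guideline's exact degrees-of-freedom corrections are omitted.

```python
import numpy as np

def mbe_percent(measured, simulated):
    """Mean bias error as a percentage of the total measured value
    (one common ASHRAE Guideline 14-style definition)."""
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    return 100.0 * (m - s).sum() / m.sum()

def cvrmse_percent(measured, simulated):
    """Coefficient of variation of the RMSE, normalized by the measured mean."""
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    rmse = np.sqrt(((m - s) ** 2).mean())
    return 100.0 * rmse / m.mean()

# Hypothetical hourly energy use (kWh): measured vs simulated.
measured = [120, 135, 150, 160, 140]
simulated = [118, 140, 148, 155, 143]
print(f"MBE = {mbe_percent(measured, simulated):.1f}%, "
      f"CvRMSE = {cvrmse_percent(measured, simulated):.1f}%")
```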
NASA Astrophysics Data System (ADS)
Wang, T.-L.; Michta, D.; Lindberg, R. R.; Charman, A. E.; Martins, S. F.; Wurtele, J. S.
2009-12-01
Results are reported of a one-dimensional simulation study comparing the modeling capability of a recently formulated extended three-wave model [R. R. Lindberg, A. E. Charman, and J. S. Wurtele, Phys. Plasmas 14, 122103 (2007); Phys. Plasmas 15, 055911 (2008)] to that of a particle-in-cell (PIC) code, as well as to a more conventional three-wave model, in the context of the plasma-based backward Raman amplification (PBRA) [G. Shvets, N. J. Fisch, A. Pukhov et al., Phys. Rev. Lett. 81, 4879 (1998); V. M. Malkin, G. Shvets, and N. J. Fisch, Phys. Rev. Lett. 82, 4448 (1999); Phys. Rev. Lett. 84, 1208 (2000)]. The extended three-wave model performs essentially as well as or better than a conventional three-wave description in all temperature regimes tested, and significantly better at the higher temperatures studied, while the computational savings afforded by the extended three-wave model make it a potentially attractive tool that can be used prior to or in conjunction with PIC simulations to model the kinetic effects of PBRA for nonrelativistic laser pulses interacting with underdense thermal plasmas. Very fast but reasonably accurate at moderate plasma temperatures, this model may be used to perform wide-ranging parameter scans or other exploratory analyses quickly and efficiently, in order to guide subsequent simulation via more accurate if intensive PIC techniques or other algorithms approximating the full Vlasov-Maxwell equations.
ERIC Educational Resources Information Center
Sidabutar, Ropinus
2016-01-01
The research aimed to investigate the effect of various innovative teaching models on improving students' achievement in various topics in mathematics. The study conducted experiments using innovative teaching with contextual, media-based, and web-based approaches, which were compared with the conventional teaching method. The result showed the innovation in the…
Tressou, Jessica; Ben Abdallah, Nadia; Planche, Christelle; Dervilly-Pinel, Gaud; Sans, Pierre; Engel, Erwan; Albert, Isabelle
2017-12-01
In this paper, exposure to polychlorinated biphenyls (PCBs) related to bovine meat consumption is assessed based on multiple sources of data, namely data collected within the national research project "SoMeat", which objectively assesses the potential risks and benefits of organic and conventional food production systems in terms of their respective contaminant contents. The work focuses on dioxin-like PCBs in bovine meat in France. A modular Bayesian approach is proposed, including measurements after production, the effect of cooking, levels and frequency of consumption, and the effect of digestion. In each module, a model is built, and prior information can be integrated through previously acquired data commonly used in food risk assessment or through vague priors. The output of the global model is the exposure for both production modes (organic and conventional) and three different cooking intensities (rare, medium, and well-done), before and after digestion. The main results show that organic meat is, on average, more contaminated than conventional meat after the production stage and after cooking, although cooking reduces the contamination level. This work is a first step toward refined risk assessment integrating steps such as cooking and digestion into chemical risk assessment, similarly to current microbiological risk assessments. Copyright © 2017 Elsevier Ltd. All rights reserved.
Gong, Rui; Xu, Haisong; Tong, Qingfen
2012-10-20
The colorimetric characterization of active matrix organic light emitting diode (AMOLED) panels suffers from their poor channel independence. Based on an evaluation of the colorimetric characteristics of channel independence and chromaticity constancy, an accurate colorimetric characterization method, namely the polynomial compensation model (PC model), which accounts for channel interactions, was proposed for AMOLED panels. In this model, polynomial expressions are employed to relate the prediction errors of the XYZ tristimulus values to the digital inputs, compensating for the XYZ prediction errors of the conventional piecewise linear interpolation assuming variable chromaticity coordinates (PLVC) model. The experimental results indicated that the proposed PC model outperformed other typical characterization models for the two tested AMOLED smart-phone displays, as well as for a professional liquid crystal display monitor.
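A rough sketch of the compensation idea is shown below: the residual XYZ error of a base characterization model is fitted as a polynomial of the digital inputs and added back at prediction time. The specific second-order polynomial basis and the least-squares fit are assumptions for illustration; the published PC model may use different terms.

```python
import numpy as np

def poly_features(rgb):
    """Second-order polynomial terms of the digital inputs (illustrative choice)."""
    r, g, b = rgb
    return np.array([1, r, g, b, r*g, r*b, g*b, r*r, g*g, b*b], float)

def fit_compensation(rgb_samples, xyz_measured, xyz_base_model):
    """Least-squares fit of the residual XYZ error of a base characterization
    model (e.g., PLVC) as a polynomial of the digital inputs."""
    X = np.array([poly_features(c) for c in rgb_samples])
    E = np.asarray(xyz_measured, float) - np.asarray(xyz_base_model, float)
    coeffs, *_ = np.linalg.lstsq(X, E, rcond=None)     # shape (10, 3)
    return coeffs

def predict_xyz(rgb, xyz_base, coeffs):
    """Compensated prediction: base-model XYZ plus the fitted polynomial correction."""
    return np.asarray(xyz_base, float) + poly_features(rgb) @ coeffs
```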
Design-based and model-based inference in surveys of freshwater mollusks
Dorazio, R.M.
1999-01-01
Well-known concepts in statistical inference and sampling theory are used to develop recommendations for planning and analyzing the results of quantitative surveys of freshwater mollusks. Two methods of inference commonly used in survey sampling (design-based and model-based) are described and illustrated using examples relevant in surveys of freshwater mollusks. The particular objectives of a survey and the type of information observed in each unit of sampling can be used to help select the sampling design and the method of inference. For example, the mean density of a sparsely distributed population of mollusks can be estimated with higher precision by using model-based inference or by using design-based inference with adaptive cluster sampling than by using design-based inference with conventional sampling. More experience with quantitative surveys of natural assemblages of freshwater mollusks is needed to determine the actual benefits of different sampling designs and inferential procedures.
Extra-dimensional Demons: A method for incorporating missing tissue in deformable image registration
Nithiananthan, Sajendra; Schafer, Sebastian; Mirota, Daniel J.; Stayman, J. Webster; Zbijewski, Wojciech; Reh, Douglas D.; Gallia, Gary L.; Siewerdsen, Jeffrey H.
2012-01-01
Purpose: A deformable registration method capable of accounting for missing tissue (e.g., excision) is reported for application in cone-beam CT (CBCT)-guided surgical procedures. Excisions are identified by a segmentation step performed simultaneous to the registration process. Tissue excision is explicitly modeled by increasing the dimensionality of the deformation field to allow motion beyond the dimensionality of the image. The accuracy of the model is tested in phantom, simulations, and cadaver models. Methods: A variant of the Demons deformable registration algorithm is modified to include excision segmentation and modeling. Segmentation is performed iteratively during the registration process, with initial implementation using a threshold-based approach to identify voxels corresponding to “tissue” in the moving image and “air” in the fixed image. With each iteration of the Demons process, every voxel is assigned a probability of excision. Excisions are modeled explicitly during registration by increasing the dimensionality of the deformation field so that both deformations and excisions can be accounted for by in- and out-of-volume deformations, respectively. The out-of-volume (i.e., fourth) component of the deformation field at each voxel carries a magnitude proportional to the excision probability computed in the excision segmentation step. The registration accuracy of the proposed “extra-dimensional” Demons (XDD) and conventional Demons methods was tested in the presence of missing tissue in phantom models, simulations investigating the effect of excision size on registration accuracy, and cadaver studies emulating realistic deformations and tissue excisions imparted in CBCT-guided endoscopic skull base surgery. Results: Phantom experiments showed the normalized mutual information (NMI) in regions local to the excision to improve from 1.10 for the conventional Demons approach to 1.16 for XDD, and qualitative examination of the resulting images revealed major differences: the conventional Demons approach imparted unrealistic distortions in areas around tissue excision, whereas XDD provided accurate “ejection” of voxels within the excision site and maintained the registration accuracy throughout the rest of the image. Registration accuracy in areas far from the excision site (e.g., > ∼5 mm) was identical for the two approaches. Quantitation of the effect was consistent in analysis of NMI, normalized cross-correlation (NCC), target registration error (TRE), and accuracy of voxels ejected from the volume (true-positive and false-positive analysis). The registration accuracy for conventional Demons was found to degrade steeply as a function of excision size, whereas XDD was robust in this regard. Cadaver studies involving realistic excision of the clivus, vidian canal, and ethmoid sinuses demonstrated similar results, with unrealistic distortion of anatomy imparted by conventional Demons and accurate ejection and deformation for XDD. Conclusions: Adaptation of the Demons deformable registration process to include segmentation (i.e., identification of excised tissue) and an extra dimension in the deformation field provided a means to accurately accommodate missing tissue between image acquisitions. The extra-dimensional approach yielded accurate “ejection” of voxels local to the excision site while preserving the registration accuracy (typically subvoxel) of the conventional Demons approach throughout the rest of the image. 
The ability to accommodate missing tissue volumes is important to application of CBCT for surgical guidance (e.g., skull base drillout) and may have application in other areas of CBCT guidance. PMID:22957637
A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots.
Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il Dan
2016-03-01
This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%.
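The core of inverse perspective mapping is a homography from floor pixels to metric ground-plane coordinates, from which obstacle distances follow directly. In the sketch below the four calibration correspondences and the obstacle pixels are hypothetical; a real system would obtain them from camera calibration and from the segmentation step described above.

```python
import cv2
import numpy as np

# Four image points on the floor (pixels) and their ground-plane coordinates
# (cm); these correspondences are hypothetical calibration data.
img_pts = np.float32([[220, 470], [420, 470], [600, 330], [40, 330]])
ground_pts = np.float32([[-20, 30], [20, 30], [60, 150], [-60, 150]])

H = cv2.getPerspectiveTransform(img_pts, ground_pts)   # image -> ground plane

def pixel_to_ground(u, v):
    """Map a floor pixel to ground-plane coordinates via the IPM homography."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

# Distance from the robot (ground-plane origin) to the nearest obstacle pixel.
obstacle_pixels = [(300, 400), (310, 395)]   # hypothetical segmentation output
distances = [np.hypot(*pixel_to_ground(u, v)) for u, v in obstacle_pixels]
print(f"nearest obstacle ≈ {min(distances):.1f} cm")
```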
Akça, Kıvanç; Eser, Atılım; Çavuşoğlu, Yeliz; Sağırkaya, Elçin; Çehreli, Murat Cavit
2015-05-01
The aim of this study was to investigate conventionally and early loaded titanium and titanium-zirconium alloy implants by three-dimensional finite element stress analysis. A three-dimensional model of a dental implant was created, and a thread area was established as a region of interest in trabecular bone to study a localized part of the global model with a refined mesh. The peri-implant tissues around conventionally loaded (model 1) and early loaded (model 2) implants were implemented and used to explore principal stresses, displacement values, and equivalent strains in the peri-implant region of titanium and titanium-zirconium implants under a static load of 300 N applied on top of the abutment surface, with or without 30° inclination. Under axial loading, principal stresses were comparable for both implants and models. Under oblique loading, principal stresses around titanium-zirconium implants were slightly higher in both models, though the stress magnitudes remained comparable. The displacement values and equivalent strain amplitudes around both implants and models were similar. Peri-implant bone around titanium and titanium-zirconium implants experiences similar stress magnitudes coupled with similar intraosseous implant displacement values under conventional loading and early loading simulations. Titanium-zirconium implants have a biomechanical outcome comparable to that of conventional titanium implants under conventional and early loading.
Application of new radio tracking data types to critical spacecraft navigation problems
NASA Technical Reports Server (NTRS)
Ondrasik, V. J.; Rourke, K. H.
1972-01-01
Earth-based radio tracking data types are considered, which involve simultaneous or nearly simultaneous spacecraft tracking from widely separated tracking stations. These data types are conventional tracking instrumentation analogs of the very long baseline interferometry (VLBI) of radio astronomy-hence the name quasi-VLBI. A preliminary analysis of quasi-VLBI is presented using simplified tracking data models. The results of accuracy analyses are presented for a representative mission, Viking 1975. The results indicate that, contingent on projected tracking system accuracy, quasi-VLBI can be expected to significantly improve navigation performance over that expected from conventional tracking data types.
Worldwide multi-model intercomparison of clear-sky solar irradiance predictions
NASA Astrophysics Data System (ADS)
Ruiz-Arias, Jose A.; Gueymard, Christian A.; Cebecauer, Tomas
2017-06-01
Accurate modeling of solar radiation in the absence of clouds is highly important because solar power production peaks during cloud-free situations. The conventional validation approach of clear-sky solar radiation models relies on the comparison between model predictions and ground observations. Therefore, this approach is limited to locations with availability of high-quality ground observations, which are scarce worldwide. As a consequence, many areas of interest for, e.g., solar energy development, still remain sub-validated. Here, a worldwide inter-comparison of the global horizontal irradiance (GHI) and direct normal irradiance (DNI) calculated by a number of appropriate clear-sky solar radiation models is proposed, without direct intervention of any weather or solar radiation ground-based observations. The model inputs are all gathered from atmospheric reanalyses covering the globe. The model predictions are compared to each other and only their relative disagreements are quantified. The largest differences between model predictions are found over central and northern Africa, the Middle East, and all over Asia. This coincides with areas of high aerosol optical depth and highly varying aerosol distribution size. Overall, the differences in modeled DNI are found to be about twice as large as those for GHI. It is argued that the prevailing weather regimes (most importantly, aerosol conditions) over regions exhibiting substantial divergences are not adequately parameterized by all models. Further validation and scrutiny using conventional methods based on ground observations should be pursued in priority over those specific regions to correctly evaluate the performance of clear-sky models, and select those that can be recommended for solar concentrating applications in particular.
A refined 'standard' thermal model for asteroids based on observations of 1 Ceres and 2 Pallas
NASA Technical Reports Server (NTRS)
Lebofsky, Larry A.; Sykes, Mark V.; Tedesco, Edward F.; Veeder, Glenn J.; Matson, Dennis L.
1986-01-01
An analysis of ground-based thermal IR observations of 1 Ceres and 2 Pallas in light of their recently determined occultation diameters and small amplitude light curves has yielded a new value for the IR beaming parameter employed in the standard asteroid thermal emission model which is significantly lower than the previous one. When applied to the reduction of thermal IR observations of other asteroids, this new value is expected to yield model diameters closer to actual values. The present formulation incorporates the IAU magnitude convention for asteroids that employs zero-phase magnitudes, including the opposition effect.
A modified active appearance model based on an adaptive artificial bee colony.
Abdulameer, Mohammed Hasan; Sheikh Abdullah, Siti Norul Huda; Othman, Zulaiha Ali
2014-01-01
The active appearance model (AAM) is one of the most popular model-based approaches and has been extensively used to extract features by highly accurate modeling of human faces under various physical and environmental circumstances. However, in such an active appearance model, fitting the model to the original image is a challenging task. The state of the art shows that optimization methods can be applied to resolve this problem, but applying optimization introduces difficulties of its own. Hence, in this paper we propose an AAM-based face recognition technique that resolves the fitting problem of AAM by introducing a new adaptive ABC algorithm. The adaptation increases the efficiency of fitting compared with the conventional ABC algorithm. We used three datasets in our experiments: the CASIA dataset, the property 2.5D face dataset, and the UBIRIS v1 image dataset. The results revealed that the proposed face recognition technique performed effectively in terms of recognition accuracy.
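For reference, a minimal (non-adaptive) artificial bee colony optimizer is sketched below, minimizing a toy cost as a stand-in for the AAM fitting error. The colony size, abandonment limit, and the simplified onlooker selection are illustrative choices; the paper's adaptive ABC modifies this baseline rather than reproducing it.

```python
import random

def abc_minimize(f, dim, bounds, n_food=20, limit=30, n_iter=200):
    """Minimal artificial bee colony optimizer (illustrative sketch)."""
    lo, hi = bounds
    foods = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fits = [f(x) for x in foods]
    trials = [0] * n_food
    best_fit, best_x = min(zip(fits, foods))

    def neighbour(i):
        # Perturb one dimension of source i toward/away from a random partner k.
        k = random.choice([j for j in range(n_food) if j != i])
        d = random.randrange(dim)
        cand = foods[i][:]
        cand[d] += random.uniform(-1, 1) * (foods[i][d] - foods[k][d])
        cand[d] = min(max(cand[d], lo), hi)
        return cand

    for _ in range(n_iter):
        # Employed bees (one per source) plus onlooker bees (chosen uniformly here
        # for brevity; classic ABC selects sources proportional to fitness).
        for i in list(range(n_food)) + [random.randrange(n_food) for _ in range(n_food)]:
            cand = neighbour(i)
            f_cand = f(cand)
            if f_cand < fits[i]:
                foods[i], fits[i], trials[i] = cand, f_cand, 0
            else:
                trials[i] += 1
        # Scout bees: abandon sources that stopped improving.
        for i in range(n_food):
            if trials[i] > limit:
                foods[i] = [random.uniform(lo, hi) for _ in range(dim)]
                fits[i], trials[i] = f(foods[i]), 0
        it_fit, it_x = min(zip(fits, foods))
        if it_fit < best_fit:
            best_fit, best_x = it_fit, it_x[:]
    return best_fit, best_x

# Toy stand-in for an AAM fitting cost: a simple squared residual.
print(abc_minimize(lambda x: sum(v * v for v in x), dim=3, bounds=(-5.0, 5.0)))
```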
A Model-based B2B (Batch to Batch) Control for An Industrial Batch Polymerization Process
NASA Astrophysics Data System (ADS)
Ogawa, Morimasa
This paper gives an overview of a model-based B2B (batch-to-batch) control for an industrial batch polymerization process. In order to control the reaction temperature precisely, several methods based on a rigorous process dynamics model are employed at all design stages of the B2B control, such as modeling and parameter estimation of the reaction kinetics, which is one of the important parts of the process dynamics model. The designed B2B control consists of gain-scheduled I-PD/II2-PD control (I-PD with double integral control), feed-forward compensation at the batch start time, and model adaptation utilizing the results of the last batch operation. Throughout actual batch operations, the B2B control provides superior control performance compared with that of conventional control methods.
Using Web-Based Knowledge Extraction Techniques to Support Cultural Modeling
NASA Astrophysics Data System (ADS)
Smart, Paul R.; Sieck, Winston R.; Shadbolt, Nigel R.
The World Wide Web is a potentially valuable source of information about the cognitive characteristics of cultural groups. However, attempts to use the Web in the context of cultural modeling activities are hampered by the large-scale nature of the Web and the current dominance of natural language formats. In this paper, we outline an approach to support the exploitation of the Web for cultural modeling activities. The approach begins with the development of qualitative cultural models (which describe the beliefs, concepts and values of cultural groups), and these models are subsequently used to develop an ontology-based information extraction capability. Our approach represents an attempt to combine conventional approaches to information extraction with epidemiological perspectives of culture and network-based approaches to cultural analysis. The approach can be used, we suggest, to support the development of models providing a better understanding of the cognitive characteristics of particular cultural groups.
Solberg, K; Heinemann, F; Pellikaan, P; Keilig, L; Stark, H; Bourauel, C; Hasan, I
2017-05-01
The effect of the number of implants on overdenture stability and on stress distribution in the edentulous mandible, implants, and overdenture was numerically investigated for implant-supported overdentures. Three models were constructed. Overdentures were connected to implants by means of ball head abutments and a rubber ring. In model 1, the overdenture was retained by two conventional implants; in model 2, by four conventional implants; and in model 3, by five mini implants. The overdenture was subjected to a symmetrical load at an angle of 20 degrees to the overdenture at the canine regions and vertically at the first molars. Four different loading conditions with two total forces (120 N, 300 N) were considered for the numerical analysis. The overdenture displacement was about 2.2 times higher when five mini implants were used rather than four conventional implants. The lowest stress in the bone bed was observed with four conventional implants. Stresses in bone were reduced by 61% in model 2 and by 6% in model 3 in comparison to model 1. The highest stress was observed with five mini implants. Stresses in implants were reduced by 76% in model 2 and increased by 89% in model 3 compared to model 1. The highest implant displacement was observed with five mini implants. Implant displacements were reduced by 29% in model 2 and increased by 273% in model 3 compared to model 1. Conventional implants provided better overdenture stability than mini implants. Regardless of the type and number of implants, the stresses within the bone and implants are below critical limits.
NASA Experimental Program to Stimulate Competitive Research: South Carolina
NASA Technical Reports Server (NTRS)
Sutton, Michael A.
2004-01-01
The use of an appropriate relationship model is critical for reliable prediction of future urban growth. Identification of proper variables and mathematical functions and determination of the weights or coefficients are the key tasks in building such a model. Although the conventional logistic regression model is appropriate for handling land use problems, it appears insufficient to address the interdependency of the predictor variables. This study used an alternative approach to simulating and modeling urban growth using artificial neural networks. It developed an operational neural network model trained using a robust backpropagation method. The model was applied in the Myrtle Beach region of South Carolina and tested with both global and areal datasets to examine the strengths of regional and areal models. The results indicate that the neural network model not only has many theoretical advantages over conventional mathematical models in representing complex urban systems, but is also practically superior to the logistic model in its capability to predict urban growth with better accuracy and less variation. The neural network model is particularly effective at identifying urban patterns in rural areas where the logistic model often falls short. It was also found from the area-based tests that there are significant intra-regional differences in urban growth, with different rules and rates. This suggests that the global modeling approach, or one model for the entire region, may not be adequate for simulating urban growth at the regional scale. Future research should develop methods for identifying and subdividing these areas and use a set of area-based models to address the issues of multi-centered, intra-regionally differentiated urban growth.
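As a toy counterpart of the comparison described above, the sketch below fits both a logistic regression and a small backpropagation-trained neural network to the same cell-level predictors of urban growth. The predictor variables and labels are synthetic placeholders fabricated purely for illustration; real inputs would be GIS-derived variables such as distances to roads and existing urban land.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Hypothetical predictor table: each row is a land cell with three variables;
# y = 1 if the cell urbanized between two observation dates (synthetic rule).
rng = np.random.default_rng(0)
X = rng.random((5000, 3))
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.1 * rng.random(5000) < 0.5).astype(int)

logit = LogisticRegression(max_iter=1000).fit(X, y)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    solver="adam", random_state=0).fit(X, y)   # backprop-trained
print("logistic accuracy:", logit.score(X, y))
print("neural-net accuracy:", mlp.score(X, y))
```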
Feasibility and operating costs of an air cycle for CCHP in a fast food restaurant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez-Blanco, Horacio; Vineyard, Edward
This work considers the possibilities of an air-based Brayton cycle to provide the power, heating and cooling needs of fast-food restaurants. A model of the cycle based on conventional turbomachinery loss coefficients is formulated. The heating, cooling and power capabilities of the cycle are extracted from simulation results. Power and thermal loads for restaurants in Knoxville, TN and in International Falls, MN, are considered. It is found that the cycle can meet the loads by setting speed and mass flow-rate apportionment between the power and cooling functional sections. The associated energy costs appear elevated when compared to the cost of operating individual components or a more conventional, absorption-based CHP system. Lastly, a first-order estimate of capital investments is provided. Suggestions for future work whereby the operational costs could be reduced are given in the conclusions.
Modulating Thin Film Transistor Characteristics by Texturing the Gate Metal.
Nair, Aswathi; Bhattacharya, Prasenjit; Sambandan, Sanjiv
2017-12-20
The development of reliable, high performance integrated circuits based on thin film transistors (TFTs) is of interest for the development of flexible electronic circuits. In this work we illustrate the modulation of TFT transconductance via the texturing of the gate metal created by the addition of a conductive pattern on top of a planar gate. Texturing results in the semiconductor-insulator interface acquiring a non-planar geometry with local variations in the radius of curvature. This influences various TFT parameters such as the subthreshold slope, gate voltage at the onset of conduction, contact resistance and gate capacitance. Specific studies are performed on textures based on periodic striations oriented along different directions. Textured TFTs showed up to ±40% variation in transconductance depending on the texture orientation as compared to conventional planar gate TFTs. Analytical models are developed and compared with experiments. Gain boosting in common source amplifiers based on textured TFTs as compared to conventional TFTs is demonstrated.
Choice of optical system is critical for the security of double random phase encryption systems
NASA Astrophysics Data System (ADS)
Muniraj, Inbarasan; Guo, Changliang; Malallah, Ra'ed; Cassidy, Derek; Zhao, Liang; Ryle, James P.; Healy, John J.; Sheridan, John T.
2017-06-01
The linear canonical transform (LCT) is used to model coherent light-field propagation through first-order optical systems. Recently, a generic optical system, known as the quadratic phase encoding system (QPES), for encrypting a two-dimensional image has been reported. In such systems, two random phase keys and the individual LCT parameters (α,β,γ) serve as secret keys of the cryptosystem. It is important that such encryption systems also satisfy some dynamic security properties. We, therefore, examine such systems using two cryptographic evaluation methods, the avalanche effect and bit independence criterion, which indicate the degree of security of the cryptographic algorithms using QPES. We compared our simulation results with the conventional Fourier and the Fresnel transform-based double random phase encryption (DRPE) systems. The results show that the LCT-based DRPE has excellent avalanche and bit independence characteristics compared to the conventional Fourier and Fresnel-based encryption systems.
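The avalanche criterion used above can be estimated empirically as in the sketch below: flip a single plaintext bit and measure the fraction of ciphertext bits that change, averaged over trials. The `encrypt` callable is an assumed stand-in for the QPES/DRPE encryption followed by binarization of the output; values near 0.5 indicate good diffusion.

```python
import numpy as np

def avalanche_effect(encrypt, plaintext_bits, n_trials=100, rng=None):
    """Average fraction of ciphertext bits that flip when one plaintext bit is
    flipped. `plaintext_bits` is an integer 0/1 array; `encrypt` maps a bit
    array to a bit array (stand-in for the optical encryption + binarization)."""
    rng = rng or np.random.default_rng(0)
    base = encrypt(plaintext_bits)
    ratios = []
    for _ in range(n_trials):
        flipped = plaintext_bits.copy()
        i = rng.integers(flipped.size)
        flipped[i] ^= 1                              # flip a single plaintext bit
        ratios.append(np.mean(encrypt(flipped) != base))
    return float(np.mean(ratios))
```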
Leung, Victoria C; Pechlivanoglou, Petros; Chew, Hall F; Hatch, Wendy
2017-08-01
To use patient-level microsimulation models to evaluate the comparative cost-effectiveness of early corneal cross-linking (CXL) and conventional management with penetrating keratoplasty (PKP) when indicated in managing keratoconus in Canada. Cost-utility analysis using individual-based, state-transition microsimulation models. Simulated cohorts of 100 000 individuals with keratoconus who entered each treatment arm at 25 years of age. Fellow eyes were modeled separately. Simulated individuals lived up to a maximum of 110 years. We developed 2 state-transition microsimulation models to reflect the natural history of keratoconus progression and the impact of conventional management with PKP versus CXL. We collected data from the published literature to inform model parameters. We used realistic parameters that maximized the potential costs and complications of CXL, while minimizing those associated with PKP. In each treatment arm, we allowed simulated individuals to move through health states in monthly cycles from diagnosis until death. For each treatment strategy, we calculated the total cost and number of quality-adjusted life years (QALYs) gained. Costs were measured in Canadian dollars. Costs and QALYs were discounted at 5%, converting future costs and QALYs into present values. We used an incremental cost-effectiveness ratio (ICER = difference in lifetime costs/difference in lifetime health outcomes) to compare the cost-effectiveness of CXL versus conventional management with PKP. Lifetime costs and QALYs for CXL were estimated to be Can$5530 (Can$4512, discounted) and 50.12 QALYs (16.42 QALYs, discounted). Lifetime costs and QALYs for conventional management with PKP were Can$2675 (Can$1508, discounted) and 48.93 QALYs (16.09 QALYs, discounted). The discounted ICER comparing CXL to conventional management was Can$9090/QALY gained. Sensitivity analyses revealed that in general, parameter variations did not influence the cost-effectiveness of CXL. CXL is cost-effective compared with conventional management with PKP in the treatment of keratoconus. Our ICER of Can$9090/QALY falls well below the range of Can$20 000 to Can$100 000/QALY and below US$50 000/QALY, thresholds generally used to evaluate the cost-effectiveness of health interventions in Canada and the United States. This study provides strong economic evidence for the cost-effectiveness of early CXL in keratoconus. Copyright © 2017 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
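As a quick check, the reported ICER can be reproduced from the discounted figures quoted in the abstract; the small discrepancy with the published Can$9090/QALY comes from rounding of the published inputs.

```python
# Discounted lifetime costs and QALYs from the abstract.
cost_cxl, cost_pkp = 4512.0, 1508.0      # Can$
qaly_cxl, qaly_pkp = 16.42, 16.09        # QALYs

icer = (cost_cxl - cost_pkp) / (qaly_cxl - qaly_pkp)
print(f"ICER ≈ Can${icer:,.0f}/QALY")    # ≈ Can$9,103/QALY with the rounded inputs
```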
Dynamic Characterization and Modeling of Potting Materials for Electronics Assemblies
NASA Astrophysics Data System (ADS)
Joshi, Vasant; Lee, Gilbert; Santiago, Jaime
2015-06-01
Prediction of the survivability of encapsulated electronic components subject to impact relies on accurate modeling. Both static and dynamic characterization of the encapsulation material is needed to generate a robust material model. The current focus is on potting materials to mitigate high-rate loading on impact. In this effort, the encapsulation scheme consists of layers of the polymeric material Sylgard 184 and Triggerbond Epoxy-20-3001. Experiments conducted to characterize the materials include conventional tension and compression tests, Hopkinson bar tests, dynamic material analyzer (DMA) measurements, and non-conventional accelerometer-based resonance tests for obtaining high-frequency data. For an ideal material, the data can be fitted to the Williams-Landel-Ferry (WLF) model. A new temperature-time shift (TTS) macro was written to compare the idealized temperature shift factor (WLF model) with experimental incremental shift factors. Deviations can be observed by comparing the experimental data with the model fit to determine the actual material behavior. Similarly, another macro, written to obtain Ogden model parameters from Hopkinson bar tests, indicates deviations from the experimental high-strain-rate data. In this paper, experimental results for the different materials used for mitigating impact, together with ways to combine data from resonance, DMA, and Hopkinson bar tests and with modeling refinements, will be presented.
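The idealized temperature-time shift referred to above is usually the Williams-Landel-Ferry form log10(aT) = -C1(T - Tref)/(C2 + T - Tref). The sketch below evaluates it with the often-quoted "universal" constants; constants fitted to the Sylgard 184 or epoxy DMA data would replace them, and deviations of the measured incremental shift factors from this curve are what the TTS macro described above flags.

```python
import numpy as np

def wlf_shift_factor(T, T_ref, C1=17.44, C2=51.6):
    """Williams-Landel-Ferry time-temperature shift factor, log10(aT).
    C1 and C2 default to the commonly quoted 'universal' constants referenced
    to Tg; material-specific fitted values would normally replace them."""
    return -C1 * (T - T_ref) / (C2 + (T - T_ref))

# Idealized WLF curve at a few hypothetical test temperatures (°C).
T = np.array([-40.0, -20.0, 0.0, 25.0, 50.0])
print(wlf_shift_factor(T, T_ref=25.0))
```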
Ram Kumar Deo; Robert E. Froese; Michael J. Falkowski; Andrew T. Hudak
2016-01-01
The conventional approach to LiDAR-based forest inventory modeling depends on field sample data from fixed-radius plots (FRP). Because FRP sampling is cost intensive, combining variable-radius plot (VRP) sampling and LiDAR data has the potential to improve inventory efficiency. The overarching goal of this study was to evaluate the integration of LiDAR and VRP data....
NASA Astrophysics Data System (ADS)
Kusumo, B. H.; Sukartono, S.; Bustan, B.
2018-02-01
Measuring soil organic carbon (C) using conventional analysis is a tedious, time-consuming, and expensive procedure. A simpler procedure that is cheap and saves time is needed. Near infrared technology offers a rapid procedure, as it works on the soil spectral reflectance without any chemicals. The aim of this research is to test whether this technology is able to rapidly measure soil organic C in rice paddy fields. Soil samples were collected from rice paddy fields of Lombok Island, Indonesia, and the coordinates of the samples were recorded. Parts of the samples were analysed using conventional analysis (Walkley and Black), and other parts were scanned using near infrared spectroscopy (NIRS) for soil spectral collection. Partial least squares regression (PLSR) models were developed using the soil C data from conventional analysis and the soil spectral reflectance data. The models were moderately successful in measuring soil C in the rice paddy fields of Lombok Island. This shows that NIR technology can be further used to monitor C changes in rice paddy soil.
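A minimal PLSR calibration of the kind described above can be set up as in the sketch below. The spectra and reference carbon values are randomly generated placeholders, so the cross-validated statistics are meaningless except as a template for real NIRS and Walkley-Black data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Hypothetical data: rows are soil samples, columns are NIR reflectance bands;
# y is organic C (%) from the Walkley-Black reference analysis.
rng = np.random.default_rng(0)
spectra = rng.random((120, 200))
soil_c = rng.uniform(0.5, 3.0, 120)

pls = PLSRegression(n_components=8)
predicted = cross_val_predict(pls, spectra, soil_c, cv=10).ravel()

rmse = float(np.sqrt(np.mean((predicted - soil_c) ** 2)))
r2 = float(np.corrcoef(predicted, soil_c)[0, 1] ** 2)
print(f"cross-validated RMSE = {rmse:.2f} %C, r^2 = {r2:.2f}")
```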
An investigation on the fuel savings potential of hybrid hydraulic refuse collection vehicles.
Bender, Frank A; Bosse, Thomas; Sawodny, Oliver
2014-09-01
Refuse trucks play an important role in the waste collection process. Due to their typical driving cycle, these vehicles are characterized by large fuel consumption, which strongly affects the overall waste disposal costs. Hybrid hydraulic refuse vehicles offer an interesting alternative to conventional diesel trucks, because they are able to recuperate, store and reuse braking energy. However, the expected fuel savings can vary strongly depending on the driving cycle and the operational mode. Therefore, in order to assess the possible fuel savings, a typical driving cycle was measured in a conventional vehicle run by the waste authority of the City of Stuttgart, and a dynamical model of the considered vehicle was built up. Based on the measured driving cycle and the vehicle model including the hybrid powertrain components, simulations for both the conventional and the hybrid vehicle were performed. Fuel consumption results that indicate savings of about 20% are presented and analyzed in order to evaluate the benefit of hybrid hydraulic vehicles used for refuse collection. Copyright © 2014 Elsevier Ltd. All rights reserved.
Lightweighting Impacts on Fuel Economy, Cost, and Component Losses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brooker, A. D.; Ward, J.; Wang, L.
2013-01-01
The Future Automotive Systems Technology Simulator (FASTSim) is the U.S. Department of Energy's high-level vehicle powertrain model developed at the National Renewable Energy Laboratory. It uses a time versus speed drive cycle to estimate the powertrain forces required to meet the cycle. It simulates the major vehicle powertrain components and their losses. It includes a cost model based on component sizing and fuel prices. FASTSim simulated different levels of lightweighting for four different powertrains: a conventional gasoline engine vehicle, a hybrid electric vehicle (HEV), a plug-in hybrid electric vehicle (PHEV), and a battery electric vehicle (EV). Weight reductions impacted the conventional vehicle's efficiency more than the HEV, PHEV and EV. Although lightweighting impacted the advanced vehicles' efficiency less, it reduced component cost and overall costs more. The PHEV and EV are less cost effective than the conventional vehicle and HEV using current battery costs. Assuming the DOE's battery cost target of $100/kWh, however, the PHEV attained similar cost and lightweighting benefits. Generally, lightweighting was cost effective when it cost less than $6/kg of mass eliminated.
Maccarini, Alessandro; Wetter, Michael; Afshari, Alireza; ...
2016-10-31
This paper analyzes the performance of a novel two-pipe system that operates one water loop to simultaneously provide space heating and cooling with a water supply temperature of around 22 °C. To analyze the energy performance of the system, simulation-based research was conducted. The two-pipe system was modelled using the equation-based Modelica modeling language in Dymola. A typical office building model was considered as the case study. Simulations were run for two construction sets of the building envelope and two conditions related to inter-zone air flows. To calculate energy savings, a conventional four-pipe system was modelled and used for comparison. The conventional system had two separate water loops for heating and cooling with supply temperatures of 45 °C and 14 °C, respectively. Simulation results showed that the two-pipe system was able to use less energy than the four-pipe system thanks to three effects: useful heat transfer from warm to cold zones, higher free cooling potential and higher efficiency of the heat pump. In particular, the two-pipe system used approximately between 12% and 18% less total annual primary energy than the four-pipe system, depending on the simulation case considered.
NASA Astrophysics Data System (ADS)
Gou, Jun; Lee, Anson; Pyko, Jan
2014-10-01
The cranking and charging processes of a VRLA battery during stop-start cycling in micro-hybrid applications were simulated by one dimensional mathematical modeling, to study the formation and distribution of lead sulfate across the cell and analyze the resulting effect on battery aging. The battery focused on in this study represents a conventional VRLA battery without any carbon additives in the electrodes or carbon-based electrodes. The modeling results were validated against experimental data and used to analyze the "sulfation" of negative electrodes - the common failure mode of lead acid batteries under high-rate partial state of charge (HRPSoC) cycling. The analyses were based on two aging mechanisms proposed in previous studies and the predictions showed consistency with the previous teardown observations that the sulfate formed at the negative interface is more difficult to be converted back than anywhere else in the electrodes. The impact of cranking pulses during stop-start cycling on current density and the corresponding sulfate layer production was estimated. The effects of some critical design parameters on sulfate formation, distribution and aging over cycling were investigated, which provided guidelines for developing models and designing of VRLA batteries in micro-hybrid applications.
Cryogenic adsorption of nitrogen on activated carbon: Experiment and modeling
NASA Astrophysics Data System (ADS)
Zou, Long-Hui; Liu, Hui-Ming; Gong, Ling-Hui
2018-03-01
A cryo-sorption device was built based on a commercial gas sorption analyzer, with its sample chamber connected to the second stage of a Gifford-McMahon (GM) cryocooler (SUMITOMO Corporation), which could provide operating temperatures ranging from 4.5 K to 300 K. Nitrogen adsorption isotherms from 95 to 160 K were obtained by the volumetric method on PICATIF activated carbon. The isosteric heat of adsorption was calculated using the Clausius-Clapeyron equation and was around 8 kJ/mol. Conventional isotherm models and an artificial neural network (ANN) were applied to analyze the adsorption data; the dual-site Langmuir and Toth equations turned out to be the most suitable empirical isotherm models. Adsorption equilibrium data at some temperatures were used to train the neural network, and the rest were used for validation and prediction. The accuracy of the ANN prediction increased with the number of hidden layers, was within ±5% for the three-hidden-layer ANN, and showed better performance than the conventional isotherm models. Considering the large time consumption and complexity of adsorption experiments, the ANN method can be applied to obtain more adsorption data based on the already known experimental data.
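The isosteric heat quoted above follows from the Clausius-Clapeyron relation at constant loading, ln P = -q_st/(R T) + const, so the slope of ln P versus 1/T gives -q_st/R. In the sketch below the equilibrium pressures at a fixed loading are hypothetical, chosen so the fitted slope returns a value near the reported ~8 kJ/mol.

```python
import numpy as np

R = 8.314  # J/(mol K)

def isosteric_heat(temps_K, pressures_at_loading_Pa):
    """Isosteric heat of adsorption from the slope of ln P vs 1/T at fixed loading."""
    inv_T = 1.0 / np.asarray(temps_K, float)
    ln_P = np.log(np.asarray(pressures_at_loading_Pa, float))
    slope = np.polyfit(inv_T, ln_P, 1)[0]
    return -slope * R          # J/mol

# Hypothetical equilibrium pressures at one fixed N2 loading on the carbon.
temps = [95.0, 110.0, 125.0, 140.0]
pressures = [1.2e2, 4.8e2, 1.4e3, 3.1e3]
print(f"q_st ≈ {isosteric_heat(temps, pressures) / 1000:.1f} kJ/mol")
```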
Energy Optimization for a Weak Hybrid Power System of an Automobile Exhaust Thermoelectric Generator
NASA Astrophysics Data System (ADS)
Fang, Wei; Quan, Shuhai; Xie, Changjun; Tang, Xinfeng; Ran, Bin; Jiao, Yatian
2017-11-01
An integrated starter generator (ISG)-type hybrid electric vehicle (HEV) scheme is proposed based on the automobile exhaust thermoelectric generator (AETEG). An eddy current dynamometer is used to simulate the vehicle's dynamic cycle. A weak ISG hybrid bench test system is constructed to test the 48 V output from the power supply system, which is based on engine exhaust-based heat power generation. The thermoelectric power generation-based system must ultimately be tested when integrated into the ISG weak hybrid mixed power system. The test process is divided into two steps: comprehensive simulation and vehicle-based testing. The system's dynamic process is simulated for both conventional and thermoelectric powers, and the dynamic running process comprises four stages: starting, acceleration, cruising and braking. The quantity of fuel available and battery pack energy, which are used as target vehicle energy functions for comparison with conventional systems, are simplified into a single energy target function, and the battery pack's output current is used as the control variable in the thermoelectric hybrid energy optimization model. The system's optimal battery pack output current function is resolved when its dynamic operating process is considered as part of the hybrid thermoelectric power generation system. In the experiments, the system bench is tested using conventional power and hybrid thermoelectric power for the four dynamic operation stages. The optimal battery pack curve is calculated by functional analysis. In the vehicle, a power control unit is used to control the battery pack's output current and minimize energy consumption. Data analysis shows that the fuel economy of the hybrid power system under European Driving Cycle conditions is improved by 14.7% when compared with conventional systems.
Hydrodynamic Characteristics of a Low-drag, Planing-tail Flying-boat Hull
NASA Technical Reports Server (NTRS)
Suydam, Henry B
1948-01-01
The hydrodynamic characteristics of a flying-boat incorporating a low-drag, planing-tail hull were determined from model tests made in Langley tank number 2 and compared with tests of the same flying boat incorporating a conventional-type hull. The planing-tail model, with which stable take-offs were possible for a large range of elevator positions at all center-of-gravity locations tested, had more take-off stability than the conventional model. No upper-limit porpoising was encountered by the planing-tail model. The maximum changes in rise during landings were lower for the planing-tail model than for the conventional model at most contact trims, an indication of improved landing stability for the planing-tail model. The hydrodynamic resistance of the planing-tail hull was lower than the conventional hull at all speeds, and the load-resistance ratio was higher for the planing-tail hull, being especially high at the hump. The static trim of the planing-tail hull was much higher than the conventional hull, but the variation of trim with speed during take-off was smaller.
DOT National Transportation Integrated Search
2001-06-30
Freight movements within large metropolitan areas are much less studied and analyzed than personal travel. This casts doubt on the results of much conventional travel demand modeling and planning. With so much traffic overlooked, how plausible are th...
Language Loss and the Crisis of Cognition: Between Socio- and Psycholinguistics.
ERIC Educational Resources Information Center
Kenny, K. Dallas
A non-structural model is proposed for quantifying and analyzing the dynamics of language attrition, particularly among immigrants in a second language environment, based on examination of disfluencies (hesitations, errors, and repairs). The first chapter discusses limitations of the conventional synchronic textual approach to analyzing language…
Woodward, Alexander; Froese, Tom; Ikegami, Takashi
2015-02-01
The state space of a conventional Hopfield network typically exhibits many different attractors of which only a small subset satisfies constraints between neurons in a globally optimal fashion. It has recently been demonstrated that combining Hebbian learning with occasional alterations of normal neural states avoids this problem by means of self-organized enlargement of the best basins of attraction. However, so far it is not clear to what extent this process of self-optimization is also operative in real brains. Here we demonstrate that it can be transferred to more biologically plausible neural networks by implementing a self-optimizing spiking neural network model. In addition, by using this spiking neural network to emulate a Hopfield network with Hebbian learning, we attempt to make a connection between rate-based and temporal coding based neural systems. Although further work is required to make this model more realistic, it already suggests that the efficacy of the self-optimizing process is independent from the simplifying assumptions of a conventional Hopfield network. We also discuss natural and cultural processes that could be responsible for occasional alteration of neural firing patterns in actual brains. Copyright © 2014 Elsevier Ltd. All rights reserved.
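A minimal rate-based sketch of the self-optimization idea referenced above (not the spiking implementation of the paper): a random symmetric weight matrix defines the constraints, the network is repeatedly reset to random states (the "occasional alterations"), and Hebbian learning reinforces each attractor it settles into; the energy with respect to the original constraints indicates whether the enlarged basins correspond to better solutions. All sizes and rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                                   # number of neurons
W = rng.normal(size=(N, N))              # random constraint network
W = (W + W.T) / 2.0                      # symmetric weights
np.fill_diagonal(W, 0.0)                 # no self-coupling

def converge(s, W, steps=400):
    """Asynchronous updates with a fixed budget, settling toward an attractor."""
    for _ in range(steps):
        i = rng.integers(N)
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

def energy(s, W):
    return -0.5 * s @ W @ s

W_learn = W.copy()
alpha = 0.01 / N                         # small Hebbian learning rate
for epoch in range(500):
    s = rng.choice([-1, 1], size=N)      # occasional alteration: random restart
    s = converge(s, W_learn)
    W_learn += alpha * np.outer(s, s)    # Hebbian reinforcement of the visited attractor
    np.fill_diagonal(W_learn, 0.0)

# Evaluate on the ORIGINAL constraints: self-optimized dynamics should settle
# into lower-energy (better constraint-satisfying) states on average.
orig = np.mean([energy(converge(rng.choice([-1, 1], N), W), W) for _ in range(50)])
opt = np.mean([energy(converge(rng.choice([-1, 1], N), W_learn), W) for _ in range(50)])
print(f"mean energy w.r.t. original constraints: {orig:.1f} -> {opt:.1f}")
```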
Wang, Xin; Wu, Linhui; Yi, Xi; Zhang, Yanqi; Zhang, Limin; Zhao, Huijuan; Gao, Feng
2015-01-01
Due to both the physiological and morphological differences in the vascularization between healthy and diseased tissues, pharmacokinetic diffuse fluorescence tomography (DFT) can provide contrast-enhanced and comprehensive information for tumor diagnosis and staging. In this regime, the extended Kalman filtering (EKF) based method shows numerous advantages including accurate modeling, online estimation of multiparameters, and universal applicability to any optical fluorophore. Nevertheless the performance of the conventional EKF highly hinges on the exact and inaccessible prior knowledge about the initial values. To address the above issues, an adaptive-EKF scheme is proposed based on a two-compartmental model for the enhancement, which utilizes a variable forgetting-factor to compensate the inaccuracy of the initial states and emphasize the effect of the current data. It is demonstrated using two-dimensional simulative investigations on a circular domain that the proposed adaptive-EKF can obtain preferable estimation of the pharmacokinetic-rates to the conventional-EKF and the enhanced-EKF in terms of quantitativeness, noise robustness, and initialization independence. Further three-dimensional numerical experiments on a digital mouse model validate the efficacy of the method as applied in realistic biological systems.
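For readers unfamiliar with the forgetting-factor idea, here is a generic EKF predict/update step in which the predicted covariance is inflated by a constant factor so that recent data are emphasized and a poor initial state is forgotten more quickly. The paper's adaptive EKF uses a variable forgetting factor inside a two-compartmental pharmacokinetic model, so this is only a structural sketch; all function arguments are placeholders.

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R, lam=0.98):
    """One predict/update cycle of an EKF with a forgetting factor `lam`.
    Dividing the predicted covariance by lam (< 1) keeps the filter
    responsive to current measurements and less tied to the initial guess."""
    # predict
    x_pred = f(x)
    F = F_jac(x)
    P_pred = (F @ P @ F.T) / lam + Q
    # update
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```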
Soliton communication lines based on spectrally efficient modulation formats
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yushko, O V; Redyuk, A A
2014-06-30
We report the results of mathematical modelling of optical-signal propagation in soliton fibre-optic communication lines (FOCLs) based on spectrally efficient signal modulation formats. We have studied the influence of spontaneous emission noise, nonlinear distortions and FOCL length on the data transmission quality. We have compared the characteristics of a received optical signal for soliton and conventional dispersion compensating FOCLs. It is shown that in the presence of strong nonlinearity long-haul soliton FOCLs provide a higher data transmission performance, as well as allow higher order modulation formats to be used as compared to conventional communication lines. In the context of a coherent data transmission, soliton FOCLs allow the use of phase modulation with many levels, thereby increasing the spectral efficiency of the communication line.
Ooi, Soo Liang; McMullen, Debbie; Golombick, Terry; Nut, Dipl; Pak, Sok Cheon
2018-06-01
Conventional cancer treatment, including surgery, chemotherapy, and radiotherapy, may not be sufficient to eradicate all malignant cells and prevent recurrence. Intensive treatment often leads to a depressed immune system, drug resistance, and toxicity, hampering the treatment outcomes. BioBran/MGN-3 Arabinoxylan is a standardized arabinoxylan concentrate which has been proposed as a plant-based immunomodulator that can restore the tumor-induced disturbance of the natural immune system, including natural killer cell activity to fight cancer, complementing conventional therapies. This review comprehensively examines the available evidence on the effects and efficacy of MGN-3 as a complementary therapy to conventional cancer treatment. Journal databases and gray literature were searched systematically for primary studies reporting the effects of MGN-3 on cancer and cancer treatment. Thirty full-text articles and two conference abstracts were included in this review. MGN-3 has been shown to possess immunomodulating anticancer effects and can work synergistically with chemotherapeutic agents in vitro. In murine models, MGN-3 has been shown to act against carcinogenic agents and inhibit tumor growth, either by itself or in combination with other anticancer compounds. Fourteen successful MGN-3-treated clinical cases were found. Eleven clinical studies, including 5 nonrandomized pre-post intervention studies and 6 randomized controlled trials (RCTs), were located. Reported effects include enhanced immunoprofile, reduced side effects, and improved treatment outcomes; one RCT established significantly increased survival rates. There are no reports of adverse events associated with MGN-3. Most of the clinical trials are small studies of short duration. There is sufficient evidence suggesting MGN-3 to be an effective immunomodulator that can complement conventional cancer treatment. However, more well-designed RCTs on MGN-3 are needed to strengthen the evidence base.
Full waveform inversion in the frequency domain using classified time-domain residual wavefields
NASA Astrophysics Data System (ADS)
Son, Woohyun; Koo, Nam-Hyung; Kim, Byoung-Yeop; Lee, Ho-Young; Joo, Yonghwan
2017-04-01
We perform the acoustic full waveform inversion in the frequency domain using residual wavefields that have been separated in the time domain. We sort the residual wavefields in the time domain according to the order of absolute amplitudes. Then, the residual wavefields are separated into several groups in the time domain. To analyze the characteristics of the residual wavefields, we compare the residual wavefields of the conventional method with those of our residual separation method. From the residual analysis, the amplitude spectrum obtained from the trace before separation appears to have little energy in the lower frequency bands. However, the amplitude spectrum obtained from our strategy is regularized by the separation process, which means that the low-frequency components are emphasized. Therefore, our method helps to emphasize low-frequency components of residual wavefields. Then, we generate the frequency-domain residual wavefields by taking the Fourier transform of the separated time-domain residual wavefields. With these wavefields, we perform the gradient-based full waveform inversion in the frequency domain using the back-propagation technique. Through a comparison of gradient directions, we confirm that our separation method can better describe the sub-salt image than the conventional approach. The proposed method is tested on the SEG/EAGE salt-dome model. The inversion results show that our algorithm is better than the conventional gradient-based waveform inversion in the frequency domain, especially for deeper parts of the velocity model.
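The following sketch illustrates one plausible reading of the separation step: samples of a time-domain residual trace are ranked by absolute amplitude, split into groups, and each group is transformed to the frequency domain. The grouping rule and the toy trace are assumptions for illustration; the actual sorting and separation details follow the paper.

```python
import numpy as np

def separate_residuals(residual, n_groups=4):
    """Split a time-domain residual trace into groups ordered by absolute
    amplitude, then return the frequency-domain wavefield of each group.
    Zeroed samples keep each group on the original time axis."""
    order = np.argsort(np.abs(residual))[::-1]      # largest amplitudes first
    groups = np.array_split(order, n_groups)
    spectra = []
    for idx in groups:
        part = np.zeros_like(residual)
        part[idx] = residual[idx]                   # keep only this amplitude class
        spectra.append(np.fft.rfft(part))           # frequency-domain residual
    return spectra

# toy residual trace: low-frequency background plus a strong late arrival
t = np.linspace(0.0, 2.0, 1000)
res = 0.1 * np.sin(2 * np.pi * 3 * t) + np.exp(-((t - 1.5) / 0.05) ** 2)
spectra = separate_residuals(res)
```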
NASA Astrophysics Data System (ADS)
Kjærgaard, Thomas; Baudin, Pablo; Bykov, Dmytro; Eriksen, Janus Juul; Ettenhuber, Patrick; Kristensen, Kasper; Larkin, Jeff; Liakh, Dmitry; Pawłowski, Filip; Vose, Aaron; Wang, Yang Min; Jørgensen, Poul
2017-03-01
We present a scalable cross-platform hybrid MPI/OpenMP/OpenACC implementation of the Divide-Expand-Consolidate (DEC) formalism with portable performance on heterogeneous HPC architectures. The Divide-Expand-Consolidate formalism is designed to reduce the steep computational scaling of conventional many-body methods employed in electronic structure theory to linear scaling, while providing a simple mechanism for controlling the error introduced by this approximation. Our massively parallel implementation of this general scheme has three levels of parallelism, being a hybrid of the loosely coupled task-based parallelization approach and the conventional MPI +X programming model, where X is either OpenMP or OpenACC. We demonstrate strong and weak scalability of this implementation on heterogeneous HPC systems, namely on the GPU-based Cray XK7 Titan supercomputer at the Oak Ridge National Laboratory. Using the "resolution of the identity second-order Møller-Plesset perturbation theory" (RI-MP2) as the physical model for simulating correlated electron motion, the linear-scaling DEC implementation is applied to 1-aza-adamantane-trione (AAT) supramolecular wires containing up to 40 monomers (2440 atoms, 6800 correlated electrons, 24 440 basis functions and 91 280 auxiliary functions). This represents the largest molecular system treated at the MP2 level of theory, demonstrating an efficient removal of the scaling wall pertinent to conventional quantum many-body methods.
NASA Astrophysics Data System (ADS)
Nisa, I. M.
2018-04-01
Mathematical communication ability is one of the goals of learning mathematics that students are expected to master. However, observations in the field show that the mathematical communication ability of grade XI IPA students at SMA Negeri 14 Padang has not developed optimally, as is evident from low scores on mathematical communication tests. One contributing factor is instruction that has not fully facilitated the development of students' mathematical communication skills. Improving these skills therefore requires an appropriate model for the learning activities, and one candidate is the Problem Based Learning (PBL) model. The purpose of this study is to determine whether the mathematical communication ability of students taught with the PBL model is better than that of students taught with conventional instruction in class XI IPA of SMAN 14 Padang. The study is a quasi-experiment with a Randomized Group Only Design. The population is the grade XI IPA students of SMAN 14 Padang, with classes XI IPA 3 and XI IPA 4 as the sample. Data were collected using an essay-format mathematical communication test, and the hypothesis was tested with the Mann-Whitney U test. Based on the data analysis, it can be concluded that, at α = 0.05, the mathematical communication ability of students taught with the PBL model is better than that of students taught with conventional instruction in class XI IPA of SMA Negeri 14 Padang. This indicates that the PBL learning model has an effect on students' mathematical communication ability.
Characterization of Adrenal Adenoma by Gaussian Model-Based Algorithm.
Hsu, Larson D; Wang, Carolyn L; Clark, Toshimasa J
2016-01-01
We confirmed that computed tomography (CT) attenuation values of pixels in an adrenal nodule approximate a Gaussian distribution. Building on this and the previously described histogram analysis method, we created an algorithm that uses mean and standard deviation to estimate the percentage of negative attenuation pixels in an adrenal nodule, thereby allowing differentiation of adenomas and nonadenomas. The institutional review board approved both components of this study in which we developed and then validated our criteria. In the first, we retrospectively assessed CT attenuation values of adrenal nodules for normality using a 2-sample Kolmogorov-Smirnov test. In the second, we evaluated a separate cohort of patients with adrenal nodules using both the conventional 10 HU mean attenuation method and our Gaussian model-based algorithm. We compared the sensitivities of the 2 methods using McNemar's test. A total of 183 of 185 observations (98.9%) demonstrated a Gaussian distribution in adrenal nodule pixel attenuation values. The sensitivity and specificity of our Gaussian model-based algorithm for identifying adrenal adenoma were 86.1% and 83.3%, respectively. The sensitivity and specificity of the mean attenuation method were 53.2% and 94.4%, respectively. The sensitivities of the 2 methods were significantly different (P value < 0.001). In conclusion, the CT attenuation values within an adrenal nodule follow a Gaussian distribution. Our Gaussian model-based algorithm can characterize adrenal adenomas with higher sensitivity than the conventional mean attenuation method. The use of our algorithm, which does not require additional postprocessing, may increase workflow efficiency and reduce unnecessary workup of benign nodules. Copyright © 2016 Elsevier Inc. All rights reserved.
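The core of the algorithm can be sketched as a one-line use of the Gaussian assumption: given the ROI mean and standard deviation, the fraction of negative-attenuation pixels is the normal CDF evaluated at 0 HU. The nodule statistics and any decision threshold below are hypothetical, not the study's values.

```python
from scipy.stats import norm

def negative_pixel_fraction(mean_hu, sd_hu):
    """Estimated fraction of pixels below 0 HU, assuming the nodule's
    attenuation values are Gaussian with the given mean and SD."""
    return norm.cdf(0.0, loc=mean_hu, scale=sd_hu)

# hypothetical nodule statistics from a routine ROI measurement
frac = negative_pixel_fraction(mean_hu=18.0, sd_hu=22.0)
print(f"estimated negative-attenuation pixels: {frac:.1%}")
# a threshold on this fraction (not specified in the abstract) would then
# separate lipid-rich adenomas from nonadenomas
```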
Simulation of large-scale rule-based models
Colvin, Joshua; Monine, Michael I.; Faeder, James R.; Hlavacek, William S.; Von Hoff, Daniel D.; Posner, Richard G.
2009-01-01
Motivation: Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. Results: DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogenous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine if a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language which is useful for modeling protein–protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of StochSim. DYNSTOC differs from StochSim by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. Availability: DYNSTOC is free for non-commercial use. The C source code, supporting documentation and example input files are available at http://public.tgen.org/dynstoc/. Contact: dynstoc@tgen.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19213740
NASA Astrophysics Data System (ADS)
Hu, Kun; Zhu, Qi-zhi; Chen, Liang; Shao, Jian-fu; Liu, Jian
2018-06-01
As confining pressure increases, crystalline rocks of moderate porosity usually undergo a transition in failure mode from localized brittle fracture to diffused damage and ductile failure. This transition has been widely reported experimentally for several decades; however, satisfactory modeling is still lacking. The present paper aims at modeling the brittle-ductile transition process of rocks under conventional triaxial compression. Based on quantitative analyses of experimental results, it is found that there is a quite satisfactory linearity between the axial inelastic strain at failure and the confining pressure prescribed. A micromechanics-based frictional damage model is then formulated using an associated plastic flow rule and a strain energy release rate-based damage criterion. The analytical solution to the strong plasticity-damage coupling problem is provided and applied to simulate the nonlinear mechanical behaviors of Tennessee marble, Indiana limestone and Jinping marble, each presenting a brittle-ductile transition in stress-strain curves.
Spatial modeling in ecology: the flexibility of eigenfunction spatial analyses.
Griffith, Daniel A; Peres-Neto, Pedro R
2006-10-01
Recently, analytical approaches based on the eigenfunctions of spatial configuration matrices have been proposed in order to consider explicitly spatial predictors. The present study demonstrates the usefulness of eigenfunctions in spatial modeling applied to ecological problems and shows equivalencies of and differences between the two current implementations of this methodology. The two approaches in this category are the distance-based (DB) eigenvector maps proposed by P. Legendre and his colleagues, and spatial filtering based upon geographic connectivity matrices (i.e., topology-based; CB) developed by D. A. Griffith and his colleagues. In both cases, the goal is to create spatial predictors that can be easily incorporated into conventional regression models. One important advantage of these two approaches over any other spatial approach is that they provide a flexible tool that allows the full range of general and generalized linear modeling theory to be applied to ecological and geographical problems in the presence of nonzero spatial autocorrelation.
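A small sketch of the topology-based variant: eigenvectors of the doubly centred connectivity matrix serve as spatial predictors that can be appended to an ordinary regression. The five-site rook-neighbour example is illustrative only.

```python
import numpy as np

def spatial_eigenvectors(C):
    """Eigenvectors of the doubly centred connectivity matrix, usable as
    spatial predictors in a conventional regression model."""
    n = C.shape[0]
    M = np.eye(n) - np.ones((n, n)) / n        # centring projector
    vals, vecs = np.linalg.eigh(M @ C @ M)
    order = np.argsort(vals)[::-1]             # large positive eigenvalues ~ broad-scale patterns
    return vals[order], vecs[:, order]

# toy example: 5 sites on a line with rook-type neighbours
C = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
vals, E = spatial_eigenvectors(C)
# columns of E with large eigenvalues can now enter a model as covariates,
# e.g. y ~ environment + E[:, :k]
```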
Charpentier, R.R.; Klett, T.R.
2005-01-01
During the last 30 years, the methodology for assessment of undiscovered conventional oil and gas resources used by the U.S. Geological Survey has undergone considerable change. This evolution has been based on five major principles. First, the U.S. Geological Survey has responsibility for a wide range of U.S. and world assessments and requires a robust methodology suitable for immaturely explored as well as maturely explored areas. Second, the assessments should be based on as comprehensive a set of geological and exploration history data as possible. Third, the perils of methods that solely use statistical methods without geological analysis are recognized. Fourth, the methodology and course of the assessment should be documented as transparently as possible, within the limits imposed by the inevitable use of subjective judgement. Fifth, the multiple uses of the assessments require a continuing effort to provide the documentation in such ways as to increase utility to the many types of users. Undiscovered conventional oil and gas resources are those recoverable volumes in undiscovered, discrete, conventional structural or stratigraphic traps. The USGS 2000 methodology for these resources is based on a framework of assessing numbers and sizes of undiscovered oil and gas accumulations and the associated risks. The input is standardized on a form termed the Seventh Approximation Data Form for Conventional Assessment Units. Volumes of resource are then calculated using a Monte Carlo program named Emc2, but an alternative analytic (non-Monte Carlo) program named ASSESS also can be used. The resource assessment methodology continues to change. Accumulation-size distributions are being examined to determine how sensitive the results are to size-distribution assumptions. The resource assessment output is changing to provide better applicability for economic analysis. The separate methodology for assessing continuous (unconventional) resources also has been evolving. Further studies of the relationship between geologic models of conventional and continuous resources will likely impact the respective resource assessment methodologies. © 2005 International Association for Mathematical Geology.
Mesoscopic Model — Advanced Simulation of Microforming Processes
NASA Astrophysics Data System (ADS)
Geißdörfer, Stefan; Engel, Ulf; Geiger, Manfred
2007-04-01
Continued miniaturization in many fields of forming technology implies the need for a better understanding of the effects occurring while scaling down from the conventional macroscopic scale to the microscale. At the microscale, the material can no longer be regarded as a homogeneous continuum because of the presence of only a few grains in the deformation zone. This leads to a change in the material behaviour, resulting, among other effects, in a large scatter of the forming results. A correlation between the integral flow stress of the workpiece and the scatter of the process factors on the one hand and the mean grain size and its standard deviation on the other hand has been observed in experiments. The conventional FE simulation of scaled-down processes is not able to account for the observed size effects, such as the actual reduction of the flow stress, the increasing scatter of the process factors and a local material flow different from that obtained in the case of macroparts. For that reason, a new simulation model has been developed taking all these size effects into account. The present paper deals with the theoretical background of the new mesoscopic model and its characteristics, such as synthetic grain structure generation and the calculation of micro material properties based on conventional material properties. The simulation model is verified by carrying out various experiments with different mean grain sizes and grain structures but the same geometrical dimensions of the workpiece.
Development of a GCR Event-based Risk Model
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Ponomarev, Artem L.; Plante, Ianik; Carra, Claudio; Kim, Myung-Hee
2009-01-01
A goal at NASA is to develop event-based systems biology models of space radiation risks that will replace the current dose-based empirical models. Complex and varied biochemical signaling processes transmit the initial DNA and oxidative damage from space radiation into cellular and tissue responses. Mis-repaired damage or aberrant signals can lead to genomic instability, persistent oxidative stress or inflammation, which are causative of cancer and CNS risks. Protective signaling through adaptive responses or cell repopulation is also possible. We are developing a computational simulation approach to galactic cosmic ray (GCR) effects that is based on biological events rather than average quantities such as dose, fluence, or dose equivalent. The goal of the GCR Event-based Risk Model (GERMcode) is to provide a simulation tool to describe and integrate physical and biological events into stochastic models of space radiation risks. We used the quantum multiple scattering model of heavy ion fragmentation (QMSFRG) and well-known energy loss processes to develop a stochastic Monte-Carlo based model of GCR transport in spacecraft shielding and tissue. We validated the accuracy of the model by comparing to physical data from the NASA Space Radiation Laboratory (NSRL). Our simulation approach allows us to time-tag each GCR proton or heavy ion interaction in tissue including correlated secondary ions often of high multiplicity. Conventional space radiation risk assessment employs average quantities, and assumes linearity and additivity of responses over the complete range of GCR charge and energies. To investigate possible deviations from these assumptions, we studied several biological response pathway models of varying induction and relaxation times including the ATM, TGF-Smad, and WNT signaling pathways. We then considered small volumes of interacting cells and the time-dependent biophysical events that the GCR would produce within these tissue volumes to estimate how GCR event rates mapped to biological signaling induction and relaxation times. We considered several hypotheses related to signaling and cancer risk, and then performed simulations for conditions where aberrant or adaptive signaling would occur on long-duration space missions. Our results do not support the conventional assumptions of dose, linearity and additivity. A discussion on how event-based systems biology models, which focus on biological signaling as the mechanism to propagate damage or adaptation, can be further developed for cancer and CNS space radiation risk projections is given.
Efficient calibration for imperfect computer models
Tuo, Rui; Wu, C. F. Jeff
2015-12-01
Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend this study to calibration problems with stochastic physical data. We propose a novel method, called L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied. Theoretical analysis shows that it is consistent but not efficient. Here, numerical examples show that the proposed method outperforms the existing ones.
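The gist of an L2-type calibration can be sketched as follows: estimate the true process from the physical data with a nonparametric smoother, then choose the calibration parameter minimising the discretised L2 distance between the computer model and that estimate. The kernel smoother, toy model and data below are assumptions for illustration, not the authors' estimator.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def l2_calibrate(x_phys, y_phys, computer_model, x_grid, bandwidth=0.15):
    """Toy L2 calibration: smooth the physical data to estimate the true
    process, then pick the calibration parameter minimising the discretised
    L2 distance between the computer model and that estimate over x_grid."""
    def smooth(x0):                      # Nadaraya-Watson estimate of the true process
        w = np.exp(-0.5 * ((x_phys - x0) / bandwidth) ** 2)
        return np.sum(w * y_phys) / np.sum(w)
    zeta_hat = np.array([smooth(x0) for x0 in x_grid])
    loss = lambda theta: np.mean((zeta_hat - computer_model(x_grid, theta)) ** 2)
    return minimize_scalar(loss, bounds=(0.5, 4.0), method="bounded").x

# hypothetical imperfect computer model and noisy physical observations
model = lambda x, theta: np.sin(theta * x)
rng = np.random.default_rng(1)
x_obs = rng.uniform(0.0, 3.0, 80)
y_obs = np.sin(2.2 * x_obs) + 0.1 * x_obs + rng.normal(0.0, 0.05, 80)  # discrepancy + noise
theta_hat = l2_calibrate(x_obs, y_obs, model, np.linspace(0.0, 3.0, 200))
```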
Analogue based design of MMP-13 (Collagenase-3) inhibitors.
Sarma, J A R P; Rambabu, G; Srikanth, K; Raveendra, D; Vithal, M
2002-10-07
3D-QSAR studies using MFA and RSA methods were performed on a series of 39 MMP-13 inhibitors. The model developed by the MFA method has a cross-validated r² of 0.616, while its conventional r² value is 0.822. For the RSA model, the cross-validated and conventional r² values are 0.681 and 0.847, respectively. Both models indicate good internal as well as external predictive abilities. These models provide crucial information about the field descriptors for the design of potential inhibitors of MMP-13.
Mechanical and Thermal Analysis of Classical Functionally Graded Coated Beam
NASA Astrophysics Data System (ADS)
Toudehdehghan, Abdolreza; Mujibur Rahman, Md.; Tarlochan, Faris
2018-03-01
The governing equations of a classical rectangular two-layer coated beam subjected to thermal and uniformly distributed mechanical loads are derived using the principle of virtual displacements and Euler-Bernoulli beam theory (EBT). The aim of this paper was to analyze, in MATLAB, the static behavior of a clamped-clamped thin coated beam under thermo-mechanical load. Two coated composite models were considered: the first consists of a ceramic coating layer on a metal substrate (HC model), and the second of a Functionally Graded Material (FGM) coating layer on a metal substrate (FGC model). The results demonstrate the superiority of the FGC composite over the conventional coated composite: the stress level through the thickness at the interface of the coated beam is reduced for the FGC, although the deflection is observed to increase in return. The FGC concept could therefore serve various new engineering applications that require materials with properties well beyond those of conventional materials.
An Emotional ANN (EANN) approach to modeling rainfall-runoff process
NASA Astrophysics Data System (ADS)
Nourani, Vahid
2017-01-01
This paper presents the first hydrological implementation of the Emotional Artificial Neural Network (EANN), a new generation of Artificial Intelligence-based models, for daily rainfall-runoff (r-r) modeling of watersheds. Inspired by the neurophysiological structure of the brain, an EANN includes, in addition to conventional weights and biases, simulated emotional parameters aimed at improving the network learning process. An EANN trained by a modified version of the back-propagation (BP) algorithm was applied to single- and multi-step-ahead runoff forecasting for two watersheds with distinct climatic conditions. Also, to evaluate the ability of the EANN when trained on smaller training data sets, three data division strategies with different numbers of training samples were considered. The overall comparison of the r-r modeling results indicates that the EANN outperforms the conventional feed-forward neural network (FFNN) model by up to 13% and 34% in terms of the training and verification efficiency criteria, respectively. The superiority of the EANN over the classic ANN is due to its ability to recognize and distinguish dry (rainless) and wet (rainy) days using the hormonal parameters of the artificial emotional system.
Mapping urban environmental noise: a land use regression method.
Xie, Dan; Liu, Yi; Chen, Jining
2011-09-01
Forecasting and preventing urban noise pollution are major challenges in urban environmental management. Most existing efforts, including experiment-based models, statistical models, and noise mapping, however, have limited capacity to explain the association between urban growth and the corresponding noise change. Therefore, these conventional methods can hardly forecast urban noise for a given future development layout. This paper, for the first time, introduces a land use regression method, which has been used to simulate urban air quality for a decade, to construct an urban noise model (LUNOS) for Dalian Municipality, Northwest China. The LUNOS model describes noise as a dependent variable of the surrounding land-use areas via a regression function. The results suggest that a linear model fits the monitoring data better, and that the LUNOS outputs do not differ significantly when applied at different spatial scales. As LUNOS provides a better understanding of the association between land use and urban environmental noise than conventional methods, it can be regarded as a promising tool for noise prediction for planning purposes and an aid to smart decision-making.
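A land use regression of this kind reduces to an ordinary linear model whose predictors are the land-use areas around each monitoring site; the sketch below uses entirely hypothetical buffer areas and noise levels, and the class list is not the LUNOS specification.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training table: each monitoring site has the areas (m^2) of
# land-use classes within a buffer, plus the measured noise level (dB).
X = np.array([
    # road, commercial, residential, green
    [12000,  8000, 30000, 10000],
    [25000, 15000, 20000,  5000],
    [ 5000,  2000, 35000, 25000],
    [18000, 12000, 25000,  8000],
    [ 3000,  1000, 20000, 40000],
])
y = np.array([62.5, 68.0, 55.3, 65.1, 50.8])    # hypothetical measured levels

lur = LinearRegression().fit(X, y)               # LUNOS-style linear model
# predict noise for a planned layout with a new buffer composition
print(lur.predict([[20000, 10000, 22000, 9000]]))
```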
Ardoino, Ilaria; Lanzoni, Monica; Marano, Giuseppe; Boracchi, Patrizia; Sagrini, Elisabetta; Gianstefani, Alice; Piscaglia, Fabio; Biganzoli, Elia M
2017-04-01
The interpretation of regression model results can often benefit from the generation of nomograms, 'user friendly' graphical devices especially useful for assisting decision-making processes. However, in the case of multinomial regression models, whenever categorical responses with more than two classes are involved, nomograms cannot be drawn in the conventional way. Such difficulty in managing and interpreting the outcome often limits the use of multinomial regression in decision-making support. In the present paper, we illustrate the derivation of a non-conventional nomogram for multinomial regression models, intended to overcome this issue. Although it may appear less straightforward at first sight, the proposed methodology allows an easy interpretation of the results of multinomial regression models and makes them more accessible to clinicians and general practitioners as well. The development of a prediction model based on multinomial logistic regression, and of the pertinent graphical tool, is illustrated by means of an example involving the prediction of the extent of liver fibrosis in hepatitis C patients from routinely available markers.
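The underlying model that such a nomogram renders graphically is an ordinary multinomial logistic regression; a minimal sketch with simulated markers and a three-class outcome (standing in for fibrosis extent) is shown below. It does not reproduce the paper's graphical construction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical routine markers (two continuous values) and a 3-class outcome
# standing in for fibrosis extent (0 = mild, 1 = moderate, 2 = severe).
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 2))
y = ((X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.7, 150) > 0).astype(int)
     + (X[:, 0] > 1).astype(int))                 # classes 0, 1, 2

clf = LogisticRegression(max_iter=1000).fit(X, y)  # multinomial fit
probs = clf.predict_proba([[0.3, -0.2]])           # per-class probabilities
# a nomogram is a graphical rendering of exactly this linear-predictor ->
# probability mapping, with one axis per marker
```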
Evaluation of Statistical Methods for Modeling Historical Resource Production and Forecasting
NASA Astrophysics Data System (ADS)
Nanzad, Bolorchimeg
This master's thesis project consists of two parts. Part I of the project compares modeling of historical resource production and forecasting of future production trends using the logit/probit transform advocated by Rutledge (2011) with conventional Hubbert curve fitting, using global coal production as a case study. The conventional Hubbert/Gaussian method fits a curve to historical production data, whereas the logit/probit transform uses a linear fit to a subset of transformed production data. Within the errors and limitations inherent in this type of statistical modeling, these methods provide comparable results. That is, despite the apparent goodness-of-fit achievable using the logit/probit methodology, neither approach provides a significant advantage over the other in either explaining the observed data or making future projections. For mature production regions, those that have already substantially passed peak production, the results obtained by either method are closely comparable and reasonable, and estimates of ultimately recoverable resources obtained by either method are consistent with geologically estimated reserves. In contrast, for immature regions, estimates of ultimately recoverable resources generated by either of these alternative methods are unstable and thus need to be used with caution. Although the logit/probit transform generates a high quality of fit to historical production data, this approach provides no new information compared with conventional Gaussian or Hubbert-type models and may have the effect of masking the noise and/or instability in the data and the derived fits. In particular, production forecasts for immature or marginally mature production systems based on either method need to be regarded with considerable caution. Part II of the project investigates the utility of a novel alternative method for multicyclic Hubbert modeling, tentatively termed "cycle-jumping", wherein the overlap of multiple cycles is limited. The model is designed so that each cycle is described by the same three parameters as in the conventional multicyclic Hubbert model, and every two neighboring cycles are connected by a transition, described as a weighted coaddition of the two cycles and determined by three parameters: the transition year, the transition width, and a gamma weighting parameter. The cycle-jumping method provides a superior model compared with the conventional multicyclic Hubbert model and reflects historical production behavior more reasonably and practically, by better modeling the effects of technological transitions and socioeconomic factors that affect historical resource production behavior through explicitly considering the form of the transitions between production cycles.
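For reference, conventional single-cycle Hubbert fitting amounts to least-squares estimation of a logistic-derivative curve; the sketch below fits synthetic production data with scipy, and all numbers are illustrative rather than coal-production values from the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def hubbert(t, q_inf, b, t_peak):
    """Annual production from a single logistic (Hubbert) cycle with
    ultimately recoverable resource q_inf, steepness b and peak year t_peak."""
    e = np.exp(-b * (t - t_peak))
    return q_inf * b * e / (1.0 + e) ** 2

# hypothetical production history (year, annual output)
years = np.arange(1900, 2011)
true = hubbert(years, 280.0, 0.07, 1995.0)
prod = true * (1 + 0.05 * np.random.default_rng(2).normal(size=years.size))

popt, _ = curve_fit(hubbert, years, prod, p0=(300.0, 0.05, 2000.0))
q_inf_hat, b_hat, t_peak_hat = popt   # ultimately recoverable resource, steepness, peak year
```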
Economic benefits of safety-engineered sharp devices in Belgium - a budget impact model.
Hanmore, Emma; Maclaine, Grant; Garin, Fiona; Alonso, Alexander; Leroy, Nicolas; Ruff, Lewis
2013-11-25
Measures to protect healthcare workers where there is risk of injury or infection from medical sharps became mandatory in the European Union (EU) from May 2013. Our research objective was to estimate the net budget impact of introducing safety-engineered devices (SEDs) for prevention of needlestick injuries (NSIs) in a Belgian hospital. A 5-year incidence-based budget impact model was developed from the hospital inpatient perspective, comparing costs and outcomes with SEDs and prior-used conventional (non-safety) devices. The model accounts for device acquisition costs and costs of NSI management in 4 areas of application where SEDs are currently used: blood collection, infusion, injection and diabetes insulin administration. Model input data were sourced from the Institut National d'Assurance Maladie-Invalidité, published studies, clinical guidelines and market research. Costs are discounted at 3%. For a 420-bed hospital, 100% substitution of conventional devices by SEDs is estimated to decrease the cumulative 5-year incidence of NSIs from 310 to 75, and those associated with exposure to blood-borne viral diseases from 60 to 15. Cost savings from managing fewer NSIs more than offset increased device acquisition costs, yielding estimated 5-year overall savings of €51,710. The direction of these results is robust to a range of sensitivity and model scenario analyses. The model was most sensitive to variation in the acquisition costs of SEDs, rates of NSI associated with conventional devices, and the acquisition costs of conventional devices. NSIs are a significant potential risk with the use of sharp devices. The incidence of NSIs and the costs associated with their management can be reduced through the adoption of safer work practices, including investment in SEDs. For a Belgian hospital, the budget impact model reports that the incremental acquisition costs of SEDs are offset by the savings from fewer NSIs. The availability of more robust data for NSI reduction rates, and broadening the scope of the model to include ancillary measures for hospital conversion to SED usage, outpatient and paramedic device use, and transmission of other blood-borne diseases, would strengthen the model.
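The accounting at the heart of such a budget impact model can be sketched as a single net-cost expression: the extra acquisition cost of the safety-engineered devices minus the cost of the needlestick injuries avoided. The device volume, price premium and cost per NSI below are hypothetical placeholders, not the study's Belgian inputs; only the 235 avoided NSIs echo the abstract.

```python
def net_budget_impact(n_devices, sed_premium, nsi_avoided, cost_per_nsi):
    """Net budget impact of switching to safety-engineered devices over the
    model horizon: extra acquisition cost minus the cost of needlestick
    injuries avoided. A negative value means the switch saves money overall."""
    return n_devices * sed_premium - nsi_avoided * cost_per_nsi

# hypothetical inputs for a single hospital over 5 years
impact = net_budget_impact(n_devices=1_500_000, sed_premium=0.05,
                           nsi_avoided=235, cost_per_nsi=540.0)
print(f"net budget impact: EUR {impact:,.0f}")
```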
Inversion of Density Interfaces Using the Pseudo-Backpropagation Neural Network Method
NASA Astrophysics Data System (ADS)
Chen, Xiaohong; Du, Yukun; Liu, Zhan; Zhao, Wenju; Chen, Xiaocheng
2018-05-01
This paper presents a new pseudo-backpropagation (BP) neural network method that can invert multiple density interfaces simultaneously. The new method combines conventional forward and inverse modeling theory with a conventional pseudo-BP neural network algorithm. After analyzing the structure and function of the artificial neural network, a 3D inversion model for the gravity anomalies of multiple density interfaces is constructed using the pseudo-BP neural network method, and the corresponding iterative inversion formula for the spatial field is presented. Based on trials with gravity-anomaly noise and density noise, the influence of the two kinds of noise on the inversion result is discussed and the noise level tolerable for stability of the algorithm is analyzed. The effects of the initial model on reducing the ambiguity of the result and improving the precision of the inversion are also discussed. The correctness and validity of the method were verified with a 3D model of three interfaces. 3D inversion was then performed on observed gravity anomaly data from the Okinawa Trough using the program presented herein. The Tertiary basement and Moho depths obtained from the inversion also testify to the adaptability of the method. This study represents a useful step toward the inversion of gravity density interfaces.
Development of antibiotic regimens using graph based evolutionary algorithms.
Corns, Steven M; Ashlock, Daniel A; Bryden, Kenneth M
2013-12-01
This paper examines the use of evolutionary algorithms in the development of antibiotic regimens given to production animals. A model is constructed that combines the lifespan of the animal and the bacteria living in the animal's gastro-intestinal tract from the early finishing stage until the animal reaches market weight. This model is used as the fitness evaluation for a set of graph based evolutionary algorithms to assess the impact of diversity control on the evolving antibiotic regimens. The graph based evolutionary algorithms have two objectives: to find an antibiotic treatment regimen that maintains the weight gain and health benefits of antibiotic use and to reduce the risk of spreading antibiotic resistant bacteria. This study examines different regimens of tylosin phosphate use on bacteria populations divided into Gram positive and Gram negative types, with a focus on Campylobacter spp. Treatment regimens were found that provided decreased antibiotic resistance relative to conventional methods while providing nearly the same benefits as conventional antibiotic regimes. By using a graph to control the information flow in the evolutionary algorithm, a variety of solutions along the Pareto front can be found automatically for this and other multi-objective problems. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
The architecture of dynamic reservoir in the echo state network
NASA Astrophysics Data System (ADS)
Cui, Hongyan; Liu, Xiang; Li, Lixiang
2012-09-01
The echo state network (ESN) has recently attracted increasing interest because of its superior capability in modeling nonlinear dynamic systems. In the conventional echo state network model, the dynamic reservoir (DR) has a random and sparse topology, which is far from real biological neural networks from both structural and functional perspectives. We hereby propose three novel types of echo state networks with new dynamic reservoir topologies based on complex network theory, namely a small-world topology, a scale-free topology, and a mixture of small-world and scale-free topologies. We then analyze the relationship between the dynamic reservoir structure and its prediction capability. We utilize two commonly used time series to evaluate the prediction performance of the three proposed echo state networks and compare them to the conventional model. We also use independent and identically distributed time series to analyze the short-term memory and prediction precision of these echo state networks. Furthermore, we study the ratio of the scale-free topology to the small-world topology in the mixed-topology network, and examine its influence on the performance of the echo state networks. Our simulation results show that the proposed echo state network models have better prediction capabilities and a wider spectral radius, but retain almost the same short-term memory capacity as the conventional echo state network model. We also find that the smaller the ratio of the scale-free topology to the small-world topology, the better the memory capacity.
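A sketch of how one of the proposed reservoirs could be assembled: a Watts-Strogatz small-world graph supplies the topology, random weights are placed on its links, and the matrix is rescaled to a target spectral radius as in a conventional ESN. Sizes and parameters are illustrative; the input layer and ridge-regression readout are unchanged from the standard ESN and are omitted.

```python
import numpy as np
import networkx as nx

def small_world_reservoir(n=300, k=6, p=0.1, spectral_radius=0.9, seed=0):
    """Reservoir weight matrix whose topology is a Watts-Strogatz small-world
    graph instead of the conventional sparse random graph, rescaled to a
    target spectral radius (echo state property)."""
    rng = np.random.default_rng(seed)
    A = nx.to_numpy_array(nx.watts_strogatz_graph(n, k, p, seed=seed))
    W = A * rng.uniform(-1, 1, size=(n, n))          # random weights on existing links
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    return W

W = small_world_reservoir()
# the rest of the ESN, i.e. the state update x_{t+1} = tanh(W_in u_{t+1} + W x_t)
# and the linear readout, proceeds exactly as in the conventional model
```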
NASA Astrophysics Data System (ADS)
Danilov, A. A.; Kramarenko, V. K.; Nikolaev, D. V.; Rudnev, S. G.; Salamatova, V. Yu; Smirnov, A. V.; Vassilevski, Yu V.
2013-04-01
In this work, an adaptive unstructured tetrahedral mesh generation technology is applied for simulation of segmental bioimpedance measurements using high-resolution whole-body model of the Visible Human Project man. Sensitivity field distributions for a conventional tetrapolar, as well as eight- and ten-electrode measurement configurations are obtained. Based on the ten-electrode configuration, we suggest an algorithm for monitoring changes in the upper lung area.
Gang, G J; Siewerdsen, J H; Stayman, J W
2016-02-01
This work applies task-driven optimization to design CT tube current modulation and directional regularization in penalized-likelihood (PL) reconstruction. The relative performance of modulation schemes commonly adopted for filtered-backprojection (FBP) reconstruction were also evaluated for PL in comparison. We adopt a task-driven imaging framework that utilizes a patient-specific anatomical model and information of the imaging task to optimize imaging performance in terms of detectability index ( d' ). This framework leverages a theoretical model based on implicit function theorem and Fourier approximations to predict local spatial resolution and noise characteristics of PL reconstruction as a function of the imaging parameters to be optimized. Tube current modulation was parameterized as a linear combination of Gaussian basis functions, and regularization was based on the design of (directional) pairwise penalty weights for the 8 in-plane neighboring voxels. Detectability was optimized using a covariance matrix adaptation evolutionary strategy algorithm. Task-driven designs were compared to conventional tube current modulation strategies for a Gaussian detection task in an abdomen phantom. The task-driven design yielded the best performance, improving d' by ~20% over an unmodulated acquisition. Contrary to FBP, PL reconstruction using automatic exposure control and modulation based on minimum variance (in FBP) performed worse than the unmodulated case, decreasing d' by 16% and 9%, respectively. This work shows that conventional tube current modulation schemes suitable for FBP can be suboptimal for PL reconstruction. Thus, the proposed task-driven optimization provides additional opportunities for improved imaging performance and dose reduction beyond that achievable with conventional acquisition and reconstruction.
Seo, Eun-Young; An, Sook Hee; Cho, Jang-Hee; Suh, Hae Sun; Park, Sun-Hee; Gwak, Hyesun; Kim, Yong-Lim; Ha, Hunjoo
2014-01-01
♦ Introduction: Residual renal function (RRF) plays an important role in the outcome of peritoneal dialysis (PD), including mortality. It is, therefore, important to provide a strategy for the preservation of RRF. The objective of this study was to evaluate the relative protective effect on RRF of a new glucose-based multicompartmental PD solution (PDS), which is well known to be more biocompatible than conventional glucose-based PDS, compared with conventional PDS, by performing a systematic review (SR) of randomized controlled trials. ♦ Methods: We searched studies presented up to January 2014 in MEDLINE, EMBASE, the COCHRANE library, and local databases. Three independent reviewers reviewed and extracted prespecified data from each study. The random effects model, a more conservative analysis model, was used to combine trials and to perform stratified analyses based on the duration of follow-up. Study quality was assessed using the Cochrane Handbook for risk of bias. Eleven articles with 1,034 patients were identified for the SR. ♦ Results: The heterogeneity of the studies under 12 months was very high, and the heterogeneity decreased substantially when we stratified studies by the duration of follow-up. The mean difference for the studies after 12 months was 0.46 mL/min/1.73 m² (95% confidence interval, 0.25 to 0.67). ♦ Conclusion: The new PDS preserved and improved RRF with long-term use compared with conventional PDS, even though it did not show a significant difference in preserving RRF with short-term use. PMID:25185015
Performance of Bootstrap MCEWMA: Study case of Sukuk Musyarakah data
NASA Astrophysics Data System (ADS)
Safiih, L. Muhamad; Hila, Z. Nurul
2014-07-01
Sukuk Musyarakah is one of several Islamic bond instruments in Malaysia; it is formed by restructuring a conventional bond into a Syariah-compliant bond, where Syariah compliance prohibits any element of usury, interest or fixed return. Because returns are not fixed, the daily returns of a sukuk form a dependent, autocorrelated time series, which poses a challenge in both statistics and finance. Sukuk returns can be characterized statistically by their volatility: high volatility reflects dramatic price changes and marks the bond as risky. This issue, however, has received far less attention for sukuk than for conventional bonds. The MCEWMA chart from Statistical Process Control (SPC) is widely used to monitor autocorrelated data, and its application to the daily returns of investment securities has gained attention among statisticians. However, the chart often suffers from inaccurate estimation of its base model or control limits, producing large errors and a high probability of false out-of-control signals. To overcome this problem, a bootstrap approach is hybridized with the MCEWMA base model to construct a new chart, the Bootstrap MCEWMA (BMCEWMA) chart. The hybrid BMCEWMA chart is applied to the daily sukuk Musyarakah returns of Rantau Abang Capital Bhd. The BMCEWMA base model proved more effective than the original MCEWMA, with smaller estimation error, narrower confidence intervals and fewer false alarms; in other words, the hybrid chart reduces variability. It is concluded that the application of BMCEWMA is better than MCEWMA.
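To make the chart construction concrete, the sketch below shows a generic moving-centreline EWMA forecast chart with control limits taken from bootstrap percentiles of the forecast errors; it is a simplified stand-in for the BMCEWMA procedure, and the simulated returns are not the Rantau Abang Capital data.

```python
import numpy as np

def ewma_forecast_errors(x, lam=0.2):
    """Moving-centreline EWMA: the EWMA of past observations serves as the
    one-step forecast, and the chart monitors the forecast errors."""
    z = np.empty_like(x)
    z[0] = x[0]
    for t in range(1, len(x)):
        z[t] = lam * x[t] + (1 - lam) * z[t - 1]
    errors = x[1:] - z[:-1]
    return z, errors

def bootstrap_limits(errors, alpha=0.0027, n_boot=2000, seed=0):
    """Control limits from averaged bootstrap percentiles of the forecast
    errors, instead of normal-theory 3-sigma limits."""
    rng = np.random.default_rng(seed)
    qs = np.array([np.quantile(rng.choice(errors, errors.size, replace=True),
                               [alpha / 2, 1 - alpha / 2]) for _ in range(n_boot)])
    return qs.mean(axis=0)

# hypothetical daily sukuk returns
returns = np.random.default_rng(3).normal(0.0, 0.004, 250)
z, errors = ewma_forecast_errors(returns)
lcl, ucl = bootstrap_limits(errors)
signals = (errors < lcl) | (errors > ucl)   # candidate out-of-control points
```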
Cost and cost effectiveness of long-lasting insecticide-treated bed nets - a model-based analysis
2012-01-01
Background The World Health Organization recommends that national malaria programmes universally distribute long-lasting insecticide-treated bed nets (LLINs). LLINs provide effective insecticide protection for at least three years while conventional nets must be retreated every 6-12 months. LLINs may also promise longer physical durability (lifespan), but at a higher unit price. No prospective data currently available is sufficient to calculate the comparative cost effectiveness of different net types. We thus constructed a model to explore the cost effectiveness of LLINs, asking how a longer lifespan affects the relative cost effectiveness of nets, and if, when and why LLINs might be preferred to conventional insecticide-treated nets. An innovation of our model is that we also considered the replenishment need i.e. loss of nets over time. Methods We modelled the choice of net over a 10-year period to facilitate the comparison of nets with different lifespan (and/or price) and replenishment need over time. Our base case represents a large-scale programme which achieves high coverage and usage throughout the population by distributing either LLINs or conventional nets through existing health services, and retreats a large proportion of conventional nets regularly at low cost. We identified the determinants of bed net programme cost effectiveness and parameter values for usage rate, delivery and retreatment cost from the literature. One-way sensitivity analysis was conducted to explicitly compare the differential effect of changing parameters such as price, lifespan, usage and replenishment need. Results If conventional and long-lasting bed nets have the same physical lifespan (3 years), LLINs are more cost effective unless they are priced at more than USD 1.5 above the price of conventional nets. Because a longer lifespan brings delivery cost savings, each one year increase in lifespan can be accompanied by a USD 1 or more increase in price without the cheaper net (of the same type) becoming more cost effective. Distributing replenishment nets each year in addition to the replacement of all nets every 3-4 years increases the number of under-5 deaths averted by 5-14% at a cost of USD 17-25 per additional person protected per annum or USD 1080-1610 per additional under-5 death averted. Conclusions Our results support the World Health Organization recommendation to distribute only LLINs, while giving guidance on the price thresholds above which this recommendation will no longer hold. Programme planners should be willing to pay a premium for nets which have a longer physical lifespan, and if planners are willing to pay USD 1600 per under-5 death averted, investing in replenishment is cost effective. PMID:22475679
A feature-based approach to modeling protein-protein interaction hot spots.
Cho, Kyu-il; Kim, Dongsup; Lee, Doheon
2009-05-01
Identifying features that effectively represent the energetic contribution of an individual interface residue to the interactions between proteins remains problematic. Here, we present several new features and show that they are more effective than conventional features. By combining the proposed features with conventional features, we develop a predictive model for interaction hot spots. Initially, 54 multifaceted features, composed of different levels of information including structure, sequence and molecular interaction information, are quantified. Then, to identify the best subset of features for predicting hot spots, feature selection is performed using a decision tree. Based on the selected features, a predictive model for hot spots is created using support vector machine (SVM) and tested on an independent test set. Our model shows better overall predictive accuracy than previous methods such as the alanine scanning methods Robetta and FOLDEF, and the knowledge-based method KFC. Subsequent analysis yields several findings about hot spots. As expected, hot spots have a larger relative surface area burial and are more hydrophobic than other residues. Unexpectedly, however, residue conservation displays a rather complicated tendency depending on the types of protein complexes, indicating that this feature is not good for identifying hot spots. Of the selected features, the weighted atomic packing density, relative surface area burial and weighted hydrophobicity are the top 3, with the weighted atomic packing density proving to be the most effective feature for predicting hot spots. Notably, we find that hot spots are closely related to pi-related interactions, especially pi-pi interactions.
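The modeling pipeline described above (decision-tree feature selection followed by an SVM) can be sketched with scikit-learn on simulated data; the 54-column table, the labels and the number of retained features are placeholders, not the paper's descriptors.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical interface-residue table: 54 candidate features, label 1 = hot spot
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 54))
y = (X[:, 0] + 0.8 * X[:, 3] - 0.5 * X[:, 7] + rng.normal(0, 1, 200) > 0).astype(int)

# feature selection with a decision tree, as in the workflow described above
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
selected = np.argsort(tree.feature_importances_)[::-1][:6]   # keep the top-ranked features

# SVM trained on the selected subset
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X[:, selected], y)
print(clf.score(X[:, selected], y))
```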
NASA Astrophysics Data System (ADS)
Ahn, Junkeon; Noh, Yeelyong; Park, Sung Ho; Choi, Byung Il; Chang, Daejun
2017-10-01
This study proposes a fuzzy-based FMEA (failure mode and effect analysis) for a hybrid molten carbonate fuel cell and gas turbine system for liquefied hydrogen tankers. An FMEA-based regulatory framework is adopted to analyze the non-conventional propulsion system and to understand the risk picture of the system. Since the participants of the FMEA rely on their subjective and qualitative experiences, the conventional FMEA used for identifying failures that affect system performance inevitably involves inherent uncertainties. A fuzzy-based FMEA is introduced to express such uncertainties appropriately and to provide flexible access to a risk picture for a new system using fuzzy modeling. The hybrid system has 35 components and 70 potential failure modes. Significant failure modes occur in the fuel cell stack and rotary machine. The fuzzy risk priority number is used to validate the crisp risk priority number in the FMEA.
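One simple way to see how fuzziness enters an FMEA risk priority number is to treat each severity/occurrence/detection rating as a triangular fuzzy number and defuzzify the product by its centroid, as sketched below; this is a generic illustration, not the membership functions or rule base used in the paper.

```python
def fuzzy_rpn(severity, occurrence, detection, spread=1.0):
    """Treat each expert rating as a triangular fuzzy number (r-spread, r, r+spread)
    to capture judgement uncertainty, multiply the three fuzzy ratings
    component-wise (a common approximation for positive triangular numbers),
    and defuzzify the fuzzy RPN by its centroid."""
    def tfn(r):
        return (max(r - spread, 1.0), float(r), min(r + spread, 10.0))
    s, o, d = tfn(severity), tfn(occurrence), tfn(detection)
    rpn = tuple(s[i] * o[i] * d[i] for i in range(3))   # approximate product TFN
    return sum(rpn) / 3.0                               # centroid defuzzification

# crisp RPN would be 7 * 4 * 6 = 168; the fuzzy value also reflects rating uncertainty
print(fuzzy_rpn(7, 4, 6))
```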
Zeng, C.; Xia, J.; Miller, R.D.; Tsoflias, G.P.
2011-01-01
Conventional surface wave inversion for shallow shear (S)-wave velocity relies on the generation of dispersion curves of Rayleigh waves. This constrains the method to only laterally homogeneous (or very smooth laterally heterogeneous) earth models. Waveform inversion directly fits waveforms on seismograms and hence does not have such a limitation. Waveforms of Rayleigh waves are highly related to S-wave velocities. By inverting the waveforms of Rayleigh waves on a near-surface seismogram, shallow S-wave velocities can be estimated for earth models with strong lateral heterogeneity. We employ a genetic algorithm (GA) to perform waveform inversion of Rayleigh waves for S-wave velocities. The forward problem is solved by finite-difference modeling in the time domain. The model space is updated by generating offspring models using the GA. Final solutions can be found through an iterative waveform-fitting scheme. Inversions based on synthetic records show that the S-wave velocities can be recovered successfully with errors of no more than 10% for several typical near-surface earth models. For layered earth models, the proposed method can generate one-dimensional S-wave velocity profiles without knowledge of initial models. For earth models containing lateral heterogeneity, for which conventional dispersion-curve-based inversion methods are challenging, it is feasible to produce high-resolution S-wave velocity sections by GA waveform inversion with appropriate a priori information. The synthetic tests indicate that GA waveform inversion of Rayleigh waves has great potential for shallow S-wave velocity imaging in the presence of strong lateral heterogeneity. © 2011 Elsevier B.V.
Modeling in conventional and supra electroporation for model cell with organelles
NASA Astrophysics Data System (ADS)
Sulaeman, Muhammad Yangki; Widita, Rena
2015-09-01
Electroporation is the formation of pores in the cell membrane due to an external electric field applied to the cell. There are two types of electroporation: conventional and supra-electroporation. The purposes of creating pores in the cell using conventional electroporation are to increase the effectiveness of chemotherapy (electrochemotherapy) and to kill cancer tissue using irreversible electroporation. Supra-electroporation can induce electroporation in the organelles inside the cell, so it can kill the cell through an apoptosis mechanism. Modeling of the electroporation phenomenon on a model cell was performed using the software COMSOL Multiphysics 4.3b, with applied external electric fields of 1.1 kV/cm for conventional electroporation and 60 kV/cm for supra-electroporation, in order to find the difference in transmembrane voltage and pore density between the two regimes. It can be concluded from the results that there is a large difference in transmembrane voltage and pore density between conventional and supra-electroporation on the model cell.
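For orientation, the pore-density dynamics in such simulations are commonly described by the asymptotic pore-creation equation used widely in the electroporation literature; the sketch below integrates that equation for a fixed transmembrane voltage. The parameter values are representative literature values quoted only as an illustration and are not claimed to match the authors' COMSOL setup.

```python
# Asymptotic pore-creation model:
# dN/dt = alpha*exp((Vm/Vep)^2) * (1 - (N/N0)*exp(-q*(Vm/Vep)^2))
import numpy as np

alpha = 1.0e9      # pore creation rate coefficient, 1/(m^2 s)  (assumed literature value)
V_ep  = 0.258      # characteristic electroporation voltage, V  (assumed literature value)
N0    = 1.5e9      # equilibrium pore density at Vm = 0, 1/m^2  (assumed literature value)
q     = 2.46       # pore-creation constant, dimensionless      (assumed literature value)

def pore_density(Vm_of_t, dt=1e-9, steps=2000):
    """Forward-Euler integration of the pore density N(t) for a given Vm(t)."""
    N = N0
    history = []
    for k in range(steps):
        Vm = Vm_of_t(k * dt)
        rate = alpha * np.exp((Vm / V_ep) ** 2) * (1.0 - (N / N0) * np.exp(-q * (Vm / V_ep) ** 2))
        N += dt * rate
        history.append(N)
    return np.array(history)

# e.g. a 1 V transmembrane step, roughly the conventional electroporation regime
N_t = pore_density(lambda t: 1.0)
print("pore density after 2 us: %.3e 1/m^2" % N_t[-1])
```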
Search-based model identification of smart-structure damage
NASA Technical Reports Server (NTRS)
Glass, B. J.; Macalou, A.
1991-01-01
This paper describes the use of a combined model and parameter identification approach, based on modal analysis and artificial intelligence (AI) techniques, for identifying damage or flaws in a rotating truss structure incorporating embedded piezoceramic sensors. This smart structure example is representative of a class of structures commonly found in aerospace systems and next generation space structures. Artificial intelligence techniques of classification, heuristic search, and an object-oriented knowledge base are used in an AI-based model identification approach. A finite model space is classified into a search tree, over which a variant of best-first search is used to identify the model whose stored response most closely matches that of the input. Newly-encountered models can be incorporated into the model space. This adaptiveness demonstrates the potential for learning control. Following this output-error model identification, numerical parameter identification is used to further refine the identified model. Given the rotating truss example in this paper, noisy data corresponding to various damage configurations are input to both this approach and a conventional parameter identification method. The combination of the AI-based model identification with parameter identification is shown to lead to smaller parameter corrections than required by the use of parameter identification alone.
Mi, Jianing; Zhang, Min; Zhang, Hongyang; Wang, Yuerong; Wu, Shikun; Hu, Ping
2013-02-01
A highly efficient and environmentally friendly method for the preparation of ginsenosides from Radix Ginseng by coupling ultrasound-assisted extraction with expanded bed adsorption is described. Based on the optimal extraction conditions screened by response surface methodology, ginsenosides were extracted and adsorbed, then eluted by a two-step elution protocol. The comparison between the coupled ultrasound-assisted extraction with expanded bed adsorption method and the conventional method showed that the former was better than the latter in both process efficiency and greenness. The process efficiency and energy efficiency of the coupled ultrasound-assisted extraction with expanded bed adsorption method were 1.4-fold and 18.5-fold those of the conventional method, while the environmental cost and CO2 emission of the conventional method were 12.9-fold and 17.0-fold those of the new method. Furthermore, a theoretical model for the extraction of the targets was derived. The results revealed that the theoretical model suitably described the process of preparing ginsenosides by the coupled ultrasound-assisted extraction with expanded bed adsorption system. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Cost-effectiveness analysis of microdose clinical trials in drug development.
Yamane, Naoe; Igarashi, Ataru; Kusama, Makiko; Maeda, Kazuya; Ikeda, Toshihiko; Sugiyama, Yuichi
2013-01-01
Microdose (MD) clinical trials have been introduced to obtain human pharmacokinetic data early in drug development. Here we assessed the cost-effectiveness of microdose-integrated drug development in a hypothetical model, as there was no quantitative research that weighed the additional effectiveness against the additional time and/or cost. First, we calculated the cost and effectiveness (i.e., success rate) of 3 types of MD-integrated drug development strategies: liquid chromatography-tandem mass spectrometry, accelerator mass spectrometry, and positron emission tomography. Then, we analyzed the cost-effectiveness of 9 hypothetical scenarios in which 100 drug candidates entering a non-clinical toxicity study were selected by different methods, against the conventional scenario without MD. In the base case, where 70 drug candidates were selected without MD and 30 were selected evenly by one of the three MD methods, the incremental cost-effectiveness ratio per one additional drug approved was JPY 12.7 billion (US$ 0.159 billion), whereas the average cost-effectiveness ratio of the conventional strategy was JPY 24.4 billion, which we set as a threshold. Integrating MD into conventional drug development was cost-effective in this model. This quantitative analytical model, which allows various modifications according to each company's conditions, would be helpful for guiding decisions early in clinical development.
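The comparison logic reduces to an incremental cost-effectiveness ratio (ICER) against the average cost-effectiveness ratio of the conventional strategy, as sketched below. The numeric inputs are placeholders chosen only to mimic the orders of magnitude quoted above, not the study's actual data.

```python
# Back-of-the-envelope ICER vs. average CER comparison (placeholder numbers).
def icer(cost_new, effect_new, cost_ref, effect_ref):
    """Incremental cost per additional unit of effect (e.g. per approved drug)."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)

# hypothetical example: conventional screening vs. microdose-integrated screening
cost_conventional, drugs_approved_conventional = 244.0, 10.0   # billion JPY, approved drugs
cost_microdose,    drugs_approved_microdose    = 270.0, 12.0

acer_conventional = cost_conventional / drugs_approved_conventional   # average CER (threshold)
print("average CER (threshold): %.1f billion JPY per drug" % acer_conventional)
print("ICER of microdose strategy: %.1f billion JPY per additional drug"
      % icer(cost_microdose, drugs_approved_microdose,
             cost_conventional, drugs_approved_conventional))
```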
Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R
NASA Astrophysics Data System (ADS)
Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.
2016-12-01
Many implementations of a model-based approach for toroidal plasma have shown better control performance compared to the conventional type of feedback controller. One prerequisite of model-based control is the availability of a control oriented model. This model can be obtained empirically through a systematic procedure called system identification. Such a model is used in this work to design a model predictive controller to stabilize multiple resistive wall modes in EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that can optimize the future behaviour of a system. Furthermore, this paper will discuss an additional use of the empirical model which is to estimate the error field in EXTRAP T2R. Two potential methods are discussed that can estimate the error field. The error field estimator is then combined with the model predictive control and yields better radial magnetic field suppression.
Electromagnetic interference modeling and suppression techniques in variable-frequency drive systems
NASA Astrophysics Data System (ADS)
Yang, Le; Wang, Shuo; Feng, Jianghua
2017-11-01
Electromagnetic interference (EMI) causes electromechanical damage to the motors and degrades the reliability of variable-frequency drive (VFD) systems. Unlike fundamental frequency components in motor drive systems, high-frequency EMI noise, coupled with the parasitic parameters of the whole system, is difficult to analyze and reduce. In this article, EMI modeling techniques for different functional units in a VFD system, including induction motors, motor bearings, and rectifier-inverters, are reviewed and evaluated in terms of applied frequency range, model parameterization, and model accuracy. The EMI models for the motors are categorized based on modeling techniques and model topologies. Motor bearing and shaft models are also reviewed, and techniques that are used to eliminate bearing currents are evaluated. Modeling techniques for conventional rectifier-inverter systems are also summarized. EMI noise suppression techniques, including passive filters, Wheatstone bridge balance, active filters, and optimized modulation, are reviewed and compared based on the VFD system models.
Using fuzzy rule-based knowledge model for optimum plating conditions search
NASA Astrophysics Data System (ADS)
Solovjev, D. S.; Solovjeva, I. A.; Litovka, Yu V.; Arzamastsev, A. A.; Glazkov, V. P.; L’vov, A. A.
2018-03-01
The paper discusses existing approaches to plating process modeling aimed at reducing the unevenness of the plating coating thickness distribution. However, these approaches do not take into account the experience, knowledge, and intuition of the decision-makers when searching for the optimal conditions of the electroplating technological process. An original approach to the search for optimal conditions for applying electroplating coatings, which uses a rule-based knowledge model and allows one to reduce the unevenness of the product thickness distribution, is proposed. Block diagrams of a conventional control system for a galvanic process, as well as of the system based on the production model of knowledge, are considered. It is shown that the fuzzy production model of knowledge in the control system makes it possible to obtain galvanic coatings of a given thickness unevenness with a high degree of adequacy to the experimental data. The described experimental results confirm the theoretical conclusions.
Comparisons of non-Gaussian statistical models in DNA methylation analysis.
Ma, Zhanyu; Teschendorff, Andrew E; Yu, Hong; Taghia, Jalil; Guo, Jun
2014-06-16
As a key regulatory mechanism of gene expression, DNA methylation patterns are widely altered in many complex genetic diseases, including cancer. DNA methylation is naturally quantified by bounded support data; therefore, it is non-Gaussian distributed. In order to capture such properties, we introduce some non-Gaussian statistical models to perform dimension reduction on DNA methylation data. Afterwards, non-Gaussian statistical model-based unsupervised clustering strategies are applied to cluster the data. Comparisons and analysis of different dimension reduction strategies and unsupervised clustering methods are presented. Experimental results show that the non-Gaussian statistical model-based methods are superior to the conventional Gaussian distribution-based method. They are meaningful tools for DNA methylation analysis. Moreover, among several non-Gaussian methods, the one that captures the bounded nature of DNA methylation data reveals the best clustering performance.
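One simple example of a bounded-support, non-Gaussian model is a beta mixture fitted by expectation-maximization. The sketch below clusters synthetic methylation beta values with a two-component beta mixture; the weighted method-of-moments M-step is a simplification chosen for brevity and is not the authors' algorithm.

```python
# Two-component beta-mixture clustering of bounded-support data (illustrative).
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)
x = np.concatenate([rng.beta(2, 8, 300), rng.beta(8, 2, 200)])   # synthetic methylation values

def mom(mean, var):
    """Beta(a, b) parameters from a mean and variance (method of moments)."""
    common = mean * (1 - mean) / max(var, 1e-6) - 1
    return max(mean * common, 1e-3), max((1 - mean) * common, 1e-3)

params = [(2.0, 5.0), (5.0, 2.0)]            # initial component parameters
weights = np.array([0.5, 0.5])

for _ in range(50):                          # EM iterations
    # E-step: responsibilities of each component for each observation
    dens = np.vstack([w * beta.pdf(x, a, b) for (a, b), w in zip(params, weights)])
    resp = dens / dens.sum(axis=0, keepdims=True)
    # M-step: weighted moments -> beta parameters
    new_params = []
    for k in range(2):
        r = resp[k]
        m = np.average(x, weights=r)
        v = np.average((x - m) ** 2, weights=r)
        new_params.append(mom(m, v))
    params = new_params
    weights = resp.sum(axis=1) / len(x)

labels = resp.argmax(axis=0)                 # hard cluster assignment
print("estimated components:", [(round(a, 2), round(b, 2)) for a, b in params])
print("mixing weights:", np.round(weights, 2))
```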
Kim, Yusung; Tomé, Wolfgang A.
2010-01-01
Voxel-based iso-Tumor Control Probability (TCP) maps and iso-Complication maps are proposed as a plan-review tool, especially for functional image-guided intensity-modulated radiotherapy (IMRT) strategies such as selective boosting (dose painting) and conformal avoidance IMRT. The maps employ voxel-based phenomenological biological dose-response models for target volumes and normal organs. Two IMRT strategies for prostate cancer, namely conventional uniform IMRT delivering an EUD = 84 Gy (equivalent uniform dose) to the entire PTV and selective boosting delivering an EUD = 82 Gy to the entire PTV, are investigated to illustrate the advantages of this approach over iso-dose maps. Conventional uniform IMRT yielded a more uniform isodose map over the entire PTV, while selective boosting resulted in a nonuniform isodose map. However, when employing voxel-based iso-TCP maps, selective boosting exhibited a more uniform tumor control probability map than conventional uniform IMRT, which showed TCP cold spots in high-risk tumor subvolumes despite delivering a higher EUD to the entire PTV. Voxel-based iso-Complication maps are presented for rectum and bladder, and their utilization for selective avoidance IMRT strategies is discussed. We believe that as the need for functional image-guided treatment planning grows, voxel-based iso-TCP and iso-Complication maps will become an important tool to assess the integrity of such treatment plans. PMID:21151734
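A generic way to turn a dose map into a voxel-wise TCP map is the Poisson/linear-quadratic model sketched below. The radiobiological parameters, clonogen density, and dose grid are hypothetical assumptions for illustration; the phenomenological models used in the paper are not reproduced here.

```python
# Voxel-wise Poisson/LQ TCP map from a (toy) dose distribution.
import numpy as np

alpha, beta = 0.10, 0.02        # Gy^-1, Gy^-2 (assumed LQ parameters)
n_fractions = 35
clonogen_density = 1e7          # clonogens per cm^3 (assumed)
voxel_volume = 0.2 ** 3         # cm^3 for a 2 mm voxel grid

def voxel_tcp(total_dose):
    """Poisson TCP per voxel for a uniform fractionation scheme."""
    d = total_dose / n_fractions                       # dose per fraction
    surviving = clonogen_density * voxel_volume * np.exp(-(alpha * total_dose + beta * d * total_dose))
    return np.exp(-surviving)

dose_map = np.random.default_rng(0).normal(78.0, 2.0, size=(10, 10, 10))  # Gy, toy PTV dose
tcp_map = voxel_tcp(dose_map)
print("min/median voxel TCP:", round(tcp_map.min(), 3), round(np.median(tcp_map), 3))
```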
NASA Astrophysics Data System (ADS)
Jitsuhiro, Takatoshi; Toriyama, Tomoji; Kogure, Kiyoshi
We propose a noise suppression method based on multi-model compositions and multi-pass search. In real environments, input speech for speech recognition includes many kinds of noise signals. To obtain good recognized candidates, suppressing many kinds of noise signals at once and finding target speech is important. Before noise suppression, to find speech and noise label sequences, we introduce multi-pass search with acoustic models including many kinds of noise models and their compositions, their n-gram models, and their lexicon. Noise suppression is frame-synchronously performed using the multiple models selected by recognized label sequences with time alignments. We evaluated this method using the E-Nightingale task, which contains voice memoranda spoken by nurses during actual work at hospitals. The proposed method obtained higher performance than the conventional method.
A shorter and more specific oral sensitization-based experimental model of food allergy in mice.
Bailón, Elvira; Cueto-Sola, Margarita; Utrilla, Pilar; Rodríguez-Ruiz, Judith; Garrido-Mesa, Natividad; Zarzuelo, Antonio; Xaus, Jordi; Gálvez, Julio; Comalada, Mònica
2012-07-31
Cow's milk protein allergy (CMPA) is one of the most prevalent human food-borne allergies, particularly in children. Experimental animal models have become critical tools with which to perform research on new therapeutic approaches and on the molecular mechanisms involved. However, oral food allergen sensitization in mice requires several weeks and is usually associated with unspecific immune responses. To overcome these inconveniences, we have developed a new food allergy model that takes only two weeks while retaining the main characteristics of the allergic response to food antigens. The new model is characterized by oral sensitization of weaned Balb/c mice with 5 doses of purified cow's milk protein (CMP) plus cholera toxin (CT) for only two weeks and a subsequent challenge with an intraperitoneal administration of the allergen at the end of the sensitization period. In parallel, we studied a conventional protocol that lasts for seven weeks, and also the non-specific effects exerted by CT in both protocols. The shorter protocol achieves a similar clinical score to the original food allergy model without macroscopically affecting gut morphology or physiology. Moreover, the shorter protocol caused an increased IL-4 production and a more selective antigen-specific IgG1 response. Finally, the extended CT administration during the sensitization period of the conventional protocol is responsible for the exacerbated immune response observed in that model. Therefore, the new model presented here allows a reduction not only in experimental time but also in the number of animals required per experiment, while maintaining the features of conventional allergy models. We propose that the new protocol reported will contribute to advancing allergy research. Copyright © 2012 Elsevier B.V. All rights reserved.
A survey of hybrid Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Saeed, Adnan S.; Younes, Ahmad Bani; Cai, Chenxiao; Cai, Guowei
2018-04-01
This article presents a comprehensive overview of the recent advances of miniature hybrid Unmanned Aerial Vehicles (UAVs). For now, two conventional types, i.e., the fixed-wing UAV and the Vertical Takeoff and Landing (VTOL) UAV, dominate the miniature UAVs. Each type has its own inherent limitations on flexibility, payload, flight range, cruising speed, takeoff and landing requirements and endurance. The newer type, named the hybrid UAV, which integrates the beneficial features of both conventional types, has recently gained popularity and interest. In this survey paper, a systematic categorization method for the hybrid UAV's platform designs is introduced, presenting the technical features and representative examples. Next, the hybrid UAV's flight dynamics model and flight control strategies are explained, addressing several representative modeling and control works. In addition, key observations, existing challenges and conclusive remarks based on the conducted review are discussed accordingly.
Nakamura, Keita; Kikumoto, Mamoru
2018-07-01
The Leverett concept is used conventionally to model the relationship between the capillary pressures and the degrees of saturation in the water-nonaqueous phase liquid (NAPL)-air three-phase system in porous media. In this paper, the limitation of the Leverett concept, namely that it is not applicable in the case of nonspreading NAPLs, is discussed through microscopic consideration. A new concept that can be applied in the case of nonspreading NAPLs as well as spreading NAPLs is then proposed. The validity of the proposed concept is confirmed by comparison with past experimental data and with simulation results obtained using the conventional model based on the Leverett concept. It is confirmed that the proposed concept can correctly predict the observed distributions of NAPLs, including those of nonspreading ones. Copyright © 2018. Published by Elsevier B.V.
Pediatric Headache Clinic Model: Implementation of Integrative Therapies in Practice.
Esparham, Anna; Herbert, Anne; Pierzchalski, Emily; Tran, Catherine; Dilts, Jennifer; Boorigie, Madeline; Wingert, Tammie; Connelly, Mark; Bickel, Jennifer
2018-06-12
The demand for integrative medicine has risen in recent years as research has demonstrated the efficacy of such treatments. The public has also become more conscientious of the potential limitations of conventional treatment alone. Because primary headache syndromes are often the culmination of genetics, lifestyle, stress, trauma, and environmental factors, they are best treated with therapies that are equally multifaceted. The Children’s Mercy Hospital, Kansas City, Missouri Headache Clinic has successfully incorporated integrative therapies including nutraceuticals, acupuncture, aromatherapy, biofeedback, relaxation training, hypnosis, psychology services, and lifestyle recommendations for headache management. This paper provides a detailed review of the implementation of integrative therapies for headache treatment and discusses examples through case studies. It can serve as a model for other specialty settings intending to incorporate all evidence-based practices, whether complementary or conventional.
Capacity expansion model of wind power generation based on ELCC
NASA Astrophysics Data System (ADS)
Yuan, Bo; Zong, Jin; Wu, Shengyu
2018-02-01
Capacity expansion is an indispensable prerequisite for power system planning and construction. A reasonable, efficient and accurate capacity expansion model (CEM) is crucial to power system planning. In most current CEMs, the capacity of wind power generation is considered as a boundary condition instead of a decision variable, which may lead to curtailment or over-construction of flexible resources, especially in a high renewable energy penetration scenario. This paper proposes a wind power generation capacity value (CV) calculation method based on effective load-carrying capability, and a CEM that co-optimizes wind power generation and conventional power sources. Wind power generation is considered as a decision variable in this model, and the model can accurately reflect the uncertain nature of wind power.
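A simplified view of an effective load-carrying capability (ELCC) calculation is shown below: find the constant load increment that keeps a reliability metric (here a simple loss-of-load count) unchanged after the wind fleet is added. The hourly load and wind profiles, capacity figures, and reliability metric are synthetic placeholders, not the paper's formulation.

```python
# Toy ELCC calculation by bisection on a constant load increment.
import numpy as np

rng = np.random.default_rng(0)
hours = 8760
load = 800 + 200 * np.sin(np.linspace(0, 2 * np.pi * 365, hours)) + rng.normal(0, 30, hours)  # MW
thermal_capacity = 1050.0                                      # MW of conventional capacity
wind = 150 * np.clip(rng.beta(2, 4, hours), 0, 1)              # MW wind output profile

def lole(load_profile, extra_capacity=0.0):
    """Hours per year in which load exceeds available capacity."""
    return np.sum(load_profile > thermal_capacity + extra_capacity)

target = lole(load)                    # reliability level without wind

# bisection on the load increment delta so that LOLE(load + delta - wind) == LOLE(load)
lo, hi = 0.0, 300.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if np.sum(load + mid - wind > thermal_capacity) > target:
        hi = mid
    else:
        lo = mid

print("ELCC of the wind fleet: %.1f MW (%.0f%% of installed 150 MW)" % (lo, 100 * lo / 150))
```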
da Costa, Márcia Gisele Santos; Santos, Marisa da Silva; Sarti, Flávia Mori; Senna, Kátia Marie Simões e.; Tura, Bernardo Rangel; Goulart, Marcelo Correia
2014-01-01
Objectives: The study performs a cost-effectiveness analysis of procedures for atrial septal defect occlusion, comparing conventional surgery to percutaneous septal implant. Methods: An analytical decision model was structured with symmetric branches to estimate the cost-effectiveness ratio between the procedures. The decision tree model was based on evidence gathered through a meta-analysis of the literature, and validated by a panel of specialists. The lower number of surgical procedures performed for atrial septal defect occlusion at each branch was considered as the effectiveness outcome. Direct medical costs and probabilities for each event were inserted in the model using data available from the Brazilian public sector database system and information extracted from the literature review, using a micro-costing technique. Sensitivity analysis included price variations of the percutaneous implant. Results: The results obtained from the decision model demonstrated that the percutaneous implant was more cost-effective, at a cost of US$8,936.34 with a reduction in the probability of surgery occurrence in 93% of the cases. The probability of atrial septal communication occlusion and the cost of the implant are the determinant factors of the cost-effectiveness ratio. Conclusions: The proposed decision model seeks to fill a void in the academic literature. The decision model proposed includes the outcomes that present major impact in relation to the overall costs of the procedure. Atrial septal defect occlusion using a percutaneous implant reduces the physical and psychological distress to the patients in relation to conventional surgery, which represents intangible costs in the context of economic evaluation. PMID:25302806
Dopant Segregation in Earth- and Space-Grown InP Crystals
NASA Astrophysics Data System (ADS)
Danilewsky, Andreas Nikolaus; Okamoto, Yusuke; Benz, Klaus Werner; Nishinaga, Tatau
1992-07-01
Macro- and microsegregation of sulphur in InP crystals grown from In solution by the travelling heater method under microgravity and normal gravity are analyzed using spatially resolved photoluminescence. Whereas the macrosegregation in earth- as well as space-grown crystals is explained by conventional steady-state models based on the theory of Burton, Prim and Slichter (BPS), the microsegregation can only be understood in terms of the non-steady-state step exchange model.
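For reference, the steady-state BPS model referred to above is usually written as the effective distribution coefficient below; this is the standard textbook form with commonly used symbols, quoted here for context rather than taken from this paper.

```latex
k_{\mathrm{eff}} \;=\; \frac{k_0}{\,k_0 + (1-k_0)\,e^{-v\,\delta/D}\,}
```

where k_0 is the equilibrium distribution coefficient, v the growth rate, δ the solute boundary-layer thickness, and D the solute diffusion coefficient in the liquid.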
NASA Astrophysics Data System (ADS)
Olweny, Ephrem O.; Tan, Yung K.; Faddegon, Stephen; Jackson, Neil; Wehner, Eleanor F.; Best, Sara L.; Park, Samuel K.; Thapa, Abhas; Cadeddu, Jeffrey A.; Zuzak, Karel J.
2012-03-01
Digital light processing hyperspectral imaging (DLP® HSI) was adapted for use during laparoscopic surgery by coupling a conventional laparoscopic light guide with a DLP-based Agile Light source (OL 490, Optronic Laboratories, Orlando, FL), incorporating a 0° laparoscope, and a customized digital CCD camera (DVC, Austin, TX). The system was used to characterize renal ischemia in a porcine model.
Ullattuthodi, Sujana; Cherian, Kandathil Phillip; Anandkumar, R; Nambiar, M Sreedevi
2017-01-01
This in vitro study seeks to evaluate and compare the marginal and internal fit of cobalt-chromium copings fabricated using the conventional and direct metal laser sintering (DMLS) techniques. A master model of a prepared molar tooth was made using cobalt-chromium alloy. A silicone impression of the master model was made and thirty standardized working models were then produced: twenty working models for the conventional lost-wax technique and ten working models for the DMLS technique. A total of twenty metal copings were fabricated using two different production techniques, conventional lost-wax and DMLS, with ten samples in each group. The conventional and DMLS copings were cemented to the working models using glass ionomer cement. The marginal gap of the copings was measured at four predetermined points. The dies with the cemented copings were sectioned in a standardized manner with a heavy-duty lathe. Each sectioned sample was then analyzed for the internal gap between the die and the metal coping using a metallurgical microscope. Digital photographs were taken at ×50 magnification and analyzed using measurement software. Statistical analysis was done by unpaired t-test and analysis of variance (ANOVA). The results of this study reveal that no significant difference was present in the marginal gap of conventional and DMLS copings (P > 0.05) by means of ANOVA. The mean values of the internal gap of DMLS copings were significantly greater than those of conventional copings (P < 0.05). Within the limitations of this in vitro study, it was concluded that the internal fit of conventional copings was superior to that of the DMLS copings. The marginal fit of the copings fabricated by the two different techniques had no significant difference.
An Ontology-Based Conceptual Model For Accumulating And Reusing Knowledge In A DMAIC Process
NASA Astrophysics Data System (ADS)
Nguyen, ThanhDat; Kifor, Claudiu Vasile
2015-09-01
DMAIC (Define, Measure, Analyze, Improve, and Control) is an important process used to enhance the quality of processes based on knowledge. However, it is difficult to access DMAIC knowledge. Conventional approaches face a problem arising from structuring and reusing DMAIC knowledge. The main reason is that DMAIC knowledge is not represented and organized systematically. In this article, we overcome the problem with a conceptual model that is a combination of the DMAIC process, knowledge management, and ontology engineering. The main idea of our model is to utilize ontologies to represent the knowledge generated by each of the DMAIC phases. We build five different knowledge bases for storing all knowledge of the DMAIC phases with the support of necessary tools and appropriate techniques from the Information Technology area. Consequently, these knowledge bases make knowledge available to experts, managers, and web users during or after DMAIC execution in order to share and reuse existing knowledge.
Crack identification for rigid pavements using unmanned aerial vehicles
NASA Astrophysics Data System (ADS)
Bahaddin Ersoz, Ahmet; Pekcan, Onur; Teke, Turker
2017-09-01
Pavement condition assessment is an essential piece of modern pavement management systems, as rehabilitation strategies are planned based upon its outcomes. For proper evaluation of existing pavements, they must be continuously and effectively monitored using practical means. Conventionally, truck-based pavement monitoring systems have been in use for assessing the remaining life of in-service pavements. Although such systems produce accurate results, their use can be expensive and data processing can be time consuming, which makes them infeasible considering the demand for quick pavement evaluation. To overcome such problems, Unmanned Aerial Vehicles (UAVs) can be used as an alternative as they are relatively cheaper and easier to use. In this study, we propose a UAV-based pavement crack identification system for monitoring the existing condition of rigid pavements. The system consists of recently introduced image processing algorithms used together with conventional machine learning techniques, both of which are used to perform detection of cracks on the rigid pavement surface and their classification. Through image processing, the distinct features of labelled crack bodies are first obtained from the UAV-based images and then used for training of a Support Vector Machine (SVM) model. The performance of the developed SVM model was assessed with a field study performed along a rigid pavement exposed to low traffic and serious temperature changes. Available cracks were classified using the UAV-based system, and the obtained results indicate that it provides a good alternative solution for pavement monitoring applications.
Johnson, Philip J.; Berhane, Sarah; Kagebayashi, Chiaki; Satomura, Shinji; Teng, Mabel; Reeves, Helen L.; O'Beirne, James; Fox, Richard; Skowronska, Anna; Palmer, Daniel; Yeo, Winnie; Mo, Frankie; Lai, Paul; Iñarrairaegui, Mercedes; Chan, Stephen L.; Sangro, Bruno; Miksad, Rebecca; Tada, Toshifumi; Kumada, Takashi; Toyoda, Hidenori
2015-01-01
Purpose: Most patients with hepatocellular carcinoma (HCC) have associated chronic liver disease, the severity of which is currently assessed by the Child-Pugh (C-P) grade. In this international collaboration, we identify objective measures of liver function/dysfunction that independently influence survival in patients with HCC and then combine these into a model that could be compared with the conventional C-P grade. Patients and Methods: We developed a simple model to assess liver function, based on 1,313 patients with HCC of all stages from Japan, that involved only serum bilirubin and albumin levels. We then tested the model using similar cohorts from other geographical regions (n = 5,097) and other clinical situations (patients undergoing resection [n = 525] or sorafenib treatment for advanced HCC [n = 1,132]). The specificity of the model for liver (dys)function was tested in patients with chronic liver disease but without HCC (n = 501). Results: The model, the Albumin-Bilirubin (ALBI) grade, performed at least as well as the C-P grade in all geographic regions. The majority of patients with HCC had C-P grade A disease at presentation, and within this C-P grade, ALBI revealed two classes with clearly different prognoses. Its utility in patients with chronic liver disease alone supported the contention that the ALBI grade was indeed an index of liver (dys)function. Conclusion: The ALBI grade offers a simple, evidence-based, objective, and discriminatory method of assessing liver function in HCC that has been extensively tested in an international setting. This new model eliminates the need for subjective variables such as ascites and encephalopathy, a requirement in the conventional C-P grade. PMID:25512453
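As an illustration, the ALBI score and grade are commonly quoted in the follow-on literature in the form sketched below (bilirubin in µmol/L, albumin in g/L). The coefficients and cutoffs are reproduced from that secondary usage and should be verified against the original publication; the snippet is for illustration only, not clinical use.

```python
# ALBI score/grade as commonly quoted (verify against the original paper).
import math

def albi_score(bilirubin_umol_l, albumin_g_l):
    # assumed coefficients: 0.66 * log10(bilirubin) - 0.085 * albumin
    return 0.66 * math.log10(bilirubin_umol_l) - 0.085 * albumin_g_l

def albi_grade(score):
    if score <= -2.60:       # assumed grade-1 cutoff
        return 1
    if score <= -1.39:       # assumed grade-2 cutoff
        return 2
    return 3

score = albi_score(bilirubin_umol_l=17.0, albumin_g_l=40.0)   # hypothetical patient
print("ALBI score %.2f -> grade %d" % (score, albi_grade(score)))
```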
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Geet, Otto D.; Fu, Ran; Horowitz, Kelsey A.
NREL studied a new type of photovoltaic (PV) module configuration wherein multiple narrow, tilted slats are mounted in a single frame. Each slat of the PV slat module contains a single row of cells and is made using ordinary crystalline silicon PV module materials and processes, including a glass front sheet and weatherproof polymer encapsulation. Compared to a conventional ballasted system, a system using slat modules offers higher energy production and lower weight at lower LCOE. The key benefits of slat modules are reduced wind loading, improved capacity factor and reduced installation cost. First, the individual slats allow air to flow through, which reduces wind loading. Using PV performance modeling software, we compared the performance of an optimized installation of slat modules to a typical installation of conventional modules in a ballasted rack mounting system. Based on the results of the performance modeling, two different row tilts and spacings were tested in a wind tunnel. Scaled models of the PV slat modules were wind tunnel tested to quantify the wind loading of a slat module system on a commercial rooftop, comparing the results to conventional ballasted rack-mounted PV modules. Some commercial roofs do not have sufficient reserve dead load capacity to accommodate a ballasted system. A reduced-ballast system design could make PV system installation on these roofs feasible for the first time without accepting the disadvantages of penetrating mounts. Finally, technoeconomic analysis was conducted to enable an economic comparison between a conventional commercial rooftop system and a reduced-ballast slat module installation.
Service-based analysis of biological pathways
Zheng, George; Bouguettaya, Athman
2009-01-01
Background: Computer-based pathway discovery is concerned with two important objectives: pathway identification and analysis. Conventional mining and modeling approaches aimed at pathway discovery are often effective at achieving either objective, but not both. Such limitations can be effectively tackled by leveraging a Web service-based modeling and mining approach. Results: Inspired by molecular recognition and drug discovery processes, we developed a Web service mining tool, named PathExplorer, to discover potentially interesting biological pathways linking service models of biological processes. The tool uses an innovative approach to identify useful pathways based on graph-based hints and service-based simulation verifying the user's hypotheses. Conclusion: Web service modeling of biological processes allows the easy access and invocation of these processes on the Web. The Web service mining techniques described in this paper enable the discovery of biological pathways linking these process service models. Algorithms presented in this paper for automatically highlighting interesting subgraphs within an identified pathway network enable the user to formulate hypotheses, which can be tested using the simulation algorithm that is also described in this paper. PMID:19796403
NASA Astrophysics Data System (ADS)
Mkoga, Z. J.; Tumbo, S. D.; Kihupi, N.; Semoka, J.
There is a substantial effort to disseminate conservation tillage practices in Tanzania. Despite widespread field demonstrations, there have been some field experiments meant to assess and verify the suitability of the tillage options in local areas. Many of the experiments are short-lived, and thus the long-term effects of the tillage options are unknown. Experiments to study long-term effects of the tillage options are lacking because they are expensive and cannot be easily managed. Crop simulation models have the ability to use long-term weather data and local soil parameters to assess long-term effects of the tillage practices. The Agricultural Production Systems Simulator (APSIM) crop simulation model was used to simulate long-term production series of soil moisture and grain yield based on the soil and weather conditions in the Mkoji sub-catchment of the great Ruaha river basin in Tanzania. A 24-year simulated maize yield series based on conventional tillage with ox-plough, without surface crop residues (CT), was compared with a similar yield series based on conservation tillage (ox-ripping, with surface crop residues; RR). Results showed that predicted yield averages were significantly higher under conservation tillage than under conventional tillage (P < 0.001). Long-term analysis using the APSIM simulation model showed that average soil moisture in the conservation tillage treatment was significantly higher (P < 0.05) (about 0.29 mm/mm) than in the conventional tillage treatment (0.22 mm/mm) during seasons which received rainfall between 468 and 770 mm. Similarly, the conservation tillage treatment recorded significantly higher yields (4.4 t/ha) (P < 0.01) than the conventional tillage treatment (3.6 t/ha) in the same range of seasonal rainfall. On the other hand, there was no significant difference in soil moisture for the seasons which received rainfall above 770 mm. In these seasons, grain yield in the conservation tillage treatment was significantly lower (3.1 t/ha) than in the conventional tillage treatment (4.8 t/ha) (P < 0.05). Results also indicated a probability of 0.5 of getting higher yield under conservation than under conventional tillage practice. The conservation tillage treatment had the ability to even out acute and long intra-seasonal dry spells. For example, a 36-day agricultural dry spell which occurred between the 85th and 130th day after planting in the 1989/1990 season (in the CT treatment) was reduced to zero days in the RR treatment by maintaining soil moisture above the critical point. Critical soil moisture for maize was measured at 0.55 of the maximum soil moisture that can be depleted by the crop (0.55 D). It is concluded that the conservation tillage practice in which ripping and surface crop residues are used is much more effective in mitigating dry spells and increasing productivity in a seasonal rainfall range of between 460 and 770 mm. It is recommended that farmers in the area adopt this type of conservation tillage because rainfall was in this range (460-770 mm) in 12 out of the past 24 years, indicating the possibility of yield losses once every 2 years.
Revisiting the 'Low BirthWeight paradox' using a model-based definition.
Juárez, Sol; Ploubidis, George B; Clarke, Lynda
2014-01-01
Immigrant mothers in Spain have a lower risk of delivering Low BirthWeight (LBW) babies in comparison to Spaniards (the LBW paradox). This study aimed at revisiting this finding by applying a model-based threshold as an alternative to the conventional definition of LBW. Vital information data from Madrid were used (2005-2006). LBW was defined in two ways (less than 2500 g and Wilcox's proposal). Logistic and linear regression models were run. According to the common definition of LBW (less than 2500 g), there is evidence to support the LBW paradox in Spain. Nevertheless, when an alternative model-based definition of LBW is used, the paradox is only clearly present in mothers from the rest of Southern America, suggesting a possible methodological bias effect. In the future, any examination of the existence of the LBW paradox should incorporate model-based definitions of LBW in order to avoid methodological bias. Copyright © 2013 SESPAS. Published by Elsevier Espana. All rights reserved.
Verification and Validation of Autonomy Software at NASA
NASA Technical Reports Server (NTRS)
Pecheur, Charles
2000-01-01
Autonomous software holds the promise of new operation possibilities, easier design and development and lower operating costs. However, as those systems close control loops and arbitrate resources on board with specialized reasoning, the range of possible situations becomes very large and uncontrollable from the outside, making conventional scenario-based testing very inefficient. Analytic verification and validation (V&V) techniques, and model checking in particular, can provide significant help for designing autonomous systems in a more efficient and reliable manner, by providing a better coverage and allowing early error detection. This article discusses the general issue of V&V of autonomy software, with an emphasis towards model-based autonomy, model-checking techniques and concrete experiments at NASA.
Modeling Collaborative Interaction Patterns in a Simulation-Based Task
ERIC Educational Resources Information Center
Andrews, Jessica J.; Kerr, Deirdre; Mislevy, Robert J.; von Davier, Alina; Hao, Jiangang; Liu, Lei
2017-01-01
Simulations and games offer interactive tasks that can elicit rich data, providing evidence of complex skills that are difficult to measure with more conventional items and tests. However, one notable challenge in using such technologies is making sense of the data generated in order to make claims about individuals or groups. This article…
Directed Student Inquiry: Modeling in Roborovsky Hamsters
ERIC Educational Resources Information Center
Elwess, Nancy L.; Bouchard, Adam
2007-01-01
In this inquiry-based activity, Roborovsky hamsters are used to provide students with an opportunity to develop their skills of analysis, inquiry, and design. These hamsters are easy to maintain, yet offer students a means to use conventional techniques and those of their own design to make further observations through measuring, assessing, and…
Let's Cancel the Dog-and-Pony Show
ERIC Educational Resources Information Center
Marshall, Kim
2012-01-01
Why are so many educators willing to give credence to observations based on announced visits? Perhaps it's avoidance or a failure to distinguish between good teachers and good teaching, or perhaps it's the way the conventional teacher-evaluation model limits administrators' options. To put it bluntly, an evaluation process that relies on announced…
ERIC Educational Resources Information Center
Lintao, Rachelle B.; Erfe, Jonathan P.
2012-01-01
This study purports to foster the understanding of profession-based academic writing in two different cultural conventions by examining the rhetorical moves employed by American and Philippine thesis introductions in Architecture using Swales' 2004 Revised CARS move-analytic model as framework. Twenty (20) Master's thesis introductions in…
DOT National Transportation Integrated Search
2010-10-01
Ultra-high performance concrete (UHPC) is an advanced cementitious composite material which has been developed in recent decades. When compared to more conventional cement-based concrete materials, UHPC tends to exhibit superior properties such as in...
Safeguards Technology Development Program 1st Quarter FY 2018 Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prasad, Manoj K.
LLNL will evaluate the performance of a stilbene-based scintillation detector array for IAEA neutron multiplicity counting (NMC) applications. This effort will combine newly developed modeling methodologies and recently acquired high-efficiency stilbene detector units to quantitatively compare the prototype system performance with the conventional He-3 counters and liquid scintillator alternatives.
Robustness of Ability Estimation to Multidimensionality in CAST with Implications to Test Assembly
ERIC Educational Resources Information Center
Zhang, Yanwei; Nandakumar, Ratna
2006-01-01
Computer Adaptive Sequential Testing (CAST) is a test delivery model that combines features of the traditional conventional paper-and-pencil testing and item-based computerized adaptive testing (CAT). The basic structure of CAST is a panel composed of multiple testlets adaptively administered to examinees at different stages. Current applications…
Enculturating Seamless Language Learning through Artifact Creation and Social Interaction Process
ERIC Educational Resources Information Center
Wong, Lung-Hsiang; Chai, Ching Sing; Aw, Guat Poh; King, Ronnel B.
2015-01-01
This paper reports a design-based research (DBR) cycle of MyCLOUD (My Chinese ubiquitOUs learning Days). MyCLOUD is a seamless language learning model that addresses identified limitations of conventional Chinese language teaching, such as the decontextualized and unauthentic learning processes that usually hinder reflection and deep learning.…
NASA Astrophysics Data System (ADS)
Li, Shao-Xin; Zeng, Qiu-Yao; Li, Lin-Fang; Zhang, Yan-Jiao; Wan, Ming-Ming; Liu, Zhi-Ming; Xiong, Hong-Lian; Guo, Zhou-Yi; Liu, Song-Hao
2013-02-01
The ability of combining serum surface-enhanced Raman spectroscopy (SERS) with a support vector machine (SVM) to improve the classification of esophageal cancer patients versus normal volunteers is investigated. Two groups of serum SERS spectra based on silver nanoparticles (AgNPs) are obtained: one group from patients with pathologically confirmed esophageal cancer (n=30) and the other group from healthy volunteers (n=31). Principal components analysis (PCA), conventional SVM (C-SVM) and conventional SVM combined with PCA (PCA-SVM) methods are implemented to classify the same spectral dataset. Results show that a diagnostic accuracy of 77.0% is acquired for the PCA technique, while diagnostic accuracies of 83.6% and 85.2% are obtained for the C-SVM and PCA-SVM methods based on radial basis function (RBF) models. The results prove that RBF SVM models are superior to the PCA algorithm in classifying serum SERS spectra. The study demonstrates that serum SERS in combination with the SVM technique has great potential to provide an effective and accurate diagnostic schema for noninvasive detection of esophageal cancer.
Topology-aware illumination design for volume rendering.
Zhou, Jianlong; Wang, Xiuying; Cui, Hui; Gong, Peng; Miao, Xianglin; Miao, Yalin; Xiao, Chun; Chen, Fang; Feng, Dagan
2016-08-19
Direct volume rendering is one of the flexible and effective approaches to inspect large volumetric data such as medical and biological images. In conventional volume rendering, it is often time consuming to set up a meaningful illumination environment. Moreover, conventional illumination approaches usually assign the same values of the variables of an illumination model to different structures manually and thus neglect the important illumination variations due to structure differences. We introduce a novel illumination design paradigm for volume rendering on the basis of topology to automate illumination parameter definitions meaningfully. The topological features are extracted from the contour tree of an input volumetric dataset. The automation of illumination design is achieved based on four aspects: attenuation, distance, saliency, and contrast perception. To better distinguish structures and maximize illuminance perception differences of structures, a two-phase topology-aware illuminance perception contrast model is proposed based on the psychological concept of Just-Noticeable-Difference. The proposed approach allows meaningful and efficient automatic generation of illumination in volume rendering. Our results showed that our approach is more effective in depth and shape depiction, as well as providing higher perceptual differences between structures.
Xu, Ming; Allenby, Braden; Kim, Junbeum; Kahhat, Ramzy
2009-04-15
The dynamics of an e-commerce market and the associated environmental impacts are explored from a bottom-up perspective using an agent-based model. A conceptual meta-theory from psychology is adopted to form the behavioral rules of artificial consumers choosing different methods of buying a book, including conventional bookstores, e-commerce, and a proposed self-pick-up option. Given the energy and emissions savings that result from a shift to e-commerce from bookstore purchase, it appears that reductions in environmental impacts are relatively probable. Additionally, our results suggest that the shift to e-commerce is mainly due to the growth of Internet users, which ties energy and emissions savings to Internet penetration. Moreover, under any scenario, energy and emissions savings will be provided by the introduction of the proposed self-pick-up option. Our model thus provides insights into market behaviors and related environmental impacts of the growing use of e-commerce systems at the retail level, and provides a basis for the development and implementation of more sustainable policies and practices.
GPU-accelerated FDTD modeling of radio-frequency field-tissue interactions in high-field MRI.
Chi, Jieru; Liu, Feng; Weber, Ewald; Li, Yu; Crozier, Stuart
2011-06-01
The analysis of high-field RF field-tissue interactions requires high-performance finite-difference time-domain (FDTD) computing. Conventional CPU-based FDTD calculations offer limited computing performance in a PC environment. This study presents a graphics processing unit (GPU)-based parallel-computing framework, producing substantially boosted computing efficiency (with a two-order-of-magnitude speedup factor) at a PC-level cost. Specific details of implementing the FDTD method on a GPU architecture have been presented and the new computational strategy has been successfully applied to the design of a novel 8-element transceive RF coil system at 9.4 T. Facilitated by the powerful GPU-FDTD computing, the new RF coil array offers optimized fields (averaging 25% improvement in sensitivity, and 20% reduction in loop coupling compared with conventional array structures of the same size) for small animal imaging with a robust RF configuration. The GPU-enabled acceleration paves the way for FDTD to be applied for both detailed forward modeling and inverse design of MRI coils, which were previously impractical.
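The reason FDTD maps so well onto GPUs is that each time step is an elementwise stencil update, as in the minimal 2-D Yee (TM-mode) sketch below; the same array expressions can be offloaded to a GPU, for example by swapping numpy for cupy. Grid sizes, source, and constants here are illustrative only and unrelated to the 9.4 T coil design.

```python
# Minimal 2-D FDTD (TM mode) with vectorized Yee updates.
import numpy as np

nx, ny, steps = 256, 256, 200
c0, dx = 3e8, 1e-3
dt = dx / (c0 * 2 ** 0.5) * 0.99                 # Courant-stable time step in 2-D
eps0, mu0 = 8.854e-12, 4e-7 * np.pi

Ez = np.zeros((nx, ny))
Hx = np.zeros((nx, ny - 1))
Hy = np.zeros((nx - 1, ny))

for n in range(steps):
    # update magnetic fields from the curl of Ez
    Hx -= dt / (mu0 * dx) * (Ez[:, 1:] - Ez[:, :-1])
    Hy += dt / (mu0 * dx) * (Ez[1:, :] - Ez[:-1, :])
    # update Ez from the curl of H (interior points only)
    Ez[1:-1, 1:-1] += dt / (eps0 * dx) * (
        (Hy[1:, 1:-1] - Hy[:-1, 1:-1]) - (Hx[1:-1, 1:] - Hx[1:-1, :-1])
    )
    Ez[nx // 2, ny // 2] += np.exp(-((n - 40) / 12.0) ** 2)   # soft Gaussian source

print("peak |Ez| after %d steps: %.3e" % (steps, np.abs(Ez).max()))
```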
Geoelectrical inference of mass transfer parameters using temporal moments
Day-Lewis, Frederick D.; Singha, Kamini
2008-01-01
We present an approach to infer mass transfer parameters based on (1) an analytical model that relates the temporal moments of mobile and bulk concentration and (2) a bicontinuum modification to Archie's law. Whereas conventional geochemical measurements preferentially sample from the mobile domain, electrical resistivity tomography (ERT) is sensitive to bulk electrical conductivity and, thus, electrolytic solute in both the mobile and immobile domains. We demonstrate the new approach, in which temporal moments of collocated mobile domain conductivity (i.e., conventional sampling) and ERT‐estimated bulk conductivity are used to calculate heterogeneous mass transfer rate and immobile porosity fractions in a series of numerical column experiments.
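The temporal moments that feed such an analysis are straightforward to compute from concentration histories, as in the sketch below; the breakthrough curves here are synthetic placeholders standing in for mobile-domain samples and ERT-estimated bulk conductivity, and the moment-to-parameter mapping of the paper is not reproduced.

```python
# Zeroth and first temporal moments of mobile vs. bulk concentration histories.
import numpy as np

t = np.linspace(0, 100, 1001)                                   # hours
mobile = np.exp(-0.5 * ((t - 30) / 6.0) ** 2)                   # fluid-sampled concentration (toy)
bulk = 0.8 * np.exp(-0.5 * ((t - 36) / 9.0) ** 2)               # ERT-derived bulk proxy (toy)

def temporal_moments(t, c):
    m0 = np.trapz(c, t)                       # zeroth moment (mass)
    m1 = np.trapz(t * c, t) / m0              # first normalized moment (mean arrival time)
    return m0, m1

m0_mobile, m1_mobile = temporal_moments(t, mobile)
m0_bulk, m1_bulk = temporal_moments(t, bulk)
print("mean arrival time, mobile: %.1f h, bulk: %.1f h" % (m1_mobile, m1_bulk))
print("lag of bulk behind mobile (related to mass transfer): %.1f h" % (m1_bulk - m1_mobile))
```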
[The development of the ethical thinking in children and the teaching of ethics in pediatrics].
Lejarraga, Horacio
2008-10-01
The child's ethical thinking is not installed in his mind as a single act, but as a consequence of an evolving process. Kohlberg, based on Piaget's studies, described three main developmental stages: preconventional, conventional and postconventional. However, Vygotsky and others emphasized the importance of the environment for the moral sculpting of children. Three models can be recognised for teaching ethics to children: the deontological way, the descriptive way, and the only one morally acceptable: the one used by Socrates, by which ethics becomes not merely an adjective, but an institutionalised social practice built on an axiological basis.
Research on Modeling of Propeller in a Turboprop Engine
NASA Astrophysics Data System (ADS)
Huang, Jiaqin; Huang, Xianghua; Zhang, Tianhong
2015-05-01
In the simulation of the engine-propeller integrated control system for a turboprop aircraft, a real-time propeller model with high accuracy is required. A study is conducted to compare the real-time and precision performance of propeller models based on strip theory and lifting surface theory. The emphasis in modeling by strip theory is focused on three points, as follows: First, FLUENT is adopted to calculate the lift and drag coefficients of the propeller. Next, a method to calculate the induced velocity which occurs in the ground rig test is presented. Finally, an approximate method is proposed to obtain the downwash angle of the propeller when the conventional algorithm has no solution. An advanced approximation of the velocities induced by helical horseshoe vortices is applied in the model based on lifting surface theory. This approximate method reduces computing time while retaining good accuracy. Comparison between the two modeling techniques shows that the model based on strip theory, which has the advantage in both real-time performance and accuracy, can better meet the requirement.
Accurate lithography simulation model based on convolutional neural networks
NASA Astrophysics Data System (ADS)
Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki
2017-07-01
Lithography simulation is an essential technique for today's semiconductor manufacturing process. In order to calculate an entire chip in realistic time, a compact resist model is commonly used. The model is established for faster calculation. To obtain an accurate compact resist model, it is necessary to determine a complicated non-linear model function. However, it is difficult to choose an appropriate function manually because there are many options. This paper proposes a new compact resist model using CNN (Convolutional Neural Networks), which is one of the deep learning techniques. The CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show that the CNN model can reduce CD prediction errors by 70% compared with the conventional model.
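Conceptually, such a model replaces the hand-picked non-linear function with a small convolutional network mapping an aerial-image patch to a resist quantity. The PyTorch sketch below is an assumed toy architecture trained on random placeholder data; it does not reproduce the authors' network or dataset.

```python
# Toy CNN regression from an aerial-image patch to a resist quantity (e.g. CD bias).
import torch
import torch.nn as nn

class CompactResistCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)          # predicted resist response

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = CompactResistCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

images = torch.randn(64, 1, 32, 32)           # placeholder aerial-image patches
targets = torch.randn(64, 1)                  # placeholder simulated/measured CDs

for epoch in range(5):                        # tiny illustrative training loop
    optimizer.zero_grad()
    loss = loss_fn(model(images), targets)
    loss.backward()
    optimizer.step()
print("final training loss:", float(loss))
```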
Tumor-on-a-chip platforms for assessing nanoparticle-based cancer therapy.
Wang, Yimin; Cuzzucoli, Fabio; Escobar, Andres; Lu, Siming; Liang, Liguo; Wang, ShuQi
2018-08-17
Cancer has become the most prevalent cause of death, placing a huge economic and healthcare burden worldwide. Nanoparticles (NPs), as a key component of nanomedicine, provide alternative options for promoting the efficacy of cancer therapy. Current conventional cancer models have limitations in predicting the effects of various cancer treatments. To overcome these limitations, biomimetic and novel 'tumor-on-a-chip' platforms have emerged alongside other innovative biomedical engineering methods that enable the evaluation of NP-based cancer therapy. In this review, we first describe cancer models for the evaluation of NP-based cancer therapy techniques, and then present the latest advances in 'tumor-on-a-chip' platforms that can potentially facilitate clinical translation of NP-based cancer therapies.
Poland, Bill; Teischinger, Florian
2017-11-01
As suggested by the Food and Drug Administration (FDA) Modified Risk Tobacco Product (MRTP) Applications Draft Guidance, we developed a statistical model based on public data to explore the effect on population mortality of an MRTP resulting in reduced conventional cigarette smoking. Many cigarette smokers who try an MRTP persist as dual users while smoking fewer conventional cigarettes per day (CPD). Lower-CPD smokers have lower mortality risk based on large cohort studies. However, with little data on the effect of smoking reduction on mortality, predictive modeling is needed. We generalize prior assumptions of gradual, exponential decay of Excess Risk (ER) of death, relative to never-smokers, after quitting or reducing CPD. The same age-dependent slopes are applied to all transitions, including initiation to conventional cigarettes and to a second product (MRTP). A Monte Carlo simulation model generates random individual product use histories, including CPD, to project cumulative deaths through 2060 in a population with versus without the MRTP. Transitions are modeled to and from dual use, which affects CPD and cigarette quit rates, and to MRTP use only. Results in a hypothetical scenario showed high sensitivity of long-run mortality to CPD reduction levels and moderate sensitivity to ER transition rates. Models to project population effects of an MRTP should account for possible mortality effects of reduced smoking among dual users. In addition, studies should follow dual-user CPD histories and quit rates over long time periods to clarify long-term usage patterns and thereby improve health impact projections. We simulated mortality effects of a hypothetical MRTP accounting for cigarette smoking reduction by smokers who add MRTP use. Data on relative mortality risk versus CPD suggest that this reduction may have a substantial effect on mortality rates, unaccounted for in other models. This effect is weighed with additional hypothetical effects in an example. © The Author 2017. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Ben Fathallah, Mohamed Ali; Ben Othman, Afef; Besbes, Mongi
2018-02-01
Photovoltaic energy is important for meeting electrical energy needs in remote areas and for other applications. Energy storage systems are essential to compensate for the intermittent production of photovoltaic energy and to cover peaks in energy demand. The supercapacitor, also known as an electrochemical double-layer capacitor, is a storage device with a much higher power density than a conventional battery; it can store and deliver a large amount of electrical energy over short time periods, which makes it attractive for storing photovoltaic energy. On this basis, this paper presents a three-branch RC model of a supercapacitor that describes its dynamics during the charging, discharging and rest phases. After the model was validated against the experimental study of Zubieta, the supercapacitor's performance was demonstrated and compared with that of a conventional battery in a photovoltaic converter chain powering an AC machine.
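A minimal sketch of a Zubieta-style three-branch RC supercapacitor model under constant-current charging is shown below; the branch resistances, capacitances, and the voltage-dependent term are placeholder values, not the parameters identified in the paper or in Zubieta's experiments, and the leakage branch is omitted for brevity.

```python
# Sketch of a three-branch RC supercapacitor model (fast, medium, slow branches)
# charged at constant current. Parameter values are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

R = [0.01, 1.0, 10.0]      # series resistance of each branch (ohm)
C = [100.0, 50.0, 30.0]    # branch capacitances (F); branch 1 also has a kv term
KV = 20.0                  # voltage-dependent capacitance of branch 1 (F/V)
I_CHARGE = 10.0            # charging current (A)

def dv_dt(t, v):
    """v = branch voltages [v1, v2, v3]."""
    c1 = C[0] + KV * v[0]                       # differential capacitance of branch 1
    g = np.array([1 / R[0], 1 / R[1], 1 / R[2]])
    # Terminal voltage vt satisfies: sum_k (vt - v_k) / R_k = I_charge
    vt = (I_CHARGE + np.sum(g * v)) / np.sum(g)
    i = (vt - v) / np.array(R)                  # current into each branch
    return i / np.array([c1, C[1], C[2]])

sol = solve_ivp(dv_dt, (0, 120), [0.0, 0.0, 0.0], max_step=0.1)
print("branch voltages after 120 s of charging:", sol.y[:, -1])
```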
NASA Technical Reports Server (NTRS)
Lindholm, F. A.
1982-01-01
A simple expression for the capacitance C(V) associated with the transition region of a p-n junction under forward bias is derived by phenomenological reasoning. The treatment of C(V) is based on the conventional Shockley equations, and simpler expressions for C(V) result that are in general accord with the previous analytical and numerical results. C(V) consists of two components resulting from changes in majority carrier concentration and from free hole and electron accumulation in the space-charge region. The space-charge region is conceived as the intrinsic region of an n-i-p structure for a space-charge region markedly wider than the extrinsic Debye lengths at its edges. This region is excited in the sense that the forward bias creates hole and electron densities orders of magnitude larger than those in equilibrium. The recent Shirts-Gordon (1979) modeling of the space-charge region using a dielectric response function is contrasted with the more conventional Schottky-Shockley modeling.
A 3-D enlarged cell technique (ECT) for elastic wave modelling of a curved free surface
NASA Astrophysics Data System (ADS)
Wei, Songlin; Zhou, Jianyang; Zhuang, Mingwei; Liu, Qing Huo
2016-09-01
The conventional finite-difference time-domain (FDTD) method for elastic waves suffers from the staircasing error when applied to model a curved free surface because of its structured grid. In this work, an improved, stable and accurate 3-D FDTD method for elastic wave modelling on a curved free surface is developed based on the finite volume method and enlarged cell technique (ECT). To achieve a sufficiently accurate implementation, a finite volume scheme is applied to the curved free surface to remove the staircasing error; in the meantime, to achieve the same stability as the FDTD method without reducing the time step increment, the ECT is introduced to preserve the solution stability by enlarging small irregular cells into adjacent cells under the condition of conservation of force. This method is verified by several 3-D numerical examples. Results show that the method is stable at the Courant stability limit for a regular FDTD grid, and has much higher accuracy than the conventional FDTD method.
Wang, Lingling; Fu, Li
2018-01-01
In order to decrease the velocity sculling error under vibration environments, a new sculling error compensation algorithm for strapdown inertial navigation system (SINS) using angular rate and specific force measurements as inputs is proposed in this paper. First, the sculling error formula in incremental velocity update is analytically derived in terms of the angular rate and specific force. Next, two-time scale perturbation models of the angular rate and specific force are constructed. The new sculling correction term is derived and a gravitational search optimization method is used to determine the parameters in the two-time scale perturbation models. Finally, the performance of the proposed algorithm is evaluated in a stochastic real sculling environment, which is different from the conventional algorithms simulated in a pure sculling circumstance. A series of test results demonstrate that the new sculling compensation algorithm can achieve balanced real/pseudo sculling correction performance during velocity update with the advantage of less computation load compared with conventional algorithms. PMID:29346323
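For context, the sketch below shows a textbook-style two-sample velocity update with rotation and sculling compensation, the kind of conventional baseline the proposed algorithm is compared against; it is not the paper's new gravitational-search-optimized correction, and the standard two-sample coefficients are assumed.

```python
# Conventional two-sample sculling correction sketch for a SINS velocity update.
# dtheta1, dtheta2: angular increments over the two sub-intervals (rad)
# dv1, dv2: integrated specific-force (velocity) increments over the same sub-intervals (m/s)
import numpy as np

def velocity_increment_with_sculling(dtheta1, dtheta2, dv1, dv2):
    dtheta = dtheta1 + dtheta2
    dv = dv1 + dv2
    rotation_term = 0.5 * np.cross(dtheta, dv)             # velocity rotation compensation
    sculling_term = (2.0 / 3.0) * (np.cross(dtheta1, dv2)  # classic two-sample sculling term
                                   + np.cross(dv1, dtheta2))
    return dv + rotation_term + sculling_term

# Example with small, made-up increments:
print(velocity_increment_with_sculling(
    np.array([1e-4, 0.0, 0.0]), np.array([0.0, 1e-4, 0.0]),
    np.array([0.01, 0.0, 0.0]), np.array([0.0, 0.01, 0.0])))
```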
Sankari, Ziad; Adeli, Hojjat
2011-04-15
Recently, the authors presented an EEG (electroencephalogram) coherence study of Alzheimer's disease (AD) and found statistically significant differences between AD and control groups. In this paper a probabilistic neural network (PNN) model is presented for classification of AD patients and healthy controls using features extracted in coherence and wavelet coherence studies on cortical connectivity in AD. The model is verified using EEGs obtained from 20 probable AD patients and 7 healthy control subjects based on a standard 10-20 electrode configuration on the scalp. It is shown that extracting features from EEG sub-bands using coherence, as a measure of cortical connectivity, can discriminate AD patients from healthy controls effectively when a mixed band classification model is applied. For the data set used, a classification accuracy of 100% is achieved using the conventional coherence and a spread parameter of the Gaussian function in a particular range found in this research. Copyright © 2011 Elsevier B.V. All rights reserved.
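A minimal PNN classifier sketch is given below: a Parzen-window estimate with a Gaussian kernel per class, where `spread` plays the role of the Gaussian spread parameter mentioned in the abstract. The feature vectors here are random placeholders standing in for EEG sub-band coherence features.

```python
# Minimal probabilistic neural network (PNN) classifier sketch.
import numpy as np

def pnn_predict(X_train, y_train, x, spread=0.1):
    scores = {}
    for label in np.unique(y_train):
        pats = X_train[y_train == label]
        d2 = np.sum((pats - x) ** 2, axis=1)          # squared distances to stored patterns
        scores[label] = np.mean(np.exp(-d2 / (2.0 * spread ** 2)))  # class-wise kernel density
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
X = rng.normal(size=(27, 8))          # 27 subjects x 8 coherence features (placeholder)
y = np.array([1] * 20 + [0] * 7)      # 1 = AD, 0 = control
print(pnn_predict(X, y, X[0], spread=0.1))
```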
Reddy, Krishna R; Kumar, Girish; Giri, Rajiv K
2017-05-01
A two-dimensional (2-D) mathematical model is presented to predict the response of municipal solid waste (MSW) in conventional as well as bioreactor landfills undergoing coupled hydro-bio-mechanical processes. The newly developed and validated 2-D coupled mathematical modeling framework combines and simultaneously solves a two-phase flow model based on the unsaturated Richards equation, a plane-strain formulation of the Mohr-Coulomb mechanical model and a first-order decay kinetics biodegradation model. The performance of both conventional and bioreactor landfills was investigated holistically, by evaluating the mechanical settlement, the extent of waste degradation with subsequent changes in geotechnical properties, landfill slope stability, and the in-plane shear behavior (shear stress-displacement) of the composite liner system and final cover system. It is concluded that, for the specific conditions considered, the bioreactor landfill attained overall stabilization after 16 years of continuous leachate injection, whereas stabilization was observed after around 50 years post-closure in conventional landfills, with a total vertical strain of 36% and 37% for bioreactor and conventional landfills, respectively. The significant changes in landfill settlement, the extent of MSW degradation and MSW geotechnical properties, along with their influence on the in-plane shear response of the composite liner and final cover system, between the conventional and bioreactor landfills, observed using the mathematical model proposed in this study, corroborate the importance of considering coupled hydro-bio-mechanical processes while designing and predicting the performance of engineered bioreactor landfills. The study underscores the importance of considering the effect of coupled processes while examining the stability and integrity of the liner and cover systems, which form integral components of a landfill. Moreover, the spatial and temporal variations in landfill settlement, the stability of the landfill slope under pressurized leachate injection conditions and the rapid changes in MSW properties with degradation emphasize the complexity of the bioreactor landfill system and the need to understand the interrelated processes to design and operate stable and effective bioreactor landfills. A detailed discussion of the results obtained from the numerical simulations, along with limitations and key challenges of this study, is also presented. Copyright © 2016 Elsevier Ltd. All rights reserved.
On the Tuning of High-Resolution NMR Probes
Pöschko, Maria Theresia; Schlagnitweit, Judith; Huber, Gaspard; Nausner, Martin; Horničáková, Michaela; Desvaux, Hervé; Müller, Norbert
2014-01-01
Three optimum conditions for the tuning of NMR probes are compared: the conventional tuning optimum, which is based on radio-frequency pulse efficiency, the spin noise tuning optimum based on the line shape of the spin noise signal, and the newly introduced frequency shift tuning optimum, which minimizes the frequency pushing effect on strong signals. The latter results if the radiation damping feedback field is not in perfect quadrature to the precessing magnetization. According to the conventional RLC (resistor–inductor–capacitor) resonant circuit model, the optima should be identical, but significant deviations are found experimentally at low temperatures, in particular on cryogenically cooled probes. The existence of different optima with respect to frequency pushing and spin noise line shape has important consequences on the nonlinearity of spin dynamics at high polarization levels and the implementation of experiments on cold probes. PMID:25210000
Vision-Based UAV Flight Control and Obstacle Avoidance
2006-01-01
denoted it by Vb = (Vb1, Vb2, Vb3). Fig. 2 shows the block diagram of the proposed vision-based motion analysis and obstacle avoidance system. We denote... structure analysis often involve computation-intensive computer vision tasks, such as feature extraction and geometric modeling. Computation-intensive... 1) First, we extract a set of features from each block. 2) Second, we compute the distance between these two sets of features. In conventional motion
MEqTrees Telescope and Radio-sky Simulations and CPU Benchmarking
NASA Astrophysics Data System (ADS)
Shanmugha Sundaram, G. A.
2009-09-01
MEqTrees is a Python-based implementation of the classical Measurement Equation, wherein the various 2×2 Jones matrices are parametrized representations in the spatial and sky domains for any generic radio telescope. Customized simulations of radio-source sky models and corrupt Jones terms are demonstrated based on a policy framework, with performance estimates derived for array configurations, "dirty"-map residuals and processing power requirements for such computations on conventional platforms.
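The 2×2 building block that MEqTrees parametrizes can be sketched in a few lines: the visibility for antenna pair (p, q) is J_p B J_q^H, with B the source coherency matrix and J_p, J_q per-antenna Jones matrices. The gain values below are made-up placeholders, not MEqTrees API calls.

```python
# Sketch of the 2x2 measurement-equation (RIME) building block.
import numpy as np

B = np.array([[1.0, 0.0],        # coherency of an unpolarized 1 Jy source (placeholder)
              [0.0, 1.0]], dtype=complex)
J_p = np.array([[1.02 * np.exp(1j * 0.1), 0], [0, 0.98]], dtype=complex)          # antenna p gains
J_q = np.array([[0.97, 0], [0, 1.01 * np.exp(-1j * 0.05)]], dtype=complex)        # antenna q gains

V_pq = J_p @ B @ J_q.conj().T    # predicted 2x2 visibility for baseline (p, q)
print(V_pq)
```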
A Pseudo-Vertical Equilibrium Model for Slow Gravity Drainage Dynamics
NASA Astrophysics Data System (ADS)
Becker, Beatrix; Guo, Bo; Bandilla, Karl; Celia, Michael A.; Flemisch, Bernd; Helmig, Rainer
2017-12-01
Vertical equilibrium (VE) models are computationally efficient and have been widely used for modeling fluid migration in the subsurface. However, they rely on the assumption of instant gravity segregation of the two fluid phases which may not be valid especially for systems that have very slow drainage at low wetting phase saturations. In these cases, the time scale for the wetting phase to reach vertical equilibrium can be several orders of magnitude larger than the time scale of interest, rendering conventional VE models unsuitable. Here we present a pseudo-VE model that relaxes the assumption of instant segregation of the two fluid phases by applying a pseudo-residual saturation inside the plume of the injected fluid that declines over time due to slow vertical drainage. This pseudo-VE model is cast in a multiscale framework for vertically integrated models with the vertical drainage solved as a fine-scale problem. Two types of fine-scale models are developed for the vertical drainage, which lead to two pseudo-VE models. Comparisons with a conventional VE model and a full multidimensional model show that the pseudo-VE models have much wider applicability than the conventional VE model while maintaining the computational benefit of the conventional VE model.
Klasmeier, Jörg; Matthies, Michael; Macleod, Matthew; Fenner, Kathrin; Scheringer, Martin; Stroebe, Maximilian; Le Gall, Anne Christine; Mckone, Thomas; Van De Meent, Dik; Wania, Frank
2006-01-01
We propose a multimedia model-based methodology to evaluate whether a chemical substance qualifies as POP-like based on overall persistence (Pov) and potential for long-range transport (LRTP). It relies upon screening chemicals against the Pov and LRTP characteristics of selected reference chemicals with well-established environmental fates. Results indicate that chemicals of high and low concern in terms of persistence and long-range transport can be consistently identified by eight contemporary multimedia models using the proposed methodology. Model results for three hypothetical chemicals illustrate that the model-based classification of chemicals according to Pov and LRTP is not always consistent with the single-media half-life approach proposed by the UNEP Stockholm Convention and that the models provide additional insight into the likely long-term hazards associated with chemicals in the environment. We suggest this model-based classification method be adopted as a complement to screening against defined half-life criteria at the initial stages of tiered assessments designed to identify POP-like chemicals and to prioritize further environmental fate studies for new and existing chemicals.
Meier, Matthias S; Stoessel, Franziska; Jungbluth, Niels; Juraske, Ronnie; Schader, Christian; Stolze, Matthias
2015-02-01
Comprehensive assessment tools are needed that reliably describe environmental impacts of different agricultural systems in order to develop sustainable, high-yielding agricultural production systems with minimal impacts on the environment. Today, Life Cycle Assessment (LCA) is increasingly used to assess and compare the environmental sustainability of agricultural products from conventional and organic agriculture. However, LCA studies comparing agricultural products from conventional and organic farming systems report a wide variation in the resource efficiency of products from these systems. The studies show that impacts per unit of farmed land are usually lower in organic systems, but impacts per quantity produced are often higher. We reviewed 34 comparative LCA studies of organic and conventional agricultural products to analyze whether this result is solely due to the usually lower yields in organic systems or also due to inaccurate modeling within LCA. Comparative LCAs on agricultural products from organic and conventional farming systems often do not adequately differentiate the specific characteristics of the respective farming system in the goal and scope definition and in the inventory analysis. Further, often only a limited number of impact categories are assessed within the impact assessment, which does not allow for a comprehensive environmental assessment. The most critical points we identified relate to the nitrogen (N) fluxes influencing acidification, eutrophication, and global warming potential, and biodiversity. Usually, N-emissions in LCA inventories of agricultural products are based on model calculations. Modeled N-emissions often do not correspond to the actual amount of N left in the system that may result in potential emissions. Reasons for this may be that N-models are not well adapted to the mode of action of organic fertilizers and that N-emission models often are built on assumptions from conventional agriculture, leading to even greater deviations for organic systems between the amount of N calculated by emission models and the actual amount of N available for emissions. Improvements are needed regarding a more precise differentiation between farming systems and regarding the development of N emission models that better represent actual N-fluxes within different systems. We recommend adjusting N- and C-emissions during farmyard manure management and farmyard manure fertilization in plant production to the feed ration provided in the animal production of the respective farming system, which leads to different N- and C-compositions within the excrement. In the future, more representative background data on organic farming systems (e.g. N content of farmyard manure) should be generated and compiled so as to be available for use within LCA inventories. Finally, we recommend conducting consequential LCA - if possible - when using LCA for policy-making or strategic environmental planning to account for different functions of the analyzed farming systems. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ren, Weiwei; Yang, Tao; Shi, Pengfei; Xu, Chong-yu; Zhang, Ke; Zhou, Xudong; Shao, Quanxi; Ciais, Philippe
2018-06-01
Climate change exerts a profound influence on the regional hydrological cycle and water security in many alpine regions worldwide. Investigating regional climate impacts using watershed-scale hydrological models requires a large amount of input data such as topography, meteorological and hydrological data. However, data scarcity in alpine regions seriously restricts evaluation of climate change impacts on the water cycle using conventional approaches based on global or regional climate models, statistical downscaling methods and hydrological models. Therefore, this study is dedicated to the development of a probabilistic model to replace the conventional approaches for streamflow projection. The probabilistic model was built upon an advanced Bayesian Neural Network (BNN) approach directly fed by the large-scale climate predictor variables and tested in a typical data-sparse alpine region, the Kaidu River basin in Central Asia. Results show that the BNN model performs better than the general methods across a number of statistical measures. The BNN method, with flexible model structures provided by active indicator functions that reduce the dependence on the initial specification of the input variables and the number of hidden units, can work well in a data-limited region. Moreover, it can provide more reliable streamflow projections with a robust generalization ability. Forced by the latest bias-corrected GCM scenarios, streamflow projections for the 21st century under three RCP emission pathways were constructed and analyzed. Briefly, the proposed probabilistic projection approach could improve runoff predictive ability over conventional methods, provide better support to water resources planning and management under data-limited conditions, and facilitate climate change impact analysis on runoff and water resources in alpine regions worldwide.
Hamed, Rania; Basil, Marwa; AlBaraghthi, Tamadur; Sunoqrot, Suhair; Tarawneh, Ola
2016-12-01
Chronic oral administration of the non-steroidal anti-inflammatory drug, diclofenac diethylamine (DDEA), is often associated with gastrointestinal ulcers and bleeding. As an alternative to oral administration, a nanoemulsion-based gel (NE gel) formulation of DDEA was developed for topical administration. An optimized formulation for the o/w nanoemulsion of oil, surfactant and cosurfactant was selected based on nanoemulsion mean droplet size, clarity, stability, and flowability, and incorporated into the gelling agent Carbopol® 971P. Rheological studies of the DDEA NE gel were conducted and compared to those of conventional DDEA gel and emulgel. The three gels exhibited an elastic behavior, where G' dominated G″ at all frequencies, indicating the formation of strong gels. The NE gel exhibited higher G' values than the conventional gel and emulgel, which indicated the formation of a stronger gel network. Strat-M® membrane, a synthetic membrane with diffusion characteristics that are well correlated to human skin, was used for the in vitro diffusion studies. The release of DDEA from the conventional gel, emulgel and NE gel showed a controlled release pattern over 12 h, which was consistent with the rheological properties of the gels. DDEA release kinetics from the three gels followed super case II transport as fitted by the Korsmeyer-Peppas model.
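A fit of the Korsmeyer-Peppas power law, Mt/Minf = k·t^n, can be sketched with scipy as below; the time points and fractional-release values are made-up placeholders, not the study's measurements. For a thin film, n = 1 corresponds to case II transport and n > 1 to super case II.

```python
# Sketch of fitting the Korsmeyer-Peppas model to cumulative release data.
import numpy as np
from scipy.optimize import curve_fit

def korsmeyer_peppas(t, k, n):
    return k * t ** n

t_h = np.array([1, 2, 4, 6, 8, 10, 12], dtype=float)                  # hours
frac_released = np.array([0.05, 0.11, 0.24, 0.39, 0.55, 0.72, 0.90])  # Mt/Minf (placeholder)

(k, n), _ = curve_fit(korsmeyer_peppas, t_h, frac_released, p0=(0.05, 1.0))
print(f"k = {k:.3f}, n = {n:.2f}")   # n > 1 would be consistent with super case II transport
```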
NASA Astrophysics Data System (ADS)
Dehkordi, N. Mahdian; Sadati, N.; Hamzeh, M.
2017-09-01
This paper presents a robust dc-link voltage as well as a current control strategy for a bidirectional interlink converter (BIC) in a hybrid ac/dc microgrid. To enhance the dc-bus voltage control, conventional methods strive to measure and feedforward the load or source power in the dc-bus control scheme. However, the conventional feedforward-based approaches require remote measurement with communications. Moreover, conventional methods suffer from stability and performance issues, mainly due to the use of the small-signal-based control design method. To overcome these issues, in this paper, the power from DG units of the dc subgrid imposed on the BIC is considered an unmeasurable disturbance signal. In the proposed method, in contrast to existing methods, using the nonlinear model of BIC, a robust controller that does not need the remote measurement with communications effectively rejects the impact of the disturbance signal imposed on the BIC's dc-link voltage. To avoid communication links, the robust controller has a plug-and-play feature that makes it possible to add a DG/load to or remove it from the dc subgrid without distorting the hybrid microgrid stability. Finally, Monte Carlo simulations are conducted to confirm the effectiveness of the proposed control strategy in MATLAB/SimPowerSystems software environment.
Cao, Bin; Li, Shuiming; Hu, Run; Zhou, Shengjun; Sun, Yi; Gan, Zhiying; Liu, Sheng
2013-10-21
Current crowding effects (CCEs) on light extraction efficiency (LEE) of conventional GaN-based light-emitting diodes (LEDs) are analyzed through Monte Carlo ray-tracing simulation. The non-uniform radiative power distribution of the active layer of the Monte Carlo model is obtained based on the current spreading theory and rate equation. The simulation results illustrate that CCE around n-pad (n-CCE) has little effect on LEE, while CCE around p-pad (p-CCE) results in a notable LEE droop due to the significant absorption of photons emitted under p-pad. LEE droop is alleviated by a SiO₂ current blocking layer (CBL) and reflective p-pad. Compared to the conventional LEDs without CBL, the simulated LEE of LEDs with CBL at 20 A/cm² and 70 A/cm² is enhanced by 7.7% and 19.0%, respectively. It is further enhanced by 7.6% and 11.4% after employing a reflective p-pad due to decreased absorption. These enhancements are in accordance with the experimental results. Output power of LEDs with CBL is enhanced by 8.7% and 18.2% at 20 A/cm² and 70 A/cm², respectively. And the reflective p-pad results in a further enhancement of 8.9% and 12.7%.
Memristor-based cellular nonlinear/neural network: design, analysis, and applications.
Duan, Shukai; Hu, Xiaofang; Dong, Zhekang; Wang, Lidan; Mazumder, Pinaki
2015-06-01
Cellular nonlinear/neural network (CNN) has been recognized as a powerful massively parallel architecture capable of solving complex engineering problems by performing trillions of analog operations per second. The memristor was theoretically predicted in the late seventies, but it garnered nascent research interest due to the recent much-acclaimed discovery of nanocrossbar memories by engineers at the Hewlett-Packard Laboratory. The memristor is expected to be co-integrated with nanoscale CMOS technology to revolutionize conventional von Neumann as well as neuromorphic computing. In this paper, a compact CNN model based on memristors is presented along with its performance analysis and applications. In the new CNN design, the memristor bridge circuit acts as the synaptic circuit element and substitutes the complex multiplication circuit used in traditional CNN architectures. In addition, the negative differential resistance and nonlinear current-voltage characteristics of the memristor have been leveraged to replace the linear resistor in conventional CNNs. The proposed CNN design has several merits, for example, high density, nonvolatility, and programmability of synaptic weights. The proposed memristor-based CNN design operations for implementing several image processing functions are illustrated through simulation and contrasted with conventional CNNs. Monte-Carlo simulation has been used to demonstrate the behavior of the proposed CNN due to the variations in memristor synaptic weights.
A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots
Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il “Dan”
2016-01-01
This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%. PMID:26938540
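The inverse perspective mapping (IPM) step described here can be sketched as a homography warp to a top-down view, under which floor texture maps consistently between frames while objects above the floor plane do not. The four point correspondences below are made-up placeholders; in practice they come from the camera calibration.

```python
# Sketch of inverse perspective mapping (IPM) with OpenCV: warp the camera image
# to a bird's-eye view of the floor plane.
import cv2
import numpy as np

image = np.zeros((480, 640, 3), dtype=np.uint8)   # placeholder frame (replace with a camera image)
src = np.float32([[220, 480], [420, 480], [380, 300], [260, 300]])  # image points (px)
dst = np.float32([[200, 600], [400, 600], [400, 200], [200, 200]])  # ground-plane points (px)

H = cv2.getPerspectiveTransform(src, dst)              # 3x3 homography
birds_eye = cv2.warpPerspective(image, H, (600, 600))  # IPM (top-down) view
cv2.imwrite("ipm.png", birds_eye)
```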
A Corpus-Based Approach for Automatic Thai Unknown Word Recognition Using Boosting Techniques
NASA Astrophysics Data System (ADS)
Techo, Jakkrit; Nattee, Cholwich; Theeramunkong, Thanaruk
While classification techniques can be applied to automatic unknown word recognition in a language without word boundaries, they face the problem of unbalanced datasets, where the number of positive unknown word candidates is far smaller than that of negative candidates. To solve this problem, this paper presents a corpus-based approach that introduces a so-called group-based ranking evaluation technique into ensemble learning in order to generate a sequence of classification models that later collaborate to select the most probable unknown word from multiple candidates. Given a classification model, the group-based ranking evaluation (GRE) is applied to construct a training dataset for learning the succeeding model, by weighting each of its candidates according to their ranks and correctness when the candidates of an unknown word are considered as one group. A number of experiments have been conducted on a large Thai medical text to evaluate performance of the proposed group-based ranking evaluation approach, namely V-GRE, compared to the conventional naïve Bayes classifier and our vanilla version without ensemble learning. As a result, the proposed method achieves an accuracy of 90.93±0.50% when the first rank is selected, while it gains 97.26±0.26% when the top-ten candidates are considered, that is, an 8.45% and 6.79% improvement over the conventional record-based naïve Bayes classifier and the vanilla version. Another result, obtained by applying only the best features, shows 93.93±0.22% and up to 98.85±0.15% accuracy for top-1 and top-10, respectively. These are 3.97% and 9.78% improvements over naïve Bayes and the vanilla version. Finally, an error analysis is given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gang, G; Siewerdsen, J; Stayman, J
Purpose: There has been increasing interest in integrating fluence field modulation (FFM) devices with diagnostic CT scanners for dose reduction purposes. Conventional FFM strategies, however, are often either based on heuristics or the analysis of filtered-backprojection (FBP) performance. This work investigates a prospective task-driven optimization of FFM for model-based iterative reconstruction (MBIR) in order to improve imaging performance at the same total dose as conventional strategies. Methods: The task-driven optimization framework utilizes an ultra-low dose 3D scout as a patient-specific anatomical model and a mathematical formulation of the imaging task. The MBIR method investigated is quadratically penalized-likelihood reconstruction. The FFM objective function uses detectability index, d', computed as a function of the predicted spatial resolution and noise in the image. To optimize performance throughout the object, a maxi-min objective was adopted where the minimum d' over multiple locations is maximized. To reduce the dimensionality of the problem, FFM is parameterized as a linear combination of 2D Gaussian basis functions over horizontal detector pixels and projection angles. The coefficients of these bases are found using the covariance matrix adaptation evolution strategy (CMA-ES) algorithm. The task-driven design was compared with three other strategies proposed for FBP reconstruction for a calcification cluster discrimination task in an abdomen phantom. Results: The task-driven optimization yielded FFM that was significantly different from those designed for FBP. Comparing all four strategies, the task-based design achieved the highest minimum d' with an 8–48% improvement, consistent with the maxi-min objective. In addition, d' was improved to a greater extent over a larger area within the entire phantom. Conclusion: Results from this investigation suggest the need to re-evaluate conventional FFM strategies for MBIR. The task-based optimization framework provides a promising approach that maximizes imaging performance under the same total dose constraint.
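A schematic of the search loop described above is sketched below: the FFM profile is a linear combination of 2D Gaussian bases over detector position and view angle, and CMA-ES maximizes the minimum d' over several locations. The d' evaluation is a stand-in placeholder (the actual predictor depends on the paper's MBIR noise and resolution models), and the third-party `cma` package is assumed.

```python
# Schematic maxi-min FFM optimization with CMA-ES; the detectability model is a placeholder.
import numpy as np
import cma

N_BASES = 12
centers = np.random.default_rng(1).uniform(0, 1, size=(N_BASES, 2))  # (detector u, angle) centers

def fluence_profile(coeffs, u, theta, width=0.15):
    pts = np.stack([u, theta], axis=-1)
    d2 = np.sum((pts[..., None, :] - centers) ** 2, axis=-1)
    return np.maximum(np.exp(-d2 / (2 * width ** 2)) @ coeffs, 0.0)   # non-negative fluence

def detectability(coeffs, location):
    """Placeholder d' model; replace with an MBIR-based noise/resolution prediction."""
    u, theta = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
    f = fluence_profile(coeffs, u, theta)
    return np.sqrt(f.mean()) / (1.0 + abs(location - f.std()))

def neg_min_dprime(coeffs):
    locations = [0.2, 0.5, 0.8]                                       # stand-in task locations
    return -min(detectability(coeffs, loc) for loc in locations)      # maxi-min objective

es = cma.CMAEvolutionStrategy(np.ones(N_BASES), 0.3)
es.optimize(neg_min_dprime, iterations=20)
print("best basis coefficients:", es.result.xbest)
```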
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Dongqing; Chien Jen, Tien; Li, Tao
2014-01-15
This paper characterizes the carrier gas flow in an atomic layer deposition (ALD) vacuum reactor by introducing the lattice Boltzmann method (LBM) to ALD simulation through a comparative study of two LBM models. Numerical models of the gas flow are constructed and implemented in two-dimensional geometry based on the lattice Bhatnagar–Gross–Krook (LBGK) D2Q9 model and the two-relaxation-time (TRT) model. Both incompressible and compressible scenarios are simulated and the two models are compared in terms of flow features, stability, and efficiency. Our simulation outcome reveals that, for our specific ALD vacuum reactor, the TRT model generates better steady laminar flow features over the whole domain, with better stability and reliability, than the LBGK-D2Q9 model, especially when considering the compressible effects of the gas flow. The LBM-TRT is verified indirectly by comparing the numerical result with conventional continuum-based computational fluid dynamics solvers, and it shows very good agreement with these conventional methods. Finally, the velocity field of the carrier gas flow through the ALD vacuum reactor was characterized with the LBM-TRT model. The flow in ALD is in a laminar steady state with velocity concentrated at the corners and around the wafer. The effects of flow fields on precursor distributions, surface adsorption, and surface reactions are discussed in detail. A steady and evenly distributed velocity field contributes to a higher precursor concentration near the wafer, and relatively lower particle velocities help to achieve better surface adsorption and deposition. The ALD reactor geometry needs to be considered carefully if a steady and laminar flow field around the wafer and better surface deposition are desired.
Walter, Emily M.; Henderson, Charles R.; Beach, Andrea L.; Williams, Cody T.
2016-01-01
Researchers, administrators, and policy makers need valid and reliable information about teaching practices. The Postsecondary Instructional Practices Survey (PIPS) is designed to measure the instructional practices of postsecondary instructors from any discipline. The PIPS has 24 instructional practice statements and nine demographic questions. Users calculate PIPS scores by an intuitive proportion-based scoring convention. Factor analyses from 72 departments at four institutions (N = 891) support a 2- or 5-factor solution for the PIPS; both models include all 24 instructional practice items and have good model fit statistics. Factors in the 2-factor model include (a) instructor-centered practices, nine items; and (b) student-centered practices, 13 items. Factors in the 5-factor model include (a) student–student interactions, six items; (b) content delivery, four items; (c) formative assessment, five items; (d) student-content engagement, five items; and (e) summative assessment, four items. In this article, we describe our development and validation processes, provide scoring conventions and outputs for results, and describe wider applications of the instrument. PMID:27810868
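To illustrate what a proportion-based scoring convention can look like, the sketch below expresses each factor score as the respondent's summed item ratings divided by the maximum possible for that factor; the item groupings and the 0-4 rating scale are assumptions for illustration, not the published PIPS scoring key.

```python
# Hypothetical proportion-based factor scoring for a practices survey.
def proportion_score(ratings, max_rating=4):
    return sum(ratings) / (max_rating * len(ratings))

responses = {
    "instructor_centered": [4, 3, 4, 2, 3, 4, 3, 2, 4],                  # 9 items (placeholder ratings)
    "student_centered":    [1, 2, 0, 1, 2, 1, 0, 2, 1, 1, 2, 0, 1],      # 13 items (placeholder ratings)
}
scores = {factor: round(proportion_score(r), 2) for factor, r in responses.items()}
print(scores)   # e.g. {'instructor_centered': 0.81, 'student_centered': 0.27}
```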
Development of a numerical model for vehicle-bridge interaction analysis of railway bridges
NASA Astrophysics Data System (ADS)
Kim, Hee Ju; Cho, Eun Sang; Ham, Jun Su; Park, Ki Tae; Kim, Tae Heon
2016-04-01
In the field of civil engineering, analyzing dynamic response has long been a main concern. These analysis methods can be divided into the moving-load method and the moving-mass method, and formulating separate equations of motion for the vehicles and the bridge has recently been studied. In this study, a numerical method is presented that can consider various train types and solve the equations of motion for vehicle-bridge interaction analysis without iteration by formulating the coupled equations of motion. Also, an accurate three-dimensional numerical model of the KTX vehicle was developed in order to analyze dynamic response characteristics. The equations of motion for the conventional trains are derived, and the numerical models of the conventional trains are idealized by a set of linear springs and dashpots with 18 degrees of freedom. The bridge models are simplified using three-dimensional space frame elements based on the Euler-Bernoulli theory. Rail irregularities in the vertical and lateral directions are generated from PSD functions of the Federal Railroad Administration (FRA).
Application of mid-frequency ventilation in an animal model of lung injury: a pilot study.
Mireles-Cabodevila, Eduardo; Chatburn, Robert L; Thurman, Tracy L; Zabala, Luis M; Holt, Shirley J; Swearingen, Christopher J; Heulitt, Mark J
2014-11-01
Mid-frequency ventilation (MFV) is a mode of pressure control ventilation based on an optimal targeting scheme that maximizes alveolar ventilation and minimizes tidal volume (VT). This study was designed to compare the effects of conventional mechanical ventilation using a lung-protective strategy with MFV in a porcine model of lung injury. Our hypothesis was that MFV can maximize ventilation at higher frequencies without adverse consequences. We compared ventilation and hemodynamic outcomes between conventional ventilation and MFV. This was a prospective study of 6 live Yorkshire pigs (10 ± 0.5 kg). The animals were subjected to lung injury induced by saline lavage and injurious conventional mechanical ventilation. Baseline conventional pressure control continuous mandatory ventilation was applied with V(T) = 6 mL/kg and PEEP determined using a decremental PEEP trial. A manual decision support algorithm was used to implement MFV using the same conventional ventilator. We measured P(aCO2), P(aO2), end-tidal carbon dioxide, cardiac output, arterial and venous blood oxygen saturation, pulmonary and systemic vascular pressures, and lactic acid. The MFV algorithm produced the same minute ventilation as conventional ventilation but with lower V(T) (-1 ± 0.7 mL/kg) and higher frequency (32.1 ± 6.8 vs 55.7 ± 15.8 breaths/min, P < .002). There were no differences between conventional ventilation and MFV for mean airway pressures (16.1 ± 1.3 vs 16.4 ± 2 cm H2O, P = .75) even when auto-PEEP was higher (0.6 ± 0.9 vs 2.4 ± 1.1 cm H2O, P = .02). There were no significant differences in any hemodynamic measurements, although heart rate was higher during MFV. In this pilot study, we demonstrate that MFV allows the use of higher breathing frequencies and lower V(T) than conventional ventilation to maximize alveolar ventilation. We describe the ventilatory or hemodynamic effects of MFV. We also demonstrate that the application of a decision support algorithm to manage MFV is feasible. Copyright © 2014 by Daedalus Enterprises.
Linear energy transfer incorporated intensity modulated proton therapy optimization
NASA Astrophysics Data System (ADS)
Cao, Wenhua; Khabazian, Azin; Yepes, Pablo P.; Lim, Gino; Poenisch, Falk; Grosshans, David R.; Mohan, Radhe
2018-01-01
The purpose of this study was to investigate the feasibility of incorporating linear energy transfer (LET) into the optimization of intensity modulated proton therapy (IMPT) plans. Because increased LET correlates with increased biological effectiveness of protons, high LETs in target volumes and low LETs in critical structures and normal tissues are preferred in an IMPT plan. However, if not explicitly incorporated into the optimization criteria, different IMPT plans may yield similar physical dose distributions but greatly different LET, specifically dose-averaged LET, distributions. Conventionally, the IMPT optimization criteria (or cost function) only include dose-based objectives in which the relative biological effectiveness (RBE) is assumed to have a constant value of 1.1. In this study, we added LET-based objectives for maximizing LET in target volumes and minimizing LET in critical structures and normal tissues. Due to the fractional programming nature of the resulting model, we used a variable reformulation approach so that the optimization process is computationally equivalent to conventional IMPT optimization. In this study, five brain tumor patients who had been treated with proton therapy at our institution were selected. Two plans were created for each patient based on the proposed LET-incorporated optimization (LETOpt) and the conventional dose-based optimization (DoseOpt). The optimized plans were compared in terms of both dose (assuming a constant RBE of 1.1 as adopted in clinical practice) and LET. Both optimization approaches were able to generate comparable dose distributions. The LET-incorporated optimization achieved not only pronounced reduction of LET values in critical organs, such as the brainstem and optic chiasm, but also increased LET in target volumes, compared to the conventional dose-based optimization. However, on occasion, there was a need to trade off the acceptability of dose and LET distributions. Our conclusion is that the inclusion of LET-dependent criteria in the IMPT optimization could lead to dose distributions similar to those of the conventional optimization but superior LET distributions in target volumes and normal tissues. This may have substantial advantages in improving tumor control and reducing normal tissue toxicities.
NASA Astrophysics Data System (ADS)
Sarofim, M. C.
2007-12-01
Emissions of greenhouse gases and conventional pollutants are closely linked through shared generation processes, and thus policies directed toward long-lived greenhouse gases affect emissions of conventional pollutants and, similarly, policies directed toward conventional pollutants affect emissions of greenhouse gases. Some conventional pollutants such as aerosols also have direct radiative effects. NOx and VOCs are precursors of ozone, another substance with both radiative and health impacts, and these ozone precursors also interact with the chemistry of the hydroxyl radical, which is the major methane sink. Realistic scenarios of future emissions and concentrations must therefore account for both air pollution and greenhouse gas policies and how they interact economically as well as atmospherically, including the regional pattern of emissions and regulation. We have modified a 16-region computable general equilibrium economic model (the MIT Emissions Prediction and Policy Analysis model) by including elasticities of substitution for ozone precursors and aerosols in order to examine these interactions between climate policy and air pollution policy on a global scale. Urban emissions are distributed based on population density, and aged using a reduced-form urban model before release into an atmospheric chemistry/climate model (the earth systems component of the MIT Integrated Global Systems Model). This integrated approach enables examination of the direct impacts of air pollution on climate, the ancillary and complementary interactions between air pollution and climate policies, and the impact of different population distribution algorithms or urban emission aging schemes on global-scale properties. This modeling exercise shows that while ozone levels are reduced due to NOx and VOC reductions, these reductions lead to an increase in methane concentrations that eliminates the temperature effects of the ozone reductions. However, black carbon reductions do have significant direct effects on global mean temperatures, as do ancillary reductions of greenhouse gases due to the pollution constraints imposed in the economic model. Finally, we show that the economic benefits of coordinating air pollution and climate policies rather than implementing them separately are on the order of 20% of the total policy cost.
[Navigated drilling for femoral head necrosis. Experimental and clinical results].
Beckmann, J; Tingart, M; Perlick, L; Lüring, C; Grifka, J; Anders, S
2007-05-01
In the early stages of osteonecrosis of the femoral head, core decompression by exact drilling into the ischemic areas can reduce pain and achieve reperfusion. Using computer-aided surgery, the precision of the drilling can be improved while simultaneously lowering radiation exposure time for both staff and patients. We describe the experimental and clinical results of drilling under the guidance of the fluoroscopically-based VectorVision navigation system (BrainLAB, Munich, Germany). A total of 70 sawbones were prepared mimicking an osteonecrosis of the femoral head. In two experimental models, bone only and obesity, as well as in a clinical setting involving ten patients with osteonecrosis of the femoral head, the precision and the duration of radiation exposure were compared between the VectorVision system and conventional drilling. No target was missed. For both models, there was a statistically significant difference in terms of the precision, the number of drilling corrections as well as the radiation exposure time. The average distance to the desired midpoint of the lesion of both models was 0.48 mm for navigated drilling and 1.06 mm for conventional drilling, the average drilling corrections were 0.175 and 2.1, and the radiation exposure time less than 1 s and 3.6 s, respectively. In the clinical setting, the reduction of radiation exposure (below 1 s for navigation compared to 56 s for the conventional technique) as well as of drilling corrections (0.2 compared to 3.4) was also significant. Computer-guided drilling using the fluoroscopically based VectorVision navigation system shows a clearly improved precision with an enormous simultaneous reduction in radiation exposure. It is therefore recommended for routine clinical use.
Acid-base behavior of the gaspeite (NiCO3(s)) surface in NaCl solutions.
Villegas-Jiménez, Adrián; Mucci, Alfonso; Pokrovsky, Oleg S; Schott, Jacques
2010-08-03
Gaspeite is a low reactivity, rhombohedral carbonate mineral and a suitable surrogate to investigate the surface properties of other more ubiquitous carbonate minerals, such as calcite, in aqueous solutions. In this study, the acid-base properties of the gaspeite surface were investigated over a pH range of 5 to 10 in NaCl solutions (0.001, 0.01, and 0.1 M) at near ambient conditions (25 +/- 3 degrees C and 1 atm) by means of conventional acidimetric and alkalimetric titration techniques and microelectrophoresis. Over the entire experimental pH range, surface protonation and electrokinetic mobility are strongly affected by the background electrolyte, leading to a significant decrease of the pH of zero net proton charge (PZNPC) and the pH of isoelectric point (pH(iep)) at increasing NaCl concentrations. This challenges the conventional idea that carbonate mineral surfaces are chemically inert to background electrolyte ions. Multiple sets of surface complexation reactions (i.e., ionization and ion adsorption) were formulated within the framework of three electrostatic models (CCM, BSM, and TLM) and their ability to simulate proton adsorption and electrokinetic data was evaluated. A one-site, 3-pK, constant capacitance surface complexation model (SCM) reproduces the proton adsorption data at all ionic strengths and qualitatively predicts the electrokinetic behavior of gaspeite suspensions. Nevertheless, the strong ionic strength dependence exhibited by the optimized SCM parameters reveals that the influence of the background electrolyte on the surface reactivity of gaspeite is not fully accounted for by conventional electrostatic and surface complexation models and suggests that future refinements to the underlying theories are warranted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Hantao; Li, Fangxing; Fang, Xin
2017-10-04
Our paper deals with extended-term energy storage (ES) arbitrage problems to maximize the annual revenue in deregulated power systems with high penetration wind power. The conventional ES arbitrage model takes the locational marginal prices (LMP) as an input and is unable to account for the impacts of ES operations on system LMPs. This paper proposes a bi-level ES arbitrage model, where the upper level maximizes the ES arbitrage revenue and the lower level simulates the market clearing process considering wind power and ES. The bi-level model is formulated as a mathematical program with equilibrium constraints (MPEC) and then recast into a mixed-integer linear programming (MILP) using strong duality theory. Wind power fluctuations are characterized by the GARCH forecast model and the forecast error is modeled by forecast-bin based Beta distributions. Case studies are performed on a modified PJM 5-bus system and an IEEE 118-bus system with a weekly time horizon over an annual term to show the validity of the proposed bi-level model. The results from the conventional model and the bi-level model are compared under different ES power and energy ratings, and also various load and wind penetration levels.
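For contrast with the bi-level formulation, the sketch below shows the conventional price-taking arbitrage model the paper improves upon: LMPs are a fixed input, and the storage schedule is chosen to maximize revenue subject to power and energy limits. The price series, ratings, and efficiency are placeholders.

```python
# Conventional (price-taker) ES arbitrage as a linear program: storage cannot affect LMPs.
import numpy as np
from scipy.optimize import linprog

lmp = np.array([18, 15, 14, 20, 35, 45, 40, 25], dtype=float)   # $/MWh over 8 hours (placeholder)
T, P_MAX, E_MAX, ETA = len(lmp), 10.0, 40.0, 0.9                # horizon, MW limit, MWh limit, one-way efficiency

# Variable vector x = [c_0..c_{T-1}, d_0..d_{T-1}, s_0..s_{T-1}]
c_obj = np.concatenate([lmp, -lmp, np.zeros(T)])    # minimize charging cost minus discharge revenue

A_eq = np.zeros((T, 3 * T))
b_eq = np.zeros(T)
for t in range(T):                                   # s_t - s_{t-1} - ETA*c_t + d_t/ETA = 0, s_{-1} = 0
    A_eq[t, t] = -ETA
    A_eq[t, T + t] = 1.0 / ETA
    A_eq[t, 2 * T + t] = 1.0
    if t > 0:
        A_eq[t, 2 * T + t - 1] = -1.0

bounds = [(0, P_MAX)] * (2 * T) + [(0, E_MAX)] * T
res = linprog(c_obj, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
charge, discharge, soc = res.x.reshape(3, T)
print("arbitrage revenue: $", -res.fun)
```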
High‐resolution trench photomosaics from image‐based modeling: Workflow and error analysis
Reitman, Nadine G.; Bennett, Scott E. K.; Gold, Ryan D.; Briggs, Richard; Duross, Christopher
2015-01-01
Photomosaics are commonly used to construct maps of paleoseismic trench exposures, but the conventional process of manually using image‐editing software is time consuming and produces undesirable artifacts and distortions. Herein, we document and evaluate the application of image‐based modeling (IBM) for creating photomosaics and 3D models of paleoseismic trench exposures, illustrated with a case‐study trench across the Wasatch fault in Alpine, Utah. Our results include a structure‐from‐motion workflow for the semiautomated creation of seamless, high‐resolution photomosaics designed for rapid implementation in a field setting. Compared with conventional manual methods, the IBM photomosaic method provides a more accurate, continuous, and detailed record of paleoseismic trench exposures in approximately half the processing time and 15%–20% of the user input time. Our error analysis quantifies the effect of the number and spatial distribution of control points on model accuracy. For this case study, an ∼87 m2 exposure of a benched trench photographed at viewing distances of 1.5–7 m yields a model with <2 cm root mean square error (rmse) with as few as six control points. Rmse decreases as more control points are implemented, but the gains in accuracy are minimal beyond 12 control points. Spreading control points throughout the target area helps to minimize error. We propose that 3D digital models and corresponding photomosaics should be standard practice in paleoseismic exposure archiving. The error analysis serves as a guide for future investigations that seek balance between speed and accuracy during photomosaic and 3D model construction.
Effect of electron reflection on magnetized plasma sheath in an oblique magnetic field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Ting-Ting; Ma, J. X., E-mail: jxma@ustc.edu.cn; Wei, Zi-An
Magnetized plasma sheaths in an oblique magnetic field were extensively investigated by conventionally assuming the Boltzmann relation for the electron density. This article presents a study of the magnetized sheath without using the Boltzmann relation but by considering the electron reflection along the magnetic field lines caused by the negative sheath potential. A generalized Bohm criterion is analytically derived, and sheath profiles are numerically obtained, which are compared with the results of the conventional model. The results show that the ion Mach number at the sheath edge normal to the wall has a strong dependence on the wall potential, which differs significantly from the conventional model in which the Mach number is independent of the wall potential. The floating wall potential is lower in the present model than that in the conventional model. Furthermore, the sheath profiles are appreciably narrower in the present model when the wall bias is low, but approach the result of the conventional model when the wall bias is high. The sheath thickness decreases with the increase of ion-to-electron temperature ratio and magnetic field strength but has a complex relationship with the angle of the magnetic field.
A Graphic Anthropometric Aid for Seating and Workplace Design.
1984-04-01
required proportion of the pdf. Suppose that some attribute is distributed according to a bivariate Normal pdf of zero mean value and equal variances σ². Note that circular contours, drawn at the normalized radii presented above, will enclose the respective proportions of the bivariate Normal pdf. Contents: 1. Introduction; 2. A Two-Dimensional Model Base; 3. Concept of Use; 4. Validation of the Two-Dimensional Model; 4.1 Conventional Anthropometry; 4.2
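For a circular (equal-variance, zero-mean) bivariate Normal distribution, the probability enclosed by a circle of radius r follows a Rayleigh law, so the normalized radius enclosing a proportion p is r/σ = sqrt(-2 ln(1 - p)); the short sketch below evaluates a few common proportions.

```python
# Normalized radii of circular contours enclosing proportion p of a circular
# bivariate Normal pdf: P(R <= r) = 1 - exp(-r**2 / (2 * sigma**2)).
from math import sqrt, log

for p in (0.50, 0.90, 0.95, 0.99):
    print(f"p = {p:.2f}  ->  r/sigma = {sqrt(-2 * log(1 - p)):.3f}")
# p = 0.50 -> 1.177, p = 0.90 -> 2.146, p = 0.95 -> 2.448, p = 0.99 -> 3.035
```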
Static analysis of a sonar dome rubber window
NASA Technical Reports Server (NTRS)
Lai, J. L.
1978-01-01
The application of NASTRAN (level 16.0.1) to the static analysis of a sonar dome rubber window (SDRW) was demonstrated. The assessment of the conventional model (neglecting the enclosed fluid) for the stress analysis of the SDRW was made by comparing its results to those based on a sophisticated model (including the enclosed fluid). The fluid was modeled with isoparametric linear hexahedron elements with approximate material properties whose shear modulus was much smaller than its bulk modulus. The effect of the chosen material property for the fluid is discussed.
A new modelling approach for zooplankton behaviour
NASA Astrophysics Data System (ADS)
Keiyu, A. Y.; Yamazaki, H.; Strickler, J. R.
We have developed a new simulation technique to model zooplankton behaviour. The approach utilizes neither conventional artificial intelligence nor neural network methods. We have designed an adaptive behaviour network, which is similar to BEER [(1990) Intelligence as an adaptive behaviour: an experiment in computational neuroethology, Academic Press], based on observational studies of zooplankton behaviour. The proposed method is compared with non-"intelligent" models—random walk and correlated walk models—as well as observed behaviour in a laboratory tank. Although the network is simple, the model exhibits rich behavioural patterns similar to those of live copepods.
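The two non-"intelligent" reference models mentioned above can be sketched in a few lines: an uncorrelated random walk with headings drawn uniformly, and a correlated random walk whose turning angles are small perturbations of the previous heading. Step length and turning-angle spread are placeholders.

```python
# Reference models for zooplankton tracks: random walk and correlated random walk.
import numpy as np

def random_walk(n, step=1.0, rng=None):
    rng = rng or np.random.default_rng()
    angles = rng.uniform(0, 2 * np.pi, n)
    return np.cumsum(np.c_[np.cos(angles), np.sin(angles)] * step, axis=0)

def correlated_walk(n, step=1.0, turn_sd=0.3, rng=None):
    rng = rng or np.random.default_rng()
    heading = np.cumsum(rng.normal(0, turn_sd, n))   # small turns around the previous heading
    return np.cumsum(np.c_[np.cos(heading), np.sin(heading)] * step, axis=0)

print(random_walk(100)[-1], correlated_walk(100)[-1])  # end points of 100-step tracks
```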
Preventive Screening of Women Who Use Complementary and Alternative Medicine Providers
Tyree, Patrick T.; Lafferty, William E.
2009-01-01
Abstract Background Many women use complementary and alternative medicine (CAM). Although CAM use has been associated with reductions in conventionally recommended pediatric preventive care (e.g., vaccination), little is known about associations between CAM use and receipt of recommended preventive screening in women. Methods Using Washington State insurance data from 2000 to 2003, the authors generated clustered logistic regression models, examining associations between provider-based CAM use and receipt of screening tests for Chlamydia trachomatis, breast cancer, and cervical cancer: (1) contrasting women who used CAM providers only (alternative use) and women who used both conventional and CAM providers (complementary use) with women who used conventional care only and (2) testing associations between screening and use of four specific CAM provider types—naturopathic physicians, chiropractors, massage therapists, and acupuncturists. Results Both alternative and complementary use was associated with reduced Chlamydia screening. Cancer screening increased with complementary use but decreased with alternative use of CAM. Use of naturopathy was associated with decreased mammography, whereas all four CAM therapies were positively associated with Papanicolaou testing. Conclusions When used in conjunction with conventional care, use of provider-based CAM may signal high interest in various types of health-promoting behavior, including cancer screening. Negative associations between CAM and Chlamydia screening and between naturopathy and mammography require additional study. Interventions with CAM providers and their patients, aimed at improving rates of conventionally recommended screening, might encourage greater focus on preventive care, an important task when CAM providers serve as women's only contact with the healthcare system. PMID:19630554
Towards a model of loss navigation in adolescence.
Lytje, Martin
2017-01-01
Researchers today consider childhood bereavement one of the most traumatic experiences that can befall a child. Nevertheless, most models of bereavement currently limit themselves to dealing with adult grief and primarily explore the internal processes associated with recovery. Based on a study which conducted focus groups with 39 Danish adolescents (aged 9-17), this article presents The Model of Loss Navigation in Adolescence. Centered on three factors (Being Different, Being in Control, and Being in Grief), the model highlights the social conventions children have to navigate and how these influence both their day-to-day lives and their road to recovery.
Testing homogeneity in Weibull-regression models.
Bolfarine, Heleno; Valença, Dione M
2005-10-01
In survival studies with families or geographical units, it may be of interest to test whether such groups are homogeneous for given explanatory variables. In this paper we consider score-type tests for group homogeneity based on a mixing model in which the group effect is modelled as a random variable. As opposed to hazard-based frailty models, this model yields survival times that, conditioned on the random effect, have an accelerated failure time representation. The test statistic requires only estimation of the conventional regression model without the random effect and does not require specifying the distribution of the random effect. The tests are derived for a Weibull regression model, and in the uncensored situation a closed form is obtained for the test statistic. A simulation study is used for comparing the power of the tests. The proposed tests are applied to real data sets with censored data.