Fault-tolerant clock synchronization validation methodology [in computer systems]
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Palumbo, Daniel L.; Johnson, Sally C.
1987-01-01
A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight-crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating the clock synchronization system of the Software Implemented Fault Tolerance computer. The design proof of the algorithm includes a theorem that defines the maximum skew between any two nonfaulty clocks in the system in terms of specific system parameters. Most of these parameters are deterministic. One crucial parameter is the upper bound on the clock read error, which is stochastic. The probability that this upper bound is exceeded is calculated from data obtained by the measurement of system parameters. This probability is then included in a detailed reliability analysis of the system.
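The final step described, turning measured system parameters into a probability that the stochastic read-error bound is exceeded, can be sketched as follows; the distribution, bound, and sample size are assumptions for illustration, not values from the paper.

```python
import numpy as np

# Hypothetical sketch: estimate the probability that the stochastic clock
# read error exceeds the upper bound assumed in the design proof, from a
# sample of measured read errors. All names and values here are assumptions
# standing in for the measured system parameters.
rng = np.random.default_rng(0)
read_errors = np.abs(rng.normal(0.0, 5.0, size=10_000))  # stand-in measurements (microseconds)

epsilon_bound = 15.0  # assumed design-proof upper bound on the read error

# Empirical exceedance probability with a normal-approximation upper bound
# (a Clopper-Pearson interval would be more careful for rare events).
p_hat = np.mean(read_errors > epsilon_bound)
se = np.sqrt(p_hat * (1.0 - p_hat) / read_errors.size)
print(f"P(read error > bound) ~= {p_hat:.2e} (+1.96 SE: {p_hat + 1.96 * se:.2e})")
# This probability would then be folded into the system reliability analysis.
```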
Stevens, Andreas; Bahlo, Simone; Licha, Christina; Liske, Benjamin; Vossler-Thies, Elisabeth
2016-11-30
Subnormal performance in attention tasks may result from various sources, including lack of effort. In this report, the derivation and validation of a performance validity parameter for reaction time is described, using a set of malingering indices ("Slick criteria") and 3 independent samples of participants (total n = 893). The Slick criteria yield an estimate of the probability of malingering based on the presence of an external incentive and on evidence from neuropsychological testing, self-report and clinical data. In study (1), a validity parameter is derived using reaction time data of a sample composed of inpatients with recent severe brain lesions not involved in litigation and of litigants with and without brain lesion. In study (2), the validity parameter is tested in an independent sample of litigants. In study (3), the parameter is applied to an independent sample comprising cooperative and non-cooperative testees. Logistic regression analysis led to a derived validity parameter based on median reaction time and standard deviation. It performed satisfactorily in studies (2) and (3) (study 2: sensitivity = 0.94, specificity = 1.00; study 3: sensitivity = 0.79, specificity = 0.87). The findings suggest that median reaction time and standard deviation may be used as indicators of negative response bias.
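To illustrate the derivation step, a minimal sketch follows, assuming synthetic stand-in data rather than the study's samples; the feature values and group sizes are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical sketch: fit a logistic model predicting probable malingering
# (Slick-positive) from median reaction time and its standard deviation.
# All data below are synthetic stand-ins, not the study's samples.
rng = np.random.default_rng(1)
n = 300
median_rt = np.r_[rng.normal(450, 60, n), rng.normal(700, 120, n)]  # ms
sd_rt = np.r_[rng.normal(80, 20, n), rng.normal(200, 50, n)]        # ms
y = np.r_[np.zeros(n), np.ones(n)]  # 0 = cooperative, 1 = Slick-positive

X = np.column_stack([median_rt, sd_rt])
model = LogisticRegression(max_iter=1000).fit(X, y)
print("Training AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
```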
NASA Technical Reports Server (NTRS)
Starr, David
1999-01-01
The EOS Terra mission will be launched in July 1999. This mission has great relevance to the atmospheric radiation community and global change issues. Terra instruments include ASTER, CERES, MISR, MODIS and MOPITT. In addition to the fundamental radiance data sets, numerous global science data products will be generated, including various Earth radiation budget, cloud and aerosol parameters, as well as land surface, terrestrial ecology, ocean color, and atmospheric chemistry parameters. Significant investments have been made in on-board calibration to ensure the quality of the radiance observations. A key component of the Terra mission is the validation of the science data products. This is essential for a mission focused on global change issues and the underlying processes. The Terra algorithms have been subject to extensive pre-launch testing with field data whenever possible. Intensive efforts will be made to validate the Terra data products after launch. These include validation of instrument calibration (vicarious calibration) experiments, instrument and cross-platform comparisons, routine collection of high quality correlative data from ground-based networks, such as AERONET, and intensive sites, such as the SGP ARM site, as well as a variety of field experiments, cruises, etc. Airborne simulator instruments have been developed for the field experiment and underflight activities, including the MODIS Airborne Simulator (MAS), AirMISR, MASTER (MODIS-ASTER), and MOPITT-A. All are integrated on the NASA ER-2, though low altitude platforms are more typically used for MASTER. MATR is an additional sensor used for MOPITT algorithm development and validation. The intensive validation activities planned for the first year of the Terra mission will be described with emphasis on derived geophysical parameters of most relevance to the atmospheric radiation community. Detailed information about the EOS Terra validation program can be found on the EOS validation program homepage (http://ospso.gsfc.nasa.gov/validation/valpage.html).
Lyapunov dimension formula for the global attractor of the Lorenz system
NASA Astrophysics Data System (ADS)
Leonov, G. A.; Kuznetsov, N. V.; Korzhemanova, N. A.; Kusakin, D. V.
2016-12-01
The exact Lyapunov dimension formula for the Lorenz system for a positive-measure set of parameters, including the classical values, was first obtained analytically by G.A. Leonov in 2002. Leonov used a construction technique based on special Lyapunov-type functions, which he had developed in 1991. It was later shown that considering a larger class of Lyapunov-type functions permits proving the validity of this formula for all parameters of the system such that all equilibria of the system are hyperbolically unstable. In the present work, the validity of the formula for the Lyapunov dimension is proved for a wider variety of parameter values, including all parameters satisfying the classical physical limitations.
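For reference, the formula in question, as given in the literature for the Lorenz parameterization (σ, r, b) — stated here from well-known published results, so the notation may differ slightly from the paper's — is

\[ \dim_L K \;=\; 3 - \frac{2(\sigma + b + 1)}{\sigma + 1 + \sqrt{(\sigma - 1)^2 + 4r\sigma}} . \]

For the classical values σ = 10, b = 8/3, r = 28, this evaluates to approximately 2.4013.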
Results and Validation of MODIS Aerosol Retrievals Over Land and Ocean
NASA Technical Reports Server (NTRS)
Remer, Lorraine; Einaudi, Franco (Technical Monitor)
2001-01-01
The MODerate Resolution Imaging Spectroradiometer (MODIS) instrument aboard the Terra spacecraft has been retrieving aerosol parameters since late February 2000. Initial qualitative checking of the products showed very promising results including matching of land and ocean retrievals at coastlines. Using AERONET ground-based radiometers as our primary validation tool, we have established quantitative validation as well. Our results show that for most aerosol types, the MODIS products fall within the pre-launch estimated uncertainties. Surface reflectance and aerosol model assumptions appear to be sufficiently accurate for the optical thickness retrieval. Dust provides a possible exception, which may be due to non-spherical effects. Over ocean the MODIS products include information on particle size, and these parameters are also validated with AERONET retrievals.
Results and Validation of MODIS Aerosol Retrievals over Land and Ocean
NASA Technical Reports Server (NTRS)
Remer, L. A.; Kaufman, Y. J.; Tanre, D.; Ichoku, C.; Chu, D. A.; Mattoo, S.; Levy, R.; Martins, J. V.; Li, R.-R.; Einaudi, Franco (Technical Monitor)
2000-01-01
The MODerate Resolution Imaging Spectroradiometer (MODIS) instrument aboard the Terra spacecraft has been retrieving aerosol parameters since late February 2000. Initial qualitative checking of the products showed very promising results including matching of land and ocean retrievals at coastlines. Using AERONET ground-based radiometers as our primary validation tool, we have established quantitative validation as well. Our results show that for most aerosol types, the MODIS products fall within the pre-launch estimated uncertainties. Surface reflectance and aerosol model assumptions appear to be sufficiently accurate for the optical thickness retrieval. Dust provides a possible exception, which may be due to non-spherical effects. Over ocean the MODIS products include information on particle size, and these parameters are also validated with AERONET retrievals.
Update of Standard Practices for New Method Validation in Forensic Toxicology.
Wille, Sarah M R; Coucke, Wim; De Baere, Thierry; Peters, Frank T
2017-01-01
International agreement concerning validation guidelines is important to obtain quality forensic bioanalytical research and routine applications, as it all starts with the reporting of reliable analytical data. Standards for fundamental validation parameters are provided in guidelines such as those from the US Food and Drug Administration (FDA), the European Medicines Agency (EMA), the German-speaking Gesellschaft für Toxikologie und Forensische Chemie (GTFCh) and the Scientific Working Group for Forensic Toxicology (SWGTOX). These validation parameters include selectivity, matrix effects, method limits, calibration, accuracy and stability, as well as other parameters such as carryover, dilution integrity and incurred sample reanalysis. It is, however, not easy for laboratories to implement these guidelines into practice, as these international guidelines remain nonbinding protocols that depend on the applied analytical technique and need to be updated according to the analyst's method requirements and the application type. In this manuscript, a review of the current guidelines and literature concerning bioanalytical validation parameters in a forensic context is given and discussed. In addition, suggestions for the experimental set-up, the pros and cons of statistical approaches, and adequate acceptance criteria for the validation of bioanalytical applications are given.
Simulation verification techniques study
NASA Technical Reports Server (NTRS)
Schoonmaker, P. B.; Wenglinski, T. H.
1975-01-01
Results are summarized of the simulation verification techniques study, which consisted of two tasks: to develop techniques for simulator hardware checkout and to develop techniques for simulation performance verification (validation). The hardware verification task involved definition of simulation hardware (hardware units and integrated simulator configurations), survey of current hardware self-test techniques, and definition of hardware and software techniques for checkout of simulator subsystems. The performance verification task included definition of simulation performance parameters (and critical performance parameters), definition of methods for establishing standards of performance (sources of reference data for validation), and definition of methods for validating performance. Both major tasks included definition of verification software and assessment of verification data base impact. An annotated bibliography of all documents generated during this study is provided.
Mulder, Leontine; van der Molen, Renate; Koelman, Carin; van Leeuwen, Ester; Roos, Anja; Damoiseaux, Jan
2018-05-01
ISO 15189:2012 requires validation of methods used in the medical laboratory, and lists a series of performance parameters to consider for inclusion. Although these performance parameters are feasible for clinical chemistry analytes, applying them in the validation of autoimmunity tests is a challenge. The lack of gold standards or reference methods, combined with the scarcity of well-defined diagnostic samples of patients with rare diseases, makes validation of new assays difficult. The present manuscript describes the initiative of Dutch medical immunology laboratory specialists to combine efforts and perform multi-center validation studies of new assays in the field of autoimmunity. Validation data and reports are made available to interested Dutch laboratory specialists.
NASA Technical Reports Server (NTRS)
Starr, David
2000-01-01
The EOS Terra mission will be launched in July 1999. This mission has great relevance to the atmospheric radiation community and global change issues. Terra instruments include the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Clouds and Earth's Radiant Energy System (CERES), Multi-Angle Imaging Spectroradiometer (MISR), Moderate Resolution Imaging Spectroradiometer (MODIS) and Measurements of Pollution in the Troposphere (MOPITT). In addition to the fundamental radiance data sets, numerous global science data products will be generated, including various Earth radiation budget, cloud and aerosol parameters, as well as land surface, terrestrial ecology, ocean color, and atmospheric chemistry parameters. Significant investments have been made in on-board calibration to ensure the quality of the radiance observations. A key component of the Terra mission is the validation of the science data products. This is essential for a mission focused on global change issues and the underlying processes. The Terra algorithms have been subject to extensive pre-launch testing with field data whenever possible. Intensive efforts will be made to validate the Terra data products after launch. These include validation of instrument calibration (vicarious calibration) experiments, instrument and cross-platform comparisons, routine collection of high quality correlative data from ground-based networks, such as AERONET, and intensive sites, such as the SGP ARM site, as well as a variety of field experiments, cruises, etc. Airborne simulator instruments have been developed for the field experiment and underflight activities, including the MODIS Airborne Simulator (MAS), AirMISR, MASTER (MODIS-ASTER), and MOPITT-A. All are integrated on the NASA ER-2, though low altitude platforms are more typically used for MASTER. MATR is an additional sensor used for MOPITT algorithm development and validation. The intensive validation activities planned for the first year of the Terra mission will be described with emphasis on derived geophysical parameters of most relevance to the atmospheric radiation community.
Karabagias, Ioannis K; Karabournioti, Sofia
2018-05-03
Twenty-two honey samples, namely clover and citrus honeys, were collected from the greater Cairo area during the harvesting year 2014–2015. The main purpose of the present study was to characterize the aforementioned honey types and to investigate whether the use of easily assessable physicochemical parameters, including color attributes, in combination with chemometrics, could differentiate honey floral origin. Parameters taken into account were: pH, electrical conductivity, ash, free acidity, lactonic acidity, total acidity, moisture content, total sugars (degrees Brix, °Bx), total dissolved solids and their ratio to total acidity, salinity, CIELAB color parameters, along with browning index values. Results showed that all honey samples analyzed met the European quality standards set for honey and had variations in the aforementioned physicochemical parameters depending on floral origin. Application of linear discriminant analysis showed that eight physicochemical parameters, including color, could classify Egyptian honeys according to floral origin (p < 0.05). The correct classification rate was 95.5% using the original method and 90.9% using the cross-validation method. The discriminatory ability of the developed model was further validated using unknown honey samples. The overall correct classification rate was not affected. Specific physicochemical parameter analysis in combination with chemometrics has the potential to enhance the differences in floral honeys produced in a given geographical zone.
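As a sketch of the classification step, the following assumes synthetic stand-in measurements (the real study used eight measured physicochemical parameters on 22 samples); leave-one-out cross-validation is shown as one plausible choice of cross-validation method.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score, LeaveOneOut

# Hypothetical sketch: LDA on eight physicochemical parameters (e.g. pH,
# conductivity, CIELAB colour), with leave-one-out cross-validation.
# X and y below are synthetic stand-ins for the 22 honey samples.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (11, 8)), rng.normal(1.5, 1, (11, 8))])
y = np.array([0] * 11 + [1] * 11)  # 0 = clover, 1 = citrus

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=LeaveOneOut())
print(f"Cross-validated correct classification rate: {scores.mean():.1%}")
```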
Karabournioti, Sofia
2018-01-01
Twenty-two honey samples, namely clover and citrus honeys, were collected from the greater Cairo area during the harvesting year 2014–2015. The main purpose of the present study was to characterize the aforementioned honey types and to investigate whether the use of easily assessable physicochemical parameters, including color attributes, in combination with chemometrics, could differentiate honey floral origin. Parameters taken into account were: pH, electrical conductivity, ash, free acidity, lactonic acidity, total acidity, moisture content, total sugars (degrees Brix, °Bx), total dissolved solids and their ratio to total acidity, salinity, CIELAB color parameters, along with browning index values. Results showed that all honey samples analyzed met the European quality standards set for honey and had variations in the aforementioned physicochemical parameters depending on floral origin. Application of linear discriminant analysis showed that eight physicochemical parameters, including color, could classify Egyptian honeys according to floral origin (p < 0.05). The correct classification rate was 95.5% using the original method and 90.9% using the cross-validation method. The discriminatory ability of the developed model was further validated using unknown honey samples. The overall correct classification rate was not affected. Specific physicochemical parameter analysis in combination with chemometrics has the potential to enhance the differences in floral honeys produced in a given geographical zone. PMID:29751543
Müller, Martin; Seidenberg, Ruth; Schuh, Sabine K; Exadaktylos, Aristomenis K; Schechter, Clyde B; Leichtle, Alexander B; Hautz, Wolf E
2018-01-01
Patients presenting with suspected urinary tract infection are common in everyday emergency practice. Urine flow cytometry has replaced microscopic urine evaluation in many emergency departments, but interpretation of the results remains challenging. The aim of this study was to develop and validate tools that predict urine culture growth from urine flow cytometry parameters. This retrospective study included all adult patients who presented to a large emergency department between January and July 2017 with a suspected urinary tract infection and had both a urine flow cytometry and a urine culture obtained. The objective was to identify urine flow cytometry parameters that reliably predict urine culture growth and mixed flora growth. The data set was split into a training (70%) and a validation set (30%), and different decision-making approaches were developed and validated. Relevant urine culture growth was found in 40.2% of the 613 patients included (mixed flora growth in 7.2%). The numbers of leukocytes and bacteria in flow cytometry were highly associated with urine culture growth, but mixed flora growth could not be sufficiently predicted from the urine flow cytometry parameters. A decision tree, predictive value figures, a nomogram, and a cut-off table to predict urine culture growth from bacteria and leukocyte counts were developed, validated and compared. Urine flow cytometry parameters are insufficient to predict mixed flora growth. However, the prediction of urine culture growth based on bacteria and leukocyte counts is highly accurate, and the developed tools should be used as part of the decision-making process of ordering a urine culture or starting antibiotic therapy if a urogenital infection is suspected.
Seidenberg, Ruth; Schuh, Sabine K.; Exadaktylos, Aristomenis K.; Schechter, Clyde B.; Leichtle, Alexander B.; Hautz, Wolf E.
2018-01-01
Objective Patients presenting with suspected urinary tract infection are common in everyday emergency practice. Urine flow cytometry has replaced microscopic urine evaluation in many emergency departments, but interpretation of the results remains challenging. The aim of this study was to develop and validate tools that predict urine culture growth from urine flow cytometry parameters. Methods This retrospective study included all adult patients who presented to a large emergency department between January and July 2017 with a suspected urinary tract infection and had both a urine flow cytometry and a urine culture obtained. The objective was to identify urine flow cytometry parameters that reliably predict urine culture growth and mixed flora growth. The data set was split into a training (70%) and a validation set (30%), and different decision-making approaches were developed and validated. Results Relevant urine culture growth was found in 40.2% of the 613 patients included (mixed flora growth in 7.2%). The numbers of leukocytes and bacteria in flow cytometry were highly associated with urine culture growth, but mixed flora growth could not be sufficiently predicted from the urine flow cytometry parameters. A decision tree, predictive value figures, a nomogram, and a cut-off table to predict urine culture growth from bacteria and leukocyte counts were developed, validated and compared. Conclusions Urine flow cytometry parameters are insufficient to predict mixed flora growth. However, the prediction of urine culture growth based on bacteria and leukocyte counts is highly accurate, and the developed tools should be used as part of the decision-making process of ordering a urine culture or starting antibiotic therapy if a urogenital infection is suspected. PMID:29474463
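A minimal sketch of one of the tools described, a depth-limited decision tree over bacteria and leukocyte counts, follows; the data, thresholds and tree depth are assumptions, with a 70/30 split mirroring the study design.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split

# Hypothetical sketch: predict urine culture growth from flow-cytometry
# bacteria and leukocyte counts. Synthetic data stand in for the 613 patients.
rng = np.random.default_rng(3)
n = 613
bacteria = rng.lognormal(4, 2, n)     # counts/uL, stand-in values
leukocytes = rng.lognormal(3, 2, n)
growth = (np.log(bacteria) + 0.5 * np.log(leukocytes) + rng.normal(0, 2, n)) > 7

X = np.column_stack([bacteria, leukocytes])
X_tr, X_va, y_tr, y_va = train_test_split(X, growth, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)
print(export_text(tree, feature_names=["bacteria", "leukocytes"]))
print("Validation accuracy:", tree.score(X_va, y_va))
```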
Rezende, Vinícius Marcondes; Rivellis, Ariane Julio; Gomes, Melissa Medrano; Dörr, Felipe Augusto; Novaes, Mafalda Megumi Yoshinaga; Nardinelli, Luciana; Costa, Ariel Lais de Lima; Chamone, Dalton de Alencar Fisher; Bendit, Israel
2013-01-01
Objective The goal of this study was to monitor imatinib mesylate therapeutically in the Tumor Biology Laboratory, Department of Hematology and Hemotherapy, Hospital das Clínicas, Faculdade de Medicina, Universidade de São Paulo (USP). A simple and sensitive method to quantify imatinib and its metabolite (CGP74588) in human serum was developed and fully validated in order to monitor treatment compliance. Methods The method used to quantify these compounds in serum included protein precipitation extraction followed by instrumental analysis using high performance liquid chromatography coupled with mass spectrometry. The method was validated for several parameters, including selectivity, precision, accuracy, recovery and linearity. Results The parameters evaluated during the validation stage exhibited satisfactory results based on the Food and Drug Administration and the Brazilian Health Surveillance Agency (ANVISA) guidelines for validating bioanalytical methods. These parameters also showed a linear correlation greater than 0.99 for the concentration range between 0.500 µg/mL and 10.0 µg/mL and a total analysis time of 13 minutes per sample. This study includes results (imatinib serum concentrations) for 308 samples from patients being treated with imatinib mesylate. Conclusion The method developed in this study was successfully validated and is being efficiently used to measure imatinib concentrations in samples from chronic myeloid leukemia patients to check treatment compliance. The imatinib serum levels of patients achieving a major molecular response were significantly higher than those of patients who did not achieve this result. These results are thus consistent with published reports concerning other populations. PMID:23741187
Chae, Soo Young; Suh, Sangil; Ryoo, Inseon; Park, Arim; Noh, Kyoung Jin; Shim, Hackjoon; Seol, Hae Young
2017-05-01
We developed semi-automated volumetric software, NPerfusion, to segment brain tumors and quantify perfusion parameters on whole-brain CT perfusion (WBCTP) images. The purpose of this study was to assess the feasibility of the software and to validate its performance against manual segmentation. Twenty-nine patients with pathologically proven brain tumors who underwent preoperative WBCTP between August 2012 and February 2015 were included. Three perfusion parameters, arterial flow (AF), equivalent blood volume (EBV), and Patlak flow (PF, a measure of capillary permeability), of brain tumors were generated by commercial software and then quantified volumetrically by NPerfusion, which also semi-automatically segmented tumor boundaries. The quantification was validated by comparison with that of manual segmentation in terms of the concordance correlation coefficient and Bland-Altman analysis. With NPerfusion, we successfully performed segmentation and quantified whole volumetric perfusion parameters of all 29 brain tumors, which showed perfusion trends consistent with previous studies. The validation of the perfusion parameter quantification exhibited almost perfect agreement with manual segmentation, with Lin concordance correlation coefficients (ρc) for AF, EBV, and PF of 0.9988, 0.9994, and 0.9976, respectively. On Bland-Altman analysis, most differences between this software and manual segmentation on the commercial software were within the limits of agreement. NPerfusion successfully performs segmentation of brain tumors and calculates their perfusion parameters. We validated this semi-automated segmentation software by comparing it with manual segmentation. NPerfusion can be used to calculate volumetric perfusion parameters of brain tumors from WBCTP.
Simulation verification techniques study. Subsystem simulation validation techniques
NASA Technical Reports Server (NTRS)
Duncan, L. M.; Reddell, J. P.; Schoonmaker, P. B.
1974-01-01
Techniques for validation of software modules which simulate spacecraft onboard systems are discussed. An overview of the simulation software hierarchy for a shuttle mission simulator is provided. A set of guidelines for the identification of subsystem/module performance parameters and critical performance parameters are presented. Various sources of reference data to serve as standards of performance for simulation validation are identified. Environment, crew station, vehicle configuration, and vehicle dynamics simulation software are briefly discussed from the point of view of their interfaces with subsystem simulation modules. A detailed presentation of results in the area of vehicle subsystems simulation modules is included. A list of references, conclusions, and recommendations is also given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yuanyuan; Diao, Ruisheng; Huang, Renke
Maintaining good quality of power plant stability models is of critical importance to ensure the secure and economic operation and planning of today's power grid, with its increasingly stochastic and dynamic behavior. According to North American Electric Reliability Corporation (NERC) standards, all generators in North America with capacities larger than 10 MVA are required to validate their models every five years. Validation is quite costly and can significantly affect the revenue of generator owners, because the traditional staged testing requires generators to be taken offline. Over the past few years, validating and calibrating parameters using online measurements, including phasor measurement units (PMUs) and digital fault recorders (DFRs), has been proven to be a cost-effective approach. In this paper, an innovative open-source tool suite is presented for validating power plant models using the PPMV tool, identifying bad parameters with trajectory sensitivity analysis, and finally calibrating parameters using an ensemble Kalman filter (EnKF) based algorithm. The architectural design and the detailed procedures to run the tool suite are presented, with results of tests on a realistic hydro power plant using PMU measurements for 12 different events. The calibrated parameters of the machine, exciter, governor and PSS models demonstrate much better performance than the original models for all the events and show the robustness of the proposed calibration algorithm.
Experimental Validation and Combustion Modeling of a JP-8 Surrogate in a Single Cylinder Diesel Engine
Shrestha, Amit; Joshi, Umashankar; Zheng, Ziliang; Badawy, Tamer; Henein, Naeim A. (Wayne State University, Detroit, MI, USA)
2014-04-15
The objective was to validate a two-component JP-8 surrogate in a single cylinder diesel engine. Validation parameters include ignition delay.
NASA Technical Reports Server (NTRS)
Minnis, P.; Sun-Mack, S.; Bedka, K. M.; Yost, C. R.; Trepte, Q. Z.; Smith, W. L., Jr.; Painemal, D.; Chen, Y.; Palikonda, R.; Dong, X.;
2016-01-01
Validation is a key component of remote sensing that can take many different forms. The NASA LaRC Satellite ClOud and Radiative Property retrieval System (SatCORPS) is applied to many different imager datasets, including those from the geostationary satellites Meteosat, Himawari-8, INSAT-3D, GOES, and MTSAT, as well as from the low-Earth-orbiting satellite imagers MODIS, AVHRR, and VIIRS. While each of these imagers has a similar set of channels with wavelengths near 0.65, 3.7, 11, and 12 micrometers, many differences among them can lead to discrepancies in the retrievals. These differences include spatial resolution, spectral response functions, viewing conditions, and calibrations, among others. Even when analyzed with nearly identical algorithms, it is necessary, because of those discrepancies, to validate the results from each imager separately in order to assess the uncertainties in the individual parameters. This paper presents comparisons of various SatCORPS-retrieved cloud parameters with independent measurements and retrievals from a variety of instruments. These include surface- and space-based lidar and radar data from CALIPSO and CloudSat, respectively, to assess cloud fraction, height, base, optical depth, and ice water path; satellite and surface microwave radiometers to evaluate cloud liquid water path; surface-based radiometers to evaluate optical depth and effective particle size; and airborne in-situ data to evaluate ice water content, effective particle size, and other parameters. The results of these comparisons are contrasted, and the factors influencing the differences are discussed.
Mears, Lisa; Stocks, Stuart M; Albaek, Mads O; Sin, Gürkan; Gernaey, Krist V
2017-03-01
A mechanistic model-based soft sensor is developed and validated for 550L filamentous fungus fermentations operated at Novozymes A/S. The soft sensor comprises a parameter estimation block based on a stoichiometric balance, coupled to a dynamic process model. The on-line parameter estimation block models the changing rates of formation of product, biomass, and water, and the rate of consumption of feed, using standard, available on-line measurements. This parameter estimation block is coupled to a mechanistic process model, which solves the current states of biomass, product, substrate, dissolved oxygen and mass, as well as other process parameters including kLa, viscosity and the partial pressure of CO2. State estimation at this scale requires a robust mass model including evaporation, a factor not often considered at smaller scales of operation. The model is developed using a historical data set of 11 batches from the fermentation pilot plant (550L) at Novozymes A/S. The model is then implemented on-line in 550L fermentation processes operated at Novozymes A/S in order to validate the state estimator model on 14 new batches utilizing a new strain. The product concentration in the validation batches was predicted with an average root mean sum of squared errors (RMSSE) of 16.6%. In addition, calculation of the Janus coefficient for the validation batches shows a suitably calibrated model. The robustness of the model prediction is assessed with respect to the accuracy of the input data, and parameter estimation uncertainty is also quantified. The application of this on-line state estimator allows for on-line monitoring of pilot scale batches, including real-time estimates of multiple parameters that cannot be monitored on-line. With the successful application of a soft sensor at this scale, this allows for improved process monitoring, as well as opening up further possibilities for on-line control algorithms utilizing these on-line model outputs. Biotechnol. Bioeng. 2017;114:589-599.
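A minimal sketch of the soft-sensor structure follows: an on-line block estimates rates from measurements, and a dynamic block integrates the states, including an evaporation term in the mass balance. All yields, rates, signals and the time grid are assumed placeholders, not Novozymes' model.

```python
import numpy as np

def estimate_rates(feed_rate):
    """Stand-in for the stoichiometric parameter estimation block: map the
    measured feed rate to biomass/product formation rates via assumed yields.
    (The real block also uses other standard on-line measurements.)"""
    Yxs, Yps = 0.5, 0.2            # assumed yield coefficients (g/g)
    return Yxs * feed_rate, Yps * feed_rate

# Dynamic process model block: Euler integration of the states.
X, P, M = 10.0, 0.0, 400.0         # biomass, product, broth mass (placeholders)
dt, evap = 0.5, 0.05               # time step (h), evaporation rate (kg/h)
for t in np.arange(0.0, 24.0, dt):
    feed_rate = 2.0                # stand-in for an on-line measurement
    rX, rP = estimate_rates(feed_rate)
    X += rX * dt
    P += rP * dt
    M += (feed_rate - evap) * dt   # mass balance including evaporation
print(f"t = 24 h: biomass = {X:.1f}, product = {P:.1f}, mass = {M:.1f}")
```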
Larrosa, José Manuel; Moreno-Montañés, Javier; Martinez-de-la-Casa, José María; Polo, Vicente; Velázquez-Villoria, Álvaro; Berrozpe, Clara; García-Granero, Marta
2015-10-01
The purpose of this study was to develop and validate a multivariate predictive model to detect glaucoma by using a combination of retinal nerve fiber layer (RNFL), retinal ganglion cell-inner plexiform layer (GCIPL), and optic disc parameters measured using spectral-domain optical coherence tomography (OCT). Five hundred eyes from 500 participants and 187 eyes of another 187 participants were included in the study and validation groups, respectively. Patients with glaucoma were classified into five groups based on visual field damage. The sensitivity and specificity of all glaucoma OCT parameters were analyzed. Receiver operating characteristic (ROC) curves and areas under the ROC curve (AUC) were compared. Three predictive multivariate models (quantitative, qualitative, and combined) that used a combination of the best OCT parameters were constructed. A diagnostic calculator was created using the combined multivariate model. The parameters with the best AUCs were: inferior RNFL, average RNFL, vertical cup/disc ratio, minimal GCIPL, and inferior-temporal GCIPL. Comparisons among the parameters did not show that the GCIPL parameters were better than those of the RNFL in early and advanced glaucoma. The highest AUC was for the combined predictive model (0.937; 95% confidence interval, 0.911-0.957), which was significantly (P = 0.0001) higher than the isolated parameters considered in early and advanced glaucoma. The validation group displayed results similar to those of the study group. The best GCIPL, RNFL, and optic disc parameters showed a similar ability to detect glaucoma. The combined predictive formula improved glaucoma detection compared to the best isolated parameters evaluated. The diagnostic calculator obtained good classification of participants in both the study and validation groups.
Zhang, Jinshui; Yuan, Zhoumiqi; Shuai, Guanyuan; Pan, Yaozhong; Zhu, Xiufang
2017-04-26
This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for the support vector data description (SVDD) model to map specific land cover by integrating training and window-based validation sets. Compared to the conventional approach, where the validation set included target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructed a tightened hypersphere because of the compact constraint imposed by the outlier pixels, which were located adjacent to the target class in the spectral feature space. The overall accuracies achieved for wheat and bare land were as high as 89.25% and 83.65%, respectively. However, the target class was underestimated because the validation set covers only a small fraction of the heterogeneous spectra of the target class. Different window sizes were then tested to acquire more wheat pixels for the validation set. The results showed that classification accuracy increased with increasing window size, and the overall accuracies were higher than 88% at all window-size scales. Moreover, WVS-SVDD showed much less sensitivity to untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits using the optimal parameters, tradeoff coefficient (C) and kernel width (s), in mapping homogeneous specific land cover.
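As an illustrative sketch of the parameter-selection idea: scikit-learn exposes no SVDD, so OneClassSVM (with nu and gamma standing in for SVDD's tradeoff coefficient C and kernel width s) is used below, and the validation set mixes target pixels with spectrally neighbouring outliers; all data are synthetic assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Hypothetical sketch of grid-searching a one-class description's parameters
# against a validation set built from target pixels plus neighbouring outliers.
rng = np.random.default_rng(4)
train = rng.normal(0, 1, (200, 5))          # target-class training pixels
val_target = rng.normal(0, 1, (50, 5))
val_outlier = rng.normal(2.5, 1, (50, 5))   # spectrally neighbouring outliers

best, best_acc = None, -1.0
for nu in [0.01, 0.05, 0.1, 0.2]:
    for gamma in [0.01, 0.1, 1.0]:
        model = OneClassSVM(nu=nu, gamma=gamma).fit(train)
        acc = np.mean(np.r_[model.predict(val_target) == 1,
                            model.predict(val_outlier) == -1])
        if acc > best_acc:
            best, best_acc = (nu, gamma), acc
print("Selected (nu, gamma):", best, "validation accuracy:", best_acc)
```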
NASA Astrophysics Data System (ADS)
Herrera-Basurto, R.; Mercader-Trejo, F.; Muñoz-Madrigal, N.; Juárez-García, J. M.; Rodriguez-López, A.; Manzano-Ramírez, A.
2016-07-01
The main goal of method validation is to demonstrate that the method is suitable for its intended purpose. One of the advantages of analytical method validation is that it translates into a level of confidence in the measurement results reported to satisfy a specific objective. Elemental composition determination by wavelength dispersive spectrometer (WDS) microanalysis has been used in extremely wide-ranging areas, mainly in the field of materials science and for impurity determinations in geological, biological and food samples. However, little information is reported about the validation of the applied methods. Herein, results of the in-house validation of a method for elemental composition determination by WDS are shown. SRM 482, a binary Cu-Au alloy of different compositions, was used during the validation protocol, following the recommendations for method validation proposed by Eurachem. This paper can be taken as a reference for the evaluation of the validation parameters most frequently requested to obtain accreditation under the requirements of the ISO/IEC 17025 standard: selectivity, limit of detection, linear interval, sensitivity, precision, trueness and uncertainty. A model for uncertainty estimation was proposed, including systematic and random errors. In addition, parameters evaluated during the validation process were also considered as part of the uncertainty model.
Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.
1998-01-01
A method for generating a validated measurement of a process parameter at a point in time by using a plurality of individual sensor inputs from a scan of said sensors at said point in time. The sensor inputs from said scan are stored, and a first validation pass is initiated by computing an initial average of all stored sensor inputs. Each sensor input is deviation-checked by comparing each input, including a preset tolerance, against the initial average. If the first deviation check is unsatisfactory, the sensor which produced the unsatisfactory input is flagged as suspect. It is then determined whether at least two of the inputs have not been flagged as suspect and are therefore considered good inputs. If two or more inputs are good, a second validation pass is initiated by computing a second average of all the good sensor inputs, and deviation-checking the good inputs by comparing each good input, including a preset tolerance, against the second average. If the second deviation check is satisfactory, the second average is displayed as the validated measurement and the suspect sensor is flagged as bad. A validation fault occurs if at least two inputs are not considered good, or if the second deviation check is not satisfactory. In the latter situation the inputs from each of all the sensors are compared against the last validated measurement, and the value from the sensor input that deviates the least from the last valid measurement is displayed.
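Since the two-pass logic above is spelled out step by step, a short sketch may help; the function and parameter names are illustrative, not from the patent.

```python
def validate_scan(inputs, tol, last_valid):
    """Sketch of the two-pass validation described above.
    Returns (validated_value, suspect_inputs, fault_flag)."""
    # First pass: deviation-check every input against the initial average.
    avg1 = sum(inputs) / len(inputs)
    good = [x for x in inputs if abs(x - avg1) <= tol]
    suspect = [x for x in inputs if abs(x - avg1) > tol]

    if len(good) >= 2:
        # Second pass: re-average only the good inputs and re-check them.
        avg2 = sum(good) / len(good)
        if all(abs(x - avg2) <= tol for x in good):
            return avg2, suspect, False   # suspects are now flagged bad

    # Validation fault: fall back to the input closest to the last
    # validated measurement.
    closest = min(inputs, key=lambda x: abs(x - last_valid))
    return closest, suspect, True

value, bad, fault = validate_scan([100.2, 99.8, 100.1, 250.0], tol=1.0, last_valid=100.0)
print(value, bad, fault)
```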
Dolatabadi, Elham; Taati, Babak; Mihailidis, Alex
2016-09-01
This paper presents a study to evaluate the concurrent validity of the Microsoft Kinect for Windows v2 for measuring the spatiotemporal parameters of gait. Twenty healthy adults performed several sequences of walks across a GAITRite mat under three different conditions: usual pace, fast pace, and dual task. Each walking sequence was simultaneously captured with two Kinect for Windows v2 sensors and the GAITRite system. An automated algorithm was employed to extract various spatiotemporal features, including stance time, step length, step time and gait velocity, from the recorded Kinect v2 sequences. Accuracy in terms of reliability, concurrent validity and limits of agreement was examined for each gait feature under the different walking conditions. The 95% Bland-Altman limits of agreement were narrow enough for the Kinect v2 to be a valid tool for measuring all reported spatiotemporal parameters of gait in all three conditions. An excellent intraclass correlation coefficient (ICC(2,1)) ranging from 0.9 to 0.98 was observed for all gait measures across the different walking conditions. The inter-trial reliability of all gait parameters was shown to be strong for all walking types (ICC(3,1) > 0.73). The results of this study suggest that the Kinect for Windows v2 has the capacity to measure selected spatiotemporal gait parameters for healthy adults.
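A minimal sketch of the agreement analysis used here, 95% Bland-Altman limits of agreement between the two systems, with synthetic stand-in step-length values:

```python
import numpy as np

# Hypothetical illustration: bias and 95% limits of agreement between
# Kinect v2 and GAITRite step-length estimates (synthetic values, metres).
rng = np.random.default_rng(5)
gaitrite = rng.normal(0.65, 0.05, 40)
kinect = gaitrite + rng.normal(0.002, 0.01, 40)   # small bias + noise

diff = kinect - gaitrite
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                     # half-width of the limits
print(f"bias = {bias:.4f} m, 95% limits of agreement = "
      f"[{bias - loa:.4f}, {bias + loa:.4f}] m")
```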
Wieske, Luuk; Witteveen, Esther; Verhamme, Camiel; Dettling-Ihnenfeldt, Daniela S; van der Schaaf, Marike; Schultz, Marcus J; van Schaik, Ivo N; Horn, Janneke
2014-01-01
An early diagnosis of Intensive Care Unit-acquired weakness (ICU-AW) using muscle strength assessment is not possible in most critically ill patients. We hypothesized that development of ICU-AW can be predicted reliably two days after ICU admission, using patient characteristics, early available clinical parameters, laboratory results and use of medication as parameters. Newly admitted ICU patients mechanically ventilated ≥2 days were included in this prospective observational cohort study. Manual muscle strength was measured according to the Medical Research Council (MRC) scale when patients were awake and attentive. ICU-AW was defined as an average MRC score <4. A prediction model was developed by selecting predictors from an a priori defined set of candidate predictors, based on known risk factors. The discriminative performance of the prediction model was evaluated, validated internally and compared to the APACHE IV and SOFA scores. Of 212 included patients, 103 developed ICU-AW. The highest lactate level, treatment with any aminoglycoside in the first two days after admission, and age were selected as predictors. The area under the receiver operating characteristic curve of the prediction model was 0.71 after internal validation. The new prediction model improved discrimination compared to the APACHE IV and SOFA scores. The new early prediction model for ICU-AW, using a set of 3 easily available parameters, has fair discriminative performance. This model needs external validation.
Utilization of advanced calibration techniques in stochastic rock fall analysis of quarry slopes
NASA Astrophysics Data System (ADS)
Preh, Alexander; Ahmadabadi, Morteza; Kolenprat, Bernd
2016-04-01
In order to study rock fall dynamics, a research project was conducted by the Vienna University of Technology and the Austrian Central Labour Inspectorate (Federal Ministry of Labour, Social Affairs and Consumer Protection). Part of this project comprised 277 full-scale drop tests at three different quarries in Austria, with key parameters of the rock fall trajectories recorded. The tests involved boulders ranging from 0.18 to 1.8 m in diameter and from 0.009 to 8.1 Mg in mass. The geology of these sites included strong rock of igneous, metamorphic and volcanic types. In this paper the results of the tests are used for calibrating and validating a new stochastic computer model. It is demonstrated that the error of the model (i.e. the difference between observed and simulated results) has a lognormal distribution. With two parameters selected, advanced calibration techniques, including the Markov chain Monte Carlo technique, maximum likelihood, and root mean square error (RMSE), are utilized to minimize the error. Validation of the model based on the cross-validation technique reveals that, in general, reasonable stochastic approximations of the rock fall trajectories are obtained in all dimensions, including runout, bounce heights and velocities. The approximations are compared to the measured data in terms of median, 95% and maximum values. The results of the comparisons indicate that approximate first-order predictions, using a single set of input parameters, are possible and can be used to aid practical hazard and risk assessment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Kandler A; Santhanagopalan, Shriram; Yang, Chuanbo
Computer models are helping to accelerate the design and validation of next generation batteries and provide valuable insights not possible through experimental testing alone. Validated 3-D physics-based models exist for predicting electrochemical performance, thermal and mechanical response of cells and packs under normal and abuse scenarios. The talk describes present efforts to make the models better suited for engineering design, including improving their computation speed, developing faster processes for model parameter identification including under aging, and predicting the performance of a proposed electrode material recipe a priori using microstructure models.
Cross-validation pitfalls when selecting and assessing regression and classification models.
Krstajic, Damjan; Buturovic, Ljubomir J; Leahy, David E; Thomas, Simon
2014-03-29
We address the problem of selecting and assessing classification and regression models using cross-validation. Current state-of-the-art methods can yield models with high variance, rendering them unsuitable for a number of practical applications including QSAR. In this paper we describe and evaluate best practices which improve reliability and increase confidence in selected models. A key operational component of the proposed methods is cloud computing which enables routine use of previously infeasible approaches. We describe in detail an algorithm for repeated grid-search V-fold cross-validation for parameter tuning in classification and regression, and we define a repeated nested cross-validation algorithm for model assessment. As regards variable selection and parameter tuning we define two algorithms (repeated grid-search cross-validation and double cross-validation), and provide arguments for using the repeated grid-search in the general case. We show results of our algorithms on seven QSAR datasets. The variation of the prediction performance, which is the result of choosing different splits of the dataset in V-fold cross-validation, needs to be taken into account when selecting and assessing classification and regression models. We demonstrate the importance of repeating cross-validation when selecting an optimal model, as well as the importance of repeating nested cross-validation when assessing a prediction error.
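A compact sketch of the repeated grid-search V-fold cross-validation idea (here V = 5 with 10 repetitions, ridge regression as a stand-in model, and a toy dataset; all of these choices are assumptions for illustration):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, KFold

# Repeat the V-fold split with different shuffles so the selected parameter
# does not hinge on one particular partition of the data.
X, y = make_regression(n_samples=200, n_features=20, noise=10, random_state=0)
param_grid = {"alpha": [0.01, 0.1, 1.0, 10.0]}

chosen = []
for repeat in range(10):                       # number of repetitions is a choice
    cv = KFold(n_splits=5, shuffle=True, random_state=repeat)
    search = GridSearchCV(Ridge(), param_grid, cv=cv,
                          scoring="neg_mean_squared_error").fit(X, y)
    chosen.append(search.best_params_["alpha"])
print("alpha chosen per repetition:", chosen)
# A stable choice across repetitions gives more confidence in the model;
# assessing prediction error would additionally require (repeated) nested CV.
```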
Jenkins, Kathy J; Koch Kupiec, Jennifer; Owens, Pamela L; Romano, Patrick S; Geppert, Jeffrey J; Gauvreau, Kimberlee
2016-05-20
The National Quality Forum previously approved a quality indicator for mortality after congenital heart surgery developed by the Agency for Healthcare Research and Quality (AHRQ). Several parameters of the validated Risk Adjustment for Congenital Heart Surgery (RACHS-1) method were included, but others differed. As part of the National Quality Forum endorsement maintenance process, developers were asked to harmonize the 2 methodologies. Parameters that were identical between the 2 methods were retained. AHRQ's Healthcare Cost and Utilization Project State Inpatient Databases (SID) 2008 were used to select optimal parameters where differences existed, with a goal to maximize model performance and face validity. Inclusion criteria were not changed and included all discharges for patients <18 years with International Classification of Diseases, Ninth Revision, Clinical Modification procedure codes for congenital heart surgery or nonspecific heart surgery combined with congenital heart disease diagnosis codes. The final model includes procedure risk group, age (0-28 days, 29-90 days, 91-364 days, 1-17 years), low birth weight (500-2499 g), other congenital anomalies (Clinical Classifications Software 217, except for 758.xx), multiple procedures, and transfer-in status. Among 17 945 eligible cases in the SID 2008, the c statistic for model performance was 0.82. In the SID 2013 validation data set, the c statistic was 0.82. Risk-adjusted mortality rates by center ranged from 0.9% to 4.1% (5th-95th percentile). Congenital heart surgery programs can now obtain national benchmarking reports by applying AHRQ Quality Indicator software to hospital administrative data, based on the harmonized RACHS-1 method, with high discrimination and face validity.
Post-processing of seismic parameter data based on valid seismic event determination
McEvilly, Thomas V.
1985-01-01
An automated seismic processing system and method are disclosed, including an array of CMOS microprocessors for unattended battery-powered processing of a multi-station network. According to a characterizing feature of the invention, each channel of the network is independently operable to automatically detect, measure times and amplitudes, and compute and fit Fast Fourier transforms (FFTs) for both P- and S-waves on analog seismic data after it has been sampled at a given rate. The measured parameter data from each channel are then reviewed for event validity by a central controlling microprocessor and, if determined by preset criteria to constitute a valid event, the parameter data are passed to an analysis computer for calculation of hypocenter location, running b-values, source parameters, event count, P-wave polarities, moment-tensor inversion, and Vp/Vs ratios. The in-field real-time analysis of data maximizes the efficiency of microearthquake surveys, allowing flexibility in experimental procedures, with a minimum of traditional labor-intensive postprocessing. A unique consequence of the system is that none of the original data (i.e., the sensor analog output signals) are necessarily saved after computation; rather, the numerical parameters generated by the automatic analysis are the sole output of the automated seismic processor.
Comparative assessment of bioanalytical method validation guidelines for pharmaceutical industry.
Kadian, Naveen; Raju, Kanumuri Siva Rama; Rashid, Mamunur; Malik, Mohd Yaseen; Taneja, Isha; Wahajuddin, Muhammad
2016-07-15
The concepts, importance, and application of bioanalytical method validation have been discussed for a long time, and validation of bioanalytical methods is widely accepted as pivotal before they are taken into routine use. The United States Food and Drug Administration (USFDA) guidelines issued in 2001 have been referred to by every guideline released ever since, be it the European Medicines Agency (EMA) in Europe, the National Health Surveillance Agency (ANVISA) in Brazil, the Ministry of Health, Labour and Welfare (MHLW) in Japan, or any other guideline concerning bioanalytical method validation. After 12 years, the USFDA released its new draft guideline for comments in 2013, which covers the latest parameters and topics encountered in bioanalytical method validation and moves toward the harmonization of bioanalytical method validation across the globe. Even though the regulatory agencies are in general agreement, significant variations exist in acceptance criteria and methodology. The present review highlights the variations, similarities and comparisons between bioanalytical method validation guidelines issued by major regulatory authorities worldwide. Additionally, other evaluation parameters, such as matrix effect and incurred sample reanalysis, including other stability aspects, are discussed to provide ease of access for designing a bioanalytical method and its validation complying with the majority of drug authority guidelines.
NASA Astrophysics Data System (ADS)
Ivankovic, D.; Dadic, V.
2009-04-01
Some oceanographic parameters have to be inserted into the database manually; others (for example, data from a CTD probe) are inserted from various files. All these parameters require visualization, validation and manipulation from the research vessel or scientific institution, as well as public presentation. For these purposes a web-based system was developed, containing dynamic SQL procedures and Java applets. The technology background is an Oracle 10g relational database and Oracle Application Server. Web interfaces are developed using PL/SQL stored database procedures (mod_plsql). Additional parts for data visualization include Java applets and JavaScript. The mapping tool is the Google Maps API (JavaScript), with a Java applet as an alternative. The graph is realized as a dynamically generated web page containing a Java applet. Both the mapping tool and the graph are georeferenced: a click on some part of the graph automatically initiates a zoom or a marker at the location where the parameter was measured. This feature is very useful for data validation. The code for data manipulation and visualization is partially realized with dynamic SQL, which allows us to separate the data definition from the code for data manipulation. Adding a new parameter to the system requires only data definition and description, without programming a new interface for this kind of data.
NASA Technical Reports Server (NTRS)
Hand, David W.; Crittenden, John C.; Ali, Anisa N.; Bulloch, John L.; Hokanson, David R.; Parrem, David L.
1996-01-01
This thesis includes the development and verification of an adsorption model for analysis and optimization of the adsorption processes within the International Space Station multifiltration beds. The fixed bed adsorption model includes multicomponent equilibrium and both external and intraparticle mass transfer resistances. Single solute isotherm parameters were used in the multicomponent equilibrium description to predict the competitive adsorption interactions occurring during the adsorption process. The multicomponent equilibrium description used the Fictive Component Analysis to describe adsorption in unknown background matrices. Multicomponent isotherms were used to validate the multicomponent equilibrium description. Column studies were used to develop and validate external and intraparticle mass transfer parameter correlations for compounds of interest. The fixed bed model was verified using a shower and handwash ersatz water which served as a surrogate to the actual shower and handwash wastewater.
Quantitative validation of carbon-fiber laminate low velocity impact simulations
English, Shawn A.; Briggs, Timothy M.; Nelson, Stacy M.
2015-09-26
Simulations of low velocity impact with a flat cylindrical indenter upon a carbon fiber fabric reinforced polymer laminate are rigorously validated. Comparison of the impact energy absorption between the model and experiment is used as the validation metric. Additionally, non-destructive evaluation, including ultrasonic scans and three-dimensional computed tomography, provides qualitative validation of the models. The simulations include delamination, matrix cracks and fiber breaks. An orthotropic damage and failure constitutive model, capable of predicting progressive damage and failure, is developed in conjunction with the simulations and described. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution, which is then compared to experimental output using appropriate statistical methods. Lastly, the model form errors are exposed and corrected for use in an additional blind validation analysis. The result is a quantifiable confidence in material characterization and model physics when simulating low velocity impact in structures of interest.
Diehl, K; Görig, T; Breitbart, E W; Greinert, R; Hillhouse, J J; Stapleton, J L; Schneider, S
2018-01-01
Evidence suggests that indoor tanning may have addictive properties. However, many instruments for measuring indoor tanning addiction show poor validity and reliability. Recently, a new instrument, the Behavioral Addiction Indoor Tanning Screener (BAITS), has been developed. The aim was to test the validity and reliability of the BAITS using a multimethod approach. We used data from the first wave of the National Cancer Aid Monitoring on Sunbed Use, which included a cognitive pretest (August 2015) and a Germany-wide representative survey (October to December 2015). In the cognitive pretest, 10 users of tanning beds were interviewed, and 3000 individuals aged 14-45 years were included in the representative survey. Potential symptoms of indoor tanning addiction were measured using the BAITS, a brief screener with seven items (answer categories: yes vs. no). Criterion validity was assessed by comparing the results of the BAITS with usage parameters. Additionally, we tested internal consistency and construct validity. A total of 19.7% of current and 1.8% of former indoor tanning users screened positive for symptoms of a potential indoor tanning addiction. We found significant associations between usage parameters and the BAITS (criterion validity). Internal consistency (reliability) was good (Kuder-Richardson 20, 0.854). The BAITS was shown to be a homogeneous construct (construct validity). Compared with other short instruments measuring symptoms of a potential indoor tanning addiction, the BAITS appears to be a valid and reliable tool. With its short length and binary items, the BAITS is easy to use in large surveys.
NASA Astrophysics Data System (ADS)
Raj, R.; Hamm, N. A. S.; van der Tol, C.; Stein, A.
2015-08-01
Gross primary production (GPP), separated from flux tower measurements of net ecosystem exchange (NEE) of CO2, is used increasingly to validate process-based simulators and remote sensing-derived estimates of simulated GPP at various time steps. Proper validation should include the uncertainty associated with this separation at different time steps. This can be achieved by using a Bayesian framework. In this study, we estimated the uncertainty in GPP at half-hourly time steps. We used a non-rectangular hyperbola (NRH) model to separate GPP from flux tower measurements of NEE at the Speulderbos forest site, The Netherlands. The NRH model included the variables that influence GPP, in particular radiation and temperature. In addition, the NRH model provided a robust empirical relationship between radiation and GPP by including the degree of curvature of the light response curve. Parameters of the NRH model were fitted to the measured NEE data for every 10-day period during the growing season (April to October) in 2009. Adopting a Bayesian approach, we defined the prior distribution of each NRH parameter. Markov chain Monte Carlo (MCMC) simulation was used to update the prior distribution of each NRH parameter. This allowed us to estimate the uncertainty in the separated GPP at half-hourly time steps, yielding the posterior distribution of GPP at each half hour and allowing the quantification of uncertainty. The time series of posterior distributions thus obtained allowed us to estimate the uncertainty at daily time steps. We compared informative with non-informative prior distributions of the NRH parameters. The results showed that both choices of prior produced similar posterior distributions of GPP. This will provide relevant and important information for the validation of process-based simulators in the future. Furthermore, the obtained posterior distributions of NEE and the NRH parameters are of interest for a range of applications.
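For context, one common form of the non-rectangular hyperbola light-response model is shown below; the paper's exact parameterization (including its temperature dependence) may differ, so this is illustrative only:

\[ \mathrm{GPP}(I) \;=\; \frac{\alpha I + P_{\max} - \sqrt{(\alpha I + P_{\max})^2 - 4\,\theta\,\alpha I\,P_{\max}}}{2\theta} , \]

where \(I\) is incident radiation, \(\alpha\) the initial slope of the light response, \(P_{\max}\) the asymptotic maximum, and \(\theta \in (0, 1]\) the degree of curvature of the light response curve.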
Sessa, Luca; Perrenot, Cyril; Xu, Song; Hubert, Jacques; Bresler, Laurent; Brunaud, Laurent; Perez, Manuela
2018-03-01
In robotic surgery, the coordination between the console-side surgeon and the bed-side assistant is crucial, more so than in standard surgery or laparoscopy, where the surgical team works in close contact. Xperience™ Team Trainer (XTT) is a new optional component for the dv-Trainer® platform that simulates the patient-side working environment. We present preliminary results on face validity, content validity, and the imposed workload for the use of the XTT virtual reality platform in the psychomotor and communication skills training of the bed-side assistant in robot-assisted surgery. Participants were categorized into "Beginners" and "Experts". They tested a series of exercises (Pick & Place Laparoscopic Demo, Pick & Place 2 and Team Match Board 1) and completed face validity questionnaires. "Experts" assessed content validity on another questionnaire. All the participants completed a NASA Task Load Index questionnaire to assess the workload imposed by XTT. Twenty-one consenting participants were included (12 "Beginners" and 9 "Experts"). XTT was shown to possess face and content validity, as evidenced by the rankings given on the simulator's ease of use and realism parameters and on its usefulness for training. Eight out of nine "Experts" judged the visualization of metrics after the exercises useful. However, face validity showed some weaknesses regarding interactions and instruments. Reasonable workload parameters were registered. XTT demonstrated excellent face and content validity with acceptable workload parameters. XTT could become a useful tool for robotic surgery team training.
Hybrid Gibbs Sampling and MCMC for CMB Analysis at Small Angular Scales
NASA Technical Reports Server (NTRS)
Jewell, Jeffrey B.; Eriksen, H. K.; Wandelt, B. D.; Gorski, K. M.; Huey, G.; O'Dwyer, I. J.; Dickinson, C.; Banday, A. J.; Lawrence, C. R.
2008-01-01
A) Gibbs sampling has now been validated as an efficient, statistically exact, and practically useful method for the low-L regime (as demonstrated on WMAP temperature and polarization data).
B) We are extending Gibbs sampling to directly propagate uncertainties in both foreground and instrument models to the total uncertainty in cosmological parameters, for the entire range of angular scales relevant for Planck.
C) This is made possible by the inclusion of foreground model parameters in Gibbs sampling, and by hybrid MCMC and Gibbs sampling for the low signal-to-noise (high-L) regime.
D) Future items to be included in the Bayesian framework are: 1) integration with a hybrid likelihood (or posterior) code for cosmological parameters; 2) other uncertainties in instrumental systematics (e.g. beam uncertainties, noise estimation, calibration errors).
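As a toy illustration of the Gibbs scheme in item A), alternating draws of a signal given its variance and of the variance given the signal, the following sketch runs a scalar-variance caricature of the map/power-spectrum sampler; the data model, the Jeffreys-like prior and all values are assumptions for intuition only:

```python
# Toy two-step Gibbs sampler: alternate s | C, d (Gaussian) and C | s
# (inverse-gamma), a scalar-variance caricature of CMB map/spectrum sampling.
import numpy as np

rng = np.random.default_rng(1)
npix, noise_var, true_C = 500, 1.0, 4.0
d = rng.normal(0.0, np.sqrt(true_C), npix) + rng.normal(0.0, np.sqrt(noise_var), npix)

C, samples = 1.0, []
for _ in range(5000):
    # s | C, d: Wiener-filter mean plus a fluctuation draw
    post_var = 1.0 / (1.0 / C + 1.0 / noise_var)
    s = post_var * d / noise_var + np.sqrt(post_var) * rng.standard_normal(npix)
    # C | s: inverse-gamma under a Jeffreys-like prior, drawn as sum(s^2)/chi2_n
    C = np.sum(s**2) / rng.chisquare(npix)
    samples.append(C)

print("posterior mean of C:", np.mean(samples[1000:]), "(true value 4.0)")
```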
Estimation of Unsteady Aerodynamic Models from Dynamic Wind Tunnel Data
NASA Technical Reports Server (NTRS)
Murphy, Patrick; Klein, Vladislav
2011-01-01
Demanding aerodynamic modelling requirements for military and civilian aircraft have motivated researchers to improve computational and experimental techniques and to pursue closer collaboration in these areas. Model identification and validation techniques are key components of this research. This paper presents mathematical model structures and identification techniques that have been used successfully to model more general aerodynamic behaviours in single-degree-of-freedom dynamic testing. Model parameters, characterizing aerodynamic properties, are estimated using linear and nonlinear regression methods in both the time and frequency domains. Steps in identification, including model structure determination, parameter estimation, and model validation, are addressed in this paper with examples using data from one-degree-of-freedom dynamic wind tunnel and water tunnel experiments. These techniques offer a methodology for expanding the utility of computational methods in application to flight dynamics, stability, and control problems. Since flight test is not always an option for early model validation, time history comparisons are commonly made between computational and experimental results, and model adequacy is inferred from corroborating results. An extension is offered to this conventional approach in which more general model parameter estimates and their standard errors are compared.
Calibration of a rotating accelerometer gravity gradiometer using centrifugal gradients
NASA Astrophysics Data System (ADS)
Yu, Mingbiao; Cai, Tijing
2018-05-01
The purpose of this study is to calibrate scale factors and equivalent zero biases of a rotating accelerometer gravity gradiometer (RAGG). We calibrate scale factors by determining the relationship between the centrifugal gradient excitation and RAGG response. Compared with calibration by changing the gravitational gradient excitation, this method does not need test masses and is easier to implement. The equivalent zero biases are superpositions of self-gradients and the intrinsic zero biases of the RAGG. A self-gradient is the gravitational gradient produced by surrounding masses, and it correlates well with the RAGG attitude angle. We propose a self-gradient model that includes self-gradients and the intrinsic zero biases of the RAGG. The self-gradient model is a function of the RAGG attitude, and it includes parameters related to surrounding masses. The calibration of equivalent zero biases determines the parameters of the self-gradient model. We provide detailed procedures and mathematical formulations for calibrating scale factors and parameters in the self-gradient model. A RAGG physical simulation system substitutes for the actual RAGG in the calibration and validation experiments. Four point masses simulate four types of surrounding masses producing self-gradients. Validation experiments show that the self-gradients predicted by the self-gradient model are consistent with those from the outputs of the RAGG physical simulation system, suggesting that the presented calibration method is valid.
Utility of pedometers for assessing physical activity: construct validity.
Tudor-Locke, Catrine; Williams, Joel E; Reis, Jared P; Pluto, Delores
2004-01-01
Valid assessment of physical activity is necessary to fully understand this important health-related behaviour for research, surveillance, intervention and evaluation purposes. This article is the second in a companion set exploring the validity of pedometer-assessed physical activity. The previous article published in Sports Medicine dealt with convergent validity (i.e. the extent to which an instrument's output is associated with that of other instruments intended to measure the same exposure of interest). The present focus is on construct validity, the extent to which the measurement corresponds with other measures of theoretically related parameters. Construct validity is typically evaluated by correlational analysis, that is, the magnitude of concordance between two measures (e.g. pedometer-determined steps/day and a theoretically related parameter such as age, anthropometric measures and fitness). A systematic literature review produced 29 articles published in or after 1980 that are directly relevant to the construct validity of pedometers in relation to age, anthropometric measures and fitness. Reported correlations were combined and a median r-value was computed. Overall, there was a weak inverse relationship (median r = -0.21) between age and pedometer-determined physical activity. A weak inverse relationship was also apparent with both body mass index and percentage overweight (median r = -0.27 and r = -0.22, respectively). Positive relationships with indicators of fitness ranged from weak to moderate depending on the fitness measure utilised: 6-minute walk test (median r = 0.69), timed treadmill test (median r = 0.41) and estimated maximum oxygen uptake (median r = 0.22). Studies are warranted to assess the relationship of pedometer-determined physical activity with other important health-related outcomes including blood pressure and physiological parameters such as blood glucose and lipid profiles. The aggregated evidence of convergent validity (presented in the previous companion article) and construct validity herein provides support for considering simple and inexpensive pedometers in both research and practice.
NASA Astrophysics Data System (ADS)
Raj, Rahul; Hamm, Nicholas Alexander Samuel; van der Tol, Christiaan; Stein, Alfred
2016-03-01
Gross primary production (GPP) can be separated from flux tower measurements of net ecosystem exchange (NEE) of CO2. This is used increasingly to validate process-based simulators and remote-sensing-derived estimates of simulated GPP at various time steps. Proper validation includes the uncertainty associated with this separation. In this study, uncertainty assessment was done in a Bayesian framework. It was applied to data from the Speulderbos forest site, The Netherlands. We estimated the uncertainty in GPP at half-hourly time steps, using a non-rectangular hyperbola (NRH) model for its separation from the flux tower measurements. The NRH model provides a robust empirical relationship between radiation and GPP. It includes the degree of curvature of the light response curve, radiation and temperature. Parameters of the NRH model were fitted to the measured NEE data for every 10-day period during the growing season (April to October) in 2009. We defined the prior distribution of each NRH parameter and used Markov chain Monte Carlo (MCMC) simulation to estimate the uncertainty in the separated GPP from the posterior distribution at half-hourly time steps. This time series also allowed us to estimate the uncertainty at daily time steps. We compared the informative with the non-informative prior distributions of the NRH parameters and found that both choices produced similar posterior distributions of GPP. This will provide relevant and important information for the validation of process-based simulators in the future. Furthermore, the obtained posterior distributions of NEE and the NRH parameters are of interest for a range of applications.
Testing alternative ground water models using cross-validation and other methods
Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.
2007-01-01
Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and the observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
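For readers unfamiliar with the efficient discrimination statistics named above, the following sketch computes the corrected Akaike information criterion (AICc) and the Bayesian information criterion (BIC) from the residual sum of squares of two hypothetical alternative models; the toy numbers are invented:

```python
# Sketch: parsimony-adjusted model discrimination via AICc and BIC
# (Gaussian-likelihood forms, constants dropped). Toy values only.
import numpy as np

def aicc_bic(rss, n_obs, k_params):
    # Information criteria from the residual sum of squares of a fitted model.
    aic = n_obs * np.log(rss / n_obs) + 2 * k_params
    aicc = aic + 2 * k_params * (k_params + 1) / (n_obs - k_params - 1)
    bic = n_obs * np.log(rss / n_obs) + k_params * np.log(n_obs)
    return aicc, bic

n = 120  # hypothetical number of head observations
candidates = [("simple (one conductivity zone)", 34.0, 3),
              ("complex (several zones)", 29.5, 7)]
for name, rss, k in candidates:
    aicc, bic = aicc_bic(rss, n, k)
    print(f"{name}: AICc={aicc:.1f}  BIC={bic:.1f}")  # smaller is better
```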
Performance of Transit Model Fitting in Processing Four Years of Kepler Science Data
NASA Astrophysics Data System (ADS)
Li, Jie; Burke, Christopher J.; Jenkins, Jon Michael; Quintana, Elisa V.; Rowe, Jason; Seader, Shawn; Tenenbaum, Peter; Twicken, Joseph D.
2014-06-01
We present the transit model fitting performance of the Kepler Science Operations Center (SOC) Pipeline in processing four years of science data, collected by the Kepler spacecraft from May 13, 2009 to May 12, 2013. Threshold Crossing Events (TCEs), which represent transiting planet detections, are generated by the Transiting Planet Search (TPS) component of the pipeline and subsequently processed in the Data Validation (DV) component. The transit model is used in DV to fit TCEs and derive parameters that are used in various diagnostic tests to validate planetary candidates. The standard transit model includes five fit parameters: transit epoch time (i.e. central time of first transit), orbital period, impact parameter, ratio of planet radius to star radius and ratio of semi-major axis to star radius. In the latest Kepler SOC pipeline codebase, the light curve of the target for which a TCE is generated is initially fitted by a trapezoidal model with four parameters: transit epoch time, depth, duration and ingress time. The trapezoidal model fit, implemented with repeated Levenberg-Marquardt minimization, provides a quick and high-fidelity assessment of the transit signal. The fit parameters of the trapezoidal model with the minimum chi-square metric are converted to set initial values of the fit parameters of the standard transit model. Additional parameters, such as the equilibrium temperature and effective stellar flux of the planet candidate, are derived from the fit parameters of the standard transit model to characterize pipeline candidates for the search for Earth-size planets in the Habitable Zone. The uncertainties of all derived parameters are updated in the latest codebase to account for the propagated errors of the fit parameters as well as the uncertainties in stellar parameters. The results of the transit model fitting of the TCEs identified by the Kepler SOC Pipeline, including fitted and derived parameters, fit goodness metrics and diagnostic figures, are included in the DV report and one-page report summary, which are accessible to the science community at the NASA Exoplanet Archive. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
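A minimal sketch of the trapezoidal fitting step described above, using SciPy's Levenberg-Marquardt least squares on a synthetic folded light curve; the parameterization and starting values are assumptions, not the SOC pipeline's code:

```python
# Sketch: trapezoidal transit fit (epoch, depth, duration, ingress) via
# Levenberg-Marquardt least squares on a synthetic folded light curve.
import numpy as np
from scipy.optimize import least_squares

def trapezoid(t, epoch, depth, duration, ingress):
    # Unit-baseline flux with a symmetric trapezoidal dip centered on epoch.
    ingress = max(ingress, 1e-6)                  # guard against zero division
    x = np.abs(t - epoch)
    total, flat = duration / 2.0, duration / 2.0 - ingress
    flux = np.ones_like(t)
    edge = (x > flat) & (x < total)
    flux[x <= flat] = 1.0 - depth
    flux[edge] = 1.0 - depth * (total - x[edge]) / ingress
    return flux

rng = np.random.default_rng(2)
t = np.linspace(-0.3, 0.3, 2000)                  # days from nominal epoch
truth = (0.01, 0.004, 0.20, 0.03)                 # epoch, depth, duration, ingress
y = trapezoid(t, *truth) + rng.normal(0.0, 5e-4, t.size)

fit = least_squares(lambda p: trapezoid(t, *p) - y,
                    x0=[0.0, 0.003, 0.15, 0.02], method="lm")
print("fitted epoch, depth, duration, ingress:", np.round(fit.x, 4))
```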
Analytical difficulties facing today's regulatory laboratories: issues in method validation.
MacNeil, James D
2012-08-01
The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as development and maintenance of expertise, maintenance and updating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation of the method, or on the design of a validation scheme for a complex multi-residue method, require a well-considered strategy based on current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving change in methods are discussed and a decision process proposed for selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.
Support vector machines and generalisation in HEP
NASA Astrophysics Data System (ADS)
Bevan, Adrian; Gamboa Goñi, Rodrigo; Hays, Jon; Stevenson, Tom
2017-10-01
We review the concept of Support Vector Machines (SVMs) and discuss examples of their use in a number of scenarios. Several SVM implementations have been used in HEP, and we exemplify this algorithm using the Toolkit for Multivariate Analysis (TMVA) implementation. We discuss examples relevant to HEP, including background suppression for H → τ⁺τ⁻ at the LHC with several different kernel functions. Performance benchmarking leads to the issue of generalisation of hyper-parameter selection. The avoidance of fine tuning (overtraining or overfitting) in MVA hyper-parameter optimisation, i.e. the ability to ensure generalised performance of an MVA that is independent of the training, validation and test samples, is of utmost importance. We discuss this issue and compare and contrast the performance of hold-out and k-fold cross-validation. We have extended the SVM functionality and introduced tools to facilitate cross-validation in TMVA, and we present results based on these improvements.
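A compact illustration of the hyper-parameter selection issue discussed above, using scikit-learn's RBF-kernel SVM with 5-fold cross-validation on synthetic data in place of TMVA; the feature set and grid values are arbitrary:

```python
# Sketch: RBF-kernel SVM with k-fold cross-validated hyper-parameter search,
# plus a held-out test set to check generalised performance. Synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           random_state=0)         # stand-in for HEP features
X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size=0.25,
                                                random_state=0)
# k-fold CV over C and gamma guards against tuning to one train/validation split
search = GridSearchCV(SVC(kernel="rbf"),
                      {"C": [0.1, 1, 10], "gamma": ["scale", 0.1, 0.01]},
                      cv=5)
search.fit(X_dev, y_dev)
print("best hyper-parameters:", search.best_params_)
print("held-out test accuracy:", search.score(X_test, y_test))
```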
NASA Sea Ice and Snow Validation Program for the DMSP SSM/I: NASA DC-8 flight report
NASA Technical Reports Server (NTRS)
Cavalieri, D. J.
1988-01-01
In June 1987 a new microwave sensor called the Special Sensor Microwave Imager (SSM/I) was launched as part of the Defense Meteorological Satellite Program (DMSP). In recognition of the importance of this sensor to the polar research community, NASA developed a program to acquire the data, to convert the data into sea ice parameters, and finally to validate and archive both the SSM/I radiances and the derived sea ice parameters. Central to NASA's sea ice validation program was a series of SSM/I aircraft underflights with the NASA DC-8 airborne Laboratory. The mission (the Arctic '88 Sea Ice Mission) was completed in March 1988. This report summarizes the mission and includes a summary of aircraft instrumentation, coordination with participating Navy aircraft, flight objectives, flight plans, data collected, SSM/I orbits for each day during the mission, and lists several piggyback experiments supported during this mission.
Rezaeian, Sanaz; Zhong, Peng; Hartzell, Stephen; Zareian, Farzin
2015-01-01
Simulated earthquake ground motions can be used in many recent engineering applications that require time series as input excitations. However, applicability and validation of simulations are subjects of debate in the seismological and engineering communities. We propose a validation methodology at the waveform level and directly based on characteristics that are expected to influence most structural and geotechnical response parameters. In particular, three time-dependent validation metrics are used to evaluate the evolving intensity, frequency, and bandwidth of a waveform. These validation metrics capture nonstationarities in intensity and frequency content of waveforms, making them ideal to address nonlinear response of structural systems. A two-component error vector is proposed to quantify the average and shape differences between these validation metrics for a simulated and recorded ground-motion pair. Because these metrics are directly related to the waveform characteristics, they provide easily interpretable feedback to seismologists for modifying their ground-motion simulation models. To further simplify the use and interpretation of these metrics for engineers, it is shown how six scalar key parameters, including duration, intensity, and predominant frequency, can be extracted from the validation metrics. The proposed validation methodology is a step forward in paving the road for utilization of simulated ground motions in engineering practice and is demonstrated using examples of recorded and simulated ground motions from the 1994 Northridge, California, earthquake.
Determination of polarimetric parameters of honey by near-infrared transflectance spectroscopy.
García-Alvarez, M; Ceresuela, S; Huidobro, J F; Hermida, M; Rodríguez-Otero, J L
2002-01-30
NIR transflectance spectroscopy was used to determine polarimetric parameters (direct polarization, polarization after inversion, specific rotation in dry matter, and polarization due to nonmonosaccharides) and sucrose in honey. In total, 156 honey samples were collected during 1992 (45 samples), 1995 (56 samples), and 1996 (55 samples). Samples were analyzed by NIR spectroscopy and polarimetric methods. Calibration (118 samples) and validation (38 samples) sets were assembled; honeys from the three years were included in both sets. Calibrations were performed by modified partial least-squares regression, with scatter correction by standard normal variate (SNV) and detrend methods. For direct polarization, polarization after inversion, specific rotation in dry matter, and polarization due to nonmonosaccharides, good statistics (bias, SEV, and R(2)) were obtained for the validation set, and no statistically significant (p = 0.05) differences were found between instrumental and polarimetric methods for these parameters. Statistical data for sucrose were not as good as those of the other parameters. Therefore, NIR spectroscopy is not an effective method for quantitative analysis of sucrose in these honey samples. However, NIR spectroscopy may be an acceptable method for semiquantitative evaluation of sucrose for honeys, such as those in our study, containing up to 3% of sucrose. Further work is necessary to validate the method at higher sucrose levels.
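A sketch in the spirit of this chemometric pipeline: standard normal variate scatter correction followed by partial least-squares regression, implemented with scikit-learn on synthetic spectra (the study's modified PLS and detrend steps are not reproduced, and all shapes and values are invented):

```python
# Sketch: SNV scatter correction + PLS regression on synthetic NIR spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def snv(spectra):
    # Row-wise standard normal variate: center and scale each spectrum.
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

rng = np.random.default_rng(3)
X = rng.normal(0.0, 1.0, (118, 700))       # 118 calibration spectra, 700 wavelengths
y = 0.8 * X[:, 100] + rng.normal(0.0, 0.1, 118)   # fake polarimetric parameter

model = PLSRegression(n_components=5).fit(snv(X), y)
X_val = rng.normal(0.0, 1.0, (38, 700))    # 38 validation spectra
pred = model.predict(snv(X_val))
print("prediction for first validation sample:", pred[0, 0])
```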
Shiao, S Pamela K; Grayson, James; Lie, Amanda; Yu, Chong Ho
2018-06-20
To personalize nutrition, the purpose of this study was to examine five key genes in the folate metabolism pathway, together with dietary and related interactive parameters, as predictors of colorectal cancer (CRC) by measuring the healthy eating index (HEI) in multiethnic families. The five genes were methylenetetrahydrofolate reductase (MTHFR) 677 and 1298, methionine synthase (MTR) 2756, methionine synthase reductase (MTRR) 66, and dihydrofolate reductase (DHFR) 19bp, and they were used to compute a total gene mutation score. We included 53 families: 53 CRC patients and 53 paired family-friend members of diverse population groups in Southern California. We measured multidimensional data using the ensemble bootstrap forest method to identify variables of importance within the genetic, demographic, and dietary domains to achieve dimension reduction. We then constructed predictive generalized regression (GR) models with a supervised machine learning validation procedure, with the target variable (cancer status) specified, to validate the results and allow enhanced prediction and reproducibility. The results showed that the CRC group had increased total gene mutation scores compared with the family members (p < 0.05). Using the Akaike information criterion and leave-one-out cross-validation GR methods, the HEI was interactive with thiamine (vitamin B1), which is a new finding for the literature. The natural food sources of thiamine include whole grains, legumes, and some meats and fish, which HEI scoring counts as healthy portions (versus limiting portions of salt, saturated fat and empty calories). Additional predictors included age, gender, and the interaction of MTHFR 677 with overweight status (measured by body mass index) in predicting CRC, with the cancer group having more men and overweight cases. The HEI score was a significant predictor when split at the median score of 77 into greater or lesser scores, as confirmed through the machine-learning recursive tree method and predictive modeling, although an HEI score greater than 80 is the US national standard for a good diet. The HEI and healthy eating are modifiable factors for healthy living in relation to dietary parameters and cancer prevention, and they can be used for personalized nutrition in the precision-based healthcare era.
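The variable-importance screening step can be sketched with scikit-learn's random forest as a stand-in for the ensemble bootstrap forest used in the study; the feature names mirror the text, but the data and labels below are simulated:

```python
# Sketch: variable-importance ranking before regression modeling, with a
# random forest standing in for the ensemble bootstrap forest. Simulated data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
n = 106                                   # 53 patients + 53 paired controls
df = pd.DataFrame({
    "gene_mutation_score": rng.integers(0, 6, n),
    "HEI": rng.normal(77, 8, n),
    "thiamine_mg": rng.normal(1.2, 0.3, n),
    "age": rng.normal(60, 10, n),
    "BMI": rng.normal(27, 4, n),
})
y = rng.integers(0, 2, n)                 # 1 = CRC case (simulated labels)

forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(df, y)
for name, imp in sorted(zip(df.columns, forest.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:20s} {imp:.3f}")        # rank variables for dimension reduction
```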
SWAT: Model use, calibration, and validation
USDA-ARS?s Scientific Manuscript database
SWAT (Soil and Water Assessment Tool) is a comprehensive, semi-distributed river basin model that requires a large number of input parameters which complicates model parameterization and calibration. Several calibration techniques have been developed for SWAT including manual calibration procedures...
Glass Transition Temperature- and Specific Volume- Composition Models for Tellurite Glasses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riley, Brian J.; Vienna, John D.
This report provides composition-property models for tellurite glasses, namely for specific gravity and glass transition temperature. Included are the partial specific coefficients for each model, the component validity ranges, and the model fit parameters.
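A minimal sketch of what a partial-specific-coefficient model looks like in practice: a property is approximated as a composition-weighted sum of per-component coefficients fitted by least squares. The component set, mole fractions and Tg values below are invented, not the report's data:

```python
# Sketch: fit per-component property coefficients by least squares so that
# property ~ sum(fraction_i * coefficient_i). All numbers are invented.
import numpy as np

# rows: glasses; columns: mole fractions of (TeO2, ZnO, Na2O)
X = np.array([[0.75, 0.15, 0.10],
              [0.70, 0.20, 0.10],
              [0.65, 0.20, 0.15],
              [0.80, 0.10, 0.10],
              [0.60, 0.25, 0.15]])
tg = np.array([598.0, 605.0, 590.0, 610.0, 585.0])   # measured Tg in K (fake)

coeff, *_ = np.linalg.lstsq(X, tg, rcond=None)       # partial specific coefficients
print("per-component Tg coefficients:", np.round(coeff, 1))
print("predicted Tg of the first glass:", X[0] @ coeff)
```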
Yip, T C-F; Ma, A J; Wong, V W-S; Tse, Y-K; Chan, H L-Y; Yuen, P-C; Wong, G L-H
2017-08-01
Non-alcoholic fatty liver disease (NAFLD) affects 20%-40% of the general population in developed countries and is an increasingly important cause of hepatocellular carcinoma. Electronic medical records facilitate large-scale epidemiological studies, but existing NAFLD scores often require clinical and anthropometric parameters that may not be captured in those databases. We aimed to develop and validate a laboratory parameter-based machine learning model to detect NAFLD in the general population. We randomly divided 922 subjects from a population screening study into training and validation groups; NAFLD was diagnosed by proton-magnetic resonance spectroscopy. On the basis of machine learning from 23 routine clinical and laboratory parameters after elastic net regularisation, we evaluated logistic regression, ridge regression, AdaBoost and decision tree models. The areas under the receiver-operating characteristic curve (AUROC) of the models in the validation group were compared. Six predictors, including alanine aminotransferase, high-density lipoprotein cholesterol, triglyceride, haemoglobin A1c, white blood cell count and the presence of hypertension, were selected. The NAFLD ridge score achieved AUROCs of 0.87 (95% CI 0.83-0.90) and 0.88 (0.84-0.91) in the training and validation groups respectively. Using dual cut-offs of 0.24 and 0.44, the NAFLD ridge score achieved 92% (86%-96%) sensitivity and 90% (86%-93%) specificity, with corresponding negative and positive predictive values of 96% (91%-98%) and 69% (59%-78%), and 87% overall accuracy among the 70% of classifiable subjects in the validation group; 30% of subjects remained indeterminate. The NAFLD ridge score is a simple and robust reference, comparable to existing NAFLD scores, to exclude NAFLD patients in epidemiological studies. © 2017 John Wiley & Sons Ltd.
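A sketch of the dual cut-off idea on an L2-penalised (ridge-style) logistic score, with scikit-learn on simulated data; the six feature names and the 0.24/0.44 cut-offs follow the abstract, everything else is assumed:

```python
# Sketch: L2-penalised logistic score with dual rule-out / rule-in cut-offs.
# Simulated features and labels; cut-off values taken from the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 922
X = rng.normal(0.0, 1.0, (n, 6))     # ALT, HDL, TG, HbA1c, WBC, hypertension
y = (X @ rng.normal(0.8, 0.3, 6) + rng.normal(0.0, 1.0, n)) > 0

model = LogisticRegression(penalty="l2", C=1.0).fit(X[:640], y[:640])
score = model.predict_proba(X[640:])[:, 1]

low, high = 0.24, 0.44               # below low: excluded; above high: likely
label = np.where(score < low, "NAFLD excluded",
                 np.where(score > high, "NAFLD likely", "indeterminate"))
print("indeterminate fraction:", round(float(np.mean(label == "indeterminate")), 2))
```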
Chen, Ling; Luo, Dan; Yu, Xiajuan; Jin, Mei; Cai, Wenzhi
2018-05-12
The aim of this study was to develop and validate a predictive tool combining pelvic floor ultrasound parameters and clinical factors for stress urinary incontinence during pregnancy. A total of 535 women in the first or second trimester were included for an interview and transperineal ultrasound assessment at two hospitals. Imaging data sets were analyzed offline to assess bladder neck vertical position, urethra angles (α, β, and γ angles), hiatal area and bladder neck funneling. All significant continuous variables at univariable analysis were analyzed by receiver-operating characteristics. Three multivariable logistic models were built on clinical factors alone and combined with ultrasound parameters. The final predictive model with the best performance and fewest variables was selected to establish a nomogram. Internal and external validation of the nomogram were performed, with discrimination represented by the C-index and calibration measured by the Hosmer-Lemeshow test. A decision curve analysis was conducted to determine the clinical utility of the nomogram. After excluding 14 women with invalid data, 521 women were analyzed. The β angle, γ angle and hiatal area had limited predictive value for stress urinary incontinence during pregnancy, with areas under the curve of 0.558-0.648. The final predictive model included body mass index gain since pregnancy, constipation, previous delivery mode, β angle at rest, and bladder neck funneling. The nomogram based on the final model showed good discrimination with a C-index of 0.789 and satisfactory calibration (P=0.828), both of which were supported by external validation. Decision curve analysis showed that the nomogram was clinically useful. The nomogram incorporating both pelvic floor ultrasound parameters and clinical factors has been validated, shows good discrimination and calibration, and could be an important tool for stress urinary incontinence risk prediction at an early stage of pregnancy. This article is protected by copyright. All rights reserved.
Bias-dependent hybrid PKI empirical-neural model of microwave FETs
NASA Astrophysics Data System (ADS)
Marinković, Zlatica; Pronić-Rančić, Olivera; Marković, Vera
2011-10-01
Empirical models of microwave transistors based on an equivalent circuit are valid for only one bias point. Bias-dependent analysis requires repeated extraction of the model parameters for each bias point. In order to make the model bias-dependent, a new hybrid empirical-neural model of microwave field-effect transistors is proposed in this article. The model is a combination of an equivalent circuit model including noise, developed for one bias point, and two prior knowledge input artificial neural networks (PKI ANNs) aimed at introducing the bias dependency of the scattering (S) and noise parameters, respectively. The prior knowledge of the proposed ANNs involves the values of the S- and noise parameters obtained by the empirical model. The proposed hybrid model is valid over the whole range of bias conditions. Moreover, the proposed model provides better accuracy than the empirical model, which is illustrated by an appropriate modelling example of a pseudomorphic high-electron-mobility transistor device.
DOE Office of Scientific and Technical Information (OSTI.GOV)
English, Shawn A.; Briggs, Timothy M.; Nelson, Stacy M.
Simulations of low velocity impact with a flat cylindrical indenter upon a carbon fiber fabric reinforced polymer laminate are rigorously validated. Comparison of the impact energy absorption between the model and experiment is used as the validation metric. Additionally, non-destructive evaluation, including ultrasonic scans and three-dimensional computed tomography, provides qualitative validation of the models. The simulations include delamination, matrix cracks and fiber breaks. An orthotropic damage and failure constitutive model, capable of predicting progressive damage and failure, is developed in conjunction with the simulations and described. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution, which is then compared to experimental output using appropriate statistical methods. Lastly, the model form errors are exposed and corrected for use in an additional blind validation analysis. The result is a quantifiable confidence in material characterization and model physics when simulating low velocity impact in structures of interest.
Kapiriri, Lydia
2017-06-19
While there have been efforts to develop frameworks to guide healthcare priority setting, there has been limited focus on evaluation frameworks. Moreover, while the few existing frameworks identify quality indicators for successful priority setting, they do not provide users with strategies to verify these indicators. Kapiriri and Martin (Health Care Anal 18:129-147, 2010) developed a framework for evaluating priority setting in low and middle income countries. This framework provides both parameters for successful priority setting and proposed means of their verification. Before its use in real-life contexts, this paper presents results from a validation process of the framework. The framework validation involved 53 policy makers and priority setting researchers at the global, national and sub-national levels (in Uganda). They were requested to indicate the relative importance of the proposed parameters as well as the feasibility of obtaining the related information. We also pilot tested the proposed means of verification. Almost all the respondents evaluated all the parameters, including the contextual factors, as 'very important'. However, some respondents at the global level rated 'presence of incentives to comply', 'reduced disagreements', 'increased public understanding', 'improved institutional accountability' and 'meeting the ministry of health objectives' as less important, which could be a reflection of their level of decision making. All the proposed means of verification were assessed as feasible, with the exception of meeting observations, which would require an insider. These findings were consistent with those obtained from the pilot testing. The findings are relevant to policy makers and researchers involved in priority setting in low and middle income countries. To the best of our knowledge, this is one of the few initiatives that has involved potential users of a framework (at the global level and in a low income country) in its validation. The favorable validation of all the parameters at the national and sub-national levels implies that the framework has potential usefulness at those levels as is. The parameters that were disputed at the global level necessitate further discussion when using the framework at that level. The next step is to use the validated framework to evaluate actual priority setting at the different levels.
A Geographically Variable Water Quality Index Used in Oregon.
ERIC Educational Resources Information Center
Dunnette, D. A.
1979-01-01
Discusses the procedure developed in Oregon to formulate a valid water quality index which accounts for the specific conditions in the water body of interest. Parameters selected include oxygen depletion, BOD, eutrophication, dissolved substances, health hazards, and physical characteristics. (CS)
Validation and uncertainty analysis of a pre-treatment 2D dose prediction model
NASA Astrophysics Data System (ADS)
Baeza, Jose A.; Wolfs, Cecile J. A.; Nijsten, Sebastiaan M. J. J. G.; Verhaegen, Frank
2018-02-01
Independent verification of complex treatment delivery with megavolt photon beam radiotherapy (RT) has been effectively used to detect and prevent errors. This work presents the validation and uncertainty analysis of a model that predicts 2D portal dose images (PDIs) without a patient or phantom in the beam. The prediction model is based on an exponential point dose model with separable primary and secondary photon fluence components. The model includes a scatter kernel, off-axis ratio map, transmission values and penumbra kernels for beam-delimiting components. These parameters were derived through a model fitting procedure supplied with point dose and dose profile measurements of radiation fields. The model was validated against a treatment planning system (TPS; Eclipse) and radiochromic film measurements for complex clinical scenarios, including volumetric modulated arc therapy (VMAT). Confidence limits on fitted model parameters were calculated based on simulated measurements. A sensitivity analysis was performed to evaluate the effect of the parameter uncertainties on the model output. For the maximum uncertainty, the maximum deviating measurement sets were propagated through the fitting procedure and the model. The overall uncertainty was assessed using all simulated measurements. The validation of the prediction model against the TPS and the film showed a good agreement, with on average 90.8% and 90.5% of pixels passing a (2%,2 mm) global gamma analysis respectively, with a low dose threshold of 10%. The maximum and overall uncertainty of the model is dependent on the type of clinical plan used as input. The results can be used to study the robustness of the model. A model for predicting accurate 2D pre-treatment PDIs in complex RT scenarios can be used clinically and its uncertainties can be taken into account.
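The (2%, 2 mm) global gamma analysis used as the agreement metric above can be written down directly, if inefficiently; the following brute-force sketch on synthetic 2D doses is for illustration only (production tools use much faster search strategies):

```python
# Sketch: brute-force global gamma analysis (2% dose difference, 2 mm
# distance-to-agreement, 10% low-dose threshold). Synthetic doses only.
import numpy as np

def gamma_pass_rate(ref, ev, spacing_mm, dd=0.02, dta_mm=2.0, thresh=0.10):
    norm = dd * ref.max()                      # global dose-difference criterion
    ny, nx = ref.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    passed, tested = 0, 0
    for i in range(ny):
        for j in range(nx):
            if ref[i, j] < thresh * ref.max():
                continue                       # skip the low-dose region
            tested += 1
            dist2 = ((yy - i) ** 2 + (xx - j) ** 2) * spacing_mm ** 2
            gamma2 = (ev - ref[i, j]) ** 2 / norm ** 2 + dist2 / dta_mm ** 2
            if gamma2.min() <= 1.0:            # gamma <= 1 means agreement
                passed += 1
    return passed / tested

rng = np.random.default_rng(6)
ref = np.outer(np.hanning(40), np.hanning(40)) * 100.0   # fake 2D dose [cGy]
ev = ref * (1 + rng.normal(0, 0.01, ref.shape))          # 1% noisy "prediction"
print("gamma pass rate:", round(gamma_pass_rate(ref, ev, spacing_mm=1.0), 3))
```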
van de Streek, Jacco; Neumann, Marcus A
2010-10-01
This paper describes the validation of a dispersion-corrected density functional theory (d-DFT) method for the purpose of assessing the correctness of experimental organic crystal structures and enhancing the information content of purely experimental data. 241 experimental organic crystal structures from the August 2008 issue of Acta Cryst. Section E were energy-minimized in full, including unit-cell parameters. The differences between the experimental and the minimized crystal structures were subjected to statistical analysis. The r.m.s. Cartesian displacement excluding H atoms upon energy minimization with flexible unit-cell parameters is selected as a pertinent indicator of the correctness of a crystal structure. All 241 experimental crystal structures are reproduced very well: the average r.m.s. Cartesian displacement for the 241 crystal structures, including 16 disordered structures, is only 0.095 Å (0.084 Å for the 225 ordered structures). R.m.s. Cartesian displacements above 0.25 Å either indicate incorrect experimental crystal structures or reveal interesting structural features such as exceptionally large temperature effects, incorrectly modelled disorder or symmetry-breaking H atoms. After validation, the method is applied to nine examples that are known to be ambiguous or subtly incorrect.
Retrieval with Infrared Atmospheric Sounding Interferometer and Validation during JAIVEx
NASA Technical Reports Server (NTRS)
Zhou, Daniel K.; Liu, Xu; Larar, Allen M.; Smith, William L.; Taylor, Jonathan P.; Schluessel, Peter; Strow, L. Larrabee; Mango, Stephen A.
2008-01-01
A state-of-the-art IR-only retrieval algorithm has been developed, consisting of an all-season, global EOF physical regression followed by a 1-D Var physical iterative retrieval, for IASI, AIRS, and NAST-I. The benefits of this retrieval are atmospheric structure with single-FOV horizontal resolution (approx. 15 km for IASI and AIRS), accurate profiles above cloud (at least) or down to the surface, surface parameters, and/or cloud microphysical parameters. An initial case study and validation indicate that surface, cloud, and atmospheric structure (including the TBL) are well captured by IASI and AIRS measurements. Coincident dropsondes during the IASI and AIRS overpasses are used to validate atmospheric conditions, and accurate retrievals are obtained with the expected vertical resolution. JAIVEx has provided the data needed to validate the retrieval algorithm and its products, which allows us to assess the instrument's ability and performance. Retrievals with global coverage are under investigation for a detailed retrieval assessment. It is greatly desired that these products be used for testing their impact on atmospheric data assimilation and/or numerical weather prediction.
Breakdown parameter for kinetic modeling of multiscale gas flows.
Meng, Jianping; Dongari, Nishanth; Reese, Jason M; Zhang, Yonghao
2014-06-01
Multiscale methods built purely on the kinetic theory of gases provide information about the molecular velocity distribution function. It is therefore both important and feasible to establish new breakdown parameters for assessing the appropriateness of a fluid description at the continuum level by utilizing kinetic information rather than macroscopic flow quantities alone. We propose a new kinetic criterion to indirectly assess the errors introduced by a continuum-level description of the gas flow. The analysis, which includes numerical demonstrations, focuses on the validity of the Navier-Stokes-Fourier equations and corresponding kinetic models and reveals that the new criterion can consistently indicate the validity of continuum-level modeling in both low-speed and high-speed flows at different Knudsen numbers.
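The abstract does not give the form of the proposed kinetic criterion; for contrast, the following sketch evaluates the classical macroscopic breakdown parameter it aims to improve upon, the gradient-length-local Knudsen number Kn_GLL = (λ/Q)|dQ/dx|, on a toy steep-gradient profile (the 0.05 threshold is a commonly quoted rule of thumb, and all values here are invented):

```python
# Sketch: gradient-length-local Knudsen number as a continuum-breakdown
# indicator. This is the classical macroscopic parameter, not the paper's
# new kinetic criterion. Toy profile and mean free path.
import numpy as np

def kn_gll(x, q, mean_free_path):
    # Kn_GLL = (lambda / q) * |dq/dx| for a flow quantity q(x).
    dq_dx = np.gradient(q, x)
    return mean_free_path * np.abs(dq_dx) / np.abs(q)

x = np.linspace(0.0, 1.0, 200)                     # m, through a shock-like region
density = 1.0 + 0.8 * np.tanh((x - 0.5) / 0.01)    # toy steep density profile
kn = kn_gll(x, density, mean_free_path=1e-3)
print("max Kn_GLL:", kn.max(), "| continuum suspect:", bool((kn > 0.05).any()))
```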
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1995-01-01
Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for open loop parameter identification purposes, specifically for optimal input design validation at 5 degrees angle of attack, identification of individual strake effectiveness at 40 and 50 degrees angle of attack, and study of lateral dynamics and lateral control effectiveness at 40 and 50 degrees angle of attack. Each maneuver is to be realized by applying square wave inputs to specific control effectors using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points that define each input are included, along with plots of the input time histories.
Korjus, Kristjan; Hebart, Martin N.; Vicente, Raul
2016-01-01
Supervised machine learning methods typically require splitting data into multiple chunks for training, validating, and finally testing classifiers. For finding the best parameters of a classifier, training and validation are usually carried out with cross-validation. This is followed by application of the classifier with optimized parameters to a separate test set for estimating the classifier’s generalization performance. With limited data, this separation of test data creates a difficult trade-off between having more statistical power in estimating generalization performance versus choosing better parameters and fitting a better model. We propose a novel approach that we term “Cross-validation and cross-testing” improving this trade-off by re-using test data without biasing classifier performance. The novel approach is validated using simulated data and electrophysiological recordings in humans and rodents. The results demonstrate that the approach has a higher probability of discovering significant results than the standard approach of cross-validation and testing, while maintaining the nominal alpha level. In contrast to nested cross-validation, which is maximally efficient in re-using data, the proposed approach additionally maintains the interpretability of individual parameters. Taken together, we suggest an addition to currently used machine learning approaches which may be particularly useful in cases where model weights do not require interpretation, but parameters do. PMID:27564393
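For orientation, the two baselines this paper improves upon (cross-validation plus a single held-out test set, and nested cross-validation) can be sketched with scikit-learn as below; the paper's own cross-validation-and-cross-testing procedure is not reproduced here:

```python
# Sketch: (a) CV on a development set + one held-out test set, and
# (b) nested cross-validation. Synthetic data; grid values are arbitrary.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score, train_test_split

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
grid = GridSearchCV(LogisticRegression(max_iter=1000),
                    {"C": [0.01, 0.1, 1, 10]}, cv=5)

# (a) classic split: tune on the development chunk, test once on held-out data
X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size=0.3,
                                                random_state=0)
grid.fit(X_dev, y_dev)
print("held-out test accuracy:", grid.score(X_test, y_test))

# (b) nested CV: every sample serves in testing, but the tuned C varies by fold
nested = cross_val_score(grid, X, y, cv=5)
print("nested-CV accuracy: %.3f +/- %.3f" % (nested.mean(), nested.std()))
```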
Scott, Sarah Nicole; Templeton, Jeremy Alan; Hough, Patricia Diane; ...
2014-01-01
This study details a methodology for quantification of errors and uncertainties of a finite element heat transfer model applied to a Ruggedized Instrumentation Package (RIP). The proposed verification and validation (V&V) process includes solution verification to examine errors associated with the code's solution techniques, and model validation to assess the model's predictive capability for quantities of interest. The model was subjected to mesh resolution and numerical parameter sensitivity studies to determine reasonable parameter values and to understand how they change the overall model response and performance criteria. To facilitate quantification of the uncertainty associated with the mesh, automatic meshing and mesh refining/coarsening algorithms were created and implemented on the complex geometry of the RIP. Automated software to vary model inputs was also developed to determine the solution's sensitivity to numerical and physical parameters. The model was compared with an experiment to demonstrate its accuracy and determine the importance of both modelled and unmodelled physics in quantifying the results' uncertainty. An emphasis is placed on automating the V&V process to enable uncertainty quantification within tight development schedules.
NASA Astrophysics Data System (ADS)
Kratzke, Jonas; Rengier, Fabian; Weis, Christian; Beller, Carsten J.; Heuveline, Vincent
2016-04-01
Initiation and development of cardiovascular diseases can be highly correlated to specific biomechanical parameters. To examine and assess biomechanical parameters, numerical simulation of cardiovascular dynamics has the potential to complement and enhance medical measurement and imaging techniques. As such, computational fluid dynamics (CFD) has been shown to be suitable for evaluating blood velocity and pressure in scenarios where vessel wall deformation plays a minor role. However, there is a need for further validation studies and for the inclusion of vessel wall elasticity for morphologies subject to large displacement. In this work, we consider a fluid-structure interaction (FSI) model including the full elasticity equation to take the deformability of aortic wall soft tissue into account. We present a numerical framework in which either a CFD study can be performed for less deformable aortic segments or an FSI simulation for regions of large displacement, such as the aortic root and arch. Both of the methods are validated by means of an aortic phantom experiment. The computational results are in good agreement with 2D phase-contrast magnetic resonance imaging (PC-MRI) velocity measurements as well as catheter-based pressure measurements. The FSI simulation shows a characteristic vessel compliance effect on the flow field induced by the elasticity of the vessel wall, which the CFD model is not capable of capturing. The in vitro validated FSI simulation framework can enable the computation of complementary biomechanical parameters such as the stress distribution within the vessel wall.
Kros, Johan M; Huizer, Karin; Hernández-Laín, Aurelio; Marucci, Gianluca; Michotte, Alex; Pollo, Bianca; Rushing, Elisabeth J; Ribalta, Teresa; French, Pim; Jaminé, David; Bekka, Nawal; Lacombe, Denis; van den Bent, Martin J; Gorlia, Thierry
2015-06-10
With the rapid discovery of prognostic and predictive molecular parameters for glioma, the status of histopathology in the diagnostic process should be scrutinized. Our project aimed to construct a diagnostic algorithm for gliomas based on molecular and histologic parameters with independent prognostic values. The pathology slides of 636 patients with gliomas who had been included in EORTC 26951 and 26882 trials were reviewed using virtual microscopy by a panel of six neuropathologists who independently scored 18 histologic features and provided an overall diagnosis. The molecular data for IDH1, 1p/19q loss, EGFR amplification, loss of chromosome 10 and chromosome arm 10q, gain of chromosome 7, and hypermethylation of the promoter of MGMT were available for some of the cases. The slides were divided into discovery (n = 426) and validation (n = 210) sets. The diagnostic algorithm resulting from analysis of the discovery set was validated in the latter. In 66% of cases, consensus on the overall diagnosis was present. A diagnostic algorithm consisting of two molecular markers and one consensus histologic feature was created by conditional inference tree analysis. The order of prognostic significance was: 1p/19q loss, EGFR amplification, and astrocytic morphology, which resulted in the identification of four diagnostic nodes. Validation of the nodes in the validation set confirmed their prognostic value (P < .001). We succeeded in creating a timely diagnostic algorithm for anaplastic glioma based on multivariable analysis of consensus histopathology and molecular parameters. © 2015 by American Society of Clinical Oncology.
Boison, Joe O; Asea, Philip A; Matus, Johanna L
2012-08-01
A new and sensitive multi-residue method (MRM) with detection by LC-MS/MS was developed and validated for the screening, determination, and confirmation of residues of 7 nitroimidazoles and 3 of their metabolites in turkey muscle tissues at concentrations ≥ 0.05 ng/g. The compounds were extracted into a solvent with an alkali salt. Sample clean-up and concentration was then done by solid-phase extraction (SPE) and the compounds were quantified by liquid chromatography-tandem mass spectrometry (LC-MS/MS). The characteristic parameters including repeatability, selectivity, ruggedness, stability, level of quantification, and level of confirmation for the new method were determined. Method validation was achieved by independent verification of the parameters measured during method characterization. The seven nitroimidazoles included are metronidazole (MTZ), ronidazole (RNZ), dimetridazole (DMZ), tinidazole (TNZ), ornidazole (ONZ), ipronidazole (IPR), and carnidazole (CNZ). It was discovered during the single laboratory validation of the method that five of the seven nitroimidazoles (i.e. metronidazole, dimetridazole, tinidazole, ornidazole and ipronidazole) and the 3 metabolites (1-(2-hydroxyethyl)-2-hydroxymethyl-5-nitroimidazole (MTZ-OH), 2-hydroxymethyl-1-methyl-5-nitroimidazole (HMMNI, the common metabolite of ronidazole and dimetridazole), and 1-methyl-2-(2'-hydroxyisopropyl)-5-nitroimidazole (IPR-OH) included in this study could be detected, confirmed, and quantified accurately whereas RNZ and CNZ could only be detected and confirmed but not accurately quantified. © Her Majesty the Queen in Right of Canada as Represented by the Minister of Agriculture and Agri-food Canada 2012.
Chua, Michael E; Tanseco, Patrick P; Mendoza, Jonathan S; Castillo, Josefino C; Morales, Marcelino L; Luna, Saturnino L
2015-04-01
To configure and validate a novel prostate disease nomogram providing prostate biopsy outcome probabilities, based on a prospective study correlating clinical indicators and diagnostic parameters among Filipino adult males with elevated serum total prostate specific antigen (PSA) levels. All men with an elevated serum total PSA who underwent initial prostate biopsy at our institution from January 2011 to August 2014 were included. Clinical indicators and diagnostic parameters, including PSA level and PSA derivatives, were collected as predictive factors for biopsy outcome. Multiple logistic regression analysis involving a backward elimination selection procedure was used to select independent predictors. A nomogram was developed to calculate the probability of the biopsy outcomes. External validation of the nomogram was performed using a separate data set from another center for determination of sensitivity and specificity. A receiver-operating characteristic (ROC) curve was used to assess the accuracy in predicting differential biopsy outcomes. A total of 552 patients were included. One hundred and ninety-one (34.6%) patients had benign prostatic hyperplasia, and 165 (29.9%) had chronic prostatitis. The remaining 196 (35.5%) patients had prostate adenocarcinoma. The significant independent variables used to predict biopsy outcome were age, family history of prostate cancer, prior antibiotic intake, PSA level, PSA density, PSA velocity, echogenic findings on ultrasound, and DRE status. The areas under the receiver-operating characteristic curve for prostate cancer using PSA alone and the nomogram were 0.688 and 0.804, respectively. The nomogram, configured from routinely available clinical parameters, provides high predictive accuracy with good performance characteristics in predicting prostate biopsy outcomes such as the presence of prostate cancer, high-Gleason prostate cancer, benign prostatic hyperplasia, and chronic prostatitis.
Predicting distant failure in early stage NSCLC treated with SBRT using clinical parameters.
Zhou, Zhiguo; Folkert, Michael; Cannon, Nathan; Iyengar, Puneeth; Westover, Kenneth; Zhang, Yuanyuan; Choy, Hak; Timmerman, Robert; Yan, Jingsheng; Xie, Xian-J; Jiang, Steve; Wang, Jing
2016-06-01
The aim of this study was to predict early distant failure in early stage non-small cell lung cancer (NSCLC) treated with stereotactic body radiation therapy (SBRT) from clinical parameters, using machine learning algorithms. The dataset used in this work includes 81 early stage NSCLC patients with at least 6 months of follow-up who underwent SBRT between 2006 and 2012 at a single institution. The clinical parameters (n=18) for each patient include demographic parameters, tumor characteristics, treatment fraction schemes, and pretreatment medications. Three predictive models were constructed based on different machine learning algorithms: (1) artificial neural network (ANN), (2) logistic regression (LR) and (3) support vector machine (SVM). Furthermore, to select an optimal clinical parameter set for model construction, three strategies were adopted: (1) a clonal selection algorithm (CSA) based selection strategy; (2) the sequential forward selection (SFS) method; and (3) a statistical analysis (SA) based strategy. Five-fold cross-validation was used to validate the performance of each predictive model. Accuracy was assessed by the area under the receiver operating characteristic (ROC) curve (AUC); the sensitivity and specificity of the system were also evaluated. The AUCs for ANN, LR and SVM were 0.75, 0.73, and 0.80, respectively. The sensitivity values for ANN, LR and SVM were 71.2%, 72.9% and 83.1%, while the specificity values for ANN, LR and SVM were 59.1%, 63.6% and 63.6%, respectively. Meanwhile, the CSA based strategy outperformed SFS and SA in terms of AUC, sensitivity and specificity. Based on clinical parameters, the SVM with the CSA optimal parameter set selection strategy achieves better performance than the other strategies for predicting distant failure in lung SBRT patients. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
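The model comparison described above can be sketched with scikit-learn: ANN, LR and SVM evaluated by cross-validation, reporting AUC, sensitivity and specificity. The clinical features are simulated and the clonal-selection feature search is omitted:

```python
# Sketch: compare ANN / LR / SVM via cross-validated probabilities, reporting
# AUC, sensitivity and specificity. Simulated stand-in for clinical features.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=81, n_features=18, n_informative=6,
                           weights=[0.7], random_state=0)
models = {"ANN": MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                               random_state=0),
          "LR": LogisticRegression(max_iter=1000),
          "SVM": SVC(probability=True, random_state=0)}
for name, clf in models.items():
    prob = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
    tn, fp, fn, tp = confusion_matrix(y, prob > 0.5).ravel()
    print(f"{name}: AUC={roc_auc_score(y, prob):.2f} "
          f"sens={tp / (tp + fn):.2f} spec={tn / (tn + fp):.2f}")
```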
Hahn, Seokyung; Moon, Min Kyong; Park, Kyong Soo; Cho, Young Min
2016-01-01
Background: Various diabetes risk scores composed of non-laboratory parameters have been developed, but only a few studies have performed cross-validation of these scores and a comparison with laboratory parameters. We evaluated the performance of diabetes risk scores composed of non-laboratory parameters, including a recently published Korean risk score (KRS), and compared them with laboratory parameters.
Methods: The data of 26,675 individuals who visited the Seoul National University Hospital Healthcare System Gangnam Center for a health screening program were reviewed for cross-sectional validation. The data of 3,029 individuals with a mean of 6.2 years of follow-up were reviewed for longitudinal validation. The KRS and 16 other risk scores were evaluated and compared with a laboratory prediction model developed by logistic regression analysis.
Results: For the screening of undiagnosed diabetes, the KRS exhibited a sensitivity of 81%, a specificity of 58%, and an area under the receiver operating characteristic curve (AROC) of 0.754. Other scores showed AROCs that ranged from 0.697 to 0.782. For the prediction of future diabetes, the KRS exhibited a sensitivity of 74%, a specificity of 54%, and an AROC of 0.696. Other scores had AROCs ranging from 0.630 to 0.721. The laboratory prediction model, composed of fasting plasma glucose and hemoglobin A1c levels, showed a significantly higher AROC (0.838, P < 0.001) than the KRS. The addition of the KRS to the laboratory prediction model increased the AROC (0.849, P = 0.016) without a significant improvement in the risk classification (net reclassification index: 4.6%, P = 0.264).
Conclusions: The non-laboratory risk scores, including the KRS, are useful for estimating the risk of undiagnosed diabetes but are inferior to the laboratory parameters for predicting future diabetes. PMID:27214034
Vařeková, Radka Svobodová; Jiroušková, Zuzana; Vaněk, Jakub; Suchomel, Šimon; Koča, Jaroslav
2007-01-01
The Electronegativity Equalization Method (EEM) is a fast approach for charge calculation. A challenging part of the EEM is the parameterization, which is performed using ab initio charges obtained for a set of molecules. The goal of our work was to perform the EEM parameterization for selected sets of organic, organohalogen and organometallic molecules. We have performed the most robust parameterization published so far. The EEM parameterization was based on 12 training sets selected from a database of predicted 3D structures (NCI DIS) and from a database of crystallographic structures (CSD). Each set contained from 2000 to 6000 molecules. We have shown that the number of molecules in the training set is very important for the quality of the parameters. We have improved the EEM parameters (STO-3G MPA charges) for elements that were already parameterized, specifically C, O, N, H, S, F and Cl. The new parameters provide more accurate charges than those published previously. We have also developed new parameters for elements that had not yet been parameterized, specifically Br, I, Fe and Zn. We have also performed crossover validation of all obtained parameters using all training sets that included the relevant elements, and confirmed that the calculated parameters provide accurate charges.
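For readers unfamiliar with the method, the core of an EEM charge calculation is one linear solve: electronegativity equalization across atoms plus a total-charge constraint. A minimal sketch follows; the A, B and kappa values are illustrative placeholders, not the parameters fitted in the study:

```python
# Sketch: solve A_i + B_i*q_i + kappa*sum_{j!=i} q_j/R_ij = chi_bar,  sum_i q_i = Q.
import numpy as np

def eem_charges(A, B, R, kappa=0.529, total_charge=0.0):
    n = len(A)
    M = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    for i in range(n):
        M[i, i] = B[i]
        for j in range(n):
            if j != i:
                M[i, j] = kappa / R[i, j]
        M[i, n] = -1.0                 # unknown equalized electronegativity chi_bar
        rhs[i] = -A[i]
    M[n, :n] = 1.0                     # charge conservation row
    rhs[n] = total_charge
    sol = np.linalg.solve(M, rhs)
    return sol[:n], sol[n]             # atomic charges, chi_bar

# Toy 3-atom "molecule" with invented parameters and distances (angstrom).
A = np.array([8.5, 7.2, 7.2]); B = np.array([11.0, 9.0, 9.0])
R = np.array([[0.0, 1.0, 1.0], [1.0, 0.0, 1.6], [1.0, 1.6, 0.0]])
q, chi = eem_charges(A, B, R)
print("charges:", q.round(3), "chi_bar:", round(chi, 3))
```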
Validity of a smartphone protractor to measure sagittal parameters in adult spinal deformity.
Kunkle, William Aaron; Madden, Michael; Potts, Shannon; Fogelson, Jeremy; Hershman, Stuart
2017-10-01
Smartphones have become an integral tool in the daily life of health-care professionals (Franko 2011). Their ease of use and wide availability often make smartphones the first tool surgeons use to perform measurements. This technique has been validated for certain orthopedic pathologies (Shaw 2012; Quek 2014; Milanese 2014; Milani 2014), but never for assessing sagittal parameters in adult spinal deformity (ASD). This study was designed to assess the validity, reproducibility, precision, and efficiency of using a smartphone protractor application to measure sagittal parameters commonly used in ASD assessment and surgical planning. This study aimed to (1) determine the validity of smartphone protractor applications, (2) determine the intra- and interobserver reliability of smartphone protractor applications when used to measure sagittal parameters in ASD, (3) determine the efficiency of using a smartphone protractor application to measure sagittal parameters, and (4) elucidate whether a physician's level of experience impacts the reliability or validity of using a smartphone protractor application to measure sagittal parameters in ASD. An experimental validation study was carried out. Thirty standard 36″ standing lateral radiographs were examined. Three separate measurements were performed using a marker and protractor; then, at a separate time point, three separate measurements were performed using a smartphone protractor application for all 30 radiographs. The first 10 radiographs were then re-measured two more times, for a total of three measurements with both the smartphone protractor and the marker and protractor. The parameters included lumbar lordosis, pelvic incidence, and pelvic tilt. Three raters performed all measurements: a junior-level orthopedic resident, a senior-level orthopedic resident, and a fellowship-trained spinal deformity surgeon. All data, including the time to perform the measurements, were recorded, and statistical analysis was performed to determine intra- and interobserver reliability, as well as accuracy, efficiency, and precision. Intra- and interclass correlation coefficients were calculated using R (version 3.3.2, 2016) to determine the degree of intra- and interobserver reliability. High rates of intra- and interobserver reliability were observed between the junior resident, senior resident, and attending surgeon when using the smartphone protractor application, as demonstrated by inter- and intraclass correlation coefficients greater than 0.909 and 0.874, respectively. High rates of inter- and intraobserver reliability were also seen between the junior resident, senior resident, and attending surgeon when a marker and protractor were used, as demonstrated by inter- and intraclass correlation coefficients greater than 0.909 and 0.807, respectively. The lumbar lordosis, pelvic incidence, and pelvic tilt values were accurately measured by all three raters, with excellent inter- and intraclass correlation coefficient values. When the first 10 radiographs were re-measured at different time points, a high degree of precision was noted. Measurements performed using the smartphone application were consistently faster than those using a marker and protractor; this difference reached statistical significance (p < .05). Adult spinal deformity radiographic parameters can be measured accurately, precisely, reliably, and more efficiently using a smartphone protractor application than with a standard protractor and wax pencil. A high degree of intra- and interobserver reliability was seen between the residents and attending surgeon, indicating that measurements made with a smartphone protractor are unaffected by an observer's level of experience. As a result, smartphone protractors may be used when planning ASD surgery. Copyright © 2017 Elsevier Inc. All rights reserved.
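The study computed its reliability statistics in R; as a hedged illustration of the same quantity, the sketch below computes a two-way random-effects, single-measurement intraclass correlation, ICC(2,1), from a synthetic rating matrix (rows = radiographs, columns = raters). Which ICC form the authors used is not stated, so ICC(2,1) is an assumption:

```python
# Sketch: ICC(2,1) from a two-way ANOVA decomposition (Shrout & Fleiss).
import numpy as np

def icc_2_1(x):
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # targets
    ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # raters
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

rng = np.random.default_rng(2)
true_angle = rng.uniform(30, 60, 30)                            # 30 radiographs
ratings = true_angle[:, None] + rng.normal(0, 1.5, (30, 3))     # 3 raters, noisy
print(f"ICC(2,1) = {icc_2_1(ratings):.3f}")
```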
Kubitary, A; Alsaleh, M A
2018-03-01
This study aimed to validate the Arabic version of the two-question Quick Inventory of Depression (QID-2-Ar) in multiple sclerosis (MS) patients living in Syria during the war. A total of 100 Syrian MS patients, aged 18-60 years, were recruited at Damascus Hospital and Ibn Al-Nafees Hospital to validate the QID-2-Ar, including analyses of its screening test parameters and its construct validity. The QID-2-Ar performed very well as a screening test for depression, and its construct validity was also favorable (P<0.01). The QID-2-Ar is a good screening test for detecting depression. Using a threshold score of ≥1 rather than 2 resulted in more depressed patients being correctly identified. The Arabic version of the QID-2-Ar also has highly favorable psychometric properties. It is valid for assessing depression, especially the two main depressive symptoms (depressed mood and anhedonia) listed in DSM-5. It is a useful tool for researchers and practitioners, and a threshold score of ≥1 on the QID-2-Ar is recommended to be more certain that all those with depression are detected, without having to use a complete depression questionnaire such as the Beck Depression Inventory (BDI)-II. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Shen, Minxue; Hu, Ming; Sun, Zhenqiu
2017-01-01
Objectives To develop and validate brief scales to measure common emotional and behavioural problems among adolescents in the examination-oriented education system and collectivistic culture of China. Setting Middle schools in Hunan province. Participants 5442 middle school students aged 11–19 years were sampled. 4727 valid questionnaires were collected and used for validation of the scales. The final sample included 2408 boys and 2319 girls. Primary and secondary outcome measures The tools were assessed by the item response theory, classical test theory (reliability and construct validity) and differential item functioning. Results Four scales to measure anxiety, depression, study problem and sociality problem were established. Exploratory factor analysis showed that each scale had two solutions. Confirmatory factor analysis showed acceptable to good model fit for each scale. Internal consistency and test–retest reliability of all scales were above 0.7. Item response theory showed that all items had acceptable discrimination parameters and most items had appropriate difficulty parameters. 10 items demonstrated differential item functioning with respect to gender. Conclusions Four brief scales were developed and validated among adolescents in middle schools of China. The scales have good psychometric properties with minor differential item functioning. They can be used in middle school settings, and will help school officials to assess the students’ emotional/behavioural problems. PMID:28062469
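As a small illustration of the item response theory quantities reported above: in a two-parameter logistic (2PL) model each item carries a discrimination parameter a and a difficulty parameter b. The sketch below uses invented values purely to show how the two parameters shape the item characteristic curve:

```python
# Sketch: 2PL item response function P(endorse | theta) = 1 / (1 + exp(-a*(theta - b))).
import numpy as np

def p_endorse_2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)          # latent trait level (e.g. anxiety)
for a, b in [(1.8, 0.0), (0.6, 0.0), (1.8, 1.0)]:
    print(f"a={a:.1f} b={b:+.1f}:", np.round(p_endorse_2pl(theta, a, b), 2))
# Higher "a" gives a steeper curve (better discrimination); higher "b"
# shifts the curve right (the item is "harder" to endorse).
```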
Nonimaging detectors in drug development and approval.
Wagner, H N
2001-07-01
Regulatory applications for imaging biomarkers will expand in proportion to the validation of specific parameters as they apply to individual questions in the management of disease. This validation is likely to be applicable only to a particular class of drug or a single mechanism of action. Awareness among the world's regulatory authorities of the potential for these emerging technologies is high, but so is the cost to the sponsor (including the logistics of including images in a dossier), and therefore the pharmaceutical industry must evaluate carefully the potential benefit of each technology for its drug development programs, just as the authorities must consider carefully the extent to which the method is valid for the use to which the applicant has put it. For well-characterized tracer systems, it may be possible to design inexpensive cameras that make rapid assessments.
NASA Technical Reports Server (NTRS)
Weisskopf, M. C.; Elsner, R. F.; O'Dell, S. L.; Ramsey, B. D.
2010-01-01
We present a progress report on the various endeavors we are undertaking at MSFC in support of the Wide Field X-Ray Telescope development. In particular we discuss assembly and alignment techniques, in-situ polishing corrections, and the results of our efforts to optimize mirror prescriptions including polynomial coefficients, relative shell displacements, detector placements and tilts. This optimization does not require a blind search through the multi-dimensional parameter space. Under the assumption that the parameters are small enough so that second-order expansions are valid, we show that the performance at the detector can be expressed as a quadratic function with numerical coefficients derived from a ray trace through the underlying Wolter I optic. The optimal values for the parameters are found by solving the linear system of equations created by setting the derivatives of this function with respect to each parameter to zero.
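A minimal sketch of that optimization step: if performance is quadratic in the small alignment parameters x, f(x) = c + g·x + ½ xᵀHx, then setting ∇f = g + Hx = 0 reduces the search to one linear solve. The gradient g and Hessian H below are synthetic stand-ins for coefficients that would come from the ray trace:

```python
# Sketch: minimize a quadratic merit function by solving H x = -g.
import numpy as np

rng = np.random.default_rng(3)
n = 6                                   # e.g. shell displacements, tilts, ...
Q = rng.normal(size=(n, n))
H = Q @ Q.T + n * np.eye(n)             # synthetic positive-definite Hessian
g = rng.normal(size=n)                  # synthetic gradient coefficients

x_opt = np.linalg.solve(H, -g)          # df/dx = g + H x = 0 at the optimum
print("optimal parameters:", x_opt.round(4))
```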
NASA Astrophysics Data System (ADS)
Gordeev, E.; Sergeev, V.; Honkonen, I.; Kuznetsova, M.; Rastätter, L.; Palmroth, M.; Janhunen, P.; Tóth, G.; Lyon, J.; Wiltberger, M.
2015-12-01
Global magnetohydrodynamic (MHD) modeling is a powerful tool in space weather research and predictions. There are several advanced and still developing global MHD (GMHD) models that are publicly available via the Community Coordinated Modeling Center's (CCMC) Run on Request system, which allows users to simulate the magnetospheric response to different solar wind conditions, including extraordinary events like geomagnetic storms. Systematic validation of GMHD models against observations continues to be a challenge, as does comparative benchmarking of different models against each other. In this paper we describe and test a new approach in which (i) a set of critical large-scale system parameters is explored/tested, which are produced by (ii) a specially designed set of computer runs that simulate realistic statistical distributions of critical solar wind parameters, and are compared to (iii) observation-based empirical relationships for these parameters. Tested under approximately similar conditions (similar inputs, comparable grid resolution, etc.), the four models publicly available at the CCMC predict rather well the absolute values and variations of those key parameters (magnetospheric size, magnetic field, and pressure) that are directly related to the large-scale magnetospheric equilibrium in the outer magnetosphere, for which MHD is supposed to be a valid approach. At the same time, the models show systematic differences in other parameters, being especially different in predicting the global convection rate, total field-aligned current, and magnetic flux loading into the magnetotail after a north-south interplanetary magnetic field turning. According to the validation results, none of the models emerges as an absolute leader. The new approach suggested for evaluating model performance against reality may be used by model users when planning their investigations, as well as by model developers and those interested in quantitatively evaluating progress in magnetospheric modeling.
Kinetic modelling of anaerobic hydrolysis of solid wastes, including disintegration processes.
García-Gen, Santiago; Sousbie, Philippe; Rangaraj, Ganesh; Lema, Juan M; Rodríguez, Jorge; Steyer, Jean-Philippe; Torrijos, Michel
2015-01-01
A methodology to estimate disintegration and hydrolysis kinetic parameters of solid wastes and validate an ADM1-based anaerobic co-digestion model is presented. Kinetic parameters of the model were calibrated from batch reactor experiments treating fruit and vegetable wastes individually (among other residues), following a new protocol for batch tests. In addition, decoupled disintegration kinetics for readily and slowly biodegradable fractions of solid wastes was considered. Calibrated parameters from batch assays of individual substrates were used to validate the model for a semi-continuous co-digestion operation treating 5 fruit and vegetable wastes simultaneously. The semi-continuous experiment was carried out in a lab-scale CSTR reactor for 15 weeks at organic loading rates ranging between 2.0 and 4.7 g VS/(L d). The model (built in Matlab/Simulink) fitted the experimental results to a large extent in both batch and semi-continuous mode and served as a powerful tool to simulate the digestion or co-digestion of solid wastes. Copyright © 2014 Elsevier Ltd. All rights reserved.
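The original model runs in Matlab/Simulink; as a hedged Python sketch of the calibration idea, the snippet below fits decoupled first-order kinetics for readily (k_r) and slowly (k_s) degradable fractions to a synthetic batch methane curve. The model form, data and parameter values are illustrative assumptions, not the paper's calibrated values:

```python
# Sketch: fit B(t) = B0*(alpha*(1-exp(-k_r t)) + (1-alpha)*(1-exp(-k_s t))) to batch data.
import numpy as np
from scipy.optimize import curve_fit

def batch_model(t, B0, alpha, k_r, k_s):
    return B0 * (alpha * (1 - np.exp(-k_r * t)) + (1 - alpha) * (1 - np.exp(-k_s * t)))

t = np.linspace(0, 30, 31)                                    # days
true = batch_model(t, 450.0, 0.6, 0.8, 0.08)                  # mL CH4 / g VS (invented)
data = true + np.random.default_rng(4).normal(0, 8, t.size)   # noisy assay

popt, _ = curve_fit(batch_model, t, data, p0=[400, 0.5, 0.5, 0.05],
                    bounds=([0, 0, 0, 0], [1000, 1, 5, 1]))
print("B0={:.0f}  alpha={:.2f}  k_r={:.2f}/d  k_s={:.3f}/d".format(*popt))
```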
Experience of the JPL Exploratory Data Analysis Team at validating HIRS2/MSU cloud parameters
NASA Technical Reports Server (NTRS)
Kahn, Ralph; Haskins, Robert D.; Granger-Gallegos, Stephanie; Pursch, Andrew; Delgenio, Anthony
1992-01-01
Validation of the HIRS2/MSU cloud parameters began with the cloud/climate feedback problem. The derived effective cloud amount is less sensitive to surface temperature for higher clouds. This occurs because, as the cloud elevation increases, the difference between surface temperature and cloud temperature increases, so only a small change in cloud amount is needed to effect a large change in radiance at the detector. By validating the cloud parameters is meant 'developing a quantitative sense for the physical meaning of the measured parameters', by: (1) identifying the assumptions involved in deriving parameters from the measured radiances, (2) testing the input data and derived parameters for statistical error, sensitivity, and internal consistency, and (3) comparing with similar parameters obtained from other sources using other techniques.
Simulating environmental and psychological acoustic factors of the operating room.
Bennett, Christopher L; Dudaryk, Roman; Ayers, Andrew L; McNeer, Richard R
2015-12-01
In this study, an operating room simulation environment was adapted to include quadraphonic speakers, which were used to recreate a composed clinical soundscape. To assess the validity of the composed soundscape, several acoustic parameters of this simulated environment were acquired in the presence of alarms only, background noise only, or both. These parameters were also measured, for comparison, in size-matched operating rooms at Jackson Memorial Hospital. The parameters examined included sound level, reverberation time, and predictive metrics of speech intelligibility in quiet and in noise. The sound levels and acoustic parameters were found to be comparable between the simulated environment and the actual operating rooms. The impact of the background noise on the perception of medical alarms was then examined; the noise was found to have little effect on the audibility of the alarms. This study is a first-of-its-kind report comparing the environmental and psychological acoustic parameters of a hospital simulation environment with actual operating rooms.
Total Arsenic, Cadmium, and Lead Determination in Brazilian Rice Samples Using ICP-MS
Buzzo, Márcia Liane; de Arauz, Luciana Juncioni; Carvalho, Maria de Fátima Henriques; Arakaki, Edna Emy Kumagai; Matsuzaki, Richard; Tiglea, Paulo
2016-01-01
This study aimed to investigate a suitable method for rice sample preparation, and to validate and apply the method for monitoring the concentrations of total arsenic, cadmium, and lead in rice by using Inductively Coupled Plasma Mass Spectrometry (ICP-MS). Various rice sample preparation procedures were evaluated. The analytical method was validated by measuring several parameters, including the limit of detection (LOD), limit of quantification (LOQ), linearity, relative bias, and repeatability. Regarding the sample preparation, recoveries of spiked samples were within the acceptable range: from 89.3 to 98.2% for the muffle furnace, 94.2 to 103.3% for the heating block, 81.0 to 115.0% for the hot plate, and 92.8 to 108.2% for the microwave. The validation parameters showed that the method is fit for purpose, with total arsenic, cadmium, and lead within the limits of Brazilian legislation. The method was applied to 37 rice samples (including polished, brown, and parboiled) consumed by the Brazilian population. The total arsenic, cadmium, and lead contents were lower than the established legislative values, except for total arsenic in one brown rice sample. This study indicates the need to establish monitoring programs for this type of cereal, with the aim of promoting public health. PMID:27766178
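Two of the validation parameters named above have standard closed forms: following the common ICH-style convention, LOD = 3.3σ/S and LOQ = 10σ/S, where S is the calibration slope and σ the residual standard deviation of the calibration curve. A sketch with synthetic calibration data (the exact convention the authors used is an assumption):

```python
# Sketch: LOD and LOQ from a linear calibration curve.
import numpy as np

conc = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0])       # standards (ug/kg)
signal = 120.0 * conc + 5.0 + np.random.default_rng(5).normal(0, 6, conc.size)

S, intercept = np.polyfit(conc, signal, 1)              # calibration slope S
residuals = signal - (S * conc + intercept)
sigma = residuals.std(ddof=2)                           # residual std deviation

print(f"LOD = {3.3 * sigma / S:.3f} ug/kg   LOQ = {10 * sigma / S:.3f} ug/kg")
```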
de Oliveira, Anapolino Macedo; Fonseca, Antônio Augusto; Camargos, Marcelo Fernandes; Orzil, Lívia Maria; Laguardia-Nascimento, Mateus; Oliveira, Anna Gabriella Guimarães; Rodrigues, Jacqueline Gomes; Sales, Mariana Lázaro; de Oliveira, Tatiana Flávia Pinheiro; de Melo, Cristiano Barros
2018-07-01
Vesicular stomatitis is an infectious disease that occurs mainly in countries of the Western Hemisphere and affects cattle, swine and horses. The clinical symptoms in cattle and swine are similar to foot-and-mouth disease and include vesicular ulceration of the tongue and mouth. The disease requires a rapid and accurate differential diagnosis, aiming for immediate implementation of control measures. The objective of the present study was to develop and perform validation tests of multiplex RT-qPCR(s) for the detection of RNA from Alagoas vesiculovirus, considering the parameters of sensitivity and analytical specificity, analytical performance (repeatability and reproducibility criteria) and the uncertainty of the measurement. The threshold cycle values obtained in triplicate from each sample were evaluated by considering the variations between days, analysts and equipment in an analysis of variance aimed at determining the variances of repeatability and reproducibility. The results showed that RT-qPCRs had excellent sensitivity and specificity in the detection of RNA of the Alagoas vesiculovirus. The validation parameters showed low coefficients of variation and were equivalent to those found in other validation studies, indicating that the tests presented excellent repeatability and reproducibility. Copyright © 2018 Elsevier B.V. All rights reserved.
Evaluation Of The MODIS-VIIRS Land Surface Reflectance Fundamental Climate Data Record.
NASA Astrophysics Data System (ADS)
Roger, J. C.; Vermote, E.; Skakun, S.; Murphy, E.; Holben, B. N.; Justice, C. O.
2016-12-01
The land surface reflectance is a fundamental climate data record at the basis of the derivation of other climate data records (albedo, LAI/FPAR, vegetation indices) and has been recognized as a key parameter in the understanding of land-surface-climate processes. Here, we present the validation of the land surface reflectance products derived from MODIS and VIIRS data. The methodology uses the 6SV code and data from the AERONET network. The first part was to define a protocol for using the AERONET data. To correctly take into account the aerosol model, we used the aerosol microphysical properties provided by the AERONET network, including the size distribution (%Cf, %Cc, rf, rc, σr, σc), complex refractive indices, and sphericity. Of the 670 available AERONET sites, we selected 230 sites with sufficient data. To be useful for validation, the aerosol model should be readily available at any time, which is rarely the case. We therefore used regressions for each microphysical parameter, with the aerosol optical thickness at 440 nm and the Ångström coefficient as predictors. Comparisons with the AERONET dataset give good APU (Accuracy-Precision-Uncertainty) values for each parameter. The second part of the study relies on the theoretical land surface retrieval. We generated TOA synthetic data using aerosol models from AERONET and determined the APU of the surface reflectance retrieval while applying the MODIS and VIIRS atmospheric correction software. Over 250 AERONET sites, the global uncertainty for MODIS band 1 (red) is always lower than 0.0015 (when surface reflectance is > 0.04). This very good result shows the validity of our reference. We then used this reference to validate the MODIS and VIIRS surface reflectance products; the overall accuracy clearly reaches specifications. Finally, we present an error budget of the surface reflectance retrieval: to better understand how to improve the methodology, we defined an exhaustive error budget that includes all inputs (sensor, calibration, aerosol properties, atmospheric conditions, etc.). This work shows, for example, that the aerosol optical thickness drives the uncertainties of the retrieval, and that the absorption and the volume concentration of the fine aerosol mode also have an important impact.
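The APU metrics used above have simple definitions: accuracy is the mean bias, precision the standard deviation of the bias, and uncertainty its root mean square. A minimal sketch with synthetic reflectance pairs (the numbers are placeholders, not the study's results):

```python
# Sketch: Accuracy-Precision-Uncertainty (APU) metrics for a retrieval.
import numpy as np

rng = np.random.default_rng(6)
ref = rng.uniform(0.04, 0.3, 1000)                 # reference surface reflectance
retrieved = ref + rng.normal(0.0003, 0.001, ref.size)

bias = retrieved - ref
accuracy = bias.mean()                             # A: mean bias
precision = bias.std(ddof=1)                       # P: spread of the bias
uncertainty = np.sqrt((bias ** 2).mean())          # U: RMS error
print(f"A={accuracy:.5f}  P={precision:.5f}  U={uncertainty:.5f}")
```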
40 CFR 761.389 - Testing parameter requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... variable testing parameters described in this section which may be used in the validation study. The conditions demonstrated in the validation study for these variables shall become the required conditions for.... During the validation study, use the same ratio of contaminated surface area to soak solvent volume as...
40 CFR 761.389 - Testing parameter requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... variable testing parameters described in this section which may be used in the validation study. The conditions demonstrated in the validation study for these variables shall become the required conditions for.... During the validation study, use the same ratio of contaminated surface area to soak solvent volume as...
40 CFR 761.389 - Testing parameter requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... variable testing parameters described in this section which may be used in the validation study. The conditions demonstrated in the validation study for these variables shall become the required conditions for.... During the validation study, use the same ratio of contaminated surface area to soak solvent volume as...
A New Look at Bias in Aptitude Tests.
ERIC Educational Resources Information Center
Scheuneman, Janice Dowd
1981-01-01
Statistical bias in measurement and ethnic-group bias in testing are discussed, reviewing predictive and construct validity studies. Item bias is reconceptualized to include distance of item content from respondent's experience. Differing values of mean and standard deviation for bias parameter are analyzed in a simulation. References are…
Saraf, Sanatan; Mathew, Thomas; Roy, Anindya
2015-01-01
For the statistical validation of surrogate endpoints, an alternative formulation is proposed for testing Prentice's fourth criterion, under a bivariate normal model. In such a setup, the criterion involves inference concerning an appropriate regression parameter, and the criterion holds if the regression parameter is zero. Testing such a null hypothesis has been criticized in the literature since it can only be used to reject a poor surrogate, and not to validate a good surrogate. In order to circumvent this, an equivalence hypothesis is formulated for the regression parameter, namely the hypothesis that the parameter is equivalent to zero. Such an equivalence hypothesis is formulated as an alternative hypothesis, so that the surrogate endpoint is statistically validated when the null hypothesis is rejected. Confidence intervals for the regression parameter and tests for the equivalence hypothesis are proposed using bootstrap methods and small sample asymptotics, and their performances are numerically evaluated and recommendations are made. The choice of the equivalence margin is a regulatory issue that needs to be addressed. The proposed equivalence testing formulation is also adopted for other parameters that have been proposed in the literature on surrogate endpoint validation, namely, the relative effect and proportion explained.
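A hedged sketch of the equivalence formulation: the surrogate is statistically validated when an interval estimate for the regression parameter lies entirely inside an equivalence margin (-δ, +δ). The snippet below simplifies the paper's bivariate-normal setting to an ordinary slope and uses a percentile bootstrap; the margin, data and the 95% interval choice are all illustrative assumptions:

```python
# Sketch: bootstrap equivalence check for a regression parameter.
import numpy as np

rng = np.random.default_rng(7)
n, delta = 200, 0.10                                    # delta: equivalence margin (assumed)
surrogate = rng.normal(size=n)
true_endpoint = 0.02 * surrogate + rng.normal(size=n)   # nearly zero slope

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

boots = np.array([slope(surrogate[idx], true_endpoint[idx])
                  for idx in rng.integers(0, n, (2000, n))])
lo, hi = np.percentile(boots, [2.5, 97.5])
verdict = "equivalent to zero" if (-delta < lo and hi < delta) else "not shown equivalent"
print(f"95% CI for slope: ({lo:.3f}, {hi:.3f}) -> {verdict}")
```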
A model for flexi-bar to evaluate intervertebral disc and muscle forces in exercises.
Abdollahi, Masoud; Nikkhoo, Mohammad; Ashouri, Sajad; Asghari, Mohsen; Parnianpour, Mohamad; Khalaf, Kinda
2016-10-01
This study developed and validated a lumped parameter model for the FLEXI-BAR, a popular training instrument that provides vibration stimulation. The model, which can be used in conjunction with musculoskeletal modeling software for quantitative biomechanical analyses, consists of 3 rigid segments, 2 torsional springs, and 2 torsional dashpots. Two different sets of experiments were conducted to determine the model's key parameters, including the stiffness of the springs and the damping ratio of the dashpots. In the first set of experiments, the free vibration of the FLEXI-BAR with an initial displacement at its end was considered, while in the second set, forced oscillations of the bar were studied. The properties of the mechanical elements in the lumped parameter model were derived using a nonlinear optimization algorithm that minimized the difference between the model's predictions and the experimental data. The results showed that the model is valid (8% error) and can be used for simulating exercises with the FLEXI-BAR for excitations in the range of the natural frequency. The model was then validated in combination with the AnyBody musculoskeletal modeling software, where lumbar disc, spinal muscle and hand muscle forces were determined during different FLEXI-BAR exercise simulations. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
Kinetic modelling of anaerobic hydrolysis of solid wastes, including disintegration processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
García-Gen, Santiago; Sousbie, Philippe; Rangaraj, Ganesh
2015-01-15
Highlights: • Fractionation of solid wastes into readily and slowly biodegradable fractions. • Kinetic coefficient estimation from mono-digestion batch assays. • Validation of kinetic coefficients with a continuous co-digestion experiment. • Simulation of batch and continuous experiments with an ADM1-based model. - Abstract: A methodology to estimate disintegration and hydrolysis kinetic parameters of solid wastes and validate an ADM1-based anaerobic co-digestion model is presented. Kinetic parameters of the model were calibrated from batch reactor experiments treating fruit and vegetable wastes individually (among other residues), following a new protocol for batch tests. In addition, decoupled disintegration kinetics for readily and slowly biodegradable fractions of solid wastes was considered. Calibrated parameters from batch assays of individual substrates were used to validate the model for a semi-continuous co-digestion operation treating 5 fruit and vegetable wastes simultaneously. The semi-continuous experiment was carried out in a lab-scale CSTR reactor for 15 weeks at organic loading rates ranging between 2.0 and 4.7 g VS/(L d). The model (built in Matlab/Simulink) fitted the experimental results to a large extent in both batch and semi-continuous mode and served as a powerful tool to simulate the digestion or co-digestion of solid wastes.
Is the smile line a valid parameter for esthetic evaluation? A systematic literature review.
Passia, Nicole; Blatz, Markus; Strub, Jörg Rudolf
2011-01-01
The "smile line" is commonly used as a parameter to evaluate and categorize a person's smile. This systematic literature review assessed the existing evidence on the validity and universal applicability of this parameter. The latter was evaluated based on studies on smile perception by orthodontists, general clinicians, and laypeople. A review of the literature published between October 1973 and January 2010 was conducted with the electronic database Pubmed and the search terms "smile," "smile line," "smile arc," and "smile design." The search yielded 309 articles, of which nine studies were included based on the selection criteria. The selected studies typically correlate the smile line with the position of the upper lip during a smile while, on average, 75 to 100% of the maxillary anterior teeth are exposed. A virtual line that connects the incisal edges of the maxillary anterior teeth commonly follows the upper border of the lower lip. Average and parallel smile lines are most common, influenced by the age and gender of a person. Orthodontists, general clinicians, and laypeople have similar preferences and rate average smile lines as most attractive. The smile line is a valid tool to assess the esthetic appearance of a smile. It can be applied universally as clinicians and laypersons perceive and judge it similarly.
Quantifying uncertainty in geoacoustic inversion. II. Application to broadband, shallow-water data.
Dosso, Stan E; Nielsen, Peter L
2002-01-01
This paper applies the new method of fast Gibbs sampling (FGS) to estimate the uncertainties of seabed geoacoustic parameters in a broadband, shallow-water acoustic survey, with the goal of interpreting the survey results and validating the method for experimental data. FGS applies a Bayesian approach to geoacoustic inversion based on sampling the posterior probability density to estimate marginal probability distributions and parameter covariances. This requires knowledge of the statistical distribution of the data errors, including both measurement and theory errors, which is generally not available. Invoking the simplifying assumption of independent, identically distributed Gaussian errors allows a maximum-likelihood estimate of the data variance and leads to a practical inversion algorithm. However, it is necessary to validate these assumptions, i.e., to verify that the parameter uncertainties obtained represent meaningful estimates. To this end, FGS is applied to a geoacoustic experiment carried out at a site off the west coast of Italy where previous acoustic and geophysical studies have been performed. The parameter uncertainties estimated via FGS are validated by comparison with: (i) the variability in the results of inverting multiple independent data sets collected during the experiment; (ii) the results of FGS inversion of synthetic test cases designed to simulate the experiment and data errors; and (iii) the available geophysical ground truth. Comparisons are carried out for a number of different source bandwidths, ranges, and levels of prior information, and indicate that FGS provides reliable and stable uncertainty estimates for the geoacoustic inverse problem.
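As a toy illustration of posterior sampling in this spirit (Metropolis-Hastings rather than the paper's fast Gibbs sampler), the sketch below estimates the marginal posterior of one "geoacoustic" parameter under the independent Gaussian error assumption discussed above. The forward model (straight-ray travel times) and all numbers are synthetic placeholders, not the paper's acoustic propagation model:

```python
# Sketch: Metropolis-Hastings sampling of a single-parameter posterior.
import numpy as np

rng = np.random.default_rng(8)
ranges = np.linspace(100, 2000, 20)      # source-receiver ranges (m)

def forward(c):                          # stand-in forward model: travel times
    return ranges / c                    # seconds

c_true, sigma = 1520.0, 0.005            # "sediment sound speed", error std
data = forward(c_true) + rng.normal(0, sigma, ranges.size)

def log_post(c):                         # flat prior on [1400, 1700] m/s
    if not 1400 <= c <= 1700:
        return -np.inf
    return -0.5 * np.sum((data - forward(c)) ** 2) / sigma**2

chain, c = [], 1500.0
lp = log_post(c)
for _ in range(20000):
    prop = c + rng.normal(0, 2.0)        # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        c, lp = prop, lp_prop
    chain.append(c)

post = np.array(chain[5000:])            # discard burn-in
print(f"posterior mean {post.mean():.1f} m/s, std {post.std():.2f} m/s")
```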
Simão, Talita Prado; Lopes Chaves, Erika de Cássia; Campos de Carvalho, Emília; Nogueira, Denismar Alves; Carvalho, Camila Csizmar; Ku, Ya-Li; Iunes, Denise Hollanda
2016-01-01
To culturally adapt and test the psychometric properties of the Brazilian version of the Spiritual Distress Scale. In Brazil, there is currently a lack of validated instruments that assess the spiritual dimension, which includes the spiritual distress phenomenon that can be experienced at different moments in a person's life. This can include times when a person is affected by a disease such as cancer, which occurs suddenly and causes significant life changes. Methodological and cross-sectional study. Cultural adaptation of the Spiritual Distress Scale was performed using translation and back-translation stages, evaluation of cultural equivalence, committee review and pretesting. An interview using the Brazilian version of the scale was conducted with 170 patients in a cancer treatment unit of a charitable general hospital (not state funded). The following psychometric properties were evaluated: construct validity (divergence and factor analysis) and internal consistency/reliability (Cronbach's α and Kappa). Reliability analysis in the intra- and inter-rater phase showed that more than half of the items had Kappa values > 0·75. A correlation between the Spiritual Well-Being Scale and the Spiritual Distress Scale was found. Overall, the Spiritual Distress Scale showed a Cronbach's α of 0·87, with three of its four domains showing significant parameters. The Brazilian version of the Spiritual Distress Scale proved to be a reliable, valid and efficient instrument that is capable of assessing spiritual distress. The Brazilian Spiritual Distress Scale presented reliability and validity parameters that correspond to the original English version of the scale. The existence of an internationally validated instrument that assesses spiritual distress will assist healthcare professionals and researchers in recognising this phenomenon in clinical practice. © 2015 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Adalarasan, R.; Santhanakumar, M.
2015-01-01
In the present work, the yield strength, ultimate strength and micro-hardness of lap joints formed with Al 6061 alloy sheets by Tungsten Inert Gas (TIG) welding and Metal Inert Gas (MIG) welding were studied for various combinations of the welding parameters. The parameters taken for study include welding current, voltage, welding speed and inert gas flow rate. Taguchi's L9 orthogonal array was used to conduct the experiments, and an integrated technique of desirability grey relational analysis was presented for optimizing the welding parameters. The robustness ignored in the desirability approach is compensated for by the grey relational approach to predict the optimal setting of input parameters for the TIG and MIG welding processes, which was validated through confirmation experiments.
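A hedged sketch of the grey relational step: normalise each larger-the-better response, compute grey relational coefficients against the ideal sequence, and average them into a grade used to rank the L9 trials. The 9x3 response matrix (yield strength, ultimate strength, micro-hardness) and the distinguishing coefficient ζ = 0.5 are illustrative conventions, not the study's data:

```python
# Sketch: grey relational grade for ranking Taguchi L9 trials.
import numpy as np

rng = np.random.default_rng(9)
responses = rng.uniform([180, 210, 60], [260, 300, 95], size=(9, 3))  # synthetic

norm = (responses - responses.min(0)) / (responses.max(0) - responses.min(0))
delta = 1.0 - norm                      # deviation from the ideal sequence
zeta = 0.5                              # distinguishing coefficient (convention)
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = grc.mean(axis=1)                # grey relational grade per trial

print("best L9 trial:", int(grade.argmax()) + 1, "grades:", grade.round(3))
```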
van Sonsbeek, Gerda R; van der Kolk, Johannes H; van Leeuwen, Johannes P T M; Schaftenaar, Willem
2011-05-01
Hypocalcemia is a well-known cause of dystocia in animals, including elephants in captivity. In order to study calcium metabolism in elephants, it is of utmost importance to use properly validated assays, as these may be prone to specific matrix effects in elephant blood. The aim of the current study was to conduct preliminary work towards the validation of various parameters involved in calcium metabolism in both blood and urine of captive elephants. Basal values of these parameters were compared between Asian elephants (Elephas maximus) and African elephants (Loxodonta africana). Preliminary testing indicated that total calcium, inorganic phosphorus, and creatinine measurements are valid for use in plasma, and creatinine in urine, in both species. Furthermore, measurements of bone alkaline phosphatase and N-terminal telopeptide of type I collagen appeared valid for use in Asian elephants. The mean heparinized plasma ionized calcium concentration and pH were not significantly affected by 3 cycles of freezing and thawing. Storage at 4 °C, room temperature, and 37 °C for 6, 12, and 24 hr did not alter the heparinized plasma ionized calcium concentration in Asian elephants. The following linear regression equation relating pH (range: 6.858-7.887) and ionized calcium concentration in heparinized plasma was derived: iCa(7.4) (mmol/l) = -2.1075 + 0.3130·pH(actual) + 0.8296·iCa(actual) (mmol/l). Mean basal pH values in Asian elephant whole blood and plasma were 7.40 ± 0.048 and 7.49 ± 0.077, respectively. The urinary specific gravity and creatinine concentrations in Asian and African elephants were significantly correlated, and both were significantly lower in Asian elephants. © 2011 The Author(s)
NASA Astrophysics Data System (ADS)
Mäkelä, Jarmo; Susiluoto, Jouni; Markkanen, Tiina; Aurela, Mika; Järvinen, Heikki; Mammarella, Ivan; Hagemann, Stefan; Aalto, Tuula
2016-12-01
We examined parameter optimisation in the JSBACH (Kaminski et al., 2013; Knorr and Kattge, 2005; Reick et al., 2013) ecosystem model, applied to two boreal forest sites (Hyytiälä and Sodankylä) in Finland. We identified and tested key parameters in soil hydrology and in forest water- and carbon-exchange-related formulations, and optimised them using the adaptive Metropolis (AM) algorithm for Hyytiälä with a 5-year calibration period (2000-2004) followed by a 4-year validation period (2005-2008). Sodankylä acted as an independent validation site, where no optimisations were made. The tuning provided estimates of the full distribution of possible parameters, along with information about correlation, sensitivity and identifiability. Some parameters were correlated with each other due to a phenomenological connection between carbon uptake and water stress, or due to other connections arising from the set-up of the model formulations. The latter holds especially for the vegetation phenology parameters. The least identifiable parameters include the phenology parameters, the parameters connecting relative humidity and soil dryness, and the field capacity of the skin reservoir. These soil parameters were masked by the large contribution from vegetation transpiration. In addition to leaf area index and the maximum carboxylation rate, the most effective parameters for adjusting the gross primary production (GPP) and evapotranspiration (ET) fluxes in seasonal tuning were related to the soil wilting point, drainage and the moisture stress imposed on vegetation. For the daily and half-hourly tunings, the most important parameters were the ratio of leaf-internal to external CO2 concentration and the parameter connecting relative humidity and soil dryness. Effectively, the seasonal tuning transferred water from soil moisture into ET, and the daily and half-hourly tunings reversed this process. The seasonal tuning improved the month-to-month development of GPP and ET, and produced the most stable estimates of water use efficiency. Compared with the seasonal tuning, the daily tuning is worse on the seasonal scale. However, the daily parametrisation best reproduced the observations of the average diurnal cycle, except for GPP in the Sodankylä validation period, where the half-hourly tuned parameters were better. In general, the daily tuning provided the largest reduction in model-data mismatch. The model's response to drought was unaffected by our parametrisations, and further studies are needed into enhancing the dry response in JSBACH.
NASA Technical Reports Server (NTRS)
Swift, C. T.; Goodberlet, M. A.; Wilkerson, J. C.
1990-01-01
For the Defense Meteorological Satellite Program's (DMSP) Special Sensor Microwave/Imager (SSM/I), an operational wind speed algorithm was developed. The algorithm is based on the D-matrix approach, which seeks a linear relationship between measured SSM/I brightness temperatures and environmental parameters. D-matrix performance was validated by comparing algorithm-derived wind speeds with near-simultaneous and co-located measurements made by offshore ocean buoys. Other topics include error budget modeling, alternate wind speed algorithms, and D-matrix performance with one or more inoperative SSM/I channels.
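A minimal sketch of the D-matrix idea: a linear map from brightness temperatures to wind speed, fitted here by least squares against buoy "ground truth". The channel values, coefficients and noise level are synthetic placeholders, not the operational D-matrix:

```python
# Sketch: linear (D-matrix style) wind speed retrieval fitted by least squares.
import numpy as np

rng = np.random.default_rng(10)
n = 500
tb = rng.uniform(150, 280, size=(n, 4))                  # 4 SSM/I channels (K, synthetic)
true_d = np.array([0.10, -0.06, 0.04, -0.02])            # invented coefficients
buoy_wind = 8.0 + tb @ true_d + rng.normal(0, 1.2, n)    # m/s, with noise

A = np.column_stack([np.ones(n), tb])                    # design matrix with offset
d, *_ = np.linalg.lstsq(A, buoy_wind, rcond=None)        # fitted D-matrix row
rmse = np.sqrt(np.mean((A @ d - buoy_wind) ** 2))
print("coefficients:", d.round(3), f"RMSE = {rmse:.2f} m/s")
```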
Biswas, Samir Kumar; Kanhirodan, Rajan; Vasu, Ram Mohan; Roy, Debasish
2011-08-01
We explore a pseudodynamic form of the quadratic parameter update equation for diffuse optical tomographic reconstruction from noisy data. A few explicit and implicit strategies for obtaining the parameter updates via a semianalytical integration of the pseudodynamic equations are proposed. Despite the ill-posedness of the inverse problem associated with diffuse optical tomography, adoption of the quadratic update scheme combined with pseudotime integration appears to yield not only faster convergence but also a muted sensitivity to the regularization parameters, which include the pseudotime step size for integration. These observations are validated through reconstructions with both numerically generated and experimentally acquired data.
Scaling of hydrologic and erosion parameters derived from rainfall simulation
NASA Astrophysics Data System (ADS)
Sheridan, Gary; Lane, Patrick; Noske, Philip; Sherwin, Christopher
2010-05-01
Rainfall simulation experiments conducted at the temporal scale of minutes and the spatial scale of meters are often used to derive parameters for erosion and water quality models that operate at much larger temporal and spatial scales. While such parameterization is convenient, there has been little effort to validate this approach via nested experiments across these scales. In this paper we first review the literature relevant to some of these long acknowledged issues. We then present rainfall simulation and erosion plot data from a range of sources, including mining, roading, and forestry, to explore the issues associated with the scaling of parameters such as infiltration properties and erodibility coefficients.
NASA Astrophysics Data System (ADS)
Huang, Guoqin; Zhang, Meiqin; Huang, Hui; Guo, Hua; Xu, Xipeng
2018-04-01
Circular sawing is an important method for the processing of natural stone. The ability to predict sawing power is important in the optimisation, monitoring and control of the sawing process. In this paper, a predictive model of sawing power (PFD), based on the tangential force distribution at the sawing contact zone, is proposed, experimentally validated and modified. With regard to the influence of sawing speed on tangential force distribution, the modified PFD (MPFD) performed with high predictive accuracy across a wide range of sawing parameters, including sawing speed: the mean maximum absolute error rate was within 6.78%, and the maximum absolute error rate was within 11.7%. The practicability of predicting sawing power with the MPFD from only a few initial experimental samples was proved in case studies; on the premise of high sample measurement accuracy, only two samples are required for a fixed sawing speed. The feasibility of applying the MPFD to optimise sawing parameters and lower the energy consumption of the sawing system was also validated: a case study shows that energy use was reduced by 28% by optimising the sawing parameters. The MPFD model can be used to predict sawing power, optimise sawing parameters and control energy use.
Sensitivity and specificity of univariate MRI analysis of experimentally degraded cartilage
Lin, Ping-Chang; Reiter, David A.; Spencer, Richard G.
2010-01-01
MRI is increasingly used to evaluate cartilage in tissue constructs, explants, and animal and patient studies. However, while mean values of MR parameters, including T1, T2, magnetization transfer rate km, apparent diffusion coefficient ADC, and the dGEMRIC-derived fixed charge density, correlate with tissue status, the ability to classify tissue according to these parameters has not been explored. Therefore, the sensitivity and specificity with which each of these parameters was able to distinguish between normal and trypsin-degraded, and between normal and collagenase-degraded, cartilage explants were determined. Initial analysis was performed using a training set to determine simple group means, to which parameters obtained from a validation set were compared. T1 and ADC showed the greatest ability to discriminate between normal and degraded cartilage. Further analysis with k-means clustering, which eliminates the need for a priori identification of sample status, generally performed comparably. Use of fuzzy c-means (FCM) clustering to define centroids likewise did not improve discrimination. Finally, an FCM clustering approach was implemented in which validation samples were assigned probabilistically to control and degraded groups, reflecting the range of tissue characteristics seen with cartilage degradation. PMID:19705467
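A hedged sketch of the clustering-based classification tested above: k-means on two MR parameters (here T1 and ADC), with clusters mapped to control/degraded by majority vote and then scored for sensitivity and specificity. The feature values are synthetic, and scikit-learn's k-means stands in for whatever implementation the authors used:

```python
# Sketch: unsupervised k-means classification scored against known labels.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(11)
normal = rng.normal([0.9, 1.2], [0.08, 0.10], (40, 2))     # T1 (s), ADC (synthetic units)
degraded = rng.normal([1.1, 1.5], [0.08, 0.10], (40, 2))
X = np.vstack([normal, degraded])
y = np.r_[np.zeros(40), np.ones(40)]                        # 1 = degraded

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
if (labels[y == 1] == 1).mean() < 0.5:                      # align cluster ids to labels
    labels = 1 - labels

sensitivity = (labels[y == 1] == 1).mean()
specificity = (labels[y == 0] == 0).mean()
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```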
Code of Federal Regulations, 2011 CFR
2011-07-01
40 CFR 60.4410 - How do I establish a valid parameter range if I have chosen to continuously monitor parameters? (Protection of Environment; Environmental Protection Agency; Air Programs; Standards of Performance for New Stationary Sources)
NASA Astrophysics Data System (ADS)
Nir, A.; Doughty, C.; Tsang, C. F.
Validation methods that developed in the context of the deterministic concepts of past generations often cannot be directly applied to environmental problems, which may be characterized by limited reproducibility of results and highly complex models. Instead, validation is interpreted here as a series of activities, including both theoretical and experimental tests, designed to enhance our confidence in the capability of a proposed model to describe some aspect of reality. We examine the validation process applied to a project concerned with heat and fluid transport in porous media, in which mathematical modeling, simulation, and results of field experiments are evaluated in order to determine the feasibility of a system for seasonal thermal energy storage in shallow unsaturated soils. Technical details of the field experiments are not included, but appear in previous publications. Validation activities are divided into three stages. The first stage, carried out prior to the field experiments, is concerned with modeling the relevant physical processes, optimization of the heat-exchanger configuration and the shape of the storage volume, and multi-year simulation. Subjects requiring further theoretical and experimental study are identified at this stage. The second stage encompasses the planning and evaluation of the initial field experiment. Simulations are made to determine the experimental time scale and optimal sensor locations. Soil thermal parameters and temperature boundary conditions are estimated using an inverse method. Then results of the experiment are compared with model predictions using different parameter values and modeling approximations. In the third stage, results of an experiment performed under different boundary conditions are compared to predictions made by the models developed in the second stage. Various aspects of this theoretical and experimental field study are described as examples of the verification and validation procedure. There is no attempt to validate a specific model; rather, several models of increasing complexity are compared with experimental results. The outcome is interpreted as a demonstration of the paradigm proposed by van der Heijde [26], that different constituencies have different objectives for the validation process and that their acceptance criteria therefore differ as well.
Development of the Assessment of Belief Conflict in Relationship-14 (ABCR-14).
Kyougoku, Makoto; Teraoka, Mutsumi; Masuda, Noriko; Ooura, Mariko; Abe, Yasushi
2015-01-01
Nurses and other healthcare workers frequently experience belief conflict, one of the most important, new stress-related problems in both academic and clinical fields. In this study, using a sample of 1,683 nursing practitioners, we developed the Assessment of Belief Conflict in Relationship-14 (ABCR-14), a new scale that assesses belief conflict in the healthcare field. Standard psychometric procedures were used to develop and test the scale, including a qualitative framework concept and item-pool development, item reduction, and scale development. We analyzed the psychometric properties of the ABCR-14 according to entropy, the polyserial correlation coefficient, exploratory factor analysis, confirmatory factor analysis, average variance extracted, Cronbach's alpha, the Pearson product-moment correlation coefficient, and multidimensional item response theory (MIRT). The results of the analysis supported a three-factor model consisting of 14 items. The validity and reliability of the ABCR-14 were supported by evidence of high construct validity, structural validity, hypothesis testing, internal consistency reliability, and concurrent validity. The MIRT results offered strong support for good item response, with acceptable item slope and difficulty parameters. However, the ABCR-14 Likert scale might need to be explored further from the MIRT point of view. Still, as noted above, there is sufficient evidence to support that the ABCR-14 has high validity and reliability. The ABCR-14 demonstrates good psychometric properties for nursing belief conflict. Further studies are recommended to confirm its application in clinical practice.
Proposition of a Classification of Adult Patients with Hemiparesis in Chronic Phase.
Chantraine, Frédéric; Filipetti, Paul; Schreiber, Céline; Remacle, Angélique; Kolanowski, Elisabeth; Moissenet, Florent
2016-01-01
Patients who have developed hemiparesis as a result of a central nervous system lesion often experience reduced walking capacity and worse gait quality. Although clinically similar gait patterns have been observed, presently no clinically driven classification has been validated to group these patients' gait abnormalities at the level of the hip, knee and ankle joints. This study has thus intended to put forward a new gait classification for adult patients with hemiparesis in chronic phase, and to validate its discriminatory capacity. Twenty-six patients with hemiparesis were included in this observational study. Following a clinical examination, a clinical gait analysis, complemented by a video analysis, was performed whereby participants were requested to walk spontaneously on a 10 m walkway. A patient classification was established from the clinical examination data and video analysis. This classification was made up of three groups, including two sub-groups, defined by key abnormalities observed whilst walking. Statistical analysis was performed on 25 parameters resulting from the clinical gait analysis in order to assess the discriminatory characteristic of the classification as displayed by the walking speed and kinematic parameters. Results revealed that the parameters related to the discriminant criteria of the proposed classification were all significantly different between groups and subgroups. More generally, nearly two thirds of the 25 parameters showed significant differences (p<0.05) between the groups and sub-groups. However, prior to being fully validated, this classification must still be tested on a larger number of patients, and the repeatability of inter-operator measures must be assessed. This classification enables patients to be grouped on the basis of key abnormalities observed whilst walking and has the advantage of being usable in clinical routine without necessitating complex apparatus. In the midterm, this classification may allow a decision tree of therapies to be developed on the basis of the group in which the patient has been categorised.
Methodology for Software Reliability Prediction. Volume 2.
1987-11-01
The overall acquisition program shall include the resources, schedule, management, structure, and controls necessary to ensure that specified... Topics covered include: Independent Verification/Validation; Programming Team Structure; Educational Level of Team Members; Experience Level of Team Members; Methods Used... Prediction or Estimation Parameter Supported: Software Characteristics. Objectives: Structured programming studies and Government procurement...
ERIC Educational Resources Information Center
Bergner, Yoav; Droschler, Stefan; Kortemeyer, Gerd; Rayyan, Saif; Seaton, Daniel; Pritchard, David E.
2012-01-01
We apply collaborative filtering (CF) to dichotomously scored student response data (right, wrong, or no interaction), finding optimal parameters for each student and item based on cross-validated prediction accuracy. The approach is naturally suited to comparing different models, both unidimensional and multidimensional in ability, including a…
A summary and evaluation of semi-empirical methods for the prediction of helicopter rotor noise
NASA Technical Reports Server (NTRS)
Pegg, R. J.
1979-01-01
Existing prediction techniques are compiled and described. The descriptions include input and output parameter lists, required equations and graphs, and the range of validity for each part of the prediction procedures. Examples are provided illustrating the analysis procedure and the degree of agreement with experimental results.
Validation of balance-quality assessment using a modified bathroom scale.
Hewson, D J; Duchêne, J; Hogrel, J-Y
2015-02-01
The balance quality tester (BQT), based on a standard electronic bathroom scale, has been developed in order to assess balance quality. The BQT includes automatic detection of the person to be tested by means of an infrared detector, and Bluetooth communication capability for remote assessment when linked to a long-distance communication device such as a mobile phone. The BQT was compared to a standard force plate for validity and agreement. The two most widely reported parameters in the balance literature, the area of the centre-of-pressure (COP) displacement and the velocity of the COP displacement, were compared for 12 subjects, each of whom was tested on ten occasions on each of two days. No significant differences were observed between the BQT and the force plate for either of the two parameters, and a high level of agreement was observed between the devices. The BQT is a valid device for remote assessment of balance quality, and could provide a useful tool for long-term monitoring of people with balance problems, particularly during home monitoring.
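A hedged sketch of the two reported balance parameters: the mean velocity of the COP trajectory (path length over duration) and a 95% confidence-ellipse sway area computed from the principal variances of the COP trace. The COP signal below is synthetic, and the ellipse convention (chi-square scaling) is an assumption about the exact definition used:

```python
# Sketch: COP mean velocity and 95% confidence-ellipse sway area.
import numpy as np

rng = np.random.default_rng(12)
fs = 100.0                                                 # sampling rate (Hz)
cop = np.cumsum(rng.normal(0, 0.05, (3000, 2)), axis=0)    # mm, 30 s of sway

step = np.diff(cop, axis=0)
path_length = np.sum(np.hypot(step[:, 0], step[:, 1]))
mean_velocity = path_length / (len(cop) / fs)              # mm/s

eigvals = np.linalg.eigvalsh(np.cov(cop.T))                # principal variances
area95 = np.pi * 5.991 * np.sqrt(eigvals[0] * eigvals[1])  # chi2(2, 0.95) = 5.991

print(f"velocity = {mean_velocity:.1f} mm/s, 95% ellipse area = {area95:.0f} mm^2")
```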
NASA Astrophysics Data System (ADS)
Brown, Alexander; Eviston, Connor
2017-02-01
Multiple FEM models of complex eddy current coil geometries were created and validated to calculate the change of impedance due to the presence of a notch. Realistic simulations of eddy current inspections are required for model-assisted probability of detection (MAPOD) studies, inversion algorithms, experimental verification, and tailored probe design for NDE applications. An FEM solver was chosen to model complex real-world situations, including varying probe dimensions and orientations along with complex probe geometries. This will also enable the creation of a probe model library database with variable parameters. Verification and validation were performed using other commercially available eddy current modeling software as well as experimentally collected benchmark data. Data analysis and comparison showed that the created models correctly captured the probe-conductor interactions and accurately calculated the change in impedance in several experimental scenarios with acceptable error. The promising results of the models enabled the start of an eddy current probe model library, giving experimenters easy access to powerful parameter-based eddy current models for other project applications.
Barros, Wilson; Gochberg, Daniel F.; Gore, John C.
2009-01-01
The description of the nuclear magnetic resonance magnetization dynamics in the presence of long-range dipolar interactions, which is based upon approximate solutions of Bloch–Torrey equations including the effect of a distant dipolar field, has been revisited. New experiments show that approximate analytic solutions have a broader regime of validity as well as dependencies on pulse-sequence parameters that seem to have been overlooked. In order to explain these experimental results, we developed a new method consisting of calculating the magnetization via an iterative formalism where both diffusion and distant dipolar field contributions are treated as integral operators incorporated into the Bloch–Torrey equations. The solution can be organized as a perturbative series, whereby access to higher order terms allows one to set better boundaries on validity regimes for analytic first-order approximations. Finally, the method legitimizes the use of simple analytic first-order approximations under less demanding experimental conditions, it predicts new pulse-sequence parameter dependencies for the range of validity, and clarifies weak points in previous calculations. PMID:19425789
Willis, Brian H; Riley, Richard D
2017-09-20
An important question for clinicians appraising a meta-analysis is: are the findings likely to be valid in their own practice? Does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity, where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple ('leave-one-out') cross-validation technique, we demonstrate how we may test meta-analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta-analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta-analysis and a tailored meta-regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within-study variance, between-study variance, study sample size, and the number of studies in the meta-analysis. Finally, we apply Vn to two published meta-analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta-analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
Utilizing the social media data to validate 'climate change' indices
NASA Astrophysics Data System (ADS)
Molodtsova, T.; Kirilenko, A.; Stepchenkova, S.
2013-12-01
Reporting the observed and modeled changes in climate to the public requires measures understandable to a general audience. For example, the NASA GISS Common Sense Climate Index (Hansen et al., 1998) reports the change in climate based on six practically observable parameters, such as the air temperature exceeding the norm by one standard deviation. The utility of such constructed indices for reporting climate change depends, however, on the assumption that the selected parameters are felt and connected with the changing climate by non-experts, which needs to be validated. Dynamic discussion of climate change issues in social media may provide data for this validation. We connected the intensity of public discussion of climate change in social networks with regional weather variations for the territory of the USA. We collected the entire 2012 population of Twitter microblogging activity on the climate change topic, accumulating over 1.8 million separate records (tweets) globally. We identified the geographic locations of the tweets and associated the daily and weekly intensity of tweeting with the following weather parameters for these locations: temperature anomalies, 'hot' temperature anomalies, 'cold' temperature anomalies, and heavy rain/snow events. To account for non-weather-related events we included articles on climate change from the 'prestige press', a collection of major newspapers. We found that regional changes in weather parameters significantly affect the number of tweets published on climate change. This effect, however, is short-lived and varies throughout the country. We found that in different locations different weather parameters had the most significant effect on climate change microblogging activity. Overall, 'hot' temperature anomalies had a significant influence on climate change tweeting intensity.
Validating an Air Traffic Management Concept of Operation Using Statistical Modeling
NASA Technical Reports Server (NTRS)
He, Yuning; Davies, Misty Dawn
2013-01-01
Validating a concept of operation for a complex, safety-critical system (like the National Airspace System) is challenging because of the high dimensionality of the controllable parameters and the infinite number of states of the system. In this paper, we use statistical modeling techniques to explore the behavior of a conflict detection and resolution algorithm designed for the terminal airspace. These techniques predict the robustness of the system simulation to both nominal and off-nominal behaviors within the overall airspace. They also can be used to evaluate the output of the simulation against recorded airspace data. Additionally, the techniques carry with them a mathematical measure of the worth of each prediction: a statistical uncertainty for any robustness estimate. Uncertainty Quantification (UQ) is the process of quantitative characterization and, ultimately, reduction of uncertainties in complex systems. UQ is important for understanding the influence of uncertainties on the behavior of a system and is therefore valuable for design, analysis, and verification and validation. In this paper, we apply advanced statistical modeling methodologies and techniques to an advanced air traffic management system, namely the Terminal Tactical Separation Assured Flight Environment (T-TSAFE). We show initial results for a parameter analysis and safety boundary (envelope) detection in the high-dimensional parameter space. For our boundary analysis, we developed a new sequential approach based upon the design of computer experiments, allowing us to incorporate knowledge from domain experts into our modeling and to determine the most likely boundary shapes and their parameters. We carried out the analysis on system parameters and describe an initial approach that will allow us to include time-series inputs, such as the radar track data, into the analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez, A; Little, K; Chung, J
Purpose: To validate the use of a Channelized Hotelling Observer (CHO) model for guiding image processing parameter selection and enable improved nodule detection in digital chest radiography. Methods: In a previous study, an anthropomorphic chest phantom was imaged with and without PMMA simulated nodules using a GE Discovery XR656 digital radiography system. The impact of image processing parameters was then explored using a CHO with 10 Laguerre-Gauss channels. In this work, we validate the CHO's trend in nodule detectability as a function of two processing parameters by conducting a signal-known-exactly, multi-reader-multi-case (MRMC) ROC observer study. Five naive readers scored confidence of nodule visualization in 384 images with 50% nodule prevalence. The image backgrounds were regions-of-interest extracted from 6 normal patient scans, and the digitally inserted simulated nodules were obtained from phantom data in previous work. Each patient image was processed with both a near-optimal and a worst-case parameter combination, as determined by the CHO for nodule detection. The same 192 ROIs were used for each image processing method, with 32 randomly selected lung ROIs per patient image. Finally, the MRMC data were analyzed using the freely available iMRMC software of Gallas et al. Results: The image processing parameters which were optimized for the CHO led to a statistically significant improvement (p=0.049) in human observer AUC from 0.78 to 0.86, relative to the image processing implementation which produced the lowest CHO performance. Conclusion: Differences in user-selectable image processing methods on a commercially available digital radiography system were shown to have a marked impact on the performance of human observers in the task of lung nodule detection. Further, the effect of processing on humans was similar to the effect on CHO performance. Future work will expand this study to include a wider range of detection/classification tasks and more observers, including experienced chest radiologists.
NASA Astrophysics Data System (ADS)
Kuschenerus, Mieke; Cullen, Robert
2016-08-01
To ensure the reliability and precision of wave height estimates for future satellite altimetry missions such as Sentinel 6, reliable parameter retrieval algorithms that can extract significant wave heights up to 20 m have to be established. The retrieval methods need to be validated extensively on a wide range of possible significant wave heights. Although current missions require wave height retrievals up to 20 m, there is little evidence of systematic validation of parameter retrieval methods for sea states with wave heights above 10 m. This paper provides a definition of a set of simulated sea states with significant wave heights up to 20 m, which allow simulation of radar altimeter response echoes for extreme sea states in SAR and low resolution mode. The simulated radar responses are used to derive significant wave height estimates, which can be compared with the initial models, allowing precision estimates of the applied parameter retrieval methods. We thus establish a validation method for significant wave height retrieval for sea states causing high significant wave heights, to allow improved understanding and planning of future satellite altimetry mission validation.
Oerbekke, Michiel S; Stukstette, Mirelle J; Schütte, Kurt; de Bie, Rob A; Pisters, Martijn F; Vanwanseele, Benedicte
2017-01-01
The OpenGo seems promising for taking gait analysis out of laboratory settings due to its capability for long-term measurements and its mobility. However, the OpenGo's concurrent validity and reliability need to be assessed to determine whether the instrument is suitable for validation in patient samples. Twenty healthy volunteers participated. Center of pressure data were collected under eyes-open and eyes-closed conditions, with participants performing unilateral stance trials on the gold standard (AMTI OR6-7 force plate) while wearing the OpenGo. Temporal gait data (stance time, gait cycle time, and cadence) were collected at a self-selected comfortable walking speed, with participants performing test-retest trials on an instrumented treadmill while wearing the OpenGo. Validity was assessed using Bland-Altman plots. Reliability was assessed with the Intraclass Correlation Coefficient (2,1), and smallest detectable changes were calculated. Negative means of differences were found in all measured parameters, illustrating lower scores for the OpenGo on average. The OpenGo showed negative upper limits of agreement in center of pressure parameters on the mediolateral axis. Temporal reliability ICCs ranged from 0.90 to 0.93. Smallest detectable changes for both stance times were 0.04 (left) and 0.05 (right) seconds, for gait cycle time 0.08 s, and for cadence 4.5 steps per minute. The OpenGo is valid and reliable for the measurement of temporal gait parameters during walking. Measurements of center of pressure parameters during unilateral stance are not considered valid. The OpenGo seems a promising instrument for clinically screening and monitoring temporal gait parameters in patients; however, validation in patient populations is needed. Copyright © 2016 Elsevier B.V. All rights reserved.
Choi, Yun Jeong; Jeoung, Jin Wook; Park, Ki Ho; Kim, Dong Myung
2016-03-01
To determine and validate the diagnostic ability of a linear discriminant function (LDF) based on retinal nerve fiber layer (RNFL) and ganglion cell-inner plexiform layer (GCIPL) thickness obtained using high-definition optical coherence tomography (Cirrus HD-OCT) for discriminating between healthy controls and early glaucoma subjects. We prospectively selected 214 healthy controls and 152 glaucoma subjects (teaching set) and another independent sample of 86 healthy controls and 71 glaucoma subjects (validating set). Two scans, including 1 macular and 1 peripapillary RNFL scan, were obtained. After calculating the LDF in the teaching set using binary logistic regression analysis, receiver operating characteristic curves were plotted and compared between the OCT-provided parameters and the LDF in the validating set. The proposed LDF was 16.529-(0.132×superior RNFL)-(0.064×inferior RNFL)+(0.039×12 o'clock RNFL)+(0.038×1 o'clock RNFL)+(0.084×superior GCIPL)-(0.144×minimum GCIPL). The highest area under the receiver operating characteristic (AUROC) curve was obtained for the LDF in both sets (AUROC=0.95 and 0.96). In the validating set, the LDF showed a significantly higher AUROC than the best RNFL parameter (inferior RNFL=0.91) and the best GCIPL parameter (minimum GCIPL=0.88). The LDF yielded a sensitivity of 93.0% at a fixed specificity of 85.0%. The LDF showed better diagnostic ability for differentiating between healthy and early glaucoma subjects than individual OCT parameters. A classification algorithm based on the LDF can be used in OCT analysis for glaucoma diagnosis.
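Since the abstract gives the LDF coefficients explicitly, the score is straightforward to reproduce; a minimal Python sketch follows (the example thickness values in μm are hypothetical, and the decision threshold would come from the ROC analysis rather than from this code):

```python
def glaucoma_ldf(sup_rnfl, inf_rnfl, rnfl_12, rnfl_1, sup_gcipl, min_gcipl):
    """Linear discriminant function from the abstract; thicknesses in micrometers."""
    return (16.529
            - 0.132 * sup_rnfl
            - 0.064 * inf_rnfl
            + 0.039 * rnfl_12
            + 0.038 * rnfl_1
            + 0.084 * sup_gcipl
            - 0.144 * min_gcipl)

# Hypothetical example thicknesses; the cutoff on the score would be chosen
# from the ROC curve, e.g. at the reported fixed 85% specificity.
score = glaucoma_ldf(110.0, 120.0, 95.0, 90.0, 82.0, 78.0)
print(f"LDF score: {score:.3f}")
```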
Methods of Optimizing X-Ray Optical Prescriptions for Wide-Field Applications
NASA Technical Reports Server (NTRS)
Elsner, R. F.; O'Dell, S. L.; Ramsey, B. D.; Weisskopf, M. C.
2010-01-01
We are working on the development of a method for optimizing wide-field x-ray telescope mirror prescriptions, including polynomial coefficients, mirror shell relative displacements, and (assuming 4 focal plane detectors) detector placement and tilt, that does not require a search through the multi-dimensional parameter space. Under the assumption that the parameters are small enough that second order expansions are valid, we show that the performance at the detector surface can be expressed as a quadratic function of the parameters with numerical coefficients derived from a ray trace through the underlying Wolter I optic. The best values for the parameters are found by solving the linear system of equations created by setting the derivatives of this function with respect to each parameter to zero. We describe the present status of this development effort.
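As a sketch of the idea, assume a quadratic performance model f(p) = c + gᵀp + ½pᵀHp whose coefficients would, in the real method, come from the ray trace; setting ∇f = g + Hp = 0 reduces the optimization to one linear solve. The numbers below are placeholders, not traced values:

```python
import numpy as np

# Quadratic model of performance at the detector surface: f(p) = c + g.p + 0.5 p^T H p.
# In the described method, g and H come from ray tracing through the Wolter I optic;
# here they are random placeholders for six hypothetical prescription parameters.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
H = A @ A.T + 6 * np.eye(6)   # symmetric positive definite Hessian
g = rng.standard_normal(6)     # gradient coefficients

# df/dp = g + H p = 0  =>  solve H p* = -g for the optimal parameters.
p_star = np.linalg.solve(H, -g)
print("optimal parameters:", p_star)
```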
Efthimiou, George C; Bartzis, John G; Berbekar, Eva; Hertwig, Denise; Harms, Frank; Leitl, Bernd
2015-06-26
The capability to predict short-term maximum individual exposure is very important for several applications including, for example, deliberate/accidental release of hazardous substances, odour fluctuations or material flammability level exceedance. Recently, the authors proposed a simple approach relating maximum individual exposure to parameters such as the fluctuation intensity and the concentration integral time scale. In the first part of this study (Part I), the methodology was validated against field measurements, which are governed by the natural variability of atmospheric boundary conditions. In Part II of this study, an in-depth validation of the approach is performed using reference data recorded under truly stationary and well documented flow conditions. For this reason, a boundary-layer wind-tunnel experiment was used. The experimental dataset includes 196 time-resolved concentration measurements which detect the dispersion from a continuous point source within an urban model of semi-idealized complexity. The data analysis allowed the improvement of an important model parameter. The model performed very well in predicting the maximum individual exposure, with 95% of predictions falling within a factor of two of observations. For large time intervals, an exponential correction term has been introduced in the model based on the experimental observations. The new model is capable of predicting all time intervals, with 100% of predictions falling within a factor of two of observations.
Labyrinth Seal Flutter Analysis and Test Validation in Support of Robust Rocket Engine Design
NASA Technical Reports Server (NTRS)
El-Aini, Yehia; Park, John; Frady, Greg; Nesman, Tom
2010-01-01
High energy-density turbomachines, like the SSME turbopumps, utilize labyrinth seals, also referred to as knife-edge seals, to control leakage flow. The pressure drop for such seals is an order of magnitude higher than for comparable jet engine seals. This is aggravated by the requirement of tight clearances, resulting in possible unfavorable fluid-structure interaction of the seal system (seal flutter). To demonstrate these characteristics, a benchmark case of a High Pressure Oxygen Turbopump (HPOTP) outlet labyrinth seal was studied in detail. First, an analytical assessment of the seal stability was conducted using a Pratt & Whitney legacy seal flutter code. Sensitivity parameters including pressure drop, rotor-to-stator running clearances and cavity volumes were examined and modeling strategies established. Second, a concurrent experimental investigation was undertaken to validate the stability of the seal at the equivalent operating conditions of the pump. Actual pump hardware was used to construct the test rig, also referred to as the flutter rig. The flutter rig did not include rotational effects or temperature. However, the use of hydrogen gas at high inlet pressure provided a good representation of the critical parameters affecting flutter, especially the speed of sound. The flutter code predictions showed consistent trends in good agreement with the experimental data. The rig test program produced a stability threshold empirical parameter that separated operation with and without flutter. This empirical parameter was used to establish the seal build clearances to avoid flutter while providing the required cooling flow metering. The calibrated flutter code along with the empirical flutter parameter was used to redesign the baseline seal, resulting in a flutter-free robust configuration. Provisions for incorporation of mechanical damping devices were introduced in the redesigned seal to ensure added robustness.
Validity of the Kinect for Gait Assessment: A Focused Review
Springer, Shmuel; Yogev Seligmann, Galit
2016-01-01
Gait analysis may enhance clinical practice. However, its use is limited due to the need for expensive equipment which is not always available in clinical settings. Recent evidence suggests that Microsoft Kinect may provide a low cost gait analysis method. The purpose of this report is to critically evaluate the literature describing the concurrent validity of using the Kinect as a gait analysis instrument. An online search of PubMed, CINAHL, and ProQuest databases was performed. Included were studies in which walking was assessed with the Kinect and another gold standard device, and consisted of at least one numerical finding of spatiotemporal or kinematic measures. Our search identified 366 papers, from which 12 relevant studies were retrieved. The results demonstrate that the Kinect is valid only for some spatiotemporal gait parameters. Although the kinematic parameters measured by the Kinect followed the trend of the joint trajectories, they showed poor validity and large errors. In conclusion, the Kinect may have the potential to be used as a tool for measuring spatiotemporal aspects of gait, yet standardized methods should be established, and future examinations with both healthy subjects and clinical participants are required in order to integrate the Kinect as a clinical gait analysis tool. PMID:26861323
Predicting non-melanoma skin cancer via a multi-parameterized artificial neural network.
Roffman, David; Hart, Gregory; Girardi, Michael; Ko, Christine J; Deng, Jun
2018-01-26
Ultraviolet radiation (UVR) exposure and family history are major associated risk factors for the development of non-melanoma skin cancer (NMSC). The objective of this study was to develop and validate a multi-parameterized artificial neural network based on available personal health information for early detection of NMSC with high sensitivity and specificity, even in the absence of known UVR exposure and family history. The 1997-2015 NHIS adult survey data used to train and validate our neural network (NN) comprised 2,056 NMSC and 460,574 non-cancer cases. We extracted 13 parameters for our NN: gender, age, BMI, diabetic status, smoking status, emphysema, asthma, race, Hispanic ethnicity, hypertension, heart diseases, vigorous exercise habits, and history of stroke. This study yielded an area under the ROC curve of 0.81 and 0.81 for training and validation, respectively. Our results (training sensitivity 88.5% and specificity 62.2%, validation sensitivity 86.2% and specificity 62.7%) were comparable to a previous study of basal and squamous cell carcinoma prediction that also included UVR exposure and family history information. These results indicate that our NN is robust enough to make predictions, suggesting that we have identified novel associations and potential predictive parameters of NMSC.
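A hedged sketch of this kind of setup, with synthetic data standing in for the NHIS records and scikit-learn's MLP as a generic stand-in for the authors' network (architecture and all values are assumptions, not the published model):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the 13 personal-health inputs (gender, age, BMI, ...).
rng = np.random.default_rng(42)
X = rng.standard_normal((5000, 13))
# Rare-positive labels loosely mimicking the heavy class imbalance in the survey data.
y = (X[:, 1] + 0.5 * X[:, 2] + rng.standard_normal(5000) > 2).astype(int)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0)
net.fit(X_tr, y_tr)
print("validation AUC:", roc_auc_score(y_va, net.predict_proba(X_va)[:, 1]))
```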
A validation procedure for a LADAR system radiometric simulation model
NASA Astrophysics Data System (ADS)
Leishman, Brad; Budge, Scott; Pack, Robert
2007-04-01
The USU LadarSIM software package is a ladar system engineering tool that has recently been enhanced to include modeling of the radiometry of ladar beam footprints. This paper discusses our validation of the radiometric model and presents a practical approach to future validation work. In order to validate complicated and interrelated factors affecting radiometry, a systematic approach had to be developed. Data for known parameters were first gathered, and then unknown parameters of the system were determined from simulation test scenarios. This was done in a way that isolated as many unknown variables as possible, building on the previously obtained results. First, the appropriate voltage threshold levels of the discrimination electronics were set by analyzing the number of false alarms seen in actual data sets. With this threshold set, the system noise was then adjusted to achieve the appropriate number of dropouts. Once a suitable noise level was found, the range errors of the simulated and actual data sets were compared and studied. Predicted errors in range measurements were analyzed using two methods: first by examining the range error of a surface with known reflectivity, and second by examining the range errors for specific detectors with known responsivities. This provided insight into the discrimination method and receiver electronics used in the actual system.
Prabhu, David; Mehanna, Emile; Gargesha, Madhusudhana; Brandt, Eric; Wen, Di; van Ditzhuijzen, Nienke S; Chamie, Daniel; Yamamoto, Hirosada; Fujino, Yusuke; Alian, Ali; Patel, Jaymin; Costa, Marco; Bezerra, Hiram G; Wilson, David L
2016-04-01
Evidence suggests high-resolution, high-contrast, [Formula: see text] intravascular optical coherence tomography (IVOCT) can distinguish plaque types, but further validation is needed, especially for automated plaque characterization. We developed experimental and three-dimensional (3-D) registration methods to provide validation of IVOCT pullback volumes using microscopic, color, and fluorescent cryo-image volumes with optional registered cryo-histology. A specialized registration method matched IVOCT pullback images acquired in the catheter reference frame to a true 3-D cryo-image volume. Briefly, an 11-parameter registration model including a polynomial virtual catheter was initialized within the cryo-image volume, and perpendicular images were extracted, mimicking IVOCT image acquisition. Virtual catheter parameters were optimized to maximize cryo and IVOCT lumen overlap. Multiple assessments suggested that the registration error was better than the [Formula: see text] spacing between IVOCT image frames. Tests on a digital synthetic phantom gave a registration error of only [Formula: see text] (signed distance). Visual assessment of randomly presented nearby frames suggested registration accuracy within 1 IVOCT frame interval ([Formula: see text]). This would eliminate potential misinterpretations confronted by the typical histological approaches to validation, with estimated 1-mm errors. The method can be used to create annotated datasets and automated plaque classification methods and can be extended to other intravascular imaging modalities.
A Novel Protocol for Model Calibration in Biological Wastewater Treatment
Zhu, Ao; Guo, Jianhua; Ni, Bing-Jie; Wang, Shuying; Yang, Qing; Peng, Yongzhen
2015-01-01
Activated sludge models (ASMs) have been widely used for process design, operation and optimization in wastewater treatment plants. However, it is still a challenge to achieve an efficient calibration for reliable application using conventional approaches. Here, we propose a novel calibration protocol, the Numerical Optimal Approaching Procedure (NOAP), for the systematic calibration of ASMs. The NOAP consists of three key steps in an iterative scheme flow: i) global factors sensitivity analysis for factors fixing; ii) pseudo-global parameter correlation analysis for non-identifiable factors detection; and iii) formation of a parameter subset through estimation using a genetic algorithm. The validity and applicability are confirmed using experimental data obtained from two independent wastewater treatment systems, including a sequencing batch reactor and a continuous stirred-tank reactor. The results indicate that the NOAP can effectively determine the optimal parameter subset and successfully perform model calibration and validation for these two different systems. The proposed NOAP is expected to be used for automatic calibration of ASMs and can potentially be applied to other ordinary differential equation models. PMID:25682959
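Step (iii) can be illustrated with a toy problem: an evolutionary global search (here scipy's differential evolution, standing in for the genetic algorithm) fitting a two-parameter first-order response; the model, data, and bounds are all hypothetical:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical observations of a first-order process y(t) = a*(1 - exp(-k*t)).
t = np.linspace(0, 10, 50)
observed = 4.0 * (1 - np.exp(-0.6 * t)) + np.random.default_rng(1).normal(0, 0.05, t.size)

def sse(theta):
    """Sum of squared errors between model and data for theta = (a, k)."""
    a, k = theta
    return np.sum((a * (1 - np.exp(-k * t)) - observed) ** 2)

# Evolutionary search over bounds for the identifiable parameter subset.
result = differential_evolution(sse, bounds=[(0.1, 10.0), (0.01, 2.0)], seed=0)
print("calibrated parameters:", result.x)
```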
2013-01-01
Background: Our previous model of the non-isometric muscle fatigue that occurs during repetitive functional electrical stimulation included models of force, motion, and fatigue and accounted for applied load but not stimulation pulse duration. Our objectives were to: 1) further develop, 2) validate, and 3) present outcome measures for a non-isometric fatigue model that can predict the effect of a range of pulse durations on muscle fatigue. Methods: A computer-controlled stimulator sent electrical pulses to electrodes on the thighs of 25 able-bodied human subjects. Isometric and non-isometric non-fatiguing and fatiguing knee torques and/or angles were measured. Pulse duration (170–600 μs) was the independent variable. Measurements were divided into parameter identification and model validation subsets. Results: The fatigue model was simplified by removing two of three non-isometric parameters. The third remained a function of other model parameters. Between 66% and 77% of the variability in the angle measurements was explained by the new model. Conclusion: Muscle fatigue in response to different stimulation pulse durations can be predicted during non-isometric repetitive contractions. PMID:23374142
Progress in Validation of Wind-US for Ramjet/Scramjet Combustion
NASA Technical Reports Server (NTRS)
Engblom, William A.; Frate, Franco C.; Nelson, Chris C.
2005-01-01
Validation of the Wind-US flow solver against two sets of experimental data involving high-speed combustion is attempted. First, the well-known Burrows-Kurkov supersonic hydrogen-air combustion test case is simulated, and the sensitivity of ignition location and combustion performance to key parameters is explored. Second, a numerical model is developed for simulation of an X-43B candidate, full-scale, JP-7-fueled, internal flowpath operating in ramjet mode. Numerical results using an ethylene-air chemical kinetics model are directly compared against previously existing pressure-distribution data along the entire flowpath, obtained in direct-connect testing conducted at NASA Langley Research Center. Comparisons to derived quantities such as burn efficiency and thermal throat location are also made. Reasonable to excellent agreement with experimental data is demonstrated for key parameters in both simulation efforts. Additional Wind-US features needed to improve simulation efforts are described herein, including maintaining stagnation conditions at inflow boundaries for multi-species flow. An open issue regarding the sensitivity of isolator unstart to key model parameters is briefly discussed.
Scalability Analysis and Use of Compression at the Goddard DAAC and End-to-End MODIS Transfers
NASA Technical Reports Server (NTRS)
Menasce, Daniel A.
1998-01-01
The goal of this task is to analyze the performance of single and multiple FTP transfers between SCFs and the Goddard DAAC. We developed an analytic model to compute the performance of FTP sessions as a function of various key parameters, implemented the model as a program called FTP Analyzer, and carried out validations with real data obtained by running single and multiple FTP transfers between GSFC and the Miami SCF. The input parameters to the model include the mix of FTP sessions (scenario) and, for each FTP session, the file size. The network parameters include the round trip time, packet loss rate, the limiting bandwidth of the network connecting the SCF to a DAAC, TCP's basic timeout, TCP's Maximum Segment Size, and TCP's Maximum Receiver's Window Size. The modeling approach used consisted of modeling TCP's overall throughput, computing TCP's delay per FTP transfer, and then solving a queuing network model that includes the FTP clients and servers.
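The paper's analytic model is not reproduced here, but the classic Mathis et al. throughput bound uses the same inputs (MSS, RTT, loss rate, window limit) and illustrates the style of TCP throughput modeling described; treat the sketch below as an illustrative stand-in, not the FTP Analyzer model:

```python
def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate, max_window_bytes, c=1.22):
    """Illustrative TCP throughput bound (Mathis et al. form), not the paper's model.

    Steady-state rate is limited either by the receiver window or by
    congestion-avoidance behavior under random packet loss.
    """
    window_limited = max_window_bytes / rtt_s
    loss_limited = (mss_bytes * c) / (rtt_s * loss_rate ** 0.5)
    return 8 * min(window_limited, loss_limited)  # bits per second

# Hypothetical WAN path: 1460-byte MSS, 60 ms RTT, 0.5% loss, 64 KB window.
print(tcp_throughput_bps(1460, 0.060, 0.005, 65535))
```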
Large scale study of multiple-molecule queries
2009-01-01
Background: In ligand-based screening, as well as in other chemoinformatics applications, one seeks to effectively search large repositories of molecules in order to retrieve molecules that are similar, typically to a single molecule lead. However, in some cases, multiple molecules from the same family are available to seed the query and search for other members of the same family. Multiple-molecule query methods have been less studied than single-molecule query methods. Furthermore, previous studies have relied on proprietary data and sometimes have not used proper cross-validation methods to assess the results. In contrast, here we develop and compare multiple-molecule query methods using several large publicly available data sets and background data sets. We also create a framework based on a strict cross-validation protocol to allow unbiased benchmarking for direct comparison in future studies across several performance metrics. Results: Fourteen different multiple-molecule query methods were defined and benchmarked using: (1) 41 publicly available data sets of related molecules with similar biological activity; and (2) publicly available background data sets consisting of up to 175,000 molecules randomly extracted from the ChemDB database and other sources. Eight of the fourteen methods were parameter free, and six of them fit one or two free parameters to the data using a careful cross-validation protocol. All the methods were assessed and compared for their ability to retrieve members of the same family against the background data set using several performance metrics, including the Area Under the Accumulation Curve (AUAC), Area Under the Curve (AUC), F1-measure, and BEDROC metrics. Consistent with the previous literature, the best parameter-free methods are the MAX-SIM and MIN-RANK methods, which score a molecule to a family by the maximum similarity, or minimum ranking, obtained across the family. One new parameterized method introduced in this study and two previously defined methods, the Exponential Tanimoto Discriminant (ETD), the Tanimoto Power Discriminant (TPD), and the Binary Kernel Discriminant (BKD), outperform most other methods but are more complex, requiring one or two parameters to be fit to the data. Conclusion: Fourteen methods for multiple-molecule querying of chemical databases, including the novel methods ETD and TPD, are validated using publicly available data sets, standard cross-validation protocols, and established metrics. The best results are obtained with ETD, TPD, BKD, MAX-SIM, and MIN-RANK. These results can be replicated and compared with the results of future studies using data freely downloadable from http://cdb.ics.uci.edu/. PMID:20298525
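The parameter-free MAX-SIM rule is simple enough to state in a few lines; a minimal sketch using Tanimoto similarity over binary fingerprints represented as sets of on-bits (the fingerprints below are hypothetical):

```python
def tanimoto(a: set, b: set) -> float:
    """Tanimoto (Jaccard) similarity between two binary fingerprints given as sets of on-bits."""
    return len(a & b) / len(a | b) if a or b else 0.0

def max_sim(query_family, candidate):
    """MAX-SIM: score a candidate by its maximum similarity to any family member."""
    return max(tanimoto(member, candidate) for member in query_family)

# Hypothetical fingerprints for a two-member query family and one candidate.
family = [{1, 4, 9, 16}, {1, 4, 10, 16, 23}]
print(max_sim(family, {1, 4, 9, 15, 16}))
```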
Jastrzembski, Tiffany S.; Charness, Neil
2009-01-01
The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20; Mage = 20) and older (N = 20; Mage = 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced equivalent fits to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that estimated older adult information processing parameters are valid for modeling purposes, can help designers understand age-related performance using existing interfaces, and may support the development of age-sensitive technologies. PMID:18194048
Challenges in Rotorcraft Acoustic Flight Prediction and Validation
NASA Technical Reports Server (NTRS)
Boyd, D. Douglas, Jr.
2003-01-01
Challenges associated with rotorcraft acoustic flight prediction and validation are examined. First, an outline of a state-of-the-art rotorcraft aeroacoustic prediction methodology is presented. Components including rotorcraft aeromechanics, high resolution reconstruction, and rotorcraft acoustic prediction are discussed. Next, to illustrate the challenges and issues involved, a case study is presented in which an analysis of flight data from a specific XV-15 tiltrotor acoustic flight test is discussed in detail. Issues related to validation of methodologies using flight test data are discussed. Primary flight parameters such as velocity, altitude, and attitude are discussed and compared for repeated flight conditions. Other measured steady state flight conditions are examined for consistency and steadiness. A representative example prediction is presented and suggestions are made for future research.
Zhou, Weichen; Ma, Yanyun; Zhang, Jun; Hu, Jingyi; Zhang, Menghan; Wang, Yi; Li, Yi; Wu, Lijun; Pan, Yida; Zhang, Yitong; Zhang, Xiaonan; Zhang, Xinxin; Zhang, Zhanqing; Zhang, Jiming; Li, Hai; Lu, Lungen; Jin, Li; Wang, Jiucun; Yuan, Zhenghong; Liu, Jie
2017-11-01
Liver biopsy is the gold standard for assessing pathological features (e.g. inflammation grades) in hepatitis B virus-infected patients, although it is invasive and traumatic; meanwhile, several gene profiles of chronic hepatitis B (CHB) have been separately described in relatively small hepatitis B virus (HBV)-infected samples. We aimed to analyse correlations among inflammation grades, gene expression and clinical parameters (serum alanine aminotransferase, aspartate aminotransferase and HBV-DNA) in large-scale CHB samples, and to predict inflammation grades using clinical parameters and/or gene expression. We analysed gene expression together with the three clinical parameters in 122 CHB samples using an improved regression model. Principal component analysis and machine-learning methods, including Random Forest, K-nearest neighbour and support vector machines, were used for analysis and further diagnostic modeling. Six normal samples were used to validate the predictive model. Significant genes related to clinical parameters were found to be enriched in the immune system, interferon-stimulated genes, regulation of cytokine production, anti-apoptosis, and so on. A panel of these genes combined with clinical parameters can effectively predict binary classifications of inflammation grade (area under the ROC curve [AUC]: 0.88, 95% confidence interval [CI]: 0.77-0.93), as validated using the normal samples. A panel with only clinical parameters was also valuable (AUC: 0.78, 95% CI: 0.65-0.86), indicating that a liquid biopsy method for detecting the pathology of CHB is possible. This is the first study to systematically elucidate the relationships among gene expression, clinical parameters and pathological inflammation grades in CHB, and to build models predicting inflammation grades from gene expression and/or clinical parameters. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
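A hedged sketch of this classification setup, using a random forest (one of the methods named above) on synthetic stand-ins for the clinical parameters and a hypothetical gene panel; feature counts and effect sizes are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 3 clinical parameters (ALT, AST, log HBV-DNA) plus a
# hypothetical panel of 20 gene-expression features per sample.
rng = np.random.default_rng(7)
X = rng.standard_normal((200, 23))
y = (X[:, 0] + 0.8 * X[:, 3] + rng.standard_normal(200) > 0).astype(int)  # binary grade

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("cross-validated AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```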
Mejlholm, Ole; Dalgaard, Paw
2013-10-15
A new and extensive growth and growth boundary model for psychrotolerant Lactobacillus spp. was developed and validated for processed and unprocessed products of seafood and meat. The new model was developed by refitting and expanding an existing cardinal parameter model for growth and the growth boundary of lactic acid bacteria (LAB) in processed seafood (O. Mejlholm and P. Dalgaard, J. Food Prot. 70:2485-2497, 2007). Initially, to estimate values for the maximum specific growth rate at the reference temperature of 25 °C (μref) and the theoretical minimum temperature that prevents growth of psychrotolerant LAB (Tmin), the existing LAB model was refitted to data from experiments with seafood and meat products reported not to include nitrite or any of the four organic acids evaluated in the present study. Next, dimensionless terms modelling the antimicrobial effect of nitrite, and of acetic, benzoic, citric and sorbic acids, on growth of Lactobacillus sakei were added to the refitted model, together with minimum inhibitory concentrations determined for the five environmental parameters. The new model, including the effect of 12 environmental parameters as well as their interactive effects, was successfully validated using 229 growth rates (μmax values) for psychrotolerant Lactobacillus spp. in seafood and meat products. Average bias and accuracy factor values of 1.08 and 1.27, respectively, were obtained when observed and predicted μmax values of psychrotolerant Lactobacillus spp. were compared. Thus, on average, μmax values were only overestimated by 8%. The performance of the new model was equally good for seafood and meat products, and the importance of including the effect of acetic, benzoic, citric and sorbic acids, and to a lesser extent nitrite, in order to accurately predict growth of psychrotolerant Lactobacillus spp. was clearly demonstrated. The new model can be used to predict growth of psychrotolerant Lactobacillus spp. in seafood and meat products; e.g. prediction of the time to a critical cell concentration of bacteria is considered useful for establishing shelf life. In addition, the high number of environmental parameters included in the new model makes it flexible and suitable for product development, as the effect of substituting one combination of preservatives with another can be predicted. In general, the performance of the new model was unacceptable for other types of LAB including Carnobacterium spp., Leuconostoc spp. and Weissella spp. © 2013.
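Cardinal parameter models of this kind typically multiply a reference rate μref by dimensionless gamma terms, with each inhibitor's term vanishing at its MIC. The sketch below shows that structure only; the specific term forms and all parameter values are assumptions for illustration, not the published model:

```python
def mu_max(T, conc, mic, mu_ref=0.77, T_ref=25.0, T_min=-5.0):
    """Gamma-style cardinal parameter growth rate (illustrative form only).

    T: temperature (deg C); conc/mic: dicts of inhibitor concentrations
    (e.g. undissociated organic acids, nitrite) and their MICs.
    """
    gamma_T = ((T - T_min) / (T_ref - T_min)) ** 2          # square-root-type temperature term
    gamma_inhib = 1.0
    for name, c in conc.items():
        gamma_inhib *= max(0.0, 1.0 - c / mic[name])        # each term reaches 0 at its MIC
    return mu_ref * gamma_T * gamma_inhib

# Hypothetical chilled product at 8 degrees C with two organic acids present.
print(mu_max(8.0, {"acetic": 2.0, "benzoic": 0.1}, {"acetic": 10.3, "benzoic": 0.35}))
```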
Knowledge transmission model with differing initial transmission and retransmission process
NASA Astrophysics Data System (ADS)
Wang, Haiying; Wang, Jun; Small, Michael
2018-10-01
Knowledge transmission is a cyclic dynamic diffusion process. The rate of acceptance of knowledge differs depending on whether or not the recipient has previously held the knowledge. In this paper, the knowledge transmission process is divided into an initial and a retransmission procedure, each with its own transmission and self-learning parameters. Based on an epidemic spreading model, we propose a naive-evangelical-agnostic (VEA) knowledge transmission model and derive mean-field equations to describe the dynamics of knowledge transmission in homogeneous networks. Theoretical analysis identifies a criterion for the persistence of knowledge: the reproduction number R0 depends on the smaller of the effective parameters of the initial and retransmission processes. Moreover, the final size of evangelical individuals is related only to the retransmission process parameters. Numerical simulations validate the theoretical analysis. Furthermore, the simulations indicate that increasing the initial transmission parameters, including the first-transmission and self-learning rates of naive individuals, can accelerate the velocity of knowledge transmission efficiently but has no effect on the final size of evangelical individuals. In contrast, the retransmission parameters, including the retransmission and self-learning rates of agnostic individuals, have a significant effect on the rate of knowledge transmission, i.e., the larger these parameters, the greater the final density of evangelical individuals.
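As an illustration of the mean-field approach, a plausible SIR-like V-E-A system with separate initial (b1, s1) and retransmission (b2, s2) rates can be integrated numerically; the functional form below is an assumption for illustration, not the paper's published equations:

```python
from scipy.integrate import solve_ivp

def vea(t, y, b1, s1, b2, s2, d):
    """Illustrative SIR-like mean-field system for naive (V), evangelical (E),
    agnostic (A) fractions; the exact published equations may differ."""
    V, E, A = y
    dV = -b1 * V * E - s1 * V                                # naive: first transmission + self-learning
    dE = b1 * V * E + s1 * V + b2 * A * E + s2 * A - d * E   # evangelical: gains from both processes
    dA = d * E - b2 * A * E - s2 * A                         # agnostic: retransmission + self-learning
    return [dV, dE, dA]

# Hypothetical rates; initial state: 99% naive, 1% evangelical.
sol = solve_ivp(vea, (0, 50), [0.99, 0.01, 0.0], args=(0.5, 0.02, 0.3, 0.01, 0.2))
print("final evangelical fraction:", sol.y[1, -1])
```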
Lumped-parameters equivalent circuit for condenser microphones modeling.
Esteves, Josué; Rufer, Libor; Ekeom, Didace; Basrour, Skandar
2017-10-01
This work presents a lumped parameters equivalent model of condenser microphone based on analogies between acoustic, mechanical, fluidic, and electrical domains. Parameters of the model were determined mainly through analytical relations and/or finite element method (FEM) simulations. Special attention was paid to the air gap modeling and to the use of proper boundary condition. Corresponding lumped-parameters were obtained as results of FEM simulations. Because of its simplicity, the model allows a fast simulation and is readily usable for microphone design. This work shows the validation of the equivalent circuit on three real cases of capacitive microphones, including both traditional and Micro-Electro-Mechanical Systems structures. In all cases, it has been demonstrated that the sensitivity and other related data obtained from the equivalent circuit are in very good agreement with available measurement data.
A review of the solar array manufacturing industry costing standards
NASA Technical Reports Server (NTRS)
1977-01-01
The solar array manufacturing industry costing standards model is designed to compare the cost of producing solar arrays using alternative manufacturing processes. Constructive criticism of the methodology used is intended to enhance its implementation as a practical design tool. Three main elements of the procedure include workbook format and presentation, theoretical model validity and standard financial parameters.
Global Reference Atmosphere Model (GRAM)
NASA Technical Reports Server (NTRS)
Woodrum, A. W.
1989-01-01
GRAM is a series of four-dimensional atmospheric models validated by years of data. The original GRAM program is still available. More current are GRAM 86, which includes atmospheric data from 1986 and runs on the DEC VAX, and GRAM 88, which runs on the IBM 3084. The program generates altitude profiles of atmospheric parameters along any simulated trajectory through the atmosphere, and is also useful for global circulation and diffusion studies.
Identification of Reading Problems in First Grade within a Response-to-Intervention Framework
ERIC Educational Resources Information Center
Speece, Deborah L.; Schatschneider, Christopher; Silverman, Rebecca; Case, Lisa Pericola; Cooper, David H.; Jacobs, Dawn M.
2011-01-01
Models of Response to Intervention (RTI) include parameters of assessment and instruction. This study focuses on assessment with the purpose of developing a screening battery that validly and efficiently identifies first-grade children at risk for reading problems. In an RTI model, these children would be candidates for early intervention. We…
An Algorithm and R Program for Fitting and Simulation of Pharmacokinetic and Pharmacodynamic Data.
Li, Jijie; Yan, Kewei; Hou, Lisha; Du, Xudong; Zhu, Ping; Zheng, Li; Zhu, Cairong
2017-06-01
Pharmacokinetic/pharmacodynamic link models are widely used in dose-finding studies. By applying such models, the results of initial pharmacokinetic/pharmacodynamic studies can be used to predict the potential therapeutic dose range. This knowledge can improve the design of later comparative large-scale clinical trials by reducing the number of participants and saving time and resources. However, the modeling process can be challenging, time consuming, and costly, even when using cutting-edge, powerful pharmacological software. Here, we provide a freely available R program for expediently analyzing pharmacokinetic/pharmacodynamic data, including data importation, parameter estimation, simulation, and model diagnostics. First, we explain the theory related to the establishment of the pharmacokinetic/pharmacodynamic link model. Subsequently, we present the algorithms used for parameter estimation and potential therapeutic dose computation. The implementation of the R program is illustrated by a clinical example. The software package is then validated by comparing the model parameters and the goodness-of-fit statistics generated by our R package with those generated by the widely used pharmacological software WinNonlin. The pharmacokinetic and pharmacodynamic parameters as well as the potential recommended therapeutic dose can be acquired with the R package. The validation process shows that the parameters estimated using our package are satisfactory. The R program developed and presented here provides pharmacokinetic researchers with a simple and easy-to-access tool for pharmacokinetic/pharmacodynamic analysis on personal computers.
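A minimal sketch of a PK/PD link model of the kind such programs fit: a one-compartment IV bolus driving an effect compartment with an Emax response. Python is used here for consistency with the other sketches, and all parameter values are hypothetical; this is not the R package's implementation:

```python
import numpy as np

def pkpd_link(t, dose, V, ke, ke0, emax, ec50):
    """One-compartment IV bolus PK driving an effect compartment with an Emax PD model.

    Cp(t) = (dose/V) * exp(-ke*t); the effect-site concentration Ce follows
    dCe/dt = ke0*(Cp - Ce), whose closed form for a bolus is used below (ke0 != ke).
    """
    cp = (dose / V) * np.exp(-ke * t)
    ce = (dose / V) * ke0 / (ke0 - ke) * (np.exp(-ke * t) - np.exp(-ke0 * t))
    effect = emax * ce / (ec50 + ce)
    return cp, ce, effect

t = np.linspace(0.0, 24.0, 100)
cp, ce, eff = pkpd_link(t, dose=100.0, V=20.0, ke=0.2, ke0=0.5, emax=1.0, ec50=1.0)
print("peak effect:", eff.max())
```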
NASA Astrophysics Data System (ADS)
Desai, C. S.; Sane, S. M.; Jenson, J. W.; Contractor, D. N.; Carlson, A. E.; Clark, P. U.
2006-12-01
This presentation, which is complementary to Part I (Jenson et al.), describes the application of the Disturbed State Concept (DSC) constitutive model to define the behavior of the deforming sediment (till) underlying glaciers and ice sheets. The DSC includes elastic, plastic, and creep strains, and microstructural changes leading to degradation, failure, and sometimes strengthening or healing. Here, we describe comprehensive laboratory experiments conducted on samples of two regionally significant tills deposited by the Laurentide Ice Sheet: the Tiskilwa Till and Sky Pilot Till. The tests are used to determine the parameters to calibrate the DSC model, which is validated with respect to the laboratory tests by comparing the predictions with test data used to find the parameters, and also comparing them with independent tests not used to find the parameters. Discussion of the results also includes comparison of the DSC model with the classical Mohr-Coulomb model, which has been commonly used for glacial tills. A numerical procedure based on finite element implementation of the DSC is used to simulate an idealized field problem, and its predictions are discussed. Based on these analyses, the unified DSC model is proposed to provide an improved model for subglacial tills compared to other models used commonly, and thus to provide the potential for improved predictions of ice sheet movements.
NASA Astrophysics Data System (ADS)
Astroza, Rodrigo; Ebrahimian, Hamed; Li, Yong; Conte, Joel P.
2017-09-01
A methodology is proposed to update mechanics-based nonlinear finite element (FE) models of civil structures subjected to unknown input excitation. The approach allows joint estimation of the unknown time-invariant parameters of a nonlinear FE model of the structure and the unknown time histories of the input excitations, using spatially sparse output response measurements recorded during an earthquake event. The unscented Kalman filter, which circumvents the computation of FE response sensitivities with respect to the unknown model parameters and unknown input excitations by using a deterministic sampling approach, is employed as the estimation tool. The use of measurement data obtained from arrays of heterogeneous sensors, including accelerometers, displacement sensors, and strain gauges, is investigated. Based on the estimated FE model parameters and input excitations, the updated nonlinear FE model can be interrogated to detect, localize, classify, and assess damage in the structure. Numerically simulated response data of a three-dimensional 4-story 2-by-1 bay steel frame structure with six unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, and a three-dimensional 5-story 2-by-1 bay reinforced concrete frame structure with nine unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, are used to illustrate and validate the proposed methodology. The results of the validation studies show the excellent performance and robustness of the proposed algorithm in jointly estimating the unknown FE model parameters and unknown input excitations.
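The key trick is to augment the state vector with the unknown parameters and propagate deterministic sigma points through the model, so no response sensitivities are needed. A minimal unscented-transform sketch on a toy one-step model (not the FE application; the model, parameter, and covariances are hypothetical):

```python
import numpy as np

def sigma_points(x, P, alpha=1.0, beta=2.0, kappa=0.0):
    """Merwe scaled sigma points and weights for mean x and covariance P."""
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)
    pts = np.vstack([x, x + S.T, x - S.T])          # 2n+1 points
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    return pts, wm, wc

# Augmented state: [response displacement u, unknown stiffness parameter k].
def f(x):
    """Toy nonlinear propagation standing in for one FE time step."""
    u, k = x
    return np.array([u + 0.01 * (-k * u**3), k])    # parameter modeled as a constant

x = np.array([1.0, 2.0])
P = np.diag([0.1, 0.5])
pts, wm, wc = sigma_points(x, P)
prop = np.array([f(p) for p in pts])                # no sensitivities required
x_pred = wm @ prop
P_pred = (wc[:, None] * (prop - x_pred)).T @ (prop - x_pred)
print("predicted mean:", x_pred)
```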
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rainina, Evguenia I.; McCune, D. E.; Luna, Maria L.
2012-05-31
The goal of this study was to validate the previously observed high biological kill performance of PAEROSOL, a semi-dry, micro-aerosol decontamination technology, against common healthcare-associated infections (HAIs) in a non-human-subject trial within a hospital setting at Madigan Army Medical Center (MAMC) on Joint Base Lewis-McChord in Tacoma, Washington. In addition to validating the disinfecting efficacy of PAEROSOL, the objectives of the trial included a demonstration of PAEROSOL environmental safety (i.e., impact on hospital interior materials and electronic equipment exposed during testing) and PAEROSOL parameter optimization for future deployment.
System and method for modeling and analyzing complex scenarios
Shevitz, Daniel Wolf
2013-04-09
An embodiment of the present invention includes a method for analyzing and solving a possibility tree. A possibility tree having a plurality of programmable nodes is constructed and solved with a solver module executed by a processor element. The solver module executes the programming of said nodes and tracks the state of at least one variable through a branch. When a variable of said branch is out of tolerance with a parameter, the solver disables the remaining nodes of the branch and marks the branch as an invalid solution. The valid solutions are then aggregated and displayed as valid tree solutions.
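A minimal sketch of the traversal logic described: each node's program updates the branch state, and a branch is abandoned the moment a tracked variable leaves its tolerance interval. The tree, programs, and tolerance below are hypothetical:

```python
def solve_tree(node, state, tolerances, solutions):
    """Depth-first evaluation of a possibility tree; a branch is abandoned as
    soon as a tracked variable leaves its tolerance interval."""
    state = node["program"](dict(state))             # each node updates the branch state
    for var, (lo, hi) in tolerances.items():
        if not lo <= state.get(var, lo) <= hi:
            return                                   # out of tolerance: disable remaining nodes
    if not node["children"]:
        solutions.append(state)                      # leaf reached: record a valid solution
        return
    for child in node["children"]:
        solve_tree(child, state, tolerances, solutions)

# Hypothetical two-branch tree tracking a single variable x with tolerance [0, 5].
tree = {"program": lambda s: {**s, "x": s["x"] + 1},
        "children": [
            {"program": lambda s: {**s, "x": s["x"] * 3}, "children": []},
            {"program": lambda s: {**s, "x": s["x"] - 2}, "children": []},
        ]}
valid = []
solve_tree(tree, {"x": 1}, {"x": (0, 5)}, valid)
print("valid tree solutions:", valid)
```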
A Formal Approach to Empirical Dynamic Model Optimization and Validation
NASA Technical Reports Server (NTRS)
Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.
2014-01-01
A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in the time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on the admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.
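The estimation step is a maximin problem: choose the parameters whose smallest requirement-compliance margin is largest. A hedged sketch with hypothetical margin functions (the real margins would come from the time- and frequency-domain validation criteria):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical requirement margins for a 2-parameter empirical model:
# margin_i(theta) >= 0 means requirement i is satisfied, with slack margin_i.
def margins(theta):
    a, b = theta
    return np.array([1.0 - abs(a - 0.5),        # time-domain error bound
                     1.2 - abs(b + 0.3),        # frequency-domain error bound
                     2.0 - (a**2 + b**2)])      # combined admissibility limit

# Maximize the smallest margin: theta* = argmax_theta min_i margin_i(theta).
# Nelder-Mead is used because the min() makes the objective nonsmooth.
res = minimize(lambda th: -margins(th).min(), x0=np.zeros(2), method="Nelder-Mead")
print("estimate:", res.x, "worst-case margin:", margins(res.x).min())
```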
2009-01-01
Background: Decisions about interim analysis and early stopping of clinical trials, as based on recommendations of Data Monitoring Committees (DMCs), have far-reaching consequences for the scientific validity and clinical impact of a trial. Our aim was to evaluate the frequency and quality of the reporting on DMC composition and roles, interim analysis and early termination in pediatric trials. Methods: We conducted a systematic review of randomized controlled clinical trials published from 2005 to 2007 in a sample of four general and four pediatric journals. We used full-text databases to identify trials which reported on DMCs, interim analysis or early termination, and included children or adolescents. Information was extracted on general trial characteristics, risk of bias, and a set of parameters regarding DMC composition and roles, interim analysis and early termination. Results: 110 of the 648 pediatric trials in this sample (17%) reported on DMCs, interim analysis or early stopping, and were included; 68 from general and 42 from pediatric journals. The presence of DMCs was reported in 89 of the 110 included trials (81%); 62 papers, including 46 of the 89 that reported on DMCs (52%), also presented information about interim analysis. No paper adequately reported all DMC parameters, and nine (15%) reported all interim analysis details. Of 32 trials which terminated early, 22 (69%) did not report predefined stopping guidelines and 15 (47%) did not provide information on statistical monitoring methods. Conclusions: Reporting on DMC composition and roles, on interim analysis results and on early termination of pediatric trials is incomplete and heterogeneous. We propose a minimal set of reporting parameters that will allow the reader to assess the validity of trial results. PMID:20003383
2012-06-27
Critical contributors to deviation include structural relaxation of the glass, thermal expansion of the molds, TRS, and viscoelastic behavior of the glass. In that article, glass was modeled as purely viscous and thermal expansion was accounted for with a constant coefficient of thermal expansion (CTE).
On the soft supersymmetry-breaking parameters in gauge-mediated models
NASA Astrophysics Data System (ADS)
Wagner, C. E. M.
1998-09-01
Gauge mediation of supersymmetry breaking in the observable sector is an attractive idea, which naturally alleviates the flavor changing neutral current problem of supersymmetric theories. Quite generally, however, the number and quantum numbers of the messengers are not known; nor is their characteristic mass scale determined by the theory. Using the recently proposed method to extract supersymmetry-breaking parameters from wave-function renormalization, we derive general formulae for the soft supersymmetry-breaking parameters in the observable sector, valid in the small and moderate tan β regimes, for the case of split messengers. The full leading-order effects of top Yukawa and gauge couplings on the soft supersymmetry-breaking parameters are included. We give a simple interpretation of the general formulae in terms of the renormalization group evolution of the soft supersymmetry-breaking parameters. As a by-product of this analysis, the one-loop renormalization group evolution of the soft supersymmetry-breaking parameters is obtained for arbitrary boundary conditions of the scalar and gaugino mass parameters at high energies.
NASA Astrophysics Data System (ADS)
Armante, Raymond; Scott, Noelle; Crevoisier, Cyril; Capelle, Virginie; Crepeau, Laurent; Jacquinet, Nicole; Chédin, Alain
2016-09-01
The quality of the spectroscopic parameters that serve as input to forward radiative transfer models is essential to fully exploit remote sensing of the Earth's atmosphere. However, the process of updating spectroscopic databases in order to provide users with a database that ensures an optimal characterization of the spectral properties of molecular absorption for radiative transfer modeling is challenging. Evaluating the databases' content and the underlying choices made by the managing team is thus a crucial step. Here, we introduce an original and powerful approach for evaluating spectroscopic parameters: the Spectroscopic Parameters And Radiative Transfer Evaluation (SPARTE) chain. The SPARTE chain relies on the comparison between forward radiative transfer simulations made by the 4A radiative transfer model and observed spectra, collocated over several thousand well-characterized atmospheric situations. Averaging the resulting 'calculated-observed' spectral residuals minimizes the random errors coming from both the radiometric noise of the instruments and the imperfect description of the atmospheric state. The SPARTE chain can be used to evaluate any spectroscopic database, from the visible to the microwave, using any type of remote sensing observation (ground-based, airborne or space-borne). We show that the comparison of the shape of the residuals enables: (i) identifying incorrect line parameters (line position, intensity, width, pressure shift, etc.), even for molecules for which interferences between the lines have to be taken into account; (ii) proposing revised values, in cooperation with contributing teams; and (iii) validating the final updated parameters. In particular, we show that the simultaneous availability of two databases such as GEISA and HITRAN helps identify remaining issues in each database. The SPARTE chain has been applied here to the validation of the GEISA-2015 update in two spectral regions of particular interest for several currently exploited or planned Earth space missions: the thermal infrared domain and the short-wave infrared domain, for which observations from the space-borne IASI instrument and from the ground-based FTS instruments at the Park Falls TCCON site are used, respectively. Main results include: (i) the validation of the positions and intensities of line parameters, with overall significantly lower residuals for GEISA-2015 than for GEISA-2011; and (ii) the validation of the choices made for parameters (such as pressure shift and air-broadened width) that were not given by the provider but completed by ourselves. For example, comparisons between residuals obtained with GEISA-2015 and HITRAN-2012 have highlighted a specific issue with some HWHM values in the latter that can be clearly identified in the 'calculated-observed' residuals.
Xiang, Junfeng; Xie, Lijing; Gao, Feinong; Zhang, Yu; Yi, Jie; Wang, Tao; Pang, Siqin; Wang, Xibin
2018-01-01
Discrepancies in capturing the material behavior of some materials, such as particulate-reinforced metal matrix composites, using the conventional ad hoc strategy challenge the applicability of the Johnson-Cook constitutive model. Despite such efforts, its extended formalism with more fitting parameters would increase the difficulty of identifying constitutive parameters. A weighted multi-objective strategy for identifying any constitutive formalism is developed to predict mechanical behavior in static and dynamic loading conditions equally well. The varying weights are based on a Gaussian-distributed noise evaluation of the experimentally obtained stress-strain data in quasi-static or dynamic mode. This universal method can be used to determine quickly and directly whether a constitutive formalism is suitable to describe the material's constitutive behavior by measuring goodness-of-fit. A quantitative comparison of different fitting strategies for identifying Al6063/SiCp's material parameters is made in terms of performance evaluation, including noise elimination, correlation, and reliability. Eventually, a three-dimensional (3D) FE model of small-hole drilling of Al6063/SiCp composites, using the multi-objective identified constitutive formalism, is developed. Comparison with the experimental observations of thrust force, torque, and chip morphology provides valid evidence of the applicability of the developed multi-objective identification strategy in identifying constitutive parameters. PMID:29324688
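A sketch of the noise-weighted fitting idea under stated assumptions: the simplified Johnson-Cook form below (no thermal softening term) and the crude per-dataset noise estimate are ours, chosen only to make the inverse-noise weighting concrete.

import numpy as np
from scipy.optimize import least_squares

def jc_stress(p, eps, rate, rate0=1e-3):
    # simplified Johnson-Cook flow stress
    A, B, n, C = p
    return (A + B * eps ** n) * (1.0 + C * np.log(rate / rate0))

def residuals(p, datasets):
    # datasets: list of (strain, strain_rate, stress) arrays, static and dynamic
    res = []
    for eps, rate, sigma in datasets:
        noise = np.std(np.diff(sigma)) or 1.0  # crude per-dataset noise level
        res.append((jc_stress(p, eps, rate) - sigma) / noise)  # inverse-noise weighting
    return np.concatenate(res)

def identify(datasets, p0=(300.0, 400.0, 0.3, 0.02)):
    return least_squares(residuals, np.asarray(p0), args=(datasets,)).x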
Validating Satellite-Retrieved Cloud Properties for Weather and Climate Applications
NASA Astrophysics Data System (ADS)
Minnis, P.; Bedka, K. M.; Smith, W., Jr.; Yost, C. R.; Bedka, S. T.; Palikonda, R.; Spangenberg, D.; Sun-Mack, S.; Trepte, Q.; Dong, X.; Xi, B.
2014-12-01
Cloud properties determined from satellite imager radiances are increasingly used in weather and climate applications, particularly in nowcasting, model assimilation and validation, trend monitoring, and precipitation and radiation analyses. The value of using the satellite-derived cloud parameters is determined by the accuracy of the particular parameter for a given set of conditions, such as viewing and illumination angles, surface background, and cloud type and structure. Because of the great variety of those conditions and of the sensors used to monitor clouds, determining the accuracy or uncertainties in the retrieved cloud parameters is a daunting task. Sensitivity studies of the retrieved parameters to the various inputs for a particular cloud type are helpful for understanding the errors associated with the retrieval algorithm relative to the plane-parallel world assumed in most of the model clouds that serve as the basis for the retrievals. Real-world clouds, however, rarely fit the plane-parallel mold and generate radiances that likely produce much greater errors in the retrieved parameters than can be inferred from sensitivity analyses. Thus, independent, empirical methods are used to provide a more reliable uncertainty analysis. At NASA Langley, cloud properties have been retrieved from both geostationary (GEO) and low-Earth-orbiting (LEO) satellite imagers for climate monitoring and model validation as part of the NASA CERES project since 2000, and from AVHRR data since 1978 as part of the NOAA CDR program. Cloud properties are also being retrieved in near-real time globally from both GEO and LEO satellites for weather model assimilation and nowcasting of hazards such as aircraft icing. This paper discusses the various independent datasets and approaches that are used to assess the imager-based satellite cloud retrievals, including, but not limited to, data from ARM sites, CloudSat, and CALIPSO. It describes the methods employed to utilize these datasets in the cloud property retrieval validation process, the results obtained, and how they aid future development of the retrieval algorithms. Future needs are also discussed.
Development of the Assessment of Belief Conflict in Relationship-14 (ABCR-14)
Kyougoku, Makoto; Teraoka, Mutsumi; Masuda, Noriko; Ooura, Mariko; Abe, Yasushi
2015-01-01
Purpose Nurses and other healthcare workers frequently experience belief conflict, one of the most important new stress-related problems in both academic and clinical fields. Methods In this study, using a sample of 1,683 nursing practitioners, we developed the Assessment of Belief Conflict in Relationship-14 (ABCR-14), a new scale that assesses belief conflict in the healthcare field. Standard psychometric procedures were used to develop and test the scale, including a qualitative framework concept and item-pool development, item reduction, and scale development. We analyzed the psychometric properties of the ABCR-14 according to entropy, polyserial correlation coefficients, exploratory factor analysis, confirmatory factor analysis, average variance extracted, Cronbach's alpha, Pearson product-moment correlation coefficients, and multidimensional item response theory (MIRT). Results The results of the analysis supported a three-factor model consisting of 14 items. The validity and reliability of the ABCR-14 were supported by evidence of high construct validity, structural validity, hypothesis testing, internal consistency reliability, and concurrent validity. The MIRT results offered strong support for good item response in terms of item slope and difficulty parameters. However, the ABCR-14 Likert scale might need to be explored further from the MIRT point of view. Still, as mentioned above, there is sufficient evidence to support that the ABCR-14 has high validity and reliability. Conclusion The ABCR-14 demonstrates good psychometric properties for assessing belief conflict in nursing. Further studies are recommended to confirm its application in clinical practice. PMID:26247356
Sussman, Marshall S; Yang, Issac Y; Fok, Kai-Ho; Wintersperger, Bernd J
2016-06-01
The Modified Look-Locker Inversion Recovery (MOLLI) technique is used for T1 mapping in the heart. However, a drawback of this technique is that it requires lengthy rest periods in between inversion groupings to allow for complete magnetization recovery. In this work, a new MOLLI fitting algorithm (inversion group [IG] fitting) is presented that allows for arbitrary combinations of inversion groupings and rest periods (including no rest period). Conventional MOLLI algorithms use a three-parameter fitting model. In IG fitting, the number of parameters is two plus the number of inversion groupings. This increased number of parameters permits any inversion grouping/rest period combination. Validation was performed through simulation, phantom, and in vivo experiments. IG fitting provided T1 values with less than 1% discrepancy across a range of inversion grouping/rest period combinations. By comparison, conventional three-parameter fits exhibited up to 30% discrepancy for some combinations. The one drawback of IG fitting was a loss of precision: approximately 30% worse than the three-parameter fits. IG fitting permits arbitrary inversion grouping/rest period combinations (including no rest period). The cost of the algorithm is a loss of precision relative to conventional three-parameter fits. Magn Reson Med 75:2332-2340, 2016. © 2015 Wiley Periodicals, Inc.
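One plausible reading of the IG parameterization (not necessarily the authors' exact formulation) keeps a shared apparent relaxation time and scale while giving each inversion group its own recovery amplitude, as sketched below for sign-restored MOLLI samples.

import numpy as np
from scipy.optimize import curve_fit

def fit_ig(ti_groups, sig_groups, t1star_guess=1000.0):
    # ti_groups/sig_groups: one array of inversion times / signals per grouping
    ti = np.concatenate(ti_groups)
    sig = np.concatenate(sig_groups)
    gid = np.concatenate([np.full(len(t), i) for i, t in enumerate(ti_groups)])

    def model(x, A, t1star, *B):
        # shared A and T1*, one amplitude B_g per inversion group
        ti_, gid_ = x
        return A - np.asarray(B)[gid_.astype(int)] * np.exp(-ti_ / t1star)

    p0 = [sig.max(), t1star_guess] + [2.0 * sig.max()] * len(ti_groups)
    popt, _ = curve_fit(model, np.vstack([ti, gid]), sig, p0=p0)
    A, t1star, B = popt[0], popt[1], np.asarray(popt[2:])
    return t1star * (B.mean() / A - 1.0)  # approximate Look-Locker correction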
Implementation and application of an interactive user-friendly validation software for RADIANCE
NASA Astrophysics Data System (ADS)
Sundaram, Anand; Boonn, William W.; Kim, Woojin; Cook, Tessa S.
2012-02-01
RADIANCE extracts CT dose parameters from dose sheets using optical character recognition and stores the data in a relational database. To facilitate validation of RADIANCE's performance, a simple user interface was initially implemented and about 300 records were evaluated. Here, we extend this interface to achieve a wider variety of functions and perform a larger-scale validation. The validator uses some data from the RADIANCE database to prepopulate quality-testing fields, such as correspondence between calculated and reported total dose-length product. The interface also displays relevant parameters from the DICOM headers. A total of 5,098 dose sheets were used to test the performance accuracy of RADIANCE in dose data extraction. Several search criteria were implemented. All records were searchable by accession number, study date, or dose parameters beyond chosen thresholds. Validated records were searchable according to additional criteria from validation inputs. An error rate of 0.303% was demonstrated in the validation. Dose monitoring is increasingly important and RADIANCE provides an open-source solution with a high level of accuracy. The RADIANCE validator has been updated to enable users to test the integrity of their installation and verify that their dose monitoring is accurate and effective.
Douglas, P; Tyrrel, S F; Kinnersley, R P; Whelan, M; Longhurst, P J; Walsh, K; Pollard, S J T; Drew, G H
2016-12-15
Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are not well understood, and require improved exposure classification. Dispersion modelling has great potential to improve exposure classification, but has not yet been extensively used or validated in this context. We present a sensitivity analysis of the ADMS dispersion model specific to input parameter ranges relevant to bioaerosol emissions from open windrow composting. This analysis provides an aid for model calibration by prioritising parameter adjustment and targeting independent parameter estimation. Results showed that predicted exposure was most sensitive to the wet and dry deposition modules and the majority of parameters relating to emission source characteristics, including pollutant emission velocity, source geometry and source height. This research improves understanding of the accuracy of model input data required to provide more reliable exposure predictions. Copyright © 2016. Published by Elsevier Ltd.
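A minimal one-at-a-time screen of the kind such a sensitivity analysis builds on; run_model and the parameter ranges are placeholders, not the actual ADMS interface.

import numpy as np

def oat_sensitivity(run_model, base, ranges):
    # rank parameters by the relative output change they induce over their plausible range
    y0 = run_model(base)  # assumes a nonzero baseline prediction
    effect = {}
    for name, (lo, hi) in ranges.items():
        ys = []
        for val in (lo, hi):
            p = dict(base)
            p[name] = val
            ys.append(run_model(p))
        effect[name] = (max(ys) - min(ys)) / abs(y0)
    return dict(sorted(effect.items(), key=lambda kv: -kv[1]))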
Characterizing the GOES-R (GOES-16) Geostationary Lightning Mapper (GLM) On-Orbit Performance
NASA Technical Reports Server (NTRS)
Rudlosky, Scott D.; Goodman, Steven J.; Koshak, William J.; Blakeslee, Richard J.; Buechler, Dennis E.; Mach, Douglas M.; Bateman, Monte
2017-01-01
Two overlapping efforts help to characterize the GLM performance: the Post Launch Test (PLT) phase to validate the predicted pre-launch instrument performance, and the Post Launch Product Test (PLPT) phase to validate the lightning detection product used in forecast and warning decision-making. This paper documents the calibration and validation plans and activities for the first 6 months of GLM on-orbit testing and validation, commencing with first light on 4 January 2017. The PLT phase addresses image quality, on-orbit calibration, RTEP threshold tuning, image navigation, noise filtering, and solar intrusion assessment, resulting in a GLM calibration parameter file. The PLPT includes four main activities: Reference Data Comparisons (RDC), Algorithm Testing (AT), Instrument Navigation and Registration Testing (INRT), and Long Term Baseline Testing (LTBT). Field campaigns are also designed to contribute valuable insights into the GLM performance capabilities. The PLPT tests each contribute to the beta, provisional, and fully validated stages of the GLM data products.
Remote Sensing Image Quality Assessment Experiment with Post-Processing
NASA Astrophysics Data System (ADS)
Jiang, W.; Chen, S.; Wang, X.; Huang, Q.; Shi, H.; Man, Y.
2018-04-01
This paper briefly describes a post-processing influence assessment experiment comprising three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are measured, and the digital images serving as image-processing input are produced by this imaging system with those same parameters. The gathered optically sampled images are then processed by three digital image processes: calibration pre-processing, lossy compression at different compression ratios, and image post-processing with different kernels. Image quality is assessed by just-noticeable-difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of the different imaging parameters and of post-processing on image quality can be found. The six JND subjective assessment datasets can be validated against each other. The main conclusions are: image post-processing can improve image quality, and can do so even with lossy compression, although image quality improves less at higher compression ratios than at lower ones; and with our image post-processing method, image quality is better when the camera MTF lies within a small range.
ERIC Educational Resources Information Center
Jastrzembski, Tiffany S.; Charness, Neil
2007-01-01
The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20;…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jernigan, Dann A.; Blanchat, Thomas K.
It is necessary to improve understanding and develop temporally- and spatially-resolved integral-scale validation data of the heat flux incident to a complex object, in addition to measuring the thermal response of said object located within the fire plume, for the validation of the SIERRA/FUEGO/SYRINX fire and SIERRA/CALORE codes. To meet this objective, a complex calorimeter with sufficient instrumentation to allow validation of the coupling between FUEGO/SYRINX/CALORE has been designed, fabricated, and tested in the Fire Laboratory for Accreditation of Models and Experiments (FLAME) facility. Validation experiments are specifically designed for direct comparison with the computational predictions. Making meaningful comparison between the computational and experimental results requires careful characterization and control of the experimental features or parameters used as inputs into the computational model. Validation experiments must be designed to capture the essential physical phenomena, including all relevant initial and boundary conditions. This report presents the data validation steps and processes, the results of the penlight radiant heat experiments (for the purpose of validating the CALORE heat transfer modeling of the complex calorimeter), and the results of the fire tests in FLAME.
Kim, Yong Sun; Choi, Hyeong Ho; Cho, Young Nam; Park, Yong Jae; Lee, Jong B; Yang, King H; King, Albert I
2005-11-01
Although biomechanical studies on the knee-thigh-hip (KTH) complex have been extensive, interactions between the KTH and various vehicular interior design parameters in frontal automotive crashes for newer models have not been reported in the open literature to the best of our knowledge. A 3D finite element (FE) model of a 50th percentile male KTH complex, which includes explicit representations of the iliac wing, acetabulum, pubic rami, sacrum, articular cartilage, femoral head, femoral neck, femoral condyles, patella, and patella tendon, has been developed to simulate injuries such as fracture of the patella, femoral neck, acetabulum, and pubic rami of the KTH complex. Model results compared favorably against regional component test data including a three-point bending test of the femur, axial loading of the isolated knee-patella, axial loading of the KTH complex, axial loading of the femoral head, and lateral loading of the isolated pelvis. The model was further integrated into a Wayne State University upper torso model and validated against data obtained from whole body sled tests. The model was validated against these experimental data over a range of impact speeds, impactor masses and boundary conditions. Using Design Of Experiment (DOE) methods based on Taguchi's approach and the developed FE model of the whole body, including the KTH complex, eight vehicular interior design parameters, namely the load limiter force, seat belt elongation, pretensioner inlet amount, knee-knee bolster distance, knee bolster angle, knee bolster stiffness, toe board angle and impact speed, each with either two or three design levels, were simulated to predict their respective effects on the potential of KTH injury in frontal impacts. Simulation results proposed best design levels for vehicular interior design parameters to reduce the injury potential of the KTH complex due to frontal automotive crashes. This study is limited by the fact that prediction of bony fracture was based on an element elimination method available in the LS-DYNA code. No validation study was conducted to determine if this method is suitable when simulating fractures of biological tissues. More work is still needed to further validate the FE model of the KTH complex to increase its reliability in the assessment of various impact loading conditions associated with vehicular crash scenarios.
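As a sketch of the Taguchi-style analysis such a DOE supports, the snippet below computes main effects from an orthogonal-array experiment by averaging the response at each factor level; the factor names in the usage example are illustrative, not the study's actual design table.

import numpy as np

def main_effects(design, response):
    # design: dict factor -> per-run level labels; response: per-run injury metric
    response = np.asarray(response, dtype=float)
    effects = {}
    for factor, levels in design.items():
        levels = np.asarray(levels)
        means = [response[levels == lv].mean() for lv in np.unique(levels)]
        effects[factor] = max(means) - min(means)  # range of the level means
    return dict(sorted(effects.items(), key=lambda kv: -kv[1]))

# illustrative usage with two hypothetical factors over four runs:
# main_effects({"bolster_angle": [1, 1, 2, 2], "load_limiter": [1, 2, 1, 2]},
#              [0.41, 0.35, 0.52, 0.44])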
Validating the Heat Stress Indices for Using In Heavy Work Activities in Hot and Dry Climates.
Hajizadeh, Roohalah; Golbabaei, Farideh; Farhang Dehghan, Somayeh; Beheshti, Mohammad Hossein; Jafari, Sayed Mohammad; Taheri, Fereshteh
2016-01-01
The necessity of evaluating heat stress in the workplace requires validation of the available indices and selection of an optimal one. The present study aimed to assess the precision and validity of several heat stress indices and to select the optimum index for heavy work activities in hot and dry climates. It was carried out on 184 workers from 40 brick kiln workshops in the city of Qom, central Iran (representative of hot and dry climates). After reviewing the working process and evaluating the workers' activity and the type of work, environmental and physiological parameters were measured according to standards recommended by the International Organization for Standardization (ISO), including ISO 7243 and ISO 9886, and the indices were calculated. Workers engaged in indoor kilns experienced the highest values of natural wet-bulb temperature, dry temperature, globe temperature, and relative humidity among the studied sections (P<0.05). Indoor workplaces had higher levels of all environmental parameters than outdoors (P=0.0001), except for air velocity. The wet-bulb globe temperature (WBGT) and heat stress index (HSI) had the highest correlation with the physiological parameters among the heat stress indices. The relationship between the WBGT index and carotid artery temperature (r=0.49), skin temperature (r=0.319), and oral temperature (r=0.203) was statistically significant (P=0.006). Since the WBGT index is approved by ISO as the most applicable index for evaluating heat stress in workplaces, and given its positive features such as ease of measurement and calculation, together with some limitations in the application of HSI, WBGT can be introduced as the most valid empirical index of heat stress in the brick workshops.
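For reference, the ISO 7243 WBGT combinations the study relies on, computed from the natural wet-bulb temperature t_nw, globe temperature t_g, and dry-bulb air temperature t_a (all in °C):

def wbgt_indoor(t_nw, t_g):
    # indoor / no solar load: WBGT = 0.7*t_nw + 0.3*t_g
    return 0.7 * t_nw + 0.3 * t_g

def wbgt_outdoor(t_nw, t_g, t_a):
    # outdoor with solar load: WBGT = 0.7*t_nw + 0.2*t_g + 0.1*t_a
    return 0.7 * t_nw + 0.2 * t_g + 0.1 * t_a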
The Atmospheric Infrared Sounder- An Overview
NASA Technical Reports Server (NTRS)
Lambrigtsen, Bjorn; Fetzer, Eric; Lee, Sung-Yung; Irion, Fredrick; Hearty, Thomas; Gaiser, Steve; Pagano, Thomas; Aumann, Hartmut; Chahine, Moustafa
2004-01-01
The Atmospheric Infrared Sounder (AIRS) was launched in May 2002. Along with two companion microwave sensors, it forms the AIRS Sounding Suite. This system is the most advanced atmospheric sounding system to date, with measurement accuracies far surpassing those available on current weather satellites. The data products are calibrated radiances from all three sensors and a number of derived geophysical parameters, including vertical temperature and humidity profiles, surface temperature, cloud fraction, cloud top pressure, and profiles of ozone. These products are generated under cloudy as well as clear conditions. An ongoing calibration validation effort has confirmed that the system is very accurate and stable, and many of the geophysical parameters have been validated. AIRS is in some cases more accurate than any other source and can therefore be difficult to validate, but this offers interesting new research opportunities. The applications for the AIRS products range from numerical weather prediction to atmospheric research, where the AIRS water vapor products near the surface and in the mid to upper troposphere will make it possible to characterize and model phenomena ranging from short-term atmospheric processes, such as weather patterns, to long-term processes, such as interannual cycles (e.g., El Niño) and climate change.
Leischik, Roman; Littwitz, Henning; Dworrak, Birgit; Garg, Pankaj; Zhu, Meihua; Sahn, David J; Horlitz, Marc
2015-01-01
Left atrial (LA) functional analysis has an established role in assessing left ventricular diastolic function. The current standard echocardiographic parameters used to study left ventricular diastolic function include pulsed-wave Doppler mitral inflow analysis, tissue Doppler imaging measurements, and LA dimension estimation. However, the above-mentioned parameters do not directly quantify LA performance. Deformation studies using strain and strain-rate imaging to assess LA function were validated in previous research, but this technique is not currently used in routine clinical practice. This review discusses the history, importance, and pitfalls of strain technology for the analysis of LA mechanics.
Land Ice Verification and Validation Kit
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-07-15
To address a pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice-sheet models is underway. The associated verification and validation process of these models is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVV). This release provides robust and automated verification and a performance evaluation on LCF platforms. The performance V&V involves a comprehensive comparison of model performance relative to expected behavior on a given computing platform. LIVV operates on a set of benchmark and test data, and provides comparisons for a suite of community-prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of tests where differences occur.
Jennifer A. Holm; H.H. Shugart; Skip J. Van Bloem; G.R. Larocque
2012-01-01
Because of human pressures, the need to understand and predict the long-term dynamics and development of subtropical dry forests is urgent. Through modifications to the ZELIG simulation model, including the development of species- and site-specific parameters and internal modifications, the capability to model and predict forest change within the 4500-ha Guanica State...
Lindberg, Ann-Sofie; Oksa, Juha; Antti, Henrik; Malm, Christer
2015-01-01
Physical capacity has previously been deemed important for firefighters' physical work capacity, and aerobic fitness, muscular strength, and muscular endurance are the most frequently investigated parameters of importance. Traditionally, bivariate and multivariate linear regression statistics have been used to study relationships between physical capacities and work capacities among firefighters. An alternative way to handle datasets consisting of numerous correlated variables is to use multivariate projection analyses, such as Orthogonal Projection to Latent Structures. The first aim of the present study was to evaluate the prediction and predictive power of field and laboratory tests, respectively, on firefighters' physical work capacity on selected work tasks, and to study whether valid predictions could be achieved without anthropometric data. The second aim was to externally validate selected models. The third aim was to validate selected models on firefighters and on civilians. A total of 38 (26 men and 12 women) + 90 (38 men and 52 women) subjects were included in the models and the external validation, respectively. The best prediction (R2) and predictive power (Q2) for the Stairs, Pulling, Demolition, Terrain, and Rescue work capacities were obtained with field tests (R2 = 0.73 to 0.84, Q2 = 0.68 to 0.82). The best external validation was for the Stairs work capacity (R2 = 0.80) and the worst for the Demolition work capacity (R2 = 0.40). In conclusion, field and laboratory tests could equally well predict physical work capacities for firefighting work tasks, and models excluding anthropometric data were valid. The predictive power was satisfactory for all included work tasks except Demolition.
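A sketch of how prediction (R2) and predictive power (Q2) can be computed, using scikit-learn's PLS regression with leave-one-out cross-validation in place of the OPLS software the authors used:

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

def r2_q2(X, y, n_components=2):
    # R2: fit quality on the full data; Q2: 1 - PRESS/TSS from leave-one-out CV
    r2 = PLSRegression(n_components=n_components).fit(X, y).score(X, y)
    press = 0.0
    for tr, te in LeaveOneOut().split(X):
        m = PLSRegression(n_components=n_components).fit(X[tr], y[tr])
        press += ((m.predict(X[te]).ravel() - y[te]) ** 2).sum()
    q2 = 1.0 - press / ((y - y.mean()) ** 2).sum()
    return r2, q2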
Deng, Bo; Shi, Yaoyao; Yu, Tao; Kang, Chao; Zhao, Pan
2018-01-31
The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performances of the winding products. In this article, two different object values of winding products, including mechanical performance (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology by combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding products manufacturing.
Random sampling and validation of covariance matrices of resonance parameters
NASA Astrophysics Data System (ADS)
Plevnik, Lucijan; Zerovnik, Gašper
2017-09-01
Analytically exact methods for random sampling of arbitrarily correlated parameters are presented. Emphasis is given, on the one hand, to possible inconsistencies in the covariance data, concentrating on positive semi-definiteness and consistent sampling of correlated inherently positive parameters, and, on the other hand, to optimization of the implementation of the methods themselves. The methods have been applied in the program ENDSAM, written in Fortran, which, from a nuclear data library file of a chosen isotope in ENDF-6 format, produces an arbitrary number of new ENDF-6 files containing random samples of resonance parameters (in accordance with the corresponding covariance matrices) in place of the original values. The source code for the program ENDSAM is available from the OECD/NEA Data Bank. The program works in the following steps: it reads resonance parameters and their covariance data from the nuclear data library, checks whether the covariance data are consistent, and produces random samples of the resonance parameters. The code has been validated with both realistic and artificial data to show that the produced samples are statistically consistent. Additionally, the code was used to validate covariance data in existing nuclear data libraries. A list of inconsistencies observed in the covariance data of resonance parameters in ENDF-VII.1, JEFF-3.2 and JENDL-4.0 is presented. For now, the work has been limited to resonance parameters; however, the methods presented are general and can in principle be extended to the sampling and validation of any nuclear data.
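A sketch of the two core steps the abstract describes: checking positive semi-definiteness and drawing correlated samples that keep inherently positive parameters positive via a moment-matched lognormal. The ENDF-6 file handling is omitted and the helper names are ours.

import numpy as np

def is_positive_semidefinite(cov, tol=1e-10):
    # symmetrize before the eigenvalue test to absorb round-off asymmetry
    return np.min(np.linalg.eigvalsh((cov + cov.T) / 2.0)) >= -tol

def sample_positive(mean, cov, n, rng=None):
    # lognormal sampling moment-matched to the given mean and covariance
    rng = rng or np.random.default_rng()
    mean = np.asarray(mean, dtype=float)
    log_cov = np.log1p(cov / np.outer(mean, mean))    # Cov[log x]
    log_mean = np.log(mean) - 0.5 * np.diag(log_cov)  # E[log x]
    return np.exp(rng.multivariate_normal(log_mean, log_cov, size=n))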
Optimization of multilayer neural network parameters for speaker recognition
NASA Astrophysics Data System (ADS)
Tovarek, Jaromir; Partila, Pavol; Rozhon, Jan; Voznak, Miroslav; Skapa, Jan; Uhrin, Dominik; Chmelikova, Zdenka
2016-05-01
This article discusses the impact of multilayer neural network parameters on speaker identification. The main task of speaker identification is to find a specific person in a known set of speakers, i.e., to determine whether the voice of an unknown speaker (the wanted person) belongs to a group of reference speakers from the voice database. One of the requirements was to develop a text-independent system, which means classifying the wanted person regardless of content and language. A multilayer neural network was used for speaker identification in this research. An artificial neural network (ANN) requires setting parameters such as the activation function of the neurons, the steepness of the activation functions, the learning rate, the maximum number of iterations, and the number of neurons in the hidden and output layers. ANN accuracy and validation time are directly influenced by these parameter settings, and different tasks require different settings. Identification accuracy and ANN validation time were evaluated with the same input data but different parameter settings. The goal was to find parameters for the neural network with the highest precision and shortest validation time. The inputs to the neural network are Mel-frequency cepstral coefficients (MFCCs), which describe the properties of the vocal tract. Audio samples were recorded for all speakers in a laboratory environment. The data were split into training, testing, and validation sets of 70%, 15%, and 15%, respectively. The result of the research described in this article is a parameter setting of the multilayer neural network for four speakers.
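A minimal illustration of the parameter choices discussed (hidden-layer size, activation, learning rate, iteration cap) for an MFCC-based classifier; scikit-learn and the concrete values below stand in for whatever toolkit and settings the authors used, and mfcc is assumed to hold one feature vector per utterance.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def train_speaker_id(mfcc, labels):
    # 70% training, 15% testing, 15% validation
    X_tr, X_rest, y_tr, y_rest = train_test_split(mfcc, labels, test_size=0.30, stratify=labels)
    X_te, X_va, y_te, y_va = train_test_split(X_rest, y_rest, test_size=0.50, stratify=y_rest)
    clf = MLPClassifier(hidden_layer_sizes=(64,), activation="tanh",
                        learning_rate_init=1e-3, max_iter=500)
    clf.fit(X_tr, y_tr)
    return clf, clf.score(X_va, y_va)  # identification accuracy on the validation set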
Tactile Imaging Markers to Characterize Female Pelvic Floor Conditions.
van Raalte, Heather; Egorov, Vladimir
2015-08-01
The Vaginal Tactile Imager (VTI) records pressure patterns from vaginal walls under an applied tissue deformation and during pelvic floor muscle contractions. The objective of this study is to validate tactile imaging and muscle contraction parameters (markers) sensitive to the female pelvic floor conditions. Twenty-two women with normal and prolapse conditions were examined by a vaginal tactile imaging probe. We identified 9 parameters which were sensitive to prolapse conditions ( p < 0.05 for one-way ANOVA and/or p < 0.05 for t -test with correlation factor r from -0.73 to -0.56). The list of parameters includes pressure, pressure gradient and dynamic pressure response during muscle contraction at identified locations. These parameters may be used for biomechanical characterization of female pelvic floor conditions to support an effective management of pelvic floor prolapse.
Pak, Kyoungjune; Kim, Keunyoung; Kim, Mi-Hyun; Eom, Jung Seop; Lee, Min Ki; Cho, Jeong Su; Kim, Yun Seong; Kim, Bum Soo; Kim, Seong Jang; Kim, In Joo
2018-01-01
We aimed to develop a decision tree model to improve the diagnostic performance of positron emission tomography/computed tomography (PET/CT) in detecting metastatic lymph nodes (LNs) in non-small cell lung cancer (NSCLC). 115 patients with NSCLC were included in this study. The training dataset included 66 patients. A decision tree model was developed with 9 variables and validated with 49 patients: short and long diameters of LNs, ratio of short and long diameters, maximum standardized uptake value (SUVmax) of the LN, mean Hounsfield unit, ratio of LN SUVmax to ascending aorta SUVmax (LN/AA), and ratio of LN SUVmax to superior vena cava SUVmax. A total of 301 LNs from 115 patients were evaluated in this study. Nodular calcification was applied as the initial imaging parameter, and LN SUVmax (≥3.95) was assessed as the second. High LN/AA (≥2.92) was additionally required for LNs with high SUVmax. Sensitivity was 50% for the training dataset and 40% for the validation dataset. However, specificity was 99.28% for the training dataset and 96.23% for the validation dataset. In conclusion, we have developed a new decision tree model for interpreting mediastinal LNs. All LNs with nodular calcification were benign, and LNs with high LN SUVmax and high LN/AA were metastatic. Further studies are needed to incorporate subjective parameters and pathologic evaluations into the decision tree model to improve the test performance of PET/CT.
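The decision sequence as we read it from the abstract, with the quoted thresholds (SUVmax ≥ 3.95, LN/AA ≥ 2.92); the exact handling of borderline cases may differ in the published tree.

def classify_lymph_node(nodular_calcification, ln_suvmax, ln_over_aa):
    # step 1: nodular calcification -> benign
    if nodular_calcification:
        return "benign"
    # step 2: low SUVmax -> benign
    if ln_suvmax < 3.95:
        return "benign"
    # step 3: a high-SUVmax node additionally requires high LN/AA for a metastatic call
    return "metastatic" if ln_over_aa >= 2.92 else "benign"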
Effects of space environment on composites: An analytical study of critical experimental parameters
NASA Technical Reports Server (NTRS)
Gupta, A.; Carroll, W. F.; Moacanin, J.
1979-01-01
A generalized methodology currently employed at JPL was used to develop an analytical model for the effects of high-energy electrons and the interactions between electron and ultraviolet effects. Chemical kinetic concepts were applied in defining quantifiable parameters; the need for determining short-lived transient species and their concentrations was demonstrated. The results demonstrate a systematic and cost-effective means of addressing the issues and show applicable qualitative and quantitative relationships between space radiation and simulation parameters. An equally important result is the identification of critical initial experiments necessary to further clarify these relationships. Topics discussed include facility and test design; rastered vs. diffuse continuous e-beam; valid acceleration level; simultaneous vs. sequential exposure to different types of radiation; and interruption of test continuity.
Lamping, Florian; Jack, Thomas; Rübsamen, Nicole; Sasse, Michael; Beerbaum, Philipp; Mikolajczyk, Rafael T; Boehne, Martin; Karch, André
2018-03-15
Since early antimicrobial therapy is mandatory in septic patients, immediate diagnosis and distinction from non-infectious SIRS is essential but hampered by the similarity of symptoms between the two entities. We aimed to develop a diagnostic model for the differentiation of sepsis and non-infectious SIRS in critically ill children based on routinely available parameters (baseline characteristics, clinical/laboratory parameters, technical/medical support). This is a secondary analysis of a randomized controlled trial conducted at a German tertiary-care pediatric intensive care unit (PICU). Two hundred thirty-eight cases of non-infectious SIRS and 58 cases of sepsis (as defined by IPSCC criteria) were included. We applied a Random Forest approach to identify the best set of predictors out of 44 variables measured at the day of onset of the disease. The developed diagnostic model was validated in a temporal split-sample approach. A model including four clinical parameters (length of PICU stay until onset of non-infectious SIRS/sepsis, central line, core temperature, number of non-infectious SIRS/sepsis episodes prior to diagnosis) and four laboratory parameters (interleukin-6, platelet count, procalcitonin, CRP) was identified in the training dataset. Validation in the test dataset revealed an AUC of 0.78 (95% CI: 0.70-0.87). Our model was superior to previously proposed biomarkers such as CRP, interleukin-6, procalcitonin or a combination of CRP and procalcitonin (maximum AUC = 0.63; 95% CI: 0.52-0.74). When aiming at a complete identification of sepsis cases (100%; 95% CI: 87-100%), 28% (95% CI: 20-38%) of non-infectious SIRS cases were classified correctly. Our approach allows early recognition of sepsis with an accuracy superior to previously described biomarkers, and could potentially reduce antibiotic use by 30% in non-infectious SIRS cases. External validation studies are necessary to confirm the generalizability of our approach across populations and treatment practices. ClinicalTrials.gov number: NCT00209768; registration date: September 21, 2005.
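A sketch of the described approach under stated assumptions: a Random Forest on the eight listed parameters, a temporal split standing in for the paper's split-sample scheme, and an AUC score. The feature layout, split fraction, and hyperparameters are ours.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

FEATURES = ["picu_days_to_onset", "central_line", "core_temperature",
            "prior_episodes", "il6", "platelet_count", "procalcitonin", "crp"]

def train_and_validate(X, y, onset_times):
    # temporal split: earlier episodes train the model, later episodes test it
    order = np.argsort(onset_times)
    cut = int(0.7 * len(order))
    tr, te = order[:cut], order[cut:]
    clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X[tr], y[tr])
    return roc_auc_score(y[te], clf.predict_proba(X[te])[:, 1])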
Validation Testing of a Peridynamic Impact Damage Model Using NASA's Micro-Particle Gun
NASA Technical Reports Server (NTRS)
Baber, Forrest E.; Zelinski, Brian J.; Guven, Ibrahim; Gray, Perry
2017-01-01
Through a collaborative effort between Virginia Commonwealth University and Raytheon, a peridynamic model for sand impact damage has been developed [1-3]. Model development has focused on simulating impacts of sand particles on ZnS traveling at velocities consistent with aircraft take-off and landing speeds. The model reproduces common features of impact damage including pit and radial cracks, and, under some conditions, lateral cracks. This study focuses on a preliminary validation exercise in which simulation results from the peridynamic model are compared to a limited experimental data set generated by NASA's recently developed micro-particle gun (MPG). The MPG facility measures the dimensions and incoming and rebound velocities of the impact particles. It also links each particle to a specific impact site and its associated damage. In this validation exercise, parameters of the peridynamic model are adjusted to fit the experimentally observed pit diameter, average length of radial cracks, and rebound velocities for 4 impacts of 300 µm glass beads on ZnS. Results indicate that a reasonable fit of these impact characteristics can be obtained by suitable adjustment of the peridynamic input parameters, demonstrating that the MPG can be used effectively as a validation tool for impact modeling and that the peridynamic sand impact model described herein possesses not only a qualitative but also a quantitative ability to simulate sand impact events.
2008-01-01
PDA Technical Report No. 14 has been written to provide current best practices, such as the application of risk-based decision making grounded in sound science, as a foundation for the validation of column-based chromatography processes, and to expand upon information provided in Technical Report No. 42, Process Validation of Protein Manufacturing. The intent of this technical report is to provide an integrated validation life-cycle approach that begins with the use of process development data for the definition of operational parameters as a basis for validation, followed by confirmation and/or minor adjustment of these parameters at manufacturing scale during production of conformance batches, and maintenance of the validated state throughout the product's life cycle.
Reliability, validity and feasibility of nail ultrasonography in psoriatic arthritis.
Arbault, Anaïs; Devilliers, Hervé; Laroche, Davy; Cayot, Audrey; Vabres, Pierre; Maillefert, Jean-Francis; Ornetti, Paul
2016-10-01
To determine the feasibility, reliability and validity of nail ultrasonography in psoriatic arthritis as an outcome measure. Pilot prospective single-centre study of eight ultrasonography parameters in B mode and power Doppler concerning the distal interphalangeal (DIP) joint, the matrix, the nail bed and the nail plate. Intra-observer and inter-observer reliability was evaluated for the seven quantitative parameters (ICC and kappa). Correlations between ultrasonographic and clinical variables were sought to assess external validity. Feasibility was assessed by the time needed to carry out the examination and the percentage of missing data. Twenty-seven patients with psoriatic arthritis (age 55.0±16.2 years, disease duration 13.4±9.4 years) were included. Of these, 67% presented nail involvement on ultrasonography vs 37% on physical examination (P<0.05). Reliability was good (ICC and weighted kappa>0.75) for the seven quantitative parameters, except for synovitis of the DIP joint in B mode. Synovitis of the DIP joint revealed by ultrasonography correlated with the total number of clinical synovitis findings and with power Doppler of the nail (matrix and bed). Power Doppler of the matrix correlated with VAS pain but not with the ASDAS-CRP or with clinical enthesitis. No significant correlation was found with US nail thickness. The feasibility and reliability of ultrasonography of the nail in psoriatic arthritis appear to be satisfactory. Among the eight parameters evaluated, power Doppler of the matrix, which correlated with local inflammation (DIP joint and bed) and with VAS pain, could become an interesting outcome measure, provided that it is also sensitive to change. Copyright © 2015 Société française de rhumatologie. Published by Elsevier SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Jian, L. K.; MacNeice, P. J.; Mays, M. L.; Taktakishvili, A.; Odstrcil, D.; Jackson, B.; Yu, H.-S.; Riley, P.; Sokolov, I. V.
2016-08-01
The prediction of the background global solar wind is a necessary part of space weather forecasting. Several coronal and heliospheric models have been installed and/or recently upgraded at the Community Coordinated Modeling Center (CCMC), including the Wang-Sheely-Arge (WSA)-Enlil model, MHD-Around-a-Sphere (MAS)-Enlil model, Space Weather Modeling Framework (SWMF), and heliospheric tomography using interplanetary scintillation data. Ulysses recorded the last fast latitudinal scan from southern to northern poles in 2007. By comparing the modeling results with Ulysses observations over seven Carrington rotations, we have extended our third-party validation from the previous near-Earth solar wind to middle to high latitudes, in the same late declining phase of solar cycle 23. Besides visual comparison, we have quantitatively assessed the models' capabilities in reproducing the time series, statistics, and latitudinal variations of solar wind parameters for a specific range of model parameter settings, inputs, and grid configurations available at CCMC. The WSA-Enlil model results vary with three different magnetogram inputs. The MAS-Enlil model captures the solar wind parameters well, despite its underestimation of the speed at middle to high latitudes. The new version of SWMF misses many solar wind variations probably because it uses lower grid resolution than other models. The interplanetary scintillation-tomography cannot capture the latitudinal variations of solar wind well yet. Because the model performance varies with parameter settings which are optimized for different epochs or flow states, the performance metric study provided here can serve as a template that researchers can use to validate the models for the time periods and conditions of interest to them.
Linking the Weather Generator with Regional Climate Model: Effect of Higher Resolution
NASA Astrophysics Data System (ADS)
Dubrovsky, Martin; Huth, Radan; Farda, Ales; Skalak, Petr
2014-05-01
This contribution builds on our last year's EGU contribution, which had two aims: (i) validation of simulations of the present climate made by the ALADIN-Climate Regional Climate Model (RCM) at 25 km resolution, and (ii) presenting a methodology for linking a parametric weather generator (WG) with RCM output (aiming to calibrate a gridded WG capable of producing realistic synthetic multivariate weather series for weather-ungauged locations). We now have available new higher-resolution (6.25 km) simulations with the same RCM. The main topic of this contribution is the answer to the following question: what is the effect of using a higher spatial resolution on the quality of simulating surface weather characteristics? In the first part, the high-resolution RCM simulation of the present climate is validated in terms of selected WG parameters, which are derived from the RCM-simulated surface weather series and compared to those derived from weather series observed at 125 Czech meteorological stations. The set of WG parameters includes statistics of the surface temperature and precipitation series. When comparing the WG parameters from the two sources (RCM vs observations), we interpolate the RCM-based parameters to the station locations while accounting for the effect of altitude. In the second part, we discuss the effect of using the higher resolution: the results of the validation tests are compared with those obtained with the lower-resolution RCM. Acknowledgements: The present experiment is made within the frame of the projects ALARO-Climate (project P209/11/2405 sponsored by the Czech Science Foundation), WG4VALUE (project LD12029 sponsored by the Ministry of Education, Youth and Sports of CR) and VALUE (COST ES 1102 action).
A low-cost three-dimensional laser surface scanning approach for defining body segment parameters.
Pandis, Petros; Bull, Anthony Mj
2017-11-01
Body segment parameters are used in many different applications in ergonomics as well as in dynamic modelling of the musculoskeletal system. They can be defined using different methods, including techniques that involve time-consuming manual measurements of the human body used in conjunction with models or equations. In this study, a scanning technique for measuring subject-specific body segment parameters in an easy, fast, accurate and low-cost way was developed and validated. The scanner can obtain the body segment parameters in a single scanning operation, which takes between 8 and 10 s. The results obtained with the system show a standard deviation of 2.5% in volumetric measurements of the upper limb of a mannequin and a 3.1% difference between scanned volume and actual volume. Finally, the maximum mean error for the moment of inertia, obtained by scanning a standard-sized homogeneous object, was 2.2%. This study shows that a low-cost system can provide quick and accurate subject-specific body segment parameter estimates.
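A sketch of how body segment parameters follow from a voxelized surface scan, assuming a uniform tissue density; the 1000 kg/m^3 default and the voxel layout are illustrative, not the authors' processing pipeline.

import numpy as np

def segment_parameters(voxel_centers, voxel_volume, density=1000.0):
    # voxel_centers: (N, 3) coordinates in m; voxel_volume in m^3; density in kg/m^3
    m_vox = density * voxel_volume
    mass = m_vox * len(voxel_centers)
    com = voxel_centers.mean(axis=0)
    r = voxel_centers - com
    # inertia tensor about the center of mass: sum of m * (|r|^2 I - r r^T)
    inertia = m_vox * sum(np.dot(ri, ri) * np.eye(3) - np.outer(ri, ri) for ri in r)
    return mass, com, inertia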
Model Calibration Efforts for the International Space Station's Solar Array Mast
NASA Technical Reports Server (NTRS)
Elliott, Kenny B.; Horta, Lucas G.; Templeton, Justin D.; Knight, Norman F., Jr.
2012-01-01
The International Space Station (ISS) relies on sixteen solar-voltaic blankets to provide electrical power to the station. Each pair of blankets is supported by a deployable boom called the Folding Articulated Square Truss Mast (FAST Mast). At certain ISS attitudes, the solar arrays can be positioned in such a way that shadowing of either one or three longerons causes an unexpected asymmetric thermal loading that, if unchecked, can exceed the operational stability limits of the mast. Work in this paper documents part of an independent NASA Engineering and Safety Center effort to assess the existing operational limits. Because of the complexity of the system, the problem is being worked using a building-block progression from components (longerons), to units (single or multiple bays), to assembly (full mast). The paper presents results from efforts to calibrate the longeron components. The work includes experimental testing of two types of longerons (straight and tapered), development of finite element (FE) models, development of parameter uncertainty models, and the establishment of a calibration and validation process to demonstrate adequacy of the models. Models in the context of this paper refer to both the FE models and the probabilistic parameter models. Results from model calibration of the straight longerons show that the model is capable of predicting the mean load, axial strain, and bending strain. For validation, parameter values obtained from calibration of the straight longerons are used to validate the model against experimental results for the tapered longerons.
Posa, Mihalj; Pilipović, Ana; Lalić, Mladena; Popović, Jovan
2011-02-15
A linear dependence between temperature (t) and the retention coefficient (k, reversed-phase HPLC) of bile acids is obtained. The parameters (a, intercept; b, slope) of the linear function k=f(t) correlate strongly with the bile acids' structures. The investigated bile acids form linear congeneric groups on a principal component score plot (calculated from k=f(t)) that are in accordance with the conformations of the hydroxyl and oxo groups in the bile acid steroid skeleton. The partition coefficient (K(p)) of nitrazepam in bile acid micelles is investigated. Nitrazepam molecules incorporated in micelles show modified bioavailability (depot effect, higher permeability, etc.). Using the multiple linear regression (MLR) method, QSAR models of nitrazepam's partition coefficient K(p) are derived at temperatures of 25°C and 37°C. For deriving the linear regression models at both temperatures, experimentally obtained lipophilicity parameters (PC1 from the k=f(t) data) and in silico descriptors of molecular shape are included, while at the higher temperature molecular polarisation is introduced as well. This indicates that the incorporation mechanism of nitrazepam in bile acid micelles changes at higher temperatures. QSAR models are derived using the partial least squares (PLS) method as well. The experimental parameters k=f(t) are shown to be significant predictive variables. Both QSAR models are validated using cross-validation and internal validation methods. The PLS models have slightly higher predictive capability than the MLR models. Copyright © 2010 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Sarac, Abdulhamit; Kysar, Jeffrey W.
2018-02-01
We present a new methodology for experimental validation of single crystal plasticity constitutive relationships based upon spatially resolved measurements of the direction of the Net Burgers Density Vector, which we refer to as the β-field. The β-variable contains information about the active slip systems as well as the ratios of the Geometrically Necessary Dislocation (GND) densities on the active slip systems. We demonstrate the methodology by comparing single crystal plasticity finite element simulations of plane strain wedge indentations into face-centered cubic nickel to detailed experimental measurements of the β-field. We employ the classical Peirce-Asaro-Needleman (PAN) hardening model in this study due to the straightforward physical interpretation of its constitutive parameters, which include the latent hardening ratio, initial hardening modulus and saturation stress. The saturation stress and the initial hardening modulus have relatively large influence on the β-variable compared to the latent hardening ratio. A change in the initial hardening modulus leads to a shift in the boundaries of plastic slip sectors within the plastically deforming region. As the saturation strength varies, both the magnitude of the β-variable and the boundaries of the plastic slip sectors change. We thus demonstrate that the β-variable is sensitive to changes in the constitutive parameters, making the variable suitable for validation purposes. We identify a set of constitutive parameters that are consistent with the β-field obtained from the experiment.
Validated numerical simulation model of a dielectric elastomer generator
NASA Astrophysics Data System (ADS)
Foerster, Florentine; Moessinger, Holger; Schlaak, Helmut F.
2013-04-01
Dielectric elastomer generators (DEG) produce electrical energy by converting mechanical into electrical energy. Efficient operation requires homogeneous deformation of each single layer. However, internal and external influences such as supports or the shape of the DEG make the deformation inhomogeneous and hence reduce the amount of generated electrical energy. Optimization of the deformation behavior leads to improved efficiency of the DEG and consequently to higher energy gain. In this work a numerical simulation model of a multilayer dielectric elastomer generator is developed using the FEM software ANSYS. The analyzed multilayer DEG consists of 49 active dielectric layers with layer thicknesses of 50 μm. The elastomer is silicone (PDMS) while the compliant electrodes are made of graphite powder. The simulation requires the real material parameters of the PDMS and the graphite electrodes. Therefore, the mechanical and electrical material parameters of the PDMS are determined by experimental investigations of test samples, while the electrode parameters are determined by numerical simulations of test samples. The numerical simulation of the DEG is carried out as a coupled electro-mechanical simulation for the constant voltage energy harvesting cycle. Finally, the derived numerical simulation model is validated by comparison with analytical calculations and further simulated DEG configurations. The comparison of the results shows good agreement with regard to the deformation of the DEG. Based on the validated model it is now possible to optimize the DEG layout for improved deformation behavior with further simulations.
Fotina, I; Lütgendorf-Caucig, C; Stock, M; Pötter, R; Georg, D
2012-02-01
Inter-observer studies represent a valid method for the evaluation of target definition uncertainties and contouring guidelines. However, data from the literature do not yet give clear guidelines for reporting contouring variability. Thus, the purpose of this work was to compare and discuss various methods to determine variability on the basis of clinical cases and a literature review. In this study, 7 prostate and 8 lung cases were contoured on CT images by 8 experienced observers. Analysis of variability included descriptive statistics, calculation of overlap measures, and statistical measures of agreement. Cross tables with ratios and correlations were established for overlap parameters. It was shown that the minimal set of parameters to be reported should include at least one of three volume overlap measures (i.e., generalized conformity index, Jaccard coefficient, or conformation number). High correlation between these parameters and scatter of the results was observed. A combination of descriptive statistics, overlap measure, and statistical measure of agreement or reliability analysis is required to fully report the interrater variability in delineation.
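As an illustration of the overlap measures named above, here is a short sketch (assuming binary contour masks on a common grid) of the Jaccard coefficient and one common definition of the generalized conformity index, pairwise intersections divided by pairwise unions over all observer pairs:

```python
import numpy as np
from itertools import combinations

def jaccard(a, b):
    """Jaccard (volume overlap) coefficient of two boolean contour masks."""
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def generalized_conformity_index(masks):
    """Generalized conformity index over all observer pairs:
    sum of pairwise intersections divided by sum of pairwise unions."""
    inter = sum(np.logical_and(a, b).sum() for a, b in combinations(masks, 2))
    union = sum(np.logical_or(a, b).sum() for a, b in combinations(masks, 2))
    return inter / union

# Two toy "contours" on a 10x10 grid
a = np.zeros((10, 10), bool); a[2:7, 2:7] = True
b = np.zeros((10, 10), bool); b[3:8, 3:8] = True
print(jaccard(a, b), generalized_conformity_index([a, b]))
```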
Sensitivity of estimated muscle force in forward simulation of normal walking
Xiao, Ming; Higginson, Jill
2009-01-01
Generic muscle parameters are often used in muscle-driven simulations of human movement to estimate individual muscle forces and function. The results may not be valid, since muscle properties vary from subject to subject. This study investigated the effect of using generic parameters in a muscle-driven forward simulation on muscle force estimation. We generated a normal walking simulation in OpenSim and examined the sensitivity of individual muscle forces to perturbations in muscle parameters, including the number of muscles, maximum isometric force, optimal fiber length and tendon slack length. We found that when changing the number of muscles included in the model, only the magnitude of the estimated muscle forces was affected. Our results also suggest it is especially important to use accurate values of tendon slack length and optimal fiber length for the ankle plantarflexors and knee extensors. Changes in force production by one muscle were typically compensated for by changes in force production by muscles in the same functional muscle group or by the antagonistic muscle group. Conclusions regarding muscle function based on simulations with generic musculoskeletal parameters should be interpreted with caution. PMID:20498485
The application of neural networks to the SSME startup transient
NASA Technical Reports Server (NTRS)
Meyer, Claudia M.; Maul, William A.
1991-01-01
Feedforward neural networks were used to model three parameters during the Space Shuttle Main Engine startup transient. The three parameters were the main combustion chamber pressure, a controlled parameter, the high pressure oxidizer turbine discharge temperature, a redlined parameter, and the high pressure fuel pump discharge pressure, a failure-indicating performance parameter. Network inputs consisted of time windows of data from engine measurements that correlated highly to the modeled parameter. A standard backpropagation algorithm was used to train the feedforward networks on two nominal firings. Each trained network was validated with four additional nominal firings. For all three parameters, the neural networks were able to accurately predict the data in the validation sets as well as the training set.
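The windowed-input regression setup described above can be sketched as follows; this is not the original code, and the sensor signals, network size, and target are synthetic stand-ins for the engine measurements:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_windows(signals, target, width):
    """Stack a sliding time window of each correlated measurement
    into one input vector per time step."""
    X = np.hstack([
        np.stack([s[i:i + width] for i in range(len(target) - width)])
        for s in signals
    ])
    y = target[width:]          # predict the modelled parameter one step ahead
    return X, y

# Synthetic stand-ins for engine measurements and a modelled parameter
t = np.linspace(0, 5, 2000)
sensors = [np.sin(2 * t) + 0.01 * np.random.randn(t.size),
           np.cos(3 * t) + 0.01 * np.random.randn(t.size)]
chamber_pressure = 0.7 * np.sin(2 * t - 0.3) + 0.2 * np.cos(3 * t)

X, y = make_windows(sensors, chamber_pressure, width=10)
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)  # backprop training
print("training R^2:", net.score(X, y))
```

In practice the trained network would then be scored against the held-out nominal firings, as the abstract describes for the validation sets.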
NASA Technical Reports Server (NTRS)
Pholsiri, Chalongrath; English, James; Seberino, Charles; Lim, Yi-Je
2010-01-01
The Excavator Design Validation tool verifies excavator designs by automatically generating control systems and modeling their performance in an accurate simulation of their expected environment. Part of this software design includes interfacing with human operations that can be included in simulation-based studies and validation. This is essential for assessing productivity, versatility, and reliability. This software combines automatic control system generation from CAD (computer-aided design) models, rapid validation of complex mechanism designs, and detailed models of the environment including soil, dust, temperature, remote supervision, and communication latency to create a system of high value. Unique algorithms have been created for controlling and simulating complex robotic mechanisms automatically from just a CAD description. These algorithms are implemented as a commercial cross-platform C++ software toolkit that is configurable using the Extensible Markup Language (XML). The algorithms work with virtually any mobile robotic mechanisms using module descriptions that adhere to the XML standard. In addition, high-fidelity, real-time physics-based simulation algorithms have also been developed that include models of internal forces and the forces produced when a mechanism interacts with the outside world. This capability is combined with an innovative organization for simulation algorithms, new regolith simulation methods, and a unique control and study architecture to make powerful tools with the potential to transform the way NASA verifies and compares excavator designs. Energid's Actin software has been leveraged for this design validation. The architecture includes parametric and Monte Carlo studies tailored for validation of excavator designs and their control by remote human operators. It also includes the ability to interface with third-party software and human-input devices. Two types of simulation models have been adapted: high-fidelity discrete element models and fast analytical models. By using the first to establish parameters for the second, a system has been created that can be executed in real time, or faster than real time, on a desktop PC. This allows Monte Carlo simulations to be performed on a computer platform available to all researchers, and it allows human interaction to be included in a real-time simulation process. Metrics on excavator performance are established that work with the simulation architecture. Both static and dynamic metrics are included.
Separated two-phase flow and basaltic eruptions
NASA Astrophysics Data System (ADS)
Vergniolle, Sylvie; Jaupart, Claude
1986-11-01
Fluid dynamical models of volcanic eruptions are usually made in the homogeneous approximation where gas and liquid are constrained to move at the same velocity. Basaltic eruptions exhibit the characteristics of separated flows, including transitions in their flow regime, from bubbly to slug flow in Strombolian eruptions and from bubbly to annular flow in Hawaiian ones. These regimes can be characterized by a parameter called the melt superficial velocity, or volume flux per unit cross section, which takes values between 10^-3 and 10^-2 m/s for bubbly and slug flow, and about 1 m/s for annular flow. We use two-phase flow equations to determine under which conditions the homogeneous approximation is not valid. In the bubbly regime, in which many bubbles rise through the moving liquid, there are large differences between the two-phase and homogeneous models, especially in the predictions of gas content and pressure. The homogeneous model is valid for viscous lavas such as dacites because viscosity impedes bubble motion. It is not valid for basaltic lavas if bubble sizes are greater than 1 cm, which is the case. Accordingly, basaltic eruptions should be characterized by lower gas contents and lower values of the exit pressure, and they rarely erupt in the mist and froth regimes, which are a feature of more viscous lavas. The two-phase flow framework allows for the treatment of different bubble populations, including vesicles due to exsolution by pressure release in the volcanic conduit and bubbles from the magma chamber. This yields information on poorly constrained parameters including the effective friction coefficient for the conduit, gas content, and bubble size in the chamber. We suggest that the observed flow transitions record changes in the amount and size of gas bubbles in the magma chamber at the conduit entry.
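A small sketch of the regime indicator implied by the quoted superficial-velocity ranges; the thresholds are taken directly from the abstract and the function name is hypothetical:

```python
def regime_from_superficial_velocity(q_liquid_m3_s, conduit_area_m2):
    """Rough flow-regime indicator from the melt superficial velocity
    (volume flux per unit cross section), using the ranges quoted above."""
    u_s = q_liquid_m3_s / conduit_area_m2
    if u_s < 1e-2:
        return u_s, "bubbly/slug range (Strombolian-type)"
    elif u_s < 1.0:
        return u_s, "transitional"
    return u_s, "annular range (Hawaiian-type)"

print(regime_from_superficial_velocity(0.5, 100.0))  # u_s = 5e-3 m/s
```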
WFIRST: Coronagraph Systems Engineering and Performance Budgets
NASA Astrophysics Data System (ADS)
Poberezhskiy, Ilya; cady, eric; Frerking, Margaret A.; Kern, Brian; Nemati, Bijan; Noecker, Martin; Seo, Byoung-Joon; Zhao, Feng; Zhou, Hanying
2018-01-01
The WFIRST coronagraph instrument (CGI) will be the first in-space coronagraph using active wavefront control to directly image and characterize mature exoplanets and zodiacal disks in reflected starlight. For CGI systems engineering, including requirements development, CGI performance is predicted using a hierarchy of performance budgets to estimate various noise components — spatial and temporal flux variations — that obscure exoplanet signals in direct imaging and spectroscopy configurations. These performance budgets are validated through robust integrated modeling and testbed model validation efforts. We present the performance budgeting framework used by WFIRST for the flow-down of coronagraph science requirements, mission constraints, and observatory interfaces to measurable instrument engineering parameters.
NASA Astrophysics Data System (ADS)
Yamanishi, Manabu
A combined experimental and computational investigation was performed in order to evaluate the effects of various design parameters of an in-line injection pump on the nozzle exit characteristics for DI diesel engines. Measurements of the pump chamber pressure and the delivery valve lift, obtained with specially designed transducers installed inside the pump, were included for validation. The results confirm that the simulation model is capable of predicting the pump operation for all the different designs and pump operating conditions investigated. Following the successful validation of this model, parametric studies were performed which allow for improved fuel injection system design.
NASA Astrophysics Data System (ADS)
Courchesne, Samuel
Knowledge of the dynamic characteristics of a fixed-wing UAV is necessary to design flight control laws and to build a high-quality flight simulator. The basic features of a flight mechanics model include the mass and inertia properties and the major aerodynamic terms. These are obtained through a complex process involving various numerical analysis techniques and experimental procedures. This thesis focuses on the analysis of techniques for estimating stability and control derivatives from flight test data provided by an experimental UAV. To achieve this objective, a modern identification methodology (Quad-M) is used to coordinate the processing tasks from multidisciplinary fields, such as modeling, parameter estimation, instrumentation, the definition of flight maneuvers, and validation. The system under study is a nonlinear six-degree-of-freedom model with a linear aerodynamic model. Time-domain techniques are used for identification of the drone. The first technique, the equation error method, is used to determine the structure of the aerodynamic model. Thereafter, the output error method and the filter error method are used to estimate the values of the aerodynamic coefficients. Matlab parameter estimation scripts obtained from the American Institute of Aeronautics and Astronautics (AIAA) are used and modified as necessary to achieve the desired results. A substantial part of this research is devoted to the design of experiments, including the onboard data acquisition system and the definition of flight maneuvers. The flight tests were conducted under stable flight conditions and with low atmospheric disturbance. Nevertheless, the identification results showed that the filter error method is the most effective for estimating the parameters of the drone, due to the presence of process and measurement noise. The aerodynamic coefficients are validated using a numerical vortex-method analysis. In addition, a simulation model incorporating the estimated parameters is used for comparison against the measured states. Finally, good agreement between the results is demonstrated despite a limited amount of flight data. Keywords: drone, identification, estimation, nonlinear, flight test, system, aerodynamic coefficient.
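The equation error method mentioned above is, at its core, a linear regression of measured state derivatives on states and inputs. A minimal sketch for a single-axis roll model (synthetic data, illustrative derivative names Lp and Lda, not values from the thesis):

```python
import numpy as np

# Equation-error sketch: regress the measured roll acceleration on the
# roll rate and aileron input to recover the roll-damping and
# aileron-effectiveness derivatives.
rng = np.random.default_rng(0)
dt, n = 0.01, 2000
Lp_true, Lda_true = -4.0, 12.0
p = np.zeros(n)
da = 0.1 * np.sign(np.sin(0.5 * np.arange(n) * dt * 2 * np.pi))  # square-wave aileron input
for k in range(n - 1):                        # simulate roll-rate dynamics
    p[k + 1] = p[k] + dt * (Lp_true * p[k] + Lda_true * da[k])
p_dot = np.gradient(p, dt) + 0.05 * rng.standard_normal(n)       # noisy "measurement"

A = np.column_stack([p, da])
Lp_hat, Lda_hat = np.linalg.lstsq(A, p_dot, rcond=None)[0]
print(Lp_hat, Lda_hat)   # should be close to -4 and 12
```

The output error and filter error methods extend this idea by integrating the model and, in the latter case, accounting for process noise, which is why they performed better on the real flight data.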
Burgansky-Eliash, Zvia; Wollstein, Gadi; Chu, Tianjiao; Ramsey, Joseph D.; Glymour, Clark; Noecker, Robert J.; Ishikawa, Hiroshi; Schuman, Joel S.
2007-01-01
Purpose Machine-learning classifiers are trained computerized systems with the ability to detect the relationship between multiple input parameters and a diagnosis. The present study investigated whether the use of machine-learning classifiers improves optical coherence tomography (OCT) glaucoma detection. Methods Forty-seven patients with glaucoma (47 eyes) and 42 healthy subjects (42 eyes) were included in this cross-sectional study. Of the glaucoma patients, 27 had early disease (visual field mean deviation [MD] ≥ −6 dB) and 20 had advanced glaucoma (MD < −6 dB). Machine-learning classifiers were trained to discriminate between glaucomatous and healthy eyes using parameters derived from OCT output. The classifiers were trained with all 38 parameters as well as with only 8 parameters that correlated best with the visual field MD. Five classifiers were tested: linear discriminant analysis, support vector machine, recursive partitioning and regression tree, generalized linear model, and generalized additive model. For the last two classifiers, a backward feature selection was used to find the minimal number of parameters that resulted in the best and most simple prediction. The cross-validated receiver operating characteristic (ROC) curve and accuracies were calculated. Results The largest area under the ROC curve (AROC) for glaucoma detection was achieved with the support vector machine using eight parameters (0.981). The sensitivity at 80% and 95% specificity was 97.9% and 92.5%, respectively. This classifier also performed best when judged by cross-validated accuracy (0.966). The best classification between early glaucoma and advanced glaucoma was obtained with the generalized additive model using only three parameters (AROC = 0.854). Conclusions Automated machine classifiers of OCT data might be useful for enhancing the utility of this technology for detecting glaucomatous abnormality. PMID:16249492
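A sketch of the cross-validated classifier evaluation described above, using a support vector machine and the area under the ROC curve; the data here are random stand-ins for the 8 OCT-derived parameters per eye, not the study data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Stand-in for 8 OCT-derived parameters per eye; label 1 = glaucoma
X, y = make_classification(n_samples=89, n_features=8, n_informative=5,
                           random_state=0)
clf = SVC(kernel="rbf", probability=True)
scores = cross_val_predict(clf, X, y, cv=10, method="predict_proba")[:, 1]
print("cross-validated AROC:", roc_auc_score(y, scores))
```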
A Bayesian Approach to Determination of F, D, and Z Values Used in Steam Sterilization Validation.
Faya, Paul; Stamey, James D; Seaman, John W
2017-01-01
For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the well-known D(T), z, and F(o) values that are used in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these values to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. LAY ABSTRACT: For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the critical process parameters that are evaluated in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these parameters to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. © PDA, Inc. 2017.
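To illustrate the Bayesian idea on the survivor curve method, here is a minimal grid-posterior sketch (not the authors' model) for the D value under the log-linear survivor relation log10 N(t) = log10 N0 - t/D, with synthetic data, a flat prior, and N0 treated as known:

```python
import numpy as np

# Survivor-curve data: exposure times (min) and log10 survivor counts (synthetic)
t = np.array([0., 2., 4., 6., 8.])
logN = np.array([6.0, 4.9, 3.8, 2.9, 1.9])

# Grid posterior for D with Gaussian residuals of known sigma
D_grid = np.linspace(0.5, 5.0, 500)
sigma = 0.1
logN0 = logN[0]                               # N0 treated as known here
loglik = np.array([-0.5 * np.sum((logN - (logN0 - t / D)) ** 2) / sigma**2
                   for D in D_grid])
post = np.exp(loglik - loglik.max())
post /= post.sum()                            # discrete posterior (flat prior on the grid)

mean_D = (D_grid * post).sum()
prob_D_above_2 = post[D_grid > 2.0].sum()     # a probabilistic statement about D
print(mean_D, prob_D_above_2)
```

The last quantity shows the kind of probabilistic conclusion the abstract refers to: instead of a point estimate of D, one can state the probability that D exceeds a value of concern.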
Limited-sampling strategies for anti-infective agents: systematic review.
Sprague, Denise A; Ensom, Mary H H
2009-09-01
Area under the concentration-time curve (AUC) is a pharmacokinetic parameter that represents overall exposure to a drug. For selected anti-infective agents, pharmacokinetic-pharmacodynamic parameters, such as AUC/MIC (where MIC is the minimal inhibitory concentration), have been correlated with outcome in a few studies. A limited-sampling strategy may be used to estimate pharmacokinetic parameters such as AUC, without the frequent, costly, and inconvenient blood sampling that would be required to directly calculate the AUC. To discuss, by means of a systematic review, the strengths, limitations, and clinical implications of published studies involving a limited-sampling strategy for anti-infective agents and to propose improvements in methodology for future studies. The PubMed and EMBASE databases were searched using the terms "anti-infective agents", "limited sampling", "optimal sampling", "sparse sampling", "AUC monitoring", "abbreviated AUC", "abbreviated sampling", and "Bayesian". The reference lists of retrieved articles were searched manually. Included studies were classified according to modified criteria from the US Preventive Services Task Force. Twenty studies met the inclusion criteria. Six of the studies (involving didanosine, zidovudine, nevirapine, ciprofloxacin, efavirenz, and nelfinavir) were classified as providing level I evidence, 4 studies (involving vancomycin, didanosine, lamivudine, and lopinavir-ritonavir) provided level II-1 evidence, 2 studies (involving saquinavir and ceftazidime) provided level II-2 evidence, and 8 studies (involving ciprofloxacin, nelfinavir, vancomycin, ceftazidime, ganciclovir, pyrazinamide, meropenem, and alpha interferon) provided level III evidence. All of the studies providing level I evidence used prospectively collected data and proper validation procedures with separate, randomly selected index and validation groups. However, most of the included studies did not provide an adequate description of the methods or the characteristics of included patients, which limited their generalizability. Many limited-sampling strategies have been developed for anti-infective agents that do not have a clearly established link between AUC and clinical outcomes in humans. Future studies should first determine if there is an association between AUC monitoring and clinical outcomes. Thereafter, it may be worthwhile to prospectively develop and validate a limited-sampling strategy for the particular anti-infective agent in a similar population.
Towards Personalized Cardiology: Multi-Scale Modeling of the Failing Heart
Amr, Ali; Neumann, Dominik; Georgescu, Bogdan; Seegerer, Philipp; Kamen, Ali; Haas, Jan; Frese, Karen S.; Irawati, Maria; Wirsz, Emil; King, Vanessa; Buss, Sebastian; Mereles, Derliz; Zitron, Edgar; Keller, Andreas; Katus, Hugo A.; Comaniciu, Dorin; Meder, Benjamin
2015-01-01
Background Despite modern pharmacotherapy and advanced implantable cardiac devices, the overall prognosis and quality of life of heart failure (HF) patients remain poor. This is in part due to insufficient patient stratification and lack of individualized therapy planning, resulting in less effective treatments and a significant number of non-responders. Methods and Results State-of-the-art clinical phenotyping was acquired, including magnetic resonance imaging (MRI) and biomarker assessment. An individualized, multi-scale model of heart function covering cardiac anatomy, electrophysiology, biomechanics and hemodynamics was estimated using a robust framework. The model was computed on n=46 HF patients, showing for the first time that advanced multi-scale models can be fitted consistently on large cohorts. Novel multi-scale parameters derived from the model of all cases were analyzed and compared against clinical parameters, cardiac imaging, lab tests and survival scores to evaluate the explicative power of the model and its potential for better patient stratification. Model validation was pursued by comparing clinical parameters that were not used in the fitting process against model parameters. Conclusion This paper illustrates how advanced multi-scale models can complement cardiovascular imaging and how they could be applied in patient care. Based on the obtained results, it becomes conceivable that, after thorough validation, such heart failure models could be applied for patient management and therapy planning in the future, as we illustrate in one patient of our cohort who received CRT-D implantation. PMID:26230546
2014-01-01
Background Patient-reported outcome validation needs to achieve validity and reliability standards. Among reliability analysis parameters, test-retest reliability is an important psychometric property. Retested patients must be in a clinically stable condition. This is particularly problematic in palliative care (PC) settings because advanced cancer patients are prone to a faster rate of clinical deterioration. The aim of this study was to evaluate the methods by which multi-symptom and health-related quality of life (HRQoL) patient-reported outcomes (PROs) have been validated in oncological PC settings with regard to test-retest reliability. Methods A systematic search of PubMed (1966 to June 2013), EMBASE (1980 to June 2013), PsychInfo (1806 to June 2013), CINAHL (1980 to June 2013), and SCIELO (1998 to June 2013), and specific PRO databases was performed. Studies were included if they described a set of validation studies for an instrument developed to measure multi-symptom or multidimensional HRQoL in advanced cancer patients under PC. The COSMIN checklist was used to rate the methodological quality of the study designs. Results We identified 89 validation studies from 746 potentially relevant articles. From those 89 articles, 31 measured test-retest reliability and were included in this review. Upon critical analysis of the overall quality of the criteria used to determine the test-retest reliability, 6 (19.4%), 17 (54.8%), and 8 (25.8%) of these articles were rated as good, fair, or poor, respectively, and no article was classified as excellent. Multi-symptom instruments were retested over a shorter interval than the HRQoL instruments (median values 24 hours and 168 hours, respectively; p = 0.001). Validation studies that included objective confirmation of clinical stability in their design yielded better results for the test-retest analysis with regard to both pain and global HRQoL scores (p < 0.05). The quality of the statistical analysis and its description were of great concern. Conclusion Test-retest reliability has been infrequently and poorly evaluated. The confirmation of clinical stability was an important factor in our analysis, and we suggest that special attention be focused on clinical stability when designing a PRO validation study that includes advanced cancer patients under PC. PMID:24447633
Lee, Myungmo; Song, Changho; Lee, Kyoungjin; Shin, Doochul; Shin, Seungho
2014-07-14
Treadmill gait analysis is more advantageous than over-ground walking because it allows continuous measurement of the gait parameters. The purpose of this study was to investigate the concurrent validity and the test-retest reliability of the OPTOGait photoelectric cell system against the treadmill-based gait analysis system by assessing spatio-temporal gait parameters. Twenty-six stroke patients and 18 healthy adults were asked to walk on the treadmill at their preferred speed. The concurrent validity was assessed by comparing data obtained from the 2 systems, and the test-retest reliability was determined by comparing data obtained from the 1st and the 2nd session of the OPTOGait system. The concurrent validity, identified by the intra-class correlation coefficients (ICC [2, 1]), coefficients of variation (CVME), and 95% limits of agreement (LOA) for the spatio-temporal gait parameters, was excellent, but it was poor for the temporal parameters expressed as a percentage of the gait cycle. The test-retest reliability of the OPTOGait system, identified by ICC (3, 1), CVME, 95% LOA, standard error of measurement (SEM), and minimum detectable change (MDC95%) for the spatio-temporal gait parameters, was high. These findings indicate that the treadmill-based OPTOGait system has strong concurrent validity and test-retest reliability. This portable system could be useful for clinical assessments.
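A sketch of the test-retest statistics named above, ICC(3,1) from a two-way ANOVA decomposition plus SEM and MDC95; the SEM convention (SD x sqrt(1-ICC)) is one of several in use, and the data are synthetic:

```python
import numpy as np

def icc31_sem_mdc(session1, session2):
    """Test-retest agreement: ICC(3,1) via two-way ANOVA mean squares,
    plus SEM and MDC95 (one value per subject per session)."""
    X = np.column_stack([session1, session2])
    n, k = X.shape
    subj_mean = X.mean(axis=1, keepdims=True)
    sess_mean = X.mean(axis=0, keepdims=True)
    grand = X.mean()
    ms_rows = k * ((subj_mean - grand) ** 2).sum() / (n - 1)       # between-subjects MS
    ss_err = ((X - subj_mean - sess_mean + grand) ** 2).sum()      # residual SS
    ms_err = ss_err / ((n - 1) * (k - 1))
    icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)        # ICC(3,1)
    sem = X.std(ddof=1) * np.sqrt(1 - icc)                         # one common SEM convention
    mdc95 = 1.96 * np.sqrt(2) * sem
    return icc, sem, mdc95

# Synthetic step-length values (m) from two OPTOGait-style sessions
s1 = np.array([1.10, 1.25, 0.98, 1.40, 1.05, 1.30])
s2 = np.array([1.12, 1.20, 1.00, 1.38, 1.08, 1.33])
print(icc31_sem_mdc(s1, s2))
```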
Determination of some phenolic compounds in red wine by RP-HPLC: method development and validation.
Burin, Vívian Maria; Arcari, Stefany Grützmann; Costa, Léa Luzia Freitas; Bordignon-Luiz, Marilde T
2011-09-01
A methodology employing reversed-phase high-performance liquid chromatography (RP-HPLC) was developed and validated for the simultaneous determination of five phenolic compounds in red wine. The chromatographic separation was carried out in a C(18) column with water acidified with acetic acid (pH 2.6) (solvent A) and 20% solvent A plus 80% acetonitrile (solvent B) as the mobile phase. The validation parameters included selectivity, linearity, range, limits of detection and quantitation, precision and accuracy, using an internal standard. All calibration curves were linear (R(2) > 0.999) within the range, and good precision (RSD < 2.6%) and recovery (80-120%) were obtained for all compounds. This method was applied to quantify phenolics in red wine samples from Santa Catarina State, Brazil, and good separation peaks for phenolic compounds in these wines were observed.
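A short sketch of the linearity and detection/quantitation-limit computations typical of such validations, using the common ICH-style estimates LOD = 3.3σ/slope and LOQ = 10σ/slope; the calibration data are synthetic:

```python
import numpy as np

# Calibration curve for one phenolic compound (synthetic data):
conc = np.array([1., 2., 5., 10., 20., 50.])          # mg/L
area = np.array([0.9, 2.1, 5.2, 10.3, 19.8, 50.4])    # peak-area ratio to internal standard

slope, intercept = np.polyfit(conc, area, 1)
resid = area - (slope * conc + intercept)
sigma = resid.std(ddof=2)                  # residual standard deviation (2 fitted params)

r2 = 1 - (resid ** 2).sum() / ((area - area.mean()) ** 2).sum()
print("R^2:", r2)
print("LOD:", 3.3 * sigma / slope, "LOQ:", 10 * sigma / slope)
```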
Tablet PC Enabled Body Sensor System for Rural Telehealth Applications
Panicker, Nitha V.; Kumar, A. Sukesh
2016-01-01
Telehealth systems benefit from the rapid growth of mobile communication technology for measuring physiological signals. Development and validation of a tablet PC enabled noninvasive body sensor system for rural telehealth application are discussed in this paper. This system includes real time continuous collection of physiological parameters (blood pressure, pulse rate, and temperature) and fall detection of a patient with the help of a body sensor unit and wireless transmission of the acquired information to a tablet PC handled by the medical staff in a Primary Health Center (PHC). Abnormal conditions are automatically identified and alert messages are given to the medical officer in real time. Clinical validation is performed in a real environment and found to be successful. Bland-Altman analysis is carried out to validate the wrist blood pressure sensor used. The system works well for all measurements. PMID:26884757
System Identification Applied to Dynamic CFD Simulation and Wind Tunnel Data
NASA Technical Reports Server (NTRS)
Murphy, Patrick C.; Klein, Vladislav; Frink, Neal T.; Vicroy, Dan D.
2011-01-01
Demanding aerodynamic modeling requirements for military and civilian aircraft have provided impetus for researchers to improve computational and experimental techniques. Model validation is a key component of these research endeavors, so this study is an initial effort to extend conventional time history comparisons by comparing model parameter estimates and their standard errors using system identification methods. An aerodynamic model of an aircraft performing one-degree-of-freedom roll oscillatory motion about its body axes is developed. The model includes linear aerodynamics and deficiency function parameters characterizing an unsteady effect. For estimation of the unknown parameters, two techniques, harmonic analysis and two-step linear regression, were applied to roll-oscillatory wind tunnel data and to computational fluid dynamics (CFD) simulated data. The model used for this study is a highly swept wing unmanned aerial combat vehicle. Differences in response prediction, parameter estimates, and standard errors are compared and discussed.
Rashid, Md Mamunur; Lee, Hyunbeom; Jung, Byung Hwa
2018-01-01
PP242 is a second generation novel selective ATP-competitive inhibitor of mTOR that displayed promising anti-cancer activity over several cancer types by inhibiting both the complexes of mTOR (mTORC1 and mTORC2). The purpose of this study is to identify the possible metabolites and to evaluate the pharmacokinetic profile of PP242 after a single oral administration to Sprague-Dawley (SD) rats. Two metabolites, including one phase I and one phase II, were identified by in vitro and in vivo studies using rat liver microsomes (RLMs) as well as rat plasma, urine and feces, respectively, through ultra high-performance liquid chromatography-linear ion trap quadrupole-orbitrap-mass spectrometry (UHPLC-LTQ-Orbitrap-MS). The major biotransformation pathways of PP242 were hydroxylation and glucuronide conjugation. Additionally, a simple and rapid quantification method was developed and validated. The method recovery was within 79.7-84.6%, whereas the matrix effect was 78.1-96.0% in all three quality control (QC) concentrations (low, medium and high) including the LLOQ. Other parameters showed acceptable results according to the US Food and Drug Administration (FDA) guidelines for bioanalytical method validation. Afterwards, pharmacokinetic parameters were evaluated in rat plasma by successfully applying the validated method using liquid chromatography-tandem mass spectrometry (LC-MS/MS). After a single oral administration at a dose of 5 mg/kg, the maximum plasma concentration (Cmax) of PP242 was 0.17±0.08 μg/mL, while the elimination was moderately fast (T1/2: 172.18±45.54 min). All of the obtained information on the metabolite identification and pharmacokinetic parameter elucidation could facilitate the further development of PP242. Copyright © 2017 Elsevier B.V. All rights reserved.
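For readers unfamiliar with the parameters reported above, here is a minimal non-compartmental sketch of how Cmax, Tmax, AUC and the terminal half-life are typically derived from a concentration-time profile; all values are synthetic, not from this study:

```python
import numpy as np

# Plasma concentration-time profile (synthetic values)
t = np.array([5, 15, 30, 60, 120, 240, 360, 480.])                # min
c = np.array([0.05, 0.12, 0.17, 0.15, 0.10, 0.06, 0.035, 0.02])   # ug/mL

cmax, tmax = c.max(), t[c.argmax()]
auc = ((c[1:] + c[:-1]) / 2 * np.diff(t)).sum()        # linear trapezoid AUC
lam_z = -np.polyfit(t[-4:], np.log(c[-4:]), 1)[0]      # terminal elimination rate
t_half = np.log(2) / lam_z
print(cmax, tmax, auc, t_half)
```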
Smart, Jonathan J.; Chin, Andrew; Baje, Leontine; Green, Madeline E.; Appleyard, Sharon A.; Tobin, Andrew J.; Simpfendorfer, Colin A.; White, William T.
2016-01-01
Fisheries observer programs are used around the world to collect crucial information and samples that inform fisheries management. However, observer error may misidentify similar-looking shark species. This raises questions about the level of error that species misidentifications could introduce to estimates of species' life history parameters. This study addressed these questions using the Grey Reef Shark Carcharhinus amblyrhynchos as a case study. Observer misidentification rates were quantified by validating species identifications using diagnostic photographs taken on board, supplemented with DNA barcoding. Length-at-age and maturity ogive analyses were then estimated and compared with and without the misidentified individuals. Vertebrae were retained from a total of 155 sharks identified by observers as C. amblyrhynchos. However, 22 (14%) of these sharks were misidentified by the observers and were subsequently re-identified based on photographs and/or DNA barcoding. Of the 22 individuals misidentified as C. amblyrhynchos, 16 (73%) were detected using photographs and a further 6 via genetic validation. If misidentified individuals had been included, substantial error would have been introduced to both the length-at-age and the maturity estimates. Thus, validating the species identification increased the accuracy of estimated life history parameters for C. amblyrhynchos. From the corrected sample a multi-model inference approach was used to estimate growth for C. amblyrhynchos using three candidate models. The model averaged length-at-age parameters for C. amblyrhynchos with the sexes combined were L∞ = 159 cm TL and L0 = 72 cm TL. Females mature at a greater length (l50 = 136 cm TL) and older age (A50 = 9.1 years) than males (l50 = 123 cm TL; A50 = 5.9 years). The inclusion of techniques to reduce misidentification in observer programs will improve the results of life history studies and ultimately improve management through the use of more accurate data for assessments. PMID:27058734
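As an illustration of the growth-model fitting step, here is a sketch of a von Bertalanffy fit (in the length-at-birth parameterization common in shark studies) with an AIC value of the kind used for multi-model averaging; the length-at-age data are synthetic, not the study's:

```python
import numpy as np
from scipy.optimize import curve_fit

def vbgf(age, Linf, L0, k):
    """von Bertalanffy growth function parameterised with length at birth L0."""
    return Linf - (Linf - L0) * np.exp(-k * age)

# Synthetic length-at-age data standing in for the corrected sample
age = np.array([0.5, 1, 2, 4, 6, 8, 10, 12, 15.])
length = np.array([78, 85, 96, 115, 128, 138, 145, 150, 155.])   # cm TL

popt, _ = curve_fit(vbgf, age, length, p0=[160, 72, 0.15])
rss = ((length - vbgf(age, *popt)) ** 2).sum()
n, p = len(age), 3
aic = n * np.log(rss / n) + 2 * p      # basis for comparing/averaging candidate models
print("Linf, L0, k =", popt, " AIC =", aic)
```

In the multi-model approach, the same AIC computation would be repeated for the other candidate growth functions and the parameters averaged by AIC weights.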
Comprehensive analysis of transport aircraft flight performance
NASA Astrophysics Data System (ADS)
Filippone, Antonio
2008-04-01
This paper reviews the state-of-the art in comprehensive performance codes for fixed-wing aircraft. The importance of system analysis in flight performance is discussed. The paper highlights the role of aerodynamics, propulsion, flight mechanics, aeroacoustics, flight operation, numerical optimisation, stochastic methods and numerical analysis. The latter discipline is used to investigate the sensitivities of the sub-systems to uncertainties in critical state parameters or functional parameters. The paper discusses critically the data used for performance analysis, and the areas where progress is required. Comprehensive analysis codes can be used for mission fuel planning, envelope exploration, competition analysis, a wide variety of environmental studies, marketing analysis, aircraft certification and conceptual aircraft design. A comprehensive program that uses the multi-disciplinary approach for transport aircraft is presented. The model includes a geometry deck, a separate engine input deck with the main parameters, a database of engine performance from an independent simulation, and an operational deck. The comprehensive code has modules for deriving the geometry from bitmap files, an aerodynamics model for all flight conditions, a flight mechanics model for flight envelopes and mission analysis, an aircraft noise model and engine emissions. The model is validated at different levels. Validation of the aerodynamic model is done against the scale models DLR-F4 and F6. A general model analysis and flight envelope exploration are shown for the Boeing B-777-300 with GE-90 turbofan engines with intermediate passenger capacity (394 passengers in 2 classes). Validation of the flight model is done by sensitivity analysis on the wetted area (or profile drag), on the specific air range, the brake-release gross weight and the aircraft noise. A variety of results is shown, including specific air range charts, take-off weight-altitude charts, payload-range performance, atmospheric effects, economic Mach number and noise trajectories at F.A.R. landing points.
Development and Validation of a 3-Dimensional CFB Furnace Model
NASA Astrophysics Data System (ADS)
Vepsäläinen, Arl; Myöhänen, Karl; Hyppäneni, Timo; Leino, Timo; Tourunen, Antti
At Foster Wheeler, a three-dimensional CFB furnace model is an essential part of knowledge development for the CFB furnace process regarding solid mixing, combustion, emission formation and heat transfer. Results of laboratory and pilot-scale phenomenon research are utilized in the development of sub-models. Analyses of field-test results in industrial-scale CFB boilers, including furnace profile measurements, are carried out simultaneously with the development of 3-dimensional process modeling, providing a chain of knowledge that is fed back into phenomenon research. Knowledge gathered in model validation studies and up-to-date parameter databases is utilized in performance prediction and design development of CFB boiler furnaces. This paper reports recent development steps related to the modeling of combustion and of char and volatiles formation for various fuel types under CFB conditions. A new model for predicting the formation of nitrogen oxides is also presented. Validation of mixing and combustion parameters for solids and gases is based on test balances at several large-scale CFB boilers combusting coal, peat and bio-fuels. Field tests, including lateral and vertical furnace profile measurements and characterization of solid materials, provide a window into fuel-specific mixing and combustion behavior in the CFB furnace at different loads and operating conditions. Measured horizontal gas profiles are a projection of the balance between fuel mixing and reactions in the lower part of the furnace, and they are used together with lateral temperature profiles at the bed and in the upper parts of the furnace for determination of solid mixing and combustion model parameters. Modeling of char- and volatile-based formation of NO profiles is followed by analysis of the oxidizing and reducing regions formed by the lower furnace design and the mixing characteristics of fuel and combustion air, which affect the NO furnace profile through reduction and volatile-nitrogen reactions. This paper presents a CFB process analysis focused on combustion and NO profiles in pilot- and industrial-scale bituminous coal combustion.
NASA Astrophysics Data System (ADS)
Stockhoff, Mariele; Jan, Sebastien; Dubois, Albertine; Cherry, Simon R.; Roncali, Emilie
2017-06-01
Typical PET detectors are composed of a scintillator coupled to a photodetector that detects scintillation photons produced when high energy gamma photons interact with the crystal. A critical performance factor is the collection efficiency of these scintillation photons, which can be optimized through simulation. Accurate modelling of photon interactions with crystal surfaces is essential in optical simulations, but the existing UNIFIED model in GATE is often inaccurate, especially for rough surfaces. Previously, a new approach for modelling surface reflections based on measured surfaces was validated using custom Monte Carlo code. In this work, the LUT Davis model is implemented and validated in GATE and GEANT4, and is made accessible for all users in the nuclear imaging research community. Look-up-tables (LUTs) from various crystal surfaces are calculated based on measured surfaces obtained by atomic force microscopy. The LUTs include photon reflection probabilities and directions depending on incidence angle. We provide LUTs for rough and polished surfaces with different reflectors and coupling media. Validation parameters include light output measured at different depths of interaction in the crystal and photon track lengths, as both parameters are strongly dependent on reflector characteristics and distinguish between models. Results from the GATE/GEANT4 beta version are compared to those from our custom code and experimental data, as well as the UNIFIED model. GATE simulations with the LUT Davis model show average variations in light output of <2% from the custom code and excellent agreement for track lengths with R² > 0.99. Experimental data agree within 9% for relative light output. The new model also simplifies surface definition, as no complex input parameters are needed. The LUT Davis model makes optical simulations for nuclear imaging detectors much more precise, especially for studies with rough crystal surfaces. It will be available in GATE V8.0.
Stability evaluation of quality parameters for palm oil products at low temperature storage.
Ramli, Nur Aainaa Syahirah; Mohd Noor, Mohd Azmil; Musa, Hajar; Ghazali, Razmah
2018-07-01
Palm oil is one of the major oils and fats produced and traded worldwide. The value of palm oil products is mainly influenced by their quality. According to ISO 17025:2005, accredited laboratories require a quality control procedure with respect to monitoring the validity of tests for determination of quality parameters. This includes the regular use of internal quality control using secondary reference materials. Unfortunately, palm oil reference materials are not currently available. To establish internal quality control samples, the stability of quality parameters needs to be evaluated. In the present study, the stability of quality parameters for palm oil products was examined over 10 months at low temperature storage (6 ± 2 °C). The palm oil products tested included crude palm oil (CPO); refined, bleached and deodorized (RBD) palm oil (RBDPO); RBD palm olein (RBDPOo); and RBD palm stearin (RBDPS). The quality parameters of the oils [i.e. moisture content, free fatty acid content (FFA), iodine value (IV), fatty acids composition (FAC) and slip melting point (SMP)] were determined prior to and throughout the storage period. The moisture, FFA, IV, FAC and SMP for palm oil products changed significantly (P < 0.05), whereas the moisture content for CPO, IV for RBDPO and RBDPOo, stearic acid composition for CPO and linolenic acid composition for CPO, RBDPO, RBDPOo and RBDPS did not (P > 0.05). The stability study indicated that the quality of the palm oil products was stable within the specified limits throughout the storage period at low temperature. The storage conditions preserved the quality of palm oil products throughout the storage period. These findings qualify the use of the palm oil products CPO, RBDPO, RBDPOo and RBDPS as control samples in the validation of test results. © 2017 Society of Chemical Industry.
Calculation of Optical Parameters of Liquid Crystals
NASA Astrophysics Data System (ADS)
Kumar, A.
2007-12-01
Validation of a modified four-parameter model describing the temperature effect on liquid crystal refractive indices is reported in the present article. The model is based upon the Vuks equation. Experimental data for the ordinary and extraordinary refractive indices of two liquid crystal samples, MLC-9200-000 and MLC-6608, are used to validate the above-mentioned theoretical model. Using these experimental data, the birefringence, order parameter, normalized polarizabilities, and the temperature gradient of the refractive indices are determined. Two methods, directly using birefringence measurements and using Haller's extrapolation procedure, are adopted for the determination of the order parameter. Both approaches to order parameter calculation are compared. The temperature dependences of all these parameters are discussed. A close agreement between theory and experiment is obtained.
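A sketch of the standard four-parameter form this family of models takes, with the mean index decreasing linearly in temperature and the birefringence following a Haller-type power law; the parameter values below are illustrative, not fitted constants for MLC-9200-000 or MLC-6608:

```python
import numpy as np

def lc_indices(T, A, B, dn0, beta, Tc):
    """Four-parameter model (Vuks-equation treatment): mean index
    <n> = A - B*T, birefringence dn = dn0*(1 - T/Tc)**beta, so that
    n_e = <n> + 2*dn/3 and n_o = <n> - dn/3."""
    n_mean = A - B * T
    dn = dn0 * (1.0 - T / Tc) ** beta
    return n_mean + 2 * dn / 3, n_mean - dn / 3   # (n_e, n_o)

T = np.linspace(290, 340, 6)                      # K, below the clearing point Tc
ne, no = lc_indices(T, A=1.78, B=5e-4, dn0=0.30, beta=0.25, Tc=350.0)
print(np.round(ne, 4), np.round(no, 4))
```

Differentiating these expressions with respect to T gives the temperature gradients of the refractive indices discussed in the abstract.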
Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.
Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo
2017-06-01
Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution- and error-surfaces-based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also show that CV-SES uses less running time.
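For context, the baseline that CV-SES improves upon is a plain two-dimensional grid search with K-fold CV over the two cost parameters. A sketch of that baseline (not of CV-SES itself), using class weights as the second cost dimension on a synthetic imbalanced dataset:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import make_classification

# Two-dimensional grid: overall C and the positive-class cost ratio
X, y = make_classification(n_samples=300, weights=[0.8, 0.2], random_state=0)
grid = {"C": np.logspace(-2, 2, 9),
        "class_weight": [{0: 1, 1: w} for w in (1, 2, 5, 10)]}
search = GridSearchCV(SVC(kernel="linear"), grid, cv=5).fit(X, y)
print(search.best_params_, search.best_score_)
```

The point of the paper is that such a grid only samples the two-parameter plane, whereas the solution surface fits CS-SVM for all values of both parameters and so can locate the true global minimum of the CV error.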
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, E. S. M.; McEwen, M. R.; Rogers, D. W. O.
2012-11-15
Purpose: In a recent computational study, an improved physics-based approach was proposed for unfolding linac photon spectra and incident electron energies from transmission data. In this approach, energy differentiation is improved by simultaneously using transmission data for multiple attenuators and detectors, and the unfolding robustness is improved by using a four-parameter functional form to describe the photon spectrum. The purpose of the current study is to validate this approach experimentally, and to demonstrate its application on a typical clinical linac. Methods: The validation makes use of the recent transmission measurements performed on the Vickers research linac of National Research Council Canada. For this linac, the photon spectra were previously measured using a NaI detector, and the incident electron parameters are independently known. The transmission data are for eight beams in the range 10-30 MV using thick Be, Al and Pb bremsstrahlung targets. To demonstrate the approach on a typical clinical linac, new measurements are performed on an Elekta Precise linac for 6, 10 and 25 MV beams. The different experimental setups are modeled using EGSnrc, with the newly added photonuclear attenuation included. Results: For the validation on the research linac, the 95% confidence bounds of the unfolded spectra fall within the noise of the NaI data. The unfolded spectra agree with the EGSnrc spectra (calculated using independently known electron parameters) with RMS energy fluence deviations of 4.5%. The accuracy of unfolding the incident electron energy is shown to be approximately 3%. A transmission cutoff of only 10% is suitable for accurate unfolding, provided that the other components of the proposed approach are implemented. For the demonstration on a clinical linac, the unfolded incident electron energies and their 68% confidence bounds for the 6, 10 and 25 MV beams are 6.1 ± 0.1, 9.3 ± 0.1, and 19.3 ± 0.2 MeV, respectively. The unfolded spectra for the clinical linac agree with the EGSnrc spectra (calculated using the unfolded electron energies) with RMS energy fluence deviations of 3.7%. The corresponding measured and EGSnrc-calculated transmission data agree within 1.5%, where the typical transmission measurement uncertainty on the clinical linac is 0.4% (not including the uncertainties on the incident electron parameters). Conclusions: The approach proposed in an earlier study for unfolding photon spectra and incident electron energies from transmission data is accurate and practical for clinical use.
Arbitrary temporal shape pulsed fiber laser based on SPGD algorithm
NASA Astrophysics Data System (ADS)
Jiang, Min; Su, Rongtao; Zhang, Pengfei; Zhou, Pu
2018-06-01
A novel adaptive pulse shaping method for a pulsed master oscillator power amplifier fiber laser to deliver an arbitrary pulse shape is demonstrated. Numerical simulation has been performed to validate the feasibility of the scheme and provide meaningful guidance for the design of the algorithm control parameters. In the proof-of-concept experiment, information on the temporal properties of the laser is exchanged and evaluated through a local area network, and the system automatically adjusts the parameters of the seed laser according to the monitored output. Various pulse shapes, including a rectangular shape, 'M' shape, and elliptical shape, are achieved through experimental iterations.
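A minimal sketch of a stochastic parallel gradient descent (SPGD) loop of the kind the abstract names, driving a toy amplifier response toward a target temporal shape; the gain, probe size, and "laser" model are all illustrative assumptions, not the paper's values:

```python
import numpy as np

def spgd(measure_shape, target, n_params, gain=0.01, delta=0.01, iters=3000):
    """SPGD sketch: apply a random bipolar perturbation to the seed
    modulation parameters, probe the shape error at +/- perturbations,
    and step along the resulting gradient estimate."""
    u = np.full(n_params, 0.5)
    for _ in range(iters):
        d = delta * np.random.choice([-1.0, 1.0], n_params)
        j_plus = np.sum((measure_shape(u + d) - target) ** 2)
        j_minus = np.sum((measure_shape(u - d) - target) ** 2)
        u -= gain * (j_plus - j_minus) * d / (2 * delta**2)   # gradient estimate
        u = np.clip(u, 0.0, 1.0)
    return u

# Toy "laser": the output pulse shape is a saturated version of the drive
target = np.linspace(0.9, 0.2, 32)            # desired ramp ("arbitrary") shape
drive_to_pulse = lambda u: np.tanh(2.0 * u)   # stands in for the amplifier response
u_opt = spgd(drive_to_pulse, target, n_params=32)
print(np.max(np.abs(drive_to_pulse(u_opt) - target)))   # residual shape error
```

In the experiment, measure_shape would be replaced by an actual acquisition of the output pulse over the local area network, which is what makes SPGD attractive: it needs only scalar error evaluations, not a model of the amplifier.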
NASA Technical Reports Server (NTRS)
Greenstadt, E. W.
1975-01-01
The validity of a suggested model is investigated, according to which Pc 3 and/or Pc 4 micropulsations are excited by magnetosheath field (and plasma) fluctuations arising in the quasi-parallel structure of the subsolar bow shock. The influence of solar wind plasma parameters on local shock structure and on the configuration of the entire bow shock system is included. Simultaneous data from two or more spacecraft and from multiple diagnostics are used to evaluate the geometrical factor, the field-to-shock normal angle (or its B-X equivalent), and the principal plasma parameters. Results are presented and discussed.
Objectification of steering feel around straight-line driving for vehicle/tyre design
NASA Astrophysics Data System (ADS)
Kim, Jungsik; Yoon, Yong-San
2015-02-01
This paper presents objectification techniques for the assessment of steering feel, including on-centre feel and steering response, from measurement data. New objective parameters are developed by considering not only the process by which steering feel is evaluated subjectively but also the ergonomic perceptive sensitivity of the driver. In order to validate these objective parameters, subjective tests are carried out by professional drivers. Objective measurements are also performed for several cars at a proving ground. The linear correlation coefficients between the subjective ratings and the objective parameters are calculated. One of the new objective parameters, a steering wheel angle defined by ergonomic perception sensitivity, shows high correlation with the subjective questionnaires on on-centre response. A newly defined steering torque curvature also shows high correlation with the subjective questionnaires on on-centre effort. These correlation results indicate that the subjective assessment of steering feel can be successfully explained and objectified by means of the suggested objective parameters.
Errors in reporting on dissolution research: methodological and statistical implications.
Jasińska-Stroschein, Magdalena; Kurczewska, Urszula; Orszulak-Michalak, Daria
2017-02-01
In vitro dissolution testing provides useful information at clinical and preclinical stages of the drug development process. The study includes pharmaceutical papers on dissolution research published in Polish journals between 2010 and 2015. They were analyzed with regard to information provided by authors about chosen methods, performed validation, statistical reporting or assumptions used to properly compare release profiles considering the present guideline documents addressed to dissolution methodology and its validation. Of all the papers included in the study, 23.86% presented at least one set of validation parameters, 63.64% gave the results of the weight uniformity test, 55.68% content determination, 97.73% dissolution testing conditions, and 50% discussed a comparison of release profiles. The assumptions for methods used to compare dissolution profiles were discussed in 6.82% of papers. By means of example analyses, we demonstrate that the outcome can be influenced by the violation of several assumptions or selection of an improper method to compare dissolution profiles. A clearer description of the procedures would undoubtedly increase the quality of papers in this area.
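One of the standard methods for comparing release profiles that such papers should report correctly is the f2 similarity factor; a short sketch of its usual definition (f2 >= 50 is the customary similarity criterion), with synthetic percent-dissolved data:

```python
import numpy as np

def f2_similarity(ref, test):
    """f2 similarity factor for two dissolution profiles
    (percent dissolved at matched time points)."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    msd = np.mean((ref - test) ** 2)            # mean squared difference
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

print(f2_similarity([20, 45, 70, 90], [18, 42, 68, 88]))  # ~80, i.e. similar
```

The assumptions behind f2 (limited variability, restricted number of points past 85% dissolution) are exactly the kind of conditions the review found to be rarely discussed.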
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Zhenyu; Du, Pengwei; Kosterev, Dmitry
2013-05-01
Disturbance data recorded by phasor measurement units (PMUs) offer opportunities to improve the integrity of dynamic models. However, manually tuning parameters through play-back of events demands significant effort and engineering experience. In this paper, a calibration method using the extended Kalman filter (EKF) technique is proposed. The formulation of the EKF with parameter calibration is discussed. Case studies are presented to demonstrate its validity. The proposed calibration method is cost-effective and complementary to traditional equipment testing for improving dynamic model quality.
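A common way to formulate EKF-based parameter calibration is to augment the dynamic state with the unknown parameter, modelled as a random walk, and let the filter estimate it from the measurements. The following is a minimal sketch of that idea on a toy second-order model (not the paper's power-system model); the damping parameter d, noise levels, and measurement are all synthetic assumptions:

```python
import numpy as np

# Joint state/parameter EKF sketch: augmented state s = [x, v, d],
# where d is an unknown damping parameter calibrated from noisy
# "PMU-like" measurements of x. Truth: x'' = -x - d*x'.
dt, n, d_true = 0.02, 800, 0.6
rng = np.random.default_rng(1)

x = np.zeros((n, 2)); x[0] = [1.0, 0.0]
for k in range(n - 1):
    a = -x[k, 0] - d_true * x[k, 1]
    x[k + 1] = x[k] + dt * np.array([x[k, 1], a])
z = x[:, 0] + 0.02 * rng.standard_normal(n)        # measured position

s = np.array([0.8, 0.0, 0.1])                      # initial guess for [x, v, d]
P = np.diag([0.1, 0.1, 1.0]); Q = np.diag([0.0, 0.0, 1e-6]); R = 0.02**2
H = np.array([[1.0, 0.0, 0.0]])
for k in range(n):
    # predict: d is a random-walk state; F is the Jacobian at the current estimate
    F = np.array([[1.0, dt, 0.0],
                  [-dt, 1.0 - dt * s[2], -dt * s[1]],
                  [0.0, 0.0, 1.0]])
    s = s + dt * np.array([s[1], -s[0] - s[2] * s[1], 0.0])
    P = F @ P @ F.T + Q
    # update with the measurement
    K = P @ H.T / (H @ P @ H.T + R)
    s = s + (K * (z[k] - s[0])).ravel()
    P = (np.eye(3) - K @ H) @ P
print("estimated damping:", s[2])   # should approach d_true = 0.6
```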
NASA Astrophysics Data System (ADS)
Moussaoui, H.; Debayle, J.; Gavet, Y.; Delette, G.; Hubert, M.; Cloetens, P.; Laurencin, J.
2017-03-01
A strong correlation exists between the performance of Solid Oxide Cells (SOCs), working either in fuel cell or electrolysis mode, and their electrode microstructure. However, the basic relationships between the three-dimensional characteristics of the microstructure and the electrode properties are still not precisely understood. Several studies have therefore been proposed recently in an attempt to improve knowledge of such relations, which is essential before optimizing the microstructure and, hence, designing more efficient SOC electrodes. In that frame, an original model has been adapted to generate virtual 3D microstructures of typical SOC electrodes. Both the oxygen electrode, made of porous LSCF, and the hydrogen electrode, made of porous Ni-YSZ, have been studied. In this work, the synthetic microstructures are generated by a so-called 3D Gaussian 'random field model'. The morphological representativeness of the virtual porous media has been validated on real 3D electrode microstructures of a commercial cell, obtained by X-ray nano-tomography at the European Synchrotron Radiation Facility (ESRF). This validation step includes the comparison of morphological parameters, such as the phase covariance function and granulometry, as well as physical parameters, such as the 'apparent tortuosity'. Finally, this validated tool will be used in forthcoming studies to identify the optimal microstructure of SOCs.
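As a sketch of the thresholded Gaussian random field idea (not the authors' exact covariance model; the correlation length and porosity values are illustrative assumptions), a two-phase 3D microstructure can be generated by smoothing white noise and thresholding it at a quantile matching the target pore fraction:

import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_field_microstructure(shape=(64, 64, 64), corr_sigma=3.0,
                                  porosity=0.35, seed=0):
    """Generate a two-phase 3D microstructure by thresholding a correlated
    Gaussian random field (white noise smoothed by a Gaussian kernel).

    corr_sigma controls the correlation length (in voxels); the threshold is
    taken from the field's empirical quantile so the pore fraction matches
    the target porosity.
    """
    rng = np.random.default_rng(seed)
    field = gaussian_filter(rng.standard_normal(shape), corr_sigma)
    threshold = np.quantile(field, porosity)
    return field < threshold      # True = pore voxel, False = solid voxel

pores = gaussian_field_microstructure()
print(f"realized porosity: {pores.mean():.3f}")

Validation against real tomography data would then compare covariance functions and granulometry computed on both the synthetic and measured voxel grids.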
DOE Office of Scientific and Technical Information (OSTI.GOV)
English, Shawn Allen; Nelson, Stacy Michelle; Briggs, Timothy
Presented is a model verification and validation effort using low-velocity impact (LVI) experiments on carbon fiber reinforced polymer laminates. A flat cylindrical indenter impacts the laminate with enough energy to produce delamination, matrix cracks, and fiber breaks. The experimental efforts include ultrasonic scans of the damage for qualitative validation of the models; however, the primary quantitative metrics of validation are the force time history measured through the instrumented indenter and the initial and final velocities. The simulations, which are run on Sandia's Sierra finite element codes, consist of all physics and material parameters of importance as determined by a sensitivity analysis conducted on the LVI simulation. A novel orthotropic damage and failure constitutive model that is capable of predicting progressive composite damage and failure is described in detail, and material properties are measured, estimated from micromechanics, or optimized through calibration. A thorough verification and calibration to the accompanying experiments are presented, with special emphasis given to the four-point bend experiment. For all simulations of interest, the mesh and material behavior are verified through extensive convergence studies. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution, which is then compared to experimental output. The result is a quantifiable confidence in material characterization and model physics when simulating this phenomenon in structures of interest.
Lee, Ji Sun; Cho, Soo Hee; Lim, Chae Mi; Chang, Moon Ik; Joo, Hyun Jin; Park, Hyun Jin
2017-01-01
A confirmatory and quantitative liquid chromatography-tandem mass spectrometry (LC-MS/MS) method for the determination of mebendazole and its hydrolyzed and reduced metabolites in pork, chicken, and horse muscle was developed and validated in this study. Anthelmintic compounds were extracted with ethyl acetate after the sample mixture was made alkaline, followed by liquid chromatographic separation using a reversed-phase C18 column. Gradient elution was performed with a mobile phase consisting of water containing 10 mM ammonium formate and methanol. The confirmatory method was validated according to EU requirements. Evaluated validation parameters included specificity, accuracy, precision (repeatability and within-laboratory reproducibility), analytical limits (decision limit and detection capability), and applicability. Most parameters were shown to conform to the EU requirements. The decision limit (CCα) and detection capability (CCβ) for all analytes ranged from 15.84 to 17.96 μg kg-1. The limit of detection (LOD) and the limit of quantification (LOQ) for all analytes were 0.07 μg kg-1 and 0.2 μg kg-1, respectively. The developed method was successfully applied to monitoring samples collected from markets in major cities and has proved to have great potential as a regulatory tool for determining mebendazole residues in animal-based foods. PMID:28085912
Validity and reliability of the session-RPE method for quantifying training load in karate athletes.
Tabben, M; Tourny, C; Haddad, M; Chaabane, H; Chamari, K; Coquart, J B
2015-04-24
To test the construct validity and reliability of the session rating of perceived exertion (sRPE) method by examining the relationship between RPE and physiological parameters (heart rate, HR, and blood lactate concentration, [La-]) and the correlations between sRPE and two HR-based methods for quantifying internal training load (Banister's method and Edwards's method) during a karate training camp. Eighteen elite karate athletes, ten men (age: 24.2 ± 2.3 y, body mass: 71.2 ± 9.0 kg, body fat: 8.2 ± 1.3%, height: 178 ± 7 cm) and eight women (age: 22.6 ± 1.2 y, body mass: 59.8 ± 8.4 kg, body fat: 20.2 ± 4.4%, height: 169 ± 4 cm), were included in the study. During the training camp, subjects participated in eight karate training sessions covering three training modes (4 tactical-technical, 2 technical-development, and 2 randori training), during which RPE, HR, and [La-] were recorded. Significant correlations were found between RPE and the physiological parameters (percentage of maximal HR: r = 0.75, 95% CI = 0.64-0.86; [La-]: r = 0.62, 95% CI = 0.49-0.75; P < 0.001). Moreover, individual sRPE was significantly correlated with both HR-based methods for quantifying internal training load (r = 0.65-0.95; P < 0.001). The sRPE method also showed high reliability for the same intensity across training sessions (Cronbach's α = 0.81, 95% CI = 0.61-0.92). This study demonstrates that the sRPE method is valid for quantifying internal training load and intensity in karate.
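For reference, the three training load measures compared above can be computed as follows. This is a minimal sketch: the Banister and Edwards weightings shown are the commonly quoted forms and should be verified against the original sources.

import numpy as np

def session_rpe_load(rpe, duration_min):
    """Session training load: RPE (CR-10 scale) multiplied by session duration."""
    return rpe * duration_min

def banister_trimp(duration_min, hr_avg, hr_rest, hr_max, male=True):
    """Banister's TRIMP in a commonly quoted form; the exponential weighting
    (0.64*e^(1.92x) for men, 0.86*e^(1.67x) for women) is an assumption to be
    checked against the original reference."""
    x = (hr_avg - hr_rest) / (hr_max - hr_rest)   # fractional HR reserve
    b, c = (0.64, 1.92) if male else (0.86, 1.67)
    return duration_min * x * b * np.exp(c * x)

def edwards_trimp(minutes_in_zones):
    """Edwards's method: minutes spent in the 50-60% ... 90-100% HRmax zones,
    weighted 1 through 5 and summed."""
    return sum(w * t for w, t in zip(range(1, 6), minutes_in_zones))

print(session_rpe_load(6, 90))                     # e.g. a 90-min randori session
print(round(banister_trimp(90, 160, 60, 195), 1))
print(edwards_trimp([10, 20, 30, 20, 10]))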
Development and Evaluation of a Sandia Cooler-based Refrigerator Condenser
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Terry A.; Kariya, Harumichi Arthur; Leick, Michael T.
This report describes the first design of a refrigerator condenser using the Sandia Cooler, i.e., an air-bearing-supported rotating heat-sink impeller. The project included baseline performance testing of a residential refrigerator, analysis and design development of a Sandia Cooler condenser assembly including a spiral-channel baseplate, and performance measurement and validation of this condenser system as incorporated into the residential refrigerator. Comparable performance was achieved in a 60% smaller volume package. The improved modeling parameters can now be used to guide more optimized designs and more accurately predict performance.
40 CFR 63.5725 - What are the requirements for monitoring and demonstrating continuous compliance?
Code of Federal Regulations, 2014 CFR
2014-07-01
... Pollutants for Boat Manufacturing Demonstrating Compliance for Open Molding Operations Controlled by Add-on... successive cycles of operation to have a valid hour of data. (2) You must have valid data from at least 90... parameter monitoring system and collect emission capture system and add-on control device parameter data at...
40 CFR 63.5725 - What are the requirements for monitoring and demonstrating continuous compliance?
Code of Federal Regulations, 2013 CFR
2013-07-01
... Pollutants for Boat Manufacturing Demonstrating Compliance for Open Molding Operations Controlled by Add-on... successive cycles of operation to have a valid hour of data. (2) You must have valid data from at least 90... parameter monitoring system and collect emission capture system and add-on control device parameter data at...
Díaz-Rodríguez, Miguel; Valera, Angel; Page, Alvaro; Besa, Antonio; Mata, Vicente
2016-05-01
Accurate knowledge of body segment inertia parameters (BSIP) improves the assessment of dynamic analyses based on biomechanical models, which is of paramount importance in fields such as sports activities or crash impact testing. Early approaches to BSIP identification relied on experiments conducted on cadavers or on imaging techniques applied to living subjects; more recent approaches rely on inverse dynamic modeling. However, most approaches focus on the entire body, and verification of BSIP for the dynamic analysis of a distal segment or chain of segments, which has proven to be of significant importance in impact test studies, is rarely established. Previous studies have suggested that BSIP should be obtained using subject-specific identification techniques. To this end, our paper develops a novel approach for estimating subject-specific BSIP based on static and dynamic identification models (SIM, DIM). We test the validity of SIM and DIM by comparing the results with parameters obtained from the regression model proposed by De Leva (1996, "Adjustments to Zatsiorsky-Seluyanov's Segment Inertia Parameters," J. Biomech., 29(9), pp. 1223-1230). Both SIM and DIM are developed using a robotics formalism. First, the static model allows the mass and center of gravity (COG) to be estimated. Second, the results from the static model are included in the dynamics equations, allowing us to estimate the moment of inertia (MOI). As a case study, we applied the approach to evaluate the dynamic modeling of the head complex. The findings provide insight into the validity not only of the proposed method but also of applying the De Leva (1996) regression model to the dynamic modeling of body segments.
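A minimal sketch of the static identification step (the SIM idea): with force/torque measurements from several static poses, the mass follows from |F| = m*g and the COG from the linear relation tau = r x F, solved in least squares. The pose data and segment values below are synthetic and purely illustrative, not the paper's formulation:

import numpy as np

def skew(v):
    """Cross-product matrix: skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def identify_mass_cog(forces, torques, g=9.81):
    """Estimate mass from |F| = m*g, then solve tau = r x F = -skew(F) @ r
    for the COG position r by stacking all poses into one least squares."""
    m = np.mean(np.linalg.norm(forces, axis=1)) / g
    A = np.vstack([-skew(F) for F in forces])
    b = np.hstack(torques)
    r, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m, r

rng = np.random.default_rng(1)
m_true, r_true = 4.5, np.array([0.02, -0.01, 0.12])    # hypothetical head segment
forces, torques = [], []
for _ in range(12):                                     # 12 random static poses
    d = rng.standard_normal(3); d /= np.linalg.norm(d)  # gravity direction in sensor frame
    F = m_true * 9.81 * d + 0.05 * rng.standard_normal(3)
    forces.append(F)
    torques.append(np.cross(r_true, F) + 0.001 * rng.standard_normal(3))
m, r = identify_mass_cog(np.array(forces), np.array(torques))
print(f"mass ~ {m:.2f} kg, COG ~ {np.round(r, 3)}")

The estimated mass and COG would then feed the dynamic (DIM) stage, where the inertia terms enter linearly in the equations of motion and can likewise be solved by least squares.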
100-lbf LO2/CH4 RCS Thruster Testing and Validation
NASA Technical Reports Server (NTRS)
Barnes, Frank; Cannella, Matthew; Gomez, Carlos; Hand, Jeffrey; Rosenberg, David
2009-01-01
A 100-lbf liquid oxygen/methane thruster was sized for RCS (Reaction Control System) applications. Innovative design characteristics include: a) a simple, compact design with minimal part count; b) gaseous or liquid propellant operation; c) affordability and reusability; d) greater flexibility than existing systems; and e) being part of NASA's study of "green propellants." Hot-fire testing validated the performance and functionality of the thruster, and the thruster's dependence on mixture ratio was evaluated. Test data were used to calculate performance parameters such as thrust and Isp and were compared with previous test results to verify reliability and repeatability. The thruster was found to have an Isp of 131 s and 82 lbf of thrust at a mixture ratio of 1.62.
JetWeb: A WWW interface and database for Monte Carlo tuning and validation
NASA Astrophysics Data System (ADS)
Butterworth, J. M.; Butterworth, S.
2003-06-01
A World Wide Web interface to a Monte Carlo tuning facility is described. The aim of the package is to allow rapid and reproducible comparisons to be made between detailed measurements at high-energy physics colliders and general physics simulation packages. The package includes a relational database, a Java servlet query and display facility, and clean interfaces to simulation packages and their parameters.
Demonstration of UXO-PenDepth for the Estimation of Projectile Penetration Depth
2010-08-01
Effects (JTCG/ME) in August 2001. The accreditation process included verification and validation (V&V) by a subject matter expert (SME) other than... Within UXO-PenDepth, there are three sets of input parameters that are required: impact conditions (Fig. 1a), penetrator properties, and target... properties. The impact conditions that need to be defined are projectile orientation and impact velocity. The algorithm has been evaluated against
Sierra Structural Dynamics User's Notes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reese, Garth M.
2015-10-19
Sierra/SD provides a massively parallel implementation of structural dynamics finite element analysis, required for high fidelity, validated models used in modal, vibration, static and shock analysis of weapons systems. This document provides a users guide to the input for Sierra/SD. Details of input specifications for the different solution types, output options, element types and parameters are included. The appendices contain detailed examples, and instructions for running the software on parallel platforms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munday, Lynn Brendon; Day, David M.; Bunting, Gregory
Sierra/SD provides a massively parallel implementation of structural dynamics finite element analysis, required for high fidelity, validated models used in modal, vibration, static and shock analysis of weapons systems. This document provides a users guide to the input for Sierra/SD. Details of input specifications for the different solution types, output options, element types and parameters are included. The appendices contain detailed examples, and instructions for running the software on parallel platforms.
Chen, Po-Yi; Jan, Ya-Wen; Yang, Chien-Ming
2017-07-01
The purpose of this study was to examine whether the Insomnia Severity Index (ISI) and Pittsburgh Sleep Quality Index (PSQI) are valid outcome measures for Cognitive Behavioral Therapy for Insomnia (CBT-I). Specifically, we tested whether the factorial parameters of the ISI and the PSQI could remain invariant against CBT-I, which is a prerequisite to using their change scores as an unbiased measure of the treatment outcome of CBT-I. A clinical data set including scores on the Chinese versions of the ISI and the PSQI obtained from 114 insomnia patients prior to and after a 6-week CBT-I program in Taiwan was analyzed. A series of measurement invariance (MI) tests were conducted to compare the factorial parameters of the ISI and the PSQI before and after the CBT-I treatment program. Most factorial parameters of the ISI remained invariant after CBT-I. However, the factorial model of the PSQI changed after CBT-I treatment. An extra loading with three residual correlations was added into the factorial model after treatment. The partial strong invariance of the ISI supports that it is a valid outcome measure for CBT-I. In contrast, various changes in the factor model of the PSQI indicate that it may not be an appropriate outcome measure for CBT-I. Some possible causes for the changes of the constructs of the PSQI following CBT-I are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Romano, M.; Mays, M. L.; Taktakishvili, A.; MacNeice, P. J.; Zheng, Y.; Pulkkinen, A. A.; Kuznetsova, M. M.; Odstrcil, D.
2013-12-01
Modeling coronal mass ejections (CMEs) is of great interest to the space weather research and forecasting communities. We present recent validation work on real-time CME arrival time predictions at different satellites using the WSA-ENLIL+Cone three-dimensional MHD heliospheric model available at the Community Coordinated Modeling Center (CCMC) and performed by the Space Weather Research Center (SWRC). SWRC is an in-house research-based operations team at the CCMC which provides interplanetary space weather forecasting for NASA's robotic missions and performs real-time model validation. The quality of model operation is evaluated by comparing its output to measurable parameters of interest such as the CME arrival time and geomagnetic storm strength. The Kp index is calculated from the relation given in Newell et al. (2007), using solar wind parameters predicted by the WSA-ENLIL+Cone model at Earth. The CME arrival time error is defined as the difference between the predicted arrival time and the observed in-situ CME shock arrival time at the ACE, STEREO A, or STEREO B spacecraft. This study includes all real-time WSA-ENLIL+Cone model simulations performed from June 2011 to 2013 (over 400 runs) at the CCMC/SWRC. We report hit, miss, false alarm, and correct rejection statistics for all three spacecraft. For hits we show the average absolute CME arrival time error, and the dependence of this error on CME input parameters such as speed, width, and direction. We also present the predicted geomagnetic storm strength (using the Kp index) error for Earth-directed CMEs.
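A sketch of the Kp estimation step from modeled solar wind at Earth, assuming the commonly quoted form of the Newell et al. (2007) coupling function and its Kp regression; the coefficients below should be verified against the paper, and the example inputs are hypothetical:

import numpy as np

def newell_coupling(v, by, bz):
    """dPhi_MP/dt = v^(4/3) * B_T^(2/3) * sin^(8/3)(theta_c / 2),
    with v in km/s and the IMF components in nT."""
    b_t = np.hypot(by, bz)                  # transverse IMF magnitude
    theta_c = np.arctan2(by, bz)            # IMF clock angle
    return v ** (4 / 3) * b_t ** (2 / 3) * np.abs(np.sin(theta_c / 2)) ** (8 / 3)

def kp_estimate(v, by, bz, n):
    """Kp regression on the coupling function plus a density/speed term;
    coefficients are the commonly quoted values, clipped to the Kp scale."""
    dphi = newell_coupling(v, by, bz)
    return min(9.0, 0.05 + 2.244e-4 * dphi + 2.844e-6 * np.sqrt(n) * v ** 2)

# Example: modeled CME sheath values at Earth (hypothetical numbers)
print(f"Kp ~ {kp_estimate(v=550.0, by=3.0, bz=-8.0, n=8.0):.1f}")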
NASA Astrophysics Data System (ADS)
Man, Viet Hoang; Li, Mai Suan; Derreumaux, Philippe; Nguyen, Phuong H.
2018-03-01
The Rayleigh-Plesset (RP) equation was derived from first principles to describe bubble cavitation in liquids in terms of macroscopic hydrodynamics. A number of nonequilibrium molecular dynamics studies have been carried out to validate this equation in describing bubble inertial cavitation, but their results are contradictory, and the applicability of the RP equation remains to be examined, especially for stable cavitation. In this work, we carry out nonequilibrium all-atom simulations to validate the applicability of the RP equation in describing the stable cavitation of nano-sized bubbles in water. We show that although microscopic effects are not explicitly included, this equation still describes the dynamics of sub-nano bubbles quite well as long as the contributions of the various terms, including inertia, surface tension, and viscosity, are correctly taken into account. These terms are directly and inversely proportional to the amplitude and period of the cavitation, respectively; thus, their contributions to the RP equation depend on these two parameters. This may explain the discrepancy between previous results obtained using different parameters. Finally, the accuracy of the RP equation in current mathematical modeling studies of ultrasound-induced blood-brain-barrier experiments is discussed in some detail.
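A minimal sketch of the macroscopic side of this comparison: integrating the standard incompressible Rayleigh-Plesset equation for an acoustically driven bubble. Parameter values are illustrative water-like numbers, not those of the cited simulations.

import numpy as np
from scipy.integrate import solve_ivp

# Standard (incompressible) Rayleigh-Plesset equation:
#   R*R'' + 1.5*R'^2 = (1/rho) * [p_g(R) - p_inf(t) - 2*sigma/R - 4*mu*R'/R]
rho, sigma, mu = 998.0, 0.072, 1.0e-3        # density, surface tension, viscosity (SI)
p0, R0 = 101325.0, 50e-9                     # ambient pressure, equilibrium radius
gamma = 1.4                                  # polytropic exponent of the bubble gas
pa, f = 2.0e5, 100e6                         # drive amplitude (Pa) and frequency (Hz)
pg0 = p0 + 2 * sigma / R0                    # equilibrium gas pressure

def p_inf(t):
    return p0 - pa * np.sin(2 * np.pi * f * t)

def rp_rhs(t, y):
    R, Rdot = y
    p_gas = pg0 * (R0 / R) ** (3 * gamma)
    Rddot = ((p_gas - p_inf(t) - 2 * sigma / R - 4 * mu * Rdot / R) / rho
             - 1.5 * Rdot ** 2) / R
    return [Rdot, Rddot]

sol = solve_ivp(rp_rhs, (0.0, 5 / f), [R0, 0.0], rtol=1e-8, atol=1e-12)
print(f"max radius: {sol.y[0].max() / R0:.3f} R0")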
Fracture mechanics validity limits
NASA Technical Reports Server (NTRS)
Lambert, Dennis M.; Ernst, Hugo A.
1994-01-01
Fracture behavior is characterized by a dramatic loss of strength compared to elastic deformation behavior. Fracture parameters have been developed, and each exhibits a range within which it is valid for predicting growth. Each is limited by the assumptions made in its development: all are defined within a specific context. For example, the stress intensity parameter, K, and the crack driving force, G, are derived using an assumption of linear elasticity. To use K or G, the zone of plasticity must be small compared to the physical dimensions of the object being loaded. This ensures an elastic response, and in this context, K and G work well. Rice's J-integral has been used beyond the limits imposed on K and G. J requires an assumption of nonlinear elasticity, which is not characteristic of real material behavior but is thought to be a reasonable approximation if unloading is kept to a minimum. As well, the constraint cannot change dramatically (typically, crack extension is limited to ten percent of the initial remaining ligament length). Rice et al. investigated the properties required of J-type parameters, J(sub x), and showed that the time rate, dJ(sub x)/dt, must not be a function of the crack extension rate, da/dt. Ernst devised the modified-J parameter, J(sub M), which meets this criterion. J(sub M) correlates fracture data to much larger crack growth than does J. Ultimately, a limit to the validity of J(sub M) is anticipated, and this has been estimated to be at a crack extension of about 40 percent of the initial remaining ligament length. None of the various parameters can be expected to describe fracture in an environment of gross plasticity, in which case the process is better described by deformation parameters, e.g., stress and strain. In the current study, various schemes to identify the onset of plasticity-dominated behavior, i.e., the end of fracture mechanics validity, are presented. Each validity limit parameter is developed in detail, data are presented, and the various schemes for establishing a limit of validity are compared. The selected limiting parameter is applied to a set of fracture data, showing the improvement in correlation gained.
Li, Zhaofu; Liu, Hongyu; Luo, Chuan; Li, Yan; Li, Hengpeng; Pan, Jianjun; Jiang, Xiaosan; Zhou, Quansuo; Xiong, Zhengqin
2015-05-01
The Hydrological Simulation Program-Fortran (HSPF), a hydrological and water-quality computer model developed by the United States Environmental Protection Agency, was employed to simulate runoff and nutrient export from a typical small watershed in a hilly eastern monsoon region of China. First, a parameter sensitivity analysis was performed to assess how changes in the model parameters affect runoff and nutrient export. Next, the model was calibrated and validated using measured runoff and nutrient concentration data. The Nash-Sutcliffe efficiency (E_NS) values for yearly runoff were 0.87 and 0.69 for the calibration and validation periods, respectively. For storm runoff events, the E_NS values were 0.93 for the calibration period and 0.47 for the validation period; antecedent precipitation and soil moisture conditions can affect the simulation accuracy of storm event flow. The E_NS values for total nitrogen (TN) export were 0.58 for the calibration period and 0.51 for the validation period, and the correlation coefficients between the observed and simulated TN concentrations were 0.84 and 0.74 for the two periods, respectively. For phosphorus export, the E_NS values were 0.89 for the calibration period and 0.88 for the validation period, and the correlation coefficients between the observed and simulated orthophosphate concentrations were 0.96 and 0.94 for the calibration and validation periods, respectively. The nutrient simulation results are generally satisfactory even though the parameter-lumped HSPF model cannot represent the effects of the spatial pattern of land cover on nutrient export. The model parameters obtained in this study could serve as reference values for applying the model to similar regions, and HSPF can properly describe the characteristics of water quantity and quality processes in this area. After adjustment, calibration, and validation of the parameters, the HSPF model is suitable for hydrological and water-quality simulations in watershed planning and management and for designing best management practices.
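The calibration and validation statistics quoted above are Nash-Sutcliffe efficiencies, E_NS = 1 - sum((O - S)^2) / sum((O - mean(O))^2), where O and S are observed and simulated series; a minimal sketch (the runoff values are hypothetical):

import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of residual variance to
    the variance of the observations; 1.0 is a perfect fit, and values <= 0
    mean the model is no better than the observed mean."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

# e.g. yearly runoff (hypothetical values, mm)
print(round(nash_sutcliffe([620, 710, 540, 880], [600, 730, 560, 840]), 2))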
Godon, Alban; Genevieve, Franck; Marteau-Tessier, Anne; Zandecki, Marc
2012-01-01
Several situations lead to abnormal haemoglobin measurements or abnormal red blood cell (RBC) counts, including hyperlipemia, agglutinins and cryoglobulins, haemolysis, and elevated white blood cell (WBC) counts. Mean (red) cell volume may also be subject to spurious determination because of agglutinins (mainly cold agglutinins), high blood glucose levels, abnormal natremia, excess anticoagulant and, at times, technological considerations. An abnormality in one measured parameter eventually leads to abnormal calculated RBC indices: mean cell haemoglobin content is certainly the most important RBC parameter to consider, maybe as important as the flags generated by the haematology analysers (HA) themselves. In many circumstances, several of the measured parameters in the complete blood count (CBC) may be altered, and the discovery of a spurious change in one parameter frequently means that the validity of other parameters should be questioned. Sensitive flags now allow the identification of several spurious counts, but only the most sophisticated HA have optimal flagging, and simpler ones, especially those without any WBC differential scattergram, do not share the same capacity to detect abnormal results. Reticulocytes are integrated into the CBC in many HA, and several situations may lead to abnormal counts, including abnormal gating, interference from intraerythrocytic particles, erythroblastosis, or high WBC counts.
Automatic tree parameter extraction by a Mobile LiDAR System in an urban context.
Herrero-Huerta, Mónica; Lindenbergh, Roderik; Rodríguez-Gonzálvez, Pablo
2018-01-01
In an urban context, tree data are used in city planning, in locating hazardous trees, and in environmental monitoring. This study focuses on developing an innovative methodology to automatically estimate, at city level, the most relevant individual structural parameters of urban trees sampled by a Mobile LiDAR System. These parameters include the Diameter at Breast Height (DBH), which was estimated by circle fitting of the points belonging to different height bins using RANSAC; in the case of non-circular trees, DBH is calculated from the maximum distance between extreme points. Tree sizes were extracted through a connectivity analysis. Crown Base Height, defined as the height to the bottom of the live crown, was calculated by voxelization techniques. For estimating Canopy Volume, procedures of mesh generation and α-shape methods were implemented. In addition, tree location coordinates were obtained by means of Principal Component Analysis. The workflow has been validated on 29 trees of different species, sampling a stretch of road 750 m long in Delft (The Netherlands), and tested on a larger dataset containing 58 individual trees. The validation was done against field measurements. The DBH parameter had a correlation R2 value of 0.92 for the 20 cm height bin, which provided the best results. Moreover, the influence of the number of points used for DBH estimation, considering different height bins, was investigated. The assessment of the other inventory parameters yielded correlation coefficients higher than 0.91. The quality of the results confirms the feasibility of the proposed methodology, providing scalability to a comprehensive analysis of urban trees.
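A minimal sketch of the RANSAC circle-fitting step used for DBH; the thresholds, iteration counts, and the synthetic stem slice are illustrative assumptions:

import numpy as np

def circle_from_3_points(p):
    """Circle x^2 + y^2 + D*x + E*y + F = 0 through three points."""
    A = np.column_stack([p[:, 0], p[:, 1], np.ones(3)])
    b = -(p[:, 0] ** 2 + p[:, 1] ** 2)
    D, E, F = np.linalg.solve(A, b)
    center = np.array([-D / 2, -E / 2])
    radius = np.sqrt(center @ center - F)
    return center, radius

def ransac_circle(points, n_iter=500, tol=0.01, seed=0):
    """Return the circle with the most inliers (points within tol of the arc)."""
    rng = np.random.default_rng(seed)
    best = (None, None, -1)
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        try:
            center, radius = circle_from_3_points(sample)
        except np.linalg.LinAlgError:       # collinear sample, skip
            continue
        residual = np.abs(np.linalg.norm(points - center, axis=1) - radius)
        inliers = int((residual < tol).sum())
        if inliers > best[2]:
            best = (center, radius, inliers)
    return best

# Synthetic stem slice: a noisy circle of radius 0.21 m plus clutter points
rng = np.random.default_rng(1)
ang = rng.uniform(0, 2 * np.pi, 200)
pts = np.column_stack([0.21 * np.cos(ang), 0.21 * np.sin(ang)])
pts += 0.003 * rng.standard_normal(pts.shape)
pts = np.vstack([pts, rng.uniform(-0.3, 0.3, (20, 2))])
center, radius, n_in = ransac_circle(pts)
print(f"DBH ~ {2 * radius:.3f} m with {n_in} inliers")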
Automatic tree parameter extraction by a Mobile LiDAR System in an urban context
Lindenbergh, Roderik; Rodríguez-Gonzálvez, Pablo
2018-01-01
In an urban context, tree data are used in city planning, in locating hazardous trees, and in environmental monitoring. This study focuses on developing an innovative methodology to automatically estimate, at city level, the most relevant individual structural parameters of urban trees sampled by a Mobile LiDAR System. These parameters include the Diameter at Breast Height (DBH), which was estimated by circle fitting of the points belonging to different height bins using RANSAC; in the case of non-circular trees, DBH is calculated from the maximum distance between extreme points. Tree sizes were extracted through a connectivity analysis. Crown Base Height, defined as the height to the bottom of the live crown, was calculated by voxelization techniques. For estimating Canopy Volume, procedures of mesh generation and α-shape methods were implemented. In addition, tree location coordinates were obtained by means of Principal Component Analysis. The workflow has been validated on 29 trees of different species, sampling a stretch of road 750 m long in Delft (The Netherlands), and tested on a larger dataset containing 58 individual trees. The validation was done against field measurements. The DBH parameter had a correlation R2 value of 0.92 for the 20 cm height bin, which provided the best results. Moreover, the influence of the number of points used for DBH estimation, considering different height bins, was investigated. The assessment of the other inventory parameters yielded correlation coefficients higher than 0.91. The quality of the results confirms the feasibility of the proposed methodology, providing scalability to a comprehensive analysis of urban trees. PMID:29689076
Description of the CERES Ocean Validation Experiment (COVE), A Dedicated EOS Validation Test Site
NASA Astrophysics Data System (ADS)
Rutledge, K.; Charlock, T.; Smith, B.; Jin, Z.; Rose, F.; Denn, F.; Rutan, D.; Haeffelin, M.; Su, W.; Xhang, T.; Jay, M.
2001-12-01
A unique test site located in mid-Atlantic coastal marine waters has been used by several EOS projects for validation measurements. A common theme across these projects is the need for a stable measurement site within the marine environment for long-term, high-quality radiation measurements. The site was initiated by NASA's Clouds and the Earth's Radiant Energy System (CERES) project. One of CERES's challenging goals is to provide upwelling and downwelling shortwave fluxes at several pressure altitudes within the atmosphere and at the surface. Operationally, the radiative transfer model of Fu and Liou (1996, 1998), CERES-measured radiances, and various other EOS platform data are being used to accomplish this goal. We present here a component of the CERES/EOS validation effort focused on verifying and optimizing the prediction algorithms for radiation parameters associated with the marine coastal and oceanic surface types of the planet. For this validation work, the CERES Ocean Validation Experiment (COVE) was developed to provide detailed high-frequency and long-duration measurements of radiation and its associated dependent variables. The CERES validations also include analytical efforts which will not be described here (but see Charlock et al., Su et al., and Smith et al., Fall 2001 AGU Meeting). The COVE activity is based on a rigid ocean platform located approximately twenty kilometers off the coast of Virginia Beach, Virginia. The once-manned US Coast Guard facility rises 35 meters from the ocean surface, allowing the radiation instruments to be well above the splash zone; the depth of the sea is eleven meters at the site. A power and communications system has been installed for present and future requirements. Scientific measurements at the site have primarily been developed within the framework of established national and international monitoring programs. These include the Baseline Surface Radiation Network of the World Meteorological Organization, NASA's robotic aerosol measurement program AERONET, NOAA's GPS Water Vapor Demonstration Network, NOAA's National Buoy Data Center, and GEWEX's Global Aerosol Climate Program. Other EOS projects have utilized the COVE platform for validation measurements (short term: MODIS, MISR; intermediate term: SeaWiFS), and a longer-term measurement program for the AIRS instrument to be deployed on the AQUA satellite is underway. The poster will detail the unique measurement and infrastructure assets of the COVE site and present example 1.5-year time series of the major radiometric parameters. Lastly, the near-term measurement augmentations anticipated at COVE will be discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, William BJ J; Ade, Brian J; Bowman, Stephen M
2015-01-01
Oak Ridge National Laboratory and the United States Nuclear Regulatory Commission have initiated a multiyear project to investigate the application of burnup credit for boiling-water reactor (BWR) fuel in storage and transportation casks. This project includes two phases. The first phase (1) investigates the applicability of peak reactivity methods currently used in spent fuel pools (SFPs) to storage and transportation systems and (2) evaluates validation of both reactivity (k_eff) calculations and burnup credit nuclide concentrations within these methods. The second phase will focus on extending burnup credit beyond peak reactivity. This paper documents the first phase, including an analysis of lattice design parameters and depletion effects, as well as both validation components. Initial efforts related to extended burnup credit are discussed in a companion paper. Peak reactivity analyses have been used in criticality analyses for licensing of BWR fuel in SFPs over the last 20 years. These analyses typically combine credit for the gadolinium burnable absorber present in the fuel with a modest amount of burnup credit. Gadolinium burnable absorbers are used in BWR assemblies to control core reactivity. The burnable absorber significantly reduces assembly reactivity at beginning of life, potentially leading to significant increases in assembly reactivity for burnups less than 15-20 GWd/MTU. The reactivity of each fuel lattice is dependent on gadolinium loading. The number of gadolinium-bearing fuel pins lowers initial lattice reactivity but has a small impact on the burnup and reactivity of the peak; the gadolinium concentration in each pin has a small impact on initial lattice reactivity but a significant effect on the reactivity of the peak and the burnup at which the peak occurs. The importance of the lattice parameters and depletion conditions is primarily determined by their impact on the gadolinium depletion. Criticality code validation for BWR burnup credit at peak reactivity requires a different set of experiments than for pressurized-water reactor burnup credit analysis because of differences in actinide compositions, the presence of residual gadolinium absorber, and lower fission product concentrations. A survey of available critical experiments is presented, along with a sample criticality code validation and a determination of undercoverage penalties for some nuclides. The validation of depleted fuel compositions at peak reactivity presents many challenges, which largely result from a lack of radiochemical assay data applicable to BWR fuel in this burnup range. In addition, none of the existing low-burnup measurement data include residual gadolinium measurements. An example bias and uncertainty associated with validation of actinide-only fuel compositions is presented.
Nonclinical dose formulation analysis method validation and sample analysis.
Whitmire, Monica Lee; Bryan, Peter; Henry, Teresa R; Holbrook, John; Lehmann, Paul; Mollitor, Thomas; Ohorodnik, Susan; Reed, David; Wietgrefe, Holly D
2010-12-01
Nonclinical dose formulation analysis methods are used to confirm test article concentration and homogeneity in formulations and to determine formulation stability in support of regulated nonclinical studies. There is currently no regulatory guidance for nonclinical dose formulation analysis method validation or sample analysis. Regulatory guidance for the validation of analytical procedures has been developed for drug product/formulation testing; however, verification of formulation concentrations falls under the framework of GLP regulations (not GMP). The only current related regulatory guidance is the bioanalytical guidance for method validation. The fundamental parameters that overlap between bioanalysis and formulation analysis validations include recovery, accuracy, precision, specificity, selectivity, carryover, sensitivity, and stability. Divergence between bioanalytical and drug product validations typically centers on the acceptance criteria used. As dose formulation samples are not true "unknowns," the concept of quality control samples covering the entire range of the standard curve, serving as the indication of confidence in the data generated from the "unknown" study samples, may not always be necessary. Also, the standard bioanalytical acceptance criteria may not be directly applicable, especially when the determined concentration does not match the target concentration. This paper attempts to reconcile the different practices being performed in the community and to provide recommendations of best practices and proposed acceptance criteria for nonclinical dose formulation method validation and sample analysis.
Adaptive firefly algorithm: parameter analysis and its application.
Cheung, Ngaam J; Ding, Xue-Ming; Shen, Hong-Bin
2014-01-01
As a nature-inspired search algorithm, the firefly algorithm (FA) has several control parameters, which may have great effects on its performance. In this study, we investigate parameter selection and adaptation strategies in a modified firefly algorithm - the adaptive firefly algorithm (AdaFa). There are three strategies in AdaFa: (1) a distance-based light absorption coefficient; (2) a gray coefficient enabling fireflies to share difference information from attractive ones efficiently; and (3) five different dynamic strategies for the randomization parameter. Promising selections of parameters in these strategies are analyzed to guarantee the efficient performance of AdaFa. AdaFa is validated on widely used benchmark functions, and the numerical experiments and statistical tests yield useful conclusions on the strategies and the parameter selections affecting its performance. When applied to a real-world problem, protein tertiary structure prediction, the improved variants were able to rebuild the tertiary structure with an average root mean square deviation of less than 0.4 Å and 1.5 Å from the native constraints under noise-free and 10% Gaussian white noise conditions, respectively.
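For orientation, a minimal sketch of the standard firefly move that AdaFa builds on; AdaFa's three adaptive strategies would replace the fixed absorption coefficient gamma and randomization parameter alpha used here:

import numpy as np

def firefly_minimize(f, dim=2, n=20, iters=200, beta0=1.0, gamma=1.0,
                     alpha=0.2, bounds=(-5.0, 5.0), seed=0):
    """Standard firefly algorithm: each firefly moves toward every brighter
    one with attractiveness beta0 * exp(-gamma * r^2) plus a random step."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, dim))
    for _ in range(iters):
        light = np.array([f(xi) for xi in x])     # lower objective = brighter
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                    x[i] = np.clip(x[i], lo, hi)
        alpha *= 0.98                             # simple annealing of the random step
    best = min(range(n), key=lambda i: f(x[i]))
    return x[best], f(x[best])

# Benchmark check on the sphere function
x_best, f_best = firefly_minimize(lambda v: np.sum(v ** 2))
print(np.round(x_best, 3), f"f = {f_best:.2e}")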
Adaptive Firefly Algorithm: Parameter Analysis and its Application
Shen, Hong-Bin
2014-01-01
As a nature-inspired search algorithm, the firefly algorithm (FA) has several control parameters, which may have great effects on its performance. In this study, we investigate parameter selection and adaptation strategies in a modified firefly algorithm - the adaptive firefly algorithm (AdaFa). There are three strategies in AdaFa: (1) a distance-based light absorption coefficient; (2) a gray coefficient enabling fireflies to share difference information from attractive ones efficiently; and (3) five different dynamic strategies for the randomization parameter. Promising selections of parameters in these strategies are analyzed to guarantee the efficient performance of AdaFa. AdaFa is validated on widely used benchmark functions, and the numerical experiments and statistical tests yield useful conclusions on the strategies and the parameter selections affecting its performance. When applied to a real-world problem, protein tertiary structure prediction, the improved variants were able to rebuild the tertiary structure with an average root mean square deviation of less than 0.4 Å and 1.5 Å from the native constraints under noise-free and 10% Gaussian white noise conditions, respectively. PMID:25397812
Musci, Marilena; Yao, Shicong
2017-12-01
Pu-erh tea is a post-fermented tea that has recently gained popularity worldwide, due to potential health benefits related to the antioxidant activity resulting from its high polyphenolic content. The Folin-Ciocalteu method is a simple, rapid, and inexpensive assay widely applied for the determination of total polyphenol content. Over the past years, it has been subjected to many modifications, often without any systematic optimization or validation. In our study, we sought to optimize the Folin-Ciocalteu method, evaluate quality parameters including linearity, precision, and stability, and then apply the optimized model to determine the total polyphenol content of 57 Chinese teas, including green tea and aged and ripened Pu-erh tea. Our optimized Folin-Ciocalteu method reduced analysis time and allowed us to analyze a large number of samples, to discriminate among the different teas, and to assess the effect of the post-fermentation process on polyphenol content.
An accurate analytic description of neutrino oscillations in matter
NASA Astrophysics Data System (ADS)
Akhmedov, E. Kh.; Niro, Viviana
2008-12-01
A simple closed-form analytic expression for the probability of two-flavour neutrino oscillations in matter with an arbitrary density profile is derived. Our formula is based on a perturbative expansion and allows an easy calculation of higher order corrections. The expansion parameter is small when the density changes relatively slowly along the neutrino path and/or the neutrino energy is not very close to the Mikheyev-Smirnov-Wolfenstein (MSW) resonance energy. Our approximation is not equivalent to the adiabatic approximation and actually goes beyond it. We demonstrate the validity of our results using a few model density profiles, including the PREM density profile of the Earth. It is shown that by combining the results obtained from the expansions valid below and above the MSW resonance, one can obtain a very good description of neutrino oscillations in matter in the entire energy range, including the resonance region.
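For reference, the standard two-flavour evolution equation in matter that such expansions perturb around, together with the MSW resonance condition near which slowly-varying-density expansions degrade (textbook forms, not the authors' expansion):

i\frac{d}{dx}\begin{pmatrix}\nu_e\\ \nu_\mu\end{pmatrix}
 = \left[\frac{\Delta m^2}{4E}
   \begin{pmatrix}-\cos 2\theta & \sin 2\theta\\
                   \sin 2\theta & \cos 2\theta\end{pmatrix}
   + \begin{pmatrix}V(x) & 0\\ 0 & 0\end{pmatrix}\right]
   \begin{pmatrix}\nu_e\\ \nu_\mu\end{pmatrix},
\qquad V(x)=\sqrt{2}\,G_F N_e(x),

with the resonance at \sqrt{2}\,G_F N_e = \frac{\Delta m^2}{2E}\cos 2\theta.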
The cross-validated AUC for MCP-logistic regression with high-dimensional data.
Jiang, Dingfeng; Huang, Jian; Zhang, Ying
2013-10-01
We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed to optimize classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite sample performance of the proposed method and to compare it with existing methods, including the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and the extended BIC (EBIC). The model selected by the CV-AUC criterion tends to have a larger predictive AUC and a smaller classification error than those with tuning parameters selected using the AIC, BIC, or EBIC. We illustrate the application of MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
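A sketch of the CV-AUC tuning criterion itself. Since MCP is not available in scikit-learn, an L1-penalized logistic regression stands in for the MCP solution path here; the criterion (pick the penalty strength that maximizes cross-validated AUC) is the same:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Sparse, high-dimensional binary outcome data
X, y = make_classification(n_samples=200, n_features=500, n_informative=10,
                           random_state=0)

grid = np.logspace(-2, 1, 20)                  # candidate penalty strengths
cv_auc = []
for C in grid:
    model = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    cv_auc.append(cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())

best = int(np.argmax(cv_auc))
print(f"best C = {grid[best]:.3f}, CV-AUC = {cv_auc[best]:.3f}")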
Sensor data validation and reconstruction. Phase 1: System architecture study
NASA Technical Reports Server (NTRS)
1991-01-01
The sensor validation and data reconstruction task reviewed relevant literature and selected applicable validation and reconstruction techniques for further study; analyzed the selected techniques and emphasized those which could be used for both validation and reconstruction; analyzed Space Shuttle Main Engine (SSME) hot fire test data to determine statistical and physical relationships between various parameters; developed statistical and empirical correlations between parameters to perform validation and reconstruction tasks, using a computer aided engineering (CAE) package; and conceptually designed an expert system based knowledge fusion tool, which allows the user to relate diverse types of information when validating sensor data. The host hardware for the system is intended to be a Sun SPARCstation, but could be any RISC workstation with a UNIX operating system and a windowing/graphics system such as Motif or Dataviews. The information fusion tool is intended to be developed using the NEXPERT Object expert system shell, and the C programming language.
Trujillo, William A.; Sorenson, Wendy R.; La Luzerne, Paul; Austad, John W.; Sullivan, Darryl
2008-01-01
The presence of aristolochic acid in some dietary supplements is a concern to regulators and consumers. A method has been developed, initially using a reference method as a guide, during single laboratory validation (SLV) for the determination of aristolochic acid I, also known as aristolochic acid A, in botanical species and dietary supplements at concentrations of approximately 2 to 32 μg/g. Higher levels were determined by dilution to fit the standard curve. Through the SLV, the method was optimized for quantification by liquid chromatography with ultraviolet detection (LC-UV) and LC/mass spectrometry (MS) confirmation. The test samples were extracted with organic solvent and water, then injected on a reversed-phase LC column. Quantification was achieved with linear regression using a laboratory automation system. The SLV study included systematically optimizing the LC-UV method with regard to test sample size, fine grinding of solids, and solvent extraction efficiency. These parameters were varied in increments (and in separate optimization studies) to ensure that each parameter was individually studied; the test results include corresponding tables of parameter variations. In addition, the chromatographic conditions were optimized with respect to injection volume and detection wavelength. Precision studies produced overall relative standard deviation values from 2.44 up to 8.26% for aristolochic acid I. Mean recoveries were between 100 and 103% at the 2 μg/g level, between 102 and 103% at the 10 μg/g level, and 104% at the 30 μg/g level. PMID:16915829
Trujillo, William A; Sorenson, Wendy R; La Luzerne, Paul; Austad, John W; Sullivan, Darryl
2006-01-01
The presence of aristolochic acid in some dietary supplements is a concern to regulators and consumers. A method has been developed, by initially using a reference method as a guide, during single laboratory validation (SLV) for the determination of aristolochic acid I, also known as aristolochic acid A, in botanical species and dietary supplements at concentrations of approximately 2 to 32 microg/g. Higher levels were determined by dilution to fit the standard curve. Through the SLV, the method was optimized for quantification by liquid chromatography with ultraviolet detection (LC-UV) and LC/mass spectrometry (MS) confirmation. The test samples were extracted with organic solvent and water, then injected on a reverse phase LC column. Quantification was achieved with linear regression using a laboratory automation system. The SLV study included systematically optimizing the LC-UV method with regard to test sample size, fine grinding of solids, and solvent extraction efficiency. These parameters were varied in increments (and in separate optimization studies), in order to ensure that each parameter was individually studied; the test results include corresponding tables of parameter variations. In addition, the chromatographic conditions were optimized with respect to injection volume and detection wavelength. Precision studies produced overall relative standard deviation values from 2.44 up to 8.26% for aristolochic acid I. Mean recoveries were between 100 and 103% at the 2 microg/g level, between 102 and 103% at the 10 microg/g level, and 104% at the 30 microg/g level.
Park, Young-Jae; Lee, Jin-Moo; Yoo, Seung-Yeon; Park, Young-Bae
2016-04-01
To examine whether color parameters of tongue inspection (TI) using a digital camera were reliable and valid, and to examine which color parameters serve as predictors of symptom patterns in terms of East Asian medicine (EAM). Two hundred female subjects' tongue substances were photographed with a megapixel digital camera. Together with the photographs, the subjects were asked to complete Yin deficiency, Phlegm pattern, and Cold-Heat pattern questionnaires. Using three sets of digital imaging software, each digital image was exposure- and white-balance-corrected, and finally the L* (luminance), a* (red-green balance), and b* (yellow-blue balance) values of the tongues were calculated. To examine the intra- and inter-rater reliabilities and criterion validity of the color analysis method, three raters were asked to calculate color parameters for 20 digital image samples. Finally, four hierarchical regression models were formed. Color parameters showed good or excellent reliability (0.627-0.887 for intra-class correlation coefficients) and significant criterion validity (0.523-0.718 for Spearman's correlation). In the hierarchical regression models, age was a significant predictor of Yin deficiency (β = 0.192), and the b* value of the tip of the tongue was a determinant predictor of the Yin deficiency, Phlegm, and Heat patterns (β = -0.212, -0.172, and -0.163). Luminance (L*) was predictive of the Yin deficiency (β = -0.172) and Cold (β = 0.173) patterns. Our results suggest that color analysis of the tongue using the L*a*b* system is reliable and valid, and that color parameters partially serve as symptom pattern predictors in EAM practice.
The Second SeaWiFS HPLC Analysis Round-Robin Experiment (SeaHARRE-2)
NASA Technical Reports Server (NTRS)
2005-01-01
Eight international laboratories specializing in the determination of marine pigment concentrations using high performance liquid chromatography (HPLC) were intercompared using in situ samples and a variety of laboratory standards. The field samples were collected primarily from eutrophic waters, although mesotrophic waters were also sampled to create a dynamic range in chlorophyll concentration spanning approximately two orders of magnitude (0.3-25.8 mg m-3). The intercomparisons were used to establish the following: a) the uncertainties in quantitating individual pigments and higher-order variables (sums, ratios, and indices); b) an evaluation of spectrophotometric versus HPLC uncertainties in the determination of total chlorophyll a; and c) the reduction in uncertainties as a result of applying quality assurance (QA) procedures associated with extraction, separation, injection, degradation, detection, calibration, and reporting (particularly limits of detection and quantitation). In addition, the remote sensing requirements for the in situ determination of total chlorophyll a were investigated to determine whether or not the average uncertainty for this measurement is being satisfied. The culmination of the activity was a validation of the round-robin methodology plus the development of the requirements for validating an individual HPLC method. The validation process includes the measurements required to initially demonstrate that a pigment is validated, and the measurements that must be made during sample analysis to confirm that a method remains validated. The so-called performance-based metrics developed here describe a set of thresholds for a variety of easily measured parameters with a corresponding set of performance categories. The aggregate set of performance parameters and categories establishes a) the overall performance capability of the method, and b) whether or not that capability is consistent with the required accuracy objectives.
NASA Technical Reports Server (NTRS)
Lyle, Karen H.
2014-01-01
Acceptance of new spacecraft structural architectures and concepts requires validated design methods to minimize the expense involved with technology validation via flight testing. This paper explores the implementation of probabilistic methods in the sensitivity analysis of the structural response of a Hypersonic Inflatable Aerodynamic Decelerator (HIAD). HIAD architectures are attractive for spacecraft deceleration because they are lightweight, store compactly, and utilize the atmosphere to decelerate a spacecraft during re-entry. However, designers are hesitant to include these inflatable approaches for large payloads or spacecraft because of the lack of flight validation. In the example presented here, the structural parameters of an existing HIAD model have been varied to illustrate the design approach utilizing uncertainty-based methods. Surrogate models have been used to reduce computational expense by several orders of magnitude. The suitability of the design is based on assessing variation in the resulting cone angle; the acceptable cone angle variation would rely on the aerodynamic requirements.
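A minimal sketch of the surrogate-plus-sampling pattern described above: fit a cheap polynomial surrogate to a handful of expensive structural runs, then propagate an assumed input uncertainty by Monte Carlo. The quadratic response and the 5% stiffness uncertainty are invented for illustration:

import numpy as np

rng = np.random.default_rng(0)

# Pretend these came from 7 expensive finite-element analyses
stiffness = np.linspace(0.8, 1.2, 7)              # normalized design parameter
cone_angle = 70 - 8 * (stiffness - 1) + 15 * (stiffness - 1) ** 2   # degrees

coeffs = np.polyfit(stiffness, cone_angle, deg=2)  # quadratic surrogate
surrogate = np.poly1d(coeffs)

# Propagate an assumed 5% (1-sigma) uncertainty on stiffness through the surrogate
samples = surrogate(rng.normal(1.0, 0.05, 100_000))
print(f"cone angle: mean {samples.mean():.2f} deg, std {samples.std():.2f} deg")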
Confirming the Lanchestrian linear-logarithmic model of attrition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartley, D.S. III.
1990-12-01
This paper is the fourth in a series of reports on breakthrough research in the historical validation of attrition in conflict. Significant defense policy decisions, including weapons acquisition and arms reduction, are based in part on models of conflict. Most of these models are driven by their attrition algorithms, usually forms of the Lanchester square and linear laws, none of which have been validated. The results of this paper confirm the results of earlier papers, using a large database of historical results. The homogeneous linear-logarithmic Lanchestrian attrition model is validated to the extent possible with current initial and final force size data and is consistent with the Iwo Jima data. A particular differential linear-logarithmic model is described that fits the data very well. A version of Helmbold's victory-predicting parameter is also confirmed, with an associated probability function. 37 refs., 73 figs., 68 tabs.
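For context, the classical homogeneous Lanchester laws that such attrition algorithms combine (the paper's fitted linear-logarithmic form is not reproduced here):

\frac{dx}{dt} = -a\,y, \quad \frac{dy}{dt} = -b\,x
  \qquad \text{(square law: aimed fire)}

\frac{dx}{dt} = -a\,x\,y, \quad \frac{dy}{dt} = -b\,x\,y
  \qquad \text{(linear law: area fire)}

\frac{dx}{dt} = -a\,x, \quad \frac{dy}{dt} = -b\,y
  \qquad \text{(logarithmic law: losses proportional to own force size)}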
NASA Astrophysics Data System (ADS)
Ulrich, J. C.; Guilhen, S. N.; Cotrim, M. E. B.; Pires, M. A. F.
2018-03-01
IPEN's research reactor, IEA-R1, is an open-pool-type research reactor moderated and cooled by light water. High-quality water is a key factor in preventing corrosion of the spent fuel stored in the pool. Leaching of radionuclides from corroded fuel cladding may be prevented by an efficient water treatment and purification system. However, as a safety management policy, IPEN has adopted a water chemistry control program that periodically monitors the levels of uranium (U) and silicon (Si) in the reactor pool, since IEA-R1 employs U3Si2-Al dispersion fuel. An analytical method was developed and validated for the determination of uranium and silicon by ICP OES. This work describes the validation process, in a context of quality assurance, including the parameters selectivity, linearity, limit of quantification, precision, and recovery.
Viability of Cross-Flow Fan with Helical Blades for Vertical Take-off and Landing Aircraft
2012-09-01
Using the computational fluid dynamics (CFD) software ANSYS CFX, a three-dimensional (3-D) straight-bladed model was validated against a previous study's experimental results...
Zhang, Wei; Shmuylovich, Leonid; Kovacs, Sandor J
2009-01-01
Using a simple harmonic oscillator model (the PDF formalism), every early filling E-wave can be uniquely described by a set of parameters (x0, c, and k). The parameter c in the PDF formalism is a damping or relaxation parameter that measures the energy loss during the filling process. Based on Bernoulli's equation and kinematic modeling, we derived a causal correlation between the relaxation parameter c in the PDF formalism and a feature of the pressure contour during filling, the pressure recovery ratio, defined by the left ventricular pressure difference between diastasis and minimum pressure, normalized to the pressure difference between a fiducial pressure and minimum pressure [PRR = (P_Diastasis - P_Min)/(P_Fiducial - P_Min)]. We analyzed multiple heart beats from one human subject to validate the correlation; further validation in more patients is warranted. PRR is the invasive causal analogue of the noninvasive E-wave relaxation parameter c and has the potential to be calculated using automated methodology in the catheterization lab in real time.
Verification and Validation of Residual Stresses in Bi-Material Composite Rings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Stacy Michelle; Hanson, Alexander Anthony; Briggs, Timothy
Process-induced residual stresses commonly occur in composite structures composed of dissimilar materials. These residual stresses form due to differences in the composite materials' coefficients of thermal expansion and the shrinkage upon cure exhibited by polymer matrix materials. Depending upon the specific geometric details of the composite structure and the materials' curing parameters, it is possible that these residual stresses could result in interlaminar delamination or fracture within the composite. Therefore, the consideration of potential residual stresses is important when designing composite parts and their manufacturing processes. However, the experimental determination of residual stresses in prototype parts can be time and cost prohibitive. As an alternative to physical measurement, computational tools can be used to quantify potential residual stresses in composite prototype parts. Therefore, the objectives of the presented work are to demonstrate a simplistic method for simulating residual stresses in composite parts, as well as the potential value of sensitivity and uncertainty quantification techniques during analyses for which material property parameters are unknown. Specifically, a simplified residual stress modeling approach, which accounts for coefficient of thermal expansion mismatch and polymer shrinkage, is implemented within the Sandia National Laboratories-developed SIERRA/SolidMechanics code. Concurrent with the model development, two simple bi-material structures composed of a carbon fiber/epoxy composite and aluminum, a flat plate and a cylinder, were fabricated, and the residual stresses were quantified through the measurement of deformation. Then, in the process of validating the developed modeling approach with the experimental residual stress data, manufacturing process simulations of the two simple structures were developed and underwent a formal verification and validation process, including a mesh convergence study, sensitivity analysis, and uncertainty quantification. The simulations' final results show adequate agreement with the experimental measurements, indicating the validity of the simple modeling approach, as well as the necessity of including material parameter uncertainty in the final residual stress predictions.
Validating a driving simulator using surrogate safety measures.
Yan, Xuedong; Abdel-Aty, Mohamed; Radwan, Essam; Wang, Xuesong; Chilakapati, Praveen
2008-01-01
Traffic crash statistics and previous research have shown an increased risk of traffic crashes at signalized intersections. How to diagnose safety problems and develop effective countermeasures to reduce the crash rate at intersections is a key task for traffic engineers and researchers. This study investigates whether a driving simulator can be used as a valid tool to assess traffic safety at signalized intersections. In support of the research objective, this simulator validity study was conducted from two perspectives: a traffic parameter (speed) and a safety parameter (crash history). A signalized intersection was replicated in a high-fidelity driving simulator with as many important features as possible, including roadway geometries, traffic control devices, intersection surroundings, and buildings. A driving simulator experiment with eight scenarios at the intersection was conducted to determine whether the subjects' speed behavior and traffic risk patterns in the driving simulator were similar to those found at the real intersection. The experiment results showed that speed data observed in the field and in the simulator experiment both follow normal distributions and have equal means for each intersection approach, which validated the driving simulator in absolute terms. Furthermore, this study used an innovative approach of contrasting surrogate safety measures from the simulator with the crash analysis of the field data. The simulator experiment results indicated that, compared to the right-turn lane with a low rear-end crash history (2 crashes), subjects showed a series of riskier behaviors at the right-turn lane with a high rear-end crash history (16 crashes), including a higher deceleration rate (1.80+/-1.20 m/s(2) versus 0.80+/-0.65 m/s(2)), a higher non-stop right-turn rate on red (81.67% versus 57.63%), a higher right-turn speed at the stop line (18.38+/-8.90 km/h versus 14.68+/-6.04 km/h), a shorter following distance (30.19+/-13.43 m versus 35.58+/-13.41 m), and a higher rear-end probability (9/59=0.153 versus 2/60=0.033). Therefore, the relative validity of the driving simulator was well established for traffic safety studies at signalized intersections.
Suwarto, Suhendro; Hidayat, Mohammad Jauharsyah; Widjaya, Bing
2018-02-23
The Dengue Score is a model for predicting pleural effusion and/or ascites and uses the hematocrit (Hct), albumin concentration, platelet count and aspartate aminotransferase (AST) ratio as independent variables. As this metric has not been validated, we conducted a study to validate the Dengue Score and assess its clinical application. A retrospective study was performed at a private hospital in Jakarta, Indonesia. Patients with dengue infection hospitalized from January 2011 through March 2016 were included. The Dengue Score was calculated using four parameters: Hct increase ≥15.1%, serum albumin ≤3.49 g/dL, platelet count ≤49,500/μL and AST ratio ≥2.51. Each parameter was scored as 1 if present and 0 if absent. To validate the Dengue Score, goodness-of-fit was used to assess calibration, and the area under the receiver operating characteristic curve (AROC) was used to assess discrimination. Associations between clinical parameters and Dengue Score groups were determined by bivariate analysis. A total of 207 patients were included in this study. The calibration of the Dengue Score was acceptable (Hosmer-Lemeshow test, p = 0.11), and the score's discriminative ability was good (AROC = 0.88, 95% CI: 0.83-0.92). At a cutoff of ≥2, the Dengue Score had a positive predictive value (PPV) of 79.03% and a negative predictive value (NPV) of 90.36% for the diagnostic prediction of pleural effusion and/or ascites. Compared with the Dengue Score ≤1 group, the Dengue Score = 2 group was significantly associated with hemoconcentration >20% (p = 0.029), severe thrombocytopenia (p = 0.029), and increased length of hospital stay (p = 0.003). Compared with the Dengue Score = 2 group, the Dengue Score ≥3 group was significantly associated with hemoconcentration >20% (p = 0.001), severe thrombocytopenia (p = 0.024), severe dengue (p = 0.039), and increased length of hospital stay (p = 0.011). The Dengue Score performed well and can be used in daily practice to help clinicians identify patients who have plasma leakage associated with severe dengue.
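A minimal sketch of the score as described (thresholds copied from the abstract; the example values are invented, and the albumin unit is taken as g/dL):

```python
def dengue_score(hct_increase_pct, albumin_g_dl, platelets_per_ul, ast_ratio):
    """Dengue Score: one point per criterion present (0-4 total).
    A score >= 2 predicted pleural effusion and/or ascites in the study."""
    criteria = [
        hct_increase_pct >= 15.1,
        albumin_g_dl <= 3.49,
        platelets_per_ul <= 49_500,
        ast_ratio >= 2.51,
    ]
    return sum(criteria)

score = dengue_score(hct_increase_pct=16.0, albumin_g_dl=3.2,
                     platelets_per_ul=40_000, ast_ratio=2.8)
print(score, "-> leakage predicted" if score >= 2 else "-> below cutoff")
```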
Zacharis, Constantinos K; Vastardi, Elli
2018-02-20
In the research presented we report the development of a simple and robust liquid chromatographic method for the quantification of two genotoxic alkyl sulphonate impurities (namely methyl p-toluenesulfonate and isopropyl p-toluenesulfonate) in Aprepitant API substances using the Analytical Quality by Design (AQbD) approach. Following the steps of the AQbD protocol, the selected critical method attributes (CMAs) were the separation criteria between the critical peak pairs, the analysis time and the peak efficiencies of the analytes. The critical method parameters (CMPs) included the flow rate, the gradient slope and the acetonitrile content at the first step of the gradient elution program. Multivariate experimental designs, namely Plackett-Burman and Box-Behnken designs, were conducted sequentially for factor screening and optimization of the method parameters. The optimal separation conditions were estimated using the desirability function. The method was fully validated in the range of 10-200% of the target concentration limit of the analytes using the "total error" approach. Accuracy profiles - a graphical decision-making tool - were constructed using the results of the validation procedures. The β-expectation tolerance intervals did not exceed the acceptance criteria of ±10%, meaning that 95% of future results will fall within the defined bias limits. The relative bias ranged between -1.3% and 3.8% for both analytes, while the RSD values for repeatability and intermediate precision were less than 1.9% in all cases. The achieved limit of detection (LOD) and limit of quantification (LOQ) were adequate for the specific purpose and found to be 0.02% (corresponding to 48 μg g(-1) in sample) for both methyl and isopropyl p-toluenesulfonate. As a proof of concept, the validated method was successfully applied to the analysis of several Aprepitant batches, indicating that this methodology could be used for routine quality control analyses. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Shaw, Jeremy A.; Daescu, Dacian N.
2017-08-01
This article presents the mathematical framework to evaluate the sensitivity of a forecast error aspect to the input parameters of a weak-constraint four-dimensional variational data assimilation system (w4D-Var DAS), extending the established theory from strong-constraint 4D-Var. Emphasis is placed on the derivation of the equations for evaluating the forecast sensitivity to parameters in the DAS representation of the model error statistics, including bias, standard deviation, and correlation structure. A novel adjoint-based procedure for adaptive tuning of the specified model error covariance matrix is introduced. Results from numerical convergence tests establish the validity of the model error sensitivity equations. Preliminary experiments providing a proof-of-concept are performed using the Lorenz multi-scale model to illustrate the theoretical concepts and potential benefits for practical applications.
Density functional theory calculations of 95Mo NMR parameters in solid-state compounds.
Cuny, Jérôme; Furet, Eric; Gautier, Régis; Le Pollès, Laurent; Pickard, Chris J; d'Espinose de Lacaillerie, Jean-Baptiste
2009-12-21
The application of periodic density functional theory-based methods to the calculation of (95)Mo electric field gradient (EFG) and chemical shift (CS) tensors in solid-state molybdenum compounds is presented. Calculations of EFG tensors are performed using the projector augmented-wave (PAW) method. Comparison of the results with those obtained using the augmented plane wave + local orbitals (APW+lo) method and with available experimental values shows the reliability of the approach for (95)Mo EFG tensor calculation. CS tensors are calculated using the recently developed gauge-including projector augmented-wave (GIPAW) method. This work is the first application of the GIPAW method to a 4d transition-metal nucleus. The effects of ultra-soft pseudo-potential parameters, exchange-correlation functionals and structural parameters are precisely examined. Comparison with experimental results allows the validation of this computational formalism.
Estimation of spatial-temporal gait parameters using a low-cost ultrasonic motion analysis system.
Qi, Yongbin; Soh, Cheong Boon; Gunawan, Erry; Low, Kay-Soon; Thomas, Rijil
2014-08-20
In this paper, a low-cost motion analysis system using a wireless ultrasonic sensor network is proposed and investigated. A methodology has been developed to extract spatial-temporal gait parameters, including stride length, stride duration, stride velocity, stride cadence, and stride symmetry, from 3D foot displacements estimated by the combination of a spherical positioning technique and an unscented Kalman filter. The performance of this system is validated against a camera-based system in the laboratory with 10 healthy volunteers. Numerical results show the feasibility of the proposed system, with an average error of 2.7% over all the estimated gait parameters. The influence of walking speed on the measurement accuracy of the proposed system is also evaluated. Statistical analysis demonstrates its capability of being used as a gait assessment tool for some medical applications.
A Holistic approach to assess older adults' wellness using e-health technologies.
Thompson, Hilaire J; Demiris, George; Rue, Tessa; Shatil, Evelyn; Wilamowska, Katarzyna; Zaslavsky, Oleg; Reeder, Blaine
2011-12-01
To date, methodologies are lacking that address a holistic assessment of wellness in older adults. Technology applications may provide a platform for such an assessment, but have not been validated. We set out to demonstrate whether e-health applications could support the assessment of wellness in community-dwelling older adults. Twenty-seven residents of an independent retirement community were followed over 8 weeks. Subjects engaged in the use of diverse technologies to assess cognitive performance, physiological and functional variables, as well as psychometric components of wellness. Data were integrated from various e-health sources into one study database. Correlations were assessed between different parameters, and hierarchical cluster analysis was used to explore the validity of the wellness model. We found strong associations across multiple parameters of wellness within the conceptual model, including cognitive, functional, and physical. However, spirituality did not correlate with any other parameter studied, in contrast to prior studies of older adults. Participants expressed overall positive attitudes toward the e-health tools and the holistic approach to the assessment of wellness, without expressing any privacy concerns. Parameters were highly correlated across multiple domains of wellness. Important clusters were noted to form across cognitive and physiological domains, giving further evidence of the need for an integrated approach to the assessment of wellness. This finding warrants replication in larger and more diverse samples of older adults to standardize and deploy these technologies across population groups.
NASA Astrophysics Data System (ADS)
Ionita, Ciprian N.; Bednarek, Daniel R.; Rudin, Stephen
2012-03-01
Intracranial aneurysm treatment with flow diverters (FD) is a new minimally invasive approach, recently approved for use in human patients. Attempts to correlate the flow reduction observed in angiograms with a parameter related to the FD structure have not been entirely successful. To find the proper parameter, we investigated four porous-media flow models describing the relation between pressure drop and flow velocity: the capillary theory linear model (CTLM), the drag force linear model (DFLM), the simple quadratic model (SQM) and the modified quadratic model (MQM). The proportionality parameters are referred to as permeability for the linear models and resistance for the quadratic ones. A two-stage experiment was performed. First, we verified flow model validity by placing six different stainless-steel meshes, resembling FD structures, in known flow conditions. The best flow model was used for the second stage, in which six different FDs were inserted in aneurysm phantoms and flow modification was estimated using angiographically derived time density curves (TDC). Finally, TDC peak variation was compared with the FD parameter. Model validity experiments indicated errors of 70% for the linear models, 26% for the SQM and 7% for the MQM. The resistance calculated according to the MQM correlated well with the contrast flow reduction. The results indicate that resistance calculated according to the MQM is appropriate to characterize the FD and could explain the flow modification observed in angiograms.
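The abstract does not give the model equations; a common sketch, assuming a Darcy-type linear form (dp = a*v) and a Forchheimer-type quadratic form (dp = a*v + b*v(2)) as stand-ins for the linear and quadratic families, fits both to pressure-drop/velocity data:

```python
import numpy as np

# Illustrative pressure-drop vs. superficial-velocity data (not from the study)
v = np.array([0.05, 0.10, 0.20, 0.40, 0.80])       # m/s
dp = np.array([12.0, 26.0, 60.0, 155.0, 430.0])    # Pa

# Linear model dp = a*v (permeability-like proportionality)
a_lin = (v @ dp) / (v @ v)

# Quadratic model dp = a*v + b*v**2 (resistance-like terms)
A = np.column_stack([v, v**2])
coef, *_ = np.linalg.lstsq(A, dp, rcond=None)

print("linear residual:   ", np.linalg.norm(dp - a_lin * v))
print("quadratic residual:", np.linalg.norm(dp - A @ coef))
```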
Vancomycin Dosing in Obese Patients: Special Considerations and Novel Dosing Strategies.
Durand, Cheryl; Bylo, Mary; Howard, Brian; Belliveau, Paul
2018-06-01
To review the literature regarding vancomycin pharmacokinetics in obese patients and strategies used to improve dosing in this population. PubMed, EMBASE (1974 to November 2017), and Google Scholar searches were conducted using the search terms vancomycin, obese, obesity, pharmacokinetics, strategy, and dosing. Additional articles were selected from reference lists of selected studies. Included articles were those published in English with a primary focus on vancomycin pharmacokinetic parameters in obese patients and practical vancomycin dosing strategies, clinical experiences, or challenges of dosing vancomycin in this population. Volume of distribution and clearance are the pharmacokinetic parameters that most often affect vancomycin dosing in obese patients; both are increased in this population. Challenges with dosing in obese patients include inconsistent and inadequate dosing, observations that the obese population may not be homogeneous, and reports of an increased likelihood of supratherapeutic trough concentrations. Investigators have revised and developed dosing and monitoring protocols to address these challenges. These approaches improved target trough attainment to varying degrees. Some of the vancomycin dosing approaches provided promising results in obese patients, but there were notable differences in methods used to develop these approaches, and sample sizes were small. Although some approaches can be considered for validation in individual institutions, further research is warranted. This may include validating approaches in larger populations with narrower obesity severity ranges, investigating target attainment in indication-specific target ranges, and evaluating the impact of different dosing weights and methods of creatinine clearance calculation.
BMI curves for preterm infants.
Olsen, Irene E; Lawson, M Louise; Ferguson, A Nicole; Cantrell, Rebecca; Grabich, Shannon C; Zemel, Babette S; Clark, Reese H
2015-03-01
Preterm infants experience disproportionate growth failure postnatally and may be large weight for length despite being small weight for age by hospital discharge. The objective of this study was to create and validate intrauterine weight-for-length growth curves using the contemporary, large, racially diverse US birth parameters sample used to create the Olsen weight-, length-, and head-circumference-for-age curves. Data from 391 681 US infants (Pediatrix Medical Group) born at 22 to 42 weeks' gestational age (born in 1998-2006) included birth weight, length, and head circumference, estimated gestational age, and gender. Separate subsamples were used to create and validate curves. Established methods were used to determine the weight-for-length ratio that was most highly correlated with weight and uncorrelated with length. Final smoothed percentile curves (3rd to 97th) were created by the Lambda Mu Sigma (LMS) method. The validation sample was used to confirm results. The final sample included 254 454 singleton infants (57.2% male) who survived to discharge. BMI was the best overall weight-for-length ratio for both genders and a majority of gestational ages. Gender-specific BMI-for-age curves were created (n = 127 446) and successfully validated (n = 126 988). Mean z scores for the validation sample were ∼0 (∼1 SD). BMI was different across gender and gestational age. We provide a set of validated reference curves (gender-specific) to track changes in BMI for prematurely born infants cared for in the NICU for use with weight-, length-, and head-circumference-for-age intrauterine growth curves. Copyright © 2015 by the American Academy of Pediatrics.
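The LMS method cited above turns a measurement into a z score through a standard transformation; a sketch with hypothetical parameter values (the published curves supply the actual sex- and gestational-age-specific L, M and S):

```python
import math

def lms_z_score(x, L, M, S):
    """LMS z score: z = ((x/M)**L - 1) / (L*S), or log(x/M)/S as L -> 0."""
    if abs(L) < 1e-9:
        return math.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)

# Hypothetical example: a BMI of 8.5 scored against illustrative LMS values
print(lms_z_score(8.5, L=0.3, M=9.0, S=0.12))
```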
Thermal performance evaluation of the infrared telescope dewar subsystem
NASA Technical Reports Server (NTRS)
Urban, E. W.
1986-01-01
Thermal performance evaluations (TPE) were conducted with the superfluid helium dewar of the Infrared Telescope (IRT) experiment from November 1981 to August 1982. Tests included measuring key operating parameters, simulating operations with an attached instrument cryostat, and validating servicing, operating and safety procedures. Test activities and results are summarized. All objectives were satisfied except for those involving transfer of low pressure liquid helium (LHe) from a supply dewar into the dewar subsystem.
van der Meer, Adriaan J; Hansen, Bettina E; Fattovich, Giovanna; Feld, Jordan J; Wedemeyer, Heiner; Dufour, Jean-François; Lammert, Frank; Duarte-Rojo, Andres; Manns, Michael P; Ieluzzi, Donatella; Zeuzem, Stefan; Hofmann, W Peter; de Knegt, Robert J; Veldt, Bart J; Janssen, Harry L A
2015-02-01
Reliable tools to predict long-term outcome among patients with well compensated advanced liver disease due to chronic HCV infection are lacking. Risk scores for mortality and for cirrhosis-related complications were constructed with Cox regression analysis in a derivation cohort and evaluated in a validation cohort, both including patients with chronic HCV infection and advanced fibrosis. In the derivation cohort, 100/405 patients died during a median 8.1 (IQR 5.7-11.1) years of follow-up. Multivariate Cox analyses showed age (HR=1.06, 95% CI 1.04 to 1.09, p<0.001), male sex (HR=1.91, 95% CI 1.10 to 3.29, p=0.021), platelet count (HR=0.91, 95% CI 0.87 to 0.95, p<0.001) and log10 aspartate aminotransferase/alanine aminotransferase ratio (HR=1.30, 95% CI 1.12 to 1.51, p=0.001) were independently associated with mortality (C statistic=0.78, 95% CI 0.72 to 0.83). In the validation cohort, 58/296 patients with cirrhosis died during a median of 6.6 (IQR 4.4-9.0) years. Among patients with estimated 5-year mortality risks <5%, 5-10% and >10%, the observed 5-year mortality rates in the derivation cohort and validation cohort were 0.9% (95% CI 0.0 to 2.7) and 2.6% (95% CI 0.0 to 6.1), 8.1% (95% CI 1.8 to 14.4) and 8.0% (95% CI 1.3 to 14.7), 21.8% (95% CI 13.2 to 30.4) and 20.9% (95% CI 13.6 to 28.1), respectively (C statistic in validation cohort = 0.76, 95% CI 0.69 to 0.83). The risk score for cirrhosis-related complications also incorporated HCV genotype (C statistic = 0.80, 95% CI 0.76 to 0.83 in the derivation cohort; and 0.74, 95% CI 0.68 to 0.79 in the validation cohort). Prognosis of patients with chronic HCV infection and compensated advanced liver disease can be accurately assessed with risk scores including readily available objective clinical parameters. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
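As a rough illustration of how a Cox-based risk score of this kind combines the reported covariates, the sketch below builds a linear predictor from the published hazard ratios (beta = ln(HR) per covariate unit; the platelet-count unit and the absence of interaction or baseline terms are assumptions, and the authors' actual score may be parameterized differently):

```python
import math

# Hazard ratios from the derivation cohort; beta = ln(HR) per covariate unit.
BETAS = {
    "age_years": math.log(1.06),
    "male": math.log(1.91),
    "platelets": math.log(0.91),       # assumed per 10^9/L
    "log10_ast_alt": math.log(1.30),
}

def linear_predictor(age_years, male, platelets, ast, alt):
    """Higher values imply higher estimated mortality risk."""
    return (BETAS["age_years"] * age_years
            + BETAS["male"] * (1 if male else 0)
            + BETAS["platelets"] * platelets
            + BETAS["log10_ast_alt"] * math.log10(ast / alt))

print(linear_predictor(age_years=60, male=True, platelets=120, ast=80, alt=64))
```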
NASA Astrophysics Data System (ADS)
Gok, Gokhan; Mosna, Zbysek; Arikan, Feza; Arikan, Orhan; Erdem, Esra
2016-07-01
Ionospheric observation is essentially accomplished by specialized radar systems called ionosondes. The time delay between the transmitted and received signals versus frequency is measured by the ionosonde, and the received signals are processed to generate ionogram plots, which show the time delay or reflection height of signals with respect to transmitted frequency. The critical frequencies of ionospheric layers and the virtual heights, which provide useful information about ionospheric structure, can be extracted from ionograms. Ionograms also indicate the amount of variability or disturbance in the ionosphere. With special inversion algorithms and tomographic methods, electron density profiles can also be estimated from ionograms. Although structural pictures of the ionosphere in the vertical direction can be observed from ionosonde measurements, some errors may arise due to inaccuracies in signal propagation, modeling, data processing and tomographic reconstruction algorithms. Recently the IONOLAB group (www.ionolab.org) developed a new algorithm for effective and accurate extraction of ionospheric parameters and reconstruction of the electron density profile from ionograms. The electron density reconstruction algorithm applies advanced optimization techniques to calculate the parameters of any existing analytical function that defines electron density with respect to height, using ionogram measurement data. The process of reconstructing electron density with respect to height is known as ionogram scaling or true height analysis. The IONOLAB-RAY algorithm is a tool to investigate the propagation path and parameters of HF waves in the ionosphere. The algorithm models wave propagation using ray representation under the geometrical optics approximation. In the algorithm, the structural ionospheric characteristics are represented as realistically as possible, including anisotropy, inhomogeneity and time dependence, in a 3-D voxel structure. The algorithm is also used for various purposes including calculation of actual height and generation of ionograms. In this study, the performance of the electron density reconstruction algorithm of the IONOLAB group and the standard electron density profile algorithms of ionosondes are compared with IONOLAB-RAY wave propagation simulation in near vertical incidence. The electron density reconstruction and parameter extraction algorithms of ionosondes are validated against the IONOLAB-RAY results both for quiet and disturbed ionospheric states in Central Europe, using ionosonde stations such as Pruhonice and Juliusruh. It is observed that the IONOLAB ionosonde parameter extraction and electron density reconstruction algorithm performs significantly better compared to standard algorithms, especially for disturbed ionospheric conditions. IONOLAB-RAY provides an efficient and reliable tool to investigate and validate ionosonde electron density reconstruction algorithms, especially in determination of the reflection height (true height) of signals and critical parameters of the ionosphere. This study is supported by TUBITAK 114E541, 115E915 and joint TUBITAK 114E092 and AS CR 14/001 projects.
You, Benoit; Deng, Wei; Hénin, Emilie; Oza, Amit; Osborne, Raymond
2016-01-01
In low-risk gestational trophoblastic neoplasia, chemotherapy effect is monitored and adjusted with serum human chorionic gonadotrophin (hCG) levels. Mathematical modeling of hCG kinetics may allow prediction of methotrexate (MTX) resistance via the production parameter "hCGres." This approach was evaluated using the GOG-174 (NRG Oncology/Gynecologic Oncology Group-174) trial database, in which weekly MTX (arm 1) was compared with dactinomycin (arm 2). The database (210 patients, including 78 with resistance) was split into 2 sets. A 126-patient training set was initially used to estimate model parameters. Patient hCG kinetics from days 7 to 45 were fit to: [hCG(time)] = hCG7 * exp(-k * time) + hCGres, where hCGres is residual hCG tumor production, hCG7 is the initial hCG level, and k is the elimination rate constant. Receiver operating characteristic (ROC) analyses defined a putative hCGres predictor of resistance. An 84-patient test set was used to assess prediction validity. The hCGres was predictive of outcome in both arms, with no impact of treatment arm on the unexplained variability of kinetic parameter estimates. The best hCGres cutoffs to discriminate resistant versus sensitive patients were 7.7 and 74.0 IU/L in arms 1 and 2, respectively. By combining them, 2 predictive groups were defined (ROC area under the curve, 0.82; sensitivity, 93.8%; specificity, 70.5%). The predictive value of hCGres-based groups regarding resistance was reproducible in the test set (ROC area under the curve, 0.81; sensitivity, 88.9%; specificity, 73.1%). Both hCGres and treatment arm were associated with resistance by logistic regression analysis. The early predictive value of the modeled kinetic parameter hCGres regarding resistance seems promising in the GOG-174 study. This is the second positive evaluation of this approach. Prospective validation is warranted.
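A minimal sketch of fitting the stated kinetic model to serial hCG values (the measurements below are invented for illustration, and the time-origin convention for hCG7 is an assumption):

```python
import numpy as np
from scipy.optimize import curve_fit

def hcg_model(t, hcg7, k, hcg_res):
    # Literal form from the abstract: hCG(t) = hCG7 * exp(-k*t) + hCGres.
    return hcg7 * np.exp(-k * t) + hcg_res

# Illustrative weekly hCG measurements (IU/L) over days 7-45 -- not trial data
t = np.array([7.0, 14.0, 21.0, 28.0, 35.0, 42.0])
y = np.array([1200.0, 520.0, 240.0, 130.0, 90.0, 75.0])

(hcg7, k, hcg_res), _ = curve_fit(hcg_model, t, y, p0=[2500.0, 0.1, 60.0])
print(f"hCGres = {hcg_res:.1f} IU/L")  # compared against the arm-specific cutoff
```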
Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; ...
2015-12-04
Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.
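One of the deviation metrics named above, the Nash-Sutcliffe coefficient, is simple to compute; a minimal sketch with made-up runoff values:

```python
import numpy as np

def nash_sutcliffe(simulated, observed):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1 is a perfect fit; values <= 0 mean the model is no better
    than predicting the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

# Illustrative runoff series (arbitrary units)
print(nash_sutcliffe([2.9, 3.1, 4.2, 3.8], [3.0, 3.2, 4.0, 4.1]))
```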
Bias in error estimation when using cross-validation for model selection.
Varma, Sudhir; Simon, Richard
2006-02-23
Cross-validation (CV) is an effective method for estimating the prediction error of a classifier. Some recent articles have proposed methods for optimizing classifiers by choosing classifier parameter values that minimize the CV error estimate. We have evaluated the validity of using the CV error estimate of the optimized classifier as an estimate of the true error expected on independent data. We used CV to optimize the classification parameters for two kinds of classifiers: Shrunken Centroids and Support Vector Machines (SVM). Random training datasets were created, with no difference in the distribution of the features between the two classes. Using these "null" datasets, we selected classifier parameter values that minimized the CV error estimate. 10-fold CV was used for Shrunken Centroids, while Leave-One-Out CV (LOOCV) was used for the SVM. Independent test data were created to estimate the true error. With "null" and "non-null" (with differential expression between the classes) data, we also tested a nested CV procedure, in which an inner CV loop is used to tune the parameters while an outer CV is used to compute an estimate of the error. The CV error estimate for the classifier with the optimal parameters was found to be a substantially biased estimate of the true error that the classifier would incur on independent data. Even though there is no real difference between the two classes for the "null" datasets, the CV error estimate for the Shrunken Centroid classifier with the optimal parameters was less than 30% on 18.5% of simulated training datasets. For the SVM with optimal parameters the estimated error rate was less than 30% on 38% of "null" datasets. Performance of the optimized classifiers on the independent test set was no better than chance. The nested CV procedure reduces the bias considerably and gives an estimate of the error that is very close to that obtained on the independent testing set for both Shrunken Centroids and SVM classifiers for "null" and "non-null" data distributions. We show that using CV to compute an error estimate for a classifier that has itself been tuned using CV gives a significantly biased estimate of the true error. Proper use of CV for estimating the true error of a classifier developed using a well defined algorithm requires that all steps of the algorithm, including classifier parameter tuning, be repeated in each CV loop. A nested CV procedure provides an almost unbiased estimate of the true error.
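A minimal nested-CV sketch in scikit-learn terms (the dataset and parameter grid are illustrative; the inner loop tunes an SVM while the outer loop scores the whole tuning procedure, mirroring the design described):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 500))          # "null" data: features carry no signal
y = rng.integers(0, 2, size=80)

inner = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=5)

# Naive estimate: the same CV both tunes C and reports error (biased low)
inner.fit(X, y)
print("tuned-CV error: ", 1 - inner.best_score_)

# Nested estimate: outer CV scores the whole tuning procedure (nearly unbiased)
outer_scores = cross_val_score(inner, X, y, cv=5)
print("nested-CV error:", 1 - outer_scores.mean())
```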
NASA Astrophysics Data System (ADS)
Razak, Jeefferie Abd; Ahmad, Sahrim Haji; Ratnam, Chantara Thevy; Mahamood, Mazlin Aida; Yaakub, Juliana; Mohamad, Noraiham
2014-09-01
A fractional 2(5) two-level factorial design of experiment (DOE) was applied to systematically prepare the NR/EPDM blend using a Haake internal mixer set-up. A process model of rubber blend preparation was developed that correlates the mixer process input parameters with the output response of blend compatibility. Model analysis of variance (ANOVA) and model fitting through curve evaluation finalized an R2 of 99.60% with the proposed parametric combination of A = 30/70 NR/EPDM blend ratio, B = 70°C mixing temperature, C = 70 rpm rotor speed, D = 5 minutes mixing period and E = 1.30 phr EPDM-g-MAH compatibilizer addition, with an overall desirability of 0.966. Model validation with a small deviation of +2.09% confirmed the repeatability of the mixing strategy, with a valid maximum tensile strength output representing the blend miscibility. A theoretical calculation of NR/EPDM blend compatibility is also included and compared. In short, this study provides a brief insight into the utilization of DOE for experimental simplification and parameter inter-correlation studies, especially when dealing with multiple variables during elastomeric rubber blend preparation.
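Assuming the design is the usual half fraction, 2(5-1) with 16 runs, it can be generated from a full 2(4) design by aliasing the fifth factor to the four-way interaction; a plain-Python sketch:

```python
from itertools import product

# Full 2^4 design in coded (-1/+1) units for factors A-D; the fifth factor is
# generated by the defining relation E = A*B*C*D (a resolution V half fraction).
runs = [(a, b, c, d, a * b * c * d)
        for a, b, c, d in product((-1, 1), repeat=4)]

print(len(runs), "runs")   # 16 runs instead of 32 for the full 2^5 design
for run in runs[:4]:
    print(run)             # columns: A, B, C, D, E
```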
Experimental and modeling uncertainties in the validation of lower hybrid current drive
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poli, F. M.; Bonoli, P. T.; Chilenski, M.
Our work discusses sources of uncertainty in the validation of lower hybrid wave current drive simulations against experiments, by evolving self-consistently the magnetic equilibrium and the heating and current drive profiles, calculated with a combined toroidal ray tracing code and 3D Fokker–Planck solver. The simulations indicate a complex interplay of elements, where uncertainties in the input plasma parameters, in the models and in the transport solver combine and compensate each other, at times. It is concluded that ray-tracing calculations should include a realistic representation of the density and temperature in the region between the confined plasma and the wall, which is especially important in regimes where the LH waves are weakly damped and undergo multiple reflections from the plasma boundary. Uncertainties introduced in the processing of diagnostic data as well as uncertainties introduced by model approximations are assessed. We show that, by comparing the evolution of the plasma parameters in self-consistent simulations with available data, inconsistencies can be identified and limitations in the models or in the experimental data assessed.
Modelling exploration of non-stationary hydrological system
NASA Astrophysics Data System (ADS)
Kim, Kue Bum; Kwon, Hyun-Han; Han, Dawei
2015-04-01
Traditional hydrological modelling assumes that the catchment does not change with time (i.e., stationary conditions), which means a model calibrated on the historical period remains valid for the future. In reality, however, changing climate and catchment conditions may invalidate this stationarity assumption. It is a challenge to make a hydrological model adaptive to future climate and catchment conditions that are not observable at the present time. In this study a lumped conceptual rainfall-runoff model called IHACRES was applied to a catchment in southwest England. A long observation record (1961-2008) was used, and the model was calibrated seasonally because there are significant seasonal rainfall patterns; only the summer period is explored further here, as it is more sensitive to climate and land cover change than the other three seasons. We expect model performance to improve when the model is calibrated on individual seasons. The data are split into calibration and validation periods, with the validation period intended to represent future unobserved situations. The success of the non-stationary model depends not only on good performance during the calibration period but also during the validation period. Initially, the calibration is based on changing the model parameters with time. A methodology is proposed to adapt the parameters using forward and backward stepwise selection schemes. However, both the forward and backward multi-parameter models failed in validation. One problem is that regression against time is not reliable, since the trend may not be a monotonic linear function of time. A second issue is that changing multiple parameters makes the selection process very complex, time consuming, and ineffective in the validation period. As a result, two new concepts are explored, as sketched below. First, only one parameter is selected for adjustment while the other parameters are held constant. Second, the regression is made against climate conditions instead of against time. This new approach proved very effective, and the resulting non-stationary model worked well in both the calibration and validation periods. Although the catchment is in southwest England and the data cover only the summer period, the methodology proposed in this study is general and applicable to other catchments. We hope this study will stimulate the hydrological community to explore a variety of sites so that valuable experience and knowledge can be gained to improve our understanding of such a complex modelling issue in climate change impact assessment.
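A minimal sketch of the single-parameter, regress-on-climate idea described above (the parameter, covariate and all values are hypothetical; the abstract does not specify which IHACRES parameter was adapted):

```python
import numpy as np

# Hypothetical record of one calibrated IHACRES-type parameter (e.g., a drying
# rate constant) for successive summers, with a matching climate covariate
# (e.g., mean summer temperature). Values are invented for illustration.
temp = np.array([15.2, 15.8, 16.1, 16.6, 17.0, 17.5])   # degrees C
param = np.array([22.0, 20.5, 19.8, 18.6, 17.9, 16.8])  # parameter value

# Regress the single adjustable parameter on climate instead of on time;
# the other model parameters stay fixed.
slope, intercept = np.polyfit(temp, param, deg=1)

future_temp = 18.2   # a projected climate condition
print("adapted parameter:", slope * future_temp + intercept)
```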
Reijnierse, Esmee M.; Trappenburg, Marijke C.; Leter, Morena J.; Blauw, Gerard Jan; de van der Schueren, Marian A. E.; Meskers, Carel G. M.; Maier, Andrea B.
2015-01-01
Objectives Diagnostic criteria for sarcopenia include measures of muscle mass, muscle strength and physical performance. Consensus on the definition of sarcopenia has not yet been reached. To improve insight into the most clinically valid definition of sarcopenia, this study aimed to compare the association between parameters of malnutrition, as a risk factor in sarcopenia, and diagnostic measures of sarcopenia in geriatric outpatients. Material and Methods This study is based on data from a cross-sectional study conducted in a geriatric outpatient clinic including 185 geriatric outpatients (mean age 82 years). Parameters of malnutrition included risk of malnutrition (assessed by the Short Nutritional Assessment Questionnaire), loss of appetite, unintentional weight loss and underweight (body mass index <22 kg/m(2)). Diagnostic measures of sarcopenia included relative muscle mass (lean mass and appendicular lean mass [ALM] as percentages), absolute muscle mass (total lean mass and ALM/height(2)), handgrip strength and walking speed. All diagnostic measures of sarcopenia were standardized. Associations between parameters of malnutrition (independent variables) and diagnostic measures of sarcopenia (dependent variables) were analysed using multivariate linear regression models adjusted for age, body mass, fat mass and height in separate models. Results None of the parameters of malnutrition was consistently associated with the diagnostic measures of sarcopenia. The strongest associations were found for both relative and absolute muscle mass; weaker associations were found for muscle strength and physical performance. Underweight (p < 0.001) and unintentional weight loss (p = 0.031) were most strongly associated with higher lean mass percentage after adjusting for age. Loss of appetite (p = 0.003) and underweight (p = 0.021) were most strongly associated with lower total lean mass after adjusting for age and fat mass. Conclusion Parameters of malnutrition relate differently to diagnostic measures of sarcopenia in geriatric outpatients. The association between parameters of malnutrition and diagnostic measures of sarcopenia was strongest for both relative and absolute muscle mass, while weaker associations were found with muscle strength and physical performance. PMID:26284368
NASA Astrophysics Data System (ADS)
Totz, Sonja; Eliseev, Alexey V.; Petri, Stefan; Flechsig, Michael; Caesar, Levke; Petoukhov, Vladimir; Coumou, Dim
2018-02-01
We present and validate a set of equations for representing the atmosphere's large-scale general circulation in an Earth system model of intermediate complexity (EMIC). These dynamical equations have been implemented in Aeolus 1.0, which is a statistical-dynamical atmosphere model (SDAM) and includes radiative transfer and cloud modules (Coumou et al., 2011; Eliseev et al., 2013). The statistical-dynamical approach is computationally efficient and thus enables us to perform climate simulations at multimillennia timescales, which is a prime aim of our model development. Further, this computational efficiency enables us to scan a large and high-dimensional parameter space to tune the model parameters, e.g., for sensitivity studies. Here, we present novel equations for the large-scale zonal-mean wind as well as those for planetary waves. Together with the synoptic parameterization (as presented by Coumou et al., 2011), these form the mathematical description of the dynamical core of Aeolus 1.0. We optimize the dynamical core parameter values by tuning all relevant dynamical fields to ERA-Interim reanalysis data (1983-2009), forcing the dynamical core with prescribed surface temperature, surface humidity and cumulus cloud fraction. We test the model's performance in reproducing the seasonal cycle and the influence of the El Niño-Southern Oscillation (ENSO). We use a simulated annealing optimization algorithm, which approximates the global minimum of a high-dimensional function. With non-tuned parameter values, the model performs reasonably in terms of its representation of zonal-mean circulation, planetary waves and storm tracks. The simulated annealing optimization improves in particular the model's representation of the Northern Hemisphere jet stream and storm tracks as well as the Hadley circulation. The regions of high azonal wind velocities (planetary waves) are accurately captured for all validation experiments. The zonal-mean zonal wind and the integrated lower troposphere mass flux show good results, in particular in the Northern Hemisphere. In the Southern Hemisphere, the model tends to produce too-weak zonal-mean zonal winds and a too-narrow Hadley circulation. We discuss possible reasons for these model biases as well as planned future model improvements and applications.
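As an illustration of this kind of global tuning step, scipy's dual_annealing (a generalized simulated annealing) can approximate the global minimum of a multi-parameter misfit; the objective and bounds below are hypothetical stand-ins for the model-to-reanalysis cost function:

```python
import numpy as np
from scipy.optimize import dual_annealing

def misfit(params):
    # Stand-in for a model-vs-reanalysis cost with many local minima.
    return np.sum(params**2) + 2.0 * np.sum(1.0 - np.cos(3.0 * params))

bounds = [(-5.0, 5.0)] * 6   # six hypothetical dynamical-core parameters
result = dual_annealing(misfit, bounds, seed=42)
print(result.x, result.fun)  # near the global minimum at the origin
```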
Nguyen, N; Milanfar, P; Golub, G
2001-01-01
In many image restoration/resolution enhancement applications, the blurring process, i.e., point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problem from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation method (GCV). We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
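For a linear smoother such as Tikhonov-regularized deblurring, the GCV function mentioned above has the closed form GCV(lambda) = n*||(I - A(lambda))y||^2 / tr(I - A(lambda))^2, where A(lambda) is the influence matrix; a small dense sketch (the paper's Lanczos and Gauss-quadrature machinery exists precisely to avoid forming A(lambda) explicitly for large problems):

```python
import numpy as np

def gcv_score(H, y, lam):
    """GCV for Tikhonov-regularized least squares min ||Hx - y||^2 + lam*||x||^2;
    A(lam) = H (H^T H + lam I)^{-1} H^T is the influence matrix."""
    n, p = H.shape
    A = H @ np.linalg.solve(H.T @ H + lam * np.eye(p), H.T)
    resid = (np.eye(n) - A) @ y
    return n * (resid @ resid) / np.trace(np.eye(n) - A) ** 2

rng = np.random.default_rng(1)
H = rng.normal(size=(50, 30))                       # toy blur-like operator
y = H @ rng.normal(size=30) + 0.1 * rng.normal(size=50)
lams = [10.0**k for k in range(-6, 3)]
print(min(lams, key=lambda lam: gcv_score(H, y, lam)))  # GCV-chosen lambda
```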
Center of pressure based segment inertial parameters validation
Rezzoug, Nasser; Gorce, Philippe; Isableu, Brice; Venture, Gentiane
2017-01-01
By proposing efficient methods for Body Segment Inertial Parameter (BSIP) estimation and validating them with a force plate, it is possible to improve the inverse dynamic computations that are necessary in multiple research areas. To date, a variety of studies have been conducted to improve BSIP estimation, but to our knowledge a real validation has never been completely successful. In this paper, we propose a validation method using both kinematic parameters and kinetic parameters (contact forces), gathered from an optical motion capture system and a force plate respectively. To compare BSIPs, we used the measured contact forces (force plate) as the ground truth and reconstructed the displacements of the Center of Pressure (COP) using inverse dynamics from two different estimation techniques. Only minor differences were seen when comparing the estimated segment masses. Their influence on the COP computation, however, is large, and the results show very distinguishable patterns of COP movement. Improving BSIP techniques is crucial, as deviations in the estimates can result in large errors. This method could be used as a tool to validate BSIP estimation techniques. An advantage of this approach is that it facilitates the comparison between BSIP estimation methods and, more specifically, it shows the accuracy of those parameters. PMID:28662090
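The force-plate COP used as ground truth follows from the measured forces and moments; a sketch under one common convention (plate origin on the top surface, z up; axis and sign conventions vary by manufacturer, so treat these relations as an assumption):

```python
import numpy as np

def center_of_pressure(F, M, dz=0.0):
    """COP in plate coordinates from force F=(Fx,Fy,Fz) and moment M=(Mx,My,Mz)
    measured about a point a distance dz below the plate surface.
    Common relations: COPx = (-My - Fx*dz)/Fz, COPy = (Mx - Fy*dz)/Fz."""
    Fx, Fy, Fz = F
    Mx, My, Mz = M
    return np.array([(-My - Fx * dz) / Fz, (Mx - Fy * dz) / Fz])

# Illustrative quiet-standing sample: forces in N, moments in N*m, dz in m
print(center_of_pressure(F=(5.0, -3.0, 700.0), M=(12.0, -8.0, 0.5), dz=0.04))
```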
Validating the simulation of large-scale parallel applications using statistical characteristics
Zhang, Deli; Wilke, Jeremiah; Hendry, Gilbert; ...
2016-03-01
Simulation is a widely adopted method to analyze and predict the performance of large-scale parallel applications. Validating the hardware model is highly important for complex simulations with a large number of parameters. Common practice involves calculating the percent error between the projected and the real execution time of a benchmark program. However, in a high-dimensional parameter space, this coarse-grained approach often suffers from parameter insensitivity, which may not be known a priori. Moreover, the traditional approach cannot be applied to the validation of software models, such as application skeletons used in online simulations. In this work, we present a methodology and a toolset for validating both hardware and software models by quantitatively comparing fine-grained statistical characteristics obtained from execution traces. Although statistical information has been used in tasks like performance optimization, this is the first attempt to apply it to simulation validation. Lastly, our experimental results show that the proposed evaluation approach offers significant improvement in fidelity when compared to evaluation using total execution time, and the proposed metrics serve as reliable criteria that progress toward automating the simulation tuning process.
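The abstract does not name its specific metrics, so as one plausible fine-grained comparison of trace statistics, a two-sample Kolmogorov-Smirnov test on per-event durations is used as a stand-in here (all data invented):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
# Hypothetical per-event durations (ms) extracted from a real execution trace
# and from the simulator's trace of the same benchmark.
real_trace = rng.gamma(shape=2.0, scale=1.5, size=2000)
simulated_trace = rng.gamma(shape=2.1, scale=1.45, size=2000)

# Compare whole distributions rather than a single total-runtime number;
# a small KS statistic (large p) means the simulated distribution matches.
stat, p = ks_2samp(real_trace, simulated_trace)
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")
```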
NASA Technical Reports Server (NTRS)
Morris, A. Terry
1999-01-01
This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
NASA Astrophysics Data System (ADS)
Adam, Saad; Premnath, Kannan
2016-11-01
Fluid mechanics of non-Newtonian fluids, which arise in numerous settings, is characterized by non-linear constitutive models that pose certain unique challenges for computational methods. Here, we consider the lattice Boltzmann method (LBM), which offers some computational advantages due to its kinetic basis and its simpler stream-and-collide procedure enabling efficient simulations. However, further improvements are necessary to improve its numerical stability and accuracy for computations involving broader parameter ranges. Hence, in this study, we extend the cascaded LBM formulation by modifying its moment equilibria and relaxation parameters to handle a variety of non-Newtonian constitutive equations, including power-law and Bingham fluids, with improved stability. In addition, we include corrections to the moment equilibria to obtain an inertial frame invariant scheme without cubic-velocity defects. After performing a validation study for various benchmark flows, we study the physics of non-Newtonian flow over pairs of circular and square cylinders in a tandem arrangement, especially the wake structure interactions and their effects on the resulting forces on each cylinder, and elucidate the effect of the various characteristic parameters.
A technique for computation of noise temperature due to a beam waveguide shroud
NASA Technical Reports Server (NTRS)
Veruttipong, W.; Franco, M. M.
1993-01-01
Direct analytical computation of the noise temperature of real beam waveguide (BWG) systems, including all mirrors and the surrounding shroud, is an extremely complex problem and virtually impossible to achieve. Yet the DSN antennas are required to be ultra low-noise in order to be effective, and a reasonably accurate prediction is essential. This article presents a relatively simple technique to compute a real BWG system noise temperature by combining analytical techniques with data from experimental tests. Specific expressions and parameters for X-band (8.45-GHz) BWG noise computation are obtained for DSS 13 and DSS 24, now under construction. These expressions are also valid for various conditions of the BWG feed systems, including horn sizes and positions, and mirror sizes, curvatures, and positions. Parameters for S- and Ka-bands (2.3 and 32.0 GHz) have not been determined; however, those can be obtained following the same procedure as for X-band.
NASA Astrophysics Data System (ADS)
Pelevin, V.; Rostovtseva, V.
Rivers discharging into the seas, and surging in shallow water areas, disturb the balance between living and dead matter that holds in the open ocean (Pelevin and Rostovtseva, 2001). In littoral areas, therefore, the one-parameter model of seawater optical properties developed for the open ocean (Pelevin and Rostovtseva, 1997) is not valid. We suggest using a three-parameter model of the light scattering and absorbing properties of sea water for most shelf areas. The three parameters are: the coefficient of light absorption by coloured matter at 500 nm (coloured matter includes both chlorophyll pigments and "yellow substance"), the coefficient of light absorption by suspended matter, and the coefficient of light backscattering by suspended matter. For some specific shelf areas with coloured suspended matter we suggest adding a fourth parameter that accounts for the spectral dependence of backscattering by suspended matter. A method for identifying areas of this type is also given. An algorithm is developed for solving the inverse problem of estimating these parameters from optical remote sensing data obtained from satellites. It consists of two steps: a rough determination of the parameter values from selected spectral characteristics, followed by minimization of the discrepancy between the real and model spectra. The suggested algorithm was applied to spectral distributions of upward radiation measured in the Black, Marmara and Baltic Seas. Comparison of the obtained results with direct measurements carried out in these waters confirmed the validity of the model for these shelf waters and showed the efficiency of the suggested approach. V.N. Pelevin and V.V. Rostovtseva, 1997, Estimation of light-scattering and light-absorbing admixture concentration in open ocean waters of different types. Atmospheric and Oceanic Optics, 10(9), 989-995. V.N. Pelevin and V.V. Rostovtseva, 2001, Modelling of optic-biological parameters of open ocean waters. OCEANOLOGIA, 43(4).
Knoops, Paul G M; Borghi, Alessandro; Ruggiero, Federica; Badiali, Giovanni; Bianchi, Alberto; Marchetti, Claudio; Rodriguez-Florez, Naiara; Breakey, Richard W F; Jeelani, Owase; Dunaway, David J; Schievano, Silvia
2018-01-01
Repositioning of the maxilla in orthognathic surgery is carried out for functional and aesthetic purposes. Pre-surgical planning tools can predict 3D facial appearance by computing the response of the soft tissue to the changes to the underlying skeleton. The clinical use of commercial prediction software remains controversial, likely due to the deterministic nature of these computational predictions. A novel probabilistic finite element model (FEM) for the prediction of postoperative facial soft tissues is proposed in this paper. A probabilistic FEM was developed and validated on a cohort of eight patients who underwent maxillary repositioning and had pre- and postoperative cone beam computed tomography (CBCT) scans taken. Firstly, a correlation analysis assessed the various modelling parameters. Secondly, a design of experiments (DOE) provided a range of potential outcomes based on uniformly distributed input parameters, followed by an optimisation. Lastly, the second DOE iteration provided optimised predictions with a probability range. A range of 3D predictions was obtained using the probabilistic FEM and validated using reconstructed soft tissue surfaces from the postoperative CBCT data. The predictions in the nose and upper lip areas accurately include the true postoperative position, whereas the prediction underestimates the position of the cheeks and lower lip. A probabilistic FEM has been developed and validated for the prediction of the facial appearance following orthognathic surgery. This method shows how inaccuracies in the modelling and uncertainties in executing surgical planning influence the soft tissue prediction, and it provides a range of predictions including a minimum and maximum, which may be helpful for patients in understanding the impact of surgery on the face.
Experimental validation of the RATE tool for inferring HLA restrictions of T cell epitopes.
Paul, Sinu; Arlehamn, Cecilia S Lindestam; Schulten, Veronique; Westernberg, Luise; Sidney, John; Peters, Bjoern; Sette, Alessandro
2017-06-21
The RATE tool was recently developed to computationally infer the HLA restriction of given epitopes from immune response data of HLA-typed subjects, without additional cumbersome experimentation. Here, RATE was validated using experimentally defined restriction data from a set of 191 tuberculosis-derived epitopes and 63 healthy individuals with MTB infection from the Western Cape Region of South Africa. Using this experimental dataset, the parameters utilized by the RATE tool to infer restriction were optimized. These included the relative frequency (RF) of subjects responding to a given epitope and expressing a given allele, as compared to the general test population, and the associated p-value in a Fisher's exact test. We also examined the potential for further optimization based on the predicted binding affinity of epitopes to potential restricting HLA alleles, and on the absolute number of individuals expressing a given allele and responding to the specific epitope. Different statistical measures, including the Matthews correlation coefficient, accuracy, sensitivity and specificity, were used to evaluate the performance of RATE as a function of these criteria. Based on our results we recommend selection of HLA restrictions with cutoffs of p-value < 0.01 and RF ≥ 1.3. The usefulness of the tool was demonstrated by inferring new HLA restrictions for epitope sets where restrictions could not be experimentally determined due to lack of the necessary cell lines, and for an additional dataset related to the recognition of pollen-derived epitopes in allergic patients. In summary, experimental datasets were used to validate the RATE tool, the parameters used to infer restriction were optimized, and new HLA restrictions were identified using the optimized tool.
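The two selection criteria described (relative frequency of allele expression among responders versus the whole test population, plus a Fisher's exact test) are straightforward to reproduce; a sketch with a hypothetical 2x2 response/allele table:

```python
from scipy.stats import fisher_exact

# Hypothetical counts for one epitope and one HLA allele (not study data)
resp_with, resp_without = 20, 10       # responders with/without the allele
nonresp_with, nonresp_without = 5, 28  # non-responders with/without the allele

n_resp = resp_with + resp_without
n_total = n_resp + nonresp_with + nonresp_without

# RF: allele frequency among responders relative to the whole test population
rf = (resp_with / n_resp) / ((resp_with + nonresp_with) / n_total)
_, p = fisher_exact([[resp_with, resp_without],
                     [nonresp_with, nonresp_without]])

# Recommended cutoffs from the study: p < 0.01 and RF >= 1.3
print(f"RF = {rf:.2f}, p = {p:.4f}, inferred: {p < 0.01 and rf >= 1.3}")
```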
Validation of Satellite Derived Cloud Properties Over the Southeastern Pacific
NASA Astrophysics Data System (ADS)
Ayers, J.; Minnis, P.; Zuidema, P.; Sun-Mack, S.; Palikonda, R.; Nguyen, L.; Fairall, C.
2005-12-01
Satellite measurements of cloud properties and the radiation budget are essential for understanding the meso- and large-scale processes that determine the variability of climate over the southeastern Pacific. Of particular interest in this region is the prevalent stratocumulus cloud deck. Stratocumulus albedos are directly related to cloud microphysical properties that need to be accurately characterized in Global Climate Models (GCMs) to properly estimate the Earth's radiation budget. Meteorological observations in this region are sparse, causing large uncertainties in initialized model fields. Remote sensing from satellites can provide a wealth of information about the clouds in this region, but it is vital to validate the remotely sensed parameters and to understand their relationship to other parameters that are not directly observed by the satellites. The measurements from the R/V Roger Revelle during the 2003 STRATUS cruise and from the R/V Ron Brown during EPIC 2001 and the 2004 STRATUS cruises are suitable for validating and improving the interpretation of the satellite-derived cloud properties. In this study, satellite-derived cloud properties including coverage, height, optical depth, and liquid water path are compared with in situ measurements taken during the EPIC and STRATUS cruises. The remotely sensed values are derived from Geostationary Operational Environmental Satellite (GOES) imager data, Moderate Resolution Imaging Spectroradiometer (MODIS) data from the Terra and Aqua satellites, and from the Visible and Infrared Scanner (VIRS) aboard the Tropical Rainfall Measuring Mission (TRMM) satellite. The products from this study will include regional monthly cloud climatologies derived from the GOES data for the 2003 and 2004 cruises, as well as micro- and macrophysical cloud property retrievals centered over the ship tracks from MODIS and VIRS.
Sailer, Verena; Gevensleben, Heidrun; Dietrich, Joern; Goltz, Diane; Kristiansen, Glen; Bootz, Friedrich; Dietrich, Dimo
2017-01-01
Despite advances in combined modality therapy, outcomes in head and neck squamous cell cancer (HNSCC) remain dismal, with five-year overall survival rates of less than 50%. Prognostic biomarkers are urgently needed to identify patients with a high risk of death after initial curative treatment. The methylation status of the paired-like homeodomain transcription factor 2 (PITX2) gene has recently emerged as a powerful prognostic biomarker in various cancers. In the present study, the clinical performance of PITX2 methylation was validated in an HNSCC cohort by means of an independent analytical platform (Infinium HumanMethylation450 BeadChip, Illumina, Inc.). A total of 528 HNSCC patients from The Cancer Genome Atlas (TCGA) were included in the study. Death was defined as the primary endpoint. PITX2 methylation was correlated with overall survival and clinicopathological parameters. PITX2 methylation was significantly associated with sex, tumor site, p16 status, and grade. In univariate Cox proportional hazards analysis, PITX2 hypermethylation analyzed as a continuous and as a dichotomized variable was significantly associated with prolonged overall survival of HNSCC patients (continuous: hazard ratio (HR) = 0.19 [95%CI: 0.04-0.88], p = 0.034; dichotomized: HR = 0.52 [95%CI: 0.33-0.84], p = 0.007). In multivariate Cox analysis including established clinicopathological parameters, PITX2 promoter methylation was confirmed as a prognostic factor (HR = 0.28 [95%CI: 0.09-0.84], p = 0.023). Using an independent analytical platform, PITX2 methylation was validated as a prognostic biomarker in HNSCC patients, identifying patients that potentially benefit from intensified surveillance and/or administration of adjuvant/neoadjuvant treatment, e.g. immunotherapy.
Fernandez-Calle, Pilar; Pelaz, Sandra; Oliver, Paloma; Alcaide, Maria Jose; Gomez-Rioja, Ruben; Buno, Antonio; Iturzaeta, Jose Manuel
2013-01-01
Technological innovation requires laboratories to ensure that modifications or incorporations of new techniques do not alter the quality of their results. In an ISO 15189 accredited laboratory, flexible scope accreditation facilitates the inclusion of these changes prior to evaluation by the accreditation body. A strategy for performing the validation of a biochemistry analyzer in an accredited laboratory holding a flexible scope is shown. A validation procedure including the evaluation of the imprecision and bias of two Dimension Vista 1500 analysers was conducted, and the comparability of patient results between one of them and the recently replaced Dimension RxL Max was evaluated. All studies followed the respective Clinical and Laboratory Standards Institute (CLSI) protocols. Thirty chemistry assays were studied. Coefficients of variation, percent bias and total error were calculated for all tests, with biological variation taken as the acceptance criterion. Quality control material and patient samples were used as test materials. Interchangeability of the results was established by processing forty patients' samples on both devices. Twenty-seven of the 30 studied parameters met the allowable performance criteria; sodium, chloride and magnesium did not fulfil the acceptance criteria. Evidence of interchangeability of patient results was obtained for all parameters except magnesium, NT-proBNP, cTroponin I and C-reactive protein. A laboratory with a well-structured and documented validation procedure can opt for a flexible scope of accreditation. In addition, performing these activities prior to use on patient samples may reveal technical issues which must be corrected to minimize their impact on patient results.
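One conventional way to roll the imprecision and bias estimates into a single acceptance check is the total error formula; the snippet below (Python) is a generic sketch of that calculation, not the laboratory's documented procedure, and the z = 1.65 coverage factor is the usual convention rather than a value stated in the abstract.

    def total_error(bias_pct, cv_pct, z=1.65):
        # TE = |bias| + z * CV, to be compared against an allowable total
        # error derived from biological variation (the acceptance criterion)
        return abs(bias_pct) + z * cv_pct

    print(total_error(bias_pct=1.2, cv_pct=2.0))  # 4.5 (%)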
Parameter Estimation and Model Validation of Nonlinear Dynamical Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abarbanel, Henry; Gill, Philip
In the performance period of this work under a DOE contract, the co-PIs, Philip Gill and Henry Abarbanel, developed new methods for statistical data assimilation for problems of DOE interest, including geophysical and biological problems. This included numerical optimization algorithms for variational principles and new parallel-processing Monte Carlo routines for performing the path integrals of statistical data assimilation. These results are summarized in the monograph “Predicting the Future: Completing Models of Observed Complex Systems” by Henry Abarbanel, published by Springer-Verlag in June 2013. Additional results and details have appeared in the peer-reviewed literature.
Tunable diode-laser absorption measurements of methane at elevated temperatures
NASA Astrophysics Data System (ADS)
Nagali, V.; Chou, S. I.; Baer, D. S.; Hanson, R. K.; Segall, J.
1996-07-01
A diode-laser sensor system based on absorption spectroscopy techniques has been developed to monitor CH4 nonintrusively in high-temperature environments. Fundamental spectroscopic parameters, including the line strengths of the transitions in the R(6) manifold of the 2ν3 band near 1.646 μm, have been determined from high-resolution absorption measurements in a heated static cell. In addition, a corrected expression for the CH4 partition function has been validated experimentally over the temperature range from 400 to 915 K. Potential applications of the diode-laser sensor system include process control, combustion measurements, and atmospheric monitoring.
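The role of the partition function in such measurements can be seen in the standard temperature scaling of a line strength; the Python sketch below uses the common HITRAN-style formula with a user-supplied Q(T), and is a generic illustration rather than the paper's corrected CH4 expression.

    import numpy as np

    def linestrength(T, S_ref, T_ref, E_lower, nu0, Q):
        # E_lower and nu0 in cm^-1; Q is a callable partition function
        h, c, k = 6.626e-34, 2.998e10, 1.381e-23   # J*s, cm/s, J/K
        c2 = h * c / k                              # second radiation constant, cm*K
        boltzmann = np.exp(-c2 * E_lower * (1.0 / T - 1.0 / T_ref))
        stim_em = (1 - np.exp(-c2 * nu0 / T)) / (1 - np.exp(-c2 * nu0 / T_ref))
        return S_ref * (Q(T_ref) / Q(T)) * boltzmann * stim_em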
International Space Station Modal Correlation Analysis
NASA Technical Reports Server (NTRS)
Fitzpatrick, Kristin; Grygier, Michael; Laible, Michael; Sugavanam, Sujatha
2012-01-01
This paper summarizes the on-orbit modal test and the related modal analysis, model validation and correlation performed for the ISS Stage ULF4, DTF S4-1A, October 11, 2010, GMT 284/06:13:00.00. The objective of this analysis is to validate and correlate analytical models with the intent to verify the ISS critical interface dynamic loads and improve fatigue life prediction. For the ISS configurations under consideration, on-orbit dynamic responses were collected with Russian vehicles attached and without the Orbiter attached to the ISS. The ISS instrumentation systems used to collect the dynamic responses during the DTF S4-1A included the Internal Wireless Instrumentation System (IWIS), External Wireless Instrumentation System (EWIS), Structural Dynamic Measurement System (SDMS), Space Acceleration Measurement System (SAMS), Inertial Measurement Unit (IMU) and the ISS external cameras. Experimental modal analyses were performed on the measured data to extract modal parameters including frequency, damping and mode shape information. Correlations and comparisons between test and analytical modal parameters were performed to assess the accuracy of the models for the ISS configuration under consideration. Based on the frequency comparisons, the accuracy of the mathematical models is assessed and model refinement recommendations are given. Section 2.0 of this report presents the math model used in the analysis. This section also describes the ISS configuration under consideration and summarizes the associated primary modes of interest along with the fundamental appendage modes. Section 3.0 discusses the details of the ISS Stage ULF4 DTF S4-1A test. Section 4.0 discusses the on-orbit instrumentation systems that were used in the collection of the data analyzed in this paper. The modal analysis approach and results are summarized in Section 5.0. The model correlation and validation effort is reported in Section 6.0. Conclusions and recommendations drawn from this analysis are included in Section 7.0.
Afonine, Pavel V.; Adams, Paul D.; Urzhumtsev, Alexandre
2018-06-08
TLS modelling was developed by Schomaker and Trueblood to describe atomic displacement parameters through concerted (rigid-body) harmonic motions of an atomic group [Schomaker & Trueblood (1968), Acta Cryst. B24, 63–76]. The results of a TLS refinement are T, L and S matrices that provide individual anisotropic atomic displacement parameters (ADPs) for all atoms belonging to the group. These ADPs can be calculated analytically using a formula that relates the elements of the TLS matrices to atomic parameters. Alternatively, ADPs can be obtained numerically from the parameters of concerted atomic motions corresponding to the TLS matrices. Both procedures are expected to produce the same ADP values and therefore can be used to assess the results of TLS refinement. Here, the implementation of this approach in PHENIX is described and several illustrations, including the use of all models from the PDB that have been subjected to TLS refinement, are provided.
Real-time Retrieving Atmospheric Parameters from Multi-GNSS Constellations
NASA Astrophysics Data System (ADS)
Li, X.; Zus, F.; Lu, C.; Dick, G.; Ge, M.; Wickert, J.; Schuh, H.
2016-12-01
Multi-constellation GNSS (e.g. GPS, GLONASS, Galileo, and BeiDou) brings great opportunities and challenges for the real-time retrieval of atmospheric parameters in support of numerical weather prediction (NWP) nowcasting or severe weather event monitoring. In this study, observations from the different GNSS are combined for atmospheric parameter retrieval based on the real-time precise point positioning technique. The atmospheric parameters retrieved from multi-GNSS observations, including zenith total delay (ZTD), integrated water vapor (IWV), horizontal gradients (especially high-resolution gradient estimates) and slant total delays (STD), are carefully analyzed and evaluated against VLBI, radiosonde, water vapor radiometer and numerical weather model data to independently validate the performance of the individual GNSS and to demonstrate the benefits of multi-constellation GNSS for real-time atmospheric monitoring. The results show that multi-GNSS processing can provide real-time atmospheric products with higher accuracy, stronger reliability and better distribution, which would be beneficial for atmospheric sounding systems, especially for the nowcasting of extreme weather.
Polarizable atomic multipole-based force field for DOPC and POPE membrane lipids
NASA Astrophysics Data System (ADS)
Chu, Huiying; Peng, Xiangda; Li, Yan; Zhang, Yuebin; Min, Hanyi; Li, Guohui
2018-04-01
A polarizable atomic multipole-based force field for the membrane bilayer models 1,2-dioleoyl-phosphocholine (DOPC) and 1-palmitoyl-2-oleoyl-phosphatidylethanolamine (POPE) has been developed. The force field adopts the same framework as the Atomic Multipole Optimized Energetics for Biomolecular Applications (AMOEBA) model, in which the charge distribution of each atom is represented by permanent atomic monopole, dipole and quadrupole moments. Many-body polarization, including inter- and intramolecular polarization, is modelled in a consistent manner with distributed atomic polarizabilities. The van der Waals parameters were first transferred from existing AMOEBA parameters for small organic molecules and then optimised by fitting to ab initio intermolecular interaction energies between the model compounds and a water molecule. Molecular dynamics simulations of the two aqueous DOPC and POPE membrane bilayer systems, each consisting of 72 lipid molecules, were then carried out to validate the force field parameters. Membrane width, area per lipid, volume per lipid, deuterium order parameters and electron density profiles were consistent with experimental values.
NASA Astrophysics Data System (ADS)
Yamamoto, Shu; Ara, Takahiro
Recently, induction motors (IMs) and permanent-magnet synchronous motors (PMSMs) have been used in various industrial drive systems. The hardware used for controlling the adjustable-speed drives of these motors is almost identical; despite this, different techniques are generally used for parameter measurement and speed-sensorless control of the two motor types. If the same technique could be used for parameter measurement and sensorless control, a highly versatile adjustable-speed-drive system could be realized. In this paper, the authors describe a new universal sensorless control technique for both IMs and PMSMs (including salient-pole and nonsalient-pole machines). A mathematical model applicable to both IMs and PMSMs is discussed. Using this model, the authors derive the proposed universal sensorless vector control algorithm on the basis of estimation of the stator flux linkage vector. All the electrical motor parameters are determined by a unified test procedure. The proposed method was implemented on three test machines, and actual driving test results demonstrate its validity.
A method to investigate the diffusion properties of nuclear calcium.
Queisser, Gillian; Wittum, Gabriel
2011-10-01
Modeling biophysical processes generally requires knowledge of the underlying biological parameters. The quality of simulation results is strongly influenced by the accuracy of these parameters; hence, identifying the parameter values that the model includes is a major part of simulating biophysical processes. In many cases, secondary data can be gathered by experimental setups, which can be exploited by mathematical inverse modeling techniques. Here we describe a method for identifying the diffusion properties of calcium in the nuclei of rat hippocampal neurons. The method is based on a Gauss-Newton method for solving a least-squares minimization problem and was formulated in such a way that it is ideally implementable in the simulation platform uG. Making use of independently published space- and time-dependent calcium imaging data, generated from laser-assisted calcium uncaging experiments, we could identify the diffusion properties of nuclear calcium and were able to validate a previously published model that describes nuclear calcium dynamics as a diffusion process.
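A minimal Gauss-Newton loop of the kind described might look as follows (Python); residual and jacobian stand in for comparing simulated nuclear calcium signals with the imaging data and are placeholders for the problem-specific pieces, not the uG implementation.

    import numpy as np

    def gauss_newton(residual, jacobian, theta0, n_iter=20, tol=1e-10):
        # theta holds the sought parameters, e.g. a diffusion coefficient
        theta = np.asarray(theta0, dtype=float)
        for _ in range(n_iter):
            r = residual(theta)          # model(theta) - data, stacked
            J = jacobian(theta)          # d residual / d theta
            # Gauss-Newton step: minimize ||J*dtheta + r|| in least squares
            dtheta, *_ = np.linalg.lstsq(J, -r, rcond=None)
            theta = theta + dtheta
            if np.linalg.norm(dtheta) < tol:
                break
        return theta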
Identification of modal parameters including unmeasured forces and transient effects
NASA Astrophysics Data System (ADS)
Cauberghe, B.; Guillaume, P.; Verboven, P.; Parloo, E.
2003-08-01
In this paper, a frequency-domain method to estimate modal parameters from short data records with known (measured) input forces and unknown input forces is presented. The method can be used for an experimental modal analysis, an operational modal analysis (output-only data), and a combination of both. Traditional experimental and operational modal analyses in the frequency domain start, respectively, from frequency response functions and spectral density functions. To estimate these functions accurately, sufficient data have to be available. The technique developed in this paper estimates the modal parameters directly from the Fourier spectra of the outputs and the known inputs. Instead of applying Hanning windows to these short data records, the transient effects are estimated simultaneously with the modal parameters. The method is illustrated, tested and validated by Monte Carlo simulations and experiments. The presented method for processing short data sequences leads to unbiased estimates with a small variance in comparison to the more traditional approaches.
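One way to realize this idea is a linearized common-denominator fit in which an extra polynomial absorbs the transient term that windowing would otherwise suppress; the Python sketch below follows that assumption and is a simplification, not the paper's exact estimator.

    import numpy as np

    def fit_with_transient(Y, U, omega, na, nb, nc):
        # model: A(s) Y = B(s) U + C(s), s = j*omega, with a_na fixed to 1;
        # C(s) absorbs the transient (initial-condition) contribution
        s = 1j * omega
        cols = [s**i * Y for i in range(na)] \
             + [-(s**j) * U for j in range(nb + 1)] \
             + [-(s**k) for k in range(nc + 1)]
        A = np.column_stack(cols)
        rhs = -(s**na) * Y
        # stack real and imaginary parts for a real-valued least squares
        A_ri = np.vstack([A.real, A.imag])
        b_ri = np.concatenate([rhs.real, rhs.imag])
        coef, *_ = np.linalg.lstsq(A_ri, b_ri, rcond=None)
        a = np.append(coef[:na], 1.0)    # denominator coefficients, low order first
        b = coef[na:na + nb + 1]
        c = coef[na + nb + 1:]
        return a, b, c                   # poles (hence modes) follow from np.roots(a[::-1])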
Real-Time Gait Cycle Parameter Recognition Using a Wearable Accelerometry System
Yang, Che-Chang; Hsu, Yeh-Liang; Shih, Kao-Shang; Lu, Jun-Ming
2011-01-01
This paper presents the development of a wearable accelerometry system for real-time gait cycle parameter recognition. Using a tri-axial accelerometer, the wearable motion detector is a single waist-mounted device that measures trunk accelerations during walking. Several gait cycle parameters, including cadence, step regularity, stride regularity and step symmetry, can be estimated in real-time using an autocorrelation procedure. For validation purposes, five Parkinson’s disease (PD) patients and five young healthy adults were recruited in an experiment. The gait cycle parameters of the two subject groups of different mobility could be quantified and distinguished by the system. Practical considerations and limitations for implementing the autocorrelation procedure in such a real-time system are also discussed. This study can be extended to future attempts at real-time detection of disabling gaits, such as festinating or freezing of gait in PD patients. Ambulatory rehabilitation, gait assessment and personal telecare for people with gait disorders are also possible applications. PMID:22164019
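The autocorrelation procedure can be sketched as follows (Python); the peak-search windows and the regularity definitions follow the commonly used Moe-Nilssen-style formulation, which is an assumption about the method rather than the authors' exact code.

    import numpy as np

    def gait_parameters(vert_acc, fs):
        x = vert_acc - vert_acc.mean()
        n = len(x)
        ac = np.correlate(x, x, mode="full")[n - 1:]
        ac = ac / ac[0]                         # normalize zero lag to 1
        lo, hi = int(0.25 * fs), int(1.0 * fs)  # assumed step-period window
        step_lag = lo + np.argmax(ac[lo:hi])
        stride_lag = step_lag + lo + np.argmax(ac[step_lag + lo:step_lag + hi])
        cadence = 60.0 * fs / step_lag          # steps per minute
        step_reg = ac[step_lag]                 # first dominant peak
        stride_reg = ac[stride_lag]             # second dominant peak
        symmetry = step_reg / stride_reg        # closer to 1 = more symmetric
        return cadence, step_reg, stride_reg, symmetry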
Forward Bay Cover Separation Modeling and Testing for the Orion Multi-Purpose Crew Vehicle
NASA Technical Reports Server (NTRS)
Ali, Yasmin; Chuhta, Jesse D.; Hughes, Michael P.; Radke, Tara S.
2015-01-01
Spacecraft multi-body separation events during atmospheric descent require complex testing and analysis to validate the flight separation dynamics models used to verify no re-contact. The NASA Orion Multi-Purpose Crew Vehicle (MPCV) architecture includes a highly-integrated Forward Bay Cover (FBC) jettison assembly design that combines parachutes and piston thrusters to separate the FBC from the Crew Module (CM) and avoid re-contact. A multi-disciplinary team across numerous organizations examined key model parameters and risk areas to develop a robust but affordable test campaign in order to validate and verify the FBC separation event for Exploration Flight Test-1 (EFT-1). The FBC jettison simulation model is highly complex, consisting of dozens of parameters varied simultaneously, with numerous multi-parameter interactions (coupling and feedback) among the various model elements, and encompassing distinct near-field, mid-field, and far-field regimes. The test campaign was composed of component-level testing (for example gas-piston thrusters and parachute mortars), ground FBC jettison tests, and FBC jettison air-drop tests that were accomplished by a highly multi-disciplinary team. Three ground jettison tests isolated the testing of mechanisms and structures to anchor the simulation models excluding aerodynamic effects. Subsequently, two air-drop tests added aerodynamic and parachute elements, and served as integrated system demonstrations, which had been preliminarily explored during the Orion Pad Abort-1 (PA-1) flight test in May 2010. Both ground and drop tests provided extensive data to validate analytical models and to verify the FBC jettison event for EFT-1. Additional testing will be required to support human certification of this separation event, for which NASA and Lockheed Martin are applying knowledge from Apollo and EFT-1 testing and modeling to develop a robust human-rated FBC separation event.
The effect of uphill and downhill walking on gait parameters: A self-paced treadmill study.
Kimel-Naor, Shani; Gottlieb, Amihai; Plotnik, Meir
2017-07-26
It has been shown that gait parameters vary systematically with the slope of the surface when walking uphill (UH) or downhill (DH) (Andriacchi et al., 1977; Crowe et al., 1996; Kawamura et al., 1991; Kirtley et al., 1985; McIntosh et al., 2006; Sun et al., 1996). However, gait trials performed on inclined surfaces have been subject to certain technical limitations, including the use of fixed-speed treadmills (TMs) or, alternatively, sampling only a few gait cycles on inclined ramps. Further, prior work has not analyzed upper body kinematics. This study aims to investigate the effects of slope on gait parameters using a self-paced TM (SPTM), which facilitates more natural walking, including measuring upper body kinematics and gait coordination parameters. The gait of 11 young healthy participants was sampled while walking at steady-state speed. Measurements were made at slopes of +10°, 0° and -10°. Force plates and a motion capture system were used to reconstruct twenty spatiotemporal gait parameters. For validation, previously described parameters were compared with the literature; novel parameters measuring upper body kinematics and bilateral gait coordination were also analyzed. Results showed that most lower and upper body gait parameters were affected by the walking slope angle. Specifically, UH walking had a higher impact on gait kinematics than DH walking. However, gait coordination parameters were not affected by walking slope, suggesting that gait asymmetry, left-right coordination and gait variability are robust characteristics of walking. The findings of the study are discussed with reference to a potential combined effect of slope and gait speed. Follow-up studies are needed to explore the relative effects of each of these factors. Copyright © 2017. Published by Elsevier Ltd.
Verster, Joris C; Roth, Thomas
2012-03-01
There are various methods to examine driving ability. Comparisons between these methods and their relationships with actual on-road driving are often not determined. The objective of this study was to determine whether laboratory tests measuring driving-related skills could adequately predict on-the-road driving performance in normal traffic. Ninety-six healthy volunteers performed a standardized on-the-road driving test. Subjects were instructed to drive at a constant speed and with a steady lateral position within the right traffic lane. The standard deviation of lateral position (SDLP), i.e., the weaving of the car, was determined. The subjects also performed a psychometric test battery including the DSST, the Sternberg memory scanning test, a tracking test, and a divided attention test. Difference scores from placebo for the parameters of the psychometric tests and SDLP were computed and correlated with each other. A stepwise linear regression analysis determined the predictive validity of the laboratory test battery for SDLP. Stepwise regression analyses revealed that the combination of five parameters (hard tracking, tracking and reaction time of the divided attention test, and reaction time and percentage of errors of the Sternberg memory scanning test) together had a predictive validity of 33.4%. The psychometric tests in this test battery showed insufficient predictive validity to replace the on-the-road driving test in normal traffic.
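A toy forward-stepwise selection in the spirit of this analysis is sketched below (Python); X and y stand for difference scores on the laboratory parameters and on SDLP, and the 0.01 minimum-improvement stopping rule is an arbitrary assumption rather than the study's criterion.

    import numpy as np

    def forward_stepwise(X, y, names, max_terms=5, min_gain=0.01):
        def r2(cols):
            A = np.column_stack([np.ones(len(y)), X[:, cols]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ beta
            return 1.0 - resid.var() / y.var()
        chosen, remaining, best = [], list(range(X.shape[1])), 0.0
        while remaining and len(chosen) < max_terms:
            score, j = max((r2(chosen + [j]), j) for j in remaining)
            if score - best < min_gain:   # stop when the gain is negligible
                break
            best = score
            remaining.remove(j)
            chosen.append(j)
        return [names[j] for j in chosen], best   # cf. the reported 33.4%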
Raising the standards of the calf-raise test: a systematic review.
Hébert-Losier, Kim; Newsham-West, Richard J; Schneiders, Anthony G; Sullivan, S John
2009-11-01
The calf-raise test is used by clinicians and researchers in sports medicine to assess properties of the calf muscle-tendon unit. The test generally involves repetitive concentric-eccentric muscle action of the plantar-flexors in unipedal stance and is quantified by the number of raises performed. Although the calf-raise test appears to have acceptable reliability and face validity, and is commonly used for medical assessment and rehabilitation of injuries, no universally acceptable test parameters have been published to date. A systematic review of the existing literature was conducted to investigate the consistency as well as universal acceptance of the evaluation purposes, test parameters, outcome measurements and psychometric properties of the calf-raise test. Nine electronic databases were searched during the period May 30th to September 21st 2008. Forty-nine articles met the inclusion criteria and were quality assessed. Information on study characteristics and calf-raise test parameters, as well as quantitative data, were extracted; tabulated; and statistically analysed. The average quality score of the reviewed articles was 70.4+/-12.2% (range 44-90%). Articles provided various test parameters; however, a consensus was not ascertained. Key testing parameters varied, were often unstated, and few studies reported reliability or validity values, including sensitivity and specificity. No definitive normative values could be established and the utility of the test in subjects with pathologies remained unclear. Although adapted for use in several disciplines and traditionally recommended for clinical assessment, there is no uniform description of the calf-raise test in the literature. Further investigation is recommended to ensure consistent use and interpretation of the test by researchers and clinicians.
NASA Astrophysics Data System (ADS)
North, M. R.; Petropoulos, G. P.; Ireland, G.; McCalmont, J. P.
2015-02-01
In the present study, the ability of the SimSphere Soil Vegetation Atmosphere Transfer (SVAT) model to estimate key parameters characterising land surface interactions was evaluated. Specifically, SimSphere's performance in predicting Net Radiation (Rnet), Latent Heat (LE), Sensible Heat (H) and Air Temperature (Tair) at 1.3 and 50 m was examined. Model simulations were validated against ground-based measurements of the corresponding parameters for a total of 70 days of the year 2011 from 7 CarboEurope network sites, covering a variety of biomes and environmental and climatic conditions. Overall, model performance can largely be described as satisfactory for most of the experimental sites and evaluated parameters. Of all the compared parameters, predicted H fluxes consistently showed the highest agreement with the in-situ data in all ecosystems, with an average RMSD of 55.36 W m-2. LE fluxes and Rnet also agreed well with the in-situ data, with RMSDs of 62.75 and 64.65 W m-2 respectively. A good agreement between modelled and measured LE and H fluxes was found, especially for smoothed daily flux trends. For Tair 1.3 m and Tair 50 m, mean RMSDs of 4.14 and 3.54 °C were reported respectively. This work presents the first all-inclusive evaluation of SimSphere, particularly so in a European setting. The results of this study contribute decisively towards a better understanding of the model's structure and its correspondence to the real-world system. The findings also further establish the model's capability as a useful teaching and research tool for modelling Earth's land surface interactions. This is of considerable importance in light of the rapidly expanding use of the model worldwide, including ongoing research by various space agencies examining its synergistic use with Earth Observation data towards the development of operational products at a global scale.
Li, Yang; Zhang, Zhenjun; Liao, Zhenhua; Mo, Zhongjun; Liu, Weiqiang
2017-10-01
Finite element models have been widely used to predict biomechanical parameters of the cervical spine. Previous studies investigated the influence of position of rotational centers of prostheses on cervical biomechanical parameters after 1-level total disc replacement. The purpose of this study was to explore the effects of axial position of rotational centers of prostheses on cervical biomechanics after 2-level total disc replacement. A validated finite element model of C3-C7 segments and 2 prostheses, including the rotational center located at the superior endplate (SE) and inferior endplate (IE), was developed. Four total disc replacement models were used: 1) IE inserted at C4-C5 disc space and IE inserted at C5-C6 disc space (IE-IE), 2) IE-SE, 3) SE-IE, and 4) SE-SE. All models were subjected to displacement control combined with a 50 N follower load to simulate flexion and extension motions in the sagittal plane. For each case, biomechanical parameters, including predicted moments, range of rotation at each level, facet joint stress, and von Mises stress on the ultra-high-molecular-weight polyethylene core of the prostheses, were calculated. The SE-IE model resulted in significantly lower stress at the cartilage level during extension and at the ultra-high-molecular-weight polyethylene cores when compared with the SE-SE construct and did not generate hypermotion at the C4-C5 level compared with the IE-SE and IE-IE constructs. Based on the present analysis, the SE-IE construct is recommended for treating cervical disease at the C4-C6 level. This study may provide a useful model to inform clinical operations. Copyright © 2017 Elsevier Inc. All rights reserved.
Timiryasova, Tatyana M.; Bonaparte, Matthew I.; Luo, Ping; Zedar, Rebecca; Hu, Branda T.; Hildreth, Stephen W.
2013-01-01
A dengue plaque reduction neutralization test (PRNT) to measure dengue serotype-specific neutralizing antibodies for all four virus serotypes was developed, optimized, and validated in accordance with guidelines for the validation of bioanalytical test methods, using human serum samples from dengue-infected persons and persons receiving a dengue vaccine candidate. Production and characterization of the dengue challenge viruses used in the assay were standardized. Once the virus stocks were characterized, the dengue PRNT50 for each of the four serotypes was optimized according to a factorial design of experiments approach for critical test parameters, including the number of days of cell seeding before testing, the percentage of carboxymethylcellulose overlay medium, and the number of days of incubation post-infection, to generate a robust assay. The PRNT50 was then validated and demonstrated to be suitable for detecting and measuring dengue serotype-specific neutralizing antibodies in human serum samples, with acceptable intra-assay and inter-assay precision, accuracy/dilutability, and specificity, and with a lower limit of quantitation of 10. PMID:23458954
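As an illustration of the PRNT50 readout itself, the snippet below (Python) interpolates, on a log-dilution scale, the reciprocal serum dilution at which plaque counts fall to 50% of the virus-only control; linear interpolation is an assumed choice, since laboratories also use probit or logistic fits, and the example counts are hypothetical.

    import numpy as np

    def prnt50(dilutions, plaques, virus_only):
        reduction = 1.0 - np.asarray(plaques, dtype=float) / virus_only
        logd = np.log10(np.asarray(dilutions, dtype=float))
        # find the pair of dilutions bracketing 50% reduction and interpolate
        for i in range(len(reduction) - 1):
            r0, r1 = reduction[i], reduction[i + 1]
            if (r0 - 0.5) * (r1 - 0.5) <= 0:
                frac = (0.5 - r0) / (r1 - r0)
                return 10 ** (logd[i] + frac * (logd[i + 1] - logd[i]))
        return None   # 50% endpoint not bracketed by the dilution series

    # hypothetical counts for reciprocal dilutions 1:10 ... 1:640
    print(prnt50([10, 40, 160, 640], plaques=[5, 12, 30, 48], virus_only=50))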
NPOESS Preparatory Project Validation Program for the Cross-track Infrared Sounder
NASA Astrophysics Data System (ADS)
Barnet, C.; Gu, D.; Nalli, N. R.
2009-12-01
The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Program, in partnership with the National Aeronautics and Space Administration (NASA), will launch the NPOESS Preparatory Project (NPP), a risk reduction and data continuity mission, prior to the first operational NPOESS launch. The NPOESS Program, in partnership with Northrop Grumman Aerospace Systems, will execute the NPP Calibration and Validation (Cal/Val) program to ensure the data products comply with the requirements of the sponsoring agencies. The Cross-track Infrared Sounder (CrIS) and the Advanced Technology Microwave Sounder (ATMS) are two of the instruments that make up the suite of sensors on NPP. Together, CrIS and ATMS will produce three Environmental Data Records (EDRs): the Atmospheric Vertical Temperature Profile (AVTP), the Atmospheric Vertical Moisture Profile (AVMP), and the Atmospheric Vertical Pressure Profile (AVPP). The AVTP and the AVMP are both NPOESS Key Performance Parameters (KPPs). The validation plans establish science and user community leadership and participation, as well as demonstrated, cost-effective Cal/Val approaches. This presentation will provide an overview of the collaborative data, techniques, and schedule for the validation of the NPP CrIS and ATMS environmental data products.
Evaluation of the Validated Soil Moisture Product from the SMAP Radiometer
NASA Technical Reports Server (NTRS)
O'Neill, P.; Chan, S.; Colliander, A.; Dunbar, S.; Njoku, E.; Bindlish, R.; Chen, F.; Jackson, T.; Burgin, M.; Piepmeier, J.;
2016-01-01
NASA's Soil Moisture Active Passive (SMAP) mission launched on January 31, 2015 into a sun-synchronous 6 am/6 pm orbit with the objective of producing global maps of high-resolution soil moisture and freeze-thaw state every 2-3 days using an L-band (active) radar and an L-band (passive) radiometer. The SMAP radiometer began acquiring routine science data on March 31, 2015 and continues to operate nominally. SMAP's radiometer-derived soil moisture product (L2_SM_P) provides soil moisture estimates posted on a 36 km fixed Earth grid using brightness temperature observations from descending (6 am) passes and ancillary data. A beta-quality version of L2_SM_P was released to the public in September 2015, with the fully validated L2_SM_P soil moisture data expected to be released in May 2016. Additional improvements (including optimization of retrieval algorithm parameters and upscaling approaches) and methodology expansions (including increasing the number of core sites, model-based intercomparisons, and results from several intensive field campaigns) are anticipated in moving from the accuracy assessment of the beta-quality data to an evaluation of the fully validated L2_SM_P data product.
A Diagnostic Assessment for Introductory Molecular and Cell Biology
Wood, William B.; Martin, Jennifer M.; Guild, Nancy A.; Vicens, Quentin; Knight, Jennifer K.
2010-01-01
We have developed and validated a tool for assessing understanding of a selection of fundamental concepts and basic knowledge in undergraduate introductory molecular and cell biology, focusing on areas in which students often have misconceptions. This multiple-choice Introductory Molecular and Cell Biology Assessment (IMCA) instrument is designed for use as a pre- and posttest to measure student learning gains. To develop the assessment, we first worked with faculty to create a set of learning goals that targeted important concepts in the field and seemed likely to be emphasized by most instructors teaching these subjects. We interviewed students using open-ended questions to identify commonly held misconceptions, formulated multiple-choice questions that included these ideas as distracters, and reinterviewed students to establish validity of the instrument. The assessment was then evaluated by 25 biology experts and modified based on their suggestions. The complete revised assessment was administered to more than 1300 students at three institutions. Analysis of statistical parameters including item difficulty, item discrimination, and reliability provides evidence that the IMCA is a valid and reliable instrument with several potential uses in gauging student learning of key concepts in molecular and cell biology. PMID:21123692
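The statistical parameters named above have standard classical-test-theory forms; the Python sketch below computes item difficulty, point-biserial discrimination and KR-20 reliability for a hypothetical 0/1-scored response matrix, and is not the authors' analysis code.

    import numpy as np

    def item_statistics(X):
        # X: students x items matrix of 0/1 scores
        n_students, n_items = X.shape
        difficulty = X.mean(axis=0)            # proportion correct per item
        total = X.sum(axis=1)
        # point-biserial discrimination: item score vs. rest-of-test score
        discrimination = np.array([
            np.corrcoef(X[:, j], total - X[:, j])[0, 1] for j in range(n_items)
        ])
        p = difficulty
        # KR-20 internal-consistency reliability
        kr20 = (n_items / (n_items - 1)) * \
               (1 - (p * (1 - p)).sum() / total.var(ddof=1))
        return difficulty, discrimination, kr20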
Jongen, S; Vuurman, E F P M; Ramaekers, J G; Vermeeren, A
2016-04-01
Laboratory tests assessing driving-related skills can be useful as initial screening tools to assess potential drug-induced impairment as part of a standardized behavioural assessment. Unfortunately, consensus about which laboratory tests should be included to reliably assess drug-induced impairment has not yet been reached. The aim of the present review was to evaluate the sensitivity of laboratory tests to the dose-dependent effects of alcohol, as a benchmark, on performance parameters. In total, 179 experimental studies were included. Results show that a cued go/no-go task and a divided attention test with primary tracking and secondary visual search were consistently sensitive to the impairing effects of medium and high blood alcohol concentrations. Driving performance assessed in a simulator was less sensitive to the effects of alcohol than naturalistic, on-the-road driving. In conclusion, replicating the results of several potentially useful tests and establishing their predictive validity for actual driving impairment deserve further research. In addition, driving simulators should be validated and compared head-to-head with naturalistic driving in order to increase construct validity. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
A simple method for measurement of maximal downstroke power on friction-loaded cycle ergometer.
Morin, Jean-Benoît; Belli, Alain
2004-01-01
The aim of this study was to propose and validate a post-hoc correction method to obtain maximal power values that take into account the inertia of the flywheel during sprints on friction-loaded cycle ergometers. The correction method was derived from a basic postulate of a linear deceleration-time evolution during the initial phase (until maximal power) of a sprint and included simple parameters such as flywheel inertia, maximal velocity, time to reach maximal velocity and friction force. The validity of this model was tested by comparing measured and calculated maximal power values for 19 sprint bouts performed by five subjects against 0.6-1 N kg(-1) friction loads. Non-significant differences between measured and calculated maximal power (1151+/-169 vs. 1148+/-170 W) and a mean error index of 1.31+/-1.20% (ranging from 0.09% to 4.20%) showed the validity of this method. Furthermore, the differences between measured maximal power and power computed neglecting inertia (20.4+/-7.6%, ranging from 9.5% to 33.2%) emphasized the usefulness of correcting power in studies of anaerobic power that do not account for inertia, as well as the interest of this simple post-hoc method.
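Numerically, the correction amounts to adding the power spent accelerating the flywheel to the friction power; the Python sketch below is one plausible reading of the postulate, with rim-referenced kinematics and variable names assumed rather than taken from the paper.

    def corrected_peak_power(F_friction, v_max, t_vmax, inertia, r_rim):
        # friction power at peak rim velocity
        p_friction = F_friction * v_max
        # a linear velocity rise up to peak power is assumed, so the rim
        # acceleration there is approximated by v_max / t_vmax
        accel = v_max / t_vmax
        # flywheel torque I*(a/r) expressed as power at the rim: I*a*v / r^2
        p_inertia = inertia * accel * v_max / r_rim**2
        return p_friction + p_inertia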
Quantitative model validation of manipulative robot systems
NASA Astrophysics Data System (ADS)
Kartowisastro, Iman Herwidiana
This thesis is concerned with applying the distortion quantitative validation technique to a robot manipulative system with revolute joints. In using the distortion technique to validate a model quantitatively, the model parameter uncertainties are taken into account in assessing the faithfulness of the model, an approach that is more objective than the commonly used visual comparison method. The industrial robot is represented by the TQ MA2000 robot arm. Details of the mathematical derivation of the distortion technique are given, explaining the required distortion of the constant parameters within the model and the assessment of model adequacy. Due to the complexity of a robot model, only the first three degrees of freedom are considered, and all links are assumed rigid. The modelling uses the Newton-Euler approach to obtain the dynamics model, and the Denavit-Hartenberg convention is used throughout the work. A conventional feedback control system is used in developing the model. The sensitivity of the system behaviour to parameter changes is investigated, as some parameters are redundant. This step is important so that the most significant parameters to be distorted can be selected; these are termed the fundamental parameters. The transfer function approach was chosen to validate an industrial robot quantitatively against the measured data due to its practicality. Initially, the assessment of the model fidelity criterion indicated that the model was not capable of explaining the transient record in terms of the model parameter uncertainties. Further investigation led to significant improvements of the model and a better understanding of its properties. After several improvements, the fidelity criterion was almost satisfied. Although the fidelity criterion is slightly less than unity, it has been shown that the distortion technique can be applied to a robot manipulative system. Using the validated model, the importance of friction terms in the model was highlighted with the aid of the partition control technique. It was also shown that the conventional feedback control scheme was insufficient for a robot manipulative system due to the high nonlinearity inherent in the robot manipulator.
Virtual Model Validation of Complex Multiscale Systems: Applications to Nonlinear Elastostatics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oden, John Tinsley; Prudencio, Ernest E.; Bauman, Paul T.
We propose a virtual statistical validation process as an aid to the design of experiments for the validation of phenomenological models of the behavior of material bodies, with focus on those cases in which knowledge of the fabrication process used to manufacture the body can provide information on the micro-molecular-scale properties underlying macroscale behavior. One example is given by models of elastomeric solids fabricated using polymerization processes. We describe a framework for model validation that involves Bayesian updates of parameters in statistical calibration and validation phases. The process enables the quantification of uncertainty in quantities of interest (QoIs) and the determination of model consistency using tools of statistical information theory. We assert that microscale information drawn from molecular models of the fabrication of the body provides a valuable source of prior information on parameters as well as a means for estimating model bias and designing virtual validation experiments to provide information gain over calibration posteriors.
A Method of Q-Matrix Validation for the Linear Logistic Test Model
Baghaei, Purya; Hohensinn, Christine
2017-01-01
The linear logistic test model (LLTM) is a well-recognized psychometric model for examining the components of difficulty in cognitive tests and validating construct theories. The plausibility of the construct model, summarized in a matrix of weights known as the Q-matrix or weight matrix, is tested by (1) comparing the fit of the LLTM with the fit of the Rasch model (RM) using the likelihood ratio (LR) test and (2) examining the correlation between the Rasch model item parameters and the LLTM-reconstructed item parameters. The problem with the LR test is that it is almost always significant and, consequently, the LLTM is rejected. The drawback of examining the correlation coefficient is that there is no cut-off value or lower bound for its magnitude. In this article we suggest a simulation method to set a minimum benchmark for the correlation between item parameters from the Rasch model and those reconstructed by the LLTM. If the cognitive model is valid, then the correlation coefficient between the RM-based item parameters and the LLTM-reconstructed item parameters derived from the theoretical weight matrix should be greater than those derived from the simulated matrices. PMID:28611721
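The suggested benchmark can be mimicked as follows (Python): item parameters are refit from many shuffled weight matrices of the same shape and entry distribution as Q, and the theoretical Q-matrix is required to beat an upper quantile of the simulated correlations. The shuffling scheme and the 95th-percentile cutoff are illustrative choices, not the article's exact design.

    import numpy as np

    def correlation_benchmark(beta_rm, Q, n_sim=1000, q=0.95, seed=0):
        rng = np.random.default_rng(seed)

        def reconstructed_corr(W):
            # least-squares basic parameters, then LLTM-reconstructed betas
            eta, *_ = np.linalg.lstsq(W, beta_rm, rcond=None)
            return np.corrcoef(beta_rm, W @ eta)[0, 1]

        sims = []
        for _ in range(n_sim):
            W = rng.permutation(Q.ravel()).reshape(Q.shape)  # shuffled weights
            sims.append(reconstructed_corr(W))
        return reconstructed_corr(Q), float(np.quantile(sims, q))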
Alley, William M.
1984-01-01
Several two- to six-parameter regional water balance models are examined by using 50-year records of monthly streamflow at 10 sites in New Jersey. These models include variants of the Thornthwaite-Mather model, the Palmer model, and the more recent Thomas abcd model. Prediction errors are relatively similar among the models. However, simulated values of state variables such as soil moisture storage differ substantially among the models, and fitted parameter values for different models sometimes indicated an entirely different type of basin response to precipitation. Some problems in parameter identification are noted, including difficulties in identifying an appropriate time lag factor for the Thornthwaite-Mather-type model for basins with little groundwater storage, very high correlations between upper and lower storages in the Palmer-type model, and large sensitivity of parameter a of the abcd model to bias in estimates of precipitation and potential evapotranspiration. Modifications to the threshold concept of the Thornthwaite-Mather model were statistically valid for the six stations in northern New Jersey. The abcd model resulted in a simulated seasonal cycle of groundwater levels similar to fluctuations observed in nearby wells but with greater persistence. These results suggest that extreme caution should be used in attaching physical significance to model parameters and in using the state variables of the models in indices of drought and basin productivity.
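For reference, the Thomas abcd model mentioned above is usually written as a four-parameter monthly recursion; the Python sketch below follows the common textbook formulation, which may differ in detail from the variants examined in the paper.

    import numpy as np

    def abcd_model(P, PE, a, b, c, d, S0=0.0, G0=0.0):
        # P, PE: monthly precipitation and potential evapotranspiration
        S, G, Q = S0, G0, []
        for p, pe in zip(P, PE):
            W = p + S                              # available water
            h = (W + b) / (2.0 * a)
            Y = h - np.sqrt(h**2 - W * b / a)      # evapotranspiration opportunity
            S = Y * np.exp(-pe / b)                # end-of-month soil moisture
            G = (G + c * (W - Y)) / (1.0 + d)      # groundwater storage
            Q.append((1.0 - c) * (W - Y) + d * G)  # direct runoff + baseflow
        return np.array(Q)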
Direct match data flow machine apparatus and process for data driven computing
Davidson, G.S.; Grafe, V.G.
1997-08-12
A data flow computer and method of computing are disclosed which utilize a data driven processor node architecture. The apparatus in a preferred embodiment includes a plurality of First-In-First-Out (FIFO) registers, a plurality of related data flow memories, and a processor. The processor makes the necessary calculations and includes a control unit to generate signals to enable the appropriate FIFO register receiving the result. In a particular embodiment, there are three FIFO registers per node: an input FIFO register to receive input information from an outside source and provide it to the data flow memories; an output FIFO register to provide output information from the processor to an outside recipient; and an internal FIFO register to provide information from the processor back to the data flow memories. The data flow memories are comprised of four commonly addressed memories. A parameter memory holds the A and B parameters used in the calculations; an opcode memory holds the instruction; a target memory holds the output address; and a tag memory contains status bits for each parameter. One status bit indicates whether the corresponding parameter is in the parameter memory, and one status bit indicates whether the stored information in the corresponding data parameter is to be reused. The tag memory outputs a "fire" signal (signal R VALID) when all of the necessary information has been stored in the data flow memories, and thus when the instruction is ready to be fired to the processor. 11 figs.
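In software terms, the firing rule reads roughly as follows (Python); this toy class mirrors the tag-memory logic described in the abstract and is a loose analogy for illustration, not the patented hardware.

    from dataclasses import dataclass, field

    @dataclass
    class DataFlowNode:
        opcode: str                                   # opcode memory
        target: int                                   # target memory (output address)
        params: dict = field(default_factory=dict)    # parameter memory (A, B)
        present: set = field(default_factory=set)     # tag-memory status bits

        def store(self, name, value):
            self.params[name] = value
            self.present.add(name)
            return self.ready()   # analogous to raising the "fire" (R VALID) signal

        def ready(self):
            return {"A", "B"} <= self.present

        def fire(self):
            assert self.ready()
            ops = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
            return self.target, ops[self.opcode](self.params["A"], self.params["B"])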
Data flow machine for data driven computing
Davidson, G.S.; Grafe, V.G.
1988-07-22
A data flow computer and method of computing is disclosed which utilizes a data driven processor node architecture. The apparatus in a preferred embodiment includes a plurality of First-In-First-Out (FIFO) registers, a plurality of related data flow memories, and a processor. The processor makes the necessary calculations and includes a control unit to generate signals to enable the appropriate FIFO register receiving the result. In a particular embodiment, there are three FIFO registers per node: an input FIFO register to receive input information from an outside source and provide it to the data flow memories; an output FIFO register to provide output information from the processor to an outside recipient; and an internal FIFO register to provide information from the processor back to the data flow memories. The data flow memories are comprised of four commonly addressed memories. A parameter memory holds the A and B parameters used in the calculations; an opcode memory holds the instruction; a target memory holds the output address; and a tag memory contains status bits for each parameter. One status bit indicates whether the corresponding parameter is in the parameter memory, and one status bit indicates whether the stored information in the corresponding data parameter is to be reused. The tag memory outputs a "fire" signal (signal R VALID) when all of the necessary information has been stored in the data flow memories, and thus when the instruction is ready to be fired to the processor. 11 figs.
Data flow machine for data driven computing
Davidson, George S.; Grafe, Victor G.
1995-01-01
A data flow computer and method of computing is disclosed which utilizes a data driven processor node architecture. The apparatus in a preferred embodiment includes a plurality of First-In-First-Out (FIFO) registers, a plurality of related data flow memories, and a processor. The processor makes the necessary calculations and includes a control unit to generate signals to enable the appropriate FIFO register receiving the result. In a particular embodiment, there are three FIFO registers per node: an input FIFO register to receive input information from an outside source and provide it to the data flow memories; an output FIFO register to provide output information from the processor to an outside recipient; and an internal FIFO register to provide information from the processor back to the data flow memories. The data flow memories are comprised of four commonly addressed memories. A parameter memory holds the A and B parameters used in the calculations; an opcode memory holds the instruction; a target memory holds the output address; and a tag memory contains status bits for each parameter. One status bit indicates whether the corresponding parameter is in the parameter memory, and one status bit indicates whether the stored information in the corresponding data parameter is to be reused. The tag memory outputs a "fire" signal (signal R VALID) when all of the necessary information has been stored in the data flow memories, and thus when the instruction is ready to be fired to the processor.
Direct match data flow machine apparatus and process for data driven computing
Davidson, George S.; Grafe, Victor Gerald
1997-01-01
A data flow computer and method of computing is disclosed which utilizes a data driven processor node architecture. The apparatus in a preferred embodiment includes a plurality of First-In-First-Out (FIFO) registers, a plurality of related data flow memories, and a processor. The processor makes the necessary calculations and includes a control unit to generate signals to enable the appropriate FIFO register receiving the result. In a particular embodiment, there are three FIFO registers per node: an input FIFO register to receive input information from an outside source and provide it to the data flow memories; an output FIFO register to provide output information from the processor to an outside recipient; and an internal FIFO register to provide information from the processor back to the data flow memories. The data flow memories are comprised of four commonly addressed memories. A parameter memory holds the A and B parameters used in the calculations; an opcode memory holds the instruction; a target memory holds the output address; and a tag memory contains status bits for each parameter. One status bit indicates whether the corresponding parameter is in the parameter memory, and one status bit indicates whether the stored information in the corresponding data parameter is to be reused. The tag memory outputs a "fire" signal (signal R VALID) when all of the necessary information has been stored in the data flow memories, and thus when the instruction is ready to be fired to the processor.
Direct match data flow memory for data driven computing
Davidson, George S.; Grafe, Victor Gerald
1997-01-01
A data flow computer and method of computing is disclosed which utilizes a data driven processor node architecture. The apparatus in a preferred embodiment includes a plurality of First-In-First-Out (FIFO) registers, a plurality of related data flow memories, and a processor. The processor makes the necessary calculations and includes a control unit to generate signals to enable the appropriate FIFO register receiving the result. In a particular embodiment, there are three FIFO registers per node: an input FIFO register to receive input information from an outside source and provide it to the data flow memories; an output FIFO register to provide output information from the processor to an outside recipient; and an internal FIFO register to provide information from the processor back to the data flow memories. The data flow memories are comprised of four commonly addressed memories. A parameter memory holds the A and B parameters used in the calculations; an opcode memory holds the instruction; a target memory holds the output address; and a tag memory contains status bits for each parameter. One status bit indicates whether the corresponding parameter is in the parameter memory, and one status bit indicates whether the stored information in the corresponding data parameter is to be reused. The tag memory outputs a "fire" signal (signal R VALID) when all of the necessary information has been stored in the data flow memories, and thus when the instruction is ready to be fired to the processor.
Direct match data flow memory for data driven computing
Davidson, G.S.; Grafe, V.G.
1997-10-07
A data flow computer and method of computing is disclosed which utilizes a data driven processor node architecture. The apparatus in a preferred embodiment includes a plurality of First-In-First-Out (FIFO) registers, a plurality of related data flow memories, and a processor. The processor makes the necessary calculations and includes a control unit to generate signals to enable the appropriate FIFO register receiving the result. In a particular embodiment, there are three FIFO registers per node: an input FIFO register to receive input information from an outside source and provide it to the data flow memories; an output FIFO register to provide output information from the processor to an outside recipient; and an internal FIFO register to provide information from the processor back to the data flow memories. The data flow memories are comprised of four commonly addressed memories. A parameter memory holds the A and B parameters used in the calculations; an opcode memory holds the instruction; a target memory holds the output address; and a tag memory contains status bits for each parameter. One status bit indicates whether the corresponding parameter is in the parameter memory, and one status bit indicates whether the stored information in the corresponding data parameter is to be reused. The tag memory outputs a "fire" signal (signal R VALID) when all of the necessary information has been stored in the data flow memories, and thus when the instruction is ready to be fired to the processor. 11 figs.
Laser-sodium interaction for the polychromatic laser guide star project
NASA Astrophysics Data System (ADS)
Bellanger, Veronique; Petit, Alain D.
2002-02-01
We developed a code, BEACON, aimed at determining the laser parameters leading to the maximum return flux of photons at 0.33 micrometers for a polychromatic sodium laser guide star. The software relies upon a full 48-level collisionless and magnetic-field-free density-matrix description of the hyperfine structure of Na and includes Doppler broadening and Zeeman degeneracy. Experimental validation of BEACON was conducted on the SILVA facilities and is also discussed in this paper.
Calculating Mass Diffusion in High-Pressure Binary Fluids
NASA Technical Reports Server (NTRS)
Bellan, Josette; Harstad, Kenneth
2004-01-01
A comprehensive mathematical model of mass diffusion has been developed for binary fluids at high pressures, including critical and supercritical pressures. Heretofore, diverse expressions, valid for limited parameter ranges, have been used to correlate high-pressure binary mass-diffusion-coefficient data. This model will likely be especially useful in the computational simulation and analysis of combustion phenomena in diesel engines, gas turbines, and liquid rocket engines, wherein mass diffusion at high pressure plays a major role.
Unified Description of Scattering and Propagation FY15 Annual Report
2015-09-30
the Texas coast. For both cases a conditional posterior probability distribution (PPD) is formed for a parameter space that includes both geoacoustic...for this second application of ME. For each application of ME it is important to note that a new likelihood function, and thus PPD, is computed. One...the 50-700 Hz band. These data offered a means by which the results of using the ship radiated noise could be partially validated. The conditional PPD
Code of Federal Regulations, 2013 CFR
2013-07-01
... NEW STATIONARY SOURCES Standards of Performance for Stationary Combustion Turbines Performance Tests... of NOX emission controls in accordance with § 60.4340, the appropriate parameters must be...
Code of Federal Regulations, 2012 CFR
2012-07-01
... NEW STATIONARY SOURCES Standards of Performance for Stationary Combustion Turbines Performance Tests... of NOX emission controls in accordance with § 60.4340, the appropriate parameters must be...
Code of Federal Regulations, 2014 CFR
2014-07-01
... NEW STATIONARY SOURCES Standards of Performance for Stationary Combustion Turbines Performance Tests... of NOX emission controls in accordance with § 60.4340, the appropriate parameters must be...
Code of Federal Regulations, 2010 CFR
2010-07-01
... NEW STATIONARY SOURCES Standards of Performance for Stationary Combustion Turbines Performance Tests... of NOX emission controls in accordance with § 60.4340, the appropriate parameters must be...
Hess, Cornelius; Sydow, Konrad; Kueting, Theresa; Kraemer, Michael; Maas, Alexandra
2018-02-01
The requirement for correct evaluation of forensic toxicological results in daily routine work and scientific studies is reliable analytical data based on validated methods. Validation of a method gives the analyst tools to estimate the efficacy and reliability of the analytical method. Without validation, data might be contested in court and lead to unjustified legal consequences for a defendant. Therefore, new analytical methods to be used in forensic toxicology require careful method development and validation of the final method. Until now, there have been no publications on the validation of chromatographic mass spectrometric methods for the detection of endogenous substances, although endogenous analytes can be important in forensic toxicology (alcohol consumption markers, congener alcohols, gamma-hydroxybutyric acid, human insulin and C-peptide, creatinine, postmortem clinical parameters). For these analytes, conventional validation instructions cannot be followed completely. In this paper, important practical considerations in analytical method validation for endogenous substances are discussed, which may be used as guidance for scientists wishing to develop and validate analytical methods for analytes produced naturally in the human body. In particular, the validation parameters calibration model, analytical limits, accuracy (bias and precision), and matrix effects and recovery have to be approached differently. Highest attention should be paid to selectivity experiments. Copyright © 2017 Elsevier B.V. All rights reserved.
Chen, Yung-Chuan; Tu, Yuan-Kun; Zhuang, Jun-Yan; Tsai, Yi-Jung; Yen, Cheng-Yo; Hsiao, Chih-Kun
2017-11-01
A three-dimensional dynamic elastoplastic finite element model was constructed and experimentally validated, and was used to investigate the parameters which influence bone temperature during drilling, including the drill speed, feeding force, drill bit diameter, and bone density. Results showed that the proposed three-dimensional dynamic elastoplastic finite element model can effectively simulate the temperature elevation during bone drilling. The bone temperature rise decreased with an increase in feeding force and drill speed; however, it increased with the drill bit diameter and bone density. The temperature distribution is significantly affected by the drilling duration: a lower drilling speed reduced the exposure duration and decreased the extent of the thermally affected zone. The constructed model could be applied for analyzing the influential parameters during bone drilling to reduce the risk of thermal necrosis. It may provide important information for the design of drill bits and surgical drilling tools.
Measuring multielectron beam imaging fidelity with a signal-to-noise ratio analysis
NASA Astrophysics Data System (ADS)
Mukhtar, Maseeh; Bunday, Benjamin D.; Quoi, Kathy; Malloy, Matt; Thiel, Brad
2016-07-01
Java Monte Carlo Simulator for Secondary Electrons (JMONSEL) simulations are used to generate expected imaging responses of chosen test cases of patterns and defects, with the ability to vary parameters for beam energy, spot size, pixel size, and/or defect material and form factor. The patterns are representative of the design rules for an aggressively scaled FinFET-type design. With these simulated images and the resulting shot noise, a signal-to-noise framework is developed which relates to defect detection probabilities. Additionally, with this infrastructure, the effects of detection-chain noise and frequency-dependent system response can be assessed, allowing for targeting of the best recipe parameters for multielectron beam inspection validation experiments. Ultimately, these results should lead to insights into how such parameters will impact tool design, including necessary doses for defect detection and estimations of scanning speeds for achieving high throughput in high-volume manufacturing.
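As a rough sketch of how a signal-to-noise figure maps to a detection probability (the paper's exact framework may differ), one can assume shot-noise-limited, approximately Gaussian statistics and a threshold placed midway between the defect and background means:

```python
import numpy as np
from scipy.stats import norm

def detection_probability(sig_defect, sig_background, electrons_per_pixel):
    """Probability that a defect pixel crosses a threshold placed midway
    between the defect and background mean signals, assuming shot-noise
    limited, approximately Gaussian statistics (sigma = sqrt(N electrons))."""
    sigma = np.sqrt(electrons_per_pixel)
    snr = abs(sig_defect - sig_background) / sigma
    return norm.cdf(snr / 2.0)

# e.g. a defect shifting the mean signal by 40 electrons at a dose of
# 100 electrons/pixel is detected with probability ~0.977
print(detection_probability(140.0, 100.0, 100.0))
```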
SEE rate estimation based on diffusion approximation of charge collection
NASA Astrophysics Data System (ADS)
Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A.
2018-03-01
The integral rectangular parallelepiped (IRPP) method remains the main approach to single event rate (SER) prediction for aerospace systems, despite the growing number of issues impairing the method's validity when applied to scaled technology nodes. One such issue is the uncertainty of parameter extraction in the IRPP method, which can lead to a spread of several orders of magnitude in the subsequently calculated SER. This paper presents an alternative approach to SER estimation based on a diffusion approximation of the charge collection by an IC element and a geometrical interpretation of the SEE cross-section. In contrast to the IRPP method, the proposed model includes only two parameters, which are uniquely determined from experimental data for normal-incidence irradiation at an ion accelerator. This approach eliminates the need for arbitrary decisions during parameter extraction and thus greatly simplifies the calculation procedure and increases the robustness of the prediction.
Deep learning for neuroimaging: a validation study.
Plis, Sergey M; Hjelm, Devon R; Salakhutdinov, Ruslan; Allen, Elena A; Bockholt, Henry J; Long, Jeffrey D; Johnson, Hans J; Paulsen, Jane S; Turner, Jessica A; Calhoun, Vince D
2014-01-01
Deep learning methods have recently made notable advances in the tasks of classification and representation learning. These tasks are important for brain imaging and neuroscience discovery, making the methods attractive for porting to a neuroimager's toolbox. Success of these methods is, in part, explained by the flexibility of deep learning models. However, this flexibility makes the process of porting to new areas a difficult parameter optimization problem. In this work we demonstrate our results (and feasible parameter ranges) in application of deep learning methods to structural and functional brain imaging data. These methods include deep belief networks and their building block the restricted Boltzmann machine. We also describe a novel constraint-based approach to visualizing high dimensional data. We use it to analyze the effect of parameter choices on data transformations. Our results show that deep learning methods are able to learn physiologically important representations and detect latent relations in neuroimaging data.
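For readers unfamiliar with the building block mentioned above, here is a minimal NumPy sketch of a restricted Boltzmann machine trained with one step of contrastive divergence (CD-1). It is a generic illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class RBM:
    """Bernoulli-Bernoulli restricted Boltzmann machine with CD-1 updates."""

    def __init__(self, n_vis, n_hid, lr=0.05):
        self.W = 0.01 * rng.standard_normal((n_vis, n_hid))
        self.b = np.zeros(n_vis)   # visible biases
        self.c = np.zeros(n_hid)   # hidden biases
        self.lr = lr

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_step(self, v0):
        """One contrastive-divergence (CD-1) update on a batch v0 in {0,1}."""
        ph0 = self._sigmoid(v0 @ self.W + self.c)           # P(h=1 | v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)    # sampled hidden units
        pv1 = self._sigmoid(h0 @ self.W.T + self.b)         # reconstruction
        ph1 = self._sigmoid(pv1 @ self.W + self.c)
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / n  # positive minus negative phase
        self.b += self.lr * (v0 - pv1).mean(axis=0)
        self.c += self.lr * (ph0 - ph1).mean(axis=0)
```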
TU-AB-BRC-05: Creation of a Monte Carlo TrueBeam Model by Reproducing Varian Phase Space Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
O’Grady, K; Davis, S; Seuntjens, J
Purpose: To create a Varian TrueBeam 6 MV FFF Monte Carlo model using BEAMnrc/EGSnrc that accurately reproduces the Varian representative dataset, followed by tuning the model's source parameters to accurately reproduce in-house measurements. Methods: A BEAMnrc TrueBeam model for 6 MV FFF has been created by modifying a validated 6 MV Varian CL21EX model. Geometric dimensions and materials were adjusted in a trial-and-error approach to match the fluence and spectra of TrueBeam phase spaces output by the Varian VirtuaLinac. Once the model's phase space matched Varian's counterpart using the default source parameters, it was validated to match 10 × 10 cm² Varian representative data obtained with the IBA CC13. The source parameters were then tuned to match in-house 5 × 5 cm² PTW microDiamond measurements. All dose-to-water simulations included detector models to include the effects of volume averaging and the non-water equivalence of the chamber materials, allowing for more accurate source parameter selection. Results: The Varian phase space spectra and fluence were matched with excellent agreement. The in-house model's PDD agreement with CC13 TrueBeam representative data was within 0.9% local percent difference beyond the first 3 mm. Profile agreement at 10 cm depth was within 0.9% local percent difference and 1.3 mm distance-to-agreement in the central axis and penumbra regions, respectively. Once the source parameters were tuned, PDD agreement with microDiamond measurements was within 0.9% local percent difference beyond 2 mm. The microDiamond profile agreement at 10 cm depth was within 0.6% local percent difference and 0.4 mm distance-to-agreement in the central axis and penumbra regions, respectively. Conclusion: An accurate in-house Monte Carlo model of the Varian TrueBeam was achieved independently of the Varian phase space solution and was tuned to in-house measurements. KO acknowledges partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council (Grant number: 432290).
Nam, J G; Kang, K M; Choi, S H; Lim, W H; Yoo, R-E; Kim, J-H; Yun, T J; Sohn, C-H
2017-12-01
Glioblastoma is the most common primary brain malignancy, and differentiation of true progression from pseudoprogression is clinically important. Our purpose was to compare the diagnostic performance of dynamic contrast-enhanced pharmacokinetic parameters using the fixed T1 and measured T1 in differentiating true progression from pseudoprogression of glioblastoma after chemoradiation with temozolomide. This retrospective study included 37 patients with histopathologically confirmed glioblastoma with new enhancing lesions after temozolomide chemoradiation, defined as true progression (n = 15) or pseudoprogression (n = 22). Dynamic contrast-enhanced pharmacokinetic parameters, including the volume transfer constant, the rate transfer constant, the blood plasma volume per unit volume, and the extravascular extracellular space per unit volume, were calculated using both the fixed T1 of 1000 ms and the T1 measured using the multiple flip-angle method. Intra- and interobserver reproducibility was assessed using the intraclass correlation coefficient. Dynamic contrast-enhanced pharmacokinetic parameters were compared between the 2 groups using univariate and multivariate analysis. The diagnostic performance was evaluated by receiver operating characteristic analysis and leave-one-out cross validation. The intraclass correlation coefficients of all the parameters from both T1 values were fair to excellent (0.689-0.999). The volume transfer constant and rate transfer constant from the fixed T1 were significantly higher in patients with true progression (P = .048 and .010, respectively). Multivariate analysis revealed that the rate transfer constant from the fixed T1 was the only independent variable (OR, 1.77 × 10^5) and showed substantial diagnostic power on receiver operating characteristic analysis (area under the curve, 0.752; P = .002). The sensitivity and specificity on leave-one-out cross validation were 73.3% (11/15) and 59.1% (13/22), respectively. The dynamic contrast-enhanced parameter of the rate transfer constant from the fixed T1 acted as a preferable marker to differentiate true progression from pseudoprogression. © 2017 by American Journal of Neuroradiology.
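The pharmacokinetic parameters named above are typically obtained by fitting a compartment model to the tissue concentration curve. Below is a minimal sketch of a standard Tofts fit on synthetic data; the fixed-T1 versus measured-T1 choice in the paper enters earlier, when converting MR signal to concentration, not in the fit itself.

```python
import numpy as np
from scipy.optimize import curve_fit

def tofts_ct(t, ktrans, kep, cp):
    """Standard Tofts model: Ct(t) = Ktrans * (Cp convolved with exp(-kep*t))."""
    dt = t[1] - t[0]
    return ktrans * np.convolve(cp, np.exp(-kep * t))[: len(t)] * dt

# Synthetic example: a toy bi-exponential arterial input and a noisy tissue curve.
t = np.linspace(0.0, 5.0, 200)                  # minutes
cp = 5.0 * (np.exp(-0.8 * t) - np.exp(-4.0 * t))  # hypothetical AIF, mM
ct = tofts_ct(t, 0.25, 0.9, cp) + 0.01 * np.random.default_rng(0).standard_normal(t.size)

popt, _ = curve_fit(lambda tt, kt, kp: tofts_ct(tt, kt, kp, cp), t, ct,
                    p0=[0.1, 0.5], bounds=(0, [5.0, 10.0]))
ktrans, kep = popt
ve = ktrans / kep   # extravascular extracellular volume fraction
```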
Cloud and Thermodynamic Parameters Retrieved from Satellite Ultraspectral Infrared Measurements
NASA Technical Reports Server (NTRS)
Zhou, Daniel K.; Smith, William L.; Larar, Allen M.; Liu, Xu; Taylor, Jonathan P.; Schluessel, Peter; Strow, L. Larrabee; Mango, Stephen A.
2008-01-01
Atmospheric-thermodynamic parameters and surface properties are basic meteorological parameters for weather forecasting. A physical geophysical parameter retrieval scheme dealing with cloudy and cloud-free radiance observed with satellite ultraspectral infrared sounders has been developed and applied to the Infrared Atmospheric Sounding Interferometer (IASI) and the Atmospheric InfraRed Sounder (AIRS). The retrieved parameters presented herein are from radiance data gathered during the Joint Airborne IASI Validation Experiment (JAIVEx). JAIVEx provided intensive aircraft observations obtained from airborne Fourier Transform Spectrometer (FTS) systems, in-situ measurements, and dedicated dropsonde and radiosonde measurements for the validation of the IASI products. Here, IASI atmospheric profile retrievals are compared with those obtained from dedicated dropsondes, radiosondes, and the airborne FTS system. The IASI examples presented here demonstrate the ability to retrieve fine-scale horizontal features with high vertical resolution from satellite ultraspectral sounder radiance spectra.
A methodology for spectral wave model evaluation
NASA Astrophysics Data System (ADS)
Siqueira, S. A.; Edwards, K. L.; Rogers, W. E.
2017-12-01
Model evaluation is accomplished by comparing bulk parameters (e.g., significant wave height, energy period, and mean square slope (MSS)) calculated from the model energy spectra with those calculated from buoy energy spectra. Quality control of the observed data and choice of the frequency range from which the bulk parameters are calculated are critical steps in ensuring the validity of the model-data comparison. The compared frequency range of each observation and the analogous model output must be identical, and the optimal frequency range depends in part on the reliability of the observed spectra. National Data Buoy Center 3-m discus buoy spectra are unreliable above 0.3 Hz due to a non-optimal buoy response function correction. As such, the upper end of the spectrum should not be included when comparing a model to these data. Biofouling of Waverider buoys must be detected, as it can harm the hydrodynamic response of the buoy at high frequencies, thereby rendering the upper part of the spectrum unsuitable for comparison. An important consideration is that the intentional exclusion of high frequency energy from a validation due to data quality concerns (above) can have major implications for validation exercises, especially for parameters such as the third and fourth moments of the spectrum (related to Stokes drift and MSS, respectively); final conclusions can be strongly altered. We demonstrate this by comparing outcomes with and without the exclusion, in a case where a Waverider buoy is believed to be free of biofouling. Determination of the appropriate frequency range is not limited to the observed spectra. Model evaluation involves considering whether all relevant frequencies are included. Guidance to make this decision is based on analysis of observed spectra. Two model frequency lower limits were considered. Energy in the observed spectrum below the model lower limit was calculated for each. For locations where long swell is a component of the wave climate, omitting the energy in the frequency band between the two lower limits tested can lead to an incomplete characterization of model performance. This methodology was developed to aid in selecting a comparison frequency range that does not needlessly increase computational expense and does not exclude energy to the detriment of model performance analysis.
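A minimal sketch of the bulk-parameter computation over a restricted, trusted frequency band is given below. Deep-water moment formulas are assumed, and the band limits are placeholders rather than the paper's values.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def bulk_parameters(f, E, f_min=0.04, f_max=0.3):
    """Bulk parameters from a 1-D frequency spectrum E(f) [m^2/Hz],
    integrated only over the trusted band [f_min, f_max]."""
    band = (f >= f_min) & (f <= f_max)
    f, E = f[band], E[band]
    m0 = np.trapz(E, f)                    # zeroth spectral moment
    m_m1 = np.trapz(E / f, f)              # minus-first moment
    hs = 4.0 * np.sqrt(m0)                 # significant wave height
    te = m_m1 / m0                         # energy period
    # deep-water mean-square slope from the fourth frequency moment
    mss = (2.0 * np.pi) ** 4 / G ** 2 * np.trapz(f ** 4 * E, f)
    return hs, te, mss
```

Because MSS weights the spectrum by f^4, it is exactly the kind of parameter for which the choice of upper cutoff dominates the comparison.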
NASA Astrophysics Data System (ADS)
An, Li-sha; Liu, Chun-jiao; Liu, Ying-wen
2018-05-01
In the polysilicon chemical vapor deposition (CVD) reactor, many operating parameters interact in complex ways to affect the polysilicon output. Therefore, it is very important to address the coupling of multiple parameters and solve the optimization problem in a computationally efficient manner. Here, we adopted Response Surface Methodology (RSM) to analyze the complex coupling effects of different operating parameters on the silicon deposition rate (R) and further achieve effective optimization of the silicon CVD system. Based on finite numerical experiments, an accurate RSM regression model is obtained and applied to predict R for different operating parameters, including temperature (T), pressure (P), inlet velocity (V), and inlet mole fraction of H2 (M). An analysis of variance is conducted to assess the adequacy of the regression model and examine the statistical significance of each factor. Consequently, the optimum combination of operating parameters for the silicon CVD reactor is: T = 1400 K, P = 3.82 atm, V = 3.41 m/s, M = 0.91. The validation tests and optimum solution show that the results are in good agreement with those from the CFD model, and the deviations of the predicted values are less than 4.19%. This work provides theoretical guidance for operating the polysilicon CVD process.
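A toy sketch of the RSM workflow follows: fit a full second-order polynomial to numerical-experiment results by least squares, then maximize the fitted surface within bounds. The response function and bounds below are synthetic placeholders, not the paper's CFD data.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import minimize

def quadratic_design(X):
    """Full second-order design matrix: 1, x_i, x_i^2, and x_i*x_j terms."""
    k = X.shape[1]
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    return np.column_stack(cols)

# Hypothetical numerical experiments over (T, P, V, M) and a toy response.
rng = np.random.default_rng(1)
X = rng.uniform([1300, 1, 1, 0.7], [1500, 5, 5, 0.95], size=(30, 4))
r = 1e-3 * X[:, 0] + 0.1 * X[:, 1] - 0.01 * (X[:, 2] - 3) ** 2 + X[:, 3]

beta, *_ = np.linalg.lstsq(quadratic_design(X), r, rcond=None)
predict = lambda x: (quadratic_design(np.atleast_2d(x)) @ beta)[0]

# Maximize the fitted surface within the operating bounds.
res = minimize(lambda x: -predict(x), x0=X.mean(axis=0),
               bounds=[(1300, 1500), (1, 5), (1, 5), (0.7, 0.95)])
print("optimum settings:", res.x)
```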
NASA Astrophysics Data System (ADS)
Barforoush, M. S. M.; Saedodin, S.
2018-01-01
This article investigates the thermal performance of convective-radiative annular fins with a step reduction in local cross section (SRC). The thermal conductivity of the fin's material is assumed to be a linear function of temperature, and the heat transfer coefficient is assumed to be a power-law function of surface temperature. Moreover, nonzero convection and radiation sink temperatures are included in the mathematical model of the energy equation. The well-known differential transformation method (DTM) is used to derive the analytical solution. An exact analytical solution for a special case is derived to prove the validity of the results obtained from the DTM. The model provided here is a more realistic representation of SRC annular fins in actual engineering practice. The effects of many parameters on the temperature distribution of both the thin and thick sections of the fin are investigated, including the conduction-convection parameters, the conduction-radiation parameter, the sink temperature, and parameters specific to step fins such as the thickness parameter and the dimensionless parameter describing the position of the junction in the fin. It is believed that the results obtained will facilitate the design and performance evaluation of SRC annular fins.
NASA Technical Reports Server (NTRS)
Stutte, G. W.; Mackowiak, C. L.; Markwell, G. A.; Wheeler, R. M.; Sager, J. C.
1993-01-01
This KSC database is being made available to the scientific research community to facilitate the development of crop development models, to test monitoring and control strategies, and to identify environmental limitations in crop production systems. The KSC validated dataset consists of 17 parameters necessary to maintain bioregenerative life support functions: water purification, CO2 removal, O2 production, and biomass production. The data are available on disk as either a DATABASE SUBSET (one week of 5-minute data) or DATABASE SUMMARY (daily averages of parameters). Online access to the VALIDATED DATABASE will be made available to institutions with specific programmatic requirements. Availability and access to the KSC validated database are subject to approval and limitations implicit in KSC computer security policies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Y.; Wan, L.; Guo, Z. H.
Isothermal compression experiments on AZ80 magnesium alloy were conducted with a Gleeble thermo-mechanical simulator in order to quantitatively investigate the work hardening (WH), strain rate sensitivity (SRS) and temperature sensitivity (TS) during hot processing of magnesium alloys. The WH, SRS and TS were described by the Zener-Hollomon parameter (Z) coupling of deformation parameters. The relationships between WH rate and true strain as well as true stress were derived from the Kocks-Mecking dislocation model and validated by our measurement data. The slope defined through the linear relationship of WH rate and true stress was only related to the annihilation coefficient Ω. Obvious WH behavior could be exhibited at a higher Z condition. Furthermore, we have identified the correlation between the microstructural evolution, including β-Mg17Al12 precipitation, and the SRS and TS variations. Intensive dynamic recrystallization and a homogeneous distribution of β-Mg17Al12 precipitates resulted in a greater SRS coefficient at higher temperature. The deformation heat effect and β-Mg17Al12 precipitate content can be regarded as the major factors determining the TS behavior. At low Z conditions, the SRS becomes stronger, in contrast to the variation of TS. The optimum hot processing window was validated based on the established SRS and TS value distribution maps for AZ80 magnesium alloy.
[Hyperspectral remote sensing image classification based on SVM optimized by clonal selection].
Liu, Qing-Jie; Jing, Lin-Hai; Wang, Meng-Fei; Lin, Qi-Zhong
2013-03-01
Model selection for the support vector machine (SVM), involving selection of the kernel and margin parameter values, is usually time-consuming and greatly impacts both the training efficiency of the SVM model and the final classification accuracy of an SVM hyperspectral remote sensing image classifier. Firstly, based on combinatorial optimization theory and the cross-validation method, an artificial immune clonal selection algorithm is introduced for the optimal selection of the SVM kernel parameter and margin parameter C (the resulting classifier is denoted CSSVM) to improve the training efficiency of the SVM model. Then an experiment classifying AVIRIS data from the Indian Pines site in the USA was performed to test the novel CSSVM, with a traditional SVM classifier using the general grid-search cross-validation method (GSSVM) for comparison. Evaluation indexes, including SVM model training time, classification overall accuracy (OA) and the Kappa index, of both CSSVM and GSSVM were analyzed quantitatively. It is demonstrated that the OA of CSSVM on the test samples and the whole image are 85.1% and 81.58%, with the differences from GSSVM both within 0.08%; the Kappa indexes reach 0.8213 and 0.7728, with the differences from GSSVM both within 0.001; and the ratio of model training time of CSSVM to GSSVM is between 1/6 and 1/10. Therefore, CSSVM is a fast and accurate algorithm for hyperspectral image classification and is superior to GSSVM.
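For context, the grid-search baseline (GSSVM) can be sketched with scikit-learn as below; the clonal selection variant (CSSVM) replaces the exhaustive loop with an immune-inspired search over the same (C, gamma) space. The data here are a synthetic stand-in for the AVIRIS training pixels.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for labeled hyperspectral training pixels.
X, y = make_classification(n_samples=400, n_features=20, n_classes=3,
                           n_informative=10, random_state=0)

# Exhaustive grid search with 5-fold cross-validation (the GSSVM baseline).
grid = {"C": [1, 10, 100, 1000], "gamma": [1e-3, 1e-2, 1e-1, 1.0]}
search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```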
Glaude, Pierre Alexandre; Herbinet, Olivier; Bax, Sarah; Biet, Joffrey; Warth, Valérie; Battin-Leclerc, Frédérique
2013-01-01
The modeling of the oxidation of methyl esters was investigated and the specific chemistry, which is due to the presence of the ester group in this class of molecules, is described. New reactions and rate parameters were defined and included in the software EXGAS for the automatic generation of kinetic mechanisms. Models generated with EXGAS were successfully validated against data from the literature (oxidation of methyl hexanoate and methyl heptanoate in a jet-stirred reactor) and a new set of experimental results for methyl decanoate. The oxidation of this last species was investigated in a jet-stirred reactor at temperatures from 500 to 1100 K, including the negative temperature coefficient region, under stoichiometric conditions, at a pressure of 1.06 bar and for a residence time of 1.5 s: more than 30 reaction products, including olefins, unsaturated esters, and cyclic ethers, were quantified and successfully simulated. Flow rate analysis showed that reaction pathways for the oxidation of methyl esters in the low-temperature range are similar to those of alkanes. PMID:23710076
Shen, Qijun; Shan, Yanna; Hu, Zhengyu; Chen, Wenhui; Yang, Bing; Han, Jing; Huang, Yanfang; Xu, Wen; Feng, Zhan
2018-04-30
To objectively quantify intracranial hematoma (ICH) enlargement by analysing the image texture of head CT scans and to provide objective and quantitative imaging parameters for predicting early hematoma enlargement. We retrospectively studied 108 ICH patients with baseline non-contrast computed tomography (NCCT) and 24-h follow-up CT available. Image data were assessed by a chief radiologist and a resident radiologist. Consistency analysis between observers was tested. The patients were divided into a training set (75%) and a validation set (25%) by stratified sampling. Patients in the training set were dichotomized according to 24-h hematoma expansion ≥ 33%. Using the Laplacian of Gaussian bandpass filter, we chose different anatomical spatial domains ranging from fine to coarse texture to obtain a series of derived parameters (mean grayscale intensity, variance, uniformity) in order to quantify and evaluate all data. The parameters were externally validated on the validation set. Significant differences were found between the two groups of patients in variance at V1.0 and in uniformity at U1.0, U1.8 and U2.5. The intraclass correlation coefficients for the texture parameters were between 0.67 and 0.99. The area under the ROC curve between the two groups of ICH cases was between 0.77 and 0.92. The accuracy of CT texture analysis (CTTA) on the validation set was 0.59-0.85. NCCT texture analysis can objectively quantify the heterogeneity of ICH and independently predict early hematoma enlargement. • Heterogeneity is helpful in predicting ICH enlargement. • CTTA could play an important role in predicting early ICH enlargement. • After filtering, fine texture had the best diagnostic performance. • The histogram-based uniformity parameters can independently predict ICH enlargement. • CTTA is more objective, more comprehensive, and more independently operable than previous methods.
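A minimal sketch of the filtration-histogram texture features described above, using a Laplacian-of-Gaussian filter whose sigma sets the fine-to-coarse scale. Mapping the paper's V/U scale labels to a sigma in pixels is an assumption made here for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def texture_features(image, sigma, bins=64):
    """Mean, variance and histogram uniformity of a LoG-filtered CT slice.
    Small sigma highlights fine texture; large sigma highlights coarse texture."""
    filtered = gaussian_laplace(image.astype(float), sigma=sigma)
    mean = filtered.mean()
    variance = filtered.var()
    hist, _ = np.histogram(filtered, bins=bins)
    p = hist / hist.sum()
    uniformity = np.sum(p ** 2)   # high when the intensity histogram is homogeneous
    return mean, variance, uniformity
```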
Parameter Selection Methods in Inverse Problem Formulation
2010-11-03
clinical data and used for prediction and a model for the reaction of the cardiovascular system to an ergometric workload. Key Words: Parameter selection...model for HIV dynamics which has been successfully validated with clinical data and used for prediction and a model for the reaction of the...recently developed in-host model for HIV dynamics which has been successfully validated with clinical data and used for prediction [4, 8]; b) a global
Mentiplay, Benjamin F; Perraton, Luke G; Bower, Kelly J; Pua, Yong-Hao; McGaw, Rebekah; Heywood, Sophie; Clark, Ross A
2015-07-16
The revised Xbox One Kinect, also known as the Microsoft Kinect V2 for Windows, includes enhanced hardware which may improve its utility as a gait assessment tool. This study examined the concurrent validity and inter-day reliability of spatiotemporal and kinematic gait parameters estimated using the Kinect V2 automated body tracking system and a criterion reference three-dimensional motion analysis (3DMA) marker-based camera system. Thirty healthy adults performed two testing sessions consisting of comfortable and fast paced walking trials. Spatiotemporal outcome measures related to gait speed, speed variability, step length, width and time, foot swing velocity and medial-lateral and vertical pelvis displacement were examined. Kinematic outcome measures including ankle flexion, knee flexion and adduction and hip flexion were examined. To assess the agreement between the Kinect and 3DMA systems, Bland-Altman plots, relative agreement (Pearson's correlation) and overall agreement (concordance correlation coefficients) were determined. Reliability was assessed using intraclass correlation coefficients, Cronbach's alpha and the standard error of measurement. The spatiotemporal measurements had consistently excellent (r ≥ 0.75) concurrent validity, with the exception of modest validity for medial-lateral pelvis sway (r = 0.45-0.46) and fast-paced gait speed variability (r = 0.73). In contrast, kinematic validity was consistently poor to modest, with all associations between the systems weak (r < 0.50). In those measures with acceptable validity, the inter-day reliability was similar between systems. In conclusion, while the Kinect V2 body tracking may not accurately obtain lower body kinematic data, it shows great potential as a tool for measuring spatiotemporal aspects of gait. Copyright © 2015 Elsevier Ltd. All rights reserved.
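The overall agreement statistic used above can be computed with Lin's concordance correlation coefficient; a minimal sketch of the generic formula (not the authors' code) is:

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient: overall agreement between
    two measurement systems (here, Kinect vs. marker-based 3DMA)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    return 2 * sxy / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2)
```

Unlike Pearson's r, the CCC penalizes both location and scale shifts, which is why it is paired with Bland-Altman plots for method-comparison studies.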
Complex dynamics of a new 3D Lorenz-type autonomous chaotic system
NASA Astrophysics Data System (ADS)
Zhang, Fuchen; Liao, Xiaofeng; Zhang, Guangyun; Mu, Chunlai
2017-12-01
This paper investigates a new three-dimensional continuous quadratic autonomous chaotic system which is not topologically equivalent to the Lorenz system. The dynamical behaviours of this system are further investigated in detail, including the ultimate boundedness, the invariant sets and the global attraction domain according to Lyapunov stability theory of dynamical systems. The innovation of the paper lies in the fact that this paper not only proves this chaotic system is globally bounded for the parameters of this system but also gives a family of mathematical expressions of global exponential attractive sets with respect to the parameters of this system. To validate the ultimate bound estimation, numerical simulations are also investigated. Numerical simulations verify the effectiveness and feasibility of the theoretical scheme.
NASA Technical Reports Server (NTRS)
Lovelace, Uriel M.
1961-01-01
Reentry trajectories, including computations of convective and radiative stagnation-point heat transfer, have been calculated by using equations for a point-mass reentry vehicle entering the atmosphere of a rotating, oblate earth. Velocity was varied from 26,000 to 45,000 feet per second; reentry angle, from the skip limit to -20 deg; ballistic drag parameter, from 50 to 200. Initial altitude was 400,000 feet. Explicit results are presented in charts which were computed for an initial latitude of 38 deg N and an azimuth of 90 deg from north. A method is presented whereby these results may be made valid for a range of initial latitude and azimuth angles.
Measurement of the static and dynamic coefficients of a cross-type parachute in subsonic flow
NASA Technical Reports Server (NTRS)
Shpund, Zalman; Levin, Daniel
1991-01-01
An experimental parametric investigation of the aerodynamic qualities of cross-type parachutes was performed in a subsonic wind tunnel, using a new experimental technique. This investigation included the measurement of the static and dynamic aerodynamic coefficients, utilizing the measuring apparatus modified specifically for this type of testing. It is shown that the static aerodynamic coefficients of several configurations are in good agreement with available data, and assisted in validating the experimental technique employed. Two configuration parameters were varied in the static tests, the cord length and the canopy aspect ratio, with both parameters having a similar effect on the drag measurement, i.e., any increase in either of them increased the effective blocking area, and therefore the axial force.
Universal DC Hall conductivity of Jain's states ν = N/(2N ± 1)
NASA Astrophysics Data System (ADS)
Nguyen, Dung; Son, Dam
We present a Fermi-liquid theory of the fractional quantum Hall effect to describe Jain's states with filling fraction ν = N/(2N ± 1), that is, near half filling. We derive the DC Hall conductivity σ_H in closed form within the validity of our model. The results show that, without long-range interaction, the DC Hall conductivity has a universal form which does not depend on the details of the short-range Landau parameters F_n. When long-range interaction is included, the DC Hall conductivity depends on both the long-range interaction and the Landau parameters. We also analyze the relation between the DC Hall conductivity and the static structure factor. This work was supported by the Chicago MRSEC, which is funded by NSF through Grant DMR-1420709.
A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection
Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B
2015-01-01
Summary We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050
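A sketch of the core idea for standardized linear regression: for each permutation of the response, the smallest penalty that zeroes every LASSO coefficient has the closed form max|X'y|/n, and the selected penalty is a quantile of these null values. Details of the published procedure may differ.

```python
import numpy as np

def permutation_lambda(X, y, n_perm=100, quantile=0.75, rng=None):
    """Permutation selection sketch: for each permutation of y, compute the
    smallest LASSO penalty that zeroes out every coefficient, then take a
    quantile of those null penalties as the selected penalty."""
    rng = rng or np.random.default_rng(0)
    n = len(y)
    Xc = X - X.mean(axis=0)          # assumes columns are standardized
    lams = []
    for _ in range(n_perm):
        y_perm = rng.permutation(y)
        y_perm = y_perm - y_perm.mean()
        lams.append(np.abs(Xc.T @ y_perm).max() / n)  # lambda_max under the null
    return np.quantile(lams, quantile)
```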
Root zone water quality model (RZWQM2): Model use, calibration and validation
Ma, Liwang; Ahuja, Lajpat; Nolan, B.T.; Malone, Robert; Trout, Thomas; Qi, Z.
2012-01-01
The Root Zone Water Quality Model (RZWQM2) has been used widely for simulating agricultural management effects on crop production and soil and water quality. Although it is a one-dimensional model, it has many desirable features for the modeling community. This article outlines the principles of calibrating the model component by component with one or more datasets and validating the model with independent datasets. Users should consult the RZWQM2 user manual distributed along with the model and a more detailed protocol on how to calibrate RZWQM2 provided in a book chapter. Two case studies (or examples) are included in this article. One is from an irrigated maize study in Colorado to illustrate the use of field and laboratory measured soil hydraulic properties on simulated soil water and crop production. It also demonstrates the interaction between soil and plant parameters in simulated plant responses to water stresses. The other is from a maize-soybean rotation study in Iowa to show a manual calibration of the model for crop yield, soil water, and N leaching in tile-drained soils. Although the commonly used trial-and-error calibration method works well for experienced users, as shown in the second example, an automated calibration procedure is more objective, as shown in the first example. Furthermore, the incorporation of the Parameter Estimation Software (PEST) into RZWQM2 made the calibration of the model more efficient than a grid (ordered) search of model parameters. In addition, PEST provides sensitivity and uncertainty analyses that should help users in selecting the right parameters to calibrate.
Calibration of sea ice dynamic parameters in an ocean-sea ice model using an ensemble Kalman filter
NASA Astrophysics Data System (ADS)
Massonnet, F.; Goosse, H.; Fichefet, T.; Counillon, F.
2014-07-01
The choice of parameter values is crucial in the course of sea ice model development, since parameters largely affect the modeled mean sea ice state. Manual tuning of parameters will soon become impractical, as sea ice models will likely include more parameters to calibrate, leading to an exponential increase of the number of possible combinations to test. Objective and automatic methods for parameter calibration are thus progressively called on to replace the traditional heuristic, "trial-and-error" recipes. Here a method for calibration of parameters based on the ensemble Kalman filter is implemented, tested and validated in the ocean-sea ice model NEMO-LIM3. Three dynamic parameters are calibrated: the ice strength parameter P*, the ocean-sea ice drag parameter Cw, and the atmosphere-sea ice drag parameter Ca. In twin, perfect-model experiments, the default parameter values are retrieved within 1 year of simulation. Using 2007-2012 real sea ice drift data, the calibration of the ice strength parameter P* and the oceanic drag parameter Cw improves clearly the Arctic sea ice drift properties. It is found that the estimation of the atmospheric drag Ca is not necessary if P* and Cw are already estimated. The large reduction in the sea ice speed bias with calibrated parameters comes with a slight overestimation of the winter sea ice areal export through Fram Strait and a slight improvement in the sea ice thickness distribution. Overall, the estimation of parameters with the ensemble Kalman filter represents an encouraging alternative to manual tuning for ocean-sea ice models.
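A generic sketch of one ensemble Kalman filter analysis step for parameter estimation is given below, with the parameters treated as the state and perturbed observations used to preserve ensemble spread. The NEMO-LIM3 setup is far more involved, so this is illustrative only.

```python
import numpy as np

def enkf_parameter_update(theta, h, y_obs, obs_sigma, rng):
    """One EnKF analysis step for parameter calibration.
    theta: (n_ens, n_par) parameter ensemble; h: maps a parameter vector to
    predicted observations; y_obs: observation vector; obs_sigma: obs error std."""
    Y = np.array([h(t) for t in theta])          # predicted observations
    theta_a = theta - theta.mean(axis=0)         # parameter anomalies
    Y_a = Y - Y.mean(axis=0)                     # predicted-observation anomalies
    n_ens = theta.shape[0]
    P_ty = theta_a.T @ Y_a / (n_ens - 1)         # parameter-observation cross covariance
    P_yy = Y_a.T @ Y_a / (n_ens - 1) + obs_sigma ** 2 * np.eye(Y.shape[1])
    K = P_ty @ np.linalg.inv(P_yy)               # Kalman gain
    y_pert = y_obs + obs_sigma * rng.standard_normal(Y.shape)  # perturbed obs
    return theta + (y_pert - Y) @ K.T
```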
Structure-activity relationships for serotonin transporter and dopamine receptor selectivity.
Agatonovic-Kustrin, Snezana; Davies, Paul; Turner, Joseph V
2009-05-01
Antipsychotic medications have a diverse pharmacology with affinity for serotonergic, dopaminergic, adrenergic, histaminergic and cholinergic receptors. Their clinical use now also includes the treatment of mood disorders, thought to be mediated by serotonergic receptor activity. The aim of our study was to characterise the molecular properties of antipsychotic agents, and to develop a model that would indicate molecular specificity for the dopamine (D2) receptor and the serotonin (5-HT) transporter. Back-propagation artificial neural networks (ANNs) were trained on a dataset of 47 ligands categorically assigned antidepressant or antipsychotic utility. The structure of each compound was encoded with 63 calculated molecular descriptors. ANN parameters including hidden neurons and input descriptors were optimised based on sensitivity analyses, with optimum models containing between four and 14 descriptors. Predicted binding preferences were in excellent agreement with clinical antipsychotic or antidepressant utility. Validated models were further tested by use of an external prediction set of five drugs with unknown mechanism of action. The SAR models developed revealed the importance of simple molecular characteristics for differential binding to the D2 receptor and the 5-HT transporter. These included molecular size and shape, solubility parameters, hydrogen donating potential, electrostatic parameters, stereochemistry and presence of nitrogen. The developed models and techniques employed are expected to be useful in the rational design of future therapeutic agents.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-01
... Effectiveness of a Proposed Rule Change Relating to FINRA Trade Reporting Notice on Price Validation and Price... (``Notice'') that explains the price validation protocol of the FINRA trade reporting facilities and sets... trades by comparing the submitted price against price validation parameters established by FINRA...
Validation of a novel virtual reality simulator for robotic surgery.
Schreuder, Henk W R; Persson, Jan E U; Wolswijk, Richard G H; Ihse, Ingmar; Schijven, Marlies P; Verheijen, René H M
2014-01-01
With the increase in robotic-assisted laparoscopic surgery there is a concomitant rising demand for training methods. The objective was to establish face and construct validity of a novel virtual reality simulator (dV-Trainer, Mimic Technologies, Seattle, WA) for the use in training of robot-assisted surgery. A comparative cohort study was performed. Participants (n = 42) were divided into three groups according to their robotic experience. To determine construct validity, participants performed three different exercises twice. Performance parameters were measured. To determine face validity, participants filled in a questionnaire after completion of the exercises. Experts outperformed novices in most of the measured parameters. The most discriminative parameters were "time to complete" and "economy of motion" (P < 0.001). The training capacity of the simulator was rated 4.6 ± 0.5 SD on a 5-point Likert scale. The realism of the simulator in general, visual graphics, movements of instruments, interaction with objects, and the depth perception were all rated as being realistic. The simulator is considered to be a very useful training tool for residents and medical specialist starting with robotic surgery. Face and construct validity for the dV-Trainer could be established. The virtual reality simulator is a useful tool for training robotic surgery.
On the analysis of very small samples of Gaussian repeated measurements: an alternative approach.
Westgate, Philip M; Burchett, Woodrow W
2017-03-15
The analysis of very small samples of Gaussian repeated measurements can be challenging. First, due to a very small number of independent subjects contributing outcomes over time, statistical power can be quite small. Second, nuisance covariance parameters must be appropriately accounted for in the analysis in order to maintain the nominal test size. However, available statistical strategies that ensure valid statistical inference may lack power, whereas more powerful methods may have the potential for inflated test sizes. Therefore, we explore an alternative approach to the analysis of very small samples of Gaussian repeated measurements, with the goal of maintaining valid inference while also improving statistical power relative to other valid methods. This approach uses generalized estimating equations with a bias-corrected empirical covariance matrix that accounts for all small-sample aspects of nuisance correlation parameter estimation in order to maintain valid inference. Furthermore, the approach utilizes correlation selection strategies with the goal of choosing the working structure that will result in the greatest power. In our study, we show that when accurate modeling of the nuisance correlation structure impacts the efficiency of regression parameter estimation, this method can improve power relative to existing methods that yield valid inference. Copyright © 2017 John Wiley & Sons, Ltd.
Estimating Finite Rate of Population Increase for Sharks Based on Vital Parameters
Liu, Kwang-Ming; Chin, Chien-Pang; Chen, Chun-Hui; Chang, Jui-Han
2015-01-01
The vital parameter data for 62 stocks, covering 38 species, collected from the literature, including parameters of age, growth, and reproduction, were log-transformed and analyzed using multivariate analyses. Three groups were identified and empirical equations were developed for each to describe the relationships between the predicted finite rates of population increase (λ′) and the vital parameters: maximum age (Tmax), age at maturity (Tm), annual fecundity (f/Rc), size at birth (Lb), size at maturity (Lm), and asymptotic length (L∞). Group (1) included species with slow growth rates (0.034 yr⁻¹ < k < 0.103 yr⁻¹) and extended longevity (26 yr < Tmax < 81 yr), e.g., shortfin mako Isurus oxyrinchus, dusky shark Carcharhinus obscurus, etc.; Group (2) included species with fast growth rates (0.103 yr⁻¹ < k < 0.358 yr⁻¹) and short longevity (9 yr < Tmax < 26 yr), e.g., starspotted smoothhound Mustelus manazo, gray smoothhound M. californicus, etc.; Group (3) included late-maturing species (Lm/L∞ ≥ 0.75) with moderate longevity (Tmax < 29 yr), e.g., pelagic thresher Alopias pelagicus, sevengill shark Notorynchus cepedianus. An empirical equation for all data pooled was also developed. The λ′ values estimated by these empirical equations showed good agreement with those calculated using conventional demographic analysis. The predictability was further validated by an independent data set of three species. The empirical equations developed in this study not only reduce the uncertainties in estimation but also account for the difference in life history among groups. This method therefore provides an efficient and effective approach to the implementation of precautionary shark management measures. PMID:26576058
Validation of the Spatial Accuracy of the ExacTrac™ Adaptive Gating System
NASA Astrophysics Data System (ADS)
Twork, Gregory
Stereotactic body radiation therapy (SBRT) is a method of treatment that is used in extracranial locations, including the abdominal and thoracic cavities, as well as spinal and paraspinal locations. At the McGill University Health Centre, liver SBRT treatments include gating, which places the treatment beam on a duty cycle controlled by tracking of fiducial markers moving with the patient's breathing cycle. Respiratory gated treatments aim to spare normal tissue while delivering a dose properly to a moving target. The ExacTrac™ system (BrainLAB AG, Germany) is an image-guided radiotherapy system consisting of a combination of infra-red (IR) cameras and dual kilovoltage (kV) X-ray tubes. The IR system is used to track patient positioning and respiratory motion, while the kV X-rays are used to determine a positional shift based on internal anatomy or fiducial markers. In order to validate the system's ability to treat under gating conditions, each step of the SBRT process was evaluated quantitatively. Initially the system was tested under ideal static conditions, followed by a study including gated parameters. The uncertainties of the isocenters, positioning algorithm, planning computed tomography (CT) and four-dimensional CT (4DCT) scans, gating window size and tumor motion were evaluated for their contributions to the total uncertainty in treatment. The mechanical isocenter and 4DCT were found to be the largest sources of uncertainty. However, for tumors with large internal amplitudes (>2.25 cm) that are treated with large gating windows (>30%), the gating parameters can contribute more than 1.1 ± 1.8 mm.
Ruling out Legionella in community-acquired pneumonia.
Haubitz, Sebastian; Hitz, Fabienne; Graedel, Lena; Batschwaroff, Marcus; Wiemken, Timothy Lee; Peyrani, Paula; Ramirez, Julio A; Fux, Christoph Andreas; Mueller, Beat; Schuetz, Philipp
2014-10-01
Assessing the likelihood of Legionella sp. in community-acquired pneumonia is important because of differences in treatment regimens. Currently used antigen tests and culture have limited sensitivity with important time delays, making empirical broad-spectrum coverage necessary. Therefore, a score with 6 variables has recently been proposed. We sought to validate these parameters in an independent cohort. We analyzed adult patients with community-acquired pneumonia from a large multinational database (Community Acquired Pneumonia Organization) who were treated between 2001 and 2012 and had more than 4 of the 6 prespecified clinical variables available. Association and discrimination were assessed using logistic regression analysis and the area under the curve (AUC). Of 1939 included patients, the infectious cause was known in 594 (28.9%), including Streptococcus pneumoniae in 264 (13.6%) and Legionella sp. in 37 (1.9%). The proposed clinical predictors fever, cough, hyponatremia, lactate dehydrogenase, C-reactive protein, and platelet count were all associated or tended to be associated with Legionella cause. A logistic regression analysis including all these predictors showed excellent discrimination with an AUC of 0.91 (95% confidence interval, 0.87-0.94). The original dichotomized score showed good discrimination (AUC, 0.73; 95% confidence interval, 0.65-0.81) and a high negative predictive value of 99% for patients with fewer than 2 parameters present. With the use of a large independent patient sample from an international database, this analysis validates previously proposed clinical variables to accurately rule out Legionella sp., which may help to optimize initial empiric therapy. Copyright © 2014 Elsevier Inc. All rights reserved.
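A minimal sketch of the scoring approach on synthetic stand-in data (the CAPO cohort is not public): fit a logistic model on the six predictors and report the AUC.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the cohort: 6 predictors (fever, cough, hyponatremia,
# LDH, CRP, platelet count); y = 1 for Legionella, 0 for another cause.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 6))
coef = np.array([1.0, 0.5, 0.8, 0.7, 0.9, -0.6])  # hypothetical effect sizes
y = (X @ coef + rng.standard_normal(500) > 1.5).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"AUC: {auc:.2f}")
# The paper's dichotomized score instead counts how many of the 6 criteria are
# met and calls Legionella "unlikely" when fewer than 2 are present.
```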
NASA Astrophysics Data System (ADS)
Nepal, S.
2016-12-01
The spatial transferability of the model parameters of the process-oriented distributed J2000 hydrological model was investigated in two glaciated sub-catchments of the Koshi river basin in eastern Nepal. The basins had a high degree of similarity with respect to their static landscape features. The model was first calibrated (1986-1991) and validated (1992-1997) in the Dudh Koshi sub-catchment. The calibrated and validated model parameters were then transferred to the nearby Tamor catchment (2001-2009). A sensitivity and uncertainty analysis was carried out for both sub-catchments to discover the sensitivity range of the parameters in the two catchments. The model represented the overall hydrograph well in both sub-catchments, including baseflow and medium-range flows (rising and recession limbs). The efficiency results according to both the Nash-Sutcliffe coefficient and the coefficient of determination were above 0.84 in both cases. The sensitivity analysis showed that the same parameter was most sensitive for the Nash-Sutcliffe (ENS) and Log Nash-Sutcliffe (LNS) efficiencies in both catchments. However, there were some differences in sensitivity to ENS and LNS for moderately and weakly sensitive parameters, although the majority (13 out of 16 for ENS and 16 out of 16 for LNS) had a sensitivity response in a similar range. The generalized likelihood uncertainty estimation (GLUE) results suggest that most of the time the observed runoff is within the parameter uncertainty range, although occasionally the values lie outside the uncertainty range, especially during flood peaks, and more so in the Tamor. This may be due to the limited input data resulting from the small number of precipitation stations and the lack of representative stations in high-altitude areas, as well as to model structural uncertainty. The results indicate that transfer of the J2000 parameters to a neighboring catchment in the Himalayan region with similar physiographic landscape characteristics is viable. This suggests that the process-based J2000 model could be applied to ungauged catchments in the Himalayan region, which could provide important insights into hydrological system dynamics and much-needed information to support water resources planning and management.
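A compact sketch of the GLUE procedure referenced above, with a generic simulator and prior sampler supplied by the caller. Full GLUE weights the prediction quantiles by likelihood; that weighting is omitted here for brevity.

```python
import numpy as np

def glue_bounds(simulate, sample_prior, q_obs, n_samples=2000, threshold=0.6):
    """GLUE sketch: sample parameter sets from the prior, retain 'behavioral'
    sets whose Nash-Sutcliffe efficiency exceeds the threshold, and form
    5-95% prediction bounds from the behavioral simulations."""
    rng = np.random.default_rng(0)
    behavioral = []
    for _ in range(n_samples):
        q_sim = simulate(sample_prior(rng))          # simulated runoff series
        ns = 1.0 - np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)
        if ns > threshold:                           # behavioral parameter set
            behavioral.append(q_sim)
    behavioral = np.asarray(behavioral)
    return (np.quantile(behavioral, 0.05, axis=0),
            np.quantile(behavioral, 0.95, axis=0))
```

Observed flood peaks falling outside these bounds, as reported above, indicate either input-data limitations or model structural error rather than parameter uncertainty alone.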
Validation Study of Unnotched Charpy and Taylor-Anvil Impact Experiments using Kayenta
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamojjala, Krishna; Lacy, Jeffrey; Chu, Henry S.
2015-03-01
Validation of a single computational model with multiple available strain-to-failure fracture theories is presented through experimental tests and numerical simulations of the standardized unnotched Charpy and Taylor-anvil impact tests, both run using the same material model (Kayenta). Unnotched Charpy tests are performed on rolled homogeneous armor steel. The fracture patterns using Kayenta's various failure options, which include aleatory uncertainty and scale effects, are compared against the experiments. Other quantities of interest include the average value of the absorbed energy and the bend angle of the specimen. Taylor-anvil impact tests are performed on Ti6Al4V titanium alloy. The impact speeds of the specimen are 321 m/s and 393 m/s. The goal of the numerical work is to reproduce the damage patterns observed in the laboratory. For the numerical study, the Johnson-Cook failure model is used as the ductile fracture criterion, and aleatory uncertainty is applied to rate-dependence parameters to explore its effect on the fracture patterns.
NASA Technical Reports Server (NTRS)
Davis, Brian; Turner, Travis L.; Seelecke, Stefan
2008-01-01
An experimental and numerical investigation into the static and dynamic responses of shape memory alloy hybrid composite (SMAHC) beams is performed to provide quantitative validation of a recently commercialized numerical analysis/design tool for SMAHC structures. The SMAHC beam specimens consist of a composite matrix with embedded pre-strained SMA actuators, which act against the mechanical boundaries of the structure when thermally activated to adaptively stiffen the structure. Numerical results are produced from the numerical model as implemented into the commercial finite element code ABAQUS. A rigorous experimental investigation is undertaken to acquire high-fidelity measurements, including infrared thermography and projection moiré interferometry for full-field temperature and displacement measurements, respectively. High-fidelity numerical results are also obtained from the numerical model and include measured parameters, such as geometric imperfection and thermal load. Excellent agreement is achieved between the predicted and measured results of the static and dynamic thermomechanical response, thereby providing quantitative validation of the numerical tool.
Boer, H M T; Butler, S T; Stötzel, C; Te Pas, M F W; Veerkamp, R F; Woelders, H
2017-11-01
A recently developed mechanistic mathematical model of the bovine estrous cycle was parameterized to fit empirical data sets collected during one estrous cycle of 31 individual cows, with the main objective of further validating the model. The a priori criteria for validation were that (1) the resulting model can simulate the measured data correctly (i.e., goodness of fit), and (2) this is achieved without needing extreme, probably non-physiological parameter values. We used a least squares optimization procedure to identify parameter configurations for the mathematical model to fit the empirical in vivo measurements of follicle and corpus luteum sizes and the plasma concentrations of progesterone, estradiol, FSH and LH for each cow. The model was capable of accommodating normal variation in estrous cycle characteristics of individual cows. With the parameter sets estimated for the individual cows, the model behavior changed for 21 cows, with improved fit of the simulated output curves for 18 of these 21 cows. Moreover, the number of follicular waves was predicted correctly for 18 of the 25 two-wave and three-wave cows, without extreme parameter value changes. Estimation of specific parameters confirmed results of previous model simulations indicating that parameters involved in luteolytic signaling are very important for regulation of general estrous cycle characteristics, and are likely responsible for differences in estrous cycle characteristics between cows.
Global Aerosol Remote Sensing from MODIS
NASA Technical Reports Server (NTRS)
Ichoku, Charles; Kaufman, Yoram J.; Remer, Lorraine A.; Chu, D. Allen; Mattoo, Shana; Tanre, Didier; Levy, Robert; Li, Rong-Rong; Martins, Jose V.; Lau, William K. M. (Technical Monitor)
2002-01-01
The physical characteristics, composition, abundance, spatial distribution and dynamics of global aerosols are still very poorly known, and new data from satellite sensors have long been awaited to improve current understanding and to give a boost to the effort in future climate predictions. The derivation of aerosol parameters from the MODerate resolution Imaging Spectro-radiometer (MODIS) sensors aboard the Earth Observing System (EOS) Terra and Aqua polar-orbiting satellites ushers in a new era in aerosol remote sensing from space. Terra and Aqua were launched on December 18, 1999 and May 4, 2002, respectively, with daytime equator crossing times of approximately 10:30 am and 1:30 pm. Several aerosol parameters are retrieved at 10-km spatial resolution (level 2) from MODIS daytime data. The MODIS aerosol algorithm employs different approaches to retrieve parameters over land and ocean surfaces because of the inherent differences in the solar spectral radiance interaction with these surfaces. The parameters retrieved include: aerosol optical thickness (AOT) at 0.47, 0.55 and 0.66 micron wavelengths over land, and at 0.47, 0.55, 0.66, 0.87, 1.2, 1.6, and 2.1 micron over ocean; the Angstrom exponent over land and ocean; and the effective radii and the proportion of AOT contributed by small-mode aerosols over ocean. To ensure the quality of these parameters, a substantial part of the Terra-MODIS aerosol products was validated globally and regionally, based on cross correlation with corresponding parameters derived from ground-based measurements from AERONET (AErosol RObotic NETwork) sun photometers. Similar validation efforts are planned for the Aqua-MODIS aerosol products. The MODIS level 2 aerosol products are operationally aggregated to generate global daily, eight-day (weekly), and monthly products at one-degree spatial resolution (level 3). MODIS aerosol data are used for the detailed study of local, regional, and global aerosol concentration, distribution, and temporal dynamics, as well as for radiative forcing calculations. We show several examples of these results and comparisons with model output.
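One of the derived parameters mentioned above, the Angstrom exponent, follows directly from AOT at two wavelengths; the wavelength pair below is an arbitrary choice for illustration.

```python
import numpy as np

def angstrom_exponent(tau1, tau2, lam1=0.47, lam2=0.66):
    """Angstrom exponent from AOT at two wavelengths (in microns):
    alpha = -ln(tau1 / tau2) / ln(lam1 / lam2)."""
    return -np.log(tau1 / tau2) / np.log(lam1 / lam2)

# Fine-mode-dominated aerosol (stronger spectral dependence) gives larger alpha.
print(angstrom_exponent(0.30, 0.20))   # ~1.19
```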
Pasma, Jantsje H.; Assländer, Lorenz; van Kordelaar, Joost; de Kam, Digna; Mergner, Thomas; Schouten, Alfred C.
2018-01-01
The Independent Channel (IC) model is a commonly used linear balance control model in the frequency domain to analyze human balance control using system identification and parameter estimation. The IC model is a rudimentary and noise-free description of balance behavior in the frequency domain, where a stable model representation is not guaranteed. In this study, we conducted firstly time-domain simulations with added noise, and secondly robot experiments by implementing the IC model in a real-world robot (PostuRob II) to test the validity and stability of the model in the time domain and for real world situations. Balance behavior of seven healthy participants was measured during upright stance by applying pseudorandom continuous support surface rotations. System identification and parameter estimation were used to describe the balance behavior with the IC model in the frequency domain. The IC model with the estimated parameters from human experiments was implemented in Simulink for computer simulations including noise in the time domain and robot experiments using the humanoid robot PostuRob II. Again, system identification and parameter estimation were used to describe the simulated balance behavior. Time series, Frequency Response Functions, and estimated parameters from human experiments, computer simulations, and robot experiments were compared with each other. The computer simulations showed similar balance behavior and estimated control parameters compared to the human experiments, in the time and frequency domain. Also, the IC model was able to control the humanoid robot by keeping it upright, but showed small differences compared to the human experiments in the time and frequency domain, especially at high frequencies. We conclude that the IC model, a descriptive model in the frequency domain, can imitate human balance behavior also in the time domain, both in computer simulations with added noise and real world situations with a humanoid robot. This provides further evidence that the IC model is a valid description of human balance control. PMID:29615886
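As a rough illustration of the system-identification step described above, the following sketch estimates a frequency response function (FRF) from a pseudorandom stimulus to a sway-like output using the H1 cross-spectral estimator; the signals and the second-order filter standing in for the body are synthetic assumptions, not the IC model itself.

    # Minimal FRF estimation sketch with synthetic stimulus/response data.
    import numpy as np
    from scipy import signal

    fs = 100.0
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(1)
    stimulus = rng.standard_normal(t.size)       # pseudorandom rotation proxy
    b, a = signal.butter(2, 2.0, fs=fs)          # low-pass stands in for the body
    sway = signal.lfilter(b, a, stimulus) + 0.1 * rng.standard_normal(t.size)

    f, Pxy = signal.csd(stimulus, sway, fs=fs, nperseg=1024)
    _, Pxx = signal.welch(stimulus, fs=fs, nperseg=1024)
    frf = Pxy / Pxx                              # H1 estimator of the FRF
    print(np.abs(frf[:5]))                       # gain at the lowest frequencies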
Aerosol profiling during the large scale field campaign CINDI-2
NASA Astrophysics Data System (ADS)
Apituley, Arnoud; Roozendael, Michel Van; Richter, Andreas; Wagner, Thomas; Friess, Udo; Hendrick, Francois; Kreher, Karin; Tirpitz, Jan-Lukas
2018-04-01
For the validation of space-borne observations of NO2 and other trace gases from hyperspectral imagers, ground-based instruments based on the MAXDOAS technique are an excellent choice, since they rely on retrieval techniques similar to those used for the observations from orbit. To ensure proper traceability of the MAXDOAS observations, thorough validation and intercomparison are mandatory. Advanced MAXDOAS observation and retrieval techniques make it possible to infer the vertical structure of trace gases and aerosols. These techniques and their results need validation by, e.g., lidar techniques. For the proper understanding of the results from passive remote sensing techniques, independent observations are needed of the parameters that determine the light paths, i.e., in-situ observations of aerosol optical and microphysical properties; vertical profiles of aerosol optical properties from (Raman) lidar are particularly essential. The approach used in the CINDI-2 campaign held in Cabauw in 2016 is presented in this paper, and the results will be discussed in the presentation at the conference.
Validation of tungsten cross sections in the neutron energy region up to 100 keV
NASA Astrophysics Data System (ADS)
Pigni, Marco T.; Žerovnik, Gašper; Leal, Luiz C.; Trkov, Andrej
2017-09-01
Following a series of recent cross section evaluations on tungsten isotopes performed at Oak Ridge National Laboratory (ORNL), this paper presents the validation work carried out to test the performance of the evaluated cross sections based on lead-slowing-down (LSD) benchmarks conducted in Grenoble. ORNL completed the resonance parameter evaluation of four tungsten isotopes - 182,183,184,186W - in August 2014 and submitted it as an ENDF-compatible file to be part of the next release of the ENDF/B-VIII.0 nuclear data library. The evaluations were performed with support from the US Nuclear Criticality Safety Program in an effort to provide improved tungsten cross section and covariance data for criticality safety sensitivity analyses. The validation analysis based on the LSD benchmarks showed improved agreement with the experimental response when the ORNL tungsten evaluations were included in the ENDF/B-VII.1 library. Comparisons with the results obtained with the JEFF-3.2 nuclear data library are also discussed.
Assessment Methodology for Process Validation Lifecycle Stage 3A.
Sayeed-Desta, Naheed; Pazhayattil, Ajay Babu; Collins, Jordan; Chen, Shu; Ingram, Marzena; Spes, Jana
2017-07-01
The paper introduces evaluation methodologies and associated statistical approaches for process validation lifecycle Stage 3A. The assessment tools proposed can be applied to newly developed and launched small molecule as well as bio-pharma products, where substantial process and product knowledge has been gathered. The following elements may be included in Stage 3A: number of 3A batch determination; evaluation of critical material attributes, critical process parameters, critical quality attributes; in vivo in vitro correlation; estimation of inherent process variability (IPV) and PaCS index; process capability and quality dashboard (PCQd); and enhanced control strategy. US FDA guidance on Process Validation: General Principles and Practices, January 2011 encourages applying previous credible experience with suitably similar products and processes. A complete Stage 3A evaluation is a valuable resource for product development and future risk mitigation of similar products and processes. Elements of 3A assessment were developed to address industry and regulatory guidance requirements. The conclusions made provide sufficient information to make a scientific and risk-based decision on product robustness.
exprso: an R-package for the rapid implementation of machine learning algorithms.
Quinn, Thomas; Tylee, Daniel; Glatt, Stephen
2016-01-01
Machine learning plays a major role in many scientific investigations. However, non-expert programmers may struggle to implement the elaborate pipelines necessary to build highly accurate and generalizable models. We introduce exprso, a new R package that is an intuitive machine learning suite designed specifically for non-expert programmers. Built initially for the classification of high-dimensional data, exprso uses an object-oriented framework to encapsulate a number of common analytical methods into a series of interchangeable modules. These include modules for feature selection, classification, high-throughput parameter grid-searching, elaborate cross-validation schemes (e.g., Monte Carlo and nested cross-validation), ensemble classification, and prediction. exprso also supports multi-class classification (through the 1-vs-all generalization of binary classifiers) and the prediction of continuous outcomes.
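exprso is an R package; purely as an analogous illustration of nested cross-validation (a parameter grid-search inside an outer performance loop), here is a scikit-learn sketch in Python. The estimator, grid, and dataset are our choices, not exprso's API.

    # Nested cross-validation sketch: the inner loop tunes C, the outer loop
    # gives an approximately unbiased performance estimate.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV, cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, n_features=50, random_state=0)
    inner = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=3)   # grid-search
    outer_scores = cross_val_score(inner, X, y, cv=5)        # nested estimate
    print(outer_scores.mean())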
NASA Technical Reports Server (NTRS)
Perkins, Hugh Douglas
2010-01-01
In order to improve the understanding of particle vitiation effects in hypersonic propulsion test facilities, a quasi-one-dimensional numerical tool was developed to efficiently model reacting particle-gas flows over a wide range of conditions. Features of this code include gas-phase finite-rate kinetics, a global porous-particle combustion model, mass, momentum and energy interactions between phases, and subsonic and supersonic particle drag and heat transfer models. The basic capabilities of this tool were validated against available data or other validated codes. To demonstrate the capabilities of the code, a series of computations was performed for a model hypersonic propulsion test facility and scramjet. The parameters studied were simulated flight Mach number, particle size, particle mass fraction and particle material.
Non-linear identification of a squeeze-film damper
NASA Technical Reports Server (NTRS)
Stanway, Roger; Mottershead, John; Firoozian, Riaz
1987-01-01
Described is an experimental study to identify the damping laws associated with a squeeze-film vibration damper. This is achieved by using a non-linear filtering algorithm to process displacement responses of the damper ring to synchronous excitation and thus to estimate the parameters in an nth-power velocity model. The experimental facility is described in detail and a representative selection of results is included. The identified models are validated through the prediction of damper-ring orbits and comparison with observed responses.
NASA Technical Reports Server (NTRS)
Mcknight, R. L.
1985-01-01
Accomplishments are described for the second-year effort of a 3-year program to develop methodology for component specific modeling of aircraft engine hot section components (turbine blades, turbine vanes, and burner liners). These accomplishments include: (1) engine thermodynamic and mission models; (2) geometry model generators; (3) remeshing; (4) specialty 3-D inelastic structural analysis; (5) computationally efficient solvers; (6) adaptive solution strategies; (7) engine performance parameters/component response variables decomposition and synthesis; (8) integrated software architecture and development; and (9) validation cases for software developed.
NASA Technical Reports Server (NTRS)
Krantz, Timothy L.
2002-01-01
The Weibull distribution has been widely adopted for the statistical description and inference of fatigue data. This document provides user instructions, examples, and verification for software to analyze gear fatigue test data. The software was developed presuming the data are adequately modeled using a two-parameter Weibull distribution. The calculations are based on likelihood methods, and the approach taken is valid for data that include type I censoring. The software was verified by reproducing results published by others.
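A minimal sketch of the likelihood approach described, assuming a two-parameter Weibull with type I censoring; the data are invented and this is not the NASA software itself.

    # Maximum-likelihood fit of a two-parameter Weibull to fatigue lives.
    # Failures contribute the density f(t); suspensions (censored at the test
    # cutoff) contribute the survivor function S(t) = exp(-(t/lam)^k).
    import numpy as np
    from scipy.optimize import minimize

    lives = np.array([1.2, 1.9, 2.3, 3.1, 4.0, 5.5])   # observed failures
    censored = np.array([6.0, 6.0, 6.0])               # suspended at cutoff

    def neg_log_like(theta):
        k, lam = np.exp(theta)                         # enforce positivity
        ll = np.sum(np.log(k / lam) + (k - 1) * np.log(lives / lam)
                    - (lives / lam) ** k)
        ll += np.sum(-(censored / lam) ** k)           # survivors' term
        return -ll

    fit = minimize(neg_log_like, x0=np.log([1.0, 3.0]))
    shape, scale = np.exp(fit.x)
    print(shape, scale)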
Integrable Time-Dependent Quantum Hamiltonians
NASA Astrophysics Data System (ADS)
Sinitsyn, Nikolai A.; Yuzbashyan, Emil A.; Chernyak, Vladimir Y.; Patra, Aniket; Sun, Chen
2018-05-01
We formulate a set of conditions under which the nonstationary Schrödinger equation with a time-dependent Hamiltonian is exactly solvable analytically. The main requirement is the existence of a non-Abelian gauge field with zero curvature in the space of system parameters. Known solvable multistate Landau-Zener models satisfy these conditions. Our method provides a strategy to incorporate time dependence into various quantum integrable models while maintaining their integrability. We also validate some prior conjectures, including the solution of the driven generalized Tavis-Cummings model.
2011-10-11
developed a method for determining the structure (component logs and their 3D placement) of a LINCOLN LOG assembly from a single image from an uncalibrated... small a class of components. Moreover, we focus on determining the precise pose and structure of an assembly, including the 3D pose of each... medial axes are parallel to the work surface. Thus valid structures have logs on... [Figure caption residue: Fig. 1. The 3D geometric shape parameters of LINCOLN LOGS.]
Component-specific modeling. [jet engine hot section components
NASA Technical Reports Server (NTRS)
Mcknight, R. L.; Maffeo, R. J.; Tipton, M. T.; Weber, G.
1992-01-01
Accomplishments are described for a 3 year program to develop methodology for component-specific modeling of aircraft hot section components (turbine blades, turbine vanes, and burner liners). These accomplishments include: (1) engine thermodynamic and mission models, (2) geometry model generators, (3) remeshing, (4) specialty three-dimensional inelastic structural analysis, (5) computationally efficient solvers, (6) adaptive solution strategies, (7) engine performance parameters/component response variables decomposition and synthesis, (8) integrated software architecture and development, and (9) validation cases for software developed.
Density matrix Monte Carlo modeling of quantum cascade lasers
NASA Astrophysics Data System (ADS)
Jirauschek, Christian
2017-10-01
By including elements of the density matrix formalism, the semiclassical ensemble Monte Carlo method for carrier transport is extended to incorporate incoherent tunneling, known to play an important role in quantum cascade lasers (QCLs). In particular, this effect dominates electron transport across thick injection barriers, which are frequently used in terahertz QCL designs. A self-consistent model for quantum mechanical dephasing is implemented, eliminating the need for empirical simulation parameters. Our modeling approach is validated against available experimental data for different types of terahertz QCL designs.
Microeconomics of 300-mm process module control
NASA Astrophysics Data System (ADS)
Monahan, Kevin M.; Chatterjee, Arun K.; Falessi, Georges; Levy, Ady; Stoller, Meryl D.
2001-08-01
Simple microeconomic models that directly link metrology, yield, and profitability are rare or non-existent. In this work, we validate and apply such a model. Using a small number of input parameters, we explain current yield management practices in 200 mm factories. The model is then used to extrapolate requirements for 300 mm factories, including the impact of simultaneous technology transitions to 130 nm lithography and integrated metrology. To support our conclusions, we use examples relevant to factory-wide photo module control.
Parasitic Parameters Extraction for InP DHBT Based on EM Method and Validation up to H-Band
NASA Astrophysics Data System (ADS)
Li, Oupeng; Zhang, Yong; Wang, Lei; Xu, Ruimin; Cheng, Wei; Wang, Yuan; Lu, Haiyan
2017-05-01
This paper presents a small-signal model for InGaAs/InP double heterojunction bipolar transistors (DHBTs). Parasitic parameters of the access vias and electrode fingers are extracted by 3-D electromagnetic (EM) simulation. By analyzing the equivalent circuits of seven special structures and using the EM simulation results, the parasitic parameters are extracted systematically. Compared with a multi-port s-parameter EM model, the equivalent circuit model has a clear physical interpretation and avoids complex internal port settings. The model is validated on a 0.5 × 7 μm² InP DHBT up to 325 GHz. The model provides a good fit between measured and simulated multi-bias s-parameters across the full band. Finally, an H-band amplifier was designed and fabricated for further verification. The measured amplifier performance agrees well with the model prediction, indicating that the model has good accuracy in the submillimeter-wave band.
NASA Technical Reports Server (NTRS)
Perez, Jose G.; Parks, Russel A.; Lazor, Daniel R.
2012-01-01
The slosh dynamics of propellant tanks can be represented by an equivalent mass-pendulum-dashpot mechanical model. The parameters of this equivalent model, identified as slosh mechanical model parameters, are slosh frequency, slosh mass, and pendulum hinge point location. They can be obtained by both analysis and testing for discrete fill levels. Anti-slosh baffles are usually needed in propellant tanks to control the movement of the fluid inside the tank. Lateral slosh testing, involving both random excitation testing and free-decay testing, is performed to validate the slosh mechanical model parameters and the damping added to the fluid by the anti-slosh baffles. Traditional modal analysis procedures were used to extract the parameters from the experimental data. The test setup of sub-scale tanks will be described. A comparison between experimental results and analysis will be presented.
NASA Technical Reports Server (NTRS)
Perez, Jose G.; Parks, Russel A.; Lazor, Daniel R.
2012-01-01
The slosh dynamics of propellant tanks can be represented by an equivalent pendulum-mass mechanical model. The parameters of this equivalent model, identified as slosh model parameters, are slosh mass, slosh mass center of gravity, slosh frequency, and smooth-wall damping. They can be obtained by both analysis and testing for discrete fill heights. Anti-slosh baffles are usually needed in propellant tanks to control the movement of the fluid inside the tank. Lateral slosh testing, involving both random testing and free-decay testing, is performed to validate the slosh model parameters and the damping added to the fluid by the anti-slosh baffles. Traditional modal analysis procedures are used to extract the parameters from the experimental data. The test setup of sub-scale test articles of cylindrical and spherical shapes will be described. A comparison between experimental results and analysis will be presented.
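For context on the equivalent-pendulum representation used in both abstracts above, a one-line check of the standard relation between slosh frequency and pendulum arm length (a textbook pendulum relation, not the test data):

    # An equivalent slosh pendulum of frequency f has arm length
    # L = g / (2*pi*f)^2, from f = (1/2pi) * sqrt(g/L).
    import math

    def pendulum_length(f_hz, g=9.81):
        return g / (2 * math.pi * f_hz) ** 2

    print(pendulum_length(0.5))   # ~0.99 m for a 0.5 Hz slosh mode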
Yanai, Toshimasa; Matsuo, Akifumi; Maeda, Akira; Nakamoto, Hiroki; Mizutani, Mirai; Kanehisa, Hiroaki; Fukunaga, Tetsuo
2017-08-01
We developed a force measurement system in a soil-filled mound for measuring ground reaction forces (GRFs) acting on baseball pitchers and examined the reliability and validity of kinetic and kinematic parameters determined from the GRFs. Three soil-filled trays of dimensions that satisfied the official baseball rules were fixed onto 3 force platforms. Eight collegiate pitchers wearing baseball shoes with metal cleats were asked to throw 5 fastballs with maximum effort from the mound toward a catcher. The reliability of each parameter was determined for each subject as the coefficient of variation across the 5 pitches. The validity of the measurements was tested by comparing the outcomes either with the true values or the corresponding values computed from a motion capture system. The coefficients of variation in the repeated measurements of the peak forces ranged from 0.00 to 0.17, and were smaller for the pivot foot than the stride foot. The mean absolute errors in the impulses determined over the entire duration of the pitching motion were 5.3 N·s, 1.9 N·s, and 8.2 N·s for the X-, Y-, and Z-directions, respectively. These results suggest that the present method is reliable and valid for determining selected kinetic and kinematic parameters for analyzing pitching performance.
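The repeatability metric used above is the coefficient of variation across the five pitches; a direct computation with made-up peak-force values:

    # CV of a peak-force parameter across five pitches (numbers invented).
    import numpy as np

    peak_force = np.array([812.0, 798.0, 825.0, 804.0, 819.0])  # N
    cv = peak_force.std(ddof=1) / peak_force.mean()
    print(round(cv, 3))   # a small CV indicates a repeatable measurement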
Validation of Bayesian analysis of compartmental kinetic models in medical imaging.
Sitek, Arkadiusz; Li, Quanzheng; El Fakhri, Georges; Alpert, Nathaniel M
2016-10-01
Kinetic compartmental analysis is frequently used to compute physiologically relevant quantitative values from time series of images. In this paper, a new approach based on Bayesian analysis to obtain information about these parameters is presented and validated. The closed form of the posterior distribution of kinetic parameters is derived with a hierarchical prior to model the standard deviation of normally distributed noise. Markov chain Monte Carlo methods are used for numerical estimation of the posterior distribution. Computer simulations of the kinetics of F18-fluorodeoxyglucose (FDG) are used to demonstrate drawing statistical inferences about kinetic parameters and to validate the theory and implementation. Additionally, point estimates of kinetic parameters and the covariance of those estimates are determined using the classical non-linear least squares approach. Posteriors obtained using the methods proposed in this work are accurate, as no significant deviation from the expected shape of the posterior was found (one-sided P>0.08). It is demonstrated that the results obtained by the standard non-linear least-squares methods fail to provide accurate estimation of uncertainty for the same data set (P<0.0001). The results of this work validate the new methods using computer simulations of FDG kinetics. The results show that in situations where the classical approach fails to estimate uncertainty accurately, Bayesian estimation provides accurate information about the uncertainties in the parameters. Although a particular example of FDG kinetics was used in the paper, the methods can be extended to different pharmaceuticals and imaging modalities. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
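A minimal sketch of the Bayesian idea (not the paper's implementation): Metropolis sampling of the posterior of a one-compartment washout rate k from noisy time-activity data, with a flat prior. Everything here, including the mono-exponential model, is a synthetic assumption.

    # Metropolis MCMC over a single kinetic parameter with Gaussian noise.
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.linspace(0.5, 10, 20)
    k_true = 0.4
    y = np.exp(-k_true * t) + rng.normal(0, 0.02, t.size)

    def log_post(k):
        if k <= 0:
            return -np.inf                 # flat prior on k > 0
        resid = y - np.exp(-k * t)
        return -0.5 * np.sum(resid ** 2) / 0.02 ** 2

    samples, k = [], 1.0
    for _ in range(5000):
        prop = k + rng.normal(0, 0.05)     # random-walk proposal
        if np.log(rng.uniform()) < log_post(prop) - log_post(k):
            k = prop
        samples.append(k)
    print(np.mean(samples[1000:]), np.std(samples[1000:]))  # posterior summary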
NASA Astrophysics Data System (ADS)
Lim, Teik-Cheng; Dawson, James Alexander
2018-05-01
This study explores the close-range, short-range and long-range relationships between the parameters of the Morse and Buckingham potential energy functions. The results show that the close-range and short-range relationships are valid for bond compression and for very small changes in bond length, respectively, while the long-range relationship is valid for bond stretching. A wide-range relationship is proposed to combine the comparative advantages of the close-range, short-range and long-range parameter relationships. The wide-range relationship is useful for replacing the close-range, short-range and long-range parameter relationships, thereby preventing the undesired effects of potential energy jumps resulting from functional switching between the close-range, short-range and long-range interaction energies.
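For reference, the two functional forms being related are, in common parametrizations, the Morse potential D[(1 - e^(-a(r-r0)))^2 - 1] and the Buckingham potential A e^(-Br) - C/r^6; a sketch evaluating both over compression and stretching ranges (parameter values arbitrary, chosen only for illustration):

    # Evaluating the Morse and Buckingham forms side by side.
    import numpy as np

    def morse(r, D=1.0, a=2.0, r0=1.0):
        return D * ((1 - np.exp(-a * (r - r0))) ** 2 - 1)

    def buckingham(r, A=500.0, B=6.0, C=1.2):
        return A * np.exp(-B * r) - C / r ** 6

    r = np.linspace(0.8, 2.0, 7)          # spans compression and stretching
    print(np.round(morse(r), 3))
    print(np.round(buckingham(r), 3))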
Subbiah, Ishwaria M; Lei, Xiudong; Weinberg, Jeffrey S; Sulman, Erik P; Chavez-MacGregor, Mariana; Tripathy, Debu; Gupta, Rohan; Varma, Ankur; Chouhan, Jay; Guevarra, Richard P; Valero, Vicente; Gilbert, Mark R; Gonzalez-Angulo, Ana M
2015-07-10
Several indices have been developed to predict overall survival (OS) in patients with breast cancer with brain metastases, including the breast graded prognostic assessment (breast-GPA), comprising age, tumor subtype, and Karnofsky performance score. However, the number of brain metastases, a highly relevant clinical variable, is less often incorporated into the final model. We sought to validate the existing breast-GPA in an independent larger cohort and to refine it by integrating the number of brain metastases. Data were retrospectively gathered from a prospectively maintained institutional database. Patients with newly diagnosed brain metastases from 1996 to 2013 were identified. After validating the breast-GPA, multivariable Cox regression and recursive partitioning analysis led to the development of the modified breast-GPA. The performances of the breast-GPA and modified breast-GPA were compared using the concordance index. In our cohort of 1,552 patients, the breast-GPA was validated as a prognostic tool for OS (P < .001). In multivariable analysis of the breast-GPA and number of brain metastases (>3 v ≤3), both were independent predictors of OS. We therefore developed the modified breast-GPA integrating a fourth clinical parameter. Recursive partitioning analysis reinforced the prognostic significance of these four factors. Concordance indices were 0.78 (95% CI, 0.77 to 0.80) and 0.84 (95% CI, 0.83 to 0.85) for the breast-GPA and modified breast-GPA, respectively (P < .001). The modified breast-GPA incorporates four simple clinical parameters of high prognostic significance. This index has an immediate role in the clinic as a formative part of the clinician's discussion of prognosis and direction of care and as a potential patient selection tool for clinical trials. © 2015 by American Society of Clinical Oncology.
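The concordance index used to compare the two indices can be illustrated with a simple pairwise computation; this toy version ignores censoring, unlike the study's full survival analysis:

    # Pairwise C-index: fraction of usable pairs where the higher predicted
    # risk goes with the shorter survival time; ties in risk count 0.5.
    import itertools

    def c_index(times, risk):
        conc = valid = 0
        for (t1, r1), (t2, r2) in itertools.combinations(zip(times, risk), 2):
            if t1 == t2:
                continue
            valid += 1
            if r1 == r2:
                conc += 0.5
            elif (r1 > r2) == (t1 < t2):
                conc += 1
        return conc / valid

    print(c_index([10, 5, 8, 2], [0.2, 0.6, 0.3, 0.9]))  # 1.0: fully concordant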
Fernandez-Calle, Pilar; Pelaz, Sandra; Oliver, Paloma; Alcaide, Maria Jose; Gomez-Rioja, Ruben; Buno, Antonio; Iturzaeta, Jose Manuel
2013-01-01
Introduction: Technological innovation requires laboratories to ensure that modifications or incorporations of new techniques do not alter the quality of their results. In an ISO 15189-accredited laboratory, flexible scope accreditation facilitates the inclusion of these changes prior to accreditation body evaluation. A strategy for performing the validation of a biochemistry analyzer in an accredited laboratory with a flexible scope is shown. Materials and methods: A validation procedure including the evaluation of imprecision and bias of two Dimension Vista 1500 analysers was conducted. Comparability of patient results between one of them and the recently replaced Dimension RxL Max was evaluated. All studies followed the respective Clinical and Laboratory Standards Institute (CLSI) protocols. Thirty chemistry assays were studied. Coefficients of variation, percent bias and total error were calculated for all tests, and biological variation was used as the acceptance criterion. Quality control material and patient samples were used as test materials. Interchangeability of the results was established by processing forty patients' samples on both devices. Results: 27 of the 30 studied parameters met the allowable performance criteria. Sodium, chloride and magnesium did not fulfil the acceptance criteria. Evidence of interchangeability of patient results was obtained for all parameters except magnesium, NT-proBNP, cTroponin I and C-reactive protein. Conclusions: A laboratory with a well-structured and documented validation procedure can opt for a flexible scope of accreditation. In addition, performing these activities prior to use on patient samples may reveal technical issues which must be corrected to minimize their impact on patient results. PMID:23457769
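A sketch of the acceptance test described, combining imprecision and bias into total error and comparing it against a biological-variation limit; the formula TE = |bias| + 1.65 × CV is common laboratory practice rather than taken from this paper, and the numbers are invented:

    # Total-error check against an allowable limit derived from biological
    # variation (the limit value below is illustrative only).
    def total_error(cv_pct, bias_pct, z=1.65):
        return abs(bias_pct) + z * cv_pct

    tea_limit = 0.73   # % allowable total error for a tightly regulated analyte
    print(total_error(cv_pct=0.5, bias_pct=0.3) <= tea_limit)  # False -> fails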
Luo, Chuan; Li, Zhaofu; Li, Hengpeng; Chen, Xiaomin
2015-09-02
The application of hydrological and water quality models is an efficient approach to better understand the processes of environmental deterioration. This study evaluated the ability of the Annualized Agricultural Non-Point Source (AnnAGNPS) model to predict runoff, total nitrogen (TN) and total phosphorus (TP) loading in a typical small watershed of a hilly region near Taihu Lake, China. Runoff was calibrated and validated at both annual and monthly scales, and parameter sensitivity analysis was performed for TN and TP before the two water quality components were calibrated. The results showed that the model satisfactorily simulated runoff at annual and monthly scales, during both the calibration and validation processes. Additionally, the parameter sensitivity analysis showed that TN output was most sensitive to the parameters Fertilizer rate, Fertilizer organic, Canopy cover and Fertilizer inorganic. For TP, the most sensitive parameters were Residue mass ratio, Fertilizer rate, Fertilizer inorganic and Canopy cover. Calibration was performed on these sensitive parameters. TN loading produced satisfactory results for both the calibration and validation processes, whereas the performance for TP loading was slightly poorer. The simulation results showed that AnnAGNPS has the potential to be used as a valuable tool for the planning and management of watersheds.
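Calibration performance in studies like this is typically scored with the Nash-Sutcliffe efficiency; a direct implementation with toy data (the choice of this metric here is our assumption, since the abstract does not name it):

    # NSE = 1 - SSE / SST; 1 is a perfect fit, 0 means no better than the mean.
    import numpy as np

    def nse(obs, sim):
        obs, sim = np.asarray(obs), np.asarray(sim)
        return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    print(nse([3.1, 5.4, 2.2, 8.0], [2.8, 5.9, 2.5, 7.4]))  # ~0.96 here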
1T Pixel Using Floating-Body MOSFET for CMOS Image Sensors.
Lu, Guo-Neng; Tournier, Arnaud; Roy, François; Deschamps, Benoît
2009-01-01
We present a single-transistor pixel for CMOS image sensors (CIS). It is a floating-body MOSFET structure, which is used as both the photo-sensing device and the source-follower transistor, and can be controlled to store and evacuate charges. Our investigation into this 1T pixel structure includes modeling to obtain an analytical description of the conversion gain. Model validation was done by comparing theoretical predictions with experimental results. The 1T pixel structure has also been implemented in different configurations, including rectangular-gate and ring-gate designs, and with variations of the oxidation parameters in the fabrication process. The pixel characteristics are presented and discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ricks, Allen; Blanchat, Thomas K.; Jernigan, Dann A.
2006-06-01
It is necessary to improve understanding of, and develop validation data for, the heat flux incident to an object located within the fire plume for the validation of SIERRA/FUEGO/SYRINX fire and SIERRA/CALORE. One key aspect of the validation data sets is the determination of the relative contributions of the radiative and convective heat fluxes. To meet this objective, a cylindrical calorimeter with sufficient instrumentation to measure total and radiative heat flux has been designed and fabricated. This calorimeter will be tested both in the controlled radiative environment of the Penlight facility and in a fire environment in the FLAME/Radiant Heat (FRH) facility. Validation experiments are specifically designed for direct comparison with the computational predictions. Making meaningful comparisons between the computational and experimental results requires careful characterization and control of the experimental features or parameters used as inputs into the computational model. Validation experiments must be designed to capture the essential physical phenomena, including all relevant initial and boundary conditions. A significant question of interest to modeling the heat flux incident to an object in or near a fire is the contribution of the radiation and convection modes of heat transfer. The series of experiments documented in this test plan is designed to provide data on the radiation partitioning, defined as the fraction of the total heat flux that is due to radiation.
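The radiation partitioning defined above reduces to a simple ratio once total and radiative heat flux are measured; a trivial sketch with placeholder gauge readings:

    # Radiative fraction from paired gauge readings (numbers are invented).
    def radiative_fraction(q_radiative, q_total):
        return q_radiative / q_total

    q_rad, q_tot = 85.0, 120.0   # kW/m^2 from radiometer and total-flux gauge
    print(radiative_fraction(q_rad, q_tot))   # ~0.71; the remainder is convective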
Performance Validation Approach for the GTX Air-Breathing Launch Vehicle
NASA Technical Reports Server (NTRS)
Trefny, Charles J.; Roche, Joseph M.
2002-01-01
The primary objective of the GTX effort is to determine whether or not air-breathing propulsion can enable a launch vehicle to achieve orbit in a single stage. Structural weight, vehicle aerodynamics, and propulsion performance must be accurately known over the entire flight trajectory in order to make a credible assessment. Structural, aerodynamic, and propulsion parameters are strongly interdependent, which necessitates a system approach to design, evaluation, and optimization of a single-stage-to-orbit concept. The GTX reference vehicle serves this purpose, by allowing design, development, and validation of components and subsystems in a system context. The reference vehicle configuration (including propulsion) was carefully chosen so as to provide high potential for structural and volumetric efficiency, and to allow the high specific impulse of air-breathing propulsion cycles to be exploited. Minor evolution of the configuration has occurred as analytical and experimental results have become available. With this development process comes increasing validation of the weight and performance levels used in system performance determination. This paper presents an overview of the GTX reference vehicle and the approach to its performance validation. Subscale test rigs and numerical studies used to develop and validate component performance levels and unit structural weights are outlined. The sensitivity of the equivalent, effective specific impulse to key propulsion component efficiencies is presented. The role of flight demonstration in development and validation is discussed.
Selection of Surrogate Bacteria for Use in Food Safety Challenge Studies: A Review.
Hu, Mengyi; Gurtler, Joshua B
2017-09-01
Nonpathogenic surrogate bacteria are prevalently used in a variety of food challenge studies in place of foodborne pathogens such as Listeria monocytogenes, Salmonella, Escherichia coli O157:H7, and Clostridium botulinum because of safety and sanitary concerns. Surrogate bacteria should have growth characteristics and/or inactivation kinetics similar to those of target pathogens under given conditions in challenge studies. It is of great importance to carefully select and validate potential surrogate bacteria when verifying microbial inactivation processes. A validated surrogate responds similar to the targeted pathogen when tested for inactivation kinetics, growth parameters, or survivability under given conditions in agreement with appropriate statistical analyses. However, a considerable number of food studies involving putative surrogate bacteria lack convincing validation sources or adequate validation processes. Most of the validation information for surrogates in these studies is anecdotal and has been collected from previous publications but may not be sufficient for given conditions in the study at hand. This review is limited to an overview of select studies and discussion of the general criteria and approaches for selecting potential surrogate bacteria under given conditions. The review also includes a list of documented bacterial pathogen surrogates and their corresponding food products and treatments to provide guidance for future studies.
NASA Astrophysics Data System (ADS)
Wi, S.; Ray, P. A.; Brown, C.
2015-12-01
A software package developed to facilitate building distributed hydrologic models in a modular modeling system is presented. The software package provides a user-friendly graphical user interface that eases its practical use in water resources-related research and practice. The modular modeling system organizes the options available to users when assembling models according to the stages of hydrological cycle, such as potential evapotranspiration, soil moisture accounting, and snow/glacier melting processes. The software is intended to be a comprehensive tool that simplifies the task of developing, calibrating, validating, and using hydrologic models through the inclusion of intelligent automation to minimize user effort, and reduce opportunities for error. Processes so far automated include the definition of system boundaries (i.e., watershed delineation), climate and geographical input generation, and parameter calibration. Built-in post-processing toolkits greatly improve the functionality of the software as a decision support tool for water resources system management and planning. Example post-processing toolkits enable streamflow simulation at ungauged sites with predefined model parameters, and perform climate change risk assessment by means of the decision scaling approach. The software is validated through application to watersheds representing a variety of hydrologic regimes.
NASA Astrophysics Data System (ADS)
Morsdorf, F.; Meier, E.; Koetz, B.; Nüesch, D.; Itten, K.; Allgöwer, B.
2003-04-01
The potential of airborne laserscanning for mapping forest stands has been intensively evaluated in the past few years. Algorithms deriving structural forest parameters in a stand-wise manner from laser data have been successfully implemented by a number of researchers. However, with very high point density laser data (>20 points/m^2), we pursue the approach of deriving these parameters on a single-tree basis. We explore the potential of delineating single trees from laser scanner raw data (x,y,z-triples) and validate this approach with a dataset of more than 2000 georeferenced trees, including tree height and crown diameter, gathered on a long-term forest monitoring site by the Swiss Federal Institute for Forest, Snow and Landscape Research (WSL). The accuracy of the laser scanner is evaluated through 6 reference targets, each 3x3 m^2 in size and horizontally planar, validating both the horizontal and vertical accuracy of the laser scanner by matching of triangular irregular networks (TINs). Single trees are segmented by a clustering analysis in all three coordinate dimensions, and their geometric properties can then be derived directly from the tree cluster.
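One plausible realization of the clustering step, assuming density-based clustering of the raw (x, y, z) returns; DBSCAN, the thresholds, and the synthetic point cloud are our assumptions, not the authors' algorithm:

    # Segmenting crown-like point clusters from a fake laser point cloud.
    import numpy as np
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(3)
    centers = rng.uniform(5, 45, size=(30, 2))            # 30 fake stem positions
    pts = [rng.normal([cx, cy, 15.0], [1.0, 1.0, 4.0], (200, 3))
           for cx, cy in centers]                         # Gaussian "crowns"
    cloud = np.vstack(pts)

    canopy = cloud[cloud[:, 2] > 2.0]                     # drop ground returns
    labels = DBSCAN(eps=1.5, min_samples=20).fit_predict(canopy)
    n_trees = len(set(labels)) - (1 if -1 in labels else 0)  # -1 is noise
    print(n_trees)   # ideally close to the 30 simulated crowns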
Savelyev, Alexey; MacKerell, Alexander D.
2015-01-01
In the present study we report on interactions of and competition between monovalent ions for two DNA sequences in MD simulations. Efforts included the development and validation of parameters for interactions among the first-group monovalent cations, Li+, Na+, K+ and Rb+, and DNA in the Drude polarizable and additive CHARMM36 force fields (FF). The optimization process targeted gas-phase QM interaction energies of various model compounds with ions and osmotic pressures of bulk electrolyte solutions of chemically relevant ions. The optimized ionic parameters are validated against counterion condensation theory and buffer exchange-atomic emission spectroscopy measurements providing quantitative data on the competitive association of different monovalent ions with DNA. Comparison between experimental and MD simulation results demonstrates that, compared to the additive CHARMM36 model, the Drude FF provides an improved description of the general features of the ionic atmosphere around DNA and leads to closer agreement with experiment on the ionic competition within the ion atmosphere. Results indicate the importance of extended simulation systems on the order of 25 Å beyond the DNA surface to obtain proper convergence of ion distributions. PMID:25751286
Artificial neural networks for modeling ammonia emissions released from sewage sludge composting
NASA Astrophysics Data System (ADS)
Boniecki, P.; Dach, J.; Pilarski, K.; Piekarska-Boniecka, H.
2012-09-01
The project was designed to develop, test and validate an original neural model describing ammonia emissions generated in composting sewage sludge. The composting mix was to include the addition of such selected structural ingredients as cereal straw, sawdust and tree bark. All created neural models contain 7 input variables (chemical and physical parameters of composting) and 1 output (ammonia emission). The data file was subdivided into three subfiles: the learning file (ZU) containing 330 cases, the validation file (ZW) containing 110 cases and the test file (ZT) containing 110 cases. The standard deviation ratios (for all 4 created networks) ranged from 0.193 to 0.218. For all of the selected models, the correlation coefficient reached the high values of 0.972-0.981. The results show that the predictive neural model describing ammonia emissions from composted sewage sludge is well suited for assessing such emissions. The sensitivity analysis of the model for the input variables of the process in question has shown that the key parameters describing ammonia emissions released in composting sewage sludge are pH and the carbon to nitrogen ratio (C:N).
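A rough Python analogue of the described setup, seven inputs, one output, and a 330/110/110 learning/validation/test split; the original study's network architecture and data are not reproduced here, and everything below is synthetic:

    # Small regression network with a three-way data split.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(4)
    X = rng.standard_normal((550, 7))       # 7 composting parameters (fake)
    y = 0.8 * X[:, 0] + 0.3 * X[:, 1] ** 2 + rng.normal(0, 0.1, 550)

    X_learn, y_learn = X[:330], y[:330]     # ZU analogue
    X_val, y_val = X[330:440], y[330:440]   # ZW analogue
    X_test, y_test = X[440:], y[440:]       # ZT analogue

    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    net.fit(X_learn, y_learn)
    print(net.score(X_val, y_val), net.score(X_test, y_test))  # held-out R^2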
Baule, A; Evans, R M L; Olmsted, P D
2006-12-01
We revisit the paradigm of an ideal gas under isothermal conditions. A moving piston performs work on an ideal gas in a container that is strongly coupled to a heat reservoir. The thermal coupling is modeled by stochastic scattering at the boundaries. In contrast to recent studies of an adiabatic ideal gas with a piston [R.C. Lua and A.Y. Grosberg, J. Phys. Chem. B 109, 6805 (2005); I. Bena, Europhys. Lett. 71, 879 (2005)], the container and piston stay in contact with the heat bath during the work process. Under this condition the heat reservoir as well as the system depend on the work parameter lambda and microscopic reversibility is broken for a moving piston. Our model is thus not included in the class of systems for which the nonequilibrium work theorem has been derived rigorously either by Hamiltonian [C. Jarzynski, J. Stat. Mech. (2004) P09005] or stochastic methods [G.E. Crooks, J. Stat. Phys. 90, 1481 (1998)]. Nevertheless the validity of the nonequilibrium work theorem is confirmed both numerically for a wide range of parameter values and analytically in the limit of a very fast moving piston, i.e., in the far nonequilibrium regime.
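For reference, the nonequilibrium work theorem (Jarzynski equality) whose validity is confirmed above can be stated in one line; with beta = 1/kT, W the work performed along one realization of the protocol, and Delta F the free-energy difference between the initial and final equilibrium states:

    <exp(-beta * W)> = exp(-beta * Delta F)

The angle brackets denote an average over realizations; this is the standard form of the theorem cited in the abstract, not a new result.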
Importance of accurately assessing biomechanics of the cornea.
Roberts, Cynthia J
2016-07-01
This article summarizes the state-of-the-art in clinical corneal biomechanics, including procedures in which biomechanics play a role, and the clinical consequences in terms of error in estimating intraocular pressure (IOP). Corneal biomechanical response to refractive surgery can be categorized into either stable alteration of surface shape and thus visual outcome, or unstable biomechanical decompensation. The stable response is characterized by central flattening and peripheral steepening that is potentiated in a stiffer cornea. Two clinical devices for assessing corneal biomechanics do not yet measure classic biomechanical properties, but rather provide assessment of corneal deformation response. Biomechanical parameters are a function of IOP, and both the cornea and sclera become stiffer as IOP increases. Any assessment of biomechanical parameters must include IOP, and one value of stiffness does not adequately characterize a cornea. Corneal biomechanics plays a role in the outcomes of any procedure in which lamellae are transected. Once the corneal structure has been altered in a manner that includes central thinning, IOP measurements with applanation tonometry are likely not valid, and other technologies should be used.
Folmsbee, Martha
2015-01-01
Approximately 97% of filter validation tests result in the demonstration of absolute retention of the test bacteria, and thus sterile filter validation failure is rare. However, while Brevundimonas diminuta (B. diminuta) penetration of sterilizing-grade filters is rarely detected, the observation that some fluids (such as vaccines and liposomal fluids) may lead to an increased incidence of bacterial penetration of sterilizing-grade filters by B. diminuta has been reported. The goal of the following analysis was to identify important drivers of filter validation failure in these rare cases. The identification of these drivers will hopefully serve the purpose of assisting in the design of commercial sterile filtration processes with a low risk of filter validation failure for vaccine, liposomal, and related fluids. Filter validation data for low-surface-tension fluids was collected and evaluated with regard to the effect of bacterial load (CFU/cm^2), bacterial load rate (CFU/min/cm^2), volume throughput (mL/cm^2), and maximum filter flux (mL/min/cm^2) on bacterial penetration. The data set (∼1162 individual filtrations) included all instances of process-specific filter validation failures performed at Pall Corporation, including those using other filter media, but did not include all successful retentive filter validation bacterial challenges. It was neither practical nor necessary to include all filter validation successes worldwide (Pall Corporation) to achieve the goals of this analysis. The percentage of failed filtration events for the selected total master data set was 27% (310/1162). Because it is heavily weighted with penetration events, this percentage is considerably higher than the actual rate of failed filter validations, but, as such, facilitated a close examination of the conditions that lead to filter validation failure. In agreement with our previous reports, two of the significant drivers of bacterial penetration identified were the total bacterial load and the bacterial load rate. In addition to these parameters, another three possible drivers of failure were also identified: volume throughput, maximum filter flux, and pressure. Of the data for which volume throughput information was available, 24% (249/1038) of the filtrations resulted in penetration. However, for the volume throughput range of 680-2260 mL/cm^2, only 9 out of 205 bacterial challenges (∼4%) resulted in penetration. Of the data for which flux information was available, 22% (212/946) resulted in bacterial penetration. However, in the maximum filter flux range from 7 to 18 mL/min/cm^2, only one out of 121 filtrations (0.6%) resulted in penetration. A slight increase in filter failure was observed in filter bacterial challenges with a differential pressure greater than 30 psid. When designing a commercial process for the sterile filtration of a low-surface-tension fluid (or any other potentially high-risk fluid), targeting the volume throughput range of 680-2260 mL/cm^2 or flux range of 7-18 mL/min/cm^2, and maintaining the differential pressure below 30 psid, could significantly decrease the risk of validation filter failure. However, it is important to keep in mind that these are general trends described in this study and some test fluids may not conform to the general trends described here.
Ultimately, it is important to evaluate both filterability and bacterial retention of the test fluid under proposed process conditions prior to finalizing the manufacturing process to ensure successful process-specific filter validation of low-surface-tension fluids. An overwhelming majority of process-specific filter validation (qualification) tests result in the demonstration of absolute retention of test bacteria by sterilizing-grade membrane filters. As such, process-specific filter validation failure is rare. However, while bacterial penetration of sterilizing-grade filters during process-specific filter validation is rarely detected, some fluids (such as vaccines and liposomal fluids) have been associated with an increased incidence of bacterial penetration. The goal of the following analysis was to identify important drivers of process-specific filter validation failure. The identification of these drivers will possibly serve to assist in the design of commercial sterile filtration processes with a low risk of filter validation failure. Filter validation data for low-surface-tension fluids was collected and evaluated with regard to bacterial concentration and rates, as well as filtered fluid volume and rate (Pall Corporation). The master data set (∼1160 individual filtrations) included all recorded instances of process-specific filter validation failures but did not include all successful filter validation bacterial challenge tests. This allowed for a close examination of the conditions that lead to process-specific filter validation failure. As previously reported, two significant drivers of bacterial penetration were identified: the total bacterial load (the total number of bacteria per filter) and the bacterial load rate (the rate at which bacteria were applied to the filter). In addition to these parameters, another three possible drivers of failure were also identified: volumetric throughput, filter flux, and pressure. When designing a commercial process for the sterile filtration of a low-surface-tension fluid (or any other penetrative-risk fluid), targeting the identified bacterial challenge loads, volume throughput, and corresponding flux rates could decrease, and possibly eliminate, the risk of validation filter failure. However, it is important to keep in mind that these are general trends described in this study and some test fluids may not conform to the general trends described here. Ultimately, it is important to evaluate both filterability and bacterial retention of the test fluid under proposed process conditions prior to finalizing the manufacturing process to ensure successful filter validation of low-surface-tension fluids. © PDA, Inc. 2015.
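The four drivers identified above can be computed from a filtration run's raw numbers; a small helper of our own (with invented inputs) that reports them in the units used in the text:

    # Per-run filtration metrics; flux here assumes a steady flow rate.
    def filtration_metrics(cfu_total, volume_ml, minutes, area_cm2):
        return {
            "bacterial_load_cfu_per_cm2": cfu_total / area_cm2,
            "load_rate_cfu_per_min_cm2": cfu_total / minutes / area_cm2,
            "throughput_ml_per_cm2": volume_ml / area_cm2,
            "flux_ml_per_min_cm2": volume_ml / minutes / area_cm2,
        }

    m = filtration_metrics(cfu_total=1e7, volume_ml=13500,
                           minutes=90, area_cm2=13.8)      # invented run
    print(m)   # e.g. check throughput against the 680-2260 mL/cm^2 window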
[Validation and verification of microbiology methods].
Camaró-Sala, María Luisa; Martínez-García, Rosana; Olmos-Martínez, Piedad; Catalá-Cuenca, Vicente; Ocete-Mochón, María Dolores; Gimeno-Cardona, Concepción
2015-01-01
Clinical microbiologists should ensure, to the maximum level allowed by scientific and technical development, the reliability of their results. This implies that, in addition to meeting the technical criteria that ensure their validity, tests must be performed under conditions that allow comparable results to be obtained regardless of the laboratory performing them. In this sense, the use of recognized and accepted reference methods is the most effective tool for providing these guarantees. Activities related to the verification and validation of analytical methods have become very important, as techniques and increasingly complex analytical equipment are continuously developed and updated, and professionals have an interest in ensuring quality processes and results. The definitions of validation and verification are described, along with the different types of validation/verification, the types of methods, and the level of validation necessary depending on the degree of standardization. The situations in which validation/verification is mandatory and/or recommended are discussed, including those particularly related to validation in Microbiology. The paper stresses the importance of promoting the use of reference strains as controls in Microbiology and the use of standard controls, as well as the importance of participation in External Quality Assessment programs to demonstrate technical competence. The emphasis is on how to calculate some of the parameters required for validation/verification, such as accuracy and precision. The development of these concepts can be found in the microbiological procedure SEIMC number 48: «Validation and verification of microbiological methods» (www.seimc.org/protocols/microbiology). Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.
Validation study and routine control monitoring of moist heat sterilization procedures.
Shintani, Hideharu
2012-06-01
The proposed approach to validation of steam sterilization in autoclaves follows the basic life cycle concepts applicable to all validation programs: understand the function of the sterilization process, develop and understand the cycles used to carry out the process, and define a suitable test or series of tests to confirm that the function of the process is suitably ensured by the structure provided. Sterilization of product, and of components and parts that come into direct contact with sterilized product, is the most critical of pharmaceutical processes. Consequently, this process requires a most rigorous and detailed approach to validation. An understanding of the process requires a basic understanding of microbial death, the parameters that govern that death, the accepted definition of sterility, and the relationship between that definition and the sterilization parameters. Autoclaves and support systems need to be designed, installed, and qualified in a manner that ensures their continued reliability. Lastly, the test program must be complete and definitive. In this paper, in addition to the validation study, the documentation of IQ, OQ, and PQ is described in concrete detail.
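Cycle development and routine monitoring of moist-heat sterilization commonly rest on the lethality integral F0; a textbook implementation (generic industry practice, not taken from this paper):

    # F0 = sum of 10^((T - 121.1) / z) * dt over the cycle, z = 10 C,
    # i.e. equivalent minutes of exposure at 121.1 C.
    def f0(temps_c, dt_min=1.0, t_ref=121.1, z=10.0):
        return sum(10 ** ((t - t_ref) / z) * dt_min for t in temps_c)

    profile = [110, 115, 118, 121, 121, 121, 121, 118, 112]  # one reading/min
    print(round(f0(profile), 2))   # ~5.3 equivalent minutes at 121.1 C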
Gebremariam, Mekdes K; Vaqué-Crusellas, Cristina; Andersen, Lene F; Stok, F Marijn; Stelmach-Mardas, Marta; Brug, Johannes; Lien, Nanna
2017-02-14
Comprehensive and psychometrically tested measures of availability and accessibility of food are needed in order to explore availability and accessibility as determinants and predictors of dietary behaviors. The main aim of this systematic review was to update the evidence regarding the psychometric properties of measures of food availability and accessibility among youth. A secondary objective was to assess how availability and accessibility were conceptualized in the included studies. A systematic literature search was conducted using Medline, Embase, PsycINFO and Web of Science. Methodological studies published between January 2010 and March 2016 and reporting on at least one psychometric property of a measure of availability and/or accessibility of food among youth were included. Two reviewers independently extracted data and assessed study quality. Existing criteria were used to interpret reliability and validity parameters. A total of 20 studies were included. While 16 studies included measures of food availability, three included measures of both availability and accessibility; one study included a measure of accessibility only. Different conceptualizations of availability and accessibility were used across the studies. The measures aimed at assessing availability and/or accessibility in the home environment (n = 11), the school (n = 4), stores (n = 3), childcare/early care and education services (n = 2) and restaurants (n = 1). Most studies followed systematic steps in the development of the measures. The most common psychometrics tested for these measures were test-retest reliability and criterion validity. The majority of the measures had satisfactory evidence of reliability and/or validity. None of the included studies assessed the responsiveness of the measures. The review identified several measures of food availability or accessibility among youth with satisfactory evidence of reliability and/or validity. Findings indicate a need for more studies including measures of accessibility and addressing its conceptualization. More testing of some of the identified measures in different population groups is also warranted, as is the development of more measures of food availability and accessibility in the broader environment such as the neighborhood food environment.
Stage-discharge rating curves based on satellite altimetry and modeled discharge in the Amazon basin
NASA Astrophysics Data System (ADS)
Paris, Adrien; Dias de Paiva, Rodrigo; Santos da Silva, Joecila; Medeiros Moreira, Daniel; Calmant, Stephane; Garambois, Pierre-André; Collischonn, Walter; Bonnet, Marie-Paule; Seyler, Frederique
2016-05-01
In this study, rating curves (RCs) were determined by applying satellite altimetry to a poorly gauged basin. This study demonstrates the synergistic application of remote sensing and watershed modeling to capture the dynamics and quantity of flow in the Amazon River Basin, respectively. Three major advancements for estimating basin-scale patterns in river discharge are described. The first advancement is the preservation of the hydrological meanings of the parameters expressed by Manning's equation to obtain a data set containing the elevations of the river beds throughout the basin. The second advancement is the provision of parameter uncertainties and, therefore, the uncertainties in the rated discharge. The third advancement concerns estimating the discharge while considering backwater effects. We analyzed the Amazon Basin using nearly one thousand series that were obtained from ENVISAT and Jason-2 altimetry for more than 100 tributaries. Discharge values and related uncertainties were obtained from the rain-discharge MGB-IPH model. We used a global optimization algorithm based on the Monte Carlo Markov Chain and Bayesian framework to determine the rating curves. The data were randomly allocated into 80% calibration and 20% validation subsets. A comparison with the validation samples produced a Nash-Sutcliffe efficiency (Ens) of 0.68. When the MGB discharge uncertainties were less than 5%, the Ens value increased to 0.81 (mean). A comparison with the in situ discharge resulted in an Ens value of 0.71 for the validation samples (and 0.77 for calibration). The Ens values at the mouths of the rivers that experienced backwater effects significantly improved when the mean monthly slope was included in the RC. Our RCs were not mission-dependent, and the Ens value was preserved when applying ENVISAT rating curves to Jason-2 altimetry at crossovers. The cease-to-flow parameter of our RCs provided a good proxy for determining river bed elevation. This proxy was validated against Acoustic Doppler current profiler (ADCP) cross sections with an accuracy of more than 90%. Altimetry measurements are routinely delivered within a few days, and this RC data set provides a simple and cost-effective tool for predicting discharge throughout the basin in nearly real time.
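A minimal sketch of the rating-curve form underlying such studies, Q = a(h - h0)^b, fitted here with ordinary least squares instead of the paper's Bayesian MCMC machinery; the stage and discharge values are invented:

    # Fitting a power-law rating curve; h0 plays the role of the effective
    # river-bed (cease-to-flow) elevation discussed in the abstract.
    import numpy as np
    from scipy.optimize import curve_fit

    def rating_curve(h, a, h0, b):
        return a * np.clip(h - h0, 1e-6, None) ** b

    h = np.array([2.1, 2.8, 3.5, 4.2, 5.0, 5.9])         # altimetric stage, m
    q = np.array([850, 1700, 2900, 4400, 6500, 9200.0])  # modeled discharge, m3/s

    (a, h0, b), _ = curve_fit(rating_curve, h, q, p0=[500.0, 1.0, 1.6])
    print(a, h0, b)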
Sherzer, Gili; Gao, Peng; Schlangen, Erik; Ye, Guang; Gal, Erez
2017-02-28
Modeling the complex behavior of concrete for a specific mixture is a challenging task, as it requires bridging the cement scale and the concrete scale. We describe a multiscale analysis procedure for the modeling of concrete structures, in which material properties at the macro scale are evaluated based on lower scales. Concrete may be viewed over a range of scale sizes, from the atomic scale (10−10 m), which is characterized by the behavior of crystalline particles of hydrated Portland cement, to the macroscopic scale (10 m). The proposed multiscale framework is based on several models, including chemical analysis at the cement paste scale, a mechanical lattice model at the cement and mortar scales, geometrical aggregate distribution models at the mortar scale, and the Lattice Discrete Particle Model (LDPM) at the concrete scale. The analysis procedure starts from a known chemical and mechanical set of parameters of the cement paste, which are then used to evaluate the mechanical properties of the LDPM concrete parameters for the fracture, shear, and elastic responses of the concrete. Although a macroscopic validation study of this procedure is presented, future research should include a comparison to additional experiments in each scale.
Analytical Verifications in Cryogenic Testing of NGST Advanced Mirror System Demonstrators
NASA Technical Reports Server (NTRS)
Cummings, Ramona; Levine, Marie; VanBuren, Dave; Kegley, Jeff; Green, Joseph; Hadaway, James; Presson, Joan; Cline, Todd; Stahl, H. Philip (Technical Monitor)
2002-01-01
Ground based testing is a critical and costly part of component, assembly, and system verifications of large space telescopes. At such tests, however, with integral teamwork by planners, analysts, and test personnel, segments can be included to validate specific analytical parameters and algorithms at relatively low additional cost. This paper opens with the strategy of analytical verification segments added to vacuum cryogenic testing of Advanced Mirror System Demonstrator (AMSD) assemblies. These AMSD assemblies incorporate material and architecture concepts being considered in the Next Generation Space Telescope (NGST) design. The test segments for workmanship testing, cold survivability, and cold operation optical throughput are supplemented by segments for analytical verifications of specific structural, thermal, and optical parameters. Utilizing integrated modeling and separate materials testing, the paper continues with the support plan for analyses, data, and observation requirements during the AMSD testing, currently slated for late calendar year 2002 to mid calendar year 2003. The paper includes anomaly resolution as gleaned by the authors from similar analytical verification support of a previous large space telescope, then closes with a draft of plans for parameter extrapolations, to form a well-verified portion of the integrated modeling being done for NGST performance predictions.
Uncertainty Analysis in 3D Equilibrium Reconstruction
Cianciosa, Mark R.; Hanson, James D.; Maurer, David A.
2018-02-21
Reconstruction is an inverse process where a parameter space is searched to locate a set of parameters with the highest probability of describing experimental observations. Due to systematic errors and uncertainty in experimental measurements, this optimal set of parameters will contain some associated uncertainty. This uncertainty in the optimal parameters leads to uncertainty in models derived using those parameters. V3FIT is a three-dimensional (3D) equilibrium reconstruction code that propagates uncertainty from the input signals, to the reconstructed parameters, and to the final model. In this paper, we describe the methods used to propagate uncertainty in V3FIT. Using the results of whole-shot 3D equilibrium reconstruction of the Compact Toroidal Hybrid, this propagated uncertainty is validated against the random variation in the resulting parameters. Two different model parameterizations demonstrate how the uncertainty propagation can indicate the quality of a reconstruction. As a proxy for random sampling, the whole-shot reconstruction results within a time interval are used to validate the propagated uncertainty from a single time slice.
Optimal test selection for prediction uncertainty reduction
Mullins, Joshua; Mahadevan, Sankaran; Urbina, Angel
2016-12-02
Economic factors and experimental limitations often lead to sparse and/or imprecise data used for the calibration and validation of computational models. This paper addresses resource allocation for calibration and validation experiments, in order to maximize their effectiveness within given resource constraints. When observation data are used for model calibration, the quality of the inferred parameter descriptions is directly affected by the quality and quantity of the data. This paper characterizes parameter uncertainty within a probabilistic framework, which enables the uncertainty to be systematically reduced with additional data. The validation assessment is also uncertain in the presence of sparse and imprecise data; therefore, this paper proposes an approach for quantifying the resulting validation uncertainty. Since calibration and validation uncertainty affect the prediction of interest, the proposed framework explores the decision of cost versus importance of data in terms of the impact on the prediction uncertainty. Often, calibration and validation tests may be performed for different input scenarios, and this paper shows how the calibration and validation results from different conditions may be integrated into the prediction. Then, a constrained discrete optimization formulation that selects the number of tests of each type (calibration or validation at given input conditions) is proposed. Furthermore, the proposed test selection methodology is demonstrated on a microelectromechanical system (MEMS) example.
Methodology Series Module 9: Designing Questionnaires and Clinical Record Forms - Part II.
Setia, Maninder Singh
2017-01-01
This article is a continuation of the previous module on designing questionnaires and clinical record form in which we have discussed some basic points about designing the questionnaire and clinical record forms. In this section, we will discuss the reliability and validity of questionnaires. The different types of validity are face validity, content validity, criterion validity, and construct validity. The different types of reliability are test-retest reliability, inter-rater reliability, and intra-rater reliability. Some of these parameters are assessed by subject area experts. However, statistical tests should be used for evaluation of other parameters. Once the questionnaire has been designed, the researcher should pilot test the questionnaire. The items in the questionnaire should be changed based on the feedback from the pilot study participants and the researcher's experience. After the basic structure of the questionnaire has been finalized, the researcher should assess the validity and reliability of the questionnaire or the scale. If an existing standard questionnaire is translated in the local language, the researcher should assess the reliability and validity of the translated questionnaire, and these values should be presented in the manuscript. The decision to use a self- or interviewer-administered, paper- or computer-based questionnaire depends on the nature of the questions, literacy levels of the target population, and resources.
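As a companion to the reliability concepts above, the sketch below computes one statistic a researcher would typically report for test-retest or inter-rater reliability: the intraclass correlation ICC(2,1) in the Shrout and Fleiss two-way random-effects form. The simulated data and noise levels are illustrative only, not an analysis from the module.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single measures
    (Shrout & Fleiss), for an (n_subjects, k_occasions_or_raters) matrix."""
    y = np.asarray(scores, dtype=float)
    n, k = y.shape
    grand = y.mean()
    msr = k * np.sum((y.mean(axis=1) - grand) ** 2) / (n - 1)   # subjects
    msc = n * np.sum((y.mean(axis=0) - grand) ** 2) / (k - 1)   # occasions/raters
    sse = np.sum((y - grand) ** 2) - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(1)
true_score = rng.normal(50, 10, 80)            # 80 respondents' latent scores
test = true_score + rng.normal(0, 4, 80)       # administration 1
retest = true_score + rng.normal(0, 4, 80)     # administration 2
print(f"test-retest ICC(2,1): {icc_2_1(np.column_stack([test, retest])):.2f}")
```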
Deichmann Nielsen, Lea; Bech, Per; Hounsgaard, Lise; Alkier Gildberg, Frederik
2017-08-01
Unstructured risk assessment, as well as confounders (underlying reasons for the patient's risk behaviour and alliance), risk behaviour, and parameters of alliance, have been identified as factors that prolong the duration of mechanical restraint among forensic mental health inpatients. To clinically validate a new, structured short-term risk assessment instrument called the Mechanical Restraint-Confounders, Risk, Alliance Score (MR-CRAS), with the intended purpose of supporting the clinicians' observation and assessment of the patient's readiness to be released from mechanical restraint. The content and layout of MR-CRAS and its user manual were evaluated using face validation by forensic mental health clinicians, content validation by an expert panel, and pilot testing within two closed forensic mental health inpatient units. The three sub-scales (Confounders, Risk, and a parameter of Alliance) showed excellent content validity. The clinical validations also showed that MR-CRAS was perceived and experienced as a comprehensible, relevant, comprehensive, and usable risk assessment instrument. MR-CRAS contains 18 clinically valid items, and the instrument can be used to support clinical decision-making regarding the possibility of releasing the patient from mechanical restraint. The present three studies have clinically validated a short MR-CRAS scale that is currently being psychometrically tested in a larger study.
Break and trend analysis of EUMETSAT Climate Data Records
NASA Astrophysics Data System (ADS)
Doutriaux-Boucher, Marie; Zeder, Joel; Lattanzio, Alessio; Khlystova, Iryna; Graw, Kathrin
2016-04-01
EUMETSAT reprocessed imagery acquired by the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on board Meteosat 8-9. The data cover the period from 2004 to 2012. Climate Data Records (CDRs) of atmospheric parameters such as Atmospheric Motion Vectors (AMV) as well as Clear and All Sky Radiances (CSR and ASR) have been generated. Such CDRs are mainly ingested by ECMWF to produce reanalysis data. In addition, EUMETSAT produced a long CDR (1982-2004) of land surface albedo exploiting imagery acquired by the Meteosat Visible and Infrared Imager (MVIRI) on board Meteosat 2-7. This CDR is key information for climate analysis and climate models. Extensive validation has been performed for the surface albedo record, and a first validation of the winds and clear sky radiances has been done. All validation results demonstrated that the time series of all parameters appear homogeneous at first sight. Statistical science offers a variety of analysis methods that have been applied to further examine the homogeneity of the CDRs. Many breakpoint analysis techniques depend on the comparison of two time series, which raises the issue that both may have breakpoints. This paper presents a quantitative and statistical analysis of possible breakpoints found in the MVIRI and SEVIRI CDRs, including attribution of breakpoints to changes of instruments and other events in the data series compared. The value of the different methods applied is discussed, with suggestions on how to further develop this type of analysis for quality evaluation of CDRs.
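For intuition, a minimal breakpoint check of the kind such homogeneity studies build on is sketched below: the classical CUSUM of mean-removed anomalies, whose largest excursion marks the most likely single mean shift. The albedo-like series and the instrument-change scenario are synthetic; the actual EUMETSAT analyses are more elaborate than this.

```python
import numpy as np

def cusum_breakpoint(x):
    """Locate the most likely single mean shift via the CUSUM of
    mean-removed anomalies; returns (index, max |S_k|)."""
    x = np.asarray(x, dtype=float)
    s = np.cumsum(x - x.mean())
    k = int(np.argmax(np.abs(s)))
    return k, float(np.abs(s[k]))

rng = np.random.default_rng(2)
albedo = np.concatenate([rng.normal(0.20, 0.01, 120),   # before instrument change
                         rng.normal(0.22, 0.01, 120)])  # after instrument change
k, stat = cusum_breakpoint(albedo)
print(f"candidate breakpoint at sample {k} (CUSUM statistic {stat:.2f})")
# Significance could be assessed by bootstrapping reshuffled series.
```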
NASA Technical Reports Server (NTRS)
Ouzounov, D.; Pulinets, S.; Davindenko, D.; Hattori, K.; Kafatos, M.; Taylor, P.
2012-01-01
We are conducting a scientific validation study involving multi-sensor observations in our investigation of phenomena preceding major earthquakes. Our approach is based on a systematic analysis of several atmospheric and environmental parameters, which we found are associated with the earthquakes, namely: thermal infrared radiation, outgoing long-wavelength radiation, ionospheric electron density, and atmospheric temperature and humidity. For the first time we applied this approach to selected GEOSS sites prone to earthquakes or volcanoes. This provides a new opportunity to cross-validate our results with the dense networks of in-situ and space measurements. We investigated two different seismic aspects: first, the sites with recent large earthquakes, viz. Tohoku-oki (M9, 2011, Japan) and the Emilia region (M5.9, 2012, N. Italy). Our retrospective analysis of satellite data has shown the presence of anomalies in the atmosphere. Second, we did a retrospective analysis to check the recurrence of similar anomalous behavior in the atmosphere/ionosphere over three regions with distinct geological settings and high seismicity: Taiwan, Japan and Kamchatka, which include 40 major earthquakes (M>5.9) for the period 2005-2009. We found anomalous behavior before all of these events, with no false negatives; false positives were less than 10%. Our initial results suggest that multi-instrument space-borne and ground observations show a systematic appearance of atmospheric anomalies near the epicentral area that could be explained by a coupling between the observed physical parameters and earthquake preparation processes.
NASA Astrophysics Data System (ADS)
Almazmumy, Mariam; Ebaid, Abdelhalim
2017-08-01
In this article, the flow and heat transfer of a non-Newtonian nanofluid between two coaxial cylinders through a porous medium has been investigated. The velocity, temperature, and nanoparticle concentration of the present mathematical model are governed by a system of nonlinear ordinary differential equations. The objective of this article is to obtain new exact solutions for the temperature and the nanoparticle concentration and, therefore, compare them with the previous approximate results in the literature. Moreover, the velocity equation has been solved numerically. The effects of the pressure gradient, thermophoresis, third-grade, Brownian motion, and porosity parameters on the included phenomena have been discussed through several tables and plots. It is found that the velocity profile increases with increasing pressure gradient parameter, thermophoresis parameter (slightly), third-grade parameter, and Brownian motion parameter (slightly); however, it decreases with an increase in the porosity parameter and viscosity power index. In addition, the temperature and the nanoparticle concentration decrease as the Brownian motion parameter strengthens, while they increase with increasing thermophoresis parameter. Furthermore, the numerical solution and the physical interpretation in the literature for the same problem have been validated against the current exact analysis, where many remarkable differences and errors have been identified. Therefore, the suggested analysis may be recommended with confidence for similar problems.
NASA Astrophysics Data System (ADS)
Zhao, Xiuliang; Cheng, Yong; Wang, Limei; Ji, Shaobo
2017-03-01
Accurate combustion parameters are the foundation of effective closed-loop control of the engine combustion process. Some combustion parameters, including the start of combustion, the location of peak pressure, the maximum pressure rise rate and its location, can be identified from the engine block vibration signals. These signals often include non-combustion-related contributions, which hinder the prompt computational extraction of the combustion parameters. The main component of these non-combustion-related contributions is considered to be caused by the reciprocating inertia force excitation (RIFE) of the engine crank train. A mathematical model is established to describe the response of the RIFE. The parameters of the model are recognized with a pattern recognition algorithm, the response of the RIFE is predicted, and the related contributions are then removed from the measured vibration velocity signals. The combustion parameters are extracted from the feature points of the renovated vibration velocity signals. There are angle deviations between the feature points in the vibration velocity signals and those in the cylinder pressure signals. For the start of combustion, a systematic bias is adopted to correct the deviation, and the error bound of the predicted parameters is within 1.1°. To predict the location of the maximum pressure rise rate and the location of the peak pressure, algorithms based on the proportion of high frequency components in the vibration velocity signals are introduced. Test results show that the two parameters can be predicted within 0.7° and 0.8° error bounds, respectively. The increase from the knee point preceding the peak value point to the peak value in the vibration velocity signals is used to predict the value of the maximum pressure rise rate. Finally, a monitoring framework is proposed to realize the combustion parameter prediction. Satisfactory prediction for combustion parameters in successive cycles is achieved, which validates the proposed methods.
Tsehaie, J; Poot, D H J; Oei, E H G; Verhaar, J A N; de Vos, R J
2017-07-01
To evaluate whether baseline MRI parameters provide prognostic value for clinical outcome, and to study the correlation between MRI parameters and clinical outcome. Observational prospective cohort study. Patients with chronic midportion Achilles tendinopathy were included and performed a 16-week eccentric calf-muscle exercise program. Outcome measurements were the validated Victorian Institute of Sports Assessment-Achilles (VISA-A) questionnaire and MRI parameters at baseline and after 24 weeks. The following MRI parameters were assessed: tendon volume (Volume), tendon maximum cross-sectional area (CSA), tendon maximum anterior-posterior diameter (AP), and signal intensity (SI). Intra-class correlation coefficients (ICCs) and minimum detectable changes (MDCs) for each parameter were established in a reliability analysis. Twenty-five patients were included and complete follow-up was achieved in 20 patients. The average VISA-A scores increased significantly, by 12.3 points (27.6%). The reliability was fair to good for all MRI parameters, with ICCs > 0.50. Average tendon volume and CSA decreased significantly, by 0.28 cm³ (5.2%) and 4.52 mm² (4.6%), respectively. Other MRI parameters did not change significantly. None of the baseline MRI parameters was univariately associated with VISA-A change after 24 weeks. MRI SI increase over 24 weeks was positively correlated with the VISA-A score improvement (B = 0.7, R² = 0.490, p = 0.02). Tendon volume and CSA decreased significantly after 24 weeks of conservative treatment. As these differences were within the MDC limits, they could be a result of measurement error. Furthermore, MRI parameters at baseline did not predict the change in symptoms, and therefore have no added value in providing a prognosis in daily clinical practice. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
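The minimum detectable change used to judge those volume and CSA differences follows directly from the ICC via the standard formulas SEM = SD·√(1 − ICC) and MDC95 = 1.96·√2·SEM. The sketch below applies them with illustrative numbers, not values taken from the study.

```python
import numpy as np

def minimum_detectable_change(sd, icc, z=1.96):
    """MDC at 95% confidence: z * sqrt(2) * SEM, with SEM = SD * sqrt(1 - ICC)."""
    sem = sd * np.sqrt(1.0 - icc)
    return z * np.sqrt(2.0) * sem

# Illustrative numbers only: baseline tendon volume SD of 0.9 cm^3 and an
# ICC of 0.80 for the volume measurement.
mdc = minimum_detectable_change(sd=0.9, icc=0.80)
print(f"MDC95 = {mdc:.2f} cm^3; observed changes below this may be measurement error")
```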
Practical input optimization for aircraft parameter estimation experiments. Ph.D. Thesis, 1990
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1993-01-01
The object of this research was to develop an algorithm for the design of practical, optimal flight test inputs for aircraft parameter estimation experiments. A general, single-pass technique was developed which allows global optimization of the flight test input design for parameter estimation using the principles of dynamic programming, with the input forms limited to square waves only. Provision was made for practical constraints on the input, including amplitude constraints, control system dynamics, and selected input frequency range exclusions. In addition, the input design was accomplished while imposing output amplitude constraints required by model validity and considerations of safety during the flight test. The algorithm has multiple input design capability, with optional inclusion of a constraint that only one control moves at a time, so that a human pilot can implement the inputs. It is shown that the technique can be used to design experiments for estimation of open loop model parameters from closed loop flight test data. The report includes a new formulation of the optimal input design problem, a description of a new approach to the solution, and a summary of the characteristics of the algorithm, followed by three example applications of the new technique which demonstrate the quality and expanded capabilities of the input designs produced. In all cases, the new input design approach showed significant improvement over previous input design methods in terms of achievable parameter accuracies.
Modeling Piezoelectric Stack Actuators for Control of Micromanipulation
NASA Technical Reports Server (NTRS)
Goldfarb, Michael; Celanovic, Nikola
1997-01-01
A nonlinear lumped-parameter model of a piezoelectric stack actuator has been developed to describe actuator behavior for purposes of control system analysis and design, and, in particular, for microrobotic applications requiring accurate position and/or force control. In formulating this model, the authors propose a generalized Maxwell resistive capacitor as a lumped-parameter causal representation of rate-independent hysteresis. Model formulation is validated by comparing results of numerical simulations to experimental data. Validation is followed by a discussion of model implications for purposes of actuator control.
Kramers, Cornelis; Derijks, Hieronymus J.; Wensing, Michel; Wetzels, Jack F. M.
2015-01-01
Background The Modification of Diet in Renal Disease (MDRD) formula is widely used in clinical practice to assess the correct drug dose. This formula is based on serum creatinine levels, which might be influenced by a chronic disease itself or by its effects. We conducted a systematic review to determine the validity of the MDRD formula in specific patient populations with renal impairment: elderly, hospitalized and obese patients, patients with cardiovascular disease, cancer, chronic respiratory diseases, diabetes mellitus, liver cirrhosis and human immunodeficiency virus. Methods and Findings We searched for articles in Pubmed published from January 1999 through January 2014. Selection criteria were (1) patients with a glomerular filtration rate (GFR) < 60 ml/min/1.73 m², (2) MDRD formula compared with a gold standard and (3) statistical analysis focused on bias, precision and/or accuracy. Data extraction was done by the first author and checked by a second author. A bias of 20% or less, a precision of 30% or less and an accuracy expressed as P30% of 80% or higher were indicators of the validity of the MDRD formula. In total we included 27 studies. The number of patients included ranged from 8 to 1831. The gold standard and measurement method used varied across the studies. For none of the specific patient populations did the studies provide sufficient evidence of validity of the MDRD formula regarding the three parameters. For patients with diabetes mellitus and liver cirrhosis, hospitalized patients and elderly patients with moderate to severe renal impairment we concluded that the MDRD formula is not valid. Limitations of the review are the lack of consideration of the method of measuring serum creatinine levels and of the type of gold standard used. Conclusion In several specific patient populations with renal impairment the use of the MDRD formula is not valid or has uncertain validity. PMID:25741695
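The 4-variable MDRD equation at the center of this review is easy to state in code. The sketch below uses the commonly cited IDMS-traceable coefficients (175 × Scr^−1.154 × age^−0.203, with 0.742 for women and 1.212 for Black patients); the review does not fix a particular variant, so treat the constants as the standard published ones rather than something taken from this paper.

```python
def mdrd_egfr(scr_mg_dl, age_years, female=False, black=False):
    """4-variable (IDMS-traceable) MDRD estimate, in ml/min/1.73 m^2."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# An elderly woman with serum creatinine 1.8 mg/dL:
print(f"{mdrd_egfr(1.8, 75, female=True):.0f} ml/min/1.73 m^2")
```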
Kruger, Tillmann H C; Deiter, Frank; Zhang, Yuanyuan; Jung, Stefanie; Schippert, Cordula; Kahl, Kai G; Heinrichs, Markus; Schedlowski, Manfred; Hartmann, Uwe
2018-06-01
The neuropeptide oxytocin (OXT) has a variety of physiological functions in maternal behavior and attachment including sexual behavior. Based on animal research and our previous human studies, we set out to investigate intranasal administration of OXT and hypothesized that OXT should be able to modulate sexual function in women. In a double-blind, placebo-controlled, crossover laboratory setting, the acute effects of intranasal administered OXT (24 international units) on sexual drive, arousal, orgasm, and refractory aspects of sexual behavior were analyzed in 27 healthy females (mean age ± SD, 27.52 ± 8.04) together with physiological parameters using vaginal photoplethysmography. Oxytocin administration showed no effect on subjective sexual parameters (eg, postorgasmic tension; P = 0.051). Physiological parameters (vaginal photoplethysmography amplitude and vaginal blood volume) showed a response pattern towards sexual arousal but were not affected by OXT. Using a well-established laboratory paradigm, we did not find that intranasal OXT influences female sexual parameters. Also, sexual drive and other functions were not affected by OXT. These findings indicate that OXT is not able to significantly increase subjective and objective parameters of sexual function in a setting with high internal validity; however, this might be different in a more naturalistic setting.
Using evolutionary computation to optimize an SVM used in detecting buried objects in FLIR imagery
NASA Astrophysics Data System (ADS)
Paino, Alex; Popescu, Mihail; Keller, James M.; Stone, Kevin
2013-06-01
In this paper we describe an approach for optimizing the parameters of a Support Vector Machine (SVM) as part of an algorithm used to detect buried objects in forward looking infrared (FLIR) imagery captured by a camera installed on a moving vehicle. The overall algorithm consists of a spot-finding procedure (to look for potential targets) followed by the extraction of several features from the neighborhood of each spot. The features include local binary pattern (LBP) and histogram of oriented gradients (HOG) as these are good at detecting texture classes. Finally, we project and sum each hit into UTM space along with its confidence value (obtained from the SVM), producing a confidence map for ROC analysis. In this work, we use an Evolutionary Computation Algorithm (ECA) to optimize various parameters involved in the system, such as the combination of features used, parameters on the Canny edge detector, the SVM kernel, and various HOG and LBP parameters. To validate our approach, we compare results obtained from an SVM using parameters obtained through our ECA technique with those previously selected by hand through several iterations of "guess and check".
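The optimization loop described above can be sketched compactly. The snippet below uses a simple (mu + lambda) evolution strategy over two SVM hyperparameters (log10 C and log10 gamma) with cross-validated accuracy as the fitness, on synthetic data; the authors' actual ECA also searches feature combinations and Canny, HOG, and LBP parameters, so this is a reduced illustration, not their implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

def fitness(log_c, log_gamma):
    """Cross-validated accuracy of an RBF SVM with the given hyperparameters."""
    clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_gamma, kernel="rbf")
    return cross_val_score(clf, X, y, cv=3).mean()

# (mu + lambda) evolution strategy over (log10 C, log10 gamma)
pop = rng.uniform([-2, -5], [3, 0], size=(10, 2))
for gen in range(15):
    scores = np.array([fitness(*ind) for ind in pop])
    parents = pop[np.argsort(scores)[-5:]]                  # keep the best 5
    children = parents + rng.normal(0, 0.3, parents.shape)  # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(*ind) for ind in pop])]
print("best log10(C), log10(gamma):", best)
```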
Performance Model and Sensitivity Analysis for a Solar Thermoelectric Generator
NASA Astrophysics Data System (ADS)
Rehman, Naveed Ur; Siddiqui, Mubashir Ali
2017-03-01
In this paper, a regression model for evaluating the performance of solar concentrated thermoelectric generators (SCTEGs) is established and the significance of contributing parameters is discussed in detail. The model is based on several natural, design and operational parameters of the system, including the thermoelectric generator (TEG) module and its intrinsic material properties, the connected electrical load, concentrator attributes, heat transfer coefficients, solar flux, and ambient temperature. The model is developed by fitting a response curve, using the least-squares method, to the results. The sample points for the model were obtained by simulating a thermodynamic model, also developed in this paper, over a range of values of input variables. These samples were generated employing the Latin hypercube sampling (LHS) technique using a realistic distribution of parameters. The coefficient of determination was found to be 99.2%. The proposed model is validated by comparing the predicted results with those in the published literature. In addition, based on the elasticity for parameters in the model, sensitivity analysis was performed and the effects of parameters on the performance of SCTEGs are discussed in detail. This research will contribute to the design and performance evaluation of any SCTEG system for a variety of applications.
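A scaled-down version of the sampling-plus-fitting workflow looks like this: draw a Latin hypercube sample over the input ranges, run a performance model at each point, and fit a least-squares response surface. The two inputs, their bounds, and the stand-in simulator below are invented for illustration and do not reproduce the paper's thermodynamic model.

```python
import numpy as np
from scipy.stats import qmc

# Illustrative stand-in for the thermodynamic model: efficiency as a
# function of solar flux (W/m^2) and load resistance (ohm).
def simulate(flux, r_load):
    return 1e-5 * flux - 0.002 * (r_load - 1.5) ** 2 + 0.05

sampler = qmc.LatinHypercube(d=2, seed=4)
unit = sampler.random(n=200)                                 # samples in [0, 1)^2
X = qmc.scale(unit, l_bounds=[600, 0.5], u_bounds=[1000, 3.0])
y = np.array([simulate(f, r) for f, r in X])

# Quadratic response surface fitted by least squares
A = np.column_stack([np.ones(len(X)), X, X ** 2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 of response surface: {r2:.3f}")
```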
Results of an integrated structure/control law design sensitivity analysis
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.
1989-01-01
A design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts the change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations, is discussed. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.
Volkan-Salanci, Bilge; Aksoy, Hakan; Kiratli, Pınar Özgen; Tülümen, Erol; Güler, Nilüfer; Öksüzoglu, Berna; Tokgözoğlu, Lale; Erbaş, Belkıs; Alikaşifoğlu, Mehmet
2012-10-01
The aim of this prospective clinical study is to evaluate the relationship between changes in functional cardiac parameters following anthracycline therapy and carbonyl reductase 3 (CBR3 p.V244M) and glutathione S-transferase Pi (GSTP1 p.I105V) polymorphisms. Seventy patients with normal cardiac function and no history of cardiac disease scheduled to undergo anthracycline chemotherapy were included in the study. The patients' cardiac function was evaluated by gated blood pool scintigraphy and echocardiography before and after chemotherapy, as well as 1 year following therapy. Gene polymorphisms were genotyped in 70 patients using TaqMan probes, validated by DNA sequencing. A deteriorating trend was observed in both systolic and diastolic parameters from GG to AA for the CBR3 p.V244M polymorphism. G-allele carriers of the GSTP1 p.I105V polymorphism were common (60%), with significantly decreased peak filling rate (PFR) compared to patients with the AA genotype. Variants of the CBR3 and GSTP1 enzymes may be associated with changes in short-term functional cardiac parameters.
Dynamic imaging model and parameter optimization for a star tracker.
Yan, Jinyun; Jiang, Jie; Zhang, Guangjun
2016-03-21
Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
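To make the smear-and-centroid idea concrete, the sketch below builds a synthetic smeared spot by integrating a Gaussian PSF along a straight motion path (the line-segment spread idea) and then applies a plain intensity-weighted centroid. All numbers (PSF radius, velocity, window threshold) are illustrative; the paper's analytical model and optimal-parameter results are not reproduced here.

```python
import numpy as np

# Synthetic smeared spot: a Gaussian PSF integrated along the motion path,
# approximated by summing shifted Gaussians (line-segment spread).
size, sigma, v, t_exp = 32, 1.2, 4.0, 1.0    # px, px, px/exposure, exposures
yy, xx = np.mgrid[0:size, 0:size]
img = np.zeros((size, size))
for t in np.linspace(0.0, t_exp, 50):
    cx = 12.0 + v * t                        # spot moves along x during exposure
    img += np.exp(-((xx - cx) ** 2 + (yy - 16.0) ** 2) / (2 * sigma ** 2))
img += np.random.default_rng(5).normal(0, 0.01, img.shape)   # sensor noise

# Intensity-weighted centroid over a thresholded window
w = np.clip(img - 0.05 * img.max(), 0, None)
cx_est = (w * xx).sum() / w.sum()
cy_est = (w * yy).sum() / w.sum()
print(f"centroid estimate: ({cx_est:.2f}, {cy_est:.2f}); "
      f"true mid-exposure position: ({12 + v * t_exp / 2:.2f}, 16.00)")
```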
Glassy dynamics in three-dimensional embryonic tissues
Schötz, Eva-Maria; Lanio, Marcos; Talbot, Jared A.; Manning, M. Lisa
2013-01-01
Many biological tissues are viscoelastic, behaving as elastic solids on short timescales and fluids on long timescales. This collective mechanical behaviour enables and helps to guide pattern formation and tissue layering. Here, we investigate the mechanical properties of three-dimensional tissue explants from zebrafish embryos by analysing individual cell tracks and macroscopic mechanical response. We find that the cell dynamics inside the tissue exhibit features of supercooled fluids, including subdiffusive trajectories and signatures of caging behaviour. We develop a minimal, three-parameter mechanical model for these dynamics, which we calibrate using only information about cell tracks. This model generates predictions about the macroscopic bulk response of the tissue (with no fit parameters) that are verified experimentally, providing a strong validation of the model. The best-fit model parameters indicate that although the tissue is fluid-like, it is close to a glass transition, suggesting that small changes to single-cell parameters could generate a significant change in the viscoelastic properties of the tissue. These results provide a robust framework for quantifying and modelling mechanically driven pattern formation in tissues. PMID:24068179
An uncertainty model of acoustic metamaterials with random parameters
NASA Astrophysics Data System (ADS)
He, Z. C.; Hu, J. Y.; Li, Eric
2018-01-01
Acoustic metamaterials (AMs) are man-made composite materials. However, random uncertainties are unavoidable in the application of AMs due to manufacturing and material errors, which lead to variance in the physical responses of AMs. In this paper, an uncertainty model based on the change of variable perturbation stochastic finite element method (CVPS-FEM) is formulated to predict the probability density functions of physical responses of AMs with random parameters. Three types of physical responses, including the band structure, mode shapes and frequency response function of AMs, are studied in the uncertainty model, which is of great interest in the design of AMs. In this computation, the physical responses of stochastic AMs are expressed as linear functions of the pre-defined random parameters by using the first-order Taylor series expansion and perturbation technique. Then, based on the linear function relationships between parameters and responses, the probability density functions of the responses can be calculated by the change-of-variable technique. Three numerical examples are employed to demonstrate the effectiveness of the CVPS-FEM for stochastic AMs, and the results are successfully validated against the Monte Carlo method.
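The core idea — a first-order Taylor expansion of the response in the random parameters, then a change of variables to get the response distribution — can be miniaturized to a scalar example. Below, a stand-in response function is linearized around the nominal parameter and the resulting standard deviation is checked against Monte Carlo; the function and numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

def response(p):
    """Stand-in for a physical response (e.g. a band-edge frequency)."""
    return 2.0 * np.sqrt(p)

p0, sigma_p = 4.0, 0.1   # nominal parameter value and its standard deviation

# First-order (perturbation) propagation: linearize the response around p0.
dfdp = (response(p0 + 1e-6) - response(p0 - 1e-6)) / 2e-6
sigma_f_linear = abs(dfdp) * sigma_p

# Monte Carlo reference
p_samples = rng.normal(p0, sigma_p, 100_000)
sigma_f_mc = response(p_samples).std()

print(f"linearized sigma: {sigma_f_linear:.4f}, Monte Carlo sigma: {sigma_f_mc:.4f}")
```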
A System for Cost and Reimbursement Control in Hospitals
Fetter, Robert B.; Thompson, John D.; Mills, Ronald E.
1976-01-01
This paper approaches the design of a regional or statewide hospital rate-setting system as the underpinning of a larger system which permits a regulatory agency to satisfy the requirements of various public laws now on the books or in process. It aims to generate valid interinstitutional monitoring on the three parameters of cost, utilization, and quality review. Such an approach requires the extension of the usual departmental cost and budgeting system to include consideration of the mix of patients treated and the utilization of various resources, including patient days, in the treatment of these patients. A sampling framework for the application of process-based quality studies and the generation of selected performance measurements is also included. PMID:941461
Mansilha, C; Melo, A; Rebelo, H; Ferreira, I M P L V O; Pinho, O; Domingues, V; Pinho, C; Gameiro, P
2010-10-22
A multi-residue methodology based on solid phase extraction followed by gas chromatography-tandem mass spectrometry was developed for trace analysis of 32 compounds in water matrices, including estrogens and several pesticides from different chemical families, some of them with endocrine disrupting properties. Matrix standard calibration solutions were prepared by adding known amounts of the analytes to a residue-free sample to compensate for the matrix-induced chromatographic response enhancement observed for certain pesticides. Validation was done mainly according to the International Conference on Harmonisation recommendations, as well as some European and American validation guidelines with specifications for pesticide analysis and/or GC-MS methodology. As the assumption of homoscedasticity was not met for the analytical data, a weighted least squares linear regression procedure was applied as a simple and effective way to counteract the greater influence of the greater concentrations on the fitted regression line, improving accuracy at the lower end of the calibration curve. The method was considered validated for 31 compounds after consistent evaluation of the key analytical parameters: specificity, linearity, limit of detection and quantification, range, precision, accuracy, extraction efficiency, stability and robustness. Copyright © 2010 Elsevier B.V. All rights reserved.
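The weighted least squares step reduces to a few lines. The sketch below fits a calibration line with ordinary and with 1/x²-weighted least squares on synthetic heteroscedastic data; the 1/x² weighting is one common choice in chromatographic calibration and is an assumption here, not a detail quoted from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
conc = np.array([0.5, 1, 2, 5, 10, 20, 50, 100.0])          # ng/L
resp = 120.0 * conc + rng.normal(0, 0.05 * 120.0 * conc)    # noise grows with level

def wls_line(x, y, w):
    """Weighted least squares for y = a + b*x with statistical weights w."""
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)        # (intercept, slope)

a_ols, b_ols = wls_line(conc, resp, np.ones_like(conc))
a_wls, b_wls = wls_line(conc, resp, 1.0 / conc ** 2)        # down-weight high levels
print(f"OLS : intercept {a_ols:8.3f}, slope {b_ols:.2f}")
print(f"WLS : intercept {a_wls:8.3f}, slope {b_wls:.2f}")   # true: 0, 120
```

Down-weighting the high concentrations keeps them from dominating the fit, which is exactly the accuracy-at-the-low-end effect the abstract describes.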
Passive and iontophoretic transport through the skin polar pathway.
Li, S K; Peck, K D
2013-01-01
The purpose of the present article is to briefly recount the contributions of Prof. William I. Higuchi to the area of skin transport. These contributions include developing fundamental knowledge of the barrier properties of the stratum corneum, mechanisms of skin transport, concentration gradient across skin in topical drug applications that target the viable epidermal layer, and permeation enhancement by chemical and electrical means. The complex and changeable nature of the skin barrier makes it difficult to assess and characterize the critical parameters that influence skin permeation. The systematic and mechanistic approaches taken by Dr. Higuchi in studying these parameters provided fundamental knowledge in this area and had a measured and lasting influence upon this field of study. This article specifically reviews the validation and characterization of the polar permeation pathway, the mechanistic model of skin transport, the influence of the dermis on the target skin concentration concept, and iontophoretic transport across the polar pathway of skin including the effects of electroosmosis and electropermeabilization. © 2013 S. Karger AG, Basel.
Automated Prescription of Oblique Brain 3D MRSI
Ozhinsky, Eugene; Vigneron, Daniel B.; Chang, Susan M.; Nelson, Sarah J.
2012-01-01
Two major difficulties encountered in implementing Magnetic Resonance Spectroscopic Imaging (MRSI) in a clinical setting are limited coverage and difficulty in prescription. The goal of this project was to completely automate the process of 3D PRESS MRSI prescription, including placement of the selection box, saturation bands and shim volume, while maximizing the coverage of the brain. The automated prescription technique included acquisition of an anatomical MRI image, optimization of the oblique selection box parameters, optimization of the placement of OVS saturation bands, and loading of the calculated parameters into a customized 3D MRSI pulse sequence. To validate the technique and compare its performance with existing protocols, 3D MRSI data were acquired from 6 exams from 3 healthy volunteers. To assess the performance of the automated 3D MRSI prescription for patients with brain tumors, the data were collected from 16 exams from 8 subjects with gliomas. This technique demonstrated robust coverage of the tumor, high consistency of prescription and very good data quality within the T2 lesion. PMID:22692829
Improved silicon nitride for advanced heat engines
NASA Technical Reports Server (NTRS)
Yeh, H. C.; Wimmer, J. M.; Huang, H. H.; Rorabaugh, M. E.; Schienle, J.; Styhr, K. H.
1985-01-01
The AiResearch Casting Company baseline silicon nitride (92 percent GTE SN-502 Si₃N₄ plus 6 percent Y₂O₃ plus 2 percent Al₂O₃) was characterized with methods that included chemical analysis, oxygen content determination, electrophoresis, particle size distribution analysis, surface area determination, and analysis of the degree of agglomeration and maximum particle size of elutriated powder. Test bars were injection molded and processed through sintering at 0.68 MPa (100 psi) of nitrogen. The as-sintered test bars were evaluated by X-ray phase analysis, room and elevated temperature modulus of rupture strength, Weibull modulus, stress rupture, strength after oxidation, fracture origins, microstructure, and density from quantities of samples sufficiently large to generate statistically valid results. A series of small test matrices was conducted to study the effects and interactions of processing parameters, which included raw materials, binder systems, binder removal cycles, injection molding temperatures, particle size distribution, sintering additives, and sintering cycle parameters.
DES Y1 Results: Validating Cosmological Parameter Estimation Using Simulated Dark Energy Surveys
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacCrann, N.; et al.
We use mock galaxy survey simulations designed to resemble the Dark Energy Survey Year 1 (DES Y1) data to validate and inform cosmological parameter estimation. When similar analysis tools are applied to both simulations and real survey data, they provide powerful validation tests of the DES Y1 cosmological analyses presented in companion papers. We use two suites of galaxy simulations produced using different methods, which therefore provide independent tests of our cosmological parameter inference. The cosmological analysis we aim to validate is presented in DES Collaboration et al. (2017) and uses angular two-point correlation functions of galaxy number counts and weak lensing shear, as well as their cross-correlation, in multiple redshift bins. While our constraints depend on the specific set of simulated realisations available, for both suites of simulations we find that the input cosmology is consistent with the combined constraints from multiple simulated DES Y1 realizations in the $$\Omega_m-\sigma_8$$ plane. For one of the suites, we are able to show with high confidence that any biases in the inferred $$S_8=\sigma_8(\Omega_m/0.3)^{0.5}$$ and $$\Omega_m$$ are smaller than the DES Y1 $$1-\sigma$$ uncertainties. For the other suite, for which we have fewer realizations, we are unable to be this conclusive; we infer a roughly 70% probability that systematic biases in the recovered $$\Omega_m$$ and $$S_8$$ are sub-dominant to the DES Y1 uncertainty. As cosmological analyses of this kind become increasingly more precise, validation of parameter inference using survey simulations will be essential to demonstrate robustness.
Rodríguez, Iván; Zambrano, Lysien; Manterola, Carlos
2016-04-01
Physiological parameters used to measure exercise intensity are oxygen uptake and heart rate. However, perceived exertion (PE) is a scale that has also been frequently applied. The objective of this study is to establish the criterion-related validity of PE scales in children during an incremental exercise test. Seven electronic databases were used. Studies aimed at assessing the criterion-related validity of PE scales in healthy children during an incremental exercise test were included. Correlation coefficients were transformed into z-values and assessed in a meta-analysis by means of a fixed effects model if I² was below 50%, or a random effects model if it was above 50%. Twenty-five articles that studied 1418 children (boys: 49.2%) met the inclusion criteria. Children's average age was 10.5 years. Exercise modalities included bike, running and stepping exercises. The weighted correlation coefficient was 0.835 (95% confidence interval: 0.762-0.887) and 0.874 (95% confidence interval: 0.794-0.924) for heart rate and oxygen uptake as reference criteria, respectively. The production paradigm and scales that had not been adapted to children showed the lowest measurement performance (p < 0.05). Measuring PE could be valid in healthy children during an incremental exercise test. Child-specific rating scales showed a better performance than those that had not been adapted to this population. Further studies with better methodological quality should be conducted in order to confirm these results. Sociedad Argentina de Pediatría.
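The Fisher z pooling step described above is short enough to show directly. The sketch below implements the fixed-effect variant (z = arctanh r, inverse-variance weights n − 3, back-transform with tanh); the study-level correlations and sample sizes are made up for illustration and are not the review's data.

```python
import numpy as np

def fixed_effect_pooled_r(r, n):
    """Pool correlation coefficients via the Fisher z-transform (fixed effects)."""
    r, n = np.asarray(r, float), np.asarray(n, float)
    z = np.arctanh(r)                 # Fisher z-transform
    w = n - 3.0                       # inverse-variance weights, var(z) = 1/(n-3)
    z_bar = np.sum(w * z) / np.sum(w)
    se = 1.0 / np.sqrt(np.sum(w))
    ci = np.tanh([z_bar - 1.96 * se, z_bar + 1.96 * se])
    return np.tanh(z_bar), ci

# Hypothetical study-level correlations (PE vs heart rate) and sample sizes
r_pooled, ci = fixed_effect_pooled_r([0.80, 0.85, 0.78, 0.90], [60, 45, 120, 30])
print(f"pooled r = {r_pooled:.3f}, 95% CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```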
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mollerach, R.; Leszczynski, F.; Fink, J.
2006-07-01
In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, which had been progressing slowly during the preceding ten years. Atucha-II is a 745 MWe nuclear station moderated and cooled with heavy water, of German (Siemens) design, located in Argentina. It has a pressure-vessel design with 451 vertical coolant channels, and the fuel assemblies (FA) are clusters of 37 natural UO₂ rods with an active length of 530 cm. For the reactor physics area, a revision and update of the calculation methods and models (cell, supercell and reactor) was recently carried out, covering cell, supercell (control rod) and core calculations. As a validation of the new models, some benchmark comparisons were done against Monte Carlo calculations with MCNP5. This paper presents comparisons of cell and supercell benchmark problems, based on a slightly idealized model of the Atucha-I core, obtained with the WIMS-D5 and DRAGON codes against MCNP5 results. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, and more symmetric than Atucha-II. Cell parameters compared include cell k-infinity, relative power levels of the different rings of fuel rods, and some two-group macroscopic cross sections. Supercell comparisons include supercell k-infinity changes due to the control rods (tubes) of steel and hafnium. (authors)
QSAR models for degradation of organic pollutants in ozonation process under acidic condition.
Zhu, Huicen; Guo, Weimin; Shen, Zhemin; Tang, Qingli; Ji, Wenchao; Jia, Lijuan
2015-01-01
Although some research on the degradation of organic pollutants has been carried out in recent years, reaction rate constants are available only for homologous compounds with similar structures or components. Therefore, it is of great significance to find a universal relationship between reaction rate and certain parameters of diverse organic pollutants. In this study, the removal ratio and kinetics of 33 kinds of organic substances were investigated by the ozonation process, including azo dyes, heterocyclic compounds, ionic compounds and so on. Most quantum chemical parameters were computed using Gaussian 09 at the DFT B3LYP/6-311G level, including μ, q(H+), q(C)min, q(C)max, ELUMO and EHOMO. Other descriptors, bond order (BO) as well as Fukui indices (f(+), f(-) and f(0)), were calculated by Material Studio 6.1 at the Dmol(3)/GGA-BLYP/DNP(3.5) basis for each organic compound. The recommended model for predicting rate constants was ln k' = 1.978 - 95.484 f(0)x - 3.350 q(C)min + 38.221 f(+)x, which had a squared regression coefficient R² = 0.763 and standard deviation SD = 0.716. The results of the t test and the Fisher test suggested that the model exhibited optimum stability. The model was also validated by internal and external validation. The recommended QSAR model showed that the highest f(0) value of the main-chain carbons (f(0)x) is more closely related to ln k' than the other quantum descriptors. Copyright © 2014 Elsevier Ltd. All rights reserved.
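Applying the reported model is a direct substitution of descriptor values into the regression equation. The sketch below encodes the published coefficients; the descriptor values for the example compound are hypothetical, since the paper's per-compound descriptor tables are not reproduced here.

```python
import numpy as np

def predict_ln_k(f0_x, q_c_min, f_plus_x):
    """Reported QSAR model:
    ln k' = 1.978 - 95.484*f(0)x - 3.350*q(C)min + 38.221*f(+)x"""
    return 1.978 - 95.484 * f0_x - 3.350 * q_c_min + 38.221 * f_plus_x

# Hypothetical descriptor values for one pollutant (not from the paper)
ln_k = predict_ln_k(f0_x=0.045, q_c_min=-0.25, f_plus_x=0.060)
print(f"predicted ln k' = {ln_k:.2f}, k' = {np.exp(ln_k):.3f}")
```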
Lankisch, Paul Georg; Weber-Dany, Bettina; Hebel, Kathrin; Maisonneuve, Patrick; Lowenfels, Albert B
2009-06-01
Only severe acute pancreatitis requires treatment according to the principles of intensive care medicine, in an intensive care or intermediate care unit. The aim of the study was to define and evaluate a simple clinical algorithm for rapid initial identification of patients with a first attack of acute pancreatitis who do not require intensive care. This prospective study included 394 patients who were admitted to the Municipal Clinic of Lüneburg, Germany, between 1987 and 2003. From a number of parameters of disease severity on admission, the 3 parameters that showed the strongest prediction of a nonsevere course (no rebound tenderness and/or guarding, normal hematocrit level, and normal serum creatinine level) were combined to form the harmless acute pancreatitis score (HAPS). The score was then validated in a German multicenter study including 452 patients between 2004 and 2006. In both the initial and the validation set, the HAPS correlated with a nonsevere course of the disease (P < .0001). The score correctly identified a harmless course in 200 (98%) of 204 patients. The HAPS enables identification, within approximately 30 minutes after admission, of patients with acute pancreatitis whose disease will run a mild course. The high level of accuracy of this test (98%) will allow physicians to quickly identify patients who do not require intensive care, and potentially those who will not require inpatient treatment at all. Thus, the HAPS may save substantial hospital costs.
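Because the HAPS is a pure conjunction of the three admission findings named above, it can be written as a one-line rule. The sketch below encodes that rule; what counts as a "normal" hematocrit or creatinine is laboratory- and sex-specific, so those judgments are deliberately left to the caller.

```python
def haps_harmless(rebound_or_guarding: bool,
                  hematocrit_normal: bool,
                  creatinine_normal: bool) -> bool:
    """Harmless Acute Pancreatitis Score: all three criteria must be benign."""
    return (not rebound_or_guarding) and hematocrit_normal and creatinine_normal

# A patient with no peritoneal signs and normal labs screens as harmless:
print(haps_harmless(rebound_or_guarding=False,
                    hematocrit_normal=True,
                    creatinine_normal=True))
```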
NASA Astrophysics Data System (ADS)
Erazo, Kalil; Nagarajaiah, Satish
2017-06-01
In this paper an offline approach for output-only Bayesian identification of stochastic nonlinear systems is presented. The approach is based on a re-parameterization of the joint posterior distribution of the parameters that define a postulated state-space stochastic model class. In the re-parameterization the state predictive distribution is included, marginalized, and estimated recursively in a state estimation step using an unscented Kalman filter, bypassing the state augmentation required by existing online methods. In applications, expectations of functions of the parameters are of interest, which requires the evaluation of potentially high-dimensional integrals; Markov chain Monte Carlo is adopted to sample the posterior distribution and estimate the expectations. The proposed approach is suitable for nonlinear systems subjected to non-stationary inputs whose realization is unknown, and that are modeled as stochastic processes. Numerical verification and experimental validation examples illustrate the effectiveness and advantages of the approach, including: (i) increased numerical stability with respect to augmented-state unscented Kalman filtering, avoiding divergence of the estimates when the forcing input is unmeasured; (ii) the ability to handle arbitrary prior and posterior distributions. The experimental validation of the approach is conducted using data from a large-scale structure tested on a shake table. It is shown that the approach is robust to inherent modeling errors in the description of the system and forcing input, providing accurate prediction of the dynamic response when the excitation history is unknown.
NASA Astrophysics Data System (ADS)
Seiffert, Betsy R.; Ducrozet, Guillaume
2018-01-01
We examine the implementation of a wave-breaking mechanism into a nonlinear potential flow solver. The success of the mechanism will be studied by implementing it into the numerical model HOS-NWT, which is a computationally efficient, open source code that solves for the free surface in a numerical wave tank using the high-order spectral (HOS) method. Once the breaking mechanism is validated, it can be implemented into other nonlinear potential flow models. To solve for wave-breaking, first a wave-breaking onset parameter is identified, and then a method for computing wave-breaking associated energy loss is determined. Wave-breaking onset is calculated using a breaking criteria introduced by Barthelemy et al. (J Fluid Mech https://arxiv.org/pdf/1508.06002.pdf, submitted) and validated with the experiments of Saket et al. (J Fluid Mech 811:642-658, 2017). Wave-breaking energy dissipation is calculated by adding a viscous diffusion term computed using an eddy viscosity parameter introduced by Tian et al. (Phys Fluids 20(6): 066,604, 2008, Phys Fluids 24(3), 2012), which is estimated based on the pre-breaking wave geometry. A set of two-dimensional experiments is conducted to validate the implemented wave breaking mechanism at a large scale. Breaking waves are generated by using traditional methods of evolution of focused waves and modulational instability, as well as irregular breaking waves with a range of primary frequencies, providing a wide range of breaking conditions to validate the solver. Furthermore, adjustments are made to the method of application and coefficient of the viscous diffusion term with negligible difference, supporting the robustness of the eddy viscosity parameter. The model is able to accurately predict surface elevation and corresponding frequency/amplitude spectrum, as well as energy dissipation when compared with the experimental measurements. This suggests the model is capable of calculating wave-breaking onset and energy dissipation successfully for a wide range of breaking conditions. The model is also able to successfully calculate the transfer of energy between frequencies due to wave focusing and wave breaking. This study is limited to unidirectional waves but provides a valuable basis for future application of the wave-breaking model to a multidirectional wave field. By including parameters for removing energy due to wave-breaking into a nonlinear potential flow solver, the risk of developing numerical instabilities due to an overturning wave is decreased, thereby increasing the application range of the model, including calculating more extreme sea states. A computationally efficient and accurate model for the generation of a nonlinear random wave field is useful for predicting the dynamic response of offshore vessels and marine renewable energy devices, predicting loads on marine structures, and in the study of open ocean wave generation and propagation in a realistic environment.
Risk of malnutrition (over and under-nutrition): validation of the JaNuS screening tool.
Donini, Lorenzo M; Ricciardi, Laura Maria; Neri, Barbara; Lenzi, Andrea; Marchesini, Giulio
2014-12-01
Malnutrition (over- and under-nutrition) is highly prevalent in patients admitted to hospital, and it is a well-known risk factor for increased morbidity and mortality. Nutritional problems are often misdiagnosed, and especially the coexistence of over- and undernutrition is not usually recognized. We aimed to develop and validate a screening tool for the easy detection and reporting of both undernutrition and overnutrition, specifically identifying the clinical conditions where the two types of malnutrition coexist. The study consisted of three phases: 1) selection of an appropriate study population (estimation sample) and of the hospital admission parameters to identify overnutrition and undernutrition; 2) combination of selected variables to create a screening tool to assess the nutritional risk in case of undernutrition, overnutrition, or the copresence of both conditions, to be used by non-specialist health care professionals; 3) validation of the screening tool in a different patient sample (validation sample). Two groups of variables (12 for undernutrition, 7 for overnutrition) were identified in separate logistic models for their correlation with the outcome variables. Both models showed high efficacy, sensitivity and specificity (overnutrition, 97.7%, 99.6%, 66.6%, respectively; undernutrition, 84.4%, 83.6%, 84.8%). The logistic models were used to construct a two-faced test (named JaNuS - Just A Nutritional Screening) fitting into a two-dimensional Cartesian coordinate system. In the validation sample the JaNuS test confirmed its predictive value. Internal consistency and test-retest analysis provide evidence for the reliability of the test. The study provides a screening tool for the assessment of nutritional risk, based on parameters that are easy to use by health care personnel lacking nutritional competence and characterized by excellent predictive validity. The test might be confidently applied in the clinical setting to determine the importance of malnutrition (including the copresence of over- and undernutrition) as a risk factor for morbidity and mortality. Copyright © 2013 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
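A hedged sketch of the two-faced test geometry: each patient is mapped to a point in a plane whose axes are the undernutrition and overnutrition logistic-model probabilities. The weights below are random placeholders; the published models use 12 and 7 specific admission variables:

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def janus_coordinates(x_under, x_over, w_under, w_over, b_under, b_over):
    """Map a patient's admission variables to two risk probabilities,
    one per logistic model, forming a point in a 2-D screening plane."""
    p_under = logistic(b_under + x_under @ w_under)
    p_over = logistic(b_over + x_over @ w_over)
    return p_under, p_over

# toy usage with made-up weights and data
rng = np.random.default_rng(1)
p_u, p_o = janus_coordinates(rng.normal(size=12), rng.normal(size=7),
                             rng.normal(size=12), rng.normal(size=7), -1.0, -1.0)
if p_u > 0.5 and p_o > 0.5:
    print("flag: coexisting under- and overnutrition risk")
else:
    print(f"undernutrition risk {p_u:.2f}, overnutrition risk {p_o:.2f}")
```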
Collocation mismatch uncertainties in satellite aerosol retrieval validation
NASA Astrophysics Data System (ADS)
Virtanen, Timo H.; Kolmonen, Pekka; Sogacheva, Larisa; Rodríguez, Edith; Saponaro, Giulia; de Leeuw, Gerrit
2018-02-01
Satellite-based aerosol products are routinely validated against ground-based reference data, usually obtained from sun photometer networks such as AERONET (AEROsol RObotic NETwork). In a typical validation exercise a spatial sample of the instantaneous satellite data is compared against a temporal sample of the point-like ground-based data. The observations do not correspond to exactly the same column of the atmosphere at the same time, and the representativeness of the reference data depends on the spatiotemporal variability of the aerosol properties in the samples. The associated uncertainty is known as the collocation mismatch uncertainty (CMU). The validation results depend on the sampling parameters. While small samples involve less variability, they are more sensitive to the inevitable noise in the measurement data. In this paper we study systematically the effect of the sampling parameters in the validation of the AATSR (Advanced Along-Track Scanning Radiometer) aerosol optical depth (AOD) product against AERONET data and the associated collocation mismatch uncertainty. To this end, we study the spatial AOD variability in the satellite data, compare it against the corresponding values obtained from densely located AERONET sites, and assess the possible reasons for observed differences. We find that the spatial AOD variability in the satellite data is approximately 2 times larger than in the ground-based data, and the spatial variability correlates only weakly with that of AERONET for short distances. We infer that only half of the variability in the satellite data is due to the natural variability in the AOD, and the rest is noise due to retrieval errors. However, for larger distances (~0.5°) the correlation is improved as the noise is averaged out, and the day-to-day changes in regional AOD variability are well captured. Furthermore, we assess the usefulness of the spatial variability of the satellite AOD data as an estimate of CMU by comparing the retrieval errors to the total uncertainty estimates including the CMU in the validation. We find that accounting for CMU increases the fraction of consistent observations.
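A minimal sketch of the sampling comparison described above: the spatial spread of satellite pixels around a site against the temporal spread of sun-photometer points around the overpass. All numbers are illustrative:

```python
import numpy as np

def collocation_samples(sat_aod, aeronet_aod):
    """Compare the spatial variability of a satellite AOD sample around a site
    with the temporal variability of the AERONET sample around the overpass;
    their disagreement is one simple proxy for collocation mismatch."""
    spatial_std = np.std(sat_aod, ddof=1)       # pixels within the collocation radius
    temporal_std = np.std(aeronet_aod, ddof=1)  # sun-photometer points in the time window
    return spatial_std, temporal_std

sat = np.array([0.21, 0.25, 0.19, 0.30, 0.27])  # AOD of nearby satellite pixels
ground = np.array([0.23, 0.24, 0.22])           # AERONET AOD within +/- 30 min
s, t = collocation_samples(sat, ground)
print(f"spatial std {s:.3f} vs temporal std {t:.3f}")
```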
Validation of buoyancy driven spectral tensor model using HATS data
NASA Astrophysics Data System (ADS)
Chougule, A.; Mann, J.; Kelly, M.; Larsen, G. C.
2016-09-01
We present a homogeneous spectral tensor model for wind velocity and temperature fluctuations, driven by mean vertical shear and a mean temperature gradient. Results from the model, including one-dimensional velocity and temperature spectra and the associated co-spectra, are shown in this paper. The model also reproduces two-point statistics, such as coherence and phases, via cross-spectra between two points separated in space. Model results are compared with observations from the Horizontal Array Turbulence Study (HATS) field program (Horst et al. 2004). The spectral velocity tensor in the model is described via five parameters: the dissipation rate (ε), the length scale of energy-containing eddies (L), a turbulence anisotropy parameter (Γ), the gradient Richardson number (Ri) representing the atmospheric stability, and the rate of destruction of temperature variance (η_θ).
Mooring line damping estimation for a floating wind turbine.
Qiao, Dongsheng; Ou, Jinping
2014-01-01
The dynamic responses of a mooring line serve important functions in the station keeping of a floating wind turbine (FWT). Mooring line damping significantly influences the global motions of an FWT. This study investigates the estimation of mooring line damping on the basis of the National Renewable Energy Laboratory 5 MW offshore wind turbine model that is mounted on the ITI Energy barge. A numerical estimation method is derived from the energy absorption of a mooring line resulting from FWT motion. The method is validated by performing a 1/80 scale model test. Different parameter changes are analyzed for mooring line damping induced by horizontal and vertical motions. These parameters include excitation amplitude, excitation period, and drag coefficient. Results suggest that mooring line damping must be carefully considered in FWT design.
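The energy-absorption idea can be sketched compactly: integrate force over displacement around one closed motion cycle and convert the loop area to an equivalent linear damping coefficient. The conversion E = πBωa² is the standard equivalent-viscous-damping relation, assumed here for illustration; the paper's numerical method is more detailed:

```python
import numpy as np

def equivalent_damping(force, displacement, period):
    """Estimate damping from the energy absorbed over one motion cycle:
    E = loop integral of F dx (hysteresis loop area), converted to an
    equivalent linear damping coefficient B via E = pi * B * omega * a**2."""
    E = float(np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(displacement)))
    a = 0.5 * (displacement.max() - displacement.min())
    omega = 2.0 * np.pi / period
    return E / (np.pi * omega * a ** 2)

# toy hysteresis loop: sinusoidal motion, force lagging displacement
t = np.linspace(0.0, 10.0, 1000)                  # one 10 s excitation period
x = 2.0 * np.sin(2.0 * np.pi * t / 10.0)          # amplitude 2 m
F = 5.0e4 * np.sin(2.0 * np.pi * t / 10.0 + 0.3)  # phase lead -> net energy absorbed
print(f"equivalent damping ~ {equivalent_damping(F, x, 10.0):.0f} N s/m")
```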
Assessment of Reinforced Concrete Surface Breaking Crack Using Rayleigh Wave Measurement.
Lee, Foo Wei; Chai, Hwa Kian; Lim, Kok Sing
2016-03-05
An improved single-sided Rayleigh wave (R-wave) measurement was proposed to characterize surface breaking cracks in steel reinforced concrete structures. Numerical simulations were performed to clarify the behavior of R-waves interacting with surface breaking cracks of different depths and degrees of inclination. Through analysis of the simulation results, correlations between the R-wave parameters of interest and crack characteristics (depth and degree of inclination) were obtained, which were then validated by experimental measurement of concrete specimens fabricated with vertical and inclined artificial cracks of different depths. Wave parameters including velocity and amplitude attenuation for each case were studied. The correlations allowed us to estimate the depth and inclination of cracks measured experimentally with acceptable discrepancies, particularly for cracks which are relatively shallow and whose depth is smaller than the wavelength.
NASA Technical Reports Server (NTRS)
Reed, D. L.; Wallace, R. G.
1981-01-01
The results of system analyses and implementation studies of an advanced location and data collection system (ALDCS), proposed for inclusion on the National Oceanic Satellite System (NOSS) spacecraft, are reported. The system applies Doppler processing and radio-frequency interferometer position location techniques both alone and in combination. Aspects analyzed include: the constraints imposed by random access to the system by platforms, the RF link parameters, geometric concepts of position and velocity estimation by the two techniques considered, and the effects of electrical measurement errors, spacecraft attitude errors, and geometric parameters on estimation accuracy. Hardware techniques and trade-offs for interferometric phase measurement, ambiguity resolution and calibration are considered. A combined Doppler-interferometer ALDCS intended to fulfill the NOSS data validation and oceanic research support mission is also described.
NASA Technical Reports Server (NTRS)
Kankam, M. David; Rauch, Jeffrey S.; Santiago, Walter
1992-01-01
This paper discusses the effects of variations in system parameters on the dynamic behavior of the Free-Piston Stirling Engine/Linear Alternator (FPSE/LA)-load system. The mathematical formulations incorporate both the mechanical and thermodynamic properties of the FPSE, as well as the electrical equations of the connected load. A state-space technique in the frequency domain is applied to the resulting system of equations to facilitate the evaluation of parametric impacts on the system dynamic stability. Also included is a discussion on the system transient stability as affected by sudden changes in some key operating conditions. Some representative results are correlated with experimental data to verify the model and analytic formulation accuracies. Guidelines are given for ranges of the system parameters which will ensure an overall stable operation.
NASA Astrophysics Data System (ADS)
Gârlea, Ioana C.; Mulder, Bela M.
2017-12-01
We design a novel microscopic mean-field theory of inhomogeneous nematic liquid crystals formulated entirely in terms of the tensor order parameter field. It combines the virtues of the Landau-de Gennes approach in allowing both the direction and magnitude of the local order to vary, with a self-consistent treatment of the local free-energy valid beyond the small order parameter limit. As a proof of principle, we apply this theory to the well-studied problem of a colloid dispersed in a nematic liquid crystal by including a tunable wall coupling term. For the two-dimensional case, we investigate the organization of the liquid crystal and the position of the point defects as a function of the strength of the coupling constant.
Laboratory analysis of techniques for remote sensing of estuarine parameters using laser excitation
NASA Technical Reports Server (NTRS)
Exton, R. J.; Houghton, W. M.; Esaias, W.; Harriss, R. C.; Farmer, F. H.; White, H. H.
1983-01-01
The theoretical concepts underlying remote sensing of estuarine parameters using laser excitation are examined. The concepts are extended to include Mie scattering as a measure of the total suspended solids and to develop the water Raman signal as an internal standard. Experimental validation of the theory was performed using backscattered laser light from a laboratory tank to simulate a remote-sensing geometry. Artificially prepared sediments and biological cultures were employed to check specific aspects of the theory under controlled conditions. Natural samples gathered from a variety of water types were also analyzed in the tank to further enhance the simulation. The results indicate that it should be possible to remotely quantify total suspended solids, dissolved organics, attenuation coefficient, chlorophyll a, and phycoerythrin in estuarine water using laser excitation.
Empirical flow parameters - a tool for hydraulic model validity assessment: [summary].
DOT National Transportation Integrated Search
2013-10-01
Hydraulic modeling assembles models based on generalizations of parameter values from textbooks, professional literature, computer program documentation, and engineering experience. Actual measurements adjacent to the model location are seldom available...
Chandrasekaran, Sivapragasam; Sankararajan, Vanitha; Neelakandhan, Nampoothiri; Ram Kumar, Mahalakshmi
2017-11-04
This study, through extensive experiments and mathematical modeling, reveals that, other than retention time and wastewater temperature (T_w), atmospheric parameters also play an important role in the effective functioning of an aquatic macrophyte-based treatment system. The duckweed species Lemna minor is considered in this study. It is observed that the combined effect of atmospheric temperature (T_atm), wind speed (U_w), and relative humidity (RH) can be reflected through one parameter, namely the "apparent temperature" (T_a). A total of eight different models are considered based on combinations of input parameters, and the best mathematical model is arrived at, which is validated through a new experimental set-up outside the modeling period. The validation results are highly encouraging. Genetic programming (GP)-based models are found to reveal deeper understandings of the wetland process.
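For concreteness, a hedged sketch of one common "apparent temperature" formulation (a Steadman-style non-radiative form combining T_atm, RH, and U_w); the paper's exact definition of T_a may differ:

```python
import math

def apparent_temperature(t_atm_c, rh_pct, wind_ms):
    """Steadman-style non-radiative apparent temperature (deg C): combines air
    temperature, humidity (via water vapor pressure e, hPa) and wind speed."""
    e = (rh_pct / 100.0) * 6.105 * math.exp(17.27 * t_atm_c / (237.7 + t_atm_c))
    return t_atm_c + 0.33 * e - 0.70 * wind_ms - 4.00

print(f"{apparent_temperature(30.0, 70.0, 2.0):.1f} deg C")  # warm-humid, light wind
```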
Singh, Aparna; Singh, Girish; Patwardhan, Kishor; Gehlot, Sangeeta
2017-01-01
According to Ayurveda, the traditional system of healthcare of Indian origin, Agni is the factor responsible for digestion and metabolism. Four functional states (Agnibala) of Agni have been recognized: regular, irregular, intense, and weak. The objective of the present study was to develop and validate a self-assessment tool to estimate Agnibala. The developed tool was evaluated for its reliability and validity by administering it to 300 healthy volunteers of either gender belonging to the 18- to 40-year age group. Besides confirming the statistical validity and reliability, the practical utility of the newly developed tool was also evaluated by recording the serum lipid parameters of all the volunteers. The results show that the lipid parameters vary significantly according to the status of Agni. The tool, therefore, may be used to screen the normal population to look for possible susceptibility to certain health conditions. © The Author(s) 2016.
Wearable vital parameters monitoring system
NASA Astrophysics Data System (ADS)
Caramaliu, Radu Vadim; Vasile, Alexandru; Bacis, Irina
2015-02-01
The system we propose monitors body temperature and heart rate and, besides this, tracks whether the person wearing it suffers a faint. It uses a digital temperature sensor, a pulse sensor and a gravitational acceleration sensor to detect an eventual faint or free falls from small heights. The system continuously tracks the GPS position when available and stores the last valid data. So, when measuring abnormal vital parameters, the module will send an SMS, using the GSM cellular network, with the person's social security number, the last valid GPS position for that person, the heart rate, the body temperature and, where applicable, a valid or non-valid fall alert. Even though such systems exist, they contain only faint detection or heart rate detection. Usually there is a strong correlation between a low/high heart rate and an eventual faint. Combining both features into one system results in a more reliable detection device.
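A hedged sketch of the alert logic described above; the thresholds, message format, and the send_sms callable are illustrative placeholders, not the authors' firmware:

```python
def check_and_alert(temp_c, heart_rate_bpm, fall_detected, last_gps_fix, ssn,
                    send_sms):
    """Send an SMS when any vital parameter leaves its (illustrative) window."""
    abnormal = (
        temp_c < 35.0 or temp_c > 38.5 or            # body temperature window
        heart_rate_bpm < 40 or heart_rate_bpm > 140 or
        fall_detected                                 # from the accelerometer
    )
    if abnormal:
        send_sms(
            f"ALERT ssn={ssn} gps={last_gps_fix} hr={heart_rate_bpm} "
            f"temp={temp_c} fall={'valid' if fall_detected else 'non-valid'}"
        )
    return abnormal

# usage with a stub transport (print stands in for the GSM modem)
check_and_alert(39.1, 120, False, "45.6579,25.6012", "XXXX", print)
```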
Electrostatics of cysteine residues in proteins: Parameterization and validation of a simple model
Salsbury, Freddie R.; Poole, Leslie B.; Fetrow, Jacquelyn S.
2013-01-01
One of the most popular and simple models for the calculation of pKas from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKas. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKas; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKas. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Chao Yang; Luo, Gang; Jiang, Fangming
2010-05-01
Current computational models for proton exchange membrane fuel cells (PEMFCs) include a large number of parameters such as boundary conditions, material properties, and numerous parameters used in sub-models for membrane transport, two-phase flow and electrochemistry. In order to successfully use a computational PEMFC model in design and optimization, it is important to identify critical parameters under a wide variety of operating conditions, such as relative humidity, current load, temperature, etc. Moreover, when experimental data are available in the form of polarization curves or local distributions of current and reactant/product species (e.g., O2, H2O concentrations), critical parameters can be estimated in order to enable the model to better fit the data. Sensitivity analysis and parameter estimation are typically performed using manual adjustment of parameters, which is also common in parameter studies. We present work to demonstrate a systematic approach based on using a widely available toolkit developed at Sandia called DAKOTA that supports many kinds of design studies, such as sensitivity analysis as well as optimization and uncertainty quantification. In the present work, we couple a multidimensional PEMFC model (which is being developed, tested and later validated in a joint effort by a team from Penn State Univ. and Sandia National Laboratories) with DAKOTA through the mapping of model parameters to system responses. Using this interface, we demonstrate the efficiency of performing simple parameter studies as well as identifying critical parameters using sensitivity analysis. Finally, we show examples of optimization and parameter estimation using the automated capability in DAKOTA.
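The parameter-study workflow can be illustrated without DAKOTA itself: the sketch below wires a toy polarization-curve response to a one-at-a-time normalized-sensitivity loop. The response function and parameter names are invented stand-ins for the real PEMFC sub-models:

```python
import numpy as np

def cell_voltage(params, current_density=1.0):
    """Toy stand-in for a PEMFC model response (one point on a polarization curve)."""
    e0, r_ohm, alpha = params["e0"], params["r_ohm"], params["alpha"]
    return e0 - r_ohm * current_density - alpha * np.log1p(current_density)

def oat_sensitivity(model, nominal, rel_step=0.05):
    """One-at-a-time normalized sensitivities: d(response)/d(param) * param/response."""
    base = model(nominal)
    sens = {}
    for name, value in nominal.items():
        bumped = dict(nominal, **{name: value * (1 + rel_step)})
        sens[name] = (model(bumped) - base) / (value * rel_step) * value / base
    return sens

nominal = {"e0": 1.0, "r_ohm": 0.15, "alpha": 0.06}
for k, v in oat_sensitivity(cell_voltage, nominal).items():
    print(f"{k}: {v:+.3f}")
```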
The validation of a generalized Hooke's law for coronary arteries.
Wang, Chong; Zhang, Wei; Kassab, Ghassan S
2008-01-01
The exponential form of constitutive model is widely used in biomechanical studies of blood vessels. There are two main issues, however, with this model: 1) the curve fits of experimental data are not always satisfactory, and 2) the material parameters may be oversensitive. A new type of strain measure in a generalized Hooke's law for blood vessels was recently proposed by our group to address these issues. The new model has one nonlinear parameter and six linear parameters. In this study, the stress-strain equation is validated by fitting the model to experimental data of porcine coronary arteries. Material constants of left anterior descending artery and right coronary artery for the Hooke's law were computed with a separable nonlinear least-squares method with an excellent goodness of fit. A parameter sensitivity analysis shows that the stability of material constants is improved compared with the exponential model and a biphasic model. A boundary value problem was solved to demonstrate that the model prediction can match the measured arterial deformation under experimental loading conditions. The validated constitutive relation will serve as a basis for the solution of various boundary value problems of cardiovascular biomechanics.
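The one-nonlinear-plus-six-linear-parameter structure invites separable nonlinear least squares, sketched below with invented basis functions (not the arterial strain measures of the paper): for each trial value of the nonlinear parameter the linear coefficients are solved in closed form, leaving a one-dimensional search:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def residual_ss(k, strains, stresses):
    """Sum of squared residuals after the linear coefficients are solved exactly
    for a fixed nonlinear parameter k (illustrative two-term basis)."""
    A = np.column_stack([strains, np.expm1(k * strains) / k])
    c, *_ = np.linalg.lstsq(A, stresses, rcond=None)
    return np.sum((A @ c - stresses) ** 2)

strains = np.linspace(0.0, 0.3, 30)
stresses = 2.0 * strains + np.expm1(5.0 * strains) / 5.0   # synthetic "data", k = 5
best = minimize_scalar(lambda k: residual_ss(k, strains, stresses),
                       bounds=(0.1, 20.0), method="bounded")
print(f"recovered nonlinear parameter k ~ {best.x:.2f}")
```

The design choice mirrors the sensitivity point in the abstract: reducing the search to one dimension makes the remaining parameters conditionally well-posed, which is one way such models gain stability over fully nonlinear fits.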
A Model-based Approach to Scaling GPP and NPP in Support of MODIS Land Product Validation
NASA Astrophysics Data System (ADS)
Turner, D. P.; Cohen, W. B.; Gower, S. T.; Ritts, W. D.
2003-12-01
Global products from the Earth-orbiting MODIS sensor include land cover, leaf area index (LAI), FPAR, 8-day gross primary production (GPP), and annual net primary production (NPP) at the 1 km spatial resolution. The BigFoot Project was designed specifically to validate MODIS land products, and has initiated ground measurements at 9 sites representing a wide array of vegetation types. An ecosystem process model (Biome-BGC) is used to generate estimates of GPP and NPP for each 5 km × 5 km BigFoot site. Model inputs include land cover and LAI (from Landsat ETM+), daily meteorological data (from a centrally located eddy covariance flux tower), and soil characteristics. Model-derived outputs are validated against field-measured NPP and flux tower-derived GPP. The resulting GPP and NPP estimates are then aggregated to the 1 km resolution for direct spatial comparison with corresponding MODIS products. At the high latitude sites (tundra and boreal forest), the MODIS GPP phenology closely tracks the BigFoot GPP, but there is a high bias in the MODIS GPP. In the temperate zone sites, problems with the timing and magnitude of the MODIS FPAR introduce differences in MODIS GPP compared to the validation data at some sites. However, the MODIS LAI/FPAR data are currently being reprocessed (Collection 4) and new comparisons will be made for 2002. The BigFoot scaling approach permits precise overlap in spatial and temporal resolution between the MODIS products and BigFoot products, and thus permits the evaluation of specific components of the MODIS NPP algorithm. These components include meteorological inputs from the NASA Data Assimilation Office, LAI and FPAR from other MODIS algorithms, and biome-specific parameters for base respiration rate and light use efficiency.
Screen for intracranial dural arteriovenous fistulae with carotid duplex sonography.
Tsai, L-K; Yeh, S-J; Chen, Y-C; Liu, H-M; Jeng, J-S
2009-11-01
Early diagnosis and management of intracranial dural arteriovenous fistulae (DAVF) may prevent the occurrence of stroke. This study aimed to identify the best carotid duplex sonography (CDS) parameters for screening DAVF. 63 DAVF patients and 170 non-DAVF patients received both CDS and conventional angiography. The use of seven CDS haemodynamic parameter sets related to the resistance index (RI) of the external carotid artery (ECA) for the diagnosis of DAVF was validated and the applicability of the best CDS parameter set in 20 400 patients was tested. The CDS parameter set (ECA RI (cut-off point = 0.7) and internal carotid artery (ICA) to ECA RI ratio (cut-off point = 0.9)) had the highest specificity (99%) for diagnosis of DAVF with moderate sensitivity (51%). Location of the DAVF was a significant determinant of sensitivity of detection, which was 70% for non-cavernous DAVF and 0% for cavernous sinus DAVF (p<0.001). The above parameter set detected abnormality in 92 of 20 400 patients. These abnormalities included DAVF (n = 25), carotid stenosis (n = 32), vertebral artery stenosis (n = 7), intracranial arterial stenosis (n = 6), head and neck tumour (n = 3) and unknown aetiology (n = 19). Combined CDS parameters of ECA RI and ICA to ECA RI ratio can be used as a screening tool for the diagnosis of DAVF.
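The reported parameter set reduces to a two-condition rule. The sketch below encodes it with the cutoff orientation assumed from the physiology of a low-resistance fistular ECA (lower ECA RI, higher ICA/ECA RI ratio); confirm the directions against the original paper before any use:

```python
def davf_screen_positive(eca_ri, ica_ri, eca_cut=0.7, ratio_cut=0.9):
    """Screening rule from the abstract: flag possible DAVF when the ECA
    resistance index is low (< cutoff) and the ICA/ECA RI ratio is high
    (> cutoff). Cutoff orientation is an assumption, not from the paper."""
    return eca_ri < eca_cut and (ica_ri / eca_ri) > ratio_cut

print(davf_screen_positive(0.55, 0.60))  # -> True  (low ECA RI, ratio ~1.09)
print(davf_screen_positive(0.80, 0.60))  # -> False (normal high-resistance ECA)
```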
NASA Astrophysics Data System (ADS)
Amanulla, C. H.; Nagendra, N.; Suryanarayana Reddy, M.
2018-03-01
In this paper, the two-dimensional, laminar natural-convection flow of a nanofluid with heat and mass transfer past a semi-infinite vertical plate, with velocity and thermal slip effects, is studied theoretically. The coupled governing partial differential equations are transformed to ordinary differential equations by using non-similarity transformations. The resulting ordinary differential equations are solved numerically by the well-known Keller Box Method (KBM). The influences of the emerging parameters, i.e. the Casson fluid parameter (β), Brownian motion parameter (Nb), thermophoresis parameter (Nt), buoyancy ratio parameter (N), Lewis number (Le), Prandtl number (Pr), velocity slip factor (S_f) and thermal slip factor (S_T), on the velocity, temperature and nanoparticle concentration distributions are illustrated graphically and interpreted at length. The major sources of nanoparticle migration in nanofluids are thermophoresis and Brownian motion. A comparison with the existing published literature shows excellent agreement in the limiting case, and a validation of the solutions against a Nakamura tridiagonal method is also included. It is observed that the nanoparticle concentration at the surface decreases with an increase in the slip parameter. The study is relevant to enrobing processes for electrically conductive nano-materials, of potential use in aerospace and other industries.
Benoussaad, Mourad; Poignet, Philippe; Hayashibe, Mitsuhiro; Azevedo-Coste, Christine; Fattal, Charles; Guiraud, David
2013-06-01
We investigated the parameter identification of a multi-scale physiological model of skeletal muscle, based on Huxley's formulation. We focused particularly on the knee joint controlled by quadriceps muscles under electrical stimulation (ES) in subjects with a complete spinal cord injury. A noninvasive and in vivo identification protocol was thus applied through surface stimulation in nine subjects and through neural stimulation in one ES-implanted subject. The identification protocol included initial identification steps, which are adaptations of existing identification techniques to estimate most of the parameters of our model. Then we applied an original and safer identification protocol in dynamic conditions, which required resolution of a nonlinear programming (NLP) problem to identify the serial element stiffness of quadriceps. Each identification step and cross validation of the estimated model in dynamic condition were evaluated through a quadratic error criterion. The results highlighted good accuracy, the efficiency of the identification protocol and the ability of the estimated model to predict the subject-specific behavior of the musculoskeletal system. From the comparison of parameter values between subjects, we discussed and explored the inter-subject variability of parameters in order to select parameters that have to be identified in each patient.
Generalized Grueneisen tensor from solid nonlinearity parameters
NASA Technical Reports Server (NTRS)
Cantrell, J. H., Jr.
1980-01-01
Anharmonic effects in solids are often described in terms of generalized Grueneisen parameters which measure the strain dependence of the lattice vibrational frequencies. The relationship between these parameters and the solid nonlinearity parameters measured directly in ultrasonic harmonic generation experiments is derived using an approach valid for normal-mode elastic wave propagation in any crystalline direction. The resulting generalized Grueneisen parameters are purely isentropic in contrast to the Brugger-Grueneisen parameters which are of a mixed thermodynamic state. Experimental data comparing the isentropic generalized Grueneisen parameters and the Brugger-Grueneisen parameters are presented.
Automated extraction and validation of children's gait parameters with the Kinect.
Motiian, Saeid; Pergami, Paola; Guffey, Keegan; Mancinelli, Corrie A; Doretto, Gianfranco
2015-12-02
Gait analysis for therapy regimen prescription and monitoring requires patients to physically access clinics with specialized equipment. The timely availability of such infrastructure at the right frequency is especially important for small children. Besides being very costly, this is a challenge for many children living in rural areas. This is why this work develops a low-cost, portable, and automated approach for in-home gait analysis, based on the Microsoft Kinect. A robust and efficient method for extracting gait parameters is introduced, which copes with the high variability of noisy Kinect skeleton tracking data experienced across the population of young children. This is achieved by temporally segmenting the data with an approach based on coupling a probabilistic matching of stride template models, learned offline, with the estimation of their global and local temporal scaling. A preliminary study on healthy children between 2 and 4 years of age is conducted to analyze the accuracy, precision, repeatability, and concurrent validity of the proposed method against the GAITRite when measuring several spatial and temporal children's gait parameters. The method has excellent accuracy and good precision in segmenting temporal sequences of body joint locations into stride and step cycles. Also, the spatial and temporal gait parameters, estimated automatically, exhibit good concurrent validity with those provided by the GAITRite, as well as very good repeatability. In particular, on a range of nine gait parameters, the relative and absolute agreements were found to be good and excellent, and the overall agreements were found to be good and moderate. This work enables and validates the automated use of the Kinect for children's gait analysis in healthy subjects. In particular, the approach makes a step forward towards developing a low-cost, portable, parent-operated in-home tool for clinicians assisting young children.
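The segmentation step can be sketched as template matching under a global time scaling: slide a stride template over a joint-trajectory signal at several scalings and keep the best normalized cross-correlation. This omits the probabilistic matching and local scaling of the actual method; all names and signals are illustrative:

```python
import numpy as np

def best_match(signal, template, scales=(0.8, 0.9, 1.0, 1.1, 1.2)):
    """Return (score, scale, offset) of the best normalized cross-correlation
    between a time-scaled stride template and a joint-trajectory signal."""
    best = (-np.inf, None, None)
    for s in scales:
        n = max(2, int(round(len(template) * s)))
        warped = np.interp(np.linspace(0, len(template) - 1, n),
                           np.arange(len(template)), template)
        w = (warped - warped.mean()) / warped.std()
        for i in range(len(signal) - n + 1):
            seg = signal[i:i + n]
            z = (seg - seg.mean()) / (seg.std() + 1e-12)   # guard flat segments
            score = float(np.dot(w, z)) / n
            if score > best[0]:
                best = (score, s, i)
    return best

t = np.linspace(0, 2 * np.pi, 60)
template = np.sin(t)    # idealized one-stride pattern
signal = np.concatenate([np.zeros(20), np.sin(np.linspace(0, 2 * np.pi, 66)), np.zeros(20)])
print(best_match(signal, template))   # expect scale ~1.1, offset ~20
```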
NASA Astrophysics Data System (ADS)
Liu, Q.; Li, J.; Du, Y.; Wen, J.; Zhong, B.; Wang, K.
2011-12-01
As remote sensing data accumulate, generating highly accurate and consistent land surface parameter products from multi-source remote observations is a significant challenge, and radiative transfer modeling and inversion methodology are its theoretical bases. In this paper, recent research advances and unresolved issues are presented. First, after a general overview, recent research advances in multi-scale remote sensing radiative transfer modeling are presented, including leaf spectrum models, vegetation canopy BRDF models, directional thermal infrared emission models, radiation models for rugged mountain areas, and kernel-driven models. Then, new methodologies for land surface parameter inversion based on multi-source remote sensing data are proposed, taking land surface albedo, leaf area index, temperature/emissivity, and surface net radiation as examples. A new synthetic land surface parameter quantitative remote sensing product generation system is suggested, and the software system prototype is demonstrated. Finally, multi-scale field experiment campaigns, such as the field campaigns in Gansu and Beijing, China, are briefly introduced. Ground-based, tower-based, and airborne multi-angular measurement systems have been built to measure the directional reflectance, emission and scattering characteristics in the visible, near-infrared, thermal infrared and microwave bands for model validation and calibration. A remote sensing pixel-scale "true value" measurement strategy has been designed to obtain the ground "true value" of LST, albedo, LAI, soil moisture and ET, etc., at 1 km² for remote sensing product validation.
Perception of Sexual Orientation from Facial Structure: A Study with Artificial Face Models.
González-Álvarez, Julio
2017-07-01
Research has shown that lay people can perceive sexual orientation better than chance from face stimuli. However, the relation between facial structure and sexual orientation has been scarcely examined. Recently, an extensive morphometric study on a large sample of Canadian people (Skorska, Geniole, Vrysen, McCormick, & Bogaert, 2015) identified three (in men) and four (in women) facial features as unique multivariate predictors of sexual orientation in each sex group. The present study tested the perceptual validity of these facial traits with two experiments based on realistic artificial 3D face models created by manipulating the key parameters and presented to Spanish participants. Experiment 1 included 200 White and Black face models of both sexes. The results showed an overall accuracy (0.74) clearly above chance in a binary hetero/homosexual judgment task and significant differences depending on the race and sex of the face models. Experiment 2 produced five versions of 24 artificial faces of both sexes varying the key parameters in equal steps, and participants had to rate on a 1-7 scale how likely they thought that the depicted person had a homosexual sexual orientation. Rating scores displayed an almost perfect linear regression as a function of the parameter steps. In summary, both experiments demonstrated the perceptual validity of the seven multivariate predictors identified by Skorska et al. and open up new avenues for further research on this issue with artificial face models.
NASA Astrophysics Data System (ADS)
Takeda, M.; Nakajima, H.; Zhang, M.; Hiratsuka, T.
2008-04-01
To obtain reliable diffusion parameters for diffusion testing, multiple experiments should not only be cross-checked but the internal consistency of each experiment should also be verified. In the through- and in-diffusion tests with solution reservoirs, test interpretation of different phases often makes use of simplified analytical solutions. This study explores the feasibility of steady, quasi-steady, equilibrium and transient-state analyses using simplified analytical solutions with respect to (i) valid conditions for each analytical solution, (ii) potential error, and (iii) experimental time. For increased generality, a series of numerical analyses are performed using unified dimensionless parameters and the results are all related to dimensionless reservoir volume (DRV) which includes only the sorptive parameter as an unknown. This means the above factors can be investigated on the basis of the sorption properties of the testing material and/or tracer. The main findings are that steady, quasi-steady and equilibrium-state analyses are applicable when the tracer is not highly sorptive. However, quasi-steady and equilibrium-state analyses become inefficient or impractical compared to steady state analysis when the tracer is non-sorbing and material porosity is significantly low. Systematic and comprehensive reformulation of analytical models enables the comparison of experimental times between different test methods. The applicability and potential error of each test interpretation can also be studied. These can be applied in designing, performing, and interpreting diffusion experiments by deducing DRV from the available information for the target material and tracer, combined with the results of this study.
A back-fitting algorithm to improve real-time flood forecasting
NASA Astrophysics Data System (ADS)
Zhang, Xiaojing; Liu, Pan; Cheng, Lei; Liu, Zhangjun; Zhao, Yan
2018-07-01
Real-time flood forecasting is important for decision-making with regards to flood control and disaster reduction. The conventional approach involves a postprocessor calibration strategy that first calibrates the hydrological model and then estimates errors. This procedure can simulate streamflow consistent with observations, but obtained parameters are not optimal. Joint calibration strategies address this issue by refining hydrological model parameters jointly with the autoregressive (AR) model. In this study, five alternative schemes are used to forecast floods. Scheme I uses only the hydrological model, while scheme II includes an AR model for error correction. In scheme III, differencing is used to remove non-stationarity in the error series. A joint inference strategy employed in scheme IV calibrates the hydrological and AR models simultaneously. The back-fitting algorithm, a basic approach for training an additive model, is adopted in scheme V to alternately recalibrate hydrological and AR model parameters. The performance of the five schemes is compared with a case study of 15 recorded flood events from China's Baiyunshan reservoir basin. Our results show that (1) schemes IV and V outperform scheme III during the calibration and validation periods and (2) scheme V is inferior to scheme IV in the calibration period, but provides better results in the validation period. Joint calibration strategies can therefore improve the accuracy of flood forecasting. Additionally, the back-fitting recalibration strategy produces weaker overcorrection and a more robust performance compared with the joint inference strategy.
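Scheme V's back-fitting loop is easy to sketch with toy stand-ins: a one-parameter linear-reservoir "hydrological model" and an AR(1) error model, recalibrated alternately. The models, data, and settings are illustrative only, not the study's configuration:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def reservoir(k, rain):
    """Toy hydrological model: linear reservoir routing with parameter k."""
    q, out = 0.0, []
    for r in rain:
        q = (1 - k) * q + k * r
        out.append(q)
    return np.array(out)

def fit_ar1(res):
    """Least-squares AR(1) coefficient of a residual series."""
    return float(np.dot(res[:-1], res[1:]) / np.dot(res[:-1], res[:-1]))

rng = np.random.default_rng(2)
rain = rng.gamma(2.0, 1.0, 300)
e = np.zeros(300)
for t in range(1, 300):                       # autocorrelated observation error
    e[t] = 0.6 * e[t - 1] + 0.2 * rng.standard_normal()
obs = reservoir(0.3, rain) + e

k, rho = 0.5, 0.0
for _ in range(10):                           # back-fitting iterations
    def sse(k_trial):
        res = obs - reservoir(k_trial, rain)
        corrected = res[1:] - rho * res[:-1]  # AR(1)-whitened residuals
        return np.sum(corrected ** 2)
    k = minimize_scalar(sse, bounds=(0.01, 0.99), method="bounded").x
    rho = fit_ar1(obs - reservoir(k, rain))
print(f"k ~ {k:.3f}, AR(1) coefficient ~ {rho:.3f}")
```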
NASA Astrophysics Data System (ADS)
Zhu, Hong; Huang, Mai; Sadagopan, Sriram; Yao, Hong
2017-09-01
With increasing vehicle fuel economy standards, automotive OEMs are widely using various AHSS grades, including DP, TRIP, CP and 3rd Gen AHSS, to reduce vehicle weight due to their good combination of strength and formability. As one of the enabling technologies for AHSS application, the need for accurate springback prediction for cold-stamped AHSS parts has stimulated a large number of investigations in the past decade into reversed loading paths at large strains and the associated constitutive modeling. With a spectrum of complex loading histories occurring in production stamping processes, there are many challenges in this field, including issues of test data reliability, loading path representability, constitutive model robustness and non-unique constitutive parameter identification. In this paper, various testing approaches and constitutive models are reviewed briefly, and a systematic methodology from stress-strain characterization to constitutive model parameter identification for material card generation is presented in order to support automotive OEMs' needs in virtual stamping. This systematic methodology features a tension-compression test at large strain with a robust anti-buckling device and concurrent friction force correction, properly selected loading paths to represent material behavior during different springback modes, as well as the 10-parameter Yoshida model with knowledge-based parameter identification through nonlinear optimization. Validation cases for lab AHSS parts are also discussed to check the applicability of this methodology.
Validation of an Accurate Three-Dimensional Helical Slow-Wave Circuit Model
NASA Technical Reports Server (NTRS)
Kory, Carol L.
1997-01-01
The helical slow-wave circuit embodies a helical coil of rectangular tape supported in a metal barrel by dielectric support rods. Although the helix slow-wave circuit remains the mainstay of the traveling-wave tube (TWT) industry because of its exceptionally wide bandwidth, a full helical circuit, without significant dimensional approximations, has not been successfully modeled until now. Numerous attempts have been made to analyze the helical slow-wave circuit so that the performance could be accurately predicted without actually building it, but because of its complex geometry, many geometrical approximations became necessary rendering the previous models inaccurate. In the course of this research it has been demonstrated that using the simulation code, MAFIA, the helical structure can be modeled with actual tape width and thickness, dielectric support rod geometry and materials. To demonstrate the accuracy of the MAFIA model, the cold-test parameters including dispersion, on-axis interaction impedance and attenuation have been calculated for several helical TWT slow-wave circuits with a variety of support rod geometries including rectangular and T-shaped rods, as well as various support rod materials including isotropic, anisotropic and partially metal coated dielectrics. Compared with experimentally measured results, the agreement is excellent. With the accuracy of the MAFIA helical model validated, the code was used to investigate several conventional geometric approximations in an attempt to obtain the most computationally efficient model. Several simplifications were made to a standard model including replacing the helical tape with filaments, and replacing rectangular support rods with shapes conforming to the cylindrical coordinate system with effective permittivity. The approximate models are compared with the standard model in terms of cold-test characteristics and computational time. The model was also used to determine the sensitivity of various circuit parameters including typical manufacturing dimensional tolerances and support rod permittivity. By varying the circuit parameters of an accurate model using MAFIA, these sensitivities can be computed for manufacturing concerns, and design optimization previous to fabrication, thus eliminating the need for costly experimental iterations. Several variations were made to a standard helical circuit using MAFIA to investigate the effect that variations on helical tape and support rod width, metallized loading height and support rod permittivity, have on TWT cold-test characteristics.
Evaluation of MuSyQ land surface albedo based on LAnd surface Parameters VAlidation System (LAPVAS)
NASA Astrophysics Data System (ADS)
Dou, B.; Wen, J.; Xinwen, L.; Zhiming, F.; Wu, S.; Zhang, Y.
2016-12-01
Satellite-derived land surface albedo is an essential climate variable which controls the Earth's energy budget, and it is used in applications such as climate change, hydrology, and numerical weather prediction. However, the accuracy and uncertainty of surface albedo products should be evaluated against reliable reference truth data prior to such applications. A new comprehensive and systematic project in China, called the Remote Sensing Application Network (CRSAN), has been launched in recent years. Two components of this project are a Multi-source data Synergized Quantitative Remote Sensing Production System (MuSyQ) and a web-based validation system named the LAnd surface remote sensing Product VAlidation System (LAPVAS), which aim, respectively, to generate quantitative remote sensing products for ecosystem and environmental monitoring and to validate them against reference validation data within a standard validation system. Land surface BRDF/albedo is one of the product datasets of MuSyQ; it has a pentad (5-day) period with 1 km spatial resolution and is derived by the Multi-sensor Combined BRDF Inversion (MCBI) model. In this evaluation of the MuSyQ albedo, a multi-validation strategy is implemented by LAPVAS, including direct and multi-scale validation with field-measured albedo and cross-validation against the MODIS albedo product over different land covers. The results reveal that the MuSyQ albedo data, with their 5-day temporal resolution, show higher sensitivity and accuracy during periods of land cover change, e.g., snowfall. Leaving aside snow and land cover change, the MuSyQ albedo is generally of similar accuracy to the MODIS albedo and meets the climate modeling requirement of an absolute accuracy of 0.05.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Rui; Sumner, Tyler S.
2016-04-17
An advanced system analysis tool, SAM, is being developed for fast-running, improved-fidelity, whole-plant transient analyses at Argonne National Laboratory under DOE-NE's Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. As an important part of code development, companion validation activities are being conducted to ensure the performance and validity of the SAM code. This paper presents the benchmark simulations of two EBR-II tests, SHRT-45R and BOP-302R, whose data are available through the support of DOE-NE's Advanced Reactor Technology (ART) program. The code predictions of major primary coolant system parameters are compared with the test results. Additionally, the SAS4A/SASSYS-1 code simulation results are also included for a code-to-code comparison.
NASA Technical Reports Server (NTRS)
Zerlaut, Gene A.; Gilligan, J. E.; Harada, Y.
1965-01-01
In a previous research program for the Jet Propulsion Laboratory, extensive studies led to the development and specifications of three zinc oxide-pigmented thermal-control coatings. The principal objectives of this program are: improvement of the three paints (as engineering materials), determination of the validity of our accelerated space-simulation testing, and continuation of the zinc oxide photolysis studies begun in the preceding program. Specific tasks that are discussed include: improvement of potassium silicate coatings as engineering materials and elucidation of their storage and handling problems; improvement of methyl silicone coatings as engineering materials; studies of zinc oxide photolysis to establish reasons for the observed stability of zinc oxide; and determination of space-simulation parameters such as long-term stability (to 8000 ESH), the effect of coating surface temperature on the rate of degradation, and the validity of accelerated testing (by reciprocity and wavelength dependency studies).
Synthetic Jet Flow Field Database for CFD Validation
NASA Technical Reports Server (NTRS)
Yao, Chung-Sheng; Chen, Fang Jenq; Neuhart, Dan; Harris, Jerome
2004-01-01
An oscillatory zero net mass flow jet was generated by a cavity-pumping device, namely a synthetic jet actuator. This basic oscillating jet flow field was selected as the first of the three test cases for the Langley workshop on CFD Validation of Synthetic Jets and Turbulent Separation Control. The purpose of this workshop was to assess the current CFD capabilities to predict unsteady flow fields of synthetic jets and separation control. This paper describes the characteristics and flow field database of a synthetic jet in a quiescent fluid. In this experiment, Particle Image Velocimetry (PIV), Laser Doppler Velocimetry (LDV), and hot-wire anemometry were used to measure the jet velocity field. In addition, the actuator operating parameters including diaphragm displacement, internal cavity pressure, and internal cavity temperature were also documented to provide boundary conditions for CFD modeling.
Bespalova, Nadejda; Morgan, Juliet; Coverdale, John
2016-02-01
Because training residents and faculty to identify human trafficking victims is a major public health priority, the authors review existing assessment tools. PubMed and Google were searched using combinations of search terms including human, trafficking, sex, labor, screening, identification, and tool. Nine screening tools that met the inclusion criteria were found. They varied greatly in length, format, target demographic, supporting resources, and other parameters. Only two tools were designed specifically for healthcare providers. Only one tool was formally assessed to be valid and reliable in a pilot project in trafficking victim service organizations, although it has not been validated in the healthcare setting. This toolbox should facilitate the education of resident physicians and faculty in screening for trafficking victims, assist educators in assessing screening skills, and promote future research on the identification of trafficking victims.
Carr, Brian I.; Giannini, Edoardo G.; Farinati, Fabio; Ciccarese, Francesca; Rapaccini, Gian Ludovico; Marco, Maria Di; Benvegnù, Luisa; Zoli, Marco; Borzio, Franco; Caturelli, Eugenio; Chiaramonte, Maria; Trevisani, Franco
2014-01-01
Background: Previous work has shown that two general processes contribute to hepatocellular cancer (HCC) prognosis: (a) liver damage, monitored by indices such as blood bilirubin, prothrombin time and AST; and (b) tumor biology, monitored by indices such as tumor size, tumor number, presence of PVT and blood AFP levels. These two processes may affect one another, with prognostically significant interactions between multiple tumor and host parameters. These interactions form a context that personalizes the prognostic meaning of these factors for every patient. Thus, a given level of bilirubin or tumor diameter might have a different significance in different personal contexts. We previously applied the Network Phenotyping Strategy (NPS) to characterize interactions between liver function indices of Asian HCC patients and recognized two clinical phenotypes, S and L, differing in tumor size and tumor nodule numbers. Aims: To validate the applicability of the NPS-based HCC S/L classification on an independent European HCC cohort, for which survival information was additionally available. Methods: Four sets of peripheral blood parameters, including AFP-platelets, derived from routine blood parameter levels and tumor indices from the ITA.LI.CA database, were analyzed using NPS, a graph-theory based approach, which compares personal patterns of complete relationships between clinical data values to reference patterns with significant association to disease outcomes. Results: Without reference to the actual tumor sizes, patients were classified by NPS into two subgroups with S and L phenotypes. These two phenotypes were recognized using solely the HCC screening test results, consisting of eight common blood parameters, paired by their significant correlations, including an AFP-platelets relationship. These trends were combined with patient age, gender and self-reported alcoholism into NPS personal patient profiles. We subsequently validated (using actual scan data) that patients in the L phenotype group had 1.5× larger mean tumor masses relative to S (p = 6×10^-16). Importantly, with the new data, S-phenotype patients identified from liver test patterns had typically 1.7× longer survival compared to L-phenotype patients. NPS integrated the liver, tumor and basic demographic factors. Cirrhosis-associated thrombocytopenia was typical for smaller S-tumors. In the L-tumor phenotype, typical platelet levels increased with the tumor mass. Hepatic inflammation and tumor factors contributed to more aggressive L tumors, with parenchymal destruction and shorter survival. Summary: NPS provides an integrative interpretation of HCC behavior, identifying two tumor and survival phenotypes by clinical parameter patterns. The NPS classifier is provided as an Excel tool. The NPS system shows the importance of considering each tumor marker and parameter in the total context of all the other parameters of an individual patient.
On the limits of numerical astronomical solutions used in paleoclimate studies
NASA Astrophysics Data System (ADS)
Zeebe, Richard E.
2017-04-01
Numerical solutions of the equations of the Solar System estimate Earth's orbital parameters in the past and represent the backbone of cyclostratigraphy and astrochronology, now widely applied in geology and paleoclimatology. Given one numerical realization of a Solar System model (i.e., obtained using one code or integrator package), various parameters determine the properties of the solution and usually limit its validity to a certain time period. Such limitations are denoted here as "internal" and include limitations due to (i) the underlying physics/physical model and (ii) numerics. The physics include initial coordinates and velocities of Solar System bodies, treatment of the Moon and asteroids, the Sun's quadrupole moment, and the intrinsic dynamics of the Solar System itself, i.e., its chaotic nature. Numerical issues include solver algorithm, numerical accuracy (e.g., time step), and round-off errors. At present, internal limitations seem to restrict the validity of astronomical solutions to perhaps the past 50 or 60 myr. However, little is currently known about "external" limitations, that is, how do different numerical realizations compare, say, between different investigators using different codes and integrators? Hitherto only two solutions for Earth's eccentricity appear to be used in paleoclimate studies, provided by two different groups that integrated the full Solar System equations over the past >100 myr (Laskar and coworkers and Varadi et al. 2003). In this contribution, I will present results from new Solar System integrations for Earth's eccentricity obtained using the integrator package HNBody (Rauch and Hamilton 2002). I will discuss the various internal limitations listed above within the framework of the present simulations. I will also compare the results to the existing solutions, the details of which are still being sorted out as several simulations are still running at the time of writing.
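One "internal" numerical limitation is easy to demonstrate: the same physical system integrated with two different time steps drifts apart. The toy below is a leapfrog Sun-Jupiter-Earth model in AU/yr/solar-mass units (G = 4π²), purely for illustration; it is not the HNBody setup used in the study:

```python
import numpy as np

G = 4 * np.pi ** 2
M = np.array([1.0, 9.5e-4, 3.0e-6])          # Sun, Jupiter, Earth [solar masses]

def accel(x):
    """Pairwise Newtonian gravitational accelerations for three bodies."""
    a = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = x[j] - x[i]
                a[i] += G * M[j] * d / np.linalg.norm(d) ** 3
    return a

def run(dt, t_end=200.0):
    """Integrate with kick-drift-kick leapfrog; return Earth's final position."""
    x = np.array([[0.0, 0.0], [5.2, 0.0], [1.0, 0.0]])
    v = np.array([[0.0, 0.0], [0.0, np.sqrt(G / 5.2)], [0.0, np.sqrt(G)]])
    for _ in range(int(t_end / dt)):
        v += 0.5 * dt * accel(x)
        x += dt * v
        v += 0.5 * dt * accel(x)
    return x[2]

print(np.linalg.norm(run(0.02) - run(0.01)), "AU apart after 200 yr")
```

Halving the step changes Earth's final position measurably even over two centuries; over tens of millions of years such step-size and round-off effects, compounded by chaos, are among the factors that bound the validity of any one realization.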
The MODIS Aerosol Algorithm, Products and Validation
NASA Technical Reports Server (NTRS)
Remer, L. A.; Kaufman, Y. J.; Tanre, D.; Mattoo, S.; Chu, D. A.; Martins, J. V.; Li, R.-R.; Ichoku, C.; Levy, R. C.; Kleidman, R. G.
2003-01-01
The MODerate resolution Imaging Spectroradiometer (MODIS) aboard both NASA's Terra and Aqua satellites is making near-global daily observations of the Earth in a wide spectral range. These measurements are used to derive spectral aerosol optical thickness and aerosol size parameters over both land and ocean. The aerosol products available over land include aerosol optical thickness at three visible wavelengths, a measure of the fraction of aerosol optical thickness attributed to the fine mode, and several derived parameters including reflected spectral solar flux at the top of the atmosphere. Over ocean, the aerosol optical thickness is provided at seven wavelengths from 0.47 microns to 2.13 microns. In addition, quantitative aerosol size information includes the effective radius of the aerosol and the quantitative fraction of optical thickness attributed to the fine mode. Spectral aerosol flux, mass concentration and number of cloud condensation nuclei round out the list of available aerosol products over the ocean. The spectral optical thickness and effective radius of the aerosol over the ocean are validated by comparison with two years of AERONET data gleaned from 133 AERONET stations. 8000 MODIS aerosol retrievals co-located with AERONET measurements confirm that one standard deviation of MODIS optical thickness retrievals falls within the predicted uncertainty of Δτ = ±0.03 ± 0.05τ over ocean and Δτ = ±0.05 ± 0.15τ over land. 271 MODIS aerosol retrievals co-located with AERONET inversions at island and coastal sites suggest that one standard deviation of MODIS effective radius retrievals falls within Δr_eff ≈ ±0.11 microns. The accuracy of the MODIS retrievals suggests that the product can be used to help narrow the uncertainties associated with aerosol radiative forcing of global climate.
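The envelope statistic quoted above reduces to a simple check, sketched here with invented numbers: the fraction of co-located retrievals with |τ_MODIS − τ_AERONET| inside ±(a + bτ):

```python
import numpy as np

def fraction_within_envelope(tau_modis, tau_aeronet, a=0.03, b=0.05):
    """Fraction of co-located retrievals inside the one-sigma uncertainty
    envelope +/-(a + b*tau): a=0.03, b=0.05 over ocean; a=0.05, b=0.15 over land."""
    bound = a + b * np.asarray(tau_aeronet)
    return np.mean(np.abs(np.asarray(tau_modis) - tau_aeronet) <= bound)

modis = np.array([0.10, 0.24, 0.55, 0.31])
aeronet = np.array([0.08, 0.20, 0.50, 0.40])
print(f"{fraction_within_envelope(modis, aeronet):.2f} within envelope")
```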
Gnannt, Ralph; Fischer, Michael A; Baechler, Thomas; Clavien, Pierre-Alain; Karlo, Christoph; Seifert, Burkhardt; Lesurtel, Mickael; Alkadhi, Hatem
2015-01-01
Mortality from abdominal abscesses ranges from 30% in treated cases up to 80% to 100% in patients with undrained or nonoperated abscesses. Various computed tomographic (CT) imaging features have been suggested to indicate infection of postoperative abdominal fluid collections; however, these features are nonspecific, and substantial overlap between infected and noninfected collections exists. The purpose of this study was to develop and validate a scoring system on the basis of CT imaging findings as well as laboratory and clinical parameters for distinguishing infected from noninfected abdominal fluid collections after surgery. The score development cohort included 100 consecutive patients (69 men, 31 women; mean age, 58 ± 17 years) who underwent portal-venous phase CT within 24 hours before CT-guided intervention of postoperative abdominal fluid collections. Imaging features included attenuation (Hounsfield unit [HU]), volume, wall enhancement and thickness, adjacent fat stranding, and entrapped gas of fluid collections. Laboratory and clinical parameters included diabetes, intake of immunosuppressive drugs, body temperature, C-reactive protein, and leukocyte blood cell count. The score was validated in a separate cohort of 30 consecutive patients (17 men, 13 women; mean age, 51 ± 15 years) with postoperative abdominal fluid collections. Microbiologic analysis of fluid samples served as the standard of reference. Diabetes, body temperature, C-reactive protein, attenuation of the fluid collection (in HUs), wall enhancement and thickness, adjacent fat stranding, and entrapped gas within the fluid collection were significantly different between infected and noninfected collections (P < 0.001). Multiple logistic regression analysis revealed diabetes, C-reactive protein, attenuation of the fluid collection (in HUs), and entrapped gas as significant independent predictors of infection (P < 0.001); these predictors were therefore selected for constructing a scoring system from 0 to 10 (diabetes: 2 points; C-reactive protein ≥ 100 mg/L: 1 point; attenuation of fluid collection ≥ 20 HU: 4 points; entrapped gas: 3 points). The model was well calibrated (Hosmer-Lemeshow test, P = 0.36). In the validation cohort, scores of 2 or lower had a 90% (95% confidence interval [CI], 56%-100%) negative predictive value, scores of 3 or higher had an 80% (95% CI, 56%-94%) positive predictive value, and scores of 6 or higher had a 100% (95% CI, 74%-100%) positive predictive value for diagnosing infected fluid collections. Receiver operating characteristic analysis revealed an area under the curve of 0.96 (95% CI, 0.88-1.00) for the score. We introduce an accurate scoring system including quantitative radiologic, laboratory, and clinical parameters for distinguishing infected from noninfected fluid collections after abdominal surgery.
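The published point weights and cutoffs make the score mechanical to compute. A minimal Python sketch using only the values quoted above (the function and variable names are illustrative, not from the paper):

```python
def collection_score(diabetes: bool, crp_mg_l: float,
                     attenuation_hu: float, entrapped_gas: bool) -> int:
    """0-10 infection score for postoperative abdominal fluid collections,
    built from the point weights quoted in the abstract."""
    score = 0
    if diabetes:
        score += 2
    if crp_mg_l >= 100:       # C-reactive protein >= 100 mg/L
        score += 1
    if attenuation_hu >= 20:  # fluid attenuation >= 20 HU
        score += 4
    if entrapped_gas:
        score += 3
    return score

# Example: diabetic patient, CRP 140 mg/L, 25-HU collection, no gas -> 7,
# which falls in the ">= 6" band (100% PPV in the validation cohort).
print(collection_score(True, 140.0, 25.0, False))
```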
Validation of AIRS/AMSU Cloud Retrievals Using MODIS Cloud Analyses
NASA Technical Reports Server (NTRS)
Molnar, Gyula I.; Susskind, Joel
2005-01-01
The AIRS/AMSU (flying on the EOS-AQUA satellite) sounding retrieval methodology allows for the retrieval of key atmospheric/surface parameters under partially cloudy conditions (Susskind et al.). In addition, cloud parameters are also derived from the AIRS/AMSU observations. Within each AIRS footprint, cloud parameters are determined at up to 2 cloud layers, with differing cloud-top pressures and effective cloud fractions (the product of the infrared emissivity at 11 μm and the physical cloud fraction). However, so far the AIRS cloud product has not been rigorously evaluated/validated. Fortunately, collocated/coincident radiances measured by MODIS/AQUA (at a much lower spectral resolution but roughly an order-of-magnitude higher spatial resolution than that of AIRS) are used to determine analogous cloud products from MODIS. This allows for a rather rare and interesting possibility: the intercomparison and mutual validation of imager- versus sounder-based cloud products obtained from the same satellite position. First, we present results of small-scale (granule-level) instantaneous intercomparisons. Next, we will evaluate differences in temporally averaged (monthly) means as well as in the representation of interannual variability of cloud parameters in the two cloud data sets. In particular, we present statistical differences in the retrieved parameters of cloud fraction and cloud-top pressure. We will investigate which types of cloud systems (if any) are retrieved most consistently with both retrieval schemes, and attempt to assess the reasons behind statistically significant differences.
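As a small illustration of the quantities involved, the sketch below encodes the effective-cloud-fraction definition given above and the kind of bias/RMS statistic a granule-level intercomparison would compute; the collocated arrays are assumed inputs, not an actual AIRS/MODIS matchup pipeline:

```python
import numpy as np

def effective_cloud_fraction(emissivity_11um, physical_fraction):
    """Effective cloud fraction: the product of the 11-micron infrared
    emissivity and the physical cloud fraction, as defined above."""
    return emissivity_11um * physical_fraction

def granule_stats(ctp_airs_hpa, ctp_modis_hpa):
    """Bias and RMS difference of collocated cloud-top pressures (hPa)."""
    diff = np.asarray(ctp_airs_hpa) - np.asarray(ctp_modis_hpa)
    return float(np.mean(diff)), float(np.sqrt(np.mean(diff ** 2)))
```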
Misu, Shogo; Asai, Tsuyoshi; Ono, Rei; Sawa, Ryuichi; Tsutsumimoto, Kota; Ando, Hiroshi; Doi, Takehiko
2017-09-01
The heel is likely a suitable location for attaching inertial sensors to detect gait events. However, few studies have detected gait events and determined temporal gait parameters using sensors attached to the heels. We developed two methods to determine temporal gait parameters: detecting heel-contact using acceleration data and toe-off using angular velocity data (acceleration-angular velocity method; A-V method), and detecting both heel-contact and toe-off using angular velocity data (angular velocity-angular velocity method; V-V method). The aim of this study was to examine the concurrent validity of the A-V and V-V methods against the standard method and to compare their accuracy. Temporal gait parameters were measured in 10 younger and 10 older adults. Compared with the standard method, the intra-class correlation coefficients were excellent for both methods (0.80 to 1.00). The root mean square errors of stance and swing time with the A-V method were smaller than with the V-V method in older adults, although there were no significant discrepancies in the other comparisons. Our study suggests that inertial sensors attached to the heels, using the A-V method in particular, provide a valid measurement of temporal gait parameters. Copyright © 2017 Elsevier B.V. All rights reserved.
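The abstract does not give the authors' exact detection rules, but the general shape of an A-V pipeline can be sketched with generic peak picking (the 0.5 s minimum event spacing and the use of raw signal peaks are assumptions, not the authors' criteria):

```python
import numpy as np
from scipy.signal import find_peaks

def temporal_parameters_av(heel_acc, heel_gyro, fs):
    """A-V style sketch: heel-contacts from acceleration peaks, toe-offs
    from angular-velocity peaks (assumed rules, not the authors' criteria).
    Returns stride and stance times in seconds; swing = stride - stance."""
    hc, _ = find_peaks(heel_acc, distance=int(0.5 * fs))    # heel-contacts
    to, _ = find_peaks(heel_gyro, distance=int(0.5 * fs))   # toe-offs
    stride = np.diff(hc) / fs
    stance = np.array([(to[to > h][0] - h) / fs
                       for h in hc if to[to > h].size > 0])
    return stride, stance
```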
Luo, Chuan; Li, Zhaofu; Li, Hengpeng; Chen, Xiaomin
2015-01-01
The application of hydrological and water quality models is an efficient approach to better understand the processes of environmental deterioration. This study evaluated the ability of the Annualized Agricultural Non-Point Source (AnnAGNPS) model to predict runoff, total nitrogen (TN), and total phosphorus (TP) loading in a typical small watershed of a hilly region near Taihu Lake, China. Runoff was calibrated and validated at both annual and monthly scales, and parameter sensitivity analysis was performed for TN and TP before the two water quality components were calibrated. The results showed that the model satisfactorily simulated runoff at annual and monthly scales, during both the calibration and validation processes. Additionally, the sensitivity analysis showed that TN output was most sensitive to the parameters Fertilizer rate, Fertilizer organic, Canopy cover, and Fertilizer inorganic. In terms of TP, the model was most sensitive to the parameters Residue mass ratio, Fertilizer rate, Fertilizer inorganic, and Canopy cover. Calibration was then performed based on these sensitive parameters. TN loading produced satisfactory results for both the calibration and validation processes, whereas the performance of TP loading was slightly poorer. The simulation results showed that AnnAGNPS has the potential to be used as a valuable tool for the planning and management of watersheds.
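A common way to rank parameters in this manner is a one-at-a-time relative sensitivity index; the sketch below is a generic version, with run_model standing in as a hypothetical wrapper around a single AnnAGNPS simulation that returns the output of interest (e.g., annual TN load):

```python
def oat_sensitivity(run_model, base_params, delta=0.10):
    """One-at-a-time relative sensitivity S = (dY/Y) / (dP/P): perturb each
    parameter by +10% and measure the relative change in model output."""
    y0 = run_model(base_params)
    sensitivity = {}
    for name, value in base_params.items():
        perturbed = dict(base_params)
        perturbed[name] = value * (1.0 + delta)
        sensitivity[name] = ((run_model(perturbed) - y0) / y0) / delta
    return sensitivity
```

Parameters with the largest |S| (here, the fertilizer and canopy-cover parameters for TN) would then be prioritized in calibration.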
NOSD-1000, the high-temperature nitrous oxide spectroscopic databank
NASA Astrophysics Data System (ADS)
Tashkun, S. A.; Perevalov, V. I.; Lavrentieva, N. N.
2016-07-01
We present a high-temperature version, NOSD-1000, of the nitrous oxide spectroscopic databank. The databank contains the line parameters (positions, intensities, air- and self-broadened half-widths, and coefficients of temperature dependence of the air- and self-broadened half-widths) of the most abundant isotopologue, ¹⁴N₂¹⁶O, of the nitrous oxide molecule. The reference temperature is T_ref = 1000 K and the intensity cutoff is I_cut = 10⁻²⁵ cm⁻¹/(molecule cm⁻²). More than 1.4 million lines covering the 260-8310 cm⁻¹ spectral range are included in NOSD-1000. The databank has been generated within the framework of the method of effective operators, based on global fittings of spectroscopic parameters (parameters of the effective Hamiltonian and effective dipole moment operators) to observed data collected from the literature. A line-by-line simulation of a medium-resolution high-temperature (T = 873 K) spectrum has been performed in order to validate the databank. NOSD-1000 is freely accessible via the Internet.
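In practice, a user of such a databank applies exactly the stated range and cutoff when assembling a line list. A minimal sketch (the arrays are assumed already parsed from the databank; the file format is not specified in the abstract):

```python
import numpy as np

I_CUT = 1e-25                   # cm^-1 / (molecule cm^-2), quoted cutoff
NU_MIN, NU_MAX = 260.0, 8310.0  # cm^-1, quoted spectral range

def select_lines(nu_cm1, intensity):
    """Keep lines inside the databank's spectral range whose intensity at
    the reference temperature (1000 K) is at or above the cutoff."""
    nu_cm1, intensity = np.asarray(nu_cm1), np.asarray(intensity)
    mask = (nu_cm1 >= NU_MIN) & (nu_cm1 <= NU_MAX) & (intensity >= I_CUT)
    return nu_cm1[mask], intensity[mask]
```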
Resonant frequency calculations using a hybrid perturbation-Galerkin technique
NASA Technical Reports Server (NTRS)
Geer, James F.; Andersen, Carl M.
1991-01-01
A two-step hybrid perturbation-Galerkin technique is applied to the problem of determining the resonant frequencies of nonlinear systems with one or several degrees of freedom that involve a parameter. In step one, the Lindstedt-Poincare method is used to determine perturbation solutions which are formally valid about one or more special values of the parameter (e.g., for large or small values of the parameter). In step two, a subset of the perturbation coordinate functions determined in step one is used in a Galerkin-type approximation. The technique is illustrated for several one-degree-of-freedom systems, including the Duffing and van der Pol oscillators, as well as for the compound pendulum. For all of the examples considered, it is shown that the frequencies obtained by the hybrid technique using only a few terms from the perturbation solutions are significantly more accurate than the perturbation results on which they are based, and they compare very well with frequencies obtained by purely numerical methods.
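For the Duffing oscillator, step one yields the classical Lindstedt-Poincare frequency expansion; the first-order result below is quoted as a standard textbook illustration of the kind of perturbation solution the Galerkin step builds on, not as the paper's own derivation:

```latex
% Duffing oscillator \ddot{x} + x + \epsilon x^{3} = 0 with amplitude a:
% first-order Lindstedt-Poincare frequency
\omega(\epsilon, a) = 1 + \tfrac{3}{8}\,\epsilon\, a^{2} + \mathcal{O}(\epsilon^{2})
```

The hybrid step then reuses the coordinate functions of this expansion (cos τ, cos 3τ, ...) as a Galerkin basis, which is what extends the accuracy beyond small ε.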
Slip-based terrain estimation with a skid-steer vehicle
NASA Astrophysics Data System (ADS)
Reina, Giulio; Galati, Rocco
2016-10-01
In this paper, a novel approach for online terrain characterisation is presented using a skid-steer vehicle. In the context of this research, terrain characterisation refers to the estimation of physical parameters that affect the terrain's ability to support vehicular motion. These parameters are inferred from modelling the kinematic and dynamic behaviour of a skid-steer vehicle, which reveals the underlying relationships governing the vehicle-terrain interaction. The concept of slip track is introduced as a measure of the slippage experienced by the vehicle during turning motion. The proposed terrain estimation system relies on common onboard sensors, that is, wheel encoders, electrical current sensors, and a yaw rate gyroscope. Using these components, the system can characterise terrain online during normal vehicle operations. Experimental results obtained from different surfaces are presented to validate the system in the field, showing its effectiveness and its potential for implementing adaptive driving-assistance systems or automatically updating the parameters of onboard control and planning algorithms.
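The slip-track idea can be made concrete from skid-steer turning kinematics: the yaw rate implies an effective track width, and its excess over the geometric track measures lateral slippage. The Python sketch below is one plausible formalisation under that assumption, not necessarily the authors' exact definition:

```python
def effective_track(v_left, v_right, yaw_rate):
    """Effective (kinematic) track width implied by a turn:
    omega_z = (v_r - v_l) / B_eff  =>  B_eff = (v_r - v_l) / omega_z."""
    return (v_right - v_left) / yaw_rate

def slip_track(v_left, v_right, yaw_rate, geometric_track):
    """Slip track as the excess of the effective track over the geometric
    track (assumed formalisation); larger values indicate more slippage."""
    return effective_track(v_left, v_right, yaw_rate) - geometric_track
```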