NASA Technical Reports Server (NTRS)
Silva, Walter A.; Vartio, Eric; Shimko, Anthony; Kvaternik, Raymond G.; Eure, Kenneth W.; Scott, Robert C.
2007-01-01
Aeroservoelastic (ASE) analytical models of a SensorCraft wind-tunnel model are generated using measured data. The data were acquired during the ASE wind-tunnel test of the HiLDA (High Lift-to-Drag Active) Wing model, tested in the NASA Langley Transonic Dynamics Tunnel (TDT) in late 2004. Two time-domain system identification techniques are applied to the development of the ASE analytical models: the impulse response (IR) method and the Generalized Predictive Control (GPC) method. Using measured control surface inputs (frequency sweeps) and associated sensor responses, the IR method is used to extract corresponding input/output impulse response pairs. These impulse responses are then transformed into state-space models for use in ASE analyses. Similarly, the GPC method transforms measured random control surface inputs and associated sensor responses into an AutoRegressive with eXogenous input (ARX) model. The ARX model is then used to develop the gust load alleviation (GLA) control law. For the IR method, comparisons of measured and simulated responses are presented to assess the accuracy of the ASE analytical models developed. For the GPC method, comparisons of simulated open-loop and closed-loop (GLA) time histories are presented.
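The ARX identification step described above can be sketched with a simple least-squares fit. The model orders, the toy second-order plant, and the random excitation below are illustrative assumptions, not the HiLDA wind-tunnel data or the authors' implementation:

```python
import numpy as np

def fit_arx(u, y, na=4, nb=4):
    """Fit an ARX model y[k] = sum(a_i*y[k-i]) + sum(b_j*u[k-j]) by least squares."""
    n = max(na, nb)
    rows = []
    for k in range(n, len(y)):
        rows.append(np.r_[y[k-na:k][::-1], u[k-nb:k][::-1]])
    Phi = np.asarray(rows)
    theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
    return theta[:na], theta[na:]          # AR and exogenous coefficients

# Hypothetical random control-surface excitation and sensor response
rng = np.random.default_rng(0)
u = rng.standard_normal(2000)
y = np.zeros_like(u)
for k in range(2, len(u)):                 # toy 2nd-order plant standing in for the wind-tunnel model
    y[k] = 1.5*y[k-1] - 0.7*y[k-2] + 0.5*u[k-1] + 0.2*u[k-2]

a_coeffs, b_coeffs = fit_arx(u, y)         # identified ARX parameters for control-law design
```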
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Shimko, Anthony; Kvaternik, Raymond G.; Eure, Kenneth W.; Scott, Robert C.
2006-01-01
Aeroservoelastic (ASE) analytical models of a SensorCraft wind-tunnel model are generated using measured data. The data were acquired during the ASE wind-tunnel test of the HiLDA (High Lift-to-Drag Active) Wing model, tested in the NASA Langley Transonic Dynamics Tunnel (TDT) in late 2004. Two time-domain system identification techniques are applied to the development of the ASE analytical models: the impulse response (IR) method and the Generalized Predictive Control (GPC) method. Using measured control surface inputs (frequency sweeps) and associated sensor responses, the IR method is used to extract corresponding input/output impulse response pairs. These impulse responses are then transformed into state-space models for use in ASE analyses. Similarly, the GPC method transforms measured random control surface inputs and associated sensor responses into an AutoRegressive with eXogenous input (ARX) model. The ARX model is then used to develop the gust load alleviation (GLA) control law. For the IR method, comparisons of measured and simulated responses are presented to assess the accuracy of the ASE analytical models developed. For the GPC method, comparisons of simulated open-loop and closed-loop (GLA) time histories are presented.
Measuring the emergence of tobacco dependence: the contribution of negative reinforcement models.
Eissenberg, Thomas
2004-06-01
This review of negative reinforcement models of drug dependence is part of a series that takes the position that a complete understanding of current concepts of dependence will facilitate the development of reliable and valid measures of the emergence of tobacco dependence. Other reviews within the series consider models that emphasize positive reinforcement and social learning/cognitive models. This review summarizes negative reinforcement in general and then presents four current negative reinforcement models that emphasize withdrawal, classical conditioning, self-medication and opponent-processes. For each model, the paper outlines central aspects of dependence, conceptualization of dependence development and influences that the model might have on current and future measures of dependence. Understanding how drug dependence develops will be an important part of future successful tobacco dependence measurement, prevention and treatment strategies.
Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario
2016-01-01
The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes it difficult to assess the accuracy and uncertainty of measurement results. Consequently, error compensation is not standardized, unlike that of other, simpler instruments. Detailed coordinate error compensation models are generally based on treating the CMM as a rigid body and require a detailed mapping of the CMM's behavior. In this paper, a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length error by axis and integrates it into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, measurement models for flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement and are easy to integrate into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included. PMID:27690052
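The per-axis vectorial length-error idea lends itself to a very small correction routine. The following sketch assumes a purely linear scale error per axis; the coefficients and the probed point are hypothetical, not calibrated values from the paper:

```python
import numpy as np

def compensate_point(p, scale_err):
    """Correct a probed point using per-axis linear length-error coefficients.

    p         : measured (x, y, z) coordinates in mm
    scale_err : (ex, ey, ez) relative length errors per axis, e.g. from a step-gauge test
    """
    p = np.asarray(p, dtype=float)
    return p / (1.0 + np.asarray(scale_err))   # invert the per-axis scale error

# Hypothetical values: 5 um/m on X, -3 um/m on Y, 8 um/m on Z
corrected = compensate_point([120.0, 250.0, 80.0], [5e-6, -3e-6, 8e-6])
```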
ERIC Educational Resources Information Center
Lee, Il-Sun; Byeon, Jung-Ho; Kim, Young-shin; Kwon, Yong-Ju
2014-01-01
The purpose of this study was to develop a model for measuring experimental design ability based on functional magnetic resonance imaging (fMRI) during biological inquiry. More specifically, the researchers developed an experimental design task that measures experimental design ability. Using the developed experimental design task, they measured…
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
An OpenStudio Measure is a script that can manipulate an OpenStudio model and associated data to apply energy conservation measures (ECMs), run supplemental simulations, or visualize simulation results. The OpenStudio software development kit (SDK) and the approachability of the Ruby scripting language make measure authorship accessible to both software developers and energy modelers. This paper discusses the life cycle of an OpenStudio Measure from development and testing through distribution and application.
Measuring Service Quality in Higher Education: Development of a Hierarchical Model (HESQUAL)
ERIC Educational Resources Information Center
Teeroovengadum, Viraiyan; Kamalanabhan, T. J.; Seebaluck, Ashley Keshwar
2016-01-01
Purpose: This paper aims to develop and empirically test a hierarchical model for measuring service quality in higher education. Design/methodology/approach: The first phase of the study consisted of qualitative research methods and a comprehensive literature review, which allowed the development of a conceptual model comprising 53 service quality…
NASA Technical Reports Server (NTRS)
Kushner, Laura K.; Drain, Bethany A.; Schairer, Edward T.; Heineck, James T.; Bell, James H.
2017-01-01
Both angle-of-attack (AoA) and model deformation (MDM) measurements can be made using an optical system that relies on photogrammetry. Optical measurements are being requested by wind tunnel customers with increasing frequency because of their non-intrusive nature and because recent hardware and software advances allow measurements to approach real time. The NASA Ames Research Center Unitary Plan Wind Tunnel is currently developing a photogrammetry-based system to measure model deformation and model angle of attack. This paper describes the new system, its development, its use on recent tests, and plans to develop the system further.
Investigation of Models and Estimation Techniques for GPS Attitude Determination
NASA Technical Reports Server (NTRS)
Garrick, J.
1996-01-01
Much work has been done in the Flight Dynamics Analysis Branch (FDAB) on developing algorithms to meet the needs of the new and growing field of attitude determination using the Global Positioning System (GPS) constellation of satellites. Flight Dynamics has the responsibility to investigate any new technology and incorporate the innovations in the attitude ground support systems developed to support future missions. The work presented here is an investigative analysis that produces the adaptations needed to allow the Flight Dynamics Support System (FDSS) to ingest GPS phase measurements and produce observation measurements compatible with the FDSS. A simulator was developed to produce the measurement data needed to test the models developed for the different estimation techniques used by FDAB. This paper gives an overview of the current capabilities of the simulator models, the algorithms for adapting GPS measurement data, and results from each of the estimation techniques. Future efforts to evaluate the simulator and models against in-flight GPS measurement data are also outlined.
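Once carrier-phase differences have been converted into baseline vectors in the reference frame, attitude determination reduces to finding the rotation that best maps the known body-frame baselines onto them. A minimal sketch of that final step (Wahba's problem solved via SVD, with made-up baselines rather than real GPS data) could look like:

```python
import numpy as np

def solve_wahba(body_vecs, ref_vecs):
    """Least-squares rotation (body -> reference) from paired baseline vectors (SVD solution)."""
    B = ref_vecs.T @ body_vecs                       # attitude profile matrix
    U, _, Vt = np.linalg.svd(B)
    d = np.sign(np.linalg.det(U @ Vt))               # enforce a proper rotation (det = +1)
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# Hypothetical antenna baselines in the body frame and their GPS-derived reference-frame values
body = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0]])
true_R = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
ref = (true_R @ body.T).T
R_est = solve_wahba(body, ref)                       # recovers true_R for noise-free data
```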
Student-Valued Measurable Teaching Behaviors of Award-Winning Pharmacy Preceptors.
O'Sullivan, Teresa A; Lau, Carmen; Patel, Mitul; Mac, Chi; Krueger, Janelle; Danielson, Jennifer; Weber, Stanley S
2015-12-25
To identify specific preceptor teaching-coaching, role modeling, and facilitating behaviors valued by pharmacy students and to develop measures of those behaviors that can be used for an experiential education quality assurance program. Using a qualitative research approach, we conducted a thematic analysis of student comments about excellent preceptors to identify behaviors exhibited by those preceptors. Identified behaviors were sorted according to the preceptor's role as role model, teacher/coach, or learning facilitator; measurable descriptors for each behavior were then developed. Data analysis resulted in identification of 15 measurable behavior themes, the most frequent being: having an interest in student learning and success, making time for students, and displaying a positive preceptor attitude. Measurable descriptors were developed for 5 role-modeling behaviors, 6 teaching-coaching behaviors, and 4 facilitating behaviors. Preceptors may need to be evaluated in their separate roles as teacher-coach, role model, and learning facilitator. The developed measures in this report could be used in site quality evaluation.
Weck, Philippe F.; Kim, Eunja; Wang, Yifeng; ...
2017-08-01
Molecular structures of kerogen control hydrocarbon production in unconventional reservoirs. Significant progress has been made in developing model representations of various kerogen structures. These models have been widely used for the prediction of gas adsorption and migration in shale matrix. However, using density functional perturbation theory (DFPT) calculations and vibrational spectroscopic measurements, we here show that a large gap may still remain between the existing model representations and actual kerogen structures, therefore calling for new model development. Using DFPT, we calculated Fourier transform infrared (FTIR) spectra for six most widely used kerogen structure models. The computed spectra were then systematically compared to the FTIR absorption spectra collected for kerogen samples isolated from Mancos, Woodford and Marcellus formations representing a wide range of kerogen origin and maturation conditions. Limited agreement between the model predictions and the measurements highlights that the existing kerogen models may still miss some key features in structural representation. A combination of DFPT calculations with spectroscopic measurements may provide a useful diagnostic tool for assessing the adequacy of a proposed structural model as well as for future model development. This approach may eventually help develop comprehensive infrared (IR)-fingerprints for tracing kerogen evolution.
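One way to quantify the kind of model-versus-measurement agreement discussed here is to broaden computed vibrational lines into a continuous spectrum and score it against the measured absorption curve. The line positions, intensities, and similarity metric below are illustrative, not the DFPT outputs or samples from the study:

```python
import numpy as np

def broaden(freqs, intens, grid, fwhm=20.0):
    """Convolve a stick spectrum with Gaussians of the given FWHM (cm^-1)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    spec = np.zeros_like(grid)
    for f, a in zip(freqs, intens):
        spec += a * np.exp(-0.5 * ((grid - f) / sigma) ** 2)
    return spec

grid = np.linspace(400.0, 4000.0, 3601)                  # wavenumber grid, cm^-1
calc = broaden(np.array([1600.0, 2900.0, 3400.0]),       # hypothetical computed line positions
               np.array([1.0, 0.6, 0.3]), grid)
meas = broaden(np.array([1610.0, 2920.0]),               # hypothetical measured band centers
               np.array([0.9, 0.7]), grid)

# Cosine similarity as a simple model-vs-measurement agreement score
score = np.dot(calc, meas) / (np.linalg.norm(calc) * np.linalg.norm(meas))
```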
Weck, Philippe F; Kim, Eunja; Wang, Yifeng; Kruichak, Jessica N; Mills, Melissa M; Matteo, Edward N; Pellenq, Roland J-M
2017-08-01
Molecular structures of kerogen control hydrocarbon production in unconventional reservoirs. Significant progress has been made in developing model representations of various kerogen structures. These models have been widely used for the prediction of gas adsorption and migration in shale matrix. However, using density functional perturbation theory (DFPT) calculations and vibrational spectroscopic measurements, we here show that a large gap may still remain between the existing model representations and actual kerogen structures, therefore calling for new model development. Using DFPT, we calculated Fourier transform infrared (FTIR) spectra for six most widely used kerogen structure models. The computed spectra were then systematically compared to the FTIR absorption spectra collected for kerogen samples isolated from Mancos, Woodford and Marcellus formations representing a wide range of kerogen origin and maturation conditions. Limited agreement between the model predictions and the measurements highlights that the existing kerogen models may still miss some key features in structural representation. A combination of DFPT calculations with spectroscopic measurements may provide a useful diagnostic tool for assessing the adequacy of a proposed structural model as well as for future model development. This approach may eventually help develop comprehensive infrared (IR)-fingerprints for tracing kerogen evolution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weck, Philippe F.; Kim, Eunja; Wang, Yifeng
Molecular structures of kerogen control hydrocarbon production in unconventional reservoirs. Significant progress has been made in developing model representations of various kerogen structures. These models have been widely used for the prediction of gas adsorption and migration in shale matrix. However, using density functional perturbation theory (DFPT) calculations and vibrational spectroscopic measurements, we here show that a large gap may still remain between the existing model representations and actual kerogen structures, therefore calling for new model development. Using DFPT, we calculated Fourier transform infrared (FTIR) spectra for six most widely used kerogen structure models. The computed spectra were then systematically compared to the FTIR absorption spectra collected for kerogen samples isolated from Mancos, Woodford and Marcellus formations representing a wide range of kerogen origin and maturation conditions. Limited agreement between the model predictions and the measurements highlights that the existing kerogen models may still miss some key features in structural representation. A combination of DFPT calculations with spectroscopic measurements may provide a useful diagnostic tool for assessing the adequacy of a proposed structural model as well as for future model development. This approach may eventually help develop comprehensive infrared (IR)-fingerprints for tracing kerogen evolution.
Performance measurement for people with multiple chronic conditions: conceptual model.
Giovannetti, Erin R; Dy, Sydney; Leff, Bruce; Weston, Christine; Adams, Karen; Valuck, Tom B; Pittman, Aisha T; Blaum, Caroline S; McCann, Barbara A; Boyd, Cynthia M
2013-10-01
Improving quality of care for people with multiple chronic conditions (MCCs) requires performance measures reflecting the heterogeneity and scope of their care. Since most existing measures are disease specific, performance measures must be refined and new measures must be developed to address the complexity of care for those with MCCs. Objective: To describe development of the Performance Measurement for People with Multiple Chronic Conditions (PM-MCC) conceptual model. Design: Framework development and a national stakeholder panel. Methods: We used reviews of existing conceptual frameworks of performance measurement, review of the literature on MCCs, input from experts in the multistakeholder Steering Committee, and public comment. Results: The resulting model centers on the patient and family goals and preferences for care in the context of multiple care sites and providers, the type of care they are receiving, and the national priority domains for healthcare quality measurement. This model organizes measures into a comprehensive framework and identifies areas where measures are lacking. In this context, performance measures can be prioritized and implemented at different levels, in the context of patients' overall healthcare needs.
Latent Transition Analysis with a Mixture Item Response Theory Measurement Model
ERIC Educational Resources Information Center
Cho, Sun-Joo; Cohen, Allan S.; Kim, Seock-Ho; Bottge, Brian
2010-01-01
A latent transition analysis (LTA) model was described with a mixture Rasch model (MRM) as the measurement model. Unlike the LTA, which was developed with a latent class measurement model, the LTA-MRM permits within-class variability on the latent variable, making it more useful for measuring treatment effects within latent classes. A simulation…
ERIC Educational Resources Information Center
Wilson, Mark
A psychometric model called Saltus, which represents the qualitative aspects of hierarchical development in a form applicable to additive measurement, was applied. Both Piaget's theory of cognitive development and Gagne's theory of learning hierarchies were used to establish the common features of hierarchical development: (1) gappiness--the…
Global Precipitation Measurement (GPM) Ground Validation (GV) Science Implementation Plan
NASA Technical Reports Server (NTRS)
Petersen, Walter A.; Hou, Arthur Y.
2008-01-01
For pre-launch algorithm development and post-launch product evaluation, Global Precipitation Measurement (GPM) Ground Validation (GV) goes beyond direct comparisons of surface rain rates between ground and satellite measurements to provide the means for improving retrieval algorithms and model applications. Three approaches to GPM GV include direct statistical validation (at the surface), precipitation physics validation (in a vertical column), and integrated science validation (4-dimensional). These three approaches support five themes: core satellite error characterization; constellation satellite validation; development of physical models of snow, cloud water, and mixed phase; development of cloud-resolving models (CRMs) and land-surface models to bridge observations and algorithms; and development of coupled CRM-land surface modeling for basin-scale water budget studies and natural hazard prediction. This presentation describes the implementation of these approaches.
Development and testing of meteorology and air dispersion models for Mexico City
NASA Astrophysics Data System (ADS)
Williams, M. D.; Brown, M. J.; Cruz, X.; Sosa, G.; Streit, G.
Los Alamos National Laboratory and Instituto Mexicano del Petróleo are completing a joint study of options for improving air quality in Mexico City. We have modified a three-dimensional, prognostic, higher-order turbulence model for atmospheric circulation (HOTMAC) and a Monte Carlo dispersion and transport model (RAPTAD) to treat domains that include an urbanized area. We used the meteorological model to drive models which describe the photochemistry and air transport and dispersion. The photochemistry modeling is described in a separate paper. We tested the model against routine measurements and those of a major field program. During the field program, measurements included: (1) lidar measurements of aerosol transport and dispersion, (2) aircraft measurements of winds, turbulence, and chemical species aloft, (3) aircraft measurements of skin temperatures, and (4) Tethersonde measurements of winds and ozone. We modified the meteorological model to include provisions for time-varying synoptic-scale winds, adjustments for local wind effects, and detailed surface-coverage descriptions. We developed a new method to define mixing-layer heights based on model outputs. The meteorology and dispersion models were able to provide reasonable representations of the measurements and to define the sources of some of the major uncertainties in the model-measurement comparisons.
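The Monte Carlo transport-and-dispersion component can be illustrated with a minimal Lagrangian random-walk sketch. A uniform wind and constant eddy diffusivity are assumed here purely for illustration; RAPTAD itself is driven by the full HOTMAC turbulence fields:

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt, steps = 5000, 10.0, 360            # particles, time step (s), 1 h of transport
u, v = 2.0, 0.5                           # hypothetical mean wind components (m/s)
K = 50.0                                  # hypothetical horizontal eddy diffusivity (m^2/s)

xy = np.zeros((n, 2))                     # release all particles at the source
for _ in range(steps):
    drift = np.array([u, v]) * dt
    diffusion = np.sqrt(2.0 * K * dt) * rng.standard_normal((n, 2))
    xy += drift + diffusion               # advect by the mean wind, disperse by turbulence

# A concentration field follows from binning particle positions,
# e.g. np.histogram2d(xy[:, 0], xy[:, 1], bins=50)
```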
Rapid Model Fabrication and Testing for Aerospace Vehicles
NASA Technical Reports Server (NTRS)
Buck, Gregory M.
2000-01-01
Advanced methods for rapid fabrication and instrumentation of hypersonic wind tunnel models are being developed and evaluated at NASA Langley Research Center. Rapid aeroheating model fabrication and measurement techniques using investment casting of ceramic test models and thermographic phosphors are reviewed. More accurate model casting techniques for fabrication of benchmark metal and ceramic test models are being developed using a combination of rapid prototype patterns and investment casting. White light optical scanning is used for coordinate measurements to evaluate the fabrication process and verify model accuracy to +/- 0.002 inches. Higher-temperature (<210 °C) luminescent coatings are also being developed for simultaneous pressure and temperature mapping, providing global pressure as well as global aeroheating measurements. Together these techniques will provide a more rapid and complete experimental aerodynamic and aerothermodynamic database for future aerospace vehicles.
Kerckhoffs, Jules; Hoek, Gerard; Vlaanderen, Jelle; van Nunen, Erik; Messier, Kyle; Brunekreef, Bert; Gulliver, John; Vermeulen, Roel
2017-11-01
Land-use regression (LUR) models for ultrafine particles (UFP) and Black Carbon (BC) in urban areas have been developed using short-term stationary monitoring or mobile platforms in order to capture the high variability of these pollutants. However, little is known about the comparability of predictions from mobile and short-term stationary models, and especially about the validity of these models for assessing residential exposures and the robustness of model predictions developed in different campaigns. We used an electric car to collect mobile measurements (n = 5236 unique road segments) and short-term stationary measurements (3 × 30 min, n = 240) of UFP and BC in three Dutch cities (Amsterdam, Utrecht, Maastricht) in 2014-2015. Predictions of LUR models based on mobile measurements were compared to (i) measured concentrations at the short-term stationary sites, (ii) LUR model predictions based on short-term stationary measurements at 1500 random addresses in the three cities, (iii) externally obtained home outdoor measurements (3 × 24 h samples; n = 42), and (iv) predictions of a LUR model developed from a 2013 mobile campaign in two cities (Amsterdam, Rotterdam). Despite the poor model R² of 15%, the ability of mobile UFP models to predict measurements with longer averaging times increased substantially, from 36% for short-term stationary measurements to 57% for home outdoor measurements. In contrast, the mobile BC model predicted only 14% of the variation at the short-term stationary sites and also 14% at the home outdoor sites. Models based upon mobile and short-term stationary monitoring provided fairly highly correlated predictions of UFP concentrations at 1500 randomly selected addresses in the three Dutch cities (R² = 0.64). We found higher UFP predictions (of about 30%) from mobile models compared with short-term model predictions and home outdoor measurements, with no clear geospatial patterns. The mobile model for UFP was stable across different settings, as the model predicted concentration levels highly correlated with predictions made by a previously developed LUR model with another spatial extent and in a different year at the 1500 random addresses (R² = 0.80). In conclusion, mobile monitoring provided robust LUR models for UFP, valid for use in epidemiological studies.
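A land-use regression of the kind described is, at its core, an ordinary least-squares fit of measured concentrations on GIS-derived predictors. The predictor names and synthetic data below are assumptions for illustration, not the variables selected in the study:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n_sites = 240                                     # e.g. short-term stationary sites
X = np.column_stack([
    rng.uniform(0, 5e4, n_sites),                 # hypothetical: traffic intensity within 100 m
    rng.uniform(0, 1, n_sites),                   # hypothetical: industrial land-use fraction, 1 km buffer
    rng.uniform(0, 2e4, n_sites),                 # hypothetical: population within 1 km
])
ufp = 8000 + 0.1 * X[:, 0] + 4000 * X[:, 1] + rng.normal(0, 3000, n_sites)  # synthetic UFP, particles/cm^3

lur = LinearRegression().fit(X, ufp)
r2 = lur.score(X, ufp)   # in-sample R^2; hold-out sites or external home-outdoor data test validity
```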
HSR Model Deformation Measurements from Subsonic to Supersonic Speeds
NASA Technical Reports Server (NTRS)
Burner, A. W.; Erickson, G. E.; Goodman, W. L.; Fleming, G. A.
1999-01-01
This paper describes the video model deformation technique (VMD) used at five NASA facilities and the projection moire interferometry (PMI) technique used at two NASA facilities. Comparisons between the two techniques for model deformation measurements are provided. Facilities at NASA-Ames and NASA-Langley where deformation measurements have been made are presented. Examples of HSR model deformation measurements from the Langley Unitary Wind Tunnel, Langley 16-foot Transonic Wind Tunnel, and the Ames 12-foot Pressure Tunnel are presented. A study to improve and develop new targeting schemes at the National Transonic Facility is also described. The consideration of milled targets for future HSR models is recommended when deformation measurements are expected to be required. Finally, future development work for VMD and PMI is addressed.
Development of an Intelligent Videogrammetric Wind Tunnel Measurement System
NASA Technical Reports Server (NTRS)
Graves, Sharon S.; Burner, Alpheus W.
2004-01-01
A videogrammetric technique developed at NASA Langley Research Center has been used at five NASA facilities at the Langley and Ames Research Centers for deformation measurements on a number of sting mounted and semispan models. These include high-speed research and transport models tested over a wide range of aerodynamic conditions including subsonic, transonic, and supersonic regimes. The technique, based on digital photogrammetry, has been used to measure model attitude, deformation, and sting bending. In addition, the technique has been used to study model injection rate effects and to calibrate and validate methods for predicting static aeroelastic deformations of wind tunnel models. An effort is currently underway to develop an intelligent videogrammetric measurement system that will be both useful and usable in large production wind tunnels while providing accurate data in a robust and timely manner. Designed to encode a higher degree of knowledge through computer vision, the system features advanced pattern recognition techniques to improve automated location and identification of targets placed on the wind tunnel model to be used for aerodynamic measurements such as attitude and deformation. This paper will describe the development and strategy of the new intelligent system that was used in a recent test at a large transonic wind tunnel.
An introduction to the partial credit model for developing nursing assessments.
Fox, C
1999-11-01
The partial credit model, which is a special case of the Rasch measurement model, was presented as a useful way to develop and refine complex nursing assessments. The advantages of the Rasch model over the classical psychometric model were presented, including the lack of bias in the measurement process, the ability to highlight those items in need of refinement, the provision of information on congruence between the data and the model, and feedback on the usefulness of the response categories. The partial credit model was introduced as a way to develop complex nursing assessments, such as performance-based assessments, because of the model's ability to accommodate a variety of scoring procedures. Finally, an application of the partial credit model was illustrated using the Practical Knowledge Inventory for Nurses, a paper-and-pencil instrument that measures on-the-job decision-making for nurses.
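For reference, the partial credit model in its standard (Masters, 1982) form gives the probability that a person with ability θ responds in category x of an item with m + 1 ordered categories and step difficulties δ_k:

```latex
P(X = x \mid \theta) \;=\;
\frac{\exp\!\Big(\sum_{k=0}^{x} (\theta - \delta_k)\Big)}
     {\sum_{h=0}^{m} \exp\!\Big(\sum_{k=0}^{h} (\theta - \delta_k)\Big)},
\qquad x = 0, 1, \dots, m,
\quad \text{with } \sum_{k=0}^{0} (\theta - \delta_k) \equiv 0 .
```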
ISS Plasma Interaction: Measurements and Modeling
NASA Technical Reports Server (NTRS)
Barsamian, H.; Mikatarian, R.; Alred, J.; Minow, J.; Koontz, S.
2004-01-01
Ionospheric plasma interaction effects on the International Space Station (ISS) are discussed in this paper. The large structure and high voltage arrays of the ISS represent a complex system interacting with the LEO plasma. Discharge current measurements made by the Plasma Contactor Units and potential measurements made by the Floating Potential Probe delineate charging and magnetic induction effects on the ISS. Based on a theoretical and physical understanding of these interaction phenomena, the ISS Plasma Interaction Model has been developed. The model includes magnetic induction effects, the interaction of the high voltage solar arrays with ionospheric plasma, and the contribution of other conductive areas on the ISS. Limited verification of the model has been performed by comparing Floating Potential Probe measurement data to simulations. The ISS plasma interaction model will be further tested and verified as measurements from the Floating Potential Measurement Unit become available and construction of the ISS continues.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fort, James A.; Pfund, David M.; Sheen, David M.
2007-04-01
The MFDRC was formed in 1998 to advance the state-of-the-art in simulating multiphase turbulent flows by developing advanced computational models for gas-solid flows that are experimentally validated over a wide range of industrially relevant conditions. The goal was to transfer the resulting validated models to interested US commercial CFD software vendors, who would then propagate the models as part of new code versions to their customers in the US chemical industry. Since the lack of detailed data sets at industrially relevant conditions is the major roadblock to developing and validating multiphase turbulence models, a significant component of the work involved flow measurements on an industrial-scale riser contributed by Westinghouse, which was subsequently installed at SNL. Model comparisons were performed against these datasets by LANL. A parallel Office of Industrial Technology (OIT) project within the consortium made similar comparisons between riser measurements and models at NETL. Measured flow quantities of interest included volume fraction, velocity, and velocity-fluctuation profiles for both gas and solid phases at various locations in the riser. Some additional techniques were required for these measurements beyond what was currently available. PNNL's role on the project was to work with the SNL experimental team to develop and test two new measurement techniques, acoustic tomography and millimeter-wave velocimetry. Acoustic tomography is a promising technique for gas-solid flow measurements in risers and PNNL has substantial related experience in this area. PNNL is also active in developing millimeter wave imaging techniques, and this technology presents an additional approach to make desired measurements. PNNL supported the advanced diagnostics development part of this project by evaluating these techniques and then by adapting and developing the selected technology to bulk gas-solids flows and by implementing them for testing in the SNL riser testbed.
Bidirectional reflectance modeling of non-homogeneous plant canopies
NASA Technical Reports Server (NTRS)
Norman, John M.
1986-01-01
The objective of this research is to develop a 3-dimensional radiative transfer model for predicting the bidirectional reflectance distribution function (BRDF) of heterogeneous vegetation canopies. Leaf bidirectional reflectance and transmittance distribution functions were measured for corn and soybean leaves. The measurements clearly show that leaves are complex scatterers and considerable specular reflectance is possible. Because of the character of leaf reflectance, true leaf reflectance is larger than the nadir reflectances that are normally used to represent leaves. A 3-dimensional reflectance model, named BIGAR (Bidirectional General Array Model), was developed and compared with measurements from corn and soybean. The model is based on the concept that heterogeneous canopies can be described by a combination of many subcanopies, which contain all the foliage, and that these subcanopy envelopes can be characterized by ellipsoids of various sizes and shapes. The model/measurement comparison results indicate that this relatively simple model captures the essential character of row crop BRDFs. Finally, two soil BRDF models were developed: one represents soil particles as rectangular blocks and the other represents soil particles as spheres. The sphere model was found to be superior.
NASA Technical Reports Server (NTRS)
Sharma, M. M.
1979-01-01
An assessment and determination of technology requirements for developing a demonstration model to evaluate feasibility of practical cryogenic liquid level, pressure, and temperature sensors is presented. The construction of a demonstration model to measure characteristics of the selected sensor and to develop test procedures are discussed as well as the development of an appropriate electronic subsystem to operate the sensors.
ERIC Educational Resources Information Center
Chaidi, Thirachai; Damrongpanich, Sunthorapot
2016-01-01
The purposes of this study were to develop a model to measure the belief in Buddhism of junior high school students at Chiang Rai Buddhist Scripture School, and to determine construct validity of the model for measuring the belief in Buddhism by using Multitrait-Multimethod analysis. The samples were 590 junior high school students at Buddhist…
Wong, Sabrina T; Yin, Delu; Bhattacharyya, Onil; Wang, Bin; Liu, Liqun; Chen, Bowen
2010-11-18
China has had no effective and systematic information system to provide guidance for strengthening PHC (Primary Health Care) or account to citizens on progress. We report on the development of the China results-based Logic Model for Community Health Facilities and Stations (CHS) and a set of relevant PHC indicators intended to measure CHS priorities. We adapted the PHC Results Based Logic Model developed in Canada and current work conducted in the community health system in China to create the China CHS Logic Model framework. We used a staged approach by first constructing the framework and indicators and then validating their content through an interactive process involving policy analysis, critical review of relevant literature and multiple stakeholder consultation. The China CHS Logic Model includes inputs, activities, outputs and outcomes with a total of 287 detailed performance indicators. In these indicators, 31 indicators measure inputs, 64 measure activities, 105 measure outputs, and 87 measure immediate (n = 65), intermediate (n = 15), or final (n = 7) outcomes. A Logic Model framework can be useful in planning, implementation, analysis and evaluation of PHC at a system and service level. The development and content validation of the China CHS Logic Model and subsequent indicators provides a means for stronger accountability and a clearer sense of overall direction and purpose needed to renew and strengthen the PHC system in China. Moreover, this work will be useful in moving towards developing a PHC information system and performance measurement across districts in urban China, and guiding the pursuit of quality in PHC.
2010-01-01
Background China has had no effective and systematic information system to provide guidance for strengthening PHC (Primary Health Care) or account to citizens on progress. We report on the development of the China results-based Logic Model for Community Health Facilities and Stations (CHS) and a set of relevant PHC indicators intended to measure CHS priorities. Methods We adapted the PHC Results Based Logic Model developed in Canada and current work conducted in the community health system in China to create the China CHS Logic Model framework. We used a staged approach by first constructing the framework and indicators and then validating their content through an interactive process involving policy analysis, critical review of relevant literature and multiple stakeholder consultation. Results The China CHS Logic Model includes inputs, activities, outputs and outcomes with a total of 287 detailed performance indicators. In these indicators, 31 indicators measure inputs, 64 measure activities, 105 measure outputs, and 87 measure immediate (n = 65), intermediate (n = 15), or final (n = 7) outcomes. Conclusion A Logic Model framework can be useful in planning, implementation, analysis and evaluation of PHC at a system and service level. The development and content validation of the China CHS Logic Model and subsequent indicators provides a means for stronger accountability and a clearer sense of overall direction and purpose needed to renew and strengthen the PHC system in China. Moreover, this work will be useful in moving towards developing a PHC information system and performance measurement across districts in urban China, and guiding the pursuit of quality in PHC. PMID:21087516
Multivariate Statistical Models for Predicting Sediment Yields from Southern California Watersheds
Gartner, Joseph E.; Cannon, Susan H.; Helsel, Dennis R.; Bandurraga, Mark
2009-01-01
Debris-retention basins in Southern California are frequently used to protect communities and infrastructure from the hazards of flooding and debris flow. Empirical models that predict sediment yields are used to determine the size of the basins. Such models have been developed using analyses of records of the amount of material removed from debris retention basins, associated rainfall amounts, measures of watershed characteristics, and wildfire extent and history. In this study we used multiple linear regression methods to develop two updated empirical models to predict sediment yields for watersheds located in Southern California. The models are based on both new and existing measures of volume of sediment removed from debris retention basins, measures of watershed morphology, and characterization of burn severity distributions for watersheds located in Ventura, Los Angeles, and San Bernardino Counties. The first model presented reflects conditions in watersheds located throughout the Transverse Ranges of Southern California and is based on volumes of sediment measured following single storm events with known rainfall conditions. The second model presented is specific to conditions in Ventura County watersheds and was developed using volumes of sediment measured following multiple storm events. To relate sediment volumes to triggering storm rainfall, a rainfall threshold was developed to identify storms likely to have caused sediment deposition. A measured volume of sediment deposited by numerous storms was parsed among the threshold-exceeding storms based on relative storm rainfall totals. The predictive strength of the two models developed here, and of previously-published models, was evaluated using a test dataset consisting of 65 volumes of sediment yields measured in Southern California. The evaluation indicated that the model developed using information from single storm events in the Transverse Ranges best predicted sediment yields for watersheds in San Bernardino, Los Angeles, and Ventura Counties. This model predicts sediment yield as a function of the peak 1-hour rainfall, the watershed area burned by the most recent fire (at all severities), the time since the most recent fire, watershed area, average gradient, and relief ratio. The model that reflects conditions specific to Ventura County watersheds consistently under-predicted sediment yields and is not recommended for application. Some previously-published models performed reasonably well, while others either under-predicted sediment yields or had a larger range of errors in the predicted sediment yields.
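A regression of the general form described, relating log-transformed sediment yield to storm and watershed predictors, can be sketched as follows. The predictor set mirrors the variables named above, but the synthetic data and coefficients are illustrative, not the published model:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 80                                           # hypothetical basin-cleanout records
X = np.column_stack([
    np.log10(rng.uniform(5, 40, n)),             # peak 1-hour rainfall (mm)
    np.log10(rng.uniform(0.1, 10, n)),           # recently burned area (km^2)
    rng.uniform(0.1, 5, n),                      # years since the most recent fire
    np.log10(rng.uniform(0.5, 25, n)),           # watershed area (km^2)
    rng.uniform(0.2, 0.7, n),                    # average gradient
])
log_yield = 2.0 + 1.2*X[:, 0] + 0.8*X[:, 1] - 0.3*X[:, 2] + 0.9*X[:, 3] + rng.normal(0, 0.3, n)

model = LinearRegression().fit(X, log_yield)
pred_m3 = 10 ** model.predict(X)                 # back-transform to sediment volume (m^3)
```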
Whittle, Rebecca; Peat, George; Belcher, John; Collins, Gary S; Riley, Richard D
2018-05-18
Measurement error in predictor variables may threaten the validity of clinical prediction models. We sought to evaluate the possible extent of the problem. A secondary objective was to examine whether predictors are measured at the intended moment of model use. A systematic search of Medline was used to identify a sample of articles reporting the development of a clinical prediction model published in 2015. After screening according to predefined inclusion criteria, information on predictors, strategies to control for measurement error, and the intended moment of model use was extracted. Susceptibility to measurement error for each predictor was classified as low or high risk. Thirty-three studies were reviewed, including 151 different predictors in the final prediction models. Fifty-one (33.7%) predictors were categorised as at high risk of error; however, this was not accounted for in the model development. Only 8 (24.2%) studies explicitly stated the intended moment of model use and when the predictors were measured. Reporting of measurement error and intended moment of model use is poor in prediction model studies. There is a need to identify circumstances where ignoring measurement error in prediction models is consequential and whether accounting for the error will improve the predictions.
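The consequence the authors warn about can be demonstrated with a small simulation: adding classical measurement error to a predictor attenuates its apparent effect and degrades discrimination. A minimal sketch with a synthetic logistic outcome (not data from any reviewed study):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 5000
x_true = rng.standard_normal(n)                       # true predictor value at the moment of model use
y = rng.binomial(1, 1 / (1 + np.exp(-1.5 * x_true)))  # outcome driven by the true value

x_noisy = x_true + rng.normal(0, 1.0, n)              # classical measurement error, reliability ~ 0.5

auc_true = roc_auc_score(y, LogisticRegression().fit(x_true.reshape(-1, 1), y)
                          .predict_proba(x_true.reshape(-1, 1))[:, 1])
auc_noisy = roc_auc_score(y, LogisticRegression().fit(x_noisy.reshape(-1, 1), y)
                           .predict_proba(x_noisy.reshape(-1, 1))[:, 1])
# auc_noisy is systematically lower than auc_true, illustrating the attenuation problem
```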
NASA Astrophysics Data System (ADS)
Fulmer, Gavin W.; Liang, Ling L.
2013-02-01
This study tested a student survey to detect differences in instruction between teachers in a modeling-based science program and comparison group teachers. The Instructional Activities Survey measured teachers' frequency of modeling, inquiry, and lecture instruction. Factor analysis and Rasch modeling identified three subscales, Modeling and Reflecting, Communicating and Relating, and Investigative Inquiry. As predicted, treatment group teachers engaged in modeling and inquiry instruction more than comparison teachers, with effect sizes between 0.55 and 1.25. This study demonstrates the utility of student report data in measuring teachers' classroom practices and in evaluating outcomes of a professional development program.
Models and techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Meyer, J. F.
1977-01-01
Models, measures, and techniques were developed for evaluating the effectiveness of aircraft computing systems. The concept of effectiveness involves aspects of system performance, reliability, and worth. Specifically, a model hierarchy was developed in detail at the mission, functional task, and computational task levels. An appropriate class of stochastic models, which serve as the bottom-level models in the hierarchical scheme, was investigated. A unified measure of effectiveness called 'performability' was defined and formulated.
ERIC Educational Resources Information Center
Peterson, Christina Hamme; Gischlar, Karen L.; Peterson, N. Andrew
2017-01-01
Measures that accurately capture the phenomenon are critical to research and practice in group work. The vast majority of group-related measures were developed using the reflective measurement model rooted in classical test theory (CTT). Depending on the construct definition and the measure's purpose, the reflective model may not always be the…
2010-01-01
Background The measurement of healthcare provider performance is becoming more widespread. Physicians have been guarded about performance measurement, in part because the methodology for comparative measurement of care quality is underdeveloped. Comprehensive quality improvement will require comprehensive measurement, implying the aggregation of multiple quality metrics into composite indicators. Objective To present a conceptual framework to develop comprehensive, robust, and transparent composite indicators of pediatric care quality, and to highlight aspects specific to quality measurement in children. Methods We reviewed the scientific literature on composite indicator development, health systems, and quality measurement in the pediatric healthcare setting. Frameworks were selected for explicitness and applicability to a hospital-based measurement system. Results We synthesized various frameworks into a comprehensive model for the development of composite indicators of quality of care. Among its key premises, the model proposes identifying structural, process, and outcome metrics for each of the Institute of Medicine's six domains of quality (safety, effectiveness, efficiency, patient-centeredness, timeliness, and equity) and presents a step-by-step framework for embedding the quality of care measurement model into composite indicator development. Conclusions The framework presented offers researchers an explicit path to composite indicator development. Without a scientifically robust and comprehensive approach to measurement of the quality of healthcare, performance measurement will ultimately fail to achieve its quality improvement goals. PMID:20181129
Development and fabrication of the Virginia skid-resistance measurement vehicle (model 2).
DOT National Transportation Integrated Search
1970-01-01
The inefficiency of the Virginia Highway Research Council, Model 1, skid measurement trailer, and the increasing effort expended by the American Society for Testing and Materials toward the development of more stringent specifications for pavement sk...
ERIC Educational Resources Information Center
Lehrer, Richard; Kim, Min-joung; Schauble, Leona
2007-01-01
New capabilities in "TinkerPlots 2.0" supported the conceptual development of fifth- and sixth-grade students as they pursued several weeks of instruction that emphasized data modeling. The instruction highlighted links between data analysis, chance, and modeling in the context of describing and explaining the distributions of measures that result…
Magnetometer-augmented IMU simulator: in-depth elaboration.
Brunner, Thomas; Lauffenburger, Jean-Philippe; Changey, Sébastien; Basset, Michel
2015-03-04
The location of objects is a growing research topic due, for instance, to the expansion of civil drones and intelligent vehicles. This expansion was made possible through the development of microelectromechanical systems (MEMS): inexpensive, miniaturized inertial sensors. In this context, this article describes the development of a new simulator which generates sensor measurements for a given input trajectory, allowing the comparison of pose estimation algorithms. To develop this simulator, the measurement equations of every type of sensor have to be determined analytically. To achieve this objective, classical kinematic equations are used for the more common sensors, i.e., accelerometers and rate gyroscopes. Since MEMS inertial measurement units (IMUs) are nowadays generally magnetometer-augmented, an absolute world magnetic model is also implemented. After determination of the perfect measurements (through the error-free sensor models), realistic error models are developed to simulate real IMU behavior. Finally, the developed simulator is subjected to different validation tests.
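The error-modeling step mentioned here typically amounts to adding a scale-factor error, a constant bias, and white noise to each ideal sensor channel. A minimal sketch with made-up error parameters (not the error models of the simulator itself):

```python
import numpy as np

def corrupt(ideal, bias, scale, noise_std, rng):
    """Apply a simple IMU error model: scale-factor error, constant bias, white noise."""
    return (1.0 + scale) * ideal + bias + rng.normal(0.0, noise_std, ideal.shape)

rng = np.random.default_rng(5)
t = np.arange(0.0, 10.0, 0.01)                       # 100 Hz trajectory
ideal_gyro = np.column_stack([0.1*np.sin(t), np.zeros_like(t), np.zeros_like(t)])  # rad/s
ideal_accel = np.tile([0.0, 0.0, -9.81], (t.size, 1))                              # m/s^2, static case

gyro_meas  = corrupt(ideal_gyro,  bias=0.01, scale=0.002, noise_std=0.005, rng=rng)
accel_meas = corrupt(ideal_accel, bias=0.05, scale=0.001, noise_std=0.02,  rng=rng)
```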
Magnetometer-Augmented IMU Simulator: In-Depth Elaboration
Brunner, Thomas; Lauffenburger, Jean-Philippe; Changey, Sébastien; Basset, Michel
2015-01-01
The location of objects is a growing research topic due, for instance, to the expansion of civil drones and intelligent vehicles. This expansion was made possible through the development of microelectromechanical systems (MEMS): inexpensive, miniaturized inertial sensors. In this context, this article describes the development of a new simulator which generates sensor measurements for a given input trajectory, allowing the comparison of pose estimation algorithms. To develop this simulator, the measurement equations of every type of sensor have to be determined analytically. To achieve this objective, classical kinematic equations are used for the more common sensors, i.e., accelerometers and rate gyroscopes. Since MEMS inertial measurement units (IMUs) are nowadays generally magnetometer-augmented, an absolute world magnetic model is also implemented. After determination of the perfect measurements (through the error-free sensor models), realistic error models are developed to simulate real IMU behavior. Finally, the developed simulator is subjected to different validation tests. PMID:25746095
Glass microneedles for force measurements: a finite-element analysis model
Ayittey, Peter N.; Walker, John S.; Rice, Jeremy J.; de Tombe, Pieter P.
2010-01-01
Changes in developed force (0.1–3.0 μN) observed during contraction of single myofibrils in response to rapidly changing calcium concentrations can be measured using glass microneedles. These microneedles are calibrated for stiffness and deflect in response to developed myofibril force. The precision and accuracy of kinetic measurements are highly dependent on the structural and mechanical characteristics of the microneedles, which are generally assumed to have a linear force–deflection relationship. We present a finite-element analysis (FEA) model used to simulate the effects of measurable geometry on stiffness as a function of applied force and validate our model with measured needle properties. In addition, we developed a simple heuristic constitutive equation that best describes the stiffness of the range of microneedles used and define the limits of the geometry parameters within which our predictions hold true. Our model also maps a relation between the geometry parameters and the natural frequencies in air, enabling optimum parametric combinations for microneedle fabrication that support more reliable force measurement in fluids and physiological environments. We propose the use of this model to aid in the design of microneedles to improve calibration time, reproducibility, and precision for measuring myofibrillar, cellular, and supramolecular kinetic forces. PMID:19104827
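For a straight, untapered needle of circular cross-section, the linear force-deflection relationship usually assumed corresponds to the classical Euler-Bernoulli cantilever expressions below, which also give the first natural frequency that the FEA model relates to geometry. These are textbook beam-theory results, not the FEA model of the paper (E is Young's modulus, d the needle diameter, L its length, ρ its density, and A its cross-sectional area):

```latex
k \;=\; \frac{F}{\delta} \;=\; \frac{3EI}{L^{3}},
\qquad I \;=\; \frac{\pi d^{4}}{64},
\qquad f_{1} \;=\; \frac{(1.875)^{2}}{2\pi}\sqrt{\frac{EI}{\rho A L^{4}}}
```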
Sharing behavioral data through a grid infrastructure using data standards
Min, Hua; Ohira, Riki; Collins, Michael A; Bondy, Jessica; Avis, Nancy E; Tchuvatkina, Olga; Courtney, Paul K; Moser, Richard P; Shaikh, Abdul R; Hesse, Bradford W; Cooper, Mary; Reeves, Dianne; Lanese, Bob; Helba, Cindy; Miller, Suzanne M; Ross, Eric A
2014-01-01
Objective In an effort to standardize behavioral measures and their data representation, the present study develops a methodology for incorporating measures found in the National Cancer Institute's (NCI) grid-enabled measures (GEM) portal, a repository for behavioral and social measures, into the cancer data standards registry and repository (caDSR). Methods The methodology consists of four parts for curating GEM measures into the caDSR: (1) develop unified modeling language (UML) models for behavioral measures; (2) create common data elements (CDE) for UML components; (3) bind CDE with concepts from the NCI thesaurus; and (4) register CDE in the caDSR. Results UML models have been developed for four GEM measures, which have been registered in the caDSR as CDE. New behavioral concepts related to these measures have been created and incorporated into the NCI thesaurus. Best practices for representing measures using UML models have been utilized in the practice (eg, caDSR). One dataset based on a GEM-curated measure is available for use by other systems and users connected to the grid. Conclusions Behavioral and population science data can be standardized by using and extending current standards. A new branch of CDE for behavioral science was developed for the caDSR. It expands the caDSR domain coverage beyond the clinical and biological areas. In addition, missing terms and concepts specific to the behavioral measures addressed in this paper were added to the NCI thesaurus. A methodology was developed and refined for curation of behavioral and population science data. PMID:24076749
NASA Astrophysics Data System (ADS)
Jones, S. I.; Uritsky, V. M.; Davila, J. M.
2017-12-01
In the absence of reliable coronal magnetic field measurements, solar physicists have worked for several decades to develop techniques for extrapolating photospheric magnetic field measurements into the solar corona and/or heliosphere. The products of these efforts tend to be very sensitive to variations in the photospheric measurements, such that the uncertainty in the photospheric measurements introduces significant uncertainty into the coronal and heliospheric models needed to predict such things as solar wind speed, IMF polarity at Earth, and CME propagation. Ultimately, the reason for the sensitivity of the model to the boundary conditions is that the model is trying to extract a great deal of information from a relatively small amount of data. We have published in recent years about a new method we are developing to use morphological information gleaned from coronagraph images to constrain models of the global coronal magnetic field. In our approach, we treat the photospheric measurements as approximations and use an optimization algorithm to iteratively find a global coronal model that best matches both the photospheric measurements and quasi-linear features observed in polarization brightness coronagraph images. Here we will summarize the approach we have developed and present recent progress in optimizing PFSS models based on GONG magnetograms and MLSO K-Cor images.
Schell, Greggory J; Lavieri, Mariel S; Stein, Joshua D; Musch, David C
2013-12-21
Open-angle glaucoma (OAG) is a prevalent, degenerative ocular disease which can lead to blindness without proper clinical management. The tests used to assess disease progression are susceptible to process and measurement noise. The aim of this study was to develop a methodology which accounts for the inherent noise in the data and improves the identification of significant disease progression. Longitudinal observations from the Collaborative Initial Glaucoma Treatment Study (CIGTS) were used to parameterize and validate a Kalman filter model and logistic regression function. The Kalman filter estimates the true values of biomarkers associated with OAG and forecasts future values of these variables. We develop two logistic regression models via generalized estimating equations (GEE) for calculating the probability of experiencing significant OAG progression: one model based on the raw measurements from CIGTS and another model based on the Kalman filter estimates of the CIGTS data. Receiver operating characteristic (ROC) curves and associated area under the ROC curve (AUC) estimates are calculated using cross-fold validation. The logistic regression model developed using Kalman filter estimates as data input achieves higher sensitivity and specificity than the model developed using raw measurements. The mean AUC for the Kalman filter-based model is 0.961, while the mean AUC for the raw measurements model is 0.889. Hence, using the probability function generated via Kalman filter estimates and GEE for logistic regression, we are able to more accurately classify patients and instances as experiencing significant OAG progression. A Kalman filter approach for estimating the true value of OAG biomarkers resulted in data input which improved the accuracy of a logistic regression classification model compared to a model using raw measurements as input. This methodology accounts for process and measurement noise to enable improved discrimination between progression and nonprogression in chronic diseases.
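The filtering step described can be sketched for a single biomarker (for example, a visual-field index) with a random-walk state model. The process and measurement noise variances below are placeholders, not the CIGTS-derived parameters:

```python
import numpy as np

def kalman_filter_series(z, q=0.5, r=4.0, x0=0.0, p0=10.0):
    """Scalar Kalman filter: random-walk state, noisy measurements z, returns filtered estimates."""
    x, p = x0, p0
    estimates = []
    for zk in z:
        p = p + q                      # predict (state assumed a random walk)
        k = p / (p + r)                # Kalman gain
        x = x + k * (zk - x)           # update with the new measurement
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Hypothetical noisy biomarker series drifting downward (progression)
rng = np.random.default_rng(6)
true = np.linspace(0.0, -6.0, 20)
observed = true + rng.normal(0, 2.0, 20)
filtered = kalman_filter_series(observed)   # smoother trend to feed the logistic regression classifier
```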
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roth, Amir; Goldwasser, David; Parker, Andrew
The OpenStudio software development kit has played a significant role in the adoption of the EnergyPlus whole building energy modeling engine and in the development and launch of new applications that use EnergyPlus for a variety of purposes, from design to auditing to code compliance and management of large portfolios. One of the most powerful features of the OpenStudio platform is Measure, a scripting facility similar to Excel's Visual Basic macros. Measures can be used to apply energy conservation measures to models--hence the name--to create reports and visualizations, and even to sew together custom workflows. Measures automate tedious tasks, increasing modeler productivity and reducing error. Measures have also become a currency in the OpenStudio tools ecosystem, a way to codify knowledge and protocol and transfer it from one modeler to another, either within an organization or within the global modeling community. This paper describes some of the many applications of Measures.
Spatial Modeling for Resources Framework (SMRF)
USDA-ARS?s Scientific Manuscript database
Spatial Modeling for Resources Framework (SMRF) was developed by Dr. Scott Havens at the USDA Agricultural Research Service (ARS) in Boise, ID. SMRF was designed to increase the flexibility of taking measured weather data and distributing the point measurements across a watershed. SMRF was developed...
Improved modeling of GaN HEMTs for predicting thermal and trapping-induced-kink effects
NASA Astrophysics Data System (ADS)
Jarndal, Anwar; Ghannouchi, Fadhel M.
2016-09-01
In this paper, an improved modeling approach is developed and validated for GaN high electron mobility transistors (HEMTs). The proposed analytical model accurately simulates the drain current and its inherent trapping and thermal effects. A genetic-algorithm-based procedure is developed to automatically find the fitting parameters of the model. The developed modeling technique is implemented on a packaged GaN-on-Si HEMT and validated by DC and small-/large-signal RF measurements. The model is also employed in designing and realizing a switch-mode inverse class-F power amplifier. The amplifier simulations showed very good agreement with RF large-signal measurements.
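The parameter-extraction step can be illustrated by fitting a simple empirical drain-current expression to measured I-V points with an evolutionary optimizer. The Angelov-style tanh model, the bias points, and the use of SciPy's differential evolution (a close relative of a genetic algorithm) are illustrative choices, not the authors' exact model or procedure:

```python
import numpy as np
from scipy.optimize import differential_evolution

def ids_model(params, vgs, vds):
    """Simple empirical drain-current model: Ipk*(1 + tanh(P1*(Vgs - Vpk)))*tanh(alpha*Vds)."""
    ipk, p1, vpk, alpha = params
    return ipk * (1.0 + np.tanh(p1 * (vgs - vpk))) * np.tanh(alpha * vds)

# Hypothetical measured bias points and drain currents (A)
vgs = np.array([-3.0, -2.5, -2.0, -1.5, -1.0])
vds = np.full_like(vgs, 10.0)
ids_meas = np.array([0.02, 0.10, 0.35, 0.70, 0.95])

def cost(params):
    return np.sum((ids_model(params, vgs, vds) - ids_meas) ** 2)

bounds = [(0.1, 2.0), (0.1, 5.0), (-4.0, 0.0), (0.1, 5.0)]   # Ipk, P1, Vpk, alpha
result = differential_evolution(cost, bounds, seed=0)
fitted_params = result.x
```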
ERIC Educational Resources Information Center
Porfeli, Erik J.; Richard, George V.; Savickas, Mark L.
2010-01-01
An empirical measurement model for interest inventory construction uses internal criteria whereas an inductive measurement model uses external criteria. The empirical and inductive measurement models are compared and contrasted, and then the two models are assessed through tests of the effectiveness and economy of scales for the Medical Specialty…
The link provided access to all the datasets and metadata used in this manuscript for the model development and evaluation per Geoscientific Model Development's publication guidelines with the exception of the model output due to its size. This dataset is associated with the following publication: Bash, J., K. Baker, and M. Beaver. Evaluation of improved land use and canopy representation in BEIS v3.61 with biogenic VOC measurements in California. Geoscientific Model Development. Copernicus Publications, Katlenburg-Lindau, GERMANY, 9: 2191-2207, (2016).
Hendriks, Jacqueline; Fyfe, Sue; Styles, Irene; Skinner, S Rachel; Merriman, Gareth
2012-01-01
Measurement scales seeking to quantify latent traits, such as attitudes, are often developed using traditional psychometric approaches. Application of the Rasch unidimensional measurement model may complement or replace these techniques, as the model can be used to construct scales and check their psychometric properties. If data fit the model, then a scale with invariant measurement properties, including interval-level scores, will have been developed. This paper highlights the unique properties of the Rasch model. Items developed to measure adolescent attitudes towards abortion are used to exemplify the process. Ten attitude and intention items relating to abortion were answered by 406 adolescents aged 12 to 19 years, as part of the "Teen Relationships Study". The sampling framework captured a range of sexual and pregnancy experiences. Items were assessed for fit to the Rasch model including checks for Differential Item Functioning (DIF) by gender, sexual experience or pregnancy experience. Rasch analysis of the original dataset initially demonstrated that some items did not fit the model. Rescoring of one item (B5) and removal of another (L31) resulted in fit, as shown by a non-significant item-trait interaction total chi-square and a mean log residual fit statistic for items of -0.05 (SD=1.43). No DIF existed for the revised scale. However, items did not distinguish as well amongst persons with the most intense attitudes as they did for other persons. A person separation index of 0.82 indicated good reliability. Application of the Rasch model produced a valid and reliable scale measuring adolescent attitudes towards abortion, with stable measurement properties. The Rasch process provided an extensive range of diagnostic information concerning item and person fit, enabling changes to be made to scale items. This example shows the value of the Rasch model in developing scales for both social science and health disciplines.
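For readers unfamiliar with the model, the sketch below estimates item difficulties for a dichotomous Rasch model by joint maximum likelihood on simulated responses. The study itself used polytomous items, DIF checks, and dedicated Rasch software; this fragment only illustrates the basic probability structure P(x=1) = exp(theta - b) / (1 + exp(theta - b)).

```python
# Minimal dichotomous Rasch sketch: joint estimation of person abilities (theta) and
# item difficulties (b) on simulated responses. Illustrative only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n_persons, n_items = 200, 10
theta_true = rng.normal(0, 1, n_persons)
b_true = np.linspace(-1.5, 1.5, n_items)
p = 1 / (1 + np.exp(-(theta_true[:, None] - b_true[None, :])))
X = (rng.uniform(size=p.shape) < p).astype(float)

def neg_loglik(params):
    theta, b = params[:n_persons], params[n_persons:]
    eta = theta[:, None] - b[None, :]
    return -(X * eta - np.log1p(np.exp(eta))).sum()

res = minimize(neg_loglik, np.zeros(n_persons + n_items),
               method="L-BFGS-B", bounds=[(-5, 5)] * (n_persons + n_items))
b_hat = res.x[n_persons:]
b_hat -= b_hat.mean()                       # crude identification constraint
print("estimated item difficulties:", np.round(b_hat, 2))
```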
NASA Standard for Models and Simulations: Philosophy and Requirements Overview
NASA Technical Reports Server (NTRS)
Blattnig, Steve R.; Luckring, James M.; Morrison, Joseph H.; Sylvester, Andre J.; Tripathi, Ram K.; Zang, Thomas A.
2013-01-01
Following the Columbia Accident Investigation Board report, the NASA Administrator chartered an executive team (known as the Diaz Team) to identify those CAIB report elements with NASA-wide applicability and to develop corrective measures to address each element. One such measure was the development of a standard for the development, documentation, and operation of models and simulations. This report describes the philosophy and requirements overview of the resulting NASA Standard for Models and Simulations.
NASA Standard for Models and Simulations: Philosophy and Requirements Overview
NASA Technical Reports Server (NTRS)
Blattnig, Steve R.; Luckring, James M.; Morrison, Joseph H.; Sylvester, Andre J.; Tripathi, Ram K.; Zang, Thomas A.
2009-01-01
Following the Columbia Accident Investigation Board report, the NASA Administrator chartered an executive team (known as the Diaz Team) to identify those CAIB report elements with NASA-wide applicability and to develop corrective measures to address each element. One such measure was the development of a standard for the development, documentation, and operation of models and simulations. This report describes the philosophy and requirements overview of the resulting NASA Standard for Models and Simulations.
Reverberant acoustic energy in auditoria that comprise systems of coupled rooms
NASA Astrophysics Data System (ADS)
Summers, Jason Erik
A frequency-dependent model for levels and decay rates of reverberant energy in systems of coupled rooms is developed and compared with measurements conducted in a 1:10 scale model and in Bass Hall, Fort Worth, TX. Schroeder frequencies of subrooms, f_Sch, characteristic size of coupling apertures, a, relative to wavelength λ, and characteristic size of room surfaces, l, relative to λ define the frequency regions. At high frequencies [HF (f >> f_Sch, a >> λ, l >> λ)], this work improves upon prior statistical-acoustics (SA) coupled-ODE models by incorporating geometrical-acoustics (GA) corrections for the model of decay within subrooms and the model of energy transfer between subrooms. Previous researchers developed prediction algorithms based on computational GA. Comparisons of predictions derived from beam-axis tracing with scale-model measurements indicate that systematic errors for coupled rooms result from earlier tail-correction procedures that assume constant quadratic growth of reflection density. A new algorithm is developed that uses ray tracing rather than tail correction in the late part and is shown to correct this error. At midfrequencies [MF (f >> f_Sch, a ~ λ)], HF models are modified to account for wave effects at coupling apertures by including analytically or heuristically derived power transmission coefficients τ. This work improves upon prior SA models of this type by developing more accurate estimates of random-incidence τ. While the accuracy of the MF models is difficult to verify, scale-model measurements evidence the expected behavior. The Biot-Tolstoy-Medwin-Svensson (BTMS) time-domain edge-diffraction model is newly adapted to study transmission through apertures. Multiple-order BTMS scattering is theoretically and experimentally shown to be inaccurate due to the neglect of slope diffraction. At low frequencies (f ~ f_Sch), scale-model measurements have been qualitatively explained by application of previously developed perturbation models. Measurements newly confirm that coupling strength between three-dimensional rooms is related to unperturbed pressure distribution on the coupling surface. In Bass Hall, measurements are conducted to determine the acoustical effects of the coupled stage house on stage and in the audience area. The high-frequency predictions of statistical- and geometrical-acoustics models agree well with measured results. Predictions of the transmission coefficients of the coupling apertures agree, at least qualitatively, with the observed behavior.
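The high-frequency statistical-acoustics starting point can be illustrated with the textbook energy balance for two rooms coupled through an aperture, V_i dE_i/dt = -(c/4)(A_i + S) E_i + (c/4) S E_j. The sketch below integrates these equations for made-up room dimensions; the geometrical-acoustics corrections and wave effects developed in the dissertation are not reproduced.

```python
# Textbook statistical-acoustics sketch of two rooms coupled through an aperture.
# All dimensions and absorption areas are invented.
import numpy as np
from scipy.integrate import solve_ivp

c = 343.0                      # speed of sound, m/s
V = np.array([1000.0, 300.0])  # room volumes, m^3
A = np.array([120.0, 10.0])    # absorption areas (Sabine), m^2
S = 5.0                        # coupling aperture area, m^2

def decay(t, E):
    dE0 = (-(c / 4) * (A[0] + S) * E[0] + (c / 4) * S * E[1]) / V[0]
    dE1 = (-(c / 4) * (A[1] + S) * E[1] + (c / 4) * S * E[0]) / V[1]
    return [dE0, dE1]

sol = solve_ivp(decay, [0, 2.0], [1.0, 1.0], t_eval=np.linspace(0, 2, 200))
level_dB = 10 * np.log10(sol.y[0] / sol.y[0, 0])    # decay curve in the main room
print("level after 1 s: %.1f dB" % level_dB[100])
```

Because the second (made-up) room is much less damped, the main room's decay curve bends toward a shallower late slope, which is the double-slope behavior characteristic of coupled rooms.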
NASA Technical Reports Server (NTRS)
Loeb, Norman G.
2004-01-01
Report consists of: 1. List of accomplishments 2. List of publications 3. Abstracts of published or submitted papers and 4. Subject invention disclosure. The accomplishments of the grant listed are: 1. Improved the third-order turbulence closure in cloud resolving models to remove the liquid water oscillation. 2. Used the University of California-Los Angeles (UCLA) large-eddy simulation (LES) model to provide data for radiation transfer testing. 3. Revised shortwave k-distribution models based on HITRAN 2000. 4. Developed a gamma-weighted two-stream radiative transfer model for radiation budget estimate applications. 5. Estimated the effect of spherical geometry on the earth radiation budget. 6. Estimated top-of-atmosphere irradiance over snow and sea ice surfaces. 7. Estimated the aerosol direct radiative effect at the top of the atmosphere. 8. Estimated the top-of-atmosphere reflectance of the clear-sky molecular atmosphere over ocean. 9. Developed and validated a new set of Angular Distribution Models for the CERES TRMM satellite instrument (tropical) 10. Developed and validated a new set of Angular Distribution Models for the CERES Terra satellite instrument (global) 11. Quantified the top-of-atmosphere direct radiative effect of aerosols over global oceans from merged CERES and MODIS observations 12. Clarified the definition of TOA flux reference level for radiation budget studies 13. Developed a new algorithm for unfiltering CERES measured radiances 14. Used multiangle POLDER measurements to produce narrowband angular distribution models and examine the effect of scene identification errors on TOA albedo estimates 15. Developed and validated a novel algorithm called the Multidirectional Reflectance Matching (MRM) model for inferring TOA albedos from ice clouds using multi-angle satellite measurements. 16. Developed and validated a novel algorithm called the Multidirectional Polarized Reflectance Matching (MPRM) model for inferring particle shapes from ice clouds using multi-angle polarized satellite measurements. 17. Developed 4 advanced light scattering models including the three-dimensional (3D) uniaxial perfectly matched layer (UPML) finite-difference time-domain (FDTD) model. 18. Developed sunglint in situ measurements and studied the reflectance distribution in the sunglint area. 19. Led a balloon-borne radiometer TOA albedo validation effort. 20. Developed a CERES surface UVB, UVA, and UV index product.
Third molar development: measurements versus scores as age predictor.
Thevissen, P W; Fieuws, S; Willems, G
2011-10-01
Human third molar development is widely used to predict the chronological age of subadult individuals with unknown or doubted age. For these predictions, classically, the radiologically observed third molar growth and maturation is registered using a staging and related scoring technique. Measures of lengths and widths of the developing wisdom tooth and its adjacent second molar can be considered as an alternative registration. The aim of this study was to verify relations between mandibular third molar developmental stages or measurements of mandibular second and third molars and age. The age-related performance of stages and measurements was compared to assess if measurements added information to age predictions from third molar formation stage. The sample consisted of 340 orthopantomograms (170 females, 170 males) of individuals homogeneously distributed in age between 7 and 24 years. The mandibular lower right third and second molars were staged following Gleiser and Hunt; length and width measurements were registered, and various ratios of these measurements were calculated. Univariable regression models with age as response and third molar stage, measurements and ratios of second and third molars as predictors were considered. Multivariable regression models assessed if measurements or ratios added information to age prediction from third molar stage. Coefficients of determination (R(2)) and root mean squared errors (RMSE) obtained from all regression models were compared. The univariable regression model using stages as predictor yielded the most accurate age predictions (males: R(2) 0.85, RMSE between 0.85 and 1.22 years; females: R(2) 0.77, RMSE between 1.19 and 2.11 years) compared to all models including measurements and ratios. The multivariable regression models indicated that measurements and ratios added no clinically relevant information to the age prediction from third molar stage. Ratios and measurements of second and third molars are less accurate age predictors than stages of developing third molars. Copyright © 2011 Elsevier Ltd. All rights reserved.
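The model comparison reported above can be mimicked on simulated data: regress age on a dummy-coded stage variable and, separately, on a continuous measurement, then compare R^2 and RMSE. The stages, measurements, and effect sizes below are invented and only illustrate the procedure.

```python
# Sketch of comparing a stage-based and a measurement-based age regression.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(8)
n = 340
stage = rng.integers(1, 11, n)                       # third-molar stage (1-10)
age = 7 + 1.6 * stage + rng.normal(0, 1.0, n)        # age increases with stage
length_ratio = 0.1 * stage + rng.normal(0, 0.3, n)   # noisier metric predictor

def report(name, X):
    m = LinearRegression().fit(X, age)
    pred = m.predict(X)
    rmse = mean_squared_error(age, pred) ** 0.5
    print(f"{name:12s} R^2 = {r2_score(age, pred):.2f}, RMSE = {rmse:.2f} yr")

report("stage", np.eye(10)[stage - 1])               # stage as dummy-coded categories
report("measurement", length_ratio.reshape(-1, 1))
```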
Gifford, Wendy; Graham, Ian D; Ehrhart, Mark G; Davies, Barbara L; Aarons, Gregory A
2017-01-01
Leadership in health care is instrumental to creating a supportive organizational environment and positive staff attitudes for implementing evidence-based practices to improve patient care and outcomes. The purpose of this study is to demonstrate the alignment of the Ottawa Model of Implementation Leadership (O-MILe), a theoretical model for developing implementation leadership, with the Implementation Leadership Scale (ILS), an empirically validated tool for measuring implementation leadership. A secondary objective is to describe the methodological process for aligning concepts of a theoretical model with an independently established measurement tool for evaluating theory-based interventions. Modified template analysis was conducted to deductively map items of the ILS onto concepts of the O-MILe. An iterative process was used in which the model and scale developers (n=5) appraised the relevance, conceptual clarity, and fit of each ILS item with the O-MILe concepts through individual feedback and group discussions until consensus was reached. All 12 items of the ILS correspond to at least one O-MILe concept, demonstrating compatibility of the ILS as a measurement tool for the O-MILe theoretical constructs. The O-MILe provides a theoretical basis for developing implementation leadership, and the ILS is a compatible tool for measuring leadership based on the O-MILe. Used together, the O-MILe and ILS provide an evidence- and theory-based approach for developing and measuring leadership for implementing evidence-based practices in health care. Template analysis offers a convenient approach for determining the compatibility of independently developed evaluation tools to test theoretical models.
Gifford, Wendy; Graham, Ian D; Ehrhart, Mark G; Davies, Barbara L; Aarons, Gregory A
2017-01-01
Purpose Leadership in health care is instrumental to creating a supportive organizational environment and positive staff attitudes for implementing evidence-based practices to improve patient care and outcomes. The purpose of this study is to demonstrate the alignment of the Ottawa Model of Implementation Leadership (O-MILe), a theoretical model for developing implementation leadership, with the Implementation Leadership Scale (ILS), an empirically validated tool for measuring implementation leadership. A secondary objective is to describe the methodological process for aligning concepts of a theoretical model with an independently established measurement tool for evaluating theory-based interventions. Methods Modified template analysis was conducted to deductively map items of the ILS onto concepts of the O-MILe. An iterative process was used in which the model and scale developers (n=5) appraised the relevance, conceptual clarity, and fit of each ILS item with the O-MILe concepts through individual feedback and group discussions until consensus was reached. Results All 12 items of the ILS correspond to at least one O-MILe concept, demonstrating compatibility of the ILS as a measurement tool for the O-MILe theoretical constructs. Conclusion The O-MILe provides a theoretical basis for developing implementation leadership, and the ILS is a compatible tool for measuring leadership based on the O-MILe. Used together, the O-MILe and ILS provide an evidence- and theory-based approach for developing and measuring leadership for implementing evidence-based practices in health care. Template analysis offers a convenient approach for determining the compatibility of independently developed evaluation tools to test theoretical models. PMID:29355212
NASA Technical Reports Server (NTRS)
Johnson, R. W.
1974-01-01
A mathematical model of an ecosystem is developed. Secondary productivity is evaluated in terms of man related and controllable factors. Information from an existing physical parameters model is used as well as pertinent biological measurements. Predictive information of value to estuarine management is presented. Biological, chemical, and physical parameters measured in order to develop models of ecosystems are identified.
NASA Technical Reports Server (NTRS)
Smith, Suzanne Weaver; Beattie, Christopher A.
1991-01-01
On-orbit testing of a large space structure will be required to complete the certification of any mathematical model for the structure's dynamic response. The process of establishing a mathematical model that matches measured structure response is referred to as model correlation. Most model correlation approaches have an identification technique to determine structural characteristics from the measurements of the structure response. This problem is approached with one particular class of identification techniques - matrix adjustment methods - which use measured data to produce an optimal update of the structure property matrix, often the stiffness matrix. New methods were developed for identification to handle problems of the size and complexity expected for large space structures. Further development and refinement of these secant-method identification algorithms were undertaken. Also, evaluation of these techniques as an approach for model correlation and damage location was initiated.
NASA Technical Reports Server (NTRS)
Watson, Michael D.; Kelley, Gary W.
2012-01-01
The Department of Defense (DoD) defined System Operational Effectiveness (SOE) model provides an exceptional framework for an affordable approach to the development and operation of space launch vehicles and their supporting infrastructure. The SOE model provides a focal point from which to direct and measure technical effectiveness and process efficiencies of space launch vehicles. The application of the SOE model to a space launch vehicle's development and operation effort leads to very specific approaches and measures that require consideration during the design phase. This paper provides a mapping of the SOE model to the development of space launch vehicles for human exploration by addressing the SOE model key points of measurement including System Performance, System Availability, Technical Effectiveness, Process Efficiency, System Effectiveness, Life Cycle Cost, and Affordable Operational Effectiveness. In addition, the application of the SOE model to the launch vehicle development process is defined providing the unique aspects of space launch vehicle production and operations in lieu of the traditional broader SOE context that examines large quantities of fielded systems. The tailoring and application of the SOE model to space launch vehicles provides some key insights into the operational design drivers, capability phasing, and operational support systems.
Measurement-based reliability/performability models
NASA Technical Reports Server (NTRS)
Hsueh, Mei-Chen
1987-01-01
Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
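A minimal sketch of a semi-Markov availability model is shown below: three states with Weibull (non-exponential) holding times, which is exactly the feature that a plain Markov chain cannot represent. The states, distributions, and transition probabilities are invented and do not reflect the measured IBM 3081 data.

```python
# Sketch of a semi-Markov availability model: normal / error / recovery states
# with Weibull holding times instead of the exponential times a Markov chain assumes.
import numpy as np

rng = np.random.default_rng(3)
states = ["normal", "error", "recovery"]
P = np.array([[0.0, 1.0, 0.0],       # normal   -> error
              [0.0, 0.0, 1.0],       # error    -> recovery
              [1.0, 0.0, 0.0]])      # recovery -> normal
shape = np.array([1.8, 0.7, 1.2])    # Weibull shape per state (non-exponential)
scale = np.array([100.0, 0.5, 2.0])  # Weibull scale per state (hours)

def simulate(t_end=1e5):
    t, s, time_in = 0.0, 0, np.zeros(3)
    while t < t_end:
        dwell = scale[s] * rng.weibull(shape[s])   # state holding time
        time_in[s] += dwell
        t += dwell
        s = rng.choice(3, p=P[s])                  # next state
    return time_in / time_in.sum()

occupancy = simulate()
for name, frac in zip(states, occupancy):
    print(f"{name:9s} long-run fraction of time: {frac:.4f}")
```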
Computerized Adaptive Assessment of Personality Disorder: Introducing the CAT-PD Project
Simms, Leonard J.; Goldberg, Lewis R.; Roberts, John E.; Watson, David; Welte, John; Rotterman, Jane H.
2011-01-01
Assessment of personality disorders (PD) has been hindered by reliance on the problematic categorical model embodied in the most recent Diagnostic and Statistical Manual of Mental Disorders (DSM), lack of consensus among alternative dimensional models, and inefficient measurement methods. This article describes the rationale for and early results from an NIMH-funded, multi-year study designed to develop an integrative and comprehensive model and efficient measure of PD trait dimensions. To accomplish these goals, we are in the midst of a five-phase project to develop and validate the model and measure. Phase 1 of the project—which focused on developing the PD traits to be assessed and the initial item pool—resulted in a candidate list of 59 PD traits and an initial item pool of 2,589 items. Data collection and structural analyses in community and patient samples will inform the ultimate structure of the measure, and computerized adaptive testing (CAT) will permit efficient measurement of the resultant traits. The resultant Computerized Adaptive Test of Personality Disorder (CAT-PD) will be well positioned as a measure of the proposed DSM-5 PD traits. Implications for both applied and basic personality research are discussed. PMID:22804677
NASA Technical Reports Server (NTRS)
Doty, Keith L
1992-01-01
The author has formulated a new, general model for specifying the kinematic properties of serial manipulators. The new model kinematic parameters do not suffer discontinuities when nominally parallel adjacent axes deviate from exact parallelism. From this new theory the author develops a first-order, lumped-parameter, calibration-model for the ARID manipulator. Next, the author develops a calibration methodology for the ARID based on visual and acoustic sensing. A sensor platform, consisting of a camera and four sonars attached to the ARID end frame, performs calibration measurements. A calibration measurement consists of processing one visual frame of an accurately placed calibration image and recording four acoustic range measurements. A minimum of two measurement protocols determine the kinematics calibration-model of the ARID for a particular region: assuming the joint displacements are accurately measured, the calibration surface is planar, and the kinematic parameters do not vary rapidly in the region. No theoretical or practical limitations appear to contra-indicate the feasibility of the calibration method developed here.
NASA Technical Reports Server (NTRS)
van der Meulen, M. C.; Marcus, R.; Bachrach, L. K.; Carter, D. R.
1997-01-01
We have developed an analytical model of long bone cross-sectional ontogeny in which appositional growth of the diaphysis is primarily driven by mechanical stimuli associated with increasing body mass during growth and development. In this study, our goal was to compare theoretical predictions of femoral diaphyseal structure from this model with measurements of femoral bone mineral and geometry by dual energy x-ray absorptiometry. Measurements of mid-diaphyseal femoral geometry and structure were made previously in 101 Caucasian adolescents and young adults 9-26 years of age. The data on measured bone mineral content and calculated section modulus were compared with the results of our analytical model of cross-sectional development of the human femur over the same age range. Both bone mineral content and section modulus showed good correspondence with experimental measurements when the relationships with age and body mass were examined. Strong linear relationships were evident for both parameters when examined as a function of body mass.
Validation of a FAST model of the Statoil-Hywind Demo floating wind turbine
Driscoll, Frederick; Jonkman, Jason; Robertson, Amy; ...
2016-10-13
To assess the accuracy of the National Renewable Energy Laboratory's (NREL's) FAST simulation tool for modeling the coupled response of floating offshore wind turbines under realistic open-ocean conditions, NREL developed a FAST model of the Statoil Hywind Demo floating offshore wind turbine, and validated simulation results against field measurements. Field data were provided by Statoil, which conducted a comprehensive test measurement campaign of its demonstration system, a 2.3-MW Siemens turbine mounted on a spar substructure deployed about 10 km off the island of Karmoy in Norway. A top-down approach was used to develop the FAST model, starting with modeling the blades and working down to the mooring system. Design data provided by Siemens and Statoil were used to specify the structural, aerodynamic, and dynamic properties. Measured wind speeds and wave spectra were used to develop the wind and wave conditions used in the model. The overall system performance and behavior were validated for eight sets of field measurements that span a wide range of operating conditions. The simulated controller response accurately reproduced the measured blade pitch and power. In conclusion, the structural and blade loads and spectra of platform motion agree well with the measured data.
Sharing behavioral data through a grid infrastructure using data standards.
Min, Hua; Ohira, Riki; Collins, Michael A; Bondy, Jessica; Avis, Nancy E; Tchuvatkina, Olga; Courtney, Paul K; Moser, Richard P; Shaikh, Abdul R; Hesse, Bradford W; Cooper, Mary; Reeves, Dianne; Lanese, Bob; Helba, Cindy; Miller, Suzanne M; Ross, Eric A
2014-01-01
In an effort to standardize behavioral measures and their data representation, the present study develops a methodology for incorporating measures found in the National Cancer Institute's (NCI) grid-enabled measures (GEM) portal, a repository for behavioral and social measures, into the cancer data standards registry and repository (caDSR). The methodology consists of four parts for curating GEM measures into the caDSR: (1) develop unified modeling language (UML) models for behavioral measures; (2) create common data elements (CDE) for UML components; (3) bind CDE with concepts from the NCI thesaurus; and (4) register CDE in the caDSR. UML models have been developed for four GEM measures, which have been registered in the caDSR as CDE. New behavioral concepts related to these measures have been created and incorporated into the NCI thesaurus. Best practices for representing measures using UML models have been utilized in the practice (eg, caDSR). One dataset based on a GEM-curated measure is available for use by other systems and users connected to the grid. Behavioral and population science data can be standardized by using and extending current standards. A new branch of CDE for behavioral science was developed for the caDSR. It expands the caDSR domain coverage beyond the clinical and biological areas. In addition, missing terms and concepts specific to the behavioral measures addressed in this paper were added to the NCI thesaurus. A methodology was developed and refined for curation of behavioral and population science data. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Multiple indicators, multiple causes measurement error models
Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.; ...
2014-06-25
Multiple indicators, multiple causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this study are as follows: (i) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (ii) to develop likelihood-based estimation methods for the MIMIC ME model; and (iii) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. Finally, as a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure.
Multiple Indicators, Multiple Causes Measurement Error Models
Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.; Carroll, Raymond J.
2014-01-01
Multiple Indicators, Multiple Causes Models (MIMIC) are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times however when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are: (1) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model, (2) to develop likelihood based estimation methods for the MIMIC ME model, (3) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. As a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure. PMID:24962535
Multiple indicators, multiple causes measurement error models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.
Multiple indicators, multiple causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this study are as follows: (i) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (ii) to develop likelihood-based estimation methods for the MIMIC ME model; and (iii) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. Finally, as a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure.
Model based design introduction: modeling game controllers to microprocessor architectures
NASA Astrophysics Data System (ADS)
Jungwirth, Patrick; Badawy, Abdel-Hameed
2017-04-01
We present an introduction to model based design. Model based design is a visual representation, generally a block diagram, to model and incrementally develop a complex system. Model based design is a commonly used design methodology for digital signal processing, control systems, and embedded systems. Model based design's philosophy is to solve a problem one step at a time. The approach can be compared to a series of steps to converge to a solution. A block diagram simulation tool allows a design to be simulated with real world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded. The digital control algorithm can be simulated with the real world sensor data. The output from the simulated digital control system can then be compared to the old analog based control system. Model based design can be compared to Agile software development. The Agile software development goal is to develop working software in incremental steps. Progress is measured in completed and tested code units. Progress is measured in model based design by completed and tested blocks. We present a concept for a video game controller and then use model based design to iterate the design towards a working system. We will also describe a model based design effort to develop an OS Friendly Microprocessor Architecture based on the RISC-V.
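The recorded-data replay step described above can be sketched as follows: a recorded analog sensor trace is fed through a candidate digital PI controller and compared against the legacy analog controller's recorded output. The signals, gains, and sampling rate are made up for illustration.

```python
# Toy sketch: replay recorded analog sensor samples through a candidate digital
# control algorithm and compare its output with the legacy analog controller's output.
import numpy as np

dt = 0.01
t = np.arange(0, 5, dt)
setpoint = 1.0
sensor = setpoint + 0.3 * np.exp(-t) * np.sin(8 * t)     # "recorded" analog sensor trace
analog_out = 2.0 * (setpoint - sensor)                    # "recorded" analog P controller

def digital_pi(error, dt, kp=2.0, ki=0.5):
    """Discrete PI controller run over the whole recorded error sequence."""
    integ, out = 0.0, np.empty_like(error)
    for i, e in enumerate(error):
        integ += e * dt
        out[i] = kp * e + ki * integ
    return out

digital_out = digital_pi(setpoint - sensor, dt)
rms_diff = np.sqrt(np.mean((digital_out - analog_out) ** 2))
print(f"RMS difference between digital and legacy analog output: {rms_diff:.4f}")
```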
Development of a new global radiation belt model
NASA Astrophysics Data System (ADS)
Sicard, Angelica; Boscher, Daniel; Bourdarie, Sébastien; Lazaro, Didier; Maget, Vincent; Ecoffet, Robert; Rolland, Guy; Standarovski, Denis
2017-04-01
The well-known AP8 and AE8 NASA models are commonly used in the industry to specify the radiation belt environment. Unfortunately, there are some limitations in the use of these models, first due to the covered energy range, but also because in some regions of space, there are discrepancies between the predicted average values and the measurements. Therefore, our aim is to develop a radiation belt model, covering a large region of space and energy, from LEO altitudes to GEO and above, and from plasma to relativistic particles. The aim for the first version of this new model is to correct the AP8 and AE8 models where they are deficient or not defined. At geostationary orbit, we developed the IGE-2006 electron model ten years ago; it has proven more accurate than AE8, is commonly used in the industry, and covers a broad energy range from 1 keV to 5 MeV. Since then, a proton model for geostationary orbit was also developed for material applications, followed by the OZONE model covering a narrower energy range but the whole outer electron belt, a SLOT model to assess average electron values for 2
NASA Technical Reports Server (NTRS)
Bihrle, W., Jr.
1976-01-01
A correlation study was conducted to determine the ability of current analytical spin prediction techniques to predict the flight motions of a current fighter airplane configuration during the spin entry, the developed spin, and the spin recovery motions. The airplane math model used aerodynamics measured on an exact replica of the flight test model using conventional static and forced-oscillation wind-tunnel test techniques and a recently developed rotation-balance test apparatus capable of measuring aerodynamics under steady spinning conditions. An attempt was made to predict the flight motions measured during stall/spin flight testing of an unpowered, radio-controlled model designed to be a 1/10 scale, dynamically-scaled model of a current fighter configuration. Comparison of the predicted and measured flight motions shows that while the post-stall and spin entry motions were not well predicted, the developed spinning motion (a steady flat spin) and the initial phases of the spin recovery motion are reasonably well predicted.
Abramoff, Rose; Xu, Xiaofeng; Hartman, Melannie; ...
2017-12-20
Soil organic carbon (SOC) can be defined by measurable chemical and physical pools, such as mineral-associated carbon, carbon physically entrapped in aggregates, dissolved carbon, and fragments of plant detritus. Yet, most soil models use conceptual rather than measurable SOC pools. What would the traditional pool-based soil model look like if it were built today, reflecting the latest understanding of biological, chemical, and physical transformations in soils? We propose a conceptual model—the Millennial model—that defines pools as measurable entities. First, we discuss relevant pool definitions conceptually and in terms of the measurements that can be used to quantify pool size, formation, and destabilization. Then, we develop a numerical model following the Millennial model conceptual framework to evaluate against the Century model, a widely-used standard for estimating SOC stocks across space and through time. The Millennial model predicts qualitatively similar changes in total SOC in response to single factor perturbations when compared to Century, but different responses to multiple factor perturbations. Finally, we review important conceptual and behavioral differences between the Millennial and Century modeling approaches, and the field and lab measurements needed to constrain parameter values. Here, we propose the Millennial model as a simple but comprehensive framework to model SOC pools and guide measurements for further model development.
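A minimal numerical sketch of the pool-based idea is given below: three measurable pools (particulate organic matter, dissolved organic carbon, mineral-associated organic matter) linked by first-order transfers and integrated forward in time. The pool choices echo the Millennial concept, but the rates, transfer fractions, and inputs are invented and are not the published parameterization.

```python
# Minimal sketch of a pool-based SOC model with measurable pools and first-order
# transfers; parameter values are invented for illustration.
import numpy as np

k_pom, k_doc, k_maom = 0.8, 6.0, 0.02      # decay rates, 1/yr
f_pom_doc, f_doc_maom = 0.4, 0.3           # transfer fractions
litter_in = 0.30                           # plant C input to POM, kgC m^-2 yr^-1

def step(pools, dt=0.05):
    pom, doc, maom = pools
    d_pom  = litter_in - k_pom * pom
    d_doc  = f_pom_doc * k_pom * pom - k_doc * doc
    d_maom = f_doc_maom * k_doc * doc - k_maom * maom
    return pools + dt * np.array([d_pom, d_doc, d_maom])

pools = np.array([1.0, 0.05, 3.0])         # initial stocks, kgC m^-2
for _ in range(int(200 / 0.05)):           # run 200 years toward steady state
    pools = step(pools)
print("POM, DOC, MAOM at quasi-equilibrium (kgC m^-2):", np.round(pools, 3))
```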
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abramoff, Rose; Xu, Xiaofeng; Hartman, Melannie
Soil organic carbon (SOC) can be defined by measurable chemical and physical pools, such as mineral-associated carbon, carbon physically entrapped in aggregates, dissolved carbon, and fragments of plant detritus. Yet, most soil models use conceptual rather than measurable SOC pools. What would the traditional pool-based soil model look like if it were built today, reflecting the latest understanding of biological, chemical, and physical transformations in soils? We propose a conceptual model—the Millennial model—that defines pools as measurable entities. First, we discuss relevant pool definitions conceptually and in terms of the measurements that can be used to quantify pool size, formation, and destabilization. Then, we develop a numerical model following the Millennial model conceptual framework to evaluate against the Century model, a widely-used standard for estimating SOC stocks across space and through time. The Millennial model predicts qualitatively similar changes in total SOC in response to single factor perturbations when compared to Century, but different responses to multiple factor perturbations. Finally, we review important conceptual and behavioral differences between the Millennial and Century modeling approaches, and the field and lab measurements needed to constrain parameter values. Here, we propose the Millennial model as a simple but comprehensive framework to model SOC pools and guide measurements for further model development.
Current status: Animal models of nausea
NASA Technical Reports Server (NTRS)
Fox, Robert A.
1991-01-01
The advantages and possible benefits of a valid, reliable animal model for nausea are discussed, and difficulties inherent to the development of a model are considered. A principal problem for developing models arises because nausea is a subjective sensation that can be identified only in humans. Several putative measures of nausea in animals are considered, with more detailed consideration directed to variation in cardiac rate, levels of vasopressin, and conditioned taste aversion. Demonstration that putative measures are associated with reported nausea in humans is proposed as a requirement for validating measures to be used in animal models. The necessity for a 'real-time' measure of nausea is proposed as an important factor for future research, and the need for improved understanding of the neuroanatomy underlying the emetic syndrome is discussed.
Investigation of remote sensing techniques of measuring soil moisture
NASA Technical Reports Server (NTRS)
Newton, R. W. (Principal Investigator); Blanchard, A. J.; Nieber, J. L.; Lascano, R.; Tsang, L.; Vanbavel, C. H. M.
1981-01-01
Major activities described include development and evaluation of theoretical models that describe both active and passive microwave sensing of soil moisture, the evaluation of these models for their applicability, the execution of a controlled field experiment during which passive microwave measurements were acquired to validate these models, and evaluation of previously acquired aircraft microwave measurements. The development of a root zone soil water and soil temperature profile model and the calibration and evaluation of gamma ray attenuation probes for measuring soil moisture profiles are considered. The analysis of spatial variability of soil information as related to remote sensing is discussed as well as the implementation of an instrumented field site for acquisition of soil moisture and meteorologic information for use in validating the soil water profile and soil temperature profile models.
NASA Technical Reports Server (NTRS)
Veziroglu, T. N.; Lee, S. S.
1973-01-01
A feasibility study for the development of a three-dimensional generalized, predictive, analytical model involving remote sensing, in-situ measurements, and an active system to remotely measure turbidity is presented. An implementation plan for the development of the three-dimensional model and for the application of remote sensing of temperature and turbidity measurements is outlined.
The Development and Validation of an End-User Satisfaction Measure in a Student Laptop Environment
ERIC Educational Resources Information Center
Kim, Sung; Meng, Juan; Kalinowski, Jon; Shin, Dooyoung
2014-01-01
The purpose of this paper is to present the development and validation of a measurement model for student user satisfaction in a laptop environment. Using a "quasi Delphi" method in addition to contributions from prior research we used EFA and CFA (LISREL) to identify a five factor (14 item) measurement model that best fit the data. The…
A measurement model of multiple intelligence profiles of management graduates
NASA Astrophysics Data System (ADS)
Krishnan, Heamalatha; Awang, Siti Rahmah
2017-05-01
In this study, the main interest is to develop a well-fitting measurement model and to identify the best-fitting items to represent Howard Gardner's nine intelligences, namely musical intelligence, bodily-kinaesthetic intelligence, mathematical/logical intelligence, visual/spatial intelligence, verbal/linguistic intelligence, interpersonal intelligence, intrapersonal intelligence, naturalist intelligence and spiritual intelligence, in order to enhance the employability opportunities of management graduates. To develop the measurement model, Structural Equation Modeling (SEM) was applied. A psychometric test, the Ability Test in Employment (ATIEm), was used as the instrument to measure the existence of the nine types of intelligence in 137 University Teknikal Malaysia Melaka (UTeM) management graduates for job placement purposes. The initial measurement model contains nine unobserved variables, each measured by ten observed variables. Finally, a modified measurement model with improved fit was developed: Normed chi-square (NC) = 1.331, Incremental Fit Index (IFI) = 0.940 and Root Mean Square Error of Approximation (RMSEA) = 0.049. The findings showed that the UTeM management graduates possessed all nine intelligences, at either high or low levels. Musical intelligence, mathematical/logical intelligence, naturalist intelligence and spiritual intelligence contributed the highest loadings on certain items. However, most of the intelligences, such as bodily-kinaesthetic intelligence, visual/spatial intelligence, verbal/linguistic intelligence, interpersonal intelligence and intrapersonal intelligence, possessed by UTeM management graduates are just at the borderline.
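For context, the sketch below shows how the quoted fit indices are computed from a model chi-square, its degrees of freedom, and the sample size using the standard formulas NC = chi^2/df and RMSEA = sqrt(max(chi^2 - df, 0) / (df (N - 1))). The chi-square and degrees of freedom used here are invented, chosen only so that the output matches the values reported above.

```python
# Standard fit-index formulas; chi2 and df are illustrative placeholders, N = 137
# as in the abstract.
import math

chi2, df, n = 1512.0, 1136.0, 137

normed_chi2 = chi2 / df
rmsea = math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

print(f"normed chi-square = {normed_chi2:.3f}")   # -> 1.331
print(f"RMSEA             = {rmsea:.3f}")          # -> 0.049
```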
Ultra-High Rate Measurements of Spent Fuel Gamma-Ray Emissions
NASA Astrophysics Data System (ADS)
Rodriguez, Douglas; Vandevender, Brent; Wood, Lynn; Glasgow, Brian; Taubman, Matthew; Wright, Michael; Dion, Michael; Pitts, Karl; Runkle, Robert; Campbell, Luke; Fast, James
2014-03-01
Presently there are over 200,000 irradiated spent nuclear fuel (SNF) assemblies in the world, each containing a concerning amount of weapons-usable material. Both facility operators and safeguards inspectors want to improve composition determination. Current measurements are expensive and difficult so new methods are developed through models. Passive measurements are limited since a few specific decay products and the associated down-scatter overwhelm the gamma rays of interest. Active interrogation methods produce gamma rays beyond 3 MeV, minimizing the impact of the passive emissions that drop off sharply above this energy. New devices like the Ultra-High Rate Germanium (UHRGe) detector are being developed to advance these novel measurement methods. Designed for reasonable resolution at 10^6 s^-1 output rates (compared to standard rates of ~1-10 × 10^3 s^-1), SNF samples were directly measured using UHRGe and compared to models. Model verification further enables using Los Alamos National Laboratory SNF assembly models, developed under the Next Generation Safeguards Initiative, to determine emission and signal expectations. Measurement results and future application requirements for UHRGe will be discussed.
Nicholson, Patricia; Griffin, Patrick; Gillis, Shelley; Wu, Margaret; Dunning, Trisha
2013-09-01
Concern about the process of identifying underlying competencies that contribute to effective nursing performance has been debated with a lack of consensus surrounding an approved measurement instrument for assessing clinical performance. Although a number of methodologies are noted in the development of competency-based assessment measures, these studies are not without criticism. The primary aim of the study was to develop and validate a Performance Based Scoring Rubric, which included both analytical and holistic scales. The aim included examining the validity and reliability of the rubric, which was designed to measure clinical competencies in the operating theatre. The fieldwork observations of 32 nurse educators and preceptors assessing the performance of 95 instrument nurses in the operating theatre were used in the calibration of the rubric. The Rasch model, a particular model among Item Response Models, was used in the calibration of each item in the rubric in an attempt at improving the measurement properties of the scale. This is done by establishing the 'fit' of the data to the conditions demanded by the Rasch model. Acceptable reliability estimates, specifically a high Cronbach's alpha reliability coefficient (0.940), as well as empirical support for construct and criterion validity for the rubric were achieved. Calibration of the Performance Based Scoring Rubric using Rasch model revealed that the fit statistics for most items were acceptable. The use of the Rasch model offers a number of features in developing and refining healthcare competency-based assessments, improving confidence in measuring clinical performance. The Rasch model was shown to be useful in developing and validating a competency-based assessment for measuring the competence of the instrument nurse in the operating theatre with implications for use in other areas of nursing practice. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
HIGH TIME-RESOLVED COMPARISONS FOR IN-DEPTH PROBING OF CMAQ FINE-PARTICLES AND GAS PREDICTIONS
Model evaluation is important to develop confidence in models and develop an understanding of their predictions. Most comparisons in the U.S. involve time-integrated measurements of 24-hours or longer. Comparisons against continuous or semi-continuous particle and gaseous measur...
NASA Technical Reports Server (NTRS)
Hiser, H. W.; Lee, S. S.; Veziroglu, T. N.; Sengupta, S.
1975-01-01
A comprehensive numerical model development program for near-field thermal plume discharge and far-field general circulation in coastal regions is being carried out at the University of Miami Clean Energy Research Institute. The objective of the program is to develop a generalized, three-dimensional, predictive model for thermal pollution studies. Two regions of specific application of the model are the power plant sites in the Biscayne Bay and Hutchinson Island areas along the Florida coastline. Remote sensing from aircraft as well as satellites is used in parallel with in situ measurements to provide information needed for the development and verification of the mathematical model. This paper describes the efforts that have been made to identify problems and limitations of the presently available satellite data and to develop methods for enhancing and enlarging thermal infrared displays for mesoscale sea surface temperature measurements.
NASA Technical Reports Server (NTRS)
Bose, Deepak
2012-01-01
The design of entry vehicles requires predictions of the aerothermal environment during the hypersonic phase of their flight trajectories. These predictions are made using computational fluid dynamics (CFD) codes that often rely on physics and chemistry models of nonequilibrium processes. The primary processes of interest are gas phase chemistry, internal energy relaxation, electronic excitation, nonequilibrium emission and absorption of radiation, and gas-surface interaction leading to surface recession and catalytic recombination. NASA's Hypersonics Project is advancing the state of the art in modeling of nonequilibrium phenomena by making detailed spectroscopic measurements in shock tubes and arcjets, using ab-initio quantum mechanical techniques to develop fundamental chemistry and spectroscopic databases, making fundamental measurements of finite-rate gas-surface interactions, and implementing detailed mechanisms in state-of-the-art CFD codes. The development of new models is based on validation with relevant experiments. We will present the latest developments and a roadmap for the technical areas mentioned above.
Large scale hydro-economic modelling for policy support
NASA Astrophysics Data System (ADS)
de Roo, Ad; Burek, Peter; Bouraoui, Faycal; Reynaud, Arnaud; Udias, Angel; Pistocchi, Alberto; Lanzanova, Denis; Trichakis, Ioannis; Beck, Hylke; Bernhard, Jeroen
2014-05-01
To support European Union water policy making and policy monitoring, a hydro-economic modelling environment has been developed to assess optimum combinations of water retention measures, water-saving measures, and nutrient reduction measures for continental Europe. This modelling environment consists of linking the agricultural CAPRI model, the LUMP land use model, the LISFLOOD water quantity model, the EPIC water quality model, the LISQUAL combined water quantity, quality and hydro-economic model, and a multi-criteria optimisation routine. With this modelling environment, river basin scale simulations are carried out to assess the effects of water-retention measures, water-saving measures, and nutrient-reduction measures on several hydro-chemical indicators, such as the Water Exploitation Index (WEI), Nitrate and Phosphate concentrations in rivers, the 50-year return period river discharge as an indicator for flooding, and economic losses due to water scarcity for the agricultural sector, the manufacturing-industry sector, the energy-production sector and the domestic sector, as well as the economic loss due to flood damage. This modelling environment is currently being extended with a groundwater model to evaluate the effects of measures on the average groundwater table and available resources. Water allocation rules are also addressed, with environmental flow included as a minimum requirement for the environment. Economic functions are currently being updated as well. Recent developments and examples will be shown and discussed, as well as open challenges.
Error Modelling for Multi-Sensor Measurements in Infrastructure-Free Indoor Navigation
Ruotsalainen, Laura; Kirkko-Jaakkola, Martti; Rantanen, Jesperi; Mäkelä, Maija
2018-01-01
The long-term objective of our research is to develop a method for infrastructure-free simultaneous localization and mapping (SLAM) and context recognition for tactical situational awareness. Localization will be realized by propagating motion measurements obtained using a monocular camera, a foot-mounted Inertial Measurement Unit (IMU), sonar, and a barometer. Due to the size and weight requirements set by tactical applications, Micro-Electro-Mechanical (MEMS) sensors will be used. However, MEMS sensors suffer from biases and drift errors that may substantially decrease the position accuracy. Therefore, sophisticated error modelling and implementation of integration algorithms are key for providing a viable result. Algorithms used for multi-sensor fusion have traditionally been different versions of Kalman filters. However, Kalman filters are based on the assumptions that the state propagation and measurement models are linear with additive Gaussian noise. Neither of the assumptions is correct for tactical applications, especially for dismounted soldiers, or rescue personnel. Therefore, error modelling and implementation of advanced fusion algorithms are essential for providing a viable result. Our approach is to use particle filtering (PF), which is a sophisticated option for integrating measurements emerging from pedestrian motion having non-Gaussian error characteristics. This paper discusses the statistical modelling of the measurement errors from inertial sensors and vision based heading and translation measurements to include the correct error probability density functions (pdf) in the particle filter implementation. Then, model fitting is used to verify the pdfs of the measurement errors. Based on the deduced error models of the measurements, particle filtering method is developed to fuse all this information, where the weights of each particle are computed based on the specific models derived. The performance of the developed method is tested via two experiments, one at a university’s premises and another in realistic tactical conditions. The results show significant improvement on the horizontal localization when the measurement errors are carefully modelled and their inclusion into the particle filtering implementation correctly realized. PMID:29443918
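The central modelling point, that particle weights should come from the fitted (non-Gaussian) measurement-error pdf rather than a Gaussian assumption, can be sketched in one dimension as below. The heavy-tailed Student-t error model, motion model, and all parameters are illustrative; the paper's full visual/inertial/sonar fusion is not reproduced.

```python
# One-dimensional particle filter sketch: weights come from a modelled heavy-tailed
# (Student-t) measurement-error pdf instead of a Gaussian assumption.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_particles, n_steps = 500, 50
true_pos, particles = 0.0, rng.normal(0, 1, n_particles)
weights = np.full(n_particles, 1.0 / n_particles)
err_pdf = stats.t(df=3, scale=0.5)          # fitted heavy-tailed error model (assumed)

for _ in range(n_steps):
    true_pos += 0.1                                          # pedestrian moves forward
    particles += 0.1 + rng.normal(0, 0.05, n_particles)      # propagate with motion noise
    z = true_pos + 0.5 * rng.standard_t(3)                   # measurement with t-errors
    weights *= err_pdf.pdf(z - particles)                    # weight by the error pdf
    weights /= weights.sum()
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:         # resample when ESS is low
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)

print(f"true position {true_pos:.2f}, PF estimate {np.sum(weights * particles):.2f}")
```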
Length-free near infrared measurement of newborn malnutrition
NASA Astrophysics Data System (ADS)
Mustafa, Fatin Hamimi; Bek, Emily J.; Huvanandana, Jacqueline; Jones, Peter W.; Carberry, Angela E.; Jeffery, Heather E.; Jin, Craig T.; McEwan, Alistair L.
2016-11-01
Under-nutrition in neonates can cause immediate mortality, impaired cognitive development and early onset adult disease. Body fat percentage measured using air-displacement-plethysmography has been found to better indicate under-nutrition than conventional birth weight percentiles. However, air-displacement-plethysmography equipment is expensive and non-portable, so is not suited for use in developing communities where the burden is often the greatest. We proposed a new body fat measurement technique using a length-free model with near-infrared spectroscopy measurements on a single site of the body - the thigh. To remove the need for length measurement, we developed a model with five discrete wavelengths and a sex parameter. The model was developed using air-displacement-plethysmography measurements in 52 neonates within 48 hours of birth. We identified instrumentation required in a low-cost LED-based screening device and incorporated a receptor device that can increase the amount of light collected. This near-infrared method may be suitable as a low cost screening tool for detecting body fat levels and monitoring nutritional interventions for malnutrition in neonates and young children in resource-constrained communities.
Neural Network Modeling for Gallium Arsenide IC Fabrication Process and Device Characteristics.
NASA Astrophysics Data System (ADS)
Creech, Gregory Lee, I.
This dissertation presents research focused on the utilization of neurocomputing technology to achieve enhanced yield and effective yield prediction in integrated circuit (IC) manufacturing. Artificial neural networks are employed to model complex relationships between material and device characteristics at critical stages of the semiconductor fabrication process. Whole wafer testing was performed on the starting substrate material and during wafer processing at four critical steps: Ohmic or Post-Contact, Post-Recess, Post-Gate and Final, i.e., at completion of fabrication. Measurements taken and subsequently used in modeling include, among others, doping concentrations, layer thicknesses, planar geometries, layer-to-layer alignments, resistivities, device voltages, and currents. The neural network architecture used in this research is the multilayer perceptron neural network (MLPNN). The MLPNN is trained in the supervised mode using the generalized delta learning rule. It has one hidden layer and uses continuous perceptrons. The research focuses on a number of different aspects. First is the development of inter-process stage models. Intermediate process stage models are created in a progressive fashion. Measurements of material and process/device characteristics taken at a specific processing stage and any previous stages are used as input to the model of the next processing stage characteristics. As the wafer moves through the fabrication process, measurements taken at all previous processing stages are used as input to each subsequent process stage model. Secondly, the development of neural network models for the estimation of IC parametric yield is demonstrated. Measurements of material and/or device characteristics taken at earlier fabrication stages are used to develop models of the final DC parameters. These characteristics are computed with the developed models and compared to acceptance windows to estimate the parametric yield. A sensitivity analysis is performed on the models developed during this yield estimation effort. This is accomplished by analyzing the total disturbance of network outputs due to perturbed inputs. When an input characteristic bears no, or little, statistical or deterministic relationship to the output characteristics, it can be removed as an input. Finally, neural network models are developed in the inverse direction. Characteristics measured after the final processing step are used as the input to model critical in-process characteristics. The modeled characteristics are used for whole wafer mapping and its statistical characterization. It is shown that this characterization can be accomplished with minimal in-process testing. The concepts and methodologies used in the development of the neural network models are presented. The modeling results are provided and compared to the actual measured values of each characteristic. An in-depth discussion of these results and ideas for future research are presented.
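A minimal version of the network described, one hidden layer of continuous perceptrons trained with the generalized delta rule, is sketched below on synthetic stand-in data; the actual wafer measurements and model structure are not reproduced.

```python
# Minimal one-hidden-layer perceptron trained with plain backpropagation
# (generalized delta rule) mapping synthetic "process" inputs to a final parameter.
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 3))                        # e.g. doping, thickness, alignment
y = (0.5 * X[:, 0] - 0.3 * X[:, 1] ** 2 + 0.2 * X[:, 2]
     + rng.normal(0, 0.05, 200))[:, None]

n_hidden, lr = 8, 0.05
W1, b1 = rng.normal(0, 0.5, (3, n_hidden)), np.zeros(n_hidden)
W2, b2 = rng.normal(0, 0.5, (n_hidden, 1)), np.zeros(1)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

for epoch in range(2000):
    h = sigmoid(X @ W1 + b1)                         # hidden layer (continuous perceptrons)
    out = h @ W2 + b2                                # linear output
    err = out - y
    # backpropagate the error (generalized delta rule)
    dW2 = h.T @ err / len(X); db2 = err.mean(0)
    dh = (err @ W2.T) * h * (1 - h)
    dW1 = X.T @ dh / len(X);  db1 = dh.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

print("final mean-squared error:", float(np.mean(err ** 2)))
```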
ERIC Educational Resources Information Center
Wei, Silin; Liu, Xiufeng; Jia, Yuane
2014-01-01
Scientific models and modeling play an important role in science, and students' understanding of scientific models is essential for their understanding of scientific concepts. The measurement instrument of "Students' Understanding of Models in Science" (SUMS), developed by Treagust, Chittleborough & Mamiala ("International…
Sexing California gulls using morphometrics and discriminant function analysis
Herring, Garth; Ackerman, Joshua T.; Eagles-Smith, Collin A.; Takekawa, John Y.
2010-01-01
A discriminant function analysis (DFA) model was developed with DNA sex verification so that external morphology could be used to sex 203 adult California Gulls (Larus californicus) in San Francisco Bay (SFB). The best model was 97% accurate and included head-to-bill length, culmen depth at the gonys, and wing length. Using an iterative process, the model was simplified to a single measurement (head-to-bill length) that still assigned sex correctly 94% of the time. A previous California Gull sex determination model developed for a population in Wyoming was then assessed by fitting SFB California Gull measurement data to the Wyoming model; this new model failed to converge on the same measurements as those originally used by the Wyoming model. Results from the SFB discriminant function model were compared to the Wyoming model results (by using SFB data with the Wyoming model); the SFB model was 7% more accurate for SFB California gulls. The simplified DFA model (head-to-bill length only) provided highly accurate results (94%) and minimized the measurements and time required to accurately sex California Gulls.
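A sketch of the same kind of discriminant function analysis on simulated morphometrics follows; the measurement distributions are invented, and real work would use DNA-verified sexes as the training labels.

```python
# Illustrative DFA for sexing birds from morphometrics. Simulated head-to-bill
# length, culmen depth, and wing length; labels would come from DNA sexing.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
n = 203
sex = rng.integers(0, 2, size=n)                     # 0 = female, 1 = male
head_bill = 95 + 8 * sex + rng.normal(0, 2.5, n)     # mm, males larger (assumed effect sizes)
culmen_depth = 16 + 1.5 * sex + rng.normal(0, 0.8, n)
wing = 390 + 15 * sex + rng.normal(0, 8, n)
X = np.column_stack([head_bill, culmen_depth, wing])

lda = LinearDiscriminantAnalysis().fit(X, sex)
print("apparent accuracy, 3-variable model: %.2f" % lda.score(X, sex))

# simplified single-measurement model (head-to-bill length only)
lda1 = LinearDiscriminantAnalysis().fit(head_bill.reshape(-1, 1), sex)
print("apparent accuracy, head-to-bill only: %.2f" % lda1.score(head_bill.reshape(-1, 1), sex))
```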
Prediction models for clustered data: comparison of a random intercept and standard regression model
2013-01-01
Background When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which the interest of predictor effects is on the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Methods Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. Results The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects, if the used performance measure assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard developed model, calibration measures adapting the clustered data structure showed good calibration for the prediction model with random intercept. Conclusion The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters. PMID:23414436
Bouwmeester, Walter; Twisk, Jos W R; Kappen, Teus H; van Klei, Wilton A; Moons, Karel G M; Vergouwe, Yvonne
2013-02-15
When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which the interest of predictor effects is on the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects, if the used performance measure assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard developed model, calibration measures adapting the clustered data structure showed good calibration for the prediction model with random intercept. The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters.
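A rough illustration of the comparison on simulated clustered data is sketched below. Dummy-coded cluster intercepts stand in for a true random intercept model, and the c-index is computed as the ROC AUC; the data-generating values are arbitrary.

```python
# Sketch: standard logistic regression versus a model that also uses cluster
# (anesthesiologist) effects for prediction. Dummy-coded cluster intercepts are
# a crude stand-in for a random intercept model; all parameters are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n, n_clusters = 1642, 19
cluster = rng.integers(0, n_clusters, size=n)
u = rng.normal(0, 0.7, size=n_clusters)              # cluster effects (ICC-like heterogeneity)
x = rng.normal(size=(n, 2))                          # patient-level predictors
logit = -1.0 + x @ np.array([0.8, -0.5]) + u[cluster]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

std = LogisticRegression(max_iter=1000).fit(x, y)    # ignores clustering
x_cl = np.hstack([x, np.eye(n_clusters)[cluster]])   # add cluster dummies
clust = LogisticRegression(max_iter=1000).fit(x_cl, y)

print("c-index, standard model:      %.3f" % roc_auc_score(y, std.predict_proba(x)[:, 1]))
print("c-index, cluster-aware model: %.3f" % roc_auc_score(y, clust.predict_proba(x_cl)[:, 1]))
```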
Design and Development of a Real-Time Model Attitude Measurement System for Hypersonic Facilities
NASA Technical Reports Server (NTRS)
Jones, Thomas W.; Lunsford, Charles B.
2005-01-01
A series of wind tunnel tests have been conducted to evaluate a multi-camera videogrammetric system designed to measure model attitude in hypersonic facilities. The technique utilizes processed video data and applies photogrammetric principles for point tracking to compute model position including pitch, roll and yaw variables. A discussion of the constraints encountered during the design, development, and testing process, including lighting, vibration, operational range and optical access is included. Initial measurement results from the NASA Langley Research Center (LaRC) 31-Inch Mach 10 tunnel are presented.
Design and Development of a Real-Time Model Attitude Measurement System for Hypersonic Facilities
NASA Technical Reports Server (NTRS)
Jones, Thomas W.; Lunsford, Charles B.
2004-01-01
A series of wind tunnel tests have been conducted to evaluate a multi-camera videogrammetric system designed to measure model attitude in hypersonic facilities. The technique utilizes processed video data and applies photogrammetric principles for point tracking to compute model position including pitch, roll and yaw variables. A discussion of the constraints encountered during the design, development, and testing process, including lighting, vibration, operational range and optical access is included. Initial measurement results from the NASA Langley Research Center (LaRC) 31-Inch Mach 10 tunnel are presented.
Modeling and Measurement of Correlation between Blood and Interstitial Glucose Changes
Shi, Ting; Li, Dachao; Li, Guoqing; Zhang, Yiming; Xu, Kexin; Lu, Luo
2016-01-01
One of the most effective methods for continuous blood glucose monitoring is to continuously measure glucose in the interstitial fluid (ISF). However, multiple physiological factors can modulate glucose concentrations and affect the lag phase between blood and ISF glucose changes. This study aims to develop a compensatory tool for measuring the delay in ISF glucose variations in reference to blood glucose changes. A theoretical model was developed based on the biophysics and physiology of glucose transport in the microcirculation system. Blood and interstitial fluid glucose changes were measured in mice and rats by fluorescent and isotope methods, respectively. Computer-simulated curves were fitted to the data from the fluorescent measurements in mice and the isotope measurements in rats, indicating that there were lag times for ISF glucose changes. The fits also showed that there is a finite diffusion distance that glucose must travel from the center of the capillaries to the interstitial space in both the mouse and rat models. We conclude that it is feasible with the developed model to continuously monitor dynamic changes of blood glucose concentration with high accuracy by measuring glucose changes in ISF, provided correct parameters are used to determine and compensate for the delay time of glucose changes in ISF. PMID:27239479
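The lag between compartments can be illustrated with a first-order model, a simplified stand-in for the capillary-diffusion physiology described above; the lag constant and glucose profile below are assumptions.

```python
# Minimal sketch of a first-order lag between blood and ISF glucose.
# tau lumps the diffusion distance and microcirculation parameters.
import numpy as np

def isf_glucose(blood, dt, tau):
    """Euler integration of dC_isf/dt = (C_blood - C_isf) / tau."""
    isf = np.empty_like(blood)
    isf[0] = blood[0]
    for i in range(1, len(blood)):
        isf[i] = isf[i - 1] + dt * (blood[i - 1] - isf[i - 1]) / tau
    return isf

t = np.arange(0, 60, 0.5)                       # minutes
blood = 5.0 + 3.0 * (t > 10)                    # mmol/L, step rise at t = 10 min
isf = isf_glucose(blood, dt=0.5, tau=8.0)       # assumed 8-minute lag constant
print("ISF glucose at t = 20 min: %.2f mmol/L" % isf[t == 20][0])
```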
ACME-III and ACME-IV Final Campaign Reports
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biraud, S. C.
2016-01-01
The goals of the Atmospheric Radiation Measurement (ARM) Climate Research Facility's third and fourth Airborne Carbon Measurements (ACME) field campaigns, ACME-III and ACME-IV, are: 1) to measure and model the exchange of CO2, water vapor, and other greenhouse gases by the natural, agricultural, and industrial ecosystems of the Southern Great Plains (SGP) region; 2) to develop quantitative approaches to relate these local fluxes to the concentration of greenhouse gases measured at the Central Facility tower and in the atmospheric column above the ARM SGP Central Facility; 3) to develop and test bottom-up measurement and modeling approaches to estimate regional-scale carbon balances; and 4) to develop and test inverse modeling approaches to estimate regional-scale carbon balance and anthropogenic sources over continental regions. Regular soundings of the atmosphere from near the surface into the mid-troposphere are essential for this research.
An Introduction to the Partial Credit Model for Developing Nursing Assessments.
ERIC Educational Resources Information Center
Fox, Christine
1999-01-01
Demonstrates how the partial credit model, a variation of the Rasch Measurement Model, can be used to develop performance-based assessments for nursing education. Applies the model using the Practical Knowledge Inventory for Nurses. (SK)
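For reference, the partial credit model gives each response category a probability proportional to the exponentiated sum of (ability minus step difficulty) terms up to that category, with the empty sum for the lowest category taken as zero. A small sketch with illustrative step difficulties:

```python
# Partial credit model category probabilities for one polytomous item.
# Thresholds (step difficulties) are illustrative.
import numpy as np

def pcm_probs(theta, deltas):
    """Category probabilities for a partial credit item with thresholds deltas."""
    cum = np.concatenate([[0.0], np.cumsum(theta - np.asarray(deltas))])
    num = np.exp(cum)
    return num / num.sum()

deltas = [-1.0, 0.2, 1.5]          # step difficulties for a 4-category item (assumed)
for theta in (-1.0, 0.0, 1.5):
    print(theta, np.round(pcm_probs(theta, deltas), 3))
```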
ERIC Educational Resources Information Center
Anderson, Elizabeth
2017-01-01
Student engagement has been shown to be essential to the development of research-based best practices for K-12 education. It has been defined and measured in numerous ways. The purpose of this research study was to develop a measure of online student engagement for grades 3 through 8 using a partial credit Rasch model and validate the measure…
Calibration of Reduced Dynamic Models of Power Systems using Phasor Measurement Unit (PMU) Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ning; Lu, Shuai; Singh, Ruchi
2011-09-23
Accuracy of a power system dynamic model is essential to the secure and efficient operation of the system. Lower confidence on model accuracy usually leads to conservative operation and lowers asset usage. To improve model accuracy, identification algorithms have been developed to calibrate parameters of individual components using measurement data from staged tests. To facilitate online dynamic studies for large power system interconnections, this paper proposes a model reduction and calibration approach using phasor measurement unit (PMU) data. First, a model reduction method is used to reduce the number of dynamic components. Then, a calibration algorithm is developed to estimate parameters of the reduced model. This approach will help to maintain an accurate dynamic model suitable for online dynamic studies. The performance of the proposed method is verified through simulation studies.
Estimation of an Occupational Choice Model when Occupations Are Misclassified
ERIC Educational Resources Information Center
Sullivan, Paul
2009-01-01
This paper develops an empirical occupational choice model that corrects for misclassification in occupational choices and measurement error in occupation-specific work experience. The model is used to estimate the extent of measurement error in occupation data and quantify the bias that results from ignoring measurement error in occupation codes…
NASA Astrophysics Data System (ADS)
Krohn, Olivia; Armbruster, Aaron; Gao, Yongsheng; Atlas Collaboration
2017-01-01
Software tools developed for the purpose of modeling CERN LHC pp collision data to aid in its interpretation are presented. Some measurements are not adequately described by a Gaussian distribution; thus an interpretation assuming Gaussian uncertainties will inevitably introduce bias, necessitating analytical tools to recreate and evaluate non-Gaussian features. One example is the measurements of Higgs boson production rates in different decay channels, and the interpretation of these measurements. The ratios of data to Standard Model expectations (μ) for five arbitrary signals were modeled by building five Poisson distributions with mixed signal contributions such that the measured values of μ are correlated. Algorithms were designed to recreate probability distribution functions of μ as multi-variate Gaussians, where the standard deviation (σ) and correlation coefficients (ρ) are parametrized. There was good success with modeling 1-D likelihood contours of μ, and the multi-dimensional distributions were well modeled within 1σ, but the model began to diverge after 2σ due to unmerited assumptions in developing ρ. Future plans to improve the algorithms and develop a user-friendly analysis package will also be discussed. NSF International Research Experiences for Students
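The multivariate Gaussian approximation amounts to assembling a covariance matrix from the parametrized σ and ρ and evaluating the resulting likelihood surface. An illustrative sketch with invented numbers:

```python
# Sketch of approximating a measured likelihood over signal strengths mu as a
# multivariate Gaussian built from per-channel sigmas and a correlation rho.
# All numbers are made up for illustration.
import numpy as np
from scipy.stats import multivariate_normal

mu_hat = np.array([1.1, 0.9])                   # best-fit signal strengths
sigma = np.array([0.2, 0.3])                    # per-channel uncertainties
rho = 0.4                                       # assumed correlation between channels
cov = np.diag(sigma) @ np.array([[1, rho], [rho, 1]]) @ np.diag(sigma)

approx = multivariate_normal(mean=mu_hat, cov=cov)
print("-2 ln L at SM point (mu = 1, 1): %.3f" % (-2 * approx.logpdf([1.0, 1.0])))
```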
Payne, Courtney E; Wolfrum, Edward J
2015-01-01
Obtaining accurate chemical composition and reactivity (measures of carbohydrate release and yield) information for biomass feedstocks in a timely manner is necessary for the commercialization of biofuels. Our objective was to use near-infrared (NIR) spectroscopy and partial least squares (PLS) multivariate analysis to develop calibration models to predict the feedstock composition and the release and yield of soluble carbohydrates generated by a bench-scale dilute acid pretreatment and enzymatic hydrolysis assay. Major feedstocks included in the calibration models are corn stover, sorghum, switchgrass, perennial cool season grasses, rice straw, and miscanthus. We present individual model statistics to demonstrate model performance and validation samples to more accurately measure predictive quality of the models. The PLS-2 model for composition predicts glucan, xylan, lignin, and ash (wt%) with uncertainties similar to primary measurement methods. A PLS-2 model was developed to predict glucose and xylose release following pretreatment and enzymatic hydrolysis. An additional PLS-2 model was developed to predict glucan and xylan yield. PLS-1 models were developed to predict the sum of glucose/glucan and xylose/xylan for release and yield (grams per gram). The release and yield models have higher uncertainties than the primary methods used to develop the models. It is possible to build effective multispecies feedstock models for composition, as well as carbohydrate release and yield. The model for composition is useful for predicting glucan, xylan, lignin, and ash with good uncertainties. The release and yield models have higher uncertainties; however, these models are useful for rapidly screening sample populations to identify unusual samples.
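A hedged sketch of a PLS-1 calibration of the kind described, using simulated spectra and an arbitrary number of latent variables, is given below.

```python
# Illustrative PLS-1 calibration: predict a single response (e.g. total
# carbohydrate release) from NIR spectra. Spectra and reference values are
# simulated; the number of latent variables is arbitrary.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n, n_wavelengths = 120, 300
spectra = rng.normal(size=(n, n_wavelengths))
response = spectra[:, 50] - 0.5 * spectra[:, 200] + 0.1 * rng.normal(size=n)

X_cal, X_val, y_cal, y_val = train_test_split(spectra, response, random_state=0)
pls = PLSRegression(n_components=6).fit(X_cal, y_cal)
print("validation R^2: %.3f" % pls.score(X_val, y_val))
```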
The economic impact of NASA R and D spending: Executive summary
NASA Technical Reports Server (NTRS)
Evans, M. K.
1976-01-01
An evaluation of the economic impact of NASA research and development programs is made. The methodology and the results revolve around the interrelationships existing between the demand and supply effects of increased research and development spending, in particular, NASA research and development spending. The INFORUM Inter-Industry Forecasting Model is used to measure the short-run economic impact of alternative levels of NASA expenditures for 1975. An aggregate production function approach is used to develop the data series necessary to measure the impact of NASA research and development spending, and other determinants of technological progress, on the rate of growth in productivity of the U.S. economy. The measured relationship between NASA research and development spending and technological progress is simulated in the Chase Macroeconometric Model to measure the immediate, intermediate, and long-run economic impact of increased NASA research and development spending over a sustained period.
Development and Validation of a Consumer Quality Assessment Instrument for Dentistry.
ERIC Educational Resources Information Center
Johnson, Jeffrey D.; And Others
1990-01-01
This paper reviews the literature on consumer involvement in dental quality assessment, argues for inclusion of this information in quality assessment measures, outlines a conceptual model for measuring dental consumer quality assessment, and presents data relating to the development and validation of an instrument based on the conceptual model.…
ERIC Educational Resources Information Center
Koustelios, Athanasios D.; Bagiatis, Konstantinos
1997-01-01
An instrument to measure employee job satisfaction in Greece was developed and tested with 212 and 516 employees. Exploratory factor analysis indicated a six-factor solution with high internal consistency. Structural equation modeling showed a fairly good fit to the model, with need for slight improvement. (SLD)
Compensating for pneumatic distortion in pressure sensing devices
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Leondes, Cornelius T.
1990-01-01
A technique of compensating for pneumatic distortion in pressure sensing devices was developed and verified. This compensation allows conventional pressure sensing technology to obtain improved unsteady pressure measurements. Pressure distortion caused by frictional attenuation and pneumatic resonance within the sensing system makes obtaining unsteady pressure measurements by conventional sensors difficult. Most distortion occurs within the pneumatic tubing which transmits pressure impulses from the aircraft's surface to the measurement transducer. To avoid pneumatic distortion, experiment designers mount the pressure sensor at the surface of the aircraft, (called in-situ mounting). In-situ transducers cannot always fit in the available space and sometimes pneumatic tubing must be run from the aircraft's surface to the pressure transducer. A technique to measure unsteady pressure data using conventional pressure sensing technology was developed. A pneumatic distortion model is reduced to a low-order, state-variable model retaining most of the dynamic characteristics of the full model. The reduced-order model is coupled with results from minimum variance estimation theory to develop an algorithm to compensate for the effects of pneumatic distortion. Both postflight and real-time algorithms are developed and evaluated using simulated and flight data.
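The compensation idea can be illustrated with a first-order lag model of the tubing combined with a simple Kalman filter, a minimal stand-in for the reduced-order state-variable model and minimum variance estimator described above; the lag constant, noise levels, and pressure waveform are assumptions.

```python
# Sketch: a first-order tube-lag model plus a Kalman filter that estimates the
# true surface pressure from the distorted transducer reading. Lag constant,
# noise levels, and the input waveform are illustrative.
import numpy as np

dt, tau = 0.001, 0.02                       # s; assumed tube lag constant
F = np.array([[1.0, 0.0],
              [dt / tau, 1.0 - dt / tau]])  # state: [surface pressure, measured pressure]
H = np.array([[0.0, 1.0]])
Q = np.diag([1e-2, 1e-6])                   # process noise (surface pressure drifts)
R = np.array([[1e-3]])                      # transducer noise

rng = np.random.default_rng(5)
t = np.arange(0, 0.5, dt)
p_surface = np.sin(2 * np.pi * 20 * t)      # true unsteady pressure, 20 Hz
p_meas = np.zeros_like(t)
for k in range(1, len(t)):                  # simulate the tube lag
    p_meas[k] = p_meas[k - 1] + dt * (p_surface[k - 1] - p_meas[k - 1]) / tau
z = p_meas + rng.normal(0, np.sqrt(R[0, 0]), len(t))

x, P = np.zeros(2), np.eye(2)
est = np.zeros_like(t)
for k in range(len(t)):                     # Kalman filter: predict, then update
    x, P = F @ x, F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + (K @ (z[k] - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    est[k] = x[0]

print("RMS error, raw measurement: %.3f" % np.sqrt(np.mean((z - p_surface) ** 2)))
print("RMS error, compensated:     %.3f" % np.sqrt(np.mean((est - p_surface) ** 2)))
```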
Cho, In-Jeong; Sung, Ji Min; Chang, Hyuk-Jae; Chung, Namsik; Kim, Hyeon Chang
2017-11-01
Increasing evidence suggests that repeatedly measured cardiovascular disease (CVD) risk factors may have an additive predictive value compared with single measured levels. Thus, we evaluated the incremental predictive value of incorporating periodic health screening data for CVD prediction in a large nationwide cohort with periodic health screening tests. A total of 467 708 persons aged 40 to 79 years and free from CVD were randomly divided into development (70%) and validation subcohorts (30%). We developed 3 different CVD prediction models: a single measure model using single time point screening data; a longitudinal average model using average risk factor values from periodic screening data; and a longitudinal summary model using average values and the variability of risk factors. The development subcohort included 327 396 persons who had 3.2 health screenings on average and 25 765 cases of CVD over 12 years. The C statistics (95% confidence interval [CI]) for the single measure, longitudinal average, and longitudinal summary models were 0.690 (95% CI, 0.682-0.698), 0.695 (95% CI, 0.687-0.703), and 0.752 (95% CI, 0.744-0.760) in men and 0.732 (95% CI, 0.722-0.742), 0.735 (95% CI, 0.725-0.745), and 0.790 (95% CI, 0.780-0.800) in women, respectively. The net reclassification index from the single measure model to the longitudinal average model was 1.78% in men and 1.33% in women, and the index from the longitudinal average model to the longitudinal summary model was 32.71% in men and 34.98% in women. Using averages of repeatedly measured risk factor values modestly improves CVD predictability compared with single measurement values. Incorporating the average and variability information of repeated measurements can lead to great improvements in disease prediction. URL: https://www.clinicaltrials.gov. Unique identifier: NCT02931500. © 2017 American Heart Association, Inc.
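The feature construction behind the three models can be sketched on simulated data: a single measurement, the average of repeated measurements, and the average plus variability, each fed to a logistic model and compared by c-index (ROC AUC). All data-generating values below are invented.

```python
# Sketch of single-measure vs longitudinal-average vs average+variability
# feature sets for risk prediction. Data are simulated; c-index = ROC AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
n, n_visits = 5000, 3
true_mean = rng.normal(130, 15, size=n)                 # e.g. underlying systolic BP
visits = true_mean[:, None] + rng.normal(0, 10, size=(n, n_visits))
variability = visits.std(axis=1)
risk = 1 / (1 + np.exp(-(0.03 * (true_mean - 130) + 0.05 * (variability - 8) - 2.2)))
event = rng.binomial(1, risk)

def cindex(X):
    m = LogisticRegression(max_iter=1000).fit(X, event)
    return roc_auc_score(event, m.predict_proba(X)[:, 1])

print("single measure:        %.3f" % cindex(visits[:, [0]]))
print("longitudinal average:  %.3f" % cindex(visits.mean(axis=1, keepdims=True)))
print("average + variability: %.3f" % cindex(np.column_stack([visits.mean(axis=1), variability])))
```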
Karasz, Alison; Patel, Viraj; Kabita, Mahbhooba; Shimu, Parvin
2013-01-01
Although common mental disorder (CMD) is highly prevalent among South Asian immigrant women, they rarely seek mental health treatment. This may be owing in part to the lack of conceptual synchrony between medical models of mental disorder and the social models of distress common in South Asian communities. Furthermore, common mental health screening and diagnostic measures may not adequately capture distress in this group. Community-based participatory research (CBPR) is ideally suited to help address measurement issues in CMD as well as to develop culturally appropriate treatment models. To use participatory methods to identify an appropriate, culturally specific mental health syndrome and develop an instrument to measure this syndrome. We formed a partnership between researchers, clinicians, and community members. The partnership selected a culturally specific model of emotional distress/illness, "tension," as a focus for further study. Partners developed a scale to measure Tension and tested the new scale on 162 Bangladeshi immigrant women living in the Bronx. The 24-item "Tension Scale" had high internal consistency (α = 0.83). On bivariate analysis, the scale significantly correlated in the expected direction with depression as measured by the Patient Health Questionnaire (PHQ-2), age, education, self-rated health, having seen a physician in the past year, and other variables. Using participatory techniques, we created a new measure designed to assess CMD in an isolated immigrant group. The new measure shows excellent psychometric properties and will be helpful in the implementation of a community-based, culturally synchronous intervention for depression. We describe a useful strategy for the rapid development and field testing of culturally appropriate measures of mental distress and disorder.
Karasz, Alison; Patel, Viraj; Kabita, Mahbhooba; Shimu, Parvin
2015-01-01
Background Though common mental disorder (CMD) is highly prevalent among South Asian immigrant women, they rarely seek mental health treatment. This may be due in part to the lack of conceptual synchrony between medical models of mental disorder and the social models of distress common in South Asian communities. Furthermore, common mental health screening and diagnostic measures may not adequately capture distress in this group. CBPR is ideally suited to help address measurement issues in CMD as well as develop culturally appropriate treatment models. Objectives To use participatory methods to identify an appropriate, culturally specific mental health syndrome and develop an instrument to measure this syndrome. Methods We formed a partnership between researchers, clinicians, and community members. The partnership selected a culturally specific model of emotional distress/illness, “Tension,” as a focus for further study. Partners developed a scale to measure Tension and tested the new scale on 162 Bangladeshi immigrant women living in the Bronx. Results The 24-item “Tension Scale” had high internal consistency (alpha = 0.83). In bivariate analysis, the scale significantly correlated in the expected direction with depression as measured by the PHQ-2, age, education, self-rated health, having seen a physician in the past year, and other variables. Conclusions Using participatory techniques, we created a new measure designed to assess common mental disorder in an isolated immigrant group. The new measure shows excellent psychometric properties and will be helpful in the implementation of a community-based, culturally synchronous intervention for depression. We describe a useful strategy for the rapid development and field testing of culturally appropriate measures of mental distress and disorder. PMID:24375184
Schwartz, Jennifer; Wang, Yongfei; Qin, Li; Schwamm, Lee H; Fonarow, Gregg C; Cormier, Nicole; Dorsey, Karen; McNamara, Robert L; Suter, Lisa G; Krumholz, Harlan M; Bernheim, Susannah M
2017-11-01
The Centers for Medicare & Medicaid Services publicly reports a hospital-level stroke mortality measure that lacks stroke severity risk adjustment. Our objective was to describe novel measures of stroke mortality suitable for public reporting that incorporate stroke severity into risk adjustment. We linked data from the American Heart Association/American Stroke Association Get With The Guidelines-Stroke registry with Medicare fee-for-service claims data to develop the measures. We used logistic regression for variable selection in risk model development. We developed 3 risk-standardized mortality models for patients with acute ischemic stroke, all of which include the National Institutes of Health Stroke Scale score: one that includes other risk variables derived only from claims data (claims model); one that includes other risk variables derived from claims and clinical variables that could be obtained from electronic health record data (hybrid model); and one that includes other risk variables that could be derived only from electronic health record data (electronic health record model). The cohort used to develop and validate the risk models consisted of 188 975 hospital admissions at 1511 hospitals. The claims, hybrid, and electronic health record risk models included 20, 21, and 9 risk-adjustment variables, respectively; the C statistics were 0.81, 0.82, and 0.79, respectively (as compared with the current publicly reported model C statistic of 0.75); the risk-standardized mortality rates ranged from 10.7% to 19.0%, 10.7% to 19.1%, and 10.8% to 20.3%, respectively; the median risk-standardized mortality rate was 14.5% for all measures; and the odds of mortality for a high-mortality hospital (+1 SD) were 1.51, 1.52, and 1.52 times those for a low-mortality hospital (-1 SD), respectively. We developed 3 quality measures that demonstrate better discrimination than the Centers for Medicare & Medicaid Services' existing stroke mortality measure, adjust for stroke severity, and could be implemented in a variety of settings. © 2017 American Heart Association, Inc.
NASA Astrophysics Data System (ADS)
Reutterer, Bernd; Traxler, Lukas; Bayer, Natascha; Drauschke, Andreas
2017-04-01
To evaluate the performance of intraocular lenses (IOLs) used to treat cataract, an optomechanical eye model was developed. One of the most crucial components is the IOL holder, which should guarantee a physiological representation of the capsular bag and a stable position during measurement sequences. Individual holders are required because every IOL has different geometric parameters. A method for obtaining the correct holder dimensions for a specific IOL was developed and tested by verifying the position of the IOL before and after a measurement sequence. Results of telecentric measurements and MTF measurements show that the IOL position does not change during the displacement sequence induced by the stepper motors of the eye model.
ERIC Educational Resources Information Center
Fulmer, Gavin W.; Liang, Ling L.
2013-01-01
This study tested a student survey to detect differences in instruction between teachers in a modeling-based science program and comparison group teachers. The Instructional Activities Survey measured teachers' frequency of modeling, inquiry, and lecture instruction. Factor analysis and Rasch modeling identified three subscales, Modeling and…
The School Implementation Scale: Measuring Implementation in Response to Intervention Models
ERIC Educational Resources Information Center
Erickson, Amy Gaumer; Noonan, Pattie M.; Jenson, Ronda
2012-01-01
Models of response to intervention (RTI) have been widely developed and implemented and have expanded to include integrated academic/behavior RTI models. Until recently, evaluation of model effectiveness has focused primarily on student-level data, but additional measures of treatment integrity within these multi-tiered models are emerging to…
ERIC Educational Resources Information Center
Delhees, Karl H.; And Others
1970-01-01
Reports basic research on the meaning and measurement of family attitudes and the development of a new measurement instrument, the Family Motivation Test. Theoretical system is called the Investment-Subsidation Model. (Author/DB)
Load Measurement in Structural Members Using Guided Acoustic Waves
NASA Astrophysics Data System (ADS)
Chen, Feng; Wilcox, Paul D.
2006-03-01
A non-destructive technique to measure load in structures such as rails and bridge cables by using guided acoustic waves is investigated both theoretically and experimentally. Robust finite element models for predicting the effect of load on guided wave propagation are developed and example results are presented for rods. Reasonably good agreement of experimental results with modelling prediction is obtained. The measurement technique has been developed to perform tests on larger specimens.
The Prediction of Noise Due to Jet Turbulence Convecting Past Flight Vehicle Trailing Edges
NASA Technical Reports Server (NTRS)
Miller, Steven A. E.
2014-01-01
High intensity acoustic radiation occurs when turbulence convects past airframe trailing edges. A mathematical model is developed to predict this acoustic radiation. The model is dependent on the local flow and turbulent statistics above the trailing edge of the flight vehicle airframe. These quantities are dependent on the jet and flight vehicle Mach numbers and jet temperature. A term in the model approximates the turbulent statistics of single-stream heated jet flows and is developed based upon measurement. The developed model is valid for a wide range of jet Mach numbers, jet temperature ratios, and flight vehicle Mach numbers. The model predicts traditional trailing edge noise if the jet is not interacting with the airframe. Predictions of mean-flow quantities and the cross-spectrum of static pressure near the airframe trailing edge are compared with measurement. Finally, predictions of acoustic intensity are compared with measurement and the model is shown to accurately capture the phenomenon.
Marfeo, Elizabeth E.; Haley, Stephen M.; Jette, Alan M.; Eisen, Susan V.; Ni, Pengsheng; Bogusz, Kara; Meterko, Mark; McDonough, Christine M.; Chan, Leighton; Brandt, Diane E.; Rasch, Elizabeth K.
2014-01-01
Physical and mental impairments represent the two largest health condition categories for which workers receive Social Security disability benefits. Comprehensive assessment of physical and mental impairments should include aspects beyond medical conditions such as a person’s underlying capabilities as well as activity demands relevant to the context of work. The objective of this paper is to describe the initial conceptual stages of developing new measurement instruments of behavioral health and physical functioning relevant for Social Security work disability evaluation purposes. To outline a clear conceptualization of the constructs to be measured, two content models were developed using structured and informal qualitative approaches. We performed a structured literature review focusing on work disability and incorporating aspects of the International Classification of Functioning, Disability, and Health (ICF) as a unifying taxonomy for framework development. Expert interviews provided advice and consultation to enhance face validity of the resulting content models. The content model for work-related behavioral health function identifies five major domains (1) Behavior Control, (2) Basic Interactions, (3) Temperament and Personality, (4) Adaptability, and (5) Workplace Behaviors. The content model describing physical functioning includes three domains (1) Changing and Maintaining Body Position, (2) Whole Body Mobility, and (3) Carrying, Moving and Handling Objects. These content models informed subsequent measurement properties including item development, measurement scale construction, and provided conceptual coherence guiding future empirical inquiry. The proposed measurement approaches show promise to comprehensively and systematically assess physical and behavioral health functioning relevant to work. PMID:23548543
A refined method of modeling atmospheric dust concentrations due to wind erosion was developed using real-time saltation flux measurements and ambient dust monitoring data at Owens Lake, California. This modeling method may have practical applications for modeling the atmospheric...
NASA Technical Reports Server (NTRS)
Jackson, C. M., Jr.; Summerfield, D. G. (Inventor)
1974-01-01
The design and development of a wind tunnel model equipped with pressure measuring devices are discussed. The pressure measuring orifices are integrally constructed in the wind tunnel model and do not contribute to distortions of the aerodynamic surface. The construction of a typical model is described and a drawing of the device is included.
Urban air quality estimation study, phase 1
NASA Technical Reports Server (NTRS)
Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.
1976-01-01
Possibilities are explored for applying estimation theory to the analysis, interpretation, and use of air quality measurements in conjunction with simulation models to provide a cost effective method of obtaining reliable air quality estimates for wide urban areas. The physical phenomenology of real atmospheric plumes from elevated localized sources is discussed. A fluctuating plume dispersion model is derived. Individual plume parameter formulations are developed along with associated a priori information. Individual measurement models are developed.
Interpreting Variance Components as Evidence for Reliability and Validity.
ERIC Educational Resources Information Center
Kane, Michael T.
The reliability and validity of measurement is analyzed by a sampling model based on generalizability theory. A model for the relationship between a measurement procedure and an attribute is developed from an analysis of how measurements are used and interpreted in science. The model provides a basis for analyzing the concept of an error of…
Evaluating Comprehensive School Reform Models at Scale: Focus on Implementation
ERIC Educational Resources Information Center
Vernez, Georges; Karam, Rita; Mariano, Louis T.; DeMartini, Christine
2006-01-01
This study was designed to fill the "implementation measurement" gap. A methodology to quantitatively measure the level of Comprehensive School Reform (CSR) implementation that can be used across a variety of CSR models was developed, and then applied to measure actual implementation of four different CSR models in a large number of schools. The…
Model Attitude and Deformation Measurements at the NASA Glenn Research Center
NASA Technical Reports Server (NTRS)
Woike, Mark R.
2008-01-01
The NASA Glenn Research Center is currently participating in an American Institute of Aeronautics and Astronautics (AIAA) sponsored Model Attitude and Deformation Working Group. This working group is chartered to develop a best practices document dealing with the measurement of two primary areas of wind tunnel measurements, 1) model attitude including alpha, beta and roll angle, and 2) model deformation. Model attitude is a principal variable in making aerodynamic and force measurements in a wind tunnel. Model deformation affects measured forces, moments and other measured aerodynamic parameters. The working group comprises members from industry, academia, and the Department of Defense (DoD). Each member of the working group gave a presentation on the methods and techniques that they are using to make model attitude and deformation measurements. This presentation covers the NASA Glenn Research Center's approach to making model attitude and deformation measurements.
ERIC Educational Resources Information Center
Arnold, Holly Weber
2013-01-01
This study examines the relationship between delivery models (the class size reduction model and the sheltered instruction model) and language development levels on the grade-level reading development of sixth-grade English learners (ELs) attending public middle schools in metro Atlanta, Georgia. The instrument used to measure grade-level mastery…
NASA Astrophysics Data System (ADS)
Kusrini, Elisa; Subagyo; Aini Masruroh, Nur
2016-01-01
This research continues the authors' earlier work on the design of integrated performance measurement between supply chain actors and the regulator. In the previous paper, the performance measurement was designed by combining the Balanced Scorecard - Supply Chain Operation Reference - Regulator Contribution model with Data Envelopment Analysis. This model is referred to as the B-S-Rc-DEA model. The combination has the disadvantage that all performance variables carry the same weight. This paper investigates whether assigning weights to the performance variables produces a measurement that is more sensitive in detecting performance improvement. Therefore, this paper discusses the development of the B-S-Rc-DEA model by assigning weights to its performance variables. The resulting model is referred to as the Scale B-S-Rc-DEA model. To illustrate the model, samples from small and medium enterprises in the leather craft supply chain of Yogyakarta Province, Indonesia, are used. It is found that the Scale B-S-Rc-DEA model is more sensitive in detecting performance improvement than the B-S-Rc-DEA model.
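The DEA component of such a framework can be illustrated with the input-oriented CCR multiplier model, solved as a linear program per decision-making unit; the weighting of performance variables is exactly what the Scale extension constrains, though here the weights are left free and the data are invented.

```python
# Sketch of the input-oriented CCR multiplier model for one decision-making unit
# (e.g. one supply-chain actor), solved as a linear program. Data are invented.
import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 3.0], [6.0, 2.0], [5.0, 5.0]])   # inputs of 3 units
Y = np.array([[20.0], [18.0], [25.0]])               # outputs of 3 units

def ccr_efficiency(j0):
    """Efficiency of unit j0: max u'y0 s.t. v'x0 = 1, u'y_j - v'x_j <= 0, u, v >= 0."""
    s, m = Y.shape[1], X.shape[1]
    c = np.concatenate([-Y[j0], np.zeros(m)])          # linprog minimizes, so minimize -u'y0
    A_ub = np.hstack([Y, -X])                          # u'y_j - v'x_j <= 0 for all units j
    b_ub = np.zeros(len(X))
    A_eq = np.concatenate([np.zeros(s), X[j0]]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return -res.fun

for j in range(len(X)):
    print("unit %d efficiency: %.3f" % (j, ccr_efficiency(j)))
```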
Nonlinear friction model for servo press simulation
NASA Astrophysics Data System (ADS)
Ma, Ninshu; Sugitomo, Nobuhiko; Kyuno, Takunori; Tamura, Shintaro; Naka, Tetsuo
2013-12-01
The friction coefficient was measured under an idealized condition for a pulse servo motion. The measured friction coefficient and its variation with both sliding distance and the pulse motion showed that the friction resistance can be reduced by re-lubrication during the unloading phase of the pulse servo motion. Based on the measured friction coefficient and its changes with sliding distance and re-lubrication of the oil, a nonlinear friction model was developed. Using the newly developed nonlinear friction model, a deep-drawing simulation was performed and the formability was evaluated. The results were compared with experimental ones and the effectiveness of the model was verified.
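A toy version of such a friction law, with the coefficient rising over sliding distance and partially recovering when the load is released during a servo pulse, is sketched below; the functional form and constants are assumptions, not the measured model.

```python
# Illustrative friction coefficient that grows with sliding distance as the
# lubricant film thins and resets toward its initial value on unloading
# (re-lubrication). Functional form and constants are assumed.
import numpy as np

def friction_coefficient(slide_step, unloaded, mu0=0.08, mu_max=0.16,
                         growth=0.05, recovery=0.6):
    """Update the friction coefficient over a sequence of sliding increments."""
    mu = mu0
    history = []
    for ds, off in zip(slide_step, unloaded):
        if off:                                     # unloading: oil re-lubricates the interface
            mu = mu0 + (1.0 - recovery) * (mu - mu0)
        else:                                       # sliding under load: film thins, friction rises
            mu = min(mu_max, mu + growth * ds)
        history.append(mu)
    return np.array(history)

steps = np.full(20, 0.5)                            # mm of sliding per increment
pulse = np.arange(20) % 5 == 4                      # every fifth increment is an unloading phase
print(np.round(friction_coefficient(steps, pulse), 3))
```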
Measurements and empirical model of the acoustic properties of reticulated vitreous carbon.
Muehleisen, Ralph T; Beamer, C Walter; Tinianov, Brandon D
2005-02-01
Reticulated vitreous carbon (RVC) is a highly porous, rigid, open cell carbon foam structure with a high melting point, good chemical inertness, and low bulk thermal conductivity. For the proper design of acoustic devices such as acoustic absorbers and thermoacoustic stacks and regenerators utilizing RVC, the acoustic properties of RVC must be known. From knowledge of the complex characteristic impedance and wave number most other acoustic properties can be computed. In this investigation, the four-microphone transfer matrix measurement method is used to measure the complex characteristic impedance and wave number for 60 to 300 pore-per-inch RVC foams with flow resistivities from 1759 to 10,782 Pa s m(-2) in the frequency range of 330 Hz-2 kHz. The data are found to be poorly predicted by the fibrous material empirical model developed by Delany and Bazley, the open cell plastic foam empirical model developed by Qunli, or the Johnson-Allard microstructural model. A new empirical power law model is developed and is shown to provide good predictions of the acoustic properties over the frequency range of measurement. Uncertainty estimates for the constants of the model are also computed.
Measurements and empirical model of the acoustic properties of reticulated vitreous carbon
NASA Astrophysics Data System (ADS)
Muehleisen, Ralph T.; Beamer, C. Walter; Tinianov, Brandon D.
2005-02-01
Reticulated vitreous carbon (RVC) is a highly porous, rigid, open cell carbon foam structure with a high melting point, good chemical inertness, and low bulk thermal conductivity. For the proper design of acoustic devices such as acoustic absorbers and thermoacoustic stacks and regenerators utilizing RVC, the acoustic properties of RVC must be known. From knowledge of the complex characteristic impedance and wave number most other acoustic properties can be computed. In this investigation, the four-microphone transfer matrix measurement method is used to measure the complex characteristic impedance and wave number for 60 to 300 pore-per-inch RVC foams with flow resistivities from 1759 to 10 782 Pa s m-2 in the frequency range of 330 Hz-2 kHz. The data are found to be poorly predicted by the fibrous material empirical model developed by Delany and Bazley, the open cell plastic foam empirical model developed by Qunli, or the Johnson-Allard microstructural model. A new empirical power law model is developed and is shown to provide good predictions of the acoustic properties over the frequency range of measurement. Uncertainty estimates for the constants of the model are also computed.
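Power-law models of this type express the characteristic impedance and wavenumber as functions of the dimensionless ratio X = ρ0·f/σ. The sketch below uses the classic Delany-Bazley fibrous-material coefficients as an example of the form (the paper's RVC-specific constants are not reproduced here):

```python
# Example of an empirical power-law model for Zc and k in terms of X = rho0*f/sigma.
# Coefficients are the classic Delany-Bazley fibrous-material values, shown only
# to illustrate the functional form; the RVC-specific constants are not given here.
import numpy as np

rho0, c0 = 1.21, 343.0                      # air density (kg/m^3), sound speed (m/s)

def delany_bazley(f, sigma):
    """Complex Zc and k for flow resistivity sigma (Pa s m^-2) at frequency f (Hz)."""
    X = rho0 * f / sigma
    Zc = rho0 * c0 * (1 + 0.0571 * X**-0.754 - 1j * 0.087 * X**-0.732)
    k = (2 * np.pi * f / c0) * (1 + 0.0978 * X**-0.700 - 1j * 0.189 * X**-0.595)
    return Zc, k

for f in (330.0, 1000.0, 2000.0):           # frequency range used in the measurements
    Zc, k = delany_bazley(f, sigma=10782.0) # highest flow resistivity reported above
    print(f, np.round(Zc, 1), np.round(k, 3))
```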
Remote measurement of soil moisture over vegetation using infrared temperature measurements
NASA Technical Reports Server (NTRS)
Carlson, Toby N.
1991-01-01
Better methods for remote sensing of surface evapotranspiration, soil moisture, and fractional vegetation cover were developed. The objectives were to: (1) further develop a model of water movement through the soil/plant/atmosphere system; (2) use this model, in conjunction with measurements of infrared surface temperature and vegetation fraction; (3) determine the magnitude of radiometric temperature response to water stress in vegetation; (4) show at what point one can detect that sensitivity to water stress; and (5) determine the practical limits of the methods. A hydrological model that can be used to calculate soil water content versus depth given conventional meteorological records and observations of vegetation cover was developed. An outline of the results of these initiatives is presented.
Pollard, Beth; Johnston, Marie; Dixon, Diane
2007-01-01
Subjective measures involving clinician ratings or patient self-assessments have become recognised as an important tool for the assessment of health outcome. The value of a health outcome measure is usually assessed by a psychometric evaluation of its reliability, validity and responsiveness. However, psychometric testing involves an accumulation of evidence and has recognised limitations. It has been suggested that an evaluation of how well a measure has been developed would be a useful additional criterion in assessing the value of a measure. This paper explored the theoretical background and methodological development of subjective health status measures commonly used in osteoarthritis research. Fourteen subjective health outcome measures commonly used in osteoarthritis research were examined. Each measure was explored on the basis of its i) theoretical framework (was there a definition of what was being assessed and was it part of a theoretical model?) and ii) methodological development (what was the scaling strategy, how were the items generated and reduced, what was the response format and what was the scoring method?). Only the AIMS, SF-36 and WHOQOL defined what they were assessing (i.e. the construct of interest) and no measure assessed was part of a theoretical model. None of the clinician report measures appeared to have implemented a scaling procedure or described the rationale for the items selected or the scoring system. Of the patient self-report measures, the AIMS, MPQ, OXFORD, SF-36, WHOQOL and WOMAC appeared to follow a standard psychometric scaling method. The DRP and EuroQol used alternative scaling methods. The review highlighted the general lack of a theoretical framework for both clinician report and patient self-report measures. This review also drew attention to the wide variation in the methodological development of commonly used measures in OA. While, in general, the patient self-report measures had good methodological development, the clinician report measures appeared less well developed. It would be of value if new measures defined the construct of interest and if that construct were part of a theoretical model. By ensuring that measures are both theoretically and empirically valid, improvements in subjective health outcome measures should be possible. PMID:17343739
Cabin Atmosphere Monitoring System (CAMS), pre-prototype model development continuation
NASA Technical Reports Server (NTRS)
Bursack, W. W.; Harris, W. A.
1975-01-01
The development of the Cabin Atmosphere Monitoring System (CAMS) is described. Attention was directed toward improving stability and reliability of the design using flight application guidelines. Considerable effort was devoted to the development of a temperature-stable RF/DC generator used for excitation of the quadrupole mass filter. Minor design changes were made in the preprototype model. Specific gas measurement examples are included along with a discussion of the measurement rationale employed.
Development of optical diagnostics for performance evaluation of arcjet thrusters
NASA Technical Reports Server (NTRS)
Cappelli, Mark A.
1995-01-01
Laser and optical emission-based measurements have been developed and implemented for use on low-power hydrogen arcjet thrusters and xenon-propelled electric thrusters. In the case of low-power hydrogen arcjets, these laser-induced fluorescence measurements constitute the first complete set of data that characterize the velocity and temperature field of such a device. The research performed under the auspices of this NASA grant includes laser-based measurements of atomic hydrogen velocity and translational temperature, ultraviolet absorption measurements of ground state atomic hydrogen, Raman scattering measurements of the electronic ground state of molecular hydrogen, and optical emission-based measurements of electronically excited atomic hydrogen, electron number density, and electron temperature. In addition, we have developed a collisional-radiative model of atomic hydrogen for use in conjunction with magnetohydrodynamic models to predict the plasma radiative spectrum, and near-electrode plasma models to better understand current transfer from the electrodes to the plasma. In the final year of the grant, a new program aimed at developing diagnostics for xenon plasma thrusters was initiated, and results on the use of diode lasers for interrogating Hall accelerator plasmas have been presented at recent conferences.
Payne, Courtney E.; Wolfrum, Edward J.
2015-03-12
Obtaining accurate chemical composition and reactivity (measures of carbohydrate release and yield) information for biomass feedstocks in a timely manner is necessary for the commercialization of biofuels. Our objective was to use near-infrared (NIR) spectroscopy and partial least squares (PLS) multivariate analysis to develop calibration models to predict the feedstock composition and the release and yield of soluble carbohydrates generated by a bench-scale dilute acid pretreatment and enzymatic hydrolysis assay. Major feedstocks included in the calibration models are corn stover, sorghum, switchgrass, perennial cool season grasses, rice straw, and miscanthus. Here are the results: We present individual model statistics to demonstrate model performance and validation samples to more accurately measure predictive quality of the models. The PLS-2 model for composition predicts glucan, xylan, lignin, and ash (wt%) with uncertainties similar to primary measurement methods. A PLS-2 model was developed to predict glucose and xylose release following pretreatment and enzymatic hydrolysis. An additional PLS-2 model was developed to predict glucan and xylan yield. PLS-1 models were developed to predict the sum of glucose/glucan and xylose/xylan for release and yield (grams per gram). The release and yield models have higher uncertainties than the primary methods used to develop the models. In conclusion, it is possible to build effective multispecies feedstock models for composition, as well as carbohydrate release and yield. The model for composition is useful for predicting glucan, xylan, lignin, and ash with good uncertainties. The release and yield models have higher uncertainties; however, these models are useful for rapidly screening sample populations to identify unusual samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Payne, Courtney E.; Wolfrum, Edward J.
Obtaining accurate chemical composition and reactivity (measures of carbohydrate release and yield) information for biomass feedstocks in a timely manner is necessary for the commercialization of biofuels. Our objective was to use near-infrared (NIR) spectroscopy and partial least squares (PLS) multivariate analysis to develop calibration models to predict the feedstock composition and the release and yield of soluble carbohydrates generated by a bench-scale dilute acid pretreatment and enzymatic hydrolysis assay. Major feedstocks included in the calibration models are corn stover, sorghum, switchgrass, perennial cool season grasses, rice straw, and miscanthus. Here are the results: We present individual model statistics to demonstrate model performance and validation samples to more accurately measure predictive quality of the models. The PLS-2 model for composition predicts glucan, xylan, lignin, and ash (wt%) with uncertainties similar to primary measurement methods. A PLS-2 model was developed to predict glucose and xylose release following pretreatment and enzymatic hydrolysis. An additional PLS-2 model was developed to predict glucan and xylan yield. PLS-1 models were developed to predict the sum of glucose/glucan and xylose/xylan for release and yield (grams per gram). The release and yield models have higher uncertainties than the primary methods used to develop the models. In conclusion, it is possible to build effective multispecies feedstock models for composition, as well as carbohydrate release and yield. The model for composition is useful for predicting glucan, xylan, lignin, and ash with good uncertainties. The release and yield models have higher uncertainties; however, these models are useful for rapidly screening sample populations to identify unusual samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Isselhardt, B. H.; Prussin, S. G.; Savina, M. R.
2016-01-01
Resonance Ionization Mass Spectrometry (RIMS) has been developed as a method to measure uranium isotope abundances. In this approach, RIMS is used as an element-selective ionization process between uranium atoms and potential isobars without the aid of chemical purification and separation. The use of broad bandwidth lasers with automated feedback control of wavelength was applied to the measurement of the U-235/U-238 ratio to decrease laser-induced isotopic fractionation. In application, isotope standards are used to identify and correct bias in measured isotope ratios, but understanding laser-induced bias from first-principles can improve the precision and accuracy of experimental measurements. A rate equation model for predicting the relative ionization probability has been developed to study the effect of variations in laser parameters on the measured isotope ratio. The model uses atomic data and empirical descriptions of laser performance to estimate the laser-induced bias expected in experimental measurements of the U-235/U-238 ratio. Empirical corrections are also included to account for ionization processes that are difficult to calculate from first principles with the available atomic data. Development of this model has highlighted several important considerations for properly interpreting experimental results.
Isselhardt, B. H.; Prussin, S. G.; Savina, M. R.; ...
2015-12-07
Resonance Ionization Mass Spectrometry (RIMS) has been developed as a method to measure uranium isotope abundances. In this approach, RIMS is used as an element-selective ionization process between uranium atoms and potential isobars without the aid of chemical purification and separation. The use of broad bandwidth lasers with automated feedback control of wavelength was applied to the measurement of the 235U/238U ratio to decrease laser-induced isotopic fractionation. In application, isotope standards are used to identify and correct bias in measured isotope ratios, but understanding laser-induced bias from first-principles can improve the precision and accuracy of experimental measurements. A rate equation model for predicting the relative ionization probability has been developed to study the effect of variations in laser parameters on the measured isotope ratio. The model uses atomic data and empirical descriptions of laser performance to estimate the laser-induced bias expected in experimental measurements of the 235U/238U ratio. Empirical corrections are also included to account for ionization processes that are difficult to calculate from first principles with the available atomic data. As a result, development of this model has highlighted several important considerations for properly interpreting experimental results.
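A greatly simplified rate-equation sketch of the ionization step is shown below: resonant excitation followed by ionization, integrated over a laser pulse, with a Lorentzian overlap between the laser bandwidth and an isotope-shifted transition standing in for the full model's atomic data and empirical laser description. All rates and shifts are illustrative.

```python
# Toy rate-equation model of resonance ionization: ground state -> excited state
# -> ion, integrated over one laser pulse. Rates, bandwidth, and isotope shift
# are illustrative, not the values used in the actual model.
import numpy as np

def ion_fraction(detuning_GHz, bandwidth_GHz=3.0, W1_peak=5e8, W2=2e8,
                 A=1e7, pulse_ns=10.0, steps=2000):
    """Fraction ionized after one pulse, for a transition detuned from the laser."""
    W1 = W1_peak / (1 + (2 * detuning_GHz / bandwidth_GHz) ** 2)  # Lorentzian overlap
    dt = pulse_ns * 1e-9 / steps
    n0, n1, ni = 1.0, 0.0, 0.0
    for _ in range(steps):
        exc = W1 * (n0 - n1) * dt      # excitation minus stimulated emission
        dec = A * n1 * dt              # spontaneous decay back to ground state
        ion = W2 * n1 * dt             # ionization out of the excited state
        n0 += -exc + dec
        n1 += exc - dec - ion
        ni += ion
    return ni

# a small isotope shift relative to the laser bandwidth biases the measured ratio
p238, p235 = ion_fraction(0.0), ion_fraction(1.5)
print("relative ionization bias on 235/238: %.3f" % (p235 / p238))
```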
Model identification of signal transduction networks from data using a state regulator problem.
Gadkar, K G; Varner, J; Doyle, F J
2005-03-01
Advances in molecular biology provide an opportunity to develop detailed models of biological processes that can be used to obtain an integrated understanding of the system. However, development of useful models from the available knowledge of the system and experimental observations still remains a daunting task. In this work, a model identification strategy for complex biological networks is proposed. The approach includes a state regulator problem (SRP) that provides estimates of all the component concentrations and the reaction rates of the network using the available measurements. The full set of the estimates is utilised for model parameter identification for the network of known topology. An a priori model complexity test that indicates the feasibility of performance of the proposed algorithm is developed. Fisher information matrix (FIM) theory is used to address model identifiability issues. Two signalling pathway case studies, the caspase function in apoptosis and the MAP kinase cascade system, are considered. The MAP kinase cascade, with measurements restricted to protein complex concentrations, fails the a priori test and the SRP estimates are poor as expected. The apoptosis network structure used in this work has moderate complexity and is suitable for application of the proposed tools. Using a measurement set of seven protein concentrations, accurate estimates for all unknowns are obtained. Furthermore, the effects of measurement sampling frequency and quality of information in the measurement set on the performance of the identified model are described.
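The Fisher-information check for identifiability can be sketched on a toy two-state network: build output sensitivities by finite differences and inspect the FIM eigenvalues, where small eigenvalues flag poorly identifiable parameter directions. The model and noise level below are placeholders.

```python
# Sketch of an FIM-based identifiability check for a toy two-state model
# (A -> B -> degradation) when only B is measured. Small FIM eigenvalues
# indicate poorly identifiable parameter directions.
import numpy as np
from scipy.integrate import odeint

t_meas = np.linspace(0, 10, 20)

def simulate(params):
    k1, k2 = params
    def rhs(x, t):
        a, b = x
        return [-k1 * a, k1 * a - k2 * b]
    return odeint(rhs, [1.0, 0.0], t_meas)[:, 1]        # only B is measured

def fisher_information(params, sigma=0.01, eps=1e-6):
    base = simulate(params)
    J = np.zeros((len(t_meas), len(params)))
    for i in range(len(params)):                        # finite-difference sensitivities
        p = np.array(params, dtype=float)
        p[i] += eps
        J[:, i] = (simulate(p) - base) / eps
    return J.T @ J / sigma**2

fim = fisher_information([0.5, 0.2])
print("FIM eigenvalues:", np.linalg.eigvalsh(fim))
```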
Mesospheric Water Vapor Retrieved from SABER/TIMED Measurements
NASA Technical Reports Server (NTRS)
Feofilov, Artem G.; Yankovsky, Valentine A.; Marshall, Benjamin T.; Russell, J. M., III; Pesnell, W. D.; Kutepov, Alexander A.; Goldberg, Richard A.; Gordley, Larry L.; Petelina, Svetlana; Manuilova, Rada O.
2007-01-01
The SABER instrument on board the TIMED satellite is a limb scanning infrared radiometer designed to measure temperature and minor constituent vertical profiles and energetics parameters in the mesosphere and lower thermosphere (MLT). The H2O concentrations are retrieved from 6.3 micron band radiances. The interpretation of this radiance requires developing a non-LTE H2O model that includes energy exchange processes with the system of O3 and O2 vibrational levels populated at daytime through a number of photoabsorption and photodissociation processes. We developed a research model based on an extended H2O non-LTE model of Manuilova coupled with the novel model of the electronic kinetics of the O2 and O3 photolysis products suggested by Yankovsky and Manuilova. The study of this model helped us to develop and test an optimized operational model for interpretation of SABER 6.3 micron band radiances. The sensitivity of the retrievals to the parameters of the model is discussed. The H2O retrievals are compared to other measurements for different seasons and locations.
NASA Technical Reports Server (NTRS)
Clayton, Joseph P.; Tinker, Michael L.
1991-01-01
This paper describes experimental and analytical characterization of a new flexible thermal protection material known as Tailorable Advanced Blanket Insulation (TABI). This material utilizes a three-dimensional ceramic fabric core structure and an insulation filler. TABI is the leading candidate for use in deployable aeroassisted vehicle designs. Such designs require extensive structural modeling, and the most significant in-plane material properties necessary for model development are measured and analytically verified in this study. Unique test methods are developed for damping measurements. Mathematical models are developed for verification of the experimental modulus and damping data, and finally, transverse properties are described in terms of the in-plane properties through use of a 12-dof finite difference model of a simple TABI configuration.
Modeling spray/puddle dissolution processes for deep-ultraviolet acid-hardened resists
NASA Astrophysics Data System (ADS)
Hutchinson, John M.; Das, Siddhartha; Qian, Qi-De; Gaw, Henry T.
1993-10-01
A study of the dissolution behavior of acid-hardened resists (AHR) was undertaken for spray and spray/puddle development processes. The Site Services DSM-100 end-point detection system is used to measure both spray and puddle dissolution data for a commercially available deep-ultraviolet AHR resist, Shipley SNR-248. The DSM allows in situ measurement of dissolution rate on the wafer chuck and hence allows parameter extraction for modeling spray and puddle processes. The dissolution data for spray and puddle processes were collected across a range of exposure dose and postexposure bake temperature. The development recipe was varied to decouple the contribution of the spray and puddle modes to the overall dissolution characteristics. The mechanisms involved in spray versus puddle dissolution and the impact of spray versus puddle dissolution on process performance metrics have been investigated. We used the effective-dose-modeling approach and the measurement capability of the DSM-100 and developed a lumped parameter model for acid-hardened resists that incorporates the effects of exposure, postexposure bake temperature and time, and development condition. The PARMEX photoresist-modeling program is used to determine parameters for the spray and for the puddle process. The lumped parameter AHR model developed showed good agreement with experimental data.
Specific model for the estimation of methane emission from municipal solid waste landfills in India.
Kumar, Sunil; Nimchuk, Nick; Kumar, Rakesh; Zietsman, Josias; Ramani, Tara; Spiegelman, Clifford; Kenney, Megan
2016-09-01
The landfill gas (LFG) model is a tool for measuring methane (CH4) generation rates and total CH4 emissions from a particular landfill. These models also have various applications including the sizing of the LFG collection system, evaluating the benefits of gas recovery projects, and measuring and controlling gaseous emissions. This research paper describes the development of a landfill model designed specifically for Indian climatic conditions and the landfill's waste characteristics. CH4, carbon dioxide (CO2), oxygen (O2) and temperature were considered as the prime factors for the development of this model. The developed model was validated for three landfill sites in India: Shillong, Kolkata, and Jaipur. The autocorrelation coefficient for the model was 0.915, while the R2 value was 0.429. Copyright © 2016 Elsevier Ltd. All rights reserved.
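The India-specific model itself is not reproduced here, but the sketch below shows the general first-order-decay (LandGEM-style) structure on which landfill gas models of this kind are commonly built; the decay rate k and methane generation potential L0 are assumed, illustrative values.

```python
import math

def methane_generation(annual_waste_mg, k=0.05, L0=100.0, years=30):
    """First-order-decay estimate of annual CH4 generation (m^3/yr) from a yearly
    waste acceptance history (Mg/yr), LandGEM-style.
    k  : decay rate, 1/yr (climate dependent; assumed here)
    L0 : methane generation potential, m^3 CH4 per Mg waste (waste dependent; assumed here)
    """
    q = []
    for t in range(1, years + 1):
        total = 0.0
        for placed_year, waste in enumerate(annual_waste_mg[:t], start=1):
            total += k * L0 * waste * math.exp(-k * (t - placed_year))
        q.append(total)
    return q

# Hypothetical site accepting 50,000 Mg/yr for 10 years, then closing.
profile = methane_generation([50000.0] * 10 + [0.0] * 20)
print(f"peak CH4 generation ~ {max(profile):,.0f} m^3/yr")
```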
Solar Sail Models and Test Measurements Correspondence for Validation Requirements Definition
NASA Technical Reports Server (NTRS)
Ewing, Anthony; Adams, Charles
2004-01-01
Solar sails are being developed as a mission-enabling technology in support of future NASA science missions. Current efforts have advanced solar sail technology sufficient to justify a flight validation program. A primary objective of this activity is to test and validate solar sail models that are currently under development so that they may be used with confidence in future science mission development (e.g., scalable to larger sails). Both system and model validation requirements must be defined early in the program to guide design cycles and to ensure that relevant and sufficient test data will be obtained to conduct model validation to the level required. A process of model identification, model input/output documentation, model sensitivity analyses, and test measurement correspondence is required so that decisions can be made to satisfy validation requirements within program constraints.
Advances in Projection Moire Interferometry Development for Large Wind Tunnel Applications
NASA Technical Reports Server (NTRS)
Fleming, Gary A.; Soto, Hector L.; South, Bruce W.; Bartram, Scott M.
1999-01-01
An instrument development program aimed at using Projection Moire Interferometry (PMI) for acquiring model deformation measurements in large wind tunnels was begun at NASA Langley Research Center in 1996. Various improvements to the initial prototype PMI systems have been made throughout this development effort. This paper documents several of the most significant improvements to the optical hardware and image processing software, and addresses system implementation issues for large wind tunnel applications. The improvements have increased both measurement accuracy and instrument efficiency, promoting the routine use of PMI for model deformation measurements in production wind tunnel tests.
System Dynamic Analysis of a Wind Tunnel Model with Applications to Improve Aerodynamic Data Quality
NASA Technical Reports Server (NTRS)
Buehrle, Ralph David
1997-01-01
The research investigates the effect of wind tunnel model system dynamics on measured aerodynamic data. During wind tunnel tests designed to obtain lift and drag data, the required aerodynamic measurements are the steady-state balance forces and moments, pressures, and model attitude. However, the wind tunnel model system can be subjected to unsteady aerodynamic and inertial loads which result in oscillatory translations and angular rotations. The steady-state force balance and inertial model attitude measurements are obtained by filtering and averaging data taken during conditions of high model vibrations. The main goals of this research are to characterize the effects of model system dynamics on the measured steady-state aerodynamic data and develop a correction technique to compensate for dynamically induced errors. Equations of motion are formulated for the dynamic response of the model system subjected to arbitrary aerodynamic and inertial inputs. The resulting modal model is examined to study the effects of the model system dynamic response on the aerodynamic data. In particular, the equations of motion are used to describe the effect of dynamics on the inertial model attitude, or angle of attack, measurement system that is used routinely at the NASA Langley Research Center and other wind tunnel facilities throughout the world. This activity was prompted by the inertial model attitude sensor response observed during high levels of model vibration while testing in the National Transonic Facility at the NASA Langley Research Center. The inertial attitude sensor cannot distinguish between the gravitational acceleration and centrifugal accelerations associated with wind tunnel model system vibration, which results in a model attitude measurement bias error. Bias errors over an order of magnitude greater than the required device accuracy were found in the inertial model attitude measurements during dynamic testing of two model systems. Based on a theoretical modal approach, a method using measured vibration amplitudes and measured or calculated modal characteristics of the model system is developed to correct for dynamic bias errors in the model attitude measurements. The correction method is verified through dynamic response tests on two model systems and actual wind tunnel test data.
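A minimal sketch of the vibration-induced attitude bias described above follows: an inertial (accelerometer-based) angle-of-attack sensor is simulated during sinusoidal pitch vibration, and the time-averaged indicated angle is compared with the true mean angle. The rectified centrifugal term produces the bias; the vibration amplitude, frequency, sensor offset, and axis assignment used here are illustrative, not values from the tests described.

```python
import numpy as np

g = 9.81
alpha0 = np.radians(2.0)      # true mean angle of attack (assumed)
amp = np.radians(0.5)         # pitch vibration amplitude (assumed)
freq = 20.0                   # vibration frequency, Hz (assumed)
r = 0.05                      # sensor offset from the pitch axis, m (assumed)

t = np.linspace(0.0, 1.0, 20000)
alpha = alpha0 + amp * np.sin(2 * np.pi * freq * t)
alpha_dot = amp * 2 * np.pi * freq * np.cos(2 * np.pi * freq * t)
alpha_ddot = -amp * (2 * np.pi * freq) ** 2 * np.sin(2 * np.pi * freq * t)

# Specific-force components sensed by the package: gravity plus the centrifugal and
# tangential accelerations of the vibrating model (the axis assignment is illustrative
# and depends on how and where the sensor is mounted).
a_axial = g * np.sin(alpha) + r * alpha_dot ** 2
a_normal = g * np.cos(alpha) + r * alpha_ddot

indicated = np.degrees(np.arctan2(np.mean(a_axial), np.mean(a_normal)))
print("true mean angle of attack : %.3f deg" % np.degrees(alpha0))
print("indicated (averaged) angle: %.3f deg" % indicated)   # differs by the dynamic bias error
```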
A practical measure of workplace resilience: developing the resilience at work scale.
Winwood, Peter C; Colon, Rochelle; McEwen, Kath
2013-10-01
To develop an effective measure of resilience at work for use in individual work-related performance and emotional distress contexts. Two separate cross-sectional studies investigated: (1) exploratory factor analysis of 45 items putatively underpinning workplace resilience among 397 participants and (2) confirmatory factor analysis of the resilience measure derived from Study 1, demonstrating a credible model of interaction with performance outcome variables among 194 participants. A 20-item scale explaining 67% of variance, measuring seven aspects of workplace resilience, which are teachable and capable of conscious development, was achieved. A credible model of relationships with work engagement, sleep, stress recovery, and physical health was demonstrated in the expected directions. The new scale shows considerable promise as a reliable instrument for use in the area of employee support and development.
Stochastic Models of Quality Control on Test Misgrading.
ERIC Educational Resources Information Center
Wang, Jianjun
Stochastic models are developed in this article to examine the rate of test misgrading in educational and psychological measurement. The estimation of inadvertent grading errors can serve as a basis for quality control in measurement. Limitations of traditional Poisson models have been reviewed to highlight the need to introduce new models using…
Academic Self-Concept: Modeling and Measuring for Science
ERIC Educational Resources Information Center
Hardy, Graham
2014-01-01
In this study, the author developed a model to describe academic self-concept (ASC) in science and validated an instrument for its measurement. Unlike previous models of science ASC, which envisage science as a homogenous single global construct, this model took a multidimensional view by conceiving science self-concept as possessing distinctive…
Comparison of measured and modeled BRDF of natural targets
NASA Astrophysics Data System (ADS)
Boucher, Yannick; Cosnefroy, Helene; Petit, Alain D.; Serrot, Gerard; Briottet, Xavier
1999-07-01
The Bidirectional Reflectance Distribution Function (BRDF) plays a major role in evaluating or simulating the signatures of natural and artificial targets in the solar spectrum. A goniometer covering a large spectral and directional domain has been recently developed by the ONERA/DOTA. It was designed to allow both laboratory and outside measurements. The spectral domain ranges from 0.40 to 0.95 micrometer, with a resolution of 3 nm. The geometrical domain ranges from 0 to 60 degrees for the zenith angle of the source and the sensor, and from 0 to 180 degrees for the relative azimuth between the source and the sensor. The maximum target size for nadir measurements is 22 cm. The spatial target irradiance non-uniformity has been evaluated and then used to correct the raw measurements. BRDF measurements are calibrated against a Spectralon reference panel. Some BRDF measurements performed on sand and short grass are presented here. Eight bidirectional models among the most popular models found in the literature have been tested on this measured data set. A code fitting the model parameters to the measured BRDF data has been developed. The comparative evaluation of the model performances is carried out against different criteria (root mean square error, root mean square relative error, correlation diagram, etc.). The robustness of the models is evaluated with respect to the number of BRDF measurements, noise and interpolation.
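The eight literature models are not reproduced here; the sketch below illustrates the general fitting procedure, in which the parameters of a simple, assumed BRDF form (a Lambertian term plus a scattering lobe) are adjusted by nonlinear least squares against measured reflectance samples and the root mean square error criterion is evaluated.

```python
import numpy as np
from scipy.optimize import least_squares

def brdf_model(params, theta_i, theta_r, phi):
    """Illustrative BRDF: Lambertian term plus a simple scattering lobe.
    Not one of the eight literature models evaluated in the paper."""
    rho_d, rho_s, width = params
    cos_scatter = (np.cos(theta_i) * np.cos(theta_r)
                   + np.sin(theta_i) * np.sin(theta_r) * np.cos(np.pi - phi))
    return rho_d / np.pi + rho_s * np.exp((cos_scatter - 1.0) / width)

def fit_brdf(theta_i, theta_r, phi, measured):
    res = least_squares(
        lambda p: brdf_model(p, theta_i, theta_r, phi) - measured,
        x0=[0.3, 0.1, 0.2], bounds=([0, 0, 1e-3], [1, 1, 2]))
    rmse = np.sqrt(np.mean(res.fun ** 2))
    return res.x, rmse

# Synthetic "measurements" on the goniometer's angular grid (0-60 deg zenith, 0-180 deg azimuth).
rng = np.random.default_rng(0)
ti, tr, ph = [np.radians(rng.uniform(0, a, 200)) for a in (60, 60, 180)]
truth = brdf_model([0.25, 0.15, 0.3], ti, tr, ph)
params, rmse = fit_brdf(ti, tr, ph, truth + rng.normal(0, 0.005, ti.size))
print("fitted parameters:", params, "RMSE:", rmse)
```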
Les Houches 2017: Physics at TeV Colliders Standard Model Working Group Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andersen, J.R.; et al.
This Report summarizes the proceedings of the 2017 Les Houches workshop on Physics at TeV Colliders. Session 1 dealt with (I) new developments relevant for high precision Standard Model calculations, (II) theoretical uncertainties and dataset dependence of parton distribution functions, (III) new developments in jet substructure techniques, (IV) issues in the theoretical description of the production of Standard Model Higgs bosons and how to relate experimental measurements, (V) phenomenological studies essential for comparing LHC data from Run II with theoretical predictions and projections for future measurements, and (VI) new developments in Monte Carlo event generators.
Modeling Surface Water Flow in the Atchafalaya Basin
NASA Astrophysics Data System (ADS)
Liu, K.; Simard, M.
2017-12-01
While most of the Mississippi River Delta is sinking due to insufficient sediment supply and subsidence, the stable wetlands and the prograding delta systems in the Atchafalaya Basin provide a unique opportunity to study the constructive interactions between riverine and marine forcings and their impacts upon coastal morphology. To better understand the hydrodynamics in this region, we developed a numerical modeling system for the water flow through the river channel - deltas - wetlands networks in the Atchafalaya Basin. Determining spatially varying model parameters for a large area composed of such diverse land cover types poses a challenge to developing an accurate numerical model. For example, the bottom friction coefficient can not be measured directly and the available elevation maps for the wetlands in the basin are inaccurate. To overcome these obstacles, we developed the modeling system in three steps. Firstly, we modeled river bathymetry based on in situ sonar transects and developed a simplified 1D model for the Wax Lake Outlet using HEC-RAS. Secondly, we used a Bayesian approach to calibrate the model automatically and infer important unknown parameters such as riverbank elevation and bottom friction coefficient through Markov Chain Monte Carlo (MCMC) simulations. We also estimated the wetland elevation based on the distribution of different vegetation species in the basin. Thirdly, with the lessons learnt from the 1D model, we developed a depth-averaged 2D model for the whole Atchafalaya Basin using Delft3D. After calibrations, the model successfully reproduced the water levels measured at five gauges in the Wax Lake Outlet and the modeled water surface profile along the channel agreed reasonably well with our LIDAR measurements. In addition, the model predicted a one-hour delay in tidal phase from the Wax Lake Delta to the upstream gauge. In summary, this project presents a procedure to initialize hydrology model parameters that integrates field measurements, existing knowledge and model sensitivities. The numerical model provides a powerful tool to understand the complex patterns of water flow and exchange in the rivers, tributaries, and wetlands of the Atchafalaya Basin.
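A minimal sketch of the Bayesian calibration step follows, assuming a toy wide-rectangular-channel stage relation in place of the HEC-RAS simulation and a single hypothetical gauge observation; a random-walk Metropolis sampler draws the posterior of the Manning friction coefficient.

```python
import numpy as np

def simulated_stage(n_manning, discharge=3000.0, width=400.0, slope=1e-5):
    """Toy wide-rectangular-channel stage from Manning's equation; a stand-in
    for the 1D hydraulic model used in the study."""
    return (n_manning * discharge / (width * np.sqrt(slope))) ** 0.6

observed = 4.1        # hypothetical gauge reading, m
sigma = 0.05          # assumed observation error, m

def log_post(n):
    if not 0.01 < n < 0.2:                      # weak prior bounds on Manning's n
        return -np.inf
    return -0.5 * ((simulated_stage(n) - observed) / sigma) ** 2

rng = np.random.default_rng(1)
chain, current = [], 0.03
lp = log_post(current)
for _ in range(20000):                          # random-walk Metropolis MCMC
    prop = current + rng.normal(0, 0.002)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        current, lp = prop, lp_prop
    chain.append(current)
post = np.array(chain[5000:])
print("posterior Manning's n: %.4f +/- %.4f" % (post.mean(), post.std()))
```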
2011-01-01
used in efforts to develop QSAR models. Measurement of Repellent Efficacy; Screening for Repellency of Compounds with Unknown Toxicology. In screening...CPT) were used to develop Quantitative Structure Activity Relationship (QSAR) models to predict repellency. Successful prediction of novel...acylpiperidine QSAR models employed 4 descriptors to describe the relationship between structure and repellent duration. The ANN model of the carboxamides did not
Developing a Measure of Traffic Calming Associated with Elementary School Students’ Active Transport
Nicholson, Lisa M.; Turner, Lindsey; Slater, Sandy J.; Abuzayd, Haytham; Chriqui, Jamie F.; Chaloupka, Frank
2014-01-01
The objective of this study is to develop a measure of traffic calming with nationally available GIS data from NAVTEQ and to validate the traffic calming index with the percentage of children reported by school administrators as walking or biking to school, using data from a nationally representative sample of elementary schools in 2006-2010. Specific models, with and without correlated errors, examined associations of objective GIS measures of the built environment, nationally available from NAVTEQ, with the latent construct of traffic calming. The best fit model for the latent traffic calming construct was determined to be a five factor model including objective measures of intersection density, count of medians/dividers, count of low mobility streets, count of roundabouts, and count of on-street parking availability, with no correlated errors among items. This construct also proved to be a good fit for the full measurement model when the outcome measure of percentage of students walking or biking to school was added to the model. The traffic calming measure was strongly, significantly, and positively correlated with the percentage of students reported as walking or biking to school. Applicability of results to public health and transportation policies and practices are discussed. PMID:25506255
Nicholson, Lisa M; Turner, Lindsey; Slater, Sandy J; Abuzayd, Haytham; Chriqui, Jamie F; Chaloupka, Frank
2014-12-01
The objective of this study is to develop a measure of traffic calming with nationally available GIS data from NAVTEQ and to validate the traffic calming index with the percentage of children reported by school administrators as walking or biking to school, using data from a nationally representative sample of elementary schools in 2006-2010. Specific models, with and without correlated errors, examined associations of objective GIS measures of the built environment, nationally available from NAVTEQ, with the latent construct of traffic calming. The best fit model for the latent traffic calming construct was determined to be a five factor model including objective measures of intersection density, count of medians/dividers, count of low mobility streets, count of roundabouts, and count of on-street parking availability, with no correlated errors among items. This construct also proved to be a good fit for the full measurement model when the outcome measure of percentage of students walking or biking to school was added to the model. The traffic calming measure was strongly, significantly, and positively correlated with the percentage of students reported as walking or biking to school. Applicability of results to public health and transportation policies and practices are discussed.
75 FR 75532 - Shipping Coordinating Committee; Notice of Committee Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-03
...; --Validation of model training courses; --Unlawful practices associated with certificates of competency... Recommendations for entering enclosed spaces aboard ships; --Development of model procedures for executing shipboard emergency measures; --Development of training standards for recovery systems; --Development of...
Millimeter Wave Radar Clutter Program
1989-10-30
conduct experimental measurements and develop theoretical models to improve our understanding of electromagnetic wave interaction with terrain at...various types of terrain under a variety of conditions. The experimental data serves to guide the development of the models as well as to verify their... experimental measurement. Task 4 - Examination of Bistatic Scattering from Surfaces and Volumes: Prior to this program, no millimeter-wave bistatic
ERIC Educational Resources Information Center
Stockdale, Susan L.; Brockett, Ralph G.
2011-01-01
The purpose of this study was to develop a reliable and valid instrument to measure self-directedness in learning among college students based on an operationalization of the personal responsibility orientation (PRO) model of self-direction in learning. The resultant 25-item Personal Responsibility Orientation to Self-Direction in Learning Scale…
Measuring Equity in Access to Pharmaceutical Services Using Concentration Curve; Model Development.
Davari, Majid; Khorasani, Elahe; Bakhshizade, Zahra; Jafarian Jazi, Marzie; Ghaffari Darab, Mohsen; Maracy, Mohammad Reza
2015-01-01
This paper has two objectives. First, it establishes a model for scoring access to pharmaceutical services. Second, it develops a model for measuring socioeconomic indicators independent of the time and place of study. These two measures are used for measuring equity in access to pharmaceutical services using a concentration curve. We prepared an open-ended questionnaire and distributed it to academic experts to elicit their ideas on access indicators and to assign a score to each indicator based on the pharmaceutical system. An extensive literature review was undertaken for the selection of indicators in order to determine the socioeconomic status (SES) of individuals. Experts' opinions were also considered for scoring these indicators. These indicators were weighted by the Stepwise Adoption of Weights and were used to develop a model for measuring SES independent of the time and place of study. Nine factors were introduced for assessing access to pharmaceutical services, based on pharmaceutical systems in middle-income countries. Five indicators were selected for determining the SES of individuals. A model for income classification based on the poverty line was established. Likewise, a model for scoring home status based on the national minimum wage was introduced. In summary, five important findings emerged from this study. These findings may assist researchers in measuring equity in access to pharmaceutical services and also could help them to apply a model for determining SES independent of the time and place of study. These also could provide a good opportunity for researchers to compare the results of various studies in a reasonable way, particularly in middle-income countries.
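A minimal sketch of the concentration-curve calculation follows, assuming hypothetical SES and access scores in place of the weighted indicators and nine access factors described above; the concentration index is computed as twice the area between the concentration curve and the line of equality.

```python
import numpy as np

def concentration_index(ses_score, access_score):
    """Concentration index of access to pharmaceutical services, with individuals
    ranked by socioeconomic status (SES). Zero indicates equality; positive values
    indicate access concentrated among the better-off."""
    order = np.argsort(ses_score)
    access = np.asarray(access_score, dtype=float)[order]
    cum_pop = np.arange(1, access.size + 1) / access.size   # cumulative population share (poorest first)
    cum_access = np.cumsum(access) / access.sum()           # cumulative access share
    # Twice the area between the diagonal and the concentration curve.
    return 2.0 * np.trapz(cum_pop - cum_access, cum_pop)

# Hypothetical survey: SES from the weighted indicators, access from the nine access factors.
rng = np.random.default_rng(2)
ses = rng.normal(size=500)
access = 5 + 2 * ses + rng.normal(0, 2, size=500)           # access rises with SES in this example
print("concentration index:", round(concentration_index(ses, access), 3))
```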
Marfeo, Elizabeth E; Haley, Stephen M; Jette, Alan M; Eisen, Susan V; Ni, Pengsheng; Bogusz, Kara; Meterko, Mark; McDonough, Christine M; Chan, Leighton; Brandt, Diane E; Rasch, Elizabeth K
2013-09-01
Physical and mental impairments represent the 2 largest health condition categories for which workers receive Social Security disability benefits. Comprehensive assessment of physical and mental impairments should include aspects beyond medical conditions such as a person's underlying capabilities as well as activity demands relevant to the context of work. The objective of this article is to describe the initial conceptual stages of developing new measurement instruments of behavioral health and physical functioning relevant for Social Security work disability evaluation purposes. To outline a clear conceptualization of the constructs to be measured, 2 content models were developed using structured and informal qualitative approaches. We performed a structured literature review focusing on work disability and incorporating aspects of the International Classification of Functioning, Disability and Health as a unifying taxonomy for framework development. Expert interviews provided advice and consultation to enhance face validity of the resulting content models. The content model for work-related behavioral health function identifies 5 major domains: (1) behavior control, (2) basic interactions, (3) temperament and personality, (4) adaptability, and (5) workplace behaviors. The content model describing physical functioning includes 3 domains: (1) changing and maintaining body position, (2) whole-body mobility, and (3) carrying, moving, and handling objects. These content models informed subsequent measurement properties including item development and measurement scale construction, and provided conceptual coherence guiding future empirical inquiry. The proposed measurement approaches show promise to comprehensively and systematically assess physical and behavioral health functioning relevant to work. Copyright © 2013 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
García-Ramos, F. Javier; Malón, Hugo; Aguirre, A. Javier; Boné, Antonio; Puyuelo, Javier; Vidal, Mariano
2015-01-01
A computational fluid dynamics (CFD) model of the air flow generated by an air-assisted sprayer equipped with two axial fans was developed and validated by practical experiments in the laboratory. The CFD model was developed by considering the total air flow supplied by the sprayer fan to be the main parameter, rather than the outlet air velocity. The model was developed for three air flows corresponding to three fan blade settings and assuming that the sprayer is stationary. Actual measurements of the air velocity near the sprayer were taken using 3D sonic anemometers. The sprayer workspace was divided into three sections, and the air velocity was measured in each section on both sides of the machine at a horizontal distance of 1.5, 2.5, and 3.5 m from the machine, and at heights of 1, 2, 3, and 4 m above the ground. The coefficient of determination (R2) between the simulated and measured values was 0.859, which demonstrates a good correlation between the simulated and measured data. Considering the overall data, the air velocity values produced by the CFD model were not significantly different from the measured values. PMID:25621611
García-Ramos, F Javier; Malón, Hugo; Aguirre, A Javier; Boné, Antonio; Puyuelo, Javier; Vidal, Mariano
2015-01-22
A computational fluid dynamics (CFD) model of the air flow generated by an air-assisted sprayer equipped with two axial fans was developed and validated by practical experiments in the laboratory. The CFD model was developed by considering the total air flow supplied by the sprayer fan to be the main parameter, rather than the outlet air velocity. The model was developed for three air flows corresponding to three fan blade settings and assuming that the sprayer is stationary. Actual measurements of the air velocity near the sprayer were taken using 3D sonic anemometers. The sprayer workspace was divided into three sections, and the air velocity was measured in each section on both sides of the machine at a horizontal distance of 1.5, 2.5, and 3.5 m from the machine, and at heights of 1, 2, 3, and 4 m above the ground. The coefficient of determination (R2) between the simulated and measured values was 0.859, which demonstrates a good correlation between the simulated and measured data. Considering the overall data, the air velocity values produced by the CFD model were not significantly different from the measured values.
Howe, Laura D; Tilling, Kate; Matijasevich, Alicia; Petherick, Emily S; Santos, Ana Cristina; Fairley, Lesley; Wright, John; Santos, Iná S; Barros, Aluísio Jd; Martin, Richard M; Kramer, Michael S; Bogdanovich, Natalia; Matush, Lidia; Barros, Henrique; Lawlor, Debbie A
2016-10-01
Childhood growth is of interest in medical research concerned with determinants and consequences of variation from healthy growth and development. Linear spline multilevel modelling is a useful approach for deriving individual summary measures of growth, which overcomes several data issues (co-linearity of repeat measures, the requirement for all individuals to be measured at the same ages and bias due to missing data). Here, we outline the application of this methodology to model individual trajectories of length/height and weight, drawing on examples from five cohorts from different generations and different geographical regions with varying levels of economic development. We describe the unique features of the data within each cohort that have implications for the application of linear spline multilevel models, for example, differences in the density and inter-individual variation in measurement occasions, and multiple sources of measurement with varying measurement error. After providing example Stata syntax and a suggested workflow for the implementation of linear spline multilevel models, we conclude with a discussion of the advantages and disadvantages of the linear spline approach compared with other growth modelling methods such as fractional polynomials, more complex spline functions and other non-linear models. © The Author(s) 2013.
Tilling, Kate; Matijasevich, Alicia; Petherick, Emily S; Santos, Ana Cristina; Fairley, Lesley; Wright, John; Santos, Iná S.; Barros, Aluísio JD; Martin, Richard M; Kramer, Michael S; Bogdanovich, Natalia; Matush, Lidia; Barros, Henrique; Lawlor, Debbie A
2013-01-01
Childhood growth is of interest in medical research concerned with determinants and consequences of variation from healthy growth and development. Linear spline multilevel modelling is a useful approach for deriving individual summary measures of growth, which overcomes several data issues (co-linearity of repeat measures, the requirement for all individuals to be measured at the same ages and bias due to missing data). Here, we outline the application of this methodology to model individual trajectories of length/height and weight, drawing on examples from five cohorts from different generations and different geographical regions with varying levels of economic development. We describe the unique features of the data within each cohort that have implications for the application of linear spline multilevel models, for example, differences in the density and inter-individual variation in measurement occasions, and multiple sources of measurement with varying measurement error. After providing example Stata syntax and a suggested workflow for the implementation of linear spline multilevel models, we conclude with a discussion of the advantages and disadvantages of the linear spline approach compared with other growth modelling methods such as fractional polynomials, more complex spline functions and other non-linear models. PMID:24108269
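The papers provide example Stata syntax; the sketch below is a rough Python analogue using statsmodels, assuming simulated length measurements, a single knot at 3 months, and a random intercept with a random slope on the first spline segment. It is meant only to show the structure of a linear spline multilevel (mixed) model, not the cohort-specific specifications.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated length-for-age data standing in for repeated cohort measurements.
rng = np.random.default_rng(3)
rows = []
for child in range(200):
    u0, u1 = rng.normal(0, [1.5, 0.3])                        # child-level random effects
    for age in rng.uniform(0, 24, size=rng.integers(4, 9)):   # irregular measurement ages (months)
        s1, s2 = min(age, 3.0), max(age - 3.0, 0.0)           # linear spline basis, knot at 3 months
        length = 50 + u0 + (3.5 + u1) * s1 + 1.0 * s2 + rng.normal(0, 0.5)
        rows.append(dict(child=child, s1=s1, s2=s2, length=length))
df = pd.DataFrame(rows)

# Random intercept plus random slope on the first spline segment.
model = smf.mixedlm("length ~ s1 + s2", df, groups="child", re_formula="~s1")
print(model.fit().summary())
```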
A Review of Surface Water Quality Models
Li, Shibei; Jia, Peng; Qi, Changjun; Ding, Feng
2013-01-01
Surface water quality models can be useful tools to simulate and predict the levels, distributions, and risks of chemical pollutants in a given water body. The modeling results from these models under different pollution scenarios are very important components of environmental impact assessment and can provide a basis and technical support for environmental management agencies to make sound decisions. The accuracy of the model results affects the soundness and scientific validity of approved construction projects and the effectiveness of pollution control measures. We reviewed the development of surface water quality models in three stages and analyzed the suitability, precision, and methods of different models. Standardization of water quality models can help environmental management agencies guarantee consistency in the application of water quality models for regulatory purposes. We summarize the status of standardization of these models in developed countries and put forward measures for the standardization of surface water quality models, especially in developing countries. PMID:23853533
A simple, analytical, axisymmetric microburst model for downdraft estimation
NASA Technical Reports Server (NTRS)
Vicroy, Dan D.
1991-01-01
A simple analytical microburst model was developed for use in estimating vertical winds from horizontal wind measurements. It is an axisymmetric, steady state model that uses shaping functions to satisfy the mass continuity equation and simulate boundary layer effects. The model is defined through four model variables: the radius and altitude of the maximum horizontal wind, a shaping function variable, and a scale factor. The model closely agrees with a high fidelity analytical model and measured data, particularly in the radial direction and at lower altitudes. At higher altitudes, the model tends to overestimate the wind magnitude relative to the measured data.
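A minimal sketch of how such a model recovers the downdraft from horizontal winds follows: the radial outflow is written as separable shaping functions and the vertical wind is obtained from the incompressible axisymmetric continuity equation. The shaping functions and the parameter values used here are illustrative, not those of the published model.

```python
import numpy as np

U_m, r_m, z_m = 25.0, 1000.0, 200.0   # peak radial wind (m/s), its radius and altitude (assumed)

def u_radial(r, z):
    """Separable radial outflow u(r,z) = U * f(r) * g(z) with illustrative shaping
    functions (not the ones used in the paper)."""
    f = (r / r_m) * np.exp(0.5 * (1.0 - (r / r_m) ** 2))
    g = np.exp(1.0 - z / z_m) * (z / z_m)                 # crude boundary-layer shape, peaks near z_m
    return U_m * f * g

def w_vertical(r, z, nz=400):
    """Vertical wind from incompressible axisymmetric continuity:
    w(r,z) = -(1/r) * d/dr [ integral_0^z r * u(r,z') dz' ], via central differences."""
    zs = np.linspace(0.0, z, nz)
    dr = 1.0
    flux_plus = np.trapz((r + dr) * u_radial(r + dr, zs), zs)
    flux_minus = np.trapz((r - dr) * u_radial(r - dr, zs), zs)
    return -(flux_plus - flux_minus) / (2.0 * dr * r)

print("estimated downdraft at r = 200 m, z = 300 m: %.1f m/s" % w_vertical(200.0, 300.0))
```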
Measuring situational awareness and resolving inherent high-level fusion obstacles
NASA Astrophysics Data System (ADS)
Sudit, Moises; Stotz, Adam; Holender, Michael; Tagliaferri, William; Canarelli, Kathie
2006-04-01
Information Fusion Engine for Real-time Decision Making (INFERD) is a tool that was developed to supplement current graph matching techniques in Information Fusion models. Based on sensory data and a priori models, INFERD dynamically generates, evolves, and evaluates hypotheses on the current state of the environment. The a priori models developed are hierarchical in nature, lending them to a multi-level Information Fusion process whose primary output provides a situational awareness of the environment of interest in the context of the models running. In this paper we look at INFERD's multi-level fusion approach and provide insight into the inherent problems such as fragmentation in the approach and the research being undertaken to mitigate those deficiencies. Due to the large variance of data in disparate environments, the awareness of situations in those environments can be drastically different. To accommodate this, the INFERD framework provides support for plug-and-play fusion modules which can be developed specifically for domains of interest. However, because the models running in INFERD are graph based, some default measurements can be provided and will be discussed in the paper. Among these are a Depth measurement to determine how much danger is presented by the action taking place, a Breadth measurement to gain information regarding the scale of an attack that is currently happening, and finally a Reliability measure to tell the user the credibility of a particular hypothesis. All of these results will be demonstrated in the Cyber domain, which recent research has shown to be an area that is well-defined and bounded, so that new models and algorithms can be developed and evaluated.
Measurement and assessment of carrying capacity of the environment in Ningbo, China.
Liu, R Z; Borthwick, Alistair G L
2011-08-01
Carrying Capacity of the Environment (CCE) provides a useful measure of the sustainable development of a region. Approaches that use integrated assessment instead of measurement can lead to misinterpretation of sustainable development because of confusion between Environmental Stress (ES) indexes and CCE indexes, and the selection of over-simple linear plus models. The present paper proposes a comprehensive measurement system for CCE which comprises models of natural resources capacity, environmental assimilative capacity, ecosystem services capacity, and society supporting capacity. The corresponding measurable indexes are designed to assess CCE using a carrying capacity surplus ratio model and a vector of surplus ratio of carrying capacity model. The former aims at direct comparison of ES and CCE based on the values of basic indexes, and the latter uses a Euclidean vector to assess CCE states. The measurement and assessment approaches are applicable to Strategic Environmental Assessment (SEA) and environmental planning and management. A case study is presented for Ningbo, China, whereby all the basic indexes of ECC are measured and the CCE states assessed for 2005 and 2010. Copyright © 2011 Elsevier Ltd. All rights reserved.
Aerodynamic force measurement on a large-scale model in a short duration test facility
NASA Astrophysics Data System (ADS)
Tanno, H.; Kodera, M.; Komuro, T.; Sato, K.; Takahasi, M.; Itoh, K.
2005-03-01
A force measurement technique has been developed for large-scale aerodynamic models with a short test time. The technique is based on direct acceleration measurements, with miniature accelerometers mounted on a test model suspended by wires. By measuring acceleration at two different locations, the technique can eliminate oscillations from natural vibration of the model. The technique was used for drag force measurements on a 3 m long supersonic combustor model in the HIEST free-piston driven shock tunnel. A time resolution of 350 μs is achieved during measurements, which is sufficient for the millisecond-order test times in HIEST. To evaluate measurement reliability and accuracy, measured values were compared with results from a three-dimensional Navier-Stokes numerical simulation. The difference between measured values and numerical simulation values was less than 5%. We conclude that this measurement technique is sufficiently reliable for measuring aerodynamic force within test durations of 1 ms.
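A minimal sketch of the two-accelerometer idea follows: the two signals are combined with weights chosen so that the contribution of the dominant elastic (suspension) mode cancels, leaving the rigid-body acceleration from which the drag force follows as mass times acceleration. The mode-shape values, model mass, and vibration content are assumptions.

```python
import numpy as np

m_model = 350.0                      # model mass, kg (assumed)
phi1, phi2 = 1.0, -0.6               # first-mode shape values at the two accelerometer stations (assumed)

t = np.linspace(0.0, 1.0e-3, 2000)              # ~1 ms test window
a_rigid = 4.0 * np.ones_like(t)                 # rigid-body acceleration due to the drag force
mode = 15.0 * np.sin(2 * np.pi * 900.0 * t)     # elastic / suspension vibration contribution

a1 = a_rigid + phi1 * mode                      # accelerometer 1 signal
a2 = a_rigid + phi2 * mode                      # accelerometer 2 signal

# Weighted combination that cancels the elastic mode: w1*phi1 + w2*phi2 = 0, w1 + w2 = 1.
w1 = -phi2 / (phi1 - phi2)
w2 = phi1 / (phi1 - phi2)
a_est = w1 * a1 + w2 * a2
print("estimated drag force: %.1f N" % (m_model * a_est.mean()))
```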
McNamara, Robert L; Wang, Yongfei; Partovian, Chohreh; Montague, Julia; Mody, Purav; Eddy, Elizabeth; Krumholz, Harlan M; Bernheim, Susannah M
2015-09-01
Electronic health records (EHRs) offer the opportunity to transform quality improvement by using clinical data for comparing hospital performance without the burden of chart abstraction. However, current performance measures using EHRs are lacking. With support from the Centers for Medicare & Medicaid Services (CMS), we developed an outcome measure of hospital risk-standardized 30-day mortality rates for patients with acute myocardial infarction for use with EHR data. As no appropriate source of EHR data are currently available, we merged clinical registry data from the Action Registry-Get With The Guidelines with claims data from CMS to develop the risk model (2009 data for development, 2010 data for validation). We selected candidate variables that could be feasibly extracted from current EHRs and do not require changes to standard clinical practice or data collection. We used logistic regression with stepwise selection and bootstrapping simulation for model development. The final risk model included 5 variables available on presentation: age, heart rate, systolic blood pressure, troponin ratio, and creatinine level. The area under the receiver operating characteristic curve was 0.78. Hospital risk-standardized mortality rates ranged from 9.6% to 13.1%, with a median of 10.7%. The odds of mortality for a high-mortality hospital (+1 SD) were 1.37 times those for a low-mortality hospital (-1 SD). This measure represents the first outcome measure endorsed by the National Quality Forum for public reporting of hospital quality based on clinical data in the EHR. By being compatible with current clinical practice and existing EHR systems, this measure is a model for future quality improvement measures.
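A rough sketch of the modelling step follows, assuming simulated patient data for the five presentation variables (with arbitrary coefficients) and a crude indirect standardization of hospital rates; the actual measure uses stepwise selection, bootstrapping, and hierarchical modelling with hospital random effects to compute risk-standardized rates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated patient-level data with the five presentation variables in the final model.
rng = np.random.default_rng(4)
n = 5000
df = pd.DataFrame({
    "age": rng.normal(68, 12, n),
    "heart_rate": rng.normal(85, 18, n),
    "sbp": rng.normal(135, 25, n),
    "troponin_ratio": rng.lognormal(1.0, 1.0, n),
    "creatinine": rng.lognormal(0.1, 0.4, n),
    "hospital": rng.integers(0, 20, n),
})
logit = (-6.0 + 0.06 * df.age + 0.02 * df.heart_rate - 0.02 * df.sbp
         + 0.05 * np.log(df.troponin_ratio + 1) + 0.5 * df.creatinine)   # assumed coefficients
df["died_30d"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

# Patient-level risk model (the measure itself used stepwise selection and bootstrapping).
fit = smf.logit("died_30d ~ age + heart_rate + sbp + np.log(troponin_ratio + 1) + creatinine",
                data=df).fit(disp=0)
df["expected"] = fit.predict(df)

# Crude indirectly standardized 30-day mortality per hospital (the CMS measure instead
# uses a hierarchical model with hospital random effects).
overall = df.died_30d.mean()
rsmr = df.groupby("hospital").apply(lambda g: g.died_30d.mean() / g.expected.mean() * overall)
print(rsmr.describe())
```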
Analytical prediction of digital signal crosstalk of FCC
NASA Technical Reports Server (NTRS)
Belleisle, A. P.
1972-01-01
The results are presented of a study effort whose aim was the development of accurate means of analyzing and predicting signal cross-talk in multi-wire digital data cables. A complete analytical model is developed for n + 1 wire systems of uniform transmission lines with arbitrary linear boundary conditions. In addition, a minimum set of parameter measurements required for the application of the model is presented. Comparisons between cross-talk predicted by this model and actual measured cross-talk are shown for a six conductor ribbon cable.
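A compact weak-coupling estimate of near-end and far-end crosstalk for a single pair of coupled wires is sketched below, using textbook coupled transmission-line approximations with assumed per-unit-length parameters; the report itself develops the general n + 1 wire model with arbitrary linear terminations.

```python
# Weak-coupling crosstalk estimate for two wires over a ground return, matched terminations.
# All per-unit-length parameters are assumed illustrative values.
l_self = 0.75e-6     # self inductance, H/m
c_self = 40e-12      # self capacitance, F/m
l_m = 0.10e-6        # mutual inductance, H/m
c_m = 3e-12          # mutual capacitance, F/m
length = 2.0         # coupled length, m
rise_time = 2e-9     # driver rise time, s

z0 = (l_self / c_self) ** 0.5
v = 1.0 / (l_self * c_self) ** 0.5                   # propagation velocity
t_d = length / v                                     # one-way delay

# Near-end and far-end crosstalk fractions from the usual weak-coupling approximations.
k_next = 0.25 * (l_m / l_self + c_m / c_self)
k_fext = 0.5 * (t_d / rise_time) * (c_m / c_self - l_m / l_self)

print(f"Z0 = {z0:.0f} ohm, one-way delay = {t_d * 1e9:.1f} ns")
print(f"near-end crosstalk ~ {100 * k_next:.1f}% of the driving step")
print(f"far-end crosstalk  ~ {100 * abs(k_fext):.1f}% of the driving step")
```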
NASA Technical Reports Server (NTRS)
Stankovic, Ana V.
2003-01-01
Professor Stankovic will be developing and refining Simulink based models of the PM alternator and comparing the simulation results with experimental measurements taken from the unit. Her first task is to validate the models using the experimental data. Her next task is to develop alternative control techniques for the application of the Brayton Cycle PM Alternator in a nuclear electric propulsion vehicle. The control techniques will be first simulated using the validated models then tried experimentally with hardware available at NASA. Testing and simulation of a 2KW PM synchronous generator with diode bridge output is described. The parameters of a synchronous PM generator have been measured and used in simulation. Test procedures have been developed to verify the PM generator model with diode bridge output. Experimental and simulation results are in excellent agreement.
The Longitudinal Guttman Simplex: Applications to Health Behavior Data.
ERIC Educational Resources Information Center
Collins, Linda M.; Dent, Clyde W.
Because health behavior is often concerned with dynamic constructs, a longitudinal approach to measurement is needed. The Longitudinal Guttman Simplex (LGS) is a measurement model developed especially for dynamic constructs exhibiting cumulative, unitary development measured longitudinally. Data from the Television Smoking Prevention Project, a…
Measurement of a model of implementation for health care: toward a testable theory
2012-01-01
Background: Greenhalgh et al. used a considerable evidence-base to develop a comprehensive model of implementation of innovations in healthcare organizations [1]. However, these authors did not fully operationalize their model, making it difficult to test formally. The present paper represents a first step in operationalizing Greenhalgh et al.'s model by providing background, rationale, working definitions, and measurement of key constructs. Methods: A systematic review of the literature was conducted for key words representing 53 separate sub-constructs from six of the model's broad constructs. Using an iterative process, we reviewed existing measures and utilized or adapted items. Where no one measure was deemed appropriate, we developed other items to measure the constructs through consensus. Results: The review and iterative process of team consensus identified three types of data that can be used to operationalize the constructs in the model: survey items, interview questions, and administrative data. Specific examples of each of these are reported. Conclusion: Despite limitations, the mixed-methods approach to measurement using the survey, interview measure, and administrative data can facilitate research on implementation by providing investigators with a measurement tool that captures most of the constructs identified by the Greenhalgh model. These measures are currently being used to collect data concerning the implementation of two evidence-based psychotherapies disseminated nationally within the Department of Veterans Affairs. Testing of psychometric properties and subsequent refinement should enhance the utility of the measures. PMID:22759451
ERIC Educational Resources Information Center
Ismail, Yilmaz
2016-01-01
This study aims to develop a semiotic declarative knowledge model, which is a positive constructive behavior model that systematically facilitates understanding in order to ensure that learners think accurately and ask the right questions about a topic. The data used to develop the experimental model were obtained using four measurement tools…
Mootanah, R.; Imhauser, C.W.; Reisse, F.; Carpanen, D.; Walker, R.W.; Koff, M.F.; Lenhoff, M.W.; Rozbruch, S.R.; Fragomen, A.T.; Dewan, Z.; Kirane, Y.M.; Cheah, Pamela A.; Dowell, J.K.; Hillstrom, H.J.
2014-01-01
A three-dimensional (3D) knee joint computational model was developed and validated to predict knee joint contact forces and pressures for different degrees of malalignment. A 3D computational knee model was created from high-resolution radiological images to emulate passive sagittal rotation (full-extension to 65°-flexion) and weight acceptance. A cadaveric knee mounted on a six-degree-of-freedom robot was subjected to matching boundary and loading conditions. A ligament-tuning process minimised kinematic differences between the robotically loaded cadaver specimen and the finite element (FE) model. The model was validated by measured intra-articular force and pressure measurements. Percent full scale error between FE-predicted and in vitro-measured values in the medial and lateral compartments were 6.67% and 5.94%, respectively, for normalised peak pressure values, and 7.56% and 4.48%, respectively, for normalised force values. The knee model can accurately predict normalised intra-articular pressure and forces for different loading conditions and could be further developed for subject-specific surgical planning. PMID:24786914
Mootanah, R; Imhauser, C W; Reisse, F; Carpanen, D; Walker, R W; Koff, M F; Lenhoff, M W; Rozbruch, S R; Fragomen, A T; Dewan, Z; Kirane, Y M; Cheah, K; Dowell, J K; Hillstrom, H J
2014-01-01
A three-dimensional (3D) knee joint computational model was developed and validated to predict knee joint contact forces and pressures for different degrees of malalignment. A 3D computational knee model was created from high-resolution radiological images to emulate passive sagittal rotation (full-extension to 65°-flexion) and weight acceptance. A cadaveric knee mounted on a six-degree-of-freedom robot was subjected to matching boundary and loading conditions. A ligament-tuning process minimised kinematic differences between the robotically loaded cadaver specimen and the finite element (FE) model. The model was validated by measured intra-articular force and pressure measurements. Percent full scale error between FE-predicted and in vitro-measured values in the medial and lateral compartments were 6.67% and 5.94%, respectively, for normalised peak pressure values, and 7.56% and 4.48%, respectively, for normalised force values. The knee model can accurately predict normalised intra-articular pressure and forces for different loading conditions and could be further developed for subject-specific surgical planning.
Can dust emission mechanisms be determined from field measurements?
NASA Astrophysics Data System (ADS)
Klose, Martina; Webb, Nicholas; Gill, Thomas E.; Van Pelt, Scott; Okin, Gregory
2017-04-01
Field observations are needed to develop and test theories on dust emission for use in dust modeling systems. The dust emission mechanism (aerodynamic entrainment, saltation bombardment, aggregate disintegration) as well as the amount and particle-size distribution of emitted dust may vary under sediment supply- and transport-limited conditions. This variability, which is caused by heterogeneity of the surface and the atmosphere, cannot be fully captured in either field measurements or models. However, uncertainty in dust emission modeling can be reduced through more detailed observational data on the dust emission mechanism itself. To date, most measurements do not provide enough information to allow for a determination of the mechanisms leading to dust emission and often focus on a small variety of soil and atmospheric settings. Additionally, data sets are often not directly comparable due to different measurement setups. As a consequence, the calibration of dust emission schemes has so far relied on a selective set of observations, which leads to an idealization of the emission process in models and thus affects dust budget estimates. Here, we will present results of a study which aims to decipher the dust emission mechanism from field measurements as an input for future model development. Detailed field measurements are conducted, which allow for a comparison of dust emission for different surface and atmospheric conditions. Measurements include monitoring of the surface, loose erodible material, transported sediment, and meteorological data, and are conducted in different environmental settings in the southwestern United States. Based on the field measurements, a method is developed to differentiate between the different dust emission mechanisms.
Models for Total-Dose Radiation Effects in Non-Volatile Memory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, Philip Montgomery; Wix, Steven D.
The objective of this work is to develop models to predict radiation effects in non-volatile memory: flash memory and ferroelectric RAM. In flash memory, experiments have found that the internal high-voltage generators (charge pumps) are the most sensitive to radiation damage. Models are presented for radiation effects in charge pumps that demonstrate the experimental results. Floating gate models are developed for the memory cell in two types of flash memory devices by Intel and Samsung. These models utilize Fowler-Nordheim tunneling and hot electron injection to charge and erase the floating gate. Erase times are calculated from the models and compared with experimental results for different radiation doses. FRAM is less sensitive to radiation than flash memory, but measurements show that above 100 krad FRAM suffers from a large increase in leakage current. A model for this effect is developed which compares closely with the measurements.
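A minimal sketch of a floating-gate erase calculation of this kind follows, assuming illustrative Fowler-Nordheim constants, oxide thickness, and gate capacitance rather than the device-calibrated parameters of the Intel and Samsung models; the oxide field is allowed to droop as charge is removed, which stretches the erase time.

```python
import numpy as np

# Fowler-Nordheim parameters and cell geometry (assumed illustrative values).
A_FN = 1.0e-6        # A/V^2
B_FN = 2.5e8         # V/cm
t_ox = 1.0e-6        # tunnel oxide thickness, cm (10 nm)
area = 1.0e-10       # tunnel area, cm^2
c_fg = 1.0e-15       # floating-gate capacitance, F

def fn_current(v_ox):
    """Fowler-Nordheim current through the tunnel oxide for oxide voltage v_ox."""
    e_field = v_ox / t_ox                      # oxide field, V/cm
    return area * A_FN * e_field ** 2 * np.exp(-B_FN / e_field)

def erase_time(v_applied=12.0, dq_target=2.0e-15, dt=1e-6, t_max=0.1):
    """Integrate the charge removed from the floating gate until dq_target is reached;
    the oxide voltage droops as gate charge changes (simple capacitive coupling)."""
    q_removed, t = 0.0, 0.0
    while q_removed < dq_target and t < t_max:
        v_ox = v_applied - q_removed / c_fg    # field falls as charge is removed
        q_removed += fn_current(v_ox) * dt
        t += dt
    return t

print("estimated erase time: %.2f ms" % (1e3 * erase_time()))
```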
NASA Astrophysics Data System (ADS)
Stein, George Juraj; Múčka, Peter; Hinz, Barbara; Blüthner, Ralph
2009-04-01
Laboratory tests were conducted using 13 male subjects seated on a cushioned commercial vehicle driver's seat. The hands gripped a mock-up steering wheel and the subjects were in contact with the lumbar region of the backrest. The accelerations and forces in the y-direction were measured during random lateral whole-body vibration with a frequency range between 0.25 and 30 Hz, vibration magnitudes 0.30, 0.98, and 1.92 m s -2 (unweighted root mean square (rms)). Based on these laboratory measurements, a linear multi-degree-of-freedom (mdof) model of the seated human body and cushioned seat in the lateral direction ( y-axis) was developed. Model parameters were identified from averaged measured apparent mass values (modulus and phase) for the three excitation magnitudes mentioned. A preferred model structure was selected from four 3-dof models analysed. The mean subject parameters were identified. In addition, identification of each subject's apparent mass model parameters was performed. The results are compared with previous studies. The developed model structure and the identified parameters can be used for further biodynamical research in seating dynamics.
A hierarchical linear model for tree height prediction.
Vicente J. Monleon
2003-01-01
Measuring tree height is a time-consuming process. Often, tree diameter is measured and height is estimated from a published regression model. Trees used to develop these models are clustered into stands, but this structure is ignored and independence is assumed. In this study, hierarchical linear models that account explicitly for the clustered structure of the data...
Modeling Signal-Noise Processes Supports Student Construction of a Hierarchical Image of Sample
ERIC Educational Resources Information Center
Lehrer, Richard
2017-01-01
Grade 6 (modal age 11) students invented and revised models of the variability generated as each measured the perimeter of a table in their classroom. To construct models, students represented variability as a linear composite of true measure (signal) and multiple sources of random error. Students revised models by developing sampling…
A Latent Transition Analysis Model for Assessing Change in Cognitive Skills
ERIC Educational Resources Information Center
Li, Feiming; Cohen, Allan; Bottge, Brian; Templin, Jonathan
2016-01-01
Latent transition analysis (LTA) was initially developed to provide a means of measuring change in dynamic latent variables. In this article, we illustrate the use of a cognitive diagnostic model, the DINA model, as the measurement model in a LTA, thereby demonstrating a means of analyzing change in cognitive skills over time. An example is…
Yu, Ping; Qian, Siyu
2018-01-01
Electronic health records (EHR) are introduced into healthcare organizations worldwide to improve patient safety, healthcare quality and efficiency. A rigorous evaluation of this technology is important to reduce potential negative effects on patient and staff, to provide decision makers with accurate information for system improvement and to ensure return on investment. Therefore, this study develops a theoretical model and questionnaire survey instrument to assess the success of organizational EHR in routine use from the viewpoint of nursing staff in residential aged care homes. The proposed research model incorporates six variables in the reformulated DeLone and McLean information systems success model: system quality, information quality, service quality, use, user satisfaction and net benefits. Two variables training and self-efficacy were also incorporated into the model. A questionnaire survey instrument was designed to measure the eight variables in the model. After a pilot test, the measurement scale was used to collect data from 243 nursing staff members in 10 residential aged care homes belonging to three management groups in Australia. Partial least squares path modeling was conducted to validate the model. The validated EHR systems success model predicts the impact of the four antecedent variables—training, self-efficacy, system quality and information quality—on the net benefits, the indicator of EHR systems success, through the intermittent variables use and user satisfaction. A 24-item measurement scale was developed to quantitatively evaluate the performance of an EHR system. The parsimonious EHR systems success model and the measurement scale can be used to benchmark EHR systems success across organizations and units and over time. PMID:29315323
Yu, Ping; Qian, Siyu
2018-01-01
Electronic health records (EHR) are introduced into healthcare organizations worldwide to improve patient safety, healthcare quality and efficiency. A rigorous evaluation of this technology is important to reduce potential negative effects on patient and staff, to provide decision makers with accurate information for system improvement and to ensure return on investment. Therefore, this study develops a theoretical model and questionnaire survey instrument to assess the success of organizational EHR in routine use from the viewpoint of nursing staff in residential aged care homes. The proposed research model incorporates six variables in the reformulated DeLone and McLean information systems success model: system quality, information quality, service quality, use, user satisfaction and net benefits. Two variables training and self-efficacy were also incorporated into the model. A questionnaire survey instrument was designed to measure the eight variables in the model. After a pilot test, the measurement scale was used to collect data from 243 nursing staff members in 10 residential aged care homes belonging to three management groups in Australia. Partial least squares path modeling was conducted to validate the model. The validated EHR systems success model predicts the impact of the four antecedent variables-training, self-efficacy, system quality and information quality-on the net benefits, the indicator of EHR systems success, through the intermittent variables use and user satisfaction. A 24-item measurement scale was developed to quantitatively evaluate the performance of an EHR system. The parsimonious EHR systems success model and the measurement scale can be used to benchmark EHR systems success across organizations and units and over time.
One vs. Two Breast Density Measures to Predict 5- and 10- Year Breast Cancer Risk
Kerlikowske, Karla; Gard, Charlotte C.; Sprague, Brian L.; Tice, Jeffrey A.; Miglioretti, Diana L.
2015-01-01
Background: One measure of Breast Imaging Reporting and Data System (BI-RADS) breast density improves 5-year breast cancer risk prediction, but the value of sequential measures is unknown. We determined if two BI-RADS density measures improve the predictive accuracy of the Breast Cancer Surveillance Consortium 5-year risk model compared to one measure. Methods: We included 722,654 women aged 35–74 years with two mammograms with BI-RADS density measures on average 1.8 years apart; 13,715 developed invasive breast cancer. We used Cox regression to estimate the relative hazards of breast cancer for age, race/ethnicity, family history of breast cancer, history of breast biopsy, and one or two density measures. We developed a risk prediction model by combining these estimates with 2000–2010 Surveillance, Epidemiology, and End Results incidence and 2010 vital statistics for competing risk of death. Results: The two-measure density model had marginally greater discriminatory accuracy than the one-measure model (AUC=0.640 vs. 0.635). Of 18.6% of women (134,404/722,654) who decreased density categories, 15.4% (20,741/134,404) of women whose density decreased from heterogeneously or extremely dense to a lower density category with one other risk factor had a clinically meaningful increase in 5-year risk from <1.67% with the one-density model to ≥1.67% with the two-density model. Conclusion: The two-density model has similar overall discrimination to the one-density model for predicting 5-year breast cancer risk and improves risk classification for women with risk factors and a decrease in density. Impact: A two-density model should be considered for women whose density decreases when calculating breast cancer risk. PMID:25824444
One versus Two Breast Density Measures to Predict 5- and 10-Year Breast Cancer Risk.
Kerlikowske, Karla; Gard, Charlotte C; Sprague, Brian L; Tice, Jeffrey A; Miglioretti, Diana L
2015-06-01
One measure of Breast Imaging Reporting and Data System (BI-RADS) breast density improves 5-year breast cancer risk prediction, but the value of sequential measures is unknown. We determined whether two BI-RADS density measures improve the predictive accuracy of the Breast Cancer Surveillance Consortium 5-year risk model compared with one measure. We included 722,654 women of ages 35 to 74 years with two mammograms with BI-RADS density measures on average 1.8 years apart; 13,715 developed invasive breast cancer. We used Cox regression to estimate the relative hazards of breast cancer for age, race/ethnicity, family history of breast cancer, history of breast biopsy, and one or two density measures. We developed a risk prediction model by combining these estimates with 2000-2010 Surveillance, Epidemiology, and End Results incidence and 2010 vital statistics for competing risk of death. The two-measure density model had marginally greater discriminatory accuracy than the one-measure model (AUC, 0.640 vs. 0.635). Of 18.6% of women (134,404 of 722,654) who decreased density categories, 15.4% (20,741 of 134,404) of women whose density decreased from heterogeneously or extremely dense to a lower density category with one other risk factor had a clinically meaningful increase in 5-year risk from <1.67% with the one-density model to ≥1.67% with the two-density model. The two-density model has similar overall discrimination to the one-density model for predicting 5-year breast cancer risk and improves risk classification for women with risk factors and a decrease in density. A two-density model should be considered for women whose density decreases when calculating breast cancer risk. ©2015 American Association for Cancer Research.
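As an illustration of the kind of proportional-hazards comparison described above, the following is a minimal Python sketch (using the lifelines package) comparing a one-density and a two-density Cox model on synthetic data. The column names and data are assumptions for illustration, not the Breast Cancer Surveillance Consortium variables or the authors' code.

    # Hedged sketch: one-density vs. two-density Cox models on synthetic data.
    # Column names (age, density_1, density_2, time, event) are illustrative assumptions.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    n = 5000
    df = pd.DataFrame({
        "age": rng.uniform(35, 74, n),
        "density_1": rng.integers(1, 5, n),   # BI-RADS a-d coded 1-4 at first mammogram
        "density_2": rng.integers(1, 5, n),   # density at second mammogram
    })
    risk = 0.03 * df["age"] + 0.4 * df["density_2"]
    df["time"] = np.minimum(rng.exponential(1.0 / np.exp(risk - risk.mean()), n), 5.0)
    df["event"] = (df["time"] < 5.0).astype(int)   # events observed within 5 years

    one = CoxPHFitter().fit(df[["age", "density_1", "time", "event"]],
                            duration_col="time", event_col="event")
    two = CoxPHFitter().fit(df[["age", "density_1", "density_2", "time", "event"]],
                            duration_col="time", event_col="event")
    print("c-index, one measure:", one.concordance_index_)
    print("c-index, two measures:", two.concordance_index_)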
A Simulation Model of the Planetary Boundary Layer at Kennedy Space Center
NASA Technical Reports Server (NTRS)
Hwang, B.
1978-01-01
A simulation model which predicts the behavior of the Atmospheric Boundary Layer has been developed and coded. The model is partially evaluated by comparing it with laboratory measurements and the sounding measurements at Kennedy Space Center. The applicability of such an approach should prove quite widespread.
A space radiation shielding model of the Martian radiation environment experiment (MARIE)
NASA Technical Reports Server (NTRS)
Atwell, W.; Saganti, P.; Cucinotta, F. A.; Zeitlin, C. J.
2004-01-01
The 2001 Mars Odyssey spacecraft was launched towards Mars on April 7, 2001. Onboard the spacecraft is the Martian radiation environment experiment (MARIE), which is designed to measure the background radiation environment due to galactic cosmic rays (GCR) and solar protons in the 20-500 MeV/n energy range. We present an approach for developing a space radiation-shielding model of the spacecraft that includes the MARIE instrument in the current mapping phase orientation. A discussion is presented describing the development and methodology used to construct the shielding model. For a given GCR model environment, using the current MARIE shielding model and the high-energy particle transport codes, dose rate values are compared with MARIE measurements during the early mapping phase in Mars orbit. The results show good agreement between the model calculations and the MARIE measurements as presented for the March 2002 dataset. © 2003 COSPAR. Published by Elsevier Ltd. All rights reserved.
Zhou, Xiangrong; Xu, Rui; Hara, Takeshi; Hirano, Yasushi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Hoshi, Hiroaki; Kido, Shoji; Fujita, Hiroshi
2014-07-01
The shapes of the inner organs are important information for medical image analysis. Statistical shape modeling provides a way of quantifying and measuring shape variations of the inner organs in different patients. In this study, we developed a universal scheme that can be used for building the statistical shape models for different inner organs efficiently. This scheme combines the traditional point distribution modeling with a group-wise optimization method based on a measure called minimum description length to provide a practical means for 3D organ shape modeling. In experiments, the proposed scheme was applied to the building of five statistical shape models for hearts, livers, spleens, and right and left kidneys by use of 50 cases of 3D torso CT images. The performance of these models was evaluated by three measures: model compactness, model generalization, and model specificity. The experimental results showed that the constructed shape models have good "compactness" and satisfactory "generalization" performance for different organ shape representations; however, the "specificity" of these models should be improved in the future.
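A minimal sketch of the point-distribution-model step described above: principal component analysis over corresponding, aligned landmark sets, with the cumulative-variance "compactness" curve used as one of the evaluation measures. The MDL-based group-wise correspondence optimization from the paper is not reproduced here, and the data are random placeholders assumed to already correspond across subjects.

    # Hedged sketch of a point distribution model via PCA over aligned 3D landmarks.
    import numpy as np

    def build_shape_model(shapes):
        """shapes: array (n_subjects, n_landmarks, 3) of corresponding, aligned landmarks."""
        n, m, _ = shapes.shape
        X = shapes.reshape(n, m * 3)                 # one row per subject
        mean_shape = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
        variances = s**2 / (n - 1)                   # eigenvalues of the covariance matrix
        compactness = np.cumsum(variances) / variances.sum()   # "compactness" curve
        return mean_shape, Vt, variances, compactness

    # Example with random data standing in for 50 segmented organ surfaces:
    shapes = np.random.default_rng(1).normal(size=(50, 200, 3))
    mean_shape, modes, variances, compactness = build_shape_model(shapes)
    print("variance captured by first 5 modes:", compactness[4])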
Christopher, Micaela E.; Hulslander, Jacqueline; Byrne, Brian; Samuelsson, Stefan; Keenan, Janice M.; Pennington, Bruce; DeFries, John C.; Wadsworth, Sally J.; Willcutt, Erik; Olson, Richard K.
2012-01-01
We explored the etiology of individual differences in reading development from post-kindergarten to post-4th grade by analyzing data from 487 twin pairs tested in Colorado. Data from three reading measures and one spelling measure were fit to biometric latent growth curve models, allowing us to extend previous behavioral genetic studies of the etiology of early reading development at specific time points. We found primarily genetic influences on individual differences at post-1st grade for all measures. Genetic influences on variance in growth rates were also found, with evidence of small, nonsignificant, shared environmental influences for two measures. We discuss our results, including their implications for educational policy. PMID:24489459
Tackenberg, Oliver
2007-01-01
Background and Aims Biomass is an important trait in functional ecology and growth analysis. The typical methods for measuring biomass are destructive. Thus, they do not allow the development of individual plants to be followed and they require many individuals to be cultivated for repeated measurements. Non-destructive methods do not have these limitations. Here, a non-destructive method based on digital image analysis is presented, addressing not only above-ground fresh biomass (FBM) and oven-dried biomass (DBM), but also vertical biomass distribution as well as dry matter content (DMC) and growth rates. Methods Scaled digital images of the plants' silhouettes were taken for 582 individuals of 27 grass species (Poaceae). Above-ground biomass and DMC were measured using destructive methods. With the image analysis software Zeiss KS 300, the projected area and the proportion of greenish pixels were calculated, and generalized linear models (GLMs) were developed with destructively measured parameters as dependent variables and parameters derived from image analysis as independent variables. A bootstrap analysis was performed to assess the number of individuals required for re-calibration of the models. Key Results The results of the developed models showed no systematic errors compared with traditionally measured values and explained most of their variance (R2 ≥ 0.85 for all models). The presented models can be directly applied to herbaceous grasses without further calibration. Applying the models to other growth forms might require a re-calibration which can be based on only 10–20 individuals for FBM or DMC and on 40–50 individuals for DBM. Conclusions The methods presented are time and cost effective compared with traditional methods, especially if development or growth rates are to be measured repeatedly. Hence, they offer an alternative way of determining biomass, especially as they are non-destructive and address not only FBM and DBM, but also vertical biomass distribution and DMC. PMID:17353204
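A minimal sketch of the calibration step described above: fitting a generalized linear model that predicts fresh biomass from image-derived predictors (projected silhouette area and the fraction of greenish pixels). The column names, data, and coefficients are illustrative assumptions, not the study's measurements.

    # Hedged sketch of calibrating a GLM for fresh biomass from image features.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 120
    df = pd.DataFrame({
        "projected_area_cm2": rng.uniform(5, 400, n),
        "green_fraction": rng.uniform(0.3, 1.0, n),
    })
    # Synthetic "destructively measured" fresh biomass in grams.
    df["fbm_g"] = 0.08 * df["projected_area_cm2"] * df["green_fraction"] + rng.normal(0, 1.0, n)

    X = sm.add_constant(df[["projected_area_cm2", "green_fraction"]])
    glm = sm.GLM(df["fbm_g"], X, family=sm.families.Gaussian()).fit()
    print(glm.summary())
    print("pseudo-R2:", 1 - glm.deviance / glm.null_deviance)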
Benefit finding in response to general life stress: measurement and correlates
Cassidy, Tony; McLaughlin, Marian; Giles, Melanie
2014-01-01
Benefit finding, herein defined as “the process of deriving positive growth from adversity”, has become a key construct in the evolution of positive psychology, and research suggests that it may provide the basis for a resource model of stress and coping. However, measures of benefit finding have tended to be domain specific. The current study focused on developing a more generic multidimensional measure of benefit finding. A measure of benefit finding was developed and tested in 855 students (574 females and 281 males) aged between 18 and 40 years. A 28-item scale with six dimensions was produced and Confirmatory Factor Analysis (CFA) confirmed the scale structure. The model proposed that psychological and social resources would mediate the relationship between experienced stressors and benefit finding. Structural equation modelling with Analysis of Moment Structures (AMOS) showed that the model is a good fit for the data and that psychological and social resources partially mediated the relationship. It is argued that psychological and social resources enable benefit finding in relation to life stress and provide a focus for the development of preventive interventions to improve positive health. PMID:25750781
Development of an Electronic Portfolio System Success Model: An Information Systems Approach
ERIC Educational Resources Information Center
Balaban, Igor; Mu, Enrique; Divjak, Blazenka
2013-01-01
This research has two main goals: to develop an instrument for assessing Electronic Portfolio (ePortfolio) success and to build a corresponding ePortfolio success model using DeLone and McLean's information systems success model as the theoretical framework. For this purpose, we developed an ePortfolio success measurement instrument and structural…
NASA Astrophysics Data System (ADS)
Monfared, Shabnam; Buttler, William; Schauer, Martin; Lalone, Brandon; Pack, Cora; Stevens, Gerald; Stone, Joseph; Special Technologies Laboratory Collaboration; Los Alamos National Laboratory Team
2014-03-01
Los Alamos National Laboratory is actively engaged in the study of material failure physics to support the development of hydrodynamic models, where an important failure mechanism of explosively shocked metals causes mass ejection from the backside of a shocked surface with surface perturbations. Ejecta models are in development for this situation. Our past work has clearly shown that the total ejected mass and mass-velocity distribution are sensitively linked to the wavelength and amplitude of these perturbations. While we have had success developing ejecta mass and mass-velocity models, we need to better understand the size and size-velocity distributions of the ejected mass. To support size measurements we have developed a dynamic Mie scattering diagnostic based on a CW laser that permits measurement of the forward attenuation cross-section combined with a dynamic mass-density and mass-velocity distribution, as well as a measurement of the forward scattering cross-section at 12 angles (5–32.5 degrees) in increments of 2.5 degrees. We compare the size distribution that follows from Beer's law with the attenuation cross-section and mass measurement to the dynamic size distribution determined from the scattering cross-section alone. We report results from our first quality experiments.
Development of a Corrosion Sensor for an Aircraft Vehicle Health Monitoring System
NASA Astrophysics Data System (ADS)
Scott, D. A.; Price, D. C.; Edwards, G. C.; Batten, A. B.; Kolmeder, J.; Muster, T. H.; Corrigan, P.; Cole, I. S.
2010-02-01
A Rayleigh-wave-based sensor has been developed to measure corrosion damage in aircraft. This sensor forms an important part of a corrosion monitoring system being developed for a major aircraft manufacturer. This system measures the corrosion rate at the location of its sensors, and through a model predicts the corrosion rates in nearby places on an aircraft into which no sensors can be placed. In order to calibrate this model, which yields corrosion rates rather than the accumulated effect, an absolute measure of the damage is required. In this paper the development of a surface wave sensor capable of measuring accumulated damage will be described in detail. This sensor allows the system to measure material loss due to corrosion regardless of the possible loss of historical corrosion rate data, and can provide, at any stage, a benchmark for the predictive model that would allow a good estimate of the accumulated corrosion damage in similar locations on an aircraft. This system may obviate the need for costly inspection of difficult-to-access places in aircraft, where presently the only way to check for corrosion is by periodic dismantling and reassembly.
Propagation-Loss Measurements and Modelling for Topographically Smooth and Rough Seabeds
1989-06-01
Defence Research and Development Branch, Defence Research Establishment Atlantic, National Defence, Canada (report cover-page text; no abstract available in this record).
Performance Modeling of an Airborne Raman Water Vapor Lidar
NASA Technical Reports Server (NTRS)
Whiteman, D. N.; Schwemmer, G.; Berkoff, T.; Plotkin, H.; Ramos-Izquierdo, L.; Pappalardo, G.
2000-01-01
A sophisticated Raman lidar numerical model has been developed. The model has been used to simulate the performance of two ground-based Raman water vapor lidar systems. After tuning the model using these ground-based measurements, the model is used to simulate the water vapor measurement capability of an airborne Raman lidar under both day- and night-time conditions for a wide range of water vapor conditions. The results indicate that, under many circumstances, the daytime measurements possess comparable resolution to an existing airborne differential absorption water vapor lidar, while the nighttime measurements have higher resolution. In addition, a Raman lidar is capable of measurements not possible using a differential absorption system.
A model of urban rational growth based on grey prediction
NASA Astrophysics Data System (ADS)
Xiao, Wenjing
2017-04-01
Smart growth focuses on building sustainable cities, using compact development to prevent urban sprawl. This paper establishes a series of models to implement smart growth theories into city design. In addition, two specific city design cases are shown. First, we establish a Smart Growth Measure Model to measure the success of smart growth of a city, and we use the Full Permutation Polygon Synthetic Indicator Method to calculate the Comprehensive Indicator (CI), which is used to measure the success of smart growth. Second, this paper uses the principle of smart growth to develop a new growth plan for two cities. We establish an optimization model to maximize the CI value. The Particle Swarm Optimization (PSO) algorithm is used to solve the model. Combining the calculation results with the specific circumstances of the cities, we develop a smart growth plan for each.
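A minimal sketch of the optimization step named above: a basic particle swarm optimizer maximizing a placeholder composite indicator. The actual CI from the Full Permutation Polygon Synthetic Indicator Method and the city-specific planning constraints are not reproduced; the objective below is an illustrative stand-in.

    # Hedged sketch: particle swarm optimization of a placeholder composite indicator CI(x).
    import numpy as np

    def ci(x):
        # Placeholder smooth objective standing in for the composite indicator.
        return -np.sum((x - 0.6) ** 2, axis=-1)

    def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        rng = np.random.default_rng(seed)
        pos = rng.uniform(0.0, 1.0, (n_particles, dim))   # candidate indicator settings in [0, 1]
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), objective(pos)
        gbest = pbest[np.argmax(pbest_val)].copy()
        for _ in range(iters):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 0.0, 1.0)
            val = objective(pos)
            better = val > pbest_val
            pbest[better], pbest_val[better] = pos[better], val[better]
            gbest = pbest[np.argmax(pbest_val)].copy()
        return gbest, objective(gbest[None, :])[0]

    best_x, best_ci = pso(ci, dim=5)
    print("best indicator settings:", best_x, "CI:", best_ci)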
Development of an EUV Test Facility at the Marshall Space Flight Center
2011-08-22
Zemax model developed from beam size measurements that locates and determines the size of a copper gasket mounted to our pneumatic gate valve at the...the observed spectra. Therefore, a Zemax model of the source, transmission grating and the Andor camera had to be developed. Two models were developed...see Figures 16, 17 and 18). The Zemax model including the NIST transmission data is in good agreement with the observed spectrum shown in Figure 18
FMCSA safety program effectiveness measurement: intervention model fiscal year 2009.
DOT National Transportation Integrated Search
2013-04-01
The Federal Motor Carrier Safety Administration (FMCSA), in cooperation with the researcher, has developed an analytic model to measure the effectiveness of roadside inspections and traffic enforcements in terms of crashes avoided, injuries avoided, ...
Spiral model pilot project information model
NASA Technical Reports Server (NTRS)
1991-01-01
The objective was an evaluation of the Spiral Model (SM) development approach to allow NASA Marshall to develop an experience base of that software management methodology. A discussion is presented of the Information Model (IM) that was used as part of the SM methodology. A key concept of the SM is the establishment of an IM to be used by management to track the progress of a project. The IM is the set of metrics that is to be measured and reported throughout the life of the project. These metrics measure both the product and the process to ensure the quality of the final delivery item and to ensure the project met programmatic guidelines. The beauty of the SM, along with the IM, is the ability to measure not only the correctness of the specification and implementation of the requirements but to also obtain a measure of customer satisfaction.
Development and Validation of a Polarimetric-MCScene 3D Atmospheric Radiation Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berk, Alexander; Hawes, Frederick; Fox, Marsha
2016-03-15
Polarimetric measurements can substantially enhance the ability of both spectrally resolved and single band imagery to detect the proliferation of weapons of mass destruction, providing data for locating and identifying facilities, materials, and processes of undeclared and proliferant nuclear weapons programs worldwide. Unfortunately, models do not exist that efficiently and accurately predict spectral polarized signatures for the materials of interest embedded in complex 3D environments. Having such a model would enable one to test hypotheses and optimize both the enhancement of scene contrast and the signal processing for spectral signature extraction. The Phase I set the groundwork for development of fully validated polarimetric spectral signature and scene simulation models. This has been accomplished 1. by (a) identifying and downloading state-of-the-art surface and atmospheric polarimetric data sources, (b) implementing tools for generating custom polarimetric data, and (c) identifying and requesting US Government funded field measurement data for use in validation; 2. by formulating an approach for upgrading the radiometric spectral signature model MODTRAN to generate polarimetric intensities through (a) ingestion of the polarimetric data, (b) polarimetric vectorization of existing MODTRAN modules, and (c) integration of a newly developed algorithm for computing polarimetric multiple scattering contributions; 3. by generating an initial polarimetric model that demonstrates calculation of polarimetric solar and lunar single scatter intensities arising from the interaction of incoming irradiances with molecules and aerosols; 4. by developing a design and implementation plan to (a) automate polarimetric scene construction and (b) efficiently sample polarimetric scattering and reflection events, for use in a to be developed polarimetric version of the existing first-principles synthetic scene simulation model, MCScene; and 5. by planning a validation field measurement program in collaboration with the Remote Sensing and Exploitation group at Sandia National Laboratories (SNL) in which data from their ongoing polarimetric field and laboratory measurement program will be shared and, to the extent allowed, tailored for model validation in exchange for model predictions under conditions and for geometries outside of their measurement domain.
Effects of vibration on inertial wind-tunnel model attitude measurement devices
NASA Technical Reports Server (NTRS)
Young, Clarence P., Jr.; Buehrle, Ralph D.; Balakrishna, S.; Kilgore, W. Allen
1994-01-01
Results of an experimental study of a wind tunnel model inertial angle-of-attack sensor response to a simulated dynamic environment are presented. The inertial device cannot distinguish between the gravity vector and the centrifugal accelerations associated with wind tunnel model vibration; this situation results in a model attitude measurement bias error. Significant bias error in model attitude measurement was found for the model system tested. The model attitude bias error was found to be vibration mode and amplitude dependent. A first order correction model was developed and used for estimating attitude measurement bias error due to dynamic motion. A method for correcting the output of the model attitude inertial sensor in the presence of model dynamics during on-line wind tunnel operation is proposed.
Perriman, Noelyn; Davis, Deborah
2016-06-01
The objective of this systematic integrative review is to identify, summarise and communicate the findings of research relating to tools that measure maternal satisfaction with continuity of maternity care models. In so doing, the most appropriate, reliable and valid tool that can be used to measure maternal satisfaction with continuity of maternity care will be determined. A systematic integrative review of published and unpublished literature was undertaken using selected databases. Research papers were included if they measured maternal satisfaction in a continuity model of maternity care, were published in English after 1999 and if they included (or made available) the instrument used to measure satisfaction. Six hundred and thirty-two unique papers were identified and, after applying the selection criteria, four papers were included in the review. Three of these originated in Australia and one in Canada. The primary focus of all papers was not on the development of a tool to measure maternal satisfaction but on the comparison of outcomes in different models of care. The instruments developed varied in terms of the degree to which they were tested for validity and reliability. Women's satisfaction with maternity services is an important measure of quality. Most satisfaction surveys in maternity appear to reflect fragmented models of care though continuity of care models are increasing in line with the evidence demonstrating their effectiveness. It is important that robust tools are developed for this context and that there is some consistency in the way this is measured and reported for the purposes of benchmarking and quality improvement. Copyright © 2016 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.
Development of a traffic noise prediction model for an urban environment.
Sharma, Asheesh; Bodhe, G L; Schimak, G
2014-01-01
The objective of this study is to develop a traffic noise model under diverse traffic conditions in metropolitan cities. The model has been developed to calculate equivalent traffic noise based on four input variables: equivalent traffic flow (Qe), equivalent vehicle speed (Se), distance (d), and honking (h). The traffic data were collected and statistically analyzed in three different cases for 15-min periods during morning and evening rush hours. Case I represents congested traffic where equivalent vehicle speed is <30 km/h, case II represents free-flowing traffic where equivalent vehicle speed is >30 km/h, and case III represents calm traffic where no honking is recorded. The noise model showed better results than an earlier noise model developed for Indian traffic conditions. A comparative assessment between the present and the earlier noise model is also presented in the study. The model is validated with measured noise levels, and the correlation coefficients between measured and predicted noise levels were found to be 0.75, 0.83 and 0.86 for cases I, II and III, respectively. The noise model performs reasonably well under different traffic conditions and could be implemented for traffic noise prediction in other regions as well.
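A minimal sketch of fitting a noise model of the kind described above by ordinary least squares, with the equivalent noise level regressed on the four stated inputs. The functional form, data, and coefficients are illustrative assumptions, not the published model.

    # Hedged sketch: OLS fit of Leq against equivalent flow, speed, distance, and honking.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 90                                   # e.g., 15-min observation windows
    Qe = rng.uniform(200, 3000, n)           # equivalent vehicles per hour
    Se = rng.uniform(10, 60, n)              # km/h
    d = rng.uniform(5, 30, n)                # metres from the carriageway
    h = rng.integers(0, 40, n)               # honks per window
    # Synthetic "measured" equivalent noise level in dB(A).
    Leq = 55 + 10 * np.log10(Qe) - 8 * np.log10(d) + 0.05 * Se + 0.1 * h + rng.normal(0, 1.5, n)

    X = np.column_stack([np.ones(n), np.log10(Qe), Se, np.log10(d), h])
    coef, *_ = np.linalg.lstsq(X, Leq, rcond=None)
    pred = X @ coef
    r = np.corrcoef(Leq, pred)[0, 1]
    print("coefficients:", np.round(coef, 3))
    print("correlation between measured and predicted Leq:", round(r, 3))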
MOBILE EMISSIONS ASSESSMENT SYSTEM FOR URBAN AND REGIONAL EVALUATION
A working research model for Atlanta, GA has been developed by Georgia Tech, and is called the Mobile Emissions Assessment System for Urban and Regional Evaluation (MEASURE). The EPA Office of Research and Development has developed an additional implementation of the MEASURE res...
ERIC Educational Resources Information Center
Maulana, Ridwan; Helms-Lorenz, Michelle; van de Grift, Wim
2015-01-01
The present study examines the development of a measure tapping students' perceptions of (pre-service) teachers' teaching behaviour to explore the practical value of such a measure in teacher education and teacher professional development programs. From a sample of 1,635 students of 91 pre-service teachers teaching in secondary education in The…
Models for estimation and simulation of crown and canopy cover
John D. Shaw
2005-01-01
Crown width measurements collected during Forest Inventory and Analysis and Forest Health Monitoring surveys are being used to develop individual tree crown width models and plot-level canopy cover models for species and forest types in the Intermountain West. Several model applications are considered in the development process, including remote sensing of plot...
A Model for Measuring Effectiveness of an Online Course
ERIC Educational Resources Information Center
Mashaw, Bijan
2012-01-01
As a result of this research, a quantitative model and a procedure have been developed to create an online mentoring effectiveness index (EI). To develop the model, mentoring and teaching effectiveness are defined, and then the constructs and factors of effectiveness are identified. The model's construction is based on the theory that…
Boyd, Windy A.; Smith, Marjolein V.; Kissling, Grace E.; Rice, Julie R.; Snyder, Daniel W.; Portier, Christopher J.; Freedman, Jonathan H.
2009-01-01
Background The nematode Caenorhabditis elegans is being assessed as an alternative model organism as part of an interagency effort to develop better means to test potentially toxic substances. As part of this effort, assays that use the COPAS Biosort flow sorting technology to record optical measurements (time of flight (TOF) and extinction (EXT)) of individual nematodes under various chemical exposure conditions are being developed. A mathematical model has been created that uses Biosort data to quantitatively and qualitatively describe C. elegans growth, and link changes in growth rates to biological events. Chlorpyrifos, an organophosphate pesticide known to cause developmental delays and malformations in mammals, was used as a model toxicant to test the applicability of the growth model for in vivo toxicological testing. Methodology/Principal Findings L1 larval nematodes were exposed to a range of sub-lethal chlorpyrifos concentrations (0–75 µM) and measured every 12 h. In the absence of toxicant, C. elegans matured from L1s to gravid adults by 60 h. A mathematical model was used to estimate nematode size distributions at various times. Mathematical modeling of the distributions allowed the number of measured nematodes and log(EXT) and log(TOF) growth rates to be estimated. The model revealed three distinct growth phases. The points at which estimated growth rates changed (change points) were constant across the ten chlorpyrifos concentrations. Concentration response curves with respect to several model-estimated quantities (numbers of measured nematodes, mean log(TOF) and log(EXT), growth rates, and time to reach change points) showed a significant decrease in C. elegans growth with increasing chlorpyrifos concentration. Conclusions Effects of chlorpyrifos on C. elegans growth and development were mathematically modeled. Statistical tests confirmed a significant concentration effect on several model endpoints. This confirmed that chlorpyrifos affects C. elegans development in a concentration dependent manner. The most noticeable effect on growth occurred during early larval stages: L2 and L3. This study supports the utility of the C. elegans growth assay and mathematical modeling in determining the effects of potentially toxic substances in an alternative model organism using high-throughput technologies. PMID:19753116
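A minimal sketch of fitting a three-phase (two change-point) growth curve to size-versus-time data, loosely mirroring the distinct growth phases and change points described above. The authors' distribution-based Biosort model is not reproduced; the data, time grid, and parameter values are synthetic.

    # Hedged sketch: piecewise-linear three-phase growth fit with two change points.
    import numpy as np
    from scipy.optimize import curve_fit

    def three_phase(t, y0, r1, r2, r3, t1, t2):
        # Continuous piecewise-linear log-size vs. time with change points t1 < t2.
        return np.where(t < t1, y0 + r1 * t,
               np.where(t < t2, y0 + r1 * t1 + r2 * (t - t1),
                        y0 + r1 * t1 + r2 * (t2 - t1) + r3 * (t - t2)))

    t = np.linspace(0, 72, 25)                                    # hours
    true = three_phase(t, 2.0, 0.02, 0.06, 0.01, 24.0, 48.0)
    y = true + np.random.default_rng(4).normal(0, 0.02, t.size)   # synthetic mean log(EXT)

    p0 = [2.0, 0.03, 0.03, 0.03, 20.0, 50.0]
    popt, _ = curve_fit(three_phase, t, y, p0=p0)
    print("estimated growth rates:", np.round(popt[1:4], 3))
    print("estimated change points (h):", np.round(popt[4:], 1))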
Development of Multi-Layered Floating Floor for Cabin Noise Reduction
NASA Astrophysics Data System (ADS)
Song, Jee-Hun; Hong, Suk-Yoon; Kwon, Hyun-Wung
2017-12-01
Recently, regulations pertaining to the noise and vibration environment of ship cabins have been strengthened. In this paper, a numerical model is developed for a multi-layered floating floor to predict the structure-borne noise in ship cabins. The theoretical model consists of multi-panel structures lined with high-density mineral wool. The predicted results for structure-borne noise when the multi-layered floating floor is used are compared to measurements made on a mock-up. A comparison of the predicted and experimental results shows that the developed model could be an effective tool for predicting structure-borne noise in ship cabins.
Stone, Mandy L.; Graham, Jennifer L.; Gatotho, Jackline W.
2013-01-01
Cheney Reservoir, located in south-central Kansas, is one of the primary water supplies for the city of Wichita, Kansas. The U.S. Geological Survey has operated a continuous real-time water-quality monitoring station in Cheney Reservoir since 2001; continuously measured physicochemical properties include specific conductance, pH, water temperature, dissolved oxygen, turbidity, fluorescence (wavelength range 650 to 700 nanometers; estimate of total chlorophyll), and reservoir elevation. Discrete water-quality samples were collected during 2001 through 2009 and analyzed for sediment, nutrients, taste-and-odor compounds, cyanotoxins, phytoplankton community composition, actinomycetes bacteria, and other water-quality measures. Regression models were developed to establish relations between discretely sampled constituent concentrations and continuously measured physicochemical properties to compute concentrations of constituents that are not easily measured in real time. The water-quality information in this report is important to the city of Wichita because it allows quantification and characterization of potential constituents of concern in Cheney Reservoir. This report updates linear regression models published in 2006 that were based on data collected during 2001 through 2003. The update uses discrete and continuous data collected during May 2001 through December 2009. Updated models to compute dissolved solids, sodium, chloride, and suspended solids were similar to previously published models. However, several other updated models changed substantially from previously published models. In addition to updating relations that were previously developed, models also were developed for four new constituents, including magnesium, dissolved phosphorus, actinomycetes bacteria, and the cyanotoxin microcystin. In addition, a conversion factor of 0.74 was established to convert the Yellow Springs Instruments (YSI) model 6026 turbidity sensor measurements to the newer YSI model 6136 sensor at the Cheney Reservoir site. Because a high percentage of geosmin and microcystin data were below analytical detection thresholds (censored data), multiple logistic regression was used to develop models that best explained the probability of geosmin and microcystin concentrations exceeding relevant thresholds. The geosmin and microcystin models are particularly important because geosmin is a taste-and-odor compound and microcystin is a cyanotoxin.
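A minimal sketch of the logistic-regression step described above: estimating the probability that a constituent (e.g., geosmin) exceeds a relevant threshold from continuously measured physicochemical properties. The predictors, threshold, and data are illustrative assumptions, not the Cheney Reservoir models.

    # Hedged sketch: logistic regression for the probability of threshold exceedance.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n = 300
    df = pd.DataFrame({
        "chlorophyll_fluor": rng.uniform(1, 60, n),   # relative fluorescence units
        "water_temp_c": rng.uniform(4, 30, n),
        "turbidity_fnu": rng.uniform(1, 80, n),
    })
    # Synthetic exceedance outcomes driven mostly by fluorescence and temperature.
    logit = -6 + 0.06 * df["chlorophyll_fluor"] + 0.12 * df["water_temp_c"]
    df["exceeds"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    X = sm.add_constant(df[["chlorophyll_fluor", "water_temp_c", "turbidity_fnu"]])
    fit = sm.Logit(df["exceeds"], X).fit(disp=0)
    print(fit.params)
    print("P(exceed) at 40 RFU, 25 C, 10 FNU:", float(fit.predict([[1, 40, 25, 10]])[0]))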
What’s in a game? A systems approach to enhancing performance analysis in football
2017-01-01
Purpose Performance analysis (PA) in football is considered to be an integral component of understanding the requirements for optimal performance. Despite vast amounts of research in this area, key gaps remain, including what comprises PA in football, and methods to minimise research-practitioner gaps. The aim of this study was to develop a model of the football match system in order to better describe and understand the components of football performance. Such a model could inform the design of new PA methods. Method Eight elite-level football Subject Matter Experts (SMEs) participated in two workshops to develop a systems model of the football match system. The model was developed using a first-of-its-kind application of Cognitive Work Analysis (CWA) in football. CWA has been used in many other non-sporting domains to analyse and understand complex systems. Result Using CWA, a model of the football match ‘system’ was developed. The model enabled identification of several PA measures not currently utilised, including communication between team members, adaptability of teams, playing at the appropriate tempo, as well as attacking and defending related measures. Conclusion The results indicate that football is characteristic of a complex sociotechnical system, and revealed potential new and unique PA measures regarded as important by SMEs, yet not currently measured. Importantly, these results have identified a gap between the current PA research and the information that is meaningful to football coaches and practitioners. PMID:28212392
Neurocognitive predictors of financial capacity in traumatic brain injury.
Martin, Roy C; Triebel, Kristen; Dreer, Laura E; Novack, Thomas A; Turner, Crystal; Marson, Daniel C
2012-01-01
To develop cognitive models of financial capacity (FC) in patients with traumatic brain injury (TBI). Longitudinal design. Inpatient brain injury rehabilitation unit. Twenty healthy controls and 24 adults with moderate-to-severe TBI were assessed at baseline (30 days postinjury) and 6 months postinjury. Measures included the FC instrument (FCI) and a neuropsychological test battery. Univariate correlation and multiple regression procedures were employed to develop cognitive models of FCI performance in the TBI group at baseline and 6-month follow-up. Three cognitive predictor models of FC were developed. At baseline, measures of mental arithmetic/working memory and immediate verbal memory predicted baseline FCI performance (R = 0.72). At 6-month follow-up, measures of executive function and mental arithmetic/working memory predicted 6-month FCI performance (R = 0.79), and a third model found that these 2 measures at baseline predicted 6-month FCI performance (R = 0.71). Multiple cognitive functions are associated with initial impairment and partial recovery of FC in moderate-to-severe TBI patients. In particular, arithmetic, working memory, and executive function skills appear critical to recovery of FC in TBI. The study results represent an initial step toward developing a neurocognitive model of FC in patients with TBI.
NASA Astrophysics Data System (ADS)
Warrington, E. M.; Stocker, A. J.; Siddle, D. R.; Hallam, J.; Al-Behadili, H. A. H.; Zaalov, N. Y.; Honary, F.; Rogers, N. C.; Boteler, D. H.; Danskin, D. W.
2016-07-01
There is a need for improved techniques for nowcasting and forecasting (over several hours) HF propagation at northerly latitudes to support airlines operating over the increasingly popular trans-polar routes. In this paper the assimilation of real-time measurements into a propagation model developed by the authors is described, including ionosonde measurements and total electron content (TEC) measurements to define the main parameters of the ionosphere. The effects of D region absorption in the polar cap and auroral regions are integrated with the model through satellite measurements of the flux of energetic solar protons (>1 MeV) and the X-ray flux in the 0.1-0.8 nm band, and ground-based magnetometer measurements which form the Kp and Dst indices of geomagnetic activity. The model incorporates various features (e.g., convecting patches of enhanced plasma density) of the polar ionosphere that are, in particular, responsible for off-great circle propagation and lead to propagation at times and frequencies not expected from on-great circle propagation alone. The model development is supported by the collection of HF propagation measurements over several paths within the polar cap, crossing the auroral oval, and along the midlatitude trough.
Paez, Kathryn A.; Mallery, Coretta J.; Noel, HarmoniJoie; Pugliese, Christopher; McSorley, Veronica E.; Lucado, Jennifer L.; Ganachari, Deepa
2014-01-01
Understanding health insurance is central to affording and accessing health care in the United States. Efforts to support consumers in making wise purchasing decisions and using health insurance to their advantage would benefit from the development of a valid and reliable measure to assess health insurance literacy. This article reports on the development of the Health Insurance Literacy Measure (HILM), a self-assessment measure of consumers' ability to select and use private health insurance. The authors developed a conceptual model of health insurance literacy based on formative research and stakeholder guidance. Survey items were drafted using the conceptual model as a guide then tested in two rounds of cognitive interviews. After a field test with 828 respondents, exploratory factor analysis revealed two HILM scales, choosing health insurance and using health insurance, each of which is divided into a confidence subscale and likelihood of behavior subscale. Correlations between the HILM scales and an objective measure of health insurance knowledge and skills were positive and statistically significant which supports the validity of the measure. PMID:25315595
Paez, Kathryn A; Mallery, Coretta J; Noel, HarmoniJoie; Pugliese, Christopher; McSorley, Veronica E; Lucado, Jennifer L; Ganachari, Deepa
2014-01-01
Understanding health insurance is central to affording and accessing health care in the United States. Efforts to support consumers in making wise purchasing decisions and using health insurance to their advantage would benefit from the development of a valid and reliable measure to assess health insurance literacy. This article reports on the development of the Health Insurance Literacy Measure (HILM), a self-assessment measure of consumers' ability to select and use private health insurance. The authors developed a conceptual model of health insurance literacy based on formative research and stakeholder guidance. Survey items were drafted using the conceptual model as a guide then tested in two rounds of cognitive interviews. After a field test with 828 respondents, exploratory factor analysis revealed two HILM scales, choosing health insurance and using health insurance, each of which is divided into a confidence subscale and likelihood of behavior subscale. Correlations between the HILM scales and an objective measure of health insurance knowledge and skills were positive and statistically significant which supports the validity of the measure.
Ross, macdonald, and a theory for the dynamics and control of mosquito-transmitted pathogens.
Smith, David L; Battle, Katherine E; Hay, Simon I; Barker, Christopher M; Scott, Thomas W; McKenzie, F Ellis
2012-01-01
Ronald Ross and George Macdonald are credited with developing a mathematical model of mosquito-borne pathogen transmission. A systematic historical review suggests that several mathematicians and scientists contributed to development of the Ross-Macdonald model over a period of 70 years. Ross developed two different mathematical models, Macdonald a third, and various "Ross-Macdonald" mathematical models exist. Ross-Macdonald models are best defined by a consensus set of assumptions. The mathematical model is just one part of a theory for the dynamics and control of mosquito-transmitted pathogens that also includes epidemiological and entomological concepts and metrics for measuring transmission. All the basic elements of the theory had fallen into place by the end of the Global Malaria Eradication Programme (GMEP, 1955-1969) with the concept of vectorial capacity, methods for measuring key components of transmission by mosquitoes, and a quantitative theory of vector control. The Ross-Macdonald theory has since played a central role in development of research on mosquito-borne pathogen transmission and the development of strategies for mosquito-borne disease prevention.
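For reference, the vectorial-capacity expression mentioned above is commonly written as C = m a^2 p^n / (-ln p), where m is the mosquito density per person, a the human-biting rate, p the daily mosquito survival probability, and n the extrinsic incubation period in days. The short sketch below transcribes that standard textbook form; the parameter values are illustrative and not taken from the paper.

    # Hedged sketch: the classical vectorial-capacity expression with illustrative values.
    import math

    def vectorial_capacity(m, a, p, n):
        # m: mosquitoes per person; a: bites on humans per mosquito per day;
        # p: daily survival probability; n: extrinsic incubation period (days).
        return m * a**2 * p**n / (-math.log(p))

    print(vectorial_capacity(m=10.0, a=0.3, p=0.9, n=10))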
Ross, Macdonald, and a Theory for the Dynamics and Control of Mosquito-Transmitted Pathogens
Smith, David L.; Battle, Katherine E.; Hay, Simon I.; Barker, Christopher M.; Scott, Thomas W.; McKenzie, F. Ellis
2012-01-01
Ronald Ross and George Macdonald are credited with developing a mathematical model of mosquito-borne pathogen transmission. A systematic historical review suggests that several mathematicians and scientists contributed to development of the Ross-Macdonald model over a period of 70 years. Ross developed two different mathematical models, Macdonald a third, and various “Ross-Macdonald” mathematical models exist. Ross-Macdonald models are best defined by a consensus set of assumptions. The mathematical model is just one part of a theory for the dynamics and control of mosquito-transmitted pathogens that also includes epidemiological and entomological concepts and metrics for measuring transmission. All the basic elements of the theory had fallen into place by the end of the Global Malaria Eradication Programme (GMEP, 1955–1969) with the concept of vectorial capacity, methods for measuring key components of transmission by mosquitoes, and a quantitative theory of vector control. The Ross-Macdonald theory has since played a central role in development of research on mosquito-borne pathogen transmission and the development of strategies for mosquito-borne disease prevention. PMID:22496640
4D Subject-Specific Inverse Modeling of the Chick Embryonic Heart Outflow Tract Hemodynamics
Goenezen, Sevan; Chivukula, Venkat Keshav; Midgett, Madeline; Phan, Ly; Rugonyi, Sandra
2015-01-01
Blood flow plays a critical role in regulating embryonic cardiac growth and development, with altered flow leading to congenital heart disease. Progress in the field, however, is hindered by a lack of quantification of hemodynamic conditions in the developing heart. In this study, we present a methodology to quantify blood flow dynamics in the embryonic heart using subject-specific computational fluid dynamics (CFD) models. While the methodology is general, we focused on a model of the chick embryonic heart outflow tract (OFT), which distally connects the heart to the arterial system, and is the region of origin of many congenital cardiac defects. Using structural and Doppler velocity data collected from optical coherence tomography (OCT), we generated 4D (3D + time) embryo-specific CFD models of the heart OFT. To replicate the blood flow dynamics over time during the cardiac cycle, we developed an iterative inverse-method optimization algorithm, which determines the CFD model boundary conditions such that differences between computed velocities and measured velocities at one point within the OFT lumen are minimized. Results from our developed CFD model agree with previously measured hemodynamics in the OFT. Further, computed velocities and measured velocities differ by less than 15% at locations that were not used in the optimization, validating the model. The presented methodology can be used in quantifications of embryonic cardiac hemodynamics under normal and altered blood flow conditions, enabling an in-depth quantitative study of how blood flow influences cardiac development. PMID:26361767
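A minimal sketch of the inverse step described above: adjusting boundary-condition parameters so that a forward model's velocity at a monitoring point matches measured velocities over the cardiac cycle. The real forward model is a 4D CFD solve; here it is replaced by a cheap placeholder function so the optimization loop is runnable, and all parameter names and values are assumptions.

    # Hedged sketch: inverse estimation of boundary-condition parameters by least squares.
    import numpy as np
    from scipy.optimize import least_squares

    t = np.linspace(0.0, 1.0, 50)                       # one cardiac cycle (normalized)
    v_measured = 0.12 * np.sin(np.pi * t) ** 2          # stand-in for OCT Doppler velocity (m/s)

    def forward_velocity(params, t):
        # Placeholder for the CFD solve: amplitude and phase of the inlet waveform
        # mapped to centerline velocity at the monitoring point.
        amplitude, phase = params
        return amplitude * np.sin(np.pi * (t - phase)) ** 2

    def residuals(params):
        return forward_velocity(params, t) - v_measured

    sol = least_squares(residuals, x0=[0.05, 0.1], bounds=([0.0, -0.5], [1.0, 0.5]))
    print("optimized boundary-condition parameters:", sol.x)
    print("max velocity mismatch (m/s):", np.abs(residuals(sol.x)).max())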
Thermal barrier coating life prediction model development, phase 2
NASA Technical Reports Server (NTRS)
Meier, Susan Manning; Sheffler, Keith D.; Nissley, David M.
1991-01-01
The objective of this program was to generate a life prediction model for electron-beam-physical vapor deposited (EB-PVD) zirconia thermal barrier coating (TBC) on gas turbine engine components. Specific activities involved in development of the EB-PVD life prediction model included measurement of EB-PVD ceramic physical and mechanical properties and adherence strength, measurement of the thermally grown oxide (TGO) growth kinetics, generation of quantitative cyclic thermal spallation life data, and development of a spallation life prediction model. Life data useful for model development was obtained by exposing instrumented, EB-PVD ceramic coated cylindrical specimens in a jet fueled burner rig. Monotonic compression and tensile mechanical tests and physical property tests were conducted to obtain the EB-PVD ceramic behavior required for burner rig specimen analysis. As part of that effort, a nonlinear constitutive model was developed for the EB-PVD ceramic. Spallation failure of the EB-PVD TBC system consistently occurred at the TGO-metal interface. Calculated out-of-plane stresses were a small fraction of that required to statically fail the TGO. Thus, EB-PVD spallation was attributed to the interfacial cracking caused by in-plane TGO strains. Since TGO mechanical properties were not measured in this program, calculation of the burner rig specimen TGO in-plane strains was performed by using alumina properties. A life model based on maximum in-plane TGO tensile mechanical strain and TGO thickness correlated the burner rig specimen EB-PVD ceramic spallation lives within a factor of about plus or minus 2X.
Methodology Development of a Gas-Liquid Dynamic Flow Regime Transition Model
NASA Astrophysics Data System (ADS)
Doup, Benjamin Casey
Current reactor safety analysis codes, such as RELAP5, TRACE, and CATHARE, use flow regime maps or flow regime transition criteria that were developed for static fully-developed two-phase flows to choose interfacial transfer models that are necessary to solve the two-fluid model. The flow regime is therefore difficult to identify near the flow regime transitions, in developing two-phase flows, and in transient two-phase flows. Interfacial area transport equations were developed to more accurately predict the dynamic nature of two-phase flows. However, other model coefficients are still flow regime dependent. Therefore, an accurate prediction of the flow regime is still important. In the current work, the methodology for the development of a dynamic flow regime transition model that uses the void fraction and interfacial area concentration obtained by solving the three-field two-fluid model and the two-group interfacial area transport equation is investigated. To develop this model, detailed local experimental data are obtained, the two-group interfacial area transport equations are revised, and a dynamic flow regime transition model is evaluated using a computational fluid dynamics model. Local experimental data are acquired for 63 different flow conditions in bubbly, cap-bubbly, slug, and churn-turbulent flow regimes. The measured parameters are the group-1 and group-2 bubble number frequency, void fraction, interfacial area concentration, and interfacial bubble velocities. The measurements are benchmarked by comparing the superficial gas velocities determined from the local measurements with those determined from volumetric flow rate measurements; the agreement is generally within ±20%. The repeatability of the four-sensor probe construction process is within ±10%. The repeatability of the measurement process is within ±7%. The symmetry of the test section is examined and the average agreement is within ±5.3% at z/D = 10 and ±3.4% at z/D = 32. Revised source/sink terms for the two-group interfacial area transport equations are derived and fit to area-averaged experimental data to determine new model coefficients. The average agreement between this model and the experimental data for the void fraction and interfacial area concentration is 10.6% and 15.7%, respectively. This revised two-group interfacial area transport equation and the three-field two-fluid model are used to solve for the group-1 and group-2 interfacial area concentration and void fraction. These values and a dynamic flow regime transition model are used to classify the flow regimes. The flow regimes determined using this model are compared with the flow regimes based on the experimental data and on a flow regime map using Mishima and Ishii's (1984) transition criteria. The dynamic flow regime transition model is shown to predict the flow regimes dynamically and has improved the prediction of the flow regime over that using a flow regime map. Safety codes often employ the one-dimensional two-fluid model to model two-phase flows. The area-averaged relative velocity correlation necessary to close this model is derived from the drift flux model. The effects of the necessary assumptions used to derive this correlation are investigated using local measurements, and these effects are found to have a limited impact on the prediction of the area-averaged relative velocity.
A measurement system for large, complex software programs
NASA Technical Reports Server (NTRS)
Rone, Kyle Y.; Olson, Kitty M.; Davis, Nathan E.
1994-01-01
This paper describes measurement systems required to forecast, measure, and control activities for large, complex software development and support programs. Initial software cost and quality analysis provides the foundation for meaningful management decisions as a project evolves. In modeling the cost and quality of software systems, the relationship between the functionality, quality, cost, and schedule of the product must be considered. This explicit relationship is dictated by the criticality of the software being developed. This balance between cost and quality is a viable software engineering trade-off throughout the life cycle. Therefore, the ability to accurately estimate the cost and quality of software systems is essential to providing reliable software on time and within budget. Software cost models relate the product error rate to the percent of the project labor that is required for independent verification and validation. The criticality of the software determines which cost model is used to estimate the labor required to develop the software. Software quality models yield an expected error discovery rate based on the software size, criticality, software development environment, and the level of competence of the project and developers with respect to the processes being employed.
A Model and Measure of Mobile Communication Competence
ERIC Educational Resources Information Center
Bakke, Emil
2010-01-01
This article deals with two studies that develop a measure and model of mobile communication competence (MCC). The first study examines the dimensionality of the measure by conducting an exploratory factor analysis on 350 students at a large university in the midwestern United States. Results identified six constructs across 24 items: willingness…
Dynamic Measurement Modeling: Using Nonlinear Growth Models to Estimate Student Learning Capacity
ERIC Educational Resources Information Center
Dumas, Denis G.; McNeish, Daniel M.
2017-01-01
Single-timepoint educational measurement practices are capable of assessing student ability at the time of testing but are not designed to be informative of student capacity for developing in any particular academic domain, despite commonly being used in such a manner. For this reason, such measurement practice systematically underestimates the…
Evaluation of Scale Reliability with Binary Measures Using Latent Variable Modeling
ERIC Educational Resources Information Center
Raykov, Tenko; Dimitrov, Dimiter M.; Asparouhov, Tihomir
2010-01-01
A method for interval estimation of scale reliability with discrete data is outlined. The approach is applicable with multi-item instruments consisting of binary measures, and is developed within the latent variable modeling methodology. The procedure is useful for evaluation of consistency of single measures and of sum scores from item sets…
ERIC Educational Resources Information Center
Steed, Teneka C.
2013-01-01
Evaluating the psychometric properties of a newly developed instrument is critical to understanding how well an instrument measures what it intends to measure, and ensuring proposed use and interpretation of questionnaire scores are valid. The current study uses Structural Equation Modeling (SEM) techniques to examine the factorial structure and…
Thematic Mapper Protoflight Model Line Spread Function
NASA Technical Reports Server (NTRS)
Schueler, C.
1984-01-01
The Thematic Mapper (TM) Protoflight Model Spatial Line Spread Function (LSF) was not measured before launch. Therefore, a methodology was developed to characterize the LSF from protoflight model optics and electronics measurements made before launch. Direct prelaunch LSF measurements made on the flight model TM verified the protoflight TM LSF simulation. Results for two selected protoflight TM channels are presented here. It is shown that LSF data for the other ninety-four channels could be generated in the same fashion.
Preliminary Evaluation of the DUSTRAN Modeling Suite for Modeling Atmospheric Chloride Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jensen, Philip; Tran, Tracy; Fritz, Bradley
2016-05-03
This study investigates the potential of DUSTRAN, a dust dispersion modeling system developed by Pacific Northwest National Laboratory, to model the transport of sea salt aerosols (SSA). Results from DUSTRAN simulations run with historical meteorological data were compared against privately-measured chloride data at the near coastal Maine Yankee Nuclear Power Plant (NPP) and the Environmental Protection Agency-measured CASTNET data from Acadia National Park (NP). The comparisons have provided both encouragement as to the practical value of DUSTRAN’s CALPUFF model and suggestions for further software development opportunities. All modeled concentrations were within one order of magnitude of those measured and a few test cases showed excellent agreement between modeled and measured concentrations. However, there is a lack of consistency in discrepancy which may be due to inaccurate extrapolation of meteorological data, underlying model physics, and the source term. Future research will refine the software to better capture physical phenomena. Overall, results indicate that with parameter refinement, DUSTRAN has the potential to simulate atmospheric chloride transport from known sources to inland sites for the purpose of determining the corrosion susceptibility of various structures, systems, and components at the site.
DOT National Transportation Integrated Search
2014-11-01
The Federal Motor Carrier Safety Administration (FMCSA), in cooperation with the John A. Volpe National Transportation Systems Center, has developed an analytic model to measure the effectiveness of roadside inspections and traffic enforcements in te...
DOT National Transportation Integrated Search
2016-02-01
The Federal Motor Carrier Safety Administration (FMCSA), in cooperation with the John A. Volpe National : Transportation Systems Center, has developed an analytic model to measure the effectiveness of roadside : inspections and traffic enforcements i...
DOT National Transportation Integrated Search
2017-08-01
The Federal Motor Carrier Safety Administration (FMCSA), in cooperation with the John A. Volpe National Transportation Systems Center, has developed an analytic model to measure the effectiveness of roadside inspections and traffic enforcements in te...
DOT National Transportation Integrated Search
2015-06-01
The Federal Motor Carrier Safety Administration (FMCSA), in cooperation with the John A. Volpe National Transportation Systems Center, has developed an analytic model to measure the effectiveness of roadside inspections and traffic enforcements in te...
Anthropometric predictors of body fat as measured by hydrostatic weighing in Guatemalan adults.
Ramirez-Zea, Manuel; Torun, Benjamin; Martorell, Reynaldo; Stein, Aryeh D
2006-04-01
Most predictive equations currently used to assess percentage body fat (%BF) were derived from persons in industrialized Western societies. We developed equations to predict %BF from anthropometric measurements in rural and urban Guatemalan adults. Body density was measured in 123 women and 114 men by using hydrostatic weighing and simultaneous measurement of residual lung volume. Anthropometric measures included weight (in kg), height (in cm), 4 skinfold thicknesses [(STs) in mm], and 6 circumferences (in cm). Sex-specific multiple linear regression models were developed with %BF as the dependent variable and age, residence (rural or urban), and all anthropometric measures as independent variables (the "full" model). A "simplified" model was developed by using age, residence, weight, height, and arm, abdominal, and calf circumferences as independent variables. The preferred full models were %BF = -80.261 - (weight x 0.623) + (height x 0.214) + (tricipital ST x 0.379) + (abdominal ST x 0.202) + (abdominal circumference x 0.940) + (thigh circumference x 0.316); root mean square error (RMSE) = 3.0; and pure error (PE) = 3.4 for men and %BF = -15.471 + (tricipital ST x 0.332) + (subscapular ST x 0.154) + (abdominal ST x 0.119) + (hip circumference x 0.356); RMSE = 2.4; and PE = 2.9 for women. The preferred simplified models were %BF = -48.472 - (weight x 0.257) + (abdominal circumference x 0.989); RMSE = 3.8; and PE = 3.7 for men and %BF = 19.420 + (weight x 0.385) - (height x 0.215) + (abdominal circumference x 0.265); RMSE = 3.5; and PE = 3.5 for women. These equations performed better in this developing-country population than did previously published equations.
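The preferred simplified equations quoted above can be transcribed directly into code; the sketch below encodes only those two equations (weight in kg, height and abdominal circumference in cm) and implies nothing about their validity outside the study population. The example person is hypothetical.

    # Direct transcription of the "simplified" %BF prediction equations quoted in the abstract.
    def percent_bf_men_simplified(weight_kg, abdominal_circ_cm):
        return -48.472 - 0.257 * weight_kg + 0.989 * abdominal_circ_cm

    def percent_bf_women_simplified(weight_kg, height_cm, abdominal_circ_cm):
        return 19.420 + 0.385 * weight_kg - 0.215 * height_cm + 0.265 * abdominal_circ_cm

    # Example: a hypothetical 70-kg, 160-cm woman with an 85-cm abdominal circumference.
    print(round(percent_bf_women_simplified(70, 160, 85), 1))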
ERIC Educational Resources Information Center
Kayapinar, Ulas
2016-01-01
To date, studies on reflection seem to lack concern for in-service teacher development. This article proposes a new EFL reflective practitioner development model (RPDM) for an in-service program that is not only based on the principles of reflection, but that also measures teachers' reflective and self-efficacy development. Focusing on the…
Spacecraft Internal Acoustic Environment Modeling
NASA Technical Reports Server (NTRS)
Chu, Shao-Sheng R.; Allen Christopher S.
2010-01-01
Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and to predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources, later with experimental data. This paper describes the implementation of acoustic modeling for design purposes by incrementally increasing model fidelity and validating the accuracy of the model while predicting the noise of sources under various conditions. During FY 07, a simple-geometry Statistical Energy Analysis (SEA) model was developed and validated using a physical mockup and acoustic measurements. A process for modeling the effects of absorptive wall treatments and the resulting reverberation environment were developed. During FY 08, a model with more complex and representative geometry of the Orion Crew Module (CM) interior was built, and noise predictions based on input noise sources were made. A corresponding physical mockup was also built. Measurements were made inside this mockup, and comparisons were made with the model and showed excellent agreement. During FY 09, the fidelity of the mockup and corresponding model were increased incrementally by including a simple ventilation system. The airborne noise contribution of the fans was measured using a sound intensity technique, since the sound power levels were not known beforehand. This is opposed to earlier studies where Reference Sound Sources (RSS) with known sound power level were used. Comparisons of the modeling result with the measurements in the mockup showed excellent results. During FY 10, the fidelity of the mockup and the model were further increased by including an ECLSS (Environmental Control and Life Support System) wall, associated closeout panels, and the gap between ECLSS wall and mockup wall. The effect of sealing the gap and adding sound absorptive treatment to ECLSS wall were also modeled and validated.
NASA Astrophysics Data System (ADS)
Yang, Cheng; Fang, Yi; Zhao, Chao; Zhang, Xin
2018-06-01
A duct acoustics model is an essential component of an impedance eduction technique and its computational cost determines the impedance measurement efficiency. In this paper, a model is developed for the sound propagation through a lined duct carrying a uniform mean flow. In contrast to many existing models, the interface between the liner and the duct field is defined with a modified Ingard-Myers boundary condition that takes account of the effect of the boundary layer above the liner. A mode-matching method is used to couple the unlined and lined duct segments for the model development. For the lined duct segment, the eigenvalue problem resulting from the modified boundary condition is solved by an integration scheme which, on the one hand, allows the lined duct modes to be computed in an efficient manner, and on the other hand, orders the modes automatically. The duct acoustics model developed from the solved lined duct modes is shown to converge more rapidly than the one developed from the rigid-walled duct modes. Validation against the experimental data in the literature shows that the proposed model is able to predict more accurately the liner performance measured by the two-source method. This, however, cannot be achieved by a duct acoustics model associated with the conventional Ingard-Myers boundary condition. The proposed model has the potential to be integrated into an impedance eduction technique for more reliable liner measurement.
NASA Technical Reports Server (NTRS)
Colborn, B. L.; Armstong, T. W.
1993-01-01
A three-dimensional geometry and mass model of the Long Duration Exposure Facility (LDEF) spacecraft and experiment trays was developed for use in predictions and data interpretation related to ionizing radiation measurements. The modeling approach, level of detail incorporated, example models for specific experiments and radiation dosimeters, and example applications of the model are described.
Developing Model-Making and Model-Breaking Skills Using Direct Measurement Video-Based Activities
ERIC Educational Resources Information Center
Vonk, Matthew; Bohacek, Peter; Militello, Cheryl; Iverson, Ellen
2017-01-01
This study focuses on student development of two important laboratory skills in the context of introductory college-level physics. The first skill, which we call model making, is the ability to analyze a phenomenon in a way that produces a quantitative multimodal model. The second skill, which we call model breaking, is the ability to critically…
Procurement performance measurement system in the health care industry.
Kumar, Arun; Ozdamar, Linet; Ng, Chai Peng
2005-01-01
The rising operating cost of providing healthcare is of concern to health care providers. As such, measurement of procurement performance will enable competitive advantage and provide a framework for continuous improvement. The objective of this paper is to develop a procurement performance measurement system. The paper reviews the existing literature in procurement performance measurement to identify the key areas of purchasing performance. By studying the three components in the supply chain collectively with the resources, procedures and output, a model has been developed. Additionally, a balanced scorecard is proposed by establishing a set of generic measures and six perspectives. A case study conducted at the Singapore Hospital applies the conceptual model to describe the purchasing department and the activities within and outside the department. The results indicate that the material management department has already made a good start in measuring the procurement process through the implementation of the balanced scorecard. Many data are collected but not properly collated or utilized. Areas lacking measurement include cycle time of delivery, order processing time, effectiveness, efficiency and reliability. Though a lot of hard work was involved, the advantages of establishing a measurement system outweigh the costs and efforts involved in its implementation. Results of balanced scorecard measurements provide decision-makers with critical information on efficiency and effectiveness of the purchasing department's work. The measurement model developed could be used for any hospital procurement system.
ERIC Educational Resources Information Center
Halbesleben, Jonathon R. B.; Wheeler, Anthony R.
2009-01-01
Although management scholars have provided a variety of metaphors to describe the role of students in management courses, researchers have yet to explore students' identification with the models and how they are linked to educational outcomes. This article develops a measurement tool for students' identification with business education models and…
Conducting field studies for testing pesticide leaching models
Smith, Charles N.; Parrish, Rudolph S.; Brown, David S.
1990-01-01
A variety of predictive models are being applied to evaluate the transport and transformation of pesticides in the environment. These include well known models such as the Pesticide Root Zone Model (PRZM), the Risk of Unsaturated-Saturated Transport and Transformation Interactions for Chemical Concentrations Model (RUSTIC) and the Groundwater Loading Effects of Agricultural Management Systems Model (GLEAMS). The potentially large impacts of using these models as tools for developing pesticide management strategies and regulatory decisions necessitates development of sound model validation protocols. This paper offers guidance on many of the theoretical and practical problems encountered in the design and implementation of field-scale model validation studies. Recommendations are provided for site selection and characterization, test compound selection, data needs, measurement techniques, statistical design considerations and sampling techniques. A strategy is provided for quantitatively testing models using field measurements.
Pickles, Andrew; Lord, Catherine
2015-01-01
Background: Motor milestones such as the onset of walking are important developmental markers, not only for later motor skills but also for more widespread social‐cognitive development. The aim of the current study was to test whether gross motor abilities, specifically the onset of walking, predicted the subsequent rate of language development in a large cohort of children with autism spectrum disorder (ASD). Methods: We ran growth curve models for expressive and receptive language measured at 2, 3, 5 and 9 years in 209 autistic children. Measures of gross motor, visual reception and autism symptoms were collected at the 2 year visit. In Model 1, walking onset was included as a predictor of the slope of language development. Model 2 included a measure of non‐verbal IQ and autism symptom severity as covariates. The final model, Model 3, additionally covaried for gross motor ability. Results: In the first model, parent‐reported age of walking onset significantly predicted the subsequent rate of language development although the relationship became non‐significant when gross motor skill, non‐verbal ability and autism severity scores were included (Models 2 & 3). Gross motor score, however, did remain a significant predictor of both expressive and receptive language development. Conclusions: Taken together, the model results provide some evidence that early motor abilities in young children with ASD can have longitudinal cross‐domain influences, potentially contributing, in part, to the linguistic difficulties that characterise ASD. Autism Res 2016, 9: 993–1001. © 2015 The Authors Autism Research published by Wiley Periodicals, Inc. on behalf of International Society for Autism Research PMID:26692550
Bedford, Rachael; Pickles, Andrew; Lord, Catherine
2016-09-01
Motor milestones such as the onset of walking are important developmental markers, not only for later motor skills but also for more widespread social-cognitive development. The aim of the current study was to test whether gross motor abilities, specifically the onset of walking, predicted the subsequent rate of language development in a large cohort of children with autism spectrum disorder (ASD). We ran growth curve models for expressive and receptive language measured at 2, 3, 5 and 9 years in 209 autistic children. Measures of gross motor, visual reception and autism symptoms were collected at the 2 year visit. In Model 1, walking onset was included as a predictor of the slope of language development. Model 2 included a measure of non-verbal IQ and autism symptom severity as covariates. The final model, Model 3, additionally covaried for gross motor ability. In the first model, parent-reported age of walking onset significantly predicted the subsequent rate of language development although the relationship became non-significant when gross motor skill, non-verbal ability and autism severity scores were included (Models 2 & 3). Gross motor score, however, did remain a significant predictor of both expressive and receptive language development. Taken together, the model results provide some evidence that early motor abilities in young children with ASD can have longitudinal cross-domain influences, potentially contributing, in part, to the linguistic difficulties that characterise ASD. Autism Res 2016, 9: 993-1001. © 2015 The Authors Autism Research published by Wiley Periodicals, Inc. on behalf of International Society for Autism Research.
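The growth-curve analysis described above can be approximated with a mixed-effects model in which each child has a random intercept and slope for age, and age of walking onset is tested as a predictor of the language slope. A minimal sketch under that assumption (file and column names are hypothetical, not the study's):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per child per visit (ages 2, 3, 5, 9).
# Columns: child_id, age, expressive, walk_onset, gross_motor, nviq, severity.
df = pd.read_csv("asd_language_long.csv")

# Model 1: walking onset as a predictor of the slope of language growth.
m1 = smf.mixedlm("expressive ~ age * walk_onset", df,
                 groups=df["child_id"], re_formula="~age").fit()

# Model 3: additionally covary gross motor score, non-verbal IQ, and autism severity.
m3 = smf.mixedlm("expressive ~ age * walk_onset + age * gross_motor + nviq + severity",
                 df, groups=df["child_id"], re_formula="~age").fit()

print(m1.summary())
print(m3.summary())
```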
Magnetic Measurements of the First Nb 3Sn Model Quadrupole (MQXFS) for the High-Luminosity LHC
DiMarco, J.; Ambrosio, G.; Chlachidze, G.; ...
2016-12-12
The US LHC Accelerator Research Program (LARP) and CERN are developing high-gradient Nb 3Sn magnets for the High Luminosity LHC interaction regions. Magnetic measurements of the first 1.5 m long, 150 mm aperture model quadrupole, MQXFS1, were performed during magnet assembly at LBNL, as well as during cryogenic testing at Fermilab’s Vertical Magnet Test Facility. This paper reports on the results of these magnetic characterization measurements, as well as on the performance of new probes developed for the tests.
NASA Astrophysics Data System (ADS)
Tercero, Carlos; Ikeda, Seiichi; Fukuda, Toshio; Arai, Fumihito; Negoro, Makoto; Takahashi, Ikuo
2011-10-01
There is a need to develop quantitative evaluation for simulator-based training in medicine. Photoelastic stress analysis can be used in human tissue modeling materials; this enables the development of simulators that measure respect for tissue. To apply this to endovascular surgery, we first present a model of a saccular aneurysm where stress variation during micro-coil deployment is measured, and then, relying on a bi-planar vision system, we measure a catheter trajectory and compare it to a reference trajectory considering respect for tissue. New photoelastic tissue modeling materials will expand the applications of this technology to other medical training domains.
Family Environment and Cognitive Development: Twelve Analytic Models
ERIC Educational Resources Information Center
Walberg, Herbert J.; Marjoribanks, Kevin
1976-01-01
The review indicates that refined measures of the family environment and the use of complex statistical models increase the understanding of the relationships between socioeconomic status, sibling variables, family environment, and cognitive development. (RC)
Studies of the Codeposition of Cobalt Hydroxide and Nickel Hydroxide
NASA Technical Reports Server (NTRS)
Ho, C. H.; Murthy, M.; VanZee, J. W.
1997-01-01
Topics considered include: chemistry, experimental measurements, planar film model development, impregnation model development, results and conclusion. Also included: effect of cobalt concentration on deposition/loading; effect of current density on loading distribution.
Global Bedload Flux Modeling and Analysis in Large Rivers
NASA Astrophysics Data System (ADS)
Islam, M. T.; Cohen, S.; Syvitski, J. P.
2017-12-01
Proper sediment transport quantification has long been an area of interest for both scientists and engineers in the fields of geomorphology, and management of rivers and coastal waters. Bedload flux is important for monitoring water quality and for sustainable development of coastal and marine bioservices. Bedload measurements, especially for large rivers, are extremely scarce across time, and many rivers have never been monitored. The scarcity of bedload measurements is particularly acute in developing countries, where changes in sediment yields are high. The paucity of bedload measurements is the result of 1) the nature of the problem (large spatial and temporal uncertainties), and 2) field costs including the time-consuming nature of the measurement procedures (repeated bedform migration tracking, bedload samplers). Here we present a first-of-its-kind methodology for calculating bedload in large global rivers (basins are >1,000 km). Evaluation of model skill is based on 113 bedload measurements. The model predictions are compared with an empirical model developed from the observational dataset in an attempt to evaluate the differences between a physically-based numerical model and a lumped relationship between bedload flux and fluvial and basin parameters (e.g., discharge, drainage area, lithology). The initial success of this study opens up various applications to global fluvial geomorphology (e.g. including the relationship between suspended sediment (wash load) and bedload). Simulated results with known uncertainties offer a new research product as a valuable resource for the whole scientific community.
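The "lumped relationship" benchmark mentioned above could, for instance, take the form of a log-log regression of measured bedload flux on basin and flow descriptors; the sketch below is illustrative only (file and column names are assumptions) and does not reproduce the authors' actual model.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical table of the observational dataset (one row per bedload measurement).
obs = pd.read_csv("bedload_observations.csv")   # columns: discharge_m3s, area_km2, bedload_kgs

X = np.log10(obs[["discharge_m3s", "area_km2"]].to_numpy())
y = np.log10(obs["bedload_kgs"].to_numpy())

reg = LinearRegression().fit(X, y)   # lumped log-log (power-law) relationship
predicted = 10 ** reg.predict(X)     # back-transformed bedload flux, kg/s
print(reg.coef_, reg.intercept_)
```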
Test of Hadronic Interaction Models with the KASCADE Hadron Calorimeter
NASA Astrophysics Data System (ADS)
Milke, J.; KASCADE Collaboration
The interpretation of extensive air shower (EAS) measurements often requires comparison with EAS simulations based on high-energy hadronic interaction models. These interaction models have to extrapolate into kinematical regions and energy ranges beyond the limit of present accelerators. Therefore, it is necessary to test whether these models are able to describe the EAS development in a consistent way. By simultaneously measuring the hadronic, electromagnetic, and muonic parts of an EAS, the KASCADE experiment offers the best facilities for checking these models. For the EAS simulations, the program CORSIKA is used with several hadronic event generators implemented. Different hadronic observables, e.g. hadron number, energy spectrum, lateral distribution, are investigated, as well as their correlations with the electromagnetic and muonic shower size. By comparing measurements and simulations the consistency of the description of the EAS development is checked. First results with the new interaction model NEXUS and version II.5 of the model DPMJET, recently included in CORSIKA, are presented and compared with QGSJET simulations.
NASA Astrophysics Data System (ADS)
Skene, Katherine J.; Gent, Janneane F.; McKay, Lisa A.; Belanger, Kathleen; Leaderer, Brian P.; Holford, Theodore R.
2010-12-01
An integrated exposure model was developed that estimates nitrogen dioxide (NO 2) concentration at residences using geographic information systems (GIS) and variables derived within residential buffers representing traffic volume and landscape characteristics including land use, population density and elevation. Multiple measurements of NO 2 taken outside of 985 residences in Connecticut were used to develop the model. A second set of 120 outdoor NO 2 measurements as well as cross-validation were used to validate the model. The model suggests that approximately 67% of the variation in NO 2 levels can be explained by: traffic and land use primarily within 2 km of a residence; population density; elevation; and time of year. Potential benefits of this model for health effects research include improved spatial estimations of traffic-related pollutant exposure and reduced need for extensive pollutant measurements. The model, which could be calibrated and applied in areas other than Connecticut, has importance as a tool for exposure estimation in epidemiological studies of traffic-related air pollution.
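A land-use-regression formulation consistent with the description above can be sketched as follows; the predictor names and buffer definitions are illustrative assumptions, not the authors' actual model terms.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical residence-level table: measured outdoor NO2 plus GIS-derived buffer variables.
homes = pd.read_csv("no2_residences.csv")
# columns: no2_ppb, traffic_2km, landuse_dev_2km, pop_density, elevation_m, season

fit = smf.ols("no2_ppb ~ traffic_2km + landuse_dev_2km + pop_density"
              " + elevation_m + C(season)", data=homes).fit()
print(fit.rsquared)   # the paper reports roughly 0.67 of variance explained by its own model
```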
The Development of a New Model of Solar EUV Irradiance Variability
NASA Technical Reports Server (NTRS)
Warren, Harry; Wagner, William J. (Technical Monitor)
2002-01-01
The goal of this research project is the development of a new model of solar EUV (Extreme Ultraviolet) irradiance variability. The model is based on combining differential emission measure distributions derived from spatially and spectrally resolved observations of active regions, coronal holes, and the quiet Sun with full-disk solar images. An initial version of this model was developed with earlier funding from NASA. The new version of the model developed with this research grant will incorporate observations from SoHO as well as updated compilations of atomic data. These improvements will make the model calculations much more accurate.
Gupta, Nidhi; Heiden, Marina; Mathiassen, Svend Erik; Holtermann, Andreas
2016-05-01
We aimed at developing and evaluating statistical models predicting objectively measured occupational time spent sedentary or in physical activity from self-reported information available in large epidemiological studies and surveys. Two-hundred-and-fourteen blue-collar workers responded to a questionnaire containing information about personal and work related variables, available in most large epidemiological studies and surveys. Workers also wore accelerometers for 1-4 days measuring time spent sedentary and in physical activity, defined as non-sedentary time. Least-squares linear regression models were developed, predicting objectively measured exposures from selected predictors in the questionnaire. A full prediction model based on age, gender, body mass index, job group, self-reported occupational physical activity (OPA), and self-reported occupational sedentary time (OST) explained 63% (R (2)adjusted) of the variance of both objectively measured time spent sedentary and in physical activity since these two exposures were complementary. Single-predictor models based only on self-reported information about either OPA or OST explained 21% and 38%, respectively, of the variance of the objectively measured exposures. Internal validation using bootstrapping suggested that the full and single-predictor models would show almost the same performance in new datasets as in that used for modelling. Both full and single-predictor models based on self-reported information typically available in most large epidemiological studies and surveys were able to predict objectively measured occupational time spent sedentary or in physical activity, with explained variances ranging from 21-63%.
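The full prediction model and its bootstrap internal validation could be sketched as follows; predictor names are placeholders for the questionnaire items described above, and the bootstrap shown is a simplified stand-in for the study's validation procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical worker-level table: accelerometer-measured sedentary time plus questionnaire items.
workers = pd.read_csv("bluecollar_accelerometer.csv")
# columns: sedentary_min, age, gender, bmi, job_group, opa, ost

formula = "sedentary_min ~ age + gender + bmi + C(job_group) + opa + ost"
full = smf.ols(formula, data=workers).fit()
print(full.rsquared_adj)   # the paper reports ~0.63 for its full model

# Simple bootstrap check of how stable the adjusted R^2 is under resampling.
boot = []
for i in range(200):
    resampled = workers.sample(frac=1, replace=True, random_state=i)
    boot.append(smf.ols(formula, data=resampled).fit().rsquared_adj)
print(np.percentile(boot, [2.5, 97.5]))
```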
1980-07-01
... turbine stacks located in a region of high ambient turbulence [Hoult (1975) and Egan (1975)]. ... the aircraft industry in developing the gas turbine engine technology of the present era. Dispersion measurements permit determination of a power law ... The ESEERCO Model for the Prediction of Plume Rise and Dispersion from Gas Turbine Engines. Air Pollution Control Association 68th Annual Meeting
NASA Technical Reports Server (NTRS)
Crawford, Bradley L.
2007-01-01
The angle measurement system (AMS) developed at NASA Langley Research Center (LaRC) is a multipurpose system. It was originally developed to check taper fits in the wind tunnel model support system. The system was further developed to measure simultaneous pitch and roll angles using 3 orthogonally mounted accelerometers (3-axis). This 3-axis arrangement is used as a transfer standard from the calibration standard to the wind tunnel facility. It is generally used to establish model pitch and roll zero and performs the in-situ calibration on model attitude devices. The AMS originally used a laptop computer running DOS-based software but has recently been upgraded to operate in a Windows environment. Other improvements have also been made to the software to enhance its accuracy and add features. This paper will discuss the accuracy and calibration methodologies used in this system and some of the features that have contributed to its popularity.
NASA Technical Reports Server (NTRS)
Mahoney, M. J.; Ismail, S.; Browell, E. V.; Ferrare, R. A.; Kooi, S. A.; Brasseur, L.; Notari, A.; Petway, L.; Brackett, V.; Clayton, M.;
2002-01-01
LASE measures high resolution moisture, aerosol, and cloud distributions not available from conventional observations. LASE water vapor measurements were compared with dropsondes to evaluate their accuracy. LASE water vapor measurements were used to assess the capability of hurricane models to improve their track accuracy by 100 km on 3 day forecasts using Florida State University models.
A comparative study of the constitutive models for silicon carbide
NASA Astrophysics Data System (ADS)
Ding, Jow-Lian; Dwivedi, Sunil; Gupta, Yogendra
2001-06-01
Most of the constitutive models for polycrystalline silicon carbide were developed and evaluated using data from either normal plate impact or Hopkinson bar experiments. At ISP, extensive efforts have been made to gain detailed insight into the shocked state of the silicon carbide (SiC) using innovative experimental methods, viz., lateral stress measurements, in-material unloading measurements, and combined compression-shear experiments. The data obtained from these experiments provide some unique information for both developing and evaluating material models. In this study, these data for SiC were first used to evaluate some of the existing models to identify their strengths and possible deficiencies. Motivated by both the results of this comparative study and the experimental observations, an improved phenomenological model was developed. The model incorporates pressure dependence of strength, rate sensitivity, damage evolution under both tension and compression, pressure confinement effect on damage evolution, stiffness degradation due to damage, and pressure dependence of stiffness. The developed model captures most of the material features observed experimentally, but more work is needed to match the experimental data quantitatively.
Measuring health care process quality with software quality measures.
Yildiz, Ozkan; Demirörs, Onur
2012-01-01
Existing quality models focus on some specific diseases, clinics or clinical areas. Although they contain structure, process, or output-type measures, no model measures the quality of health care processes comprehensively. In addition, because overall process quality is not measured, hospitals cannot compare the quality of processes internally or externally. To address these problems, a new model was developed from software quality measures. We have adopted the ISO/IEC 9126 software quality standard for health care processes. Then, JCIAS (Joint Commission International Accreditation Standards for Hospitals) measurable elements were added to the model scope to unify functional requirements. Assessment (diagnosing) process measurement results are provided in this paper. After the application, it was concluded that the model determines weak and strong aspects of the processes, gives a more detailed picture of process quality, and provides quantifiable information to hospitals to compare their processes with those of multiple organizations.
Trends in measurement models and methods in understanding occupational health psychology.
Tetrick, Lois E
2017-07-01
Measurement of occupational health psychology constructs is the cornerstone of developing our understanding of occupational health and safety. It also is critical in the design, evaluation, and implementation of interventions to improve employee and organizational well-being. The purpose of this article is a brief review of the current state of measurement theory and practice in occupational health psychology. Also included is a discussion of the development of newer measurement models and methods that are in use in other disciplines of psychology but have not yet been incorporated into occupational health psychology. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Meijster, Tim; Warren, Nick; Heederik, Dick; Tielemans, Erik
2009-02-01
Recently a dynamic population model was developed that simulates a population of bakery workers longitudinally through time and tracks the development of work-related sensitisation and respiratory symptoms in each worker. Input for this model comes from cross-sectional and longitudinal epidemiological studies which allowed estimation of exposure-response relationships and disease transition probabilities. This model allows us to study the development of diseases and transitions between disease states over time in relation to determinants of disease including flour dust and/or allergen exposure. Furthermore, it enables more realistic modelling of the health impact of different intervention strategies at the workplace (e.g. changes in exposure may take several years to impact on ill-health and often occur as a gradual trend). A large dataset of individual full-shift exposure measurements and real-time exposure measurements were used to obtain detailed insight into the effectiveness of control measures and other determinants of exposure. Given this information, a population-wide reduction of the median exposure by 50% was evaluated in this paper.
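The core of such a dynamic population model is a discrete-time simulation in which each worker's disease state is updated periodically according to exposure-dependent transition probabilities. A toy sketch of that idea follows; all probabilities, the exposure-response slopes, and the state definitions are invented placeholders, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(42)
n_workers, n_years = 10_000, 20
states = np.zeros(n_workers, dtype=int)        # 0 = healthy, 1 = sensitised, 2 = symptomatic
exposure = rng.lognormal(mean=0.0, sigma=0.8, size=n_workers)   # flour dust exposure, arbitrary units

def p_transition(base, slope, x):
    """Toy exposure-response curve: transition probability rises with exposure, capped at 1."""
    return np.minimum(1.0, base + slope * x)

for year in range(n_years):
    if year == 10:
        exposure *= 0.5                        # intervention: 50% reduction in exposure
    p_sens = p_transition(0.01, 0.02, exposure)
    p_symp = p_transition(0.02, 0.03, exposure)
    states = np.where((states == 0) & (rng.random(n_workers) < p_sens), 1, states)
    states = np.where((states == 1) & (rng.random(n_workers) < p_symp), 2, states)

print(np.bincount(states, minlength=3) / n_workers)   # final prevalence by disease state
```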
Matrix population models as a tool in development of habitat models
Gregory D. Hayward; David B. McDonald
1997-01-01
Building sophisticated habitat models for conservation of owls must stem from an understanding of the relative quality of habitats at a variety of geographic and temporal scales. Developing these models requires knowing the relationship between habitat conditions and owl performance. What measure should be used to compare the quality of habitats? Matrix population...
DOT National Transportation Integrated Search
2017-04-01
The Federal Motor Carrier Safety Administration (FMCSA), in cooperation with the John A. Volpe National Transportation Systems Center (Volpe), has developed a quantitative model to measure the effectiveness of motor carrier interventions in terms of ...
A space radiation shielding model of the Martian radiationenvironment experiment (MARIE)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atwell, William; Saganti, Premkumar; Cucinotta, Francis A.
2004-12-01
The 2001 Mars Odyssey spacecraft was launched towards Mars on April 7, 2001. On board the spacecraft is the Martian radiation environment experiment (MARIE), which is designed to measure the background radiation environment due to galactic cosmic rays (GCR) and solar protons in the 20-500 MeV/n energy range. We present an approach for developing a space radiation-shielding model of the spacecraft that includes the MARIE instrument in the current mapping phase orientation. A discussion is presented describing the development and methodology used to construct the shielding model. For a given GCR model environment, using the current MARIE shielding model and the high-energy particle transport codes, dose rate values are compared with MARIE measurements during the early mapping phase in Mars orbit. The results show good agreement between the model calculations and the MARIE measurements as presented for the March 2002 dataset.
NASA Technical Reports Server (NTRS)
Carlson, Toby N.
1988-01-01
Using model development, image analysis and micrometeorological measurements, the object is to push beyond the present limitations of using the infrared temperature method for remotely determining surface energy fluxes and soil moisture over vegetation. Model development consists of three aspects: (1) a more complex vegetation formulation which is more flexible and realistic; (2) a method for modeling the fluxes over patchy vegetation cover; and (3) a method for inferring a two-layer soil vertical moisture gradient from analyses of horizontal variations in surface temperatures. HAPEX and FIFE satellite data will be used along with aircraft thermal infrared and solar images as input for the models. To test the models, moisture availability and bulk canopy resistances will be calculated from data collected locally at the Rock Springs experimental field site and, eventually, from the FIFE project.
The Measure of Adolescent Heterosocial Competence: Development and Initial Validation
ERIC Educational Resources Information Center
Grover, Rachel L.; Nangle, Douglas W.; Zeff, Karen R.
2005-01-01
We developed and began construct validation of the Measure of Adolescent Heterosocial Competence (MAHC), a self-report instrument assessing the ability to negotiate effectively a range of challenging other-sex social interactions. Development followed the Goldfried and D'Zurilla (1969) behavioral-analytic model for assessing competence.…
Teasing Out Cognitive Development from Cognitive Style: A Training Study.
ERIC Educational Resources Information Center
Globerson, Tamar; And Others
1985-01-01
Tested whether or not cognitive development (as measured by mental capacity) and cognitive style (as measured by field-dependence/independence) are different dimensions. Results are discussed with regard to Pascual-Leone's model of cognitive development, relevance to stylistic dimension of reflection/impulsivity, and educational implications.…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Damao; Wang, Zhien; Heymsfield, Andrew J.
Measurement of ice number concentration in clouds is important but still challenging. Stratiform mixed-phase clouds (SMCs) provide a simple scenario for retrieving ice number concentration from remote sensing measurements. The simple ice generation and growth pattern in SMCs offers opportunities to use cloud radar reflectivity (Ze) measurements and other cloud properties to infer ice number concentration quantitatively. To understand the strong temperature dependency of ice habit and growth rate quantitatively, we develop a 1-D ice growth model to calculate the ice diffusional growth along its falling trajectory in SMCs. The radar reflectivity and fall velocity profiles of ice crystals calculated from the 1-D ice growth model are evaluated with the Atmospheric Radiation Measurements (ARM) Climate Research Facility (ACRF) ground-based high vertical resolution radar measurements. Combining Ze measurements and 1-D ice growth model simulations, we develop a method to retrieve the ice number concentrations in SMCs at given cloud top temperature (CTT) and liquid water path (LWP). The retrieved ice concentrations in SMCs are evaluated with in situ measurements and with a three-dimensional cloud-resolving model simulation with a bin microphysical scheme. These comparisons show that the retrieved ice number concentrations are within an uncertainty of a factor of 2, statistically.
DOT National Transportation Integrated Search
1988-03-01
Users of this manual are expected to be researchers who are attempting to develop models that can be used to predict occurrence of pedestrian accidents in a particular city. The manual presents guidelines in the development of such models. A group-...
DOT National Transportation Integrated Search
2017-02-01
This project covered the development and calibration of a Dynamic Traffic Assignment (DTA) model and explained the procedures, constraints, and considerations for usage of this model for the Reno-Sparks area roadway network in Northern Nevada. A lite...
Applications of the Wilkinson Model of Writing Maturity to College Writing.
ERIC Educational Resources Information Center
Sternglass, Marilyn
1982-01-01
Examines the four-category model developed by Andrew Wilkinson at the University of Essex (England) to assess growth in writing maturity. The four measures of development are stylistic, affective, cognitive, and moral. Each has several subcategories. Includes college student essays to illustrate the model. (HTH)
Study of Three-Dimensional Pressure-Driven Turbulent Boundary Layer
1990-08-31
... the flow development rate should be comparable with that of the flows used in practice. In the rest of the chapter, first the governing ... to develop these models will be briefly discussed. The available turbulence models used for the mathematical closure of the ... equations, assumptions made for each model, and the quantities to be measured for the further development of these models are also going to be pointed out.
Dynamic response tests of inertial and optical wind-tunnel model attitude measurement devices
NASA Technical Reports Server (NTRS)
Buehrle, R. D.; Young, C. P., Jr.; Burner, A. W.; Tripp, J. S.; Tcheng, P.; Finley, T. D.; Popernack, T. G., Jr.
1995-01-01
Results are presented for an experimental study of the response of inertial and optical wind-tunnel model attitude measurement systems in a wind-off simulated dynamic environment. This study is part of an ongoing activity at the NASA Langley Research Center to develop high accuracy, advanced model attitude measurement systems that can be used in a dynamic wind-tunnel environment. This activity was prompted by the inertial model attitude sensor response observed during high levels of model vibration which results in a model attitude measurement bias error. Significant bias errors in model attitude measurement were found for the measurement using the inertial device during wind-off dynamic testing of a model system. The amount of bias present during wind-tunnel tests will depend on the amplitudes of the model dynamic response and the modal characteristics of the model system. Correction models are presented that predict the vibration-induced bias errors to a high degree of accuracy for the vibration modes characterized in the simulated dynamic environment. The optical system results were uncorrupted by model vibration in the laboratory setup.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Yu; Sengupta, Manajit; Dooraghi, Mike
Development of accurate transposition models to simulate plane-of-array (POA) irradiance from horizontal measurements or simulations is a complex process mainly because of the anisotropic distribution of diffuse solar radiation in the atmosphere. The limited availability of reliable POA measurements at large temporal and spatial scales leads to difficulties in the comprehensive evaluation of transposition models. This paper proposes new algorithms to assess the uncertainty of transposition models using both surface-based observations and modeling tools. We reviewed the analytical derivation of POA irradiance and the approximation of isotropic diffuse radiation that simplifies the computation. Two transposition models are evaluated against the computation by the rigorous analytical solution. We proposed a new algorithm to evaluate transposition models using the clear-sky measurements at the National Renewable Energy Laboratory's (NREL's) Solar Radiation Research Laboratory (SRRL) and a radiative transfer model that integrates diffuse radiances of various sky-viewing angles. We found that the radiative transfer model and a transposition model based on empirical regressions are superior to the isotropic models when compared to measurements. We further compared the radiative transfer model to the transposition models under an extensive range of idealized conditions. Our results suggest that the empirical transposition model has slightly higher cloudy-sky POA irradiance than the radiative transfer model, but performs better than the isotropic models under clear-sky conditions. Significantly smaller POA irradiances computed by the transposition models are observed when the photovoltaics (PV) panel deviates from the azimuthal direction of the sun. The new algorithms developed in the current study have opened the door to a more comprehensive evaluation of transposition models for various atmospheric conditions and solar and PV orientations.
Xie, Yu; Sengupta, Manajit; Dooraghi, Mike
2018-03-20
Development of accurate transposition models to simulate plane-of-array (POA) irradiance from horizontal measurements or simulations is a complex process mainly because of the anisotropic distribution of diffuse solar radiation in the atmosphere. The limited availability of reliable POA measurements at large temporal and spatial scales leads to difficulties in the comprehensive evaluation of transposition models. This paper proposes new algorithms to assess the uncertainty of transposition models using both surface-based observations and modeling tools. We reviewed the analytical derivation of POA irradiance and the approximation of isotropic diffuse radiation that simplifies the computation. Two transposition models are evaluated against the computation by the rigorous analytical solution. We proposed a new algorithm to evaluate transposition models using the clear-sky measurements at the National Renewable Energy Laboratory's (NREL's) Solar Radiation Research Laboratory (SRRL) and a radiative transfer model that integrates diffuse radiances of various sky-viewing angles. We found that the radiative transfer model and a transposition model based on empirical regressions are superior to the isotropic models when compared to measurements. We further compared the radiative transfer model to the transposition models under an extensive range of idealized conditions. Our results suggest that the empirical transposition model has slightly higher cloudy-sky POA irradiance than the radiative transfer model, but performs better than the isotropic models under clear-sky conditions. Significantly smaller POA irradiances computed by the transposition models are observed when the photovoltaics (PV) panel deviates from the azimuthal direction of the sun. The new algorithms developed in the current study have opened the door to a more comprehensive evaluation of transposition models for various atmospheric conditions and solar and PV orientations.
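For reference, the isotropic-sky transposition referred to above combines beam, sky-diffuse, and ground-reflected terms. The following is a minimal sketch of that classical (Liu-Jordan-type) isotropic model, which the paper compares against its radiative-transfer and empirical alternatives; the numerical inputs are arbitrary examples.

```python
import numpy as np

def poa_isotropic(dni, dhi, ghi, aoi_deg, tilt_deg, albedo=0.2):
    """Plane-of-array irradiance (W/m^2) under the isotropic sky-diffuse assumption."""
    aoi, tilt = np.radians(aoi_deg), np.radians(tilt_deg)
    beam = dni * np.maximum(np.cos(aoi), 0.0)              # direct beam projected onto the tilted plane
    sky = dhi * (1.0 + np.cos(tilt)) / 2.0                 # isotropic sky-diffuse term
    ground = ghi * albedo * (1.0 - np.cos(tilt)) / 2.0     # ground-reflected term
    return beam + sky + ground

print(poa_isotropic(dni=800.0, dhi=120.0, ghi=650.0, aoi_deg=30.0, tilt_deg=40.0))
```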
Novel Methods for Measuring LiDAR
NASA Astrophysics Data System (ADS)
Ayrey, E.; Hayes, D. J.; Fraver, S.; Weiskittel, A.; Cook, B.; Kershaw, J.
2017-12-01
The estimation of forest biometrics from airborne LiDAR data has become invaluable for quantifying forest carbon stocks, forest and wildlife ecology research, and sustainable forest management. The area-based approach is arguably the most common method for developing enhanced forest inventories from LiDAR. It involves taking a series of vertical height measurements of the point cloud, then using those measurements with field-measured data to develop predictive models. Unfortunately, there is considerable variation in methodology for collecting point cloud data, which can vary in pulse density, seasonality, canopy penetrability, and instrument specifications. Today there exists a wealth of public LiDAR data; however, the variation in acquisition parameters makes forest inventory prediction by traditional means unreliable across the different datasets. The goal of this project is to test a series of novel point cloud measurements developed along a conceptual spectrum of human interpretability, and then to use the best measurements to develop regional enhanced forest inventories on Northern New England's and Atlantic Canada's public LiDAR. Similarly to a field-based inventory, individual tree crowns are being segmented, and summary statistics are being used as covariates. Established competition and structural indices are being generated using each tree's relationship to one another, whilst existing allometric equations are being used to estimate diameter and biomass of each tree measured in the LiDAR. Novel metrics measuring light interception, clusteredness, and rugosity are also being measured as predictors. On the other end of the human interpretability spectrum, convolutional neural networks are being employed to directly measure both the canopy height model and the point clouds by scanning each using two- and three-dimensional kernels trained to identify features useful for predicting biological attributes such as biomass. Predictive models will be trained and tested against one another using 28 different sites and over 42 different LiDAR acquisitions. The optimal model will then be used to generate regional wall-to-wall forest inventories at a 10 m resolution.
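In an area-based workflow like the one described, the point cloud over each field plot is reduced to vertical-height summary statistics that are then regressed against field-measured attributes. A minimal, hypothetical sketch (the metrics, thresholds, and synthetic data below are illustrative assumptions, not the project's actual variables):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def height_metrics(z):
    """Vertical summary metrics for one plot's normalized return heights (m)."""
    return [z.mean(), z.std(),
            np.percentile(z, 25), np.percentile(z, 75), np.percentile(z, 95),
            float((z > 2.0).mean())]               # cover proxy: fraction of returns above 2 m

# Placeholder data standing in for per-plot point clouds and field-measured biomass.
rng = np.random.default_rng(1)
plots = [rng.gamma(shape=2.0, scale=5.0, size=5000) for _ in range(100)]
biomass_t_ha = np.array([z.mean() * 10 + rng.normal(0, 5) for z in plots])

X = np.array([height_metrics(z) for z in plots])
model = LinearRegression().fit(X, biomass_t_ha)    # area-based predictive model
print(model.score(X, biomass_t_ha))
```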
Sevenster, M; Buurman, J; Liu, P; Peters, J F; Chang, P J
2015-01-01
Accumulating quantitative outcome parameters may contribute to constructing a healthcare organization in which outcomes of clinical procedures are reproducible and predictable. In imaging studies, measurements are the principal category of quantitative parameters. The purpose of this work is to develop and evaluate two natural language processing engines that extract finding and organ measurements from narrative radiology reports and to categorize extracted measurements by their "temporality". The measurement extraction engine is developed as a set of regular expressions. The engine was evaluated against a manually created ground truth. Automated categorization of measurement temporality is defined as a machine learning problem. A ground truth was manually developed based on a corpus of radiology reports. A maximum entropy model was created using features that characterize the measurement itself and its narrative context. The model was evaluated in a ten-fold cross-validation protocol. The measurement extraction engine has precision 0.994 and recall 0.991. Accuracy of the measurement classification engine is 0.960. The work contributes to machine understanding of radiology reports and may find application in software applications that process medical data.
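The regular-expression approach described above might look roughly like the following; the pattern is an illustrative sketch and is far simpler than the authors' engine.

```python
import re

# Matches sizes such as "3.2 cm", "14 mm", and two- or three-dimensional forms like "2.1 x 1.8 cm".
MEASUREMENT = re.compile(
    r"(\d+(?:\.\d+)?)(?:\s*[x×]\s*(\d+(?:\.\d+)?)){0,2}\s*(mm|cm)",
    re.IGNORECASE,
)

report = "Stable 2.1 x 1.8 cm nodule in the right upper lobe; previously 14 mm."
for m in MEASUREMENT.finditer(report):
    print(m.group(0))   # each extracted measurement string
```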
System Identification Methods for Aircraft Flight Control Development and Validation
DOT National Transportation Integrated Search
1995-10-01
System-identification methods compose a mathematical model, or series of models, from measurements of inputs and outputs of dynamic systems. This paper discusses the use of frequency-domain system-identification methods for the development and ...
Development and initial validation of the Impression Motivation in Sport Questionnaire-Team.
Payne, Simon Mark; Hudson, Joanne; Akehurst, Sally; Ntoumanis, Nikos
2013-06-01
Impression motivation is an important individual difference variable that has been under-researched in sport psychology, partly due to having no appropriate measure. This study was conducted to design a measure of impression motivation in team-sport athletes. Construct validity checks decreased the initial pool of items, factor analysis (n = 310) revealed the structure of the newly developed scale, and exploratory structural equation modeling procedures (n = 406) resulted in a modified scale that retained theoretical integrity and psychometric parsimony. This process produced a 15-item, 4-factor model; the Impression Motivation in Sport Questionnaire-Team (IMSQ-T) is forwarded as a valid measure of the respondent's dispositional strength of motivation to use self-presentation in striving for four distinct interpersonal objectives: self-development, social identity development, avoidance of negative outcomes, and avoidance of damaging impressions. The availability of this measure has contributed to theoretical development, will facilitate research, and offers a tool for use in applied settings.
WRAP-RIB antenna technology development
NASA Technical Reports Server (NTRS)
Freeland, R. E.; Garcia, N. F.; Iwamoto, H.
1985-01-01
The wrap-rib deployable antenna concept development is based on a combination of hardware development and testing along with extensive supporting analysis. The proof-of-concept hardware models are large in size so they will address the same basic problems associated with the design fabrication, assembly and test as the full-scale systems which were selected to be 100 meters at the beginning of the program. The hardware evaluation program consists of functional performance tests, design verification tests and analytical model verification tests. Functional testing consists of kinematic deployment, mesh management and verification of mechanical packaging efficiencies. Design verification consists of rib contour precision measurement, rib cross-section variation evaluation, rib materials characterizations and manufacturing imperfections assessment. Analytical model verification and refinement include mesh stiffness measurement, rib static and dynamic testing, mass measurement, and rib cross-section characterization. This concept was considered for a number of potential applications that include mobile communications, VLBI, and aircraft surveillance. In fact, baseline system configurations were developed by JPL, using the appropriate wrap-rib antenna, for all three classes of applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, Jihyung; Jung, Jae Won, E-mail: jungj@ecu.edu; Kim, Jong Oh
2016-05-15
Purpose: To develop and evaluate a fast Monte Carlo (MC) dose calculation model of electronic portal imaging device (EPID) based on its effective atomic number modeling in the XVMC code. Methods: A previously developed EPID model, based on the XVMC code by density scaling of EPID structures, was modified by additionally considering effective atomic number (Z_eff) of each structure and adopting a phase space file from the EGSnrc code. The model was tested under various homogeneous and heterogeneous phantoms and field sizes by comparing the calculations in the model with measurements in EPID. In order to better evaluate the model, the performance of the XVMC code was separately tested by comparing calculated dose to water with ion chamber (IC) array measurement in the plane of EPID. Results: In the EPID plane, calculated dose to water by the code showed agreement with IC measurements within 1.8%. The difference was averaged across the in-field regions of the acquired profiles for all field sizes and phantoms. The maximum point difference was 2.8%, affected by proximity of the maximum points to penumbra and MC noise. The EPID model showed agreement with measured EPID images within 1.3%. The maximum point difference was 1.9%. The difference dropped from the higher value of the code by employing the calibration that is dependent on field sizes and thicknesses for the conversion of calculated images to measured images. Thanks to the Z_eff correction, the EPID model showed a linear trend of the calibration factors unlike those of the density-only-scaled model. The phase space file from the EGSnrc code sharpened penumbra profiles significantly, improving agreement of calculated profiles with measured profiles. Conclusions: Demonstrating high accuracy, the EPID model with the associated calibration system may be used for in vivo dosimetry of radiation therapy. Through this study, a MC model of EPID has been developed, and its performance has been rigorously investigated for transit dosimetry.
ERIC Educational Resources Information Center
Connelly, Edward A.; And Others
A new approach to deriving human performance measures and criteria for use in automatically evaluating trainee performance is documented in this report. The ultimate application of the research is to provide methods for automatically measuring pilot performance in a flight simulator or from recorded in-flight data. An efficient method of…
A Theory of the Measurement of Knowledge Content, Access, and Learning.
ERIC Educational Resources Information Center
Pirolli, Peter; Wilson, Mark
1998-01-01
An approach to the measurement of knowledge content, knowledge access, and knowledge learning is developed. First a theoretical view of cognition is described, and then a class of measurement models, based on Rasch modeling, is presented. Knowledge access and content are viewed as determining the observable actions selected by an agent to achieve…
NASA Technical Reports Server (NTRS)
Loos, Alfred C.; Macrae, John D.; Hammond, Vincent H.; Kranbuehl, David E.; Hart, Sean M.; Hasko, Gregory H.; Markus, Alan M.
1993-01-01
A two-dimensional model of the resin transfer molding (RTM) process was developed which can be used to simulate the infiltration of resin into an anisotropic fibrous preform. Frequency dependent electromagnetic sensing (FDEMS) has been developed for in situ monitoring of the RTM process. Flow visualization tests were performed to obtain data which can be used to verify the sensor measurements and the model predictions. Results of the tests showed that FDEMS can accurately detect the position of the resin flow-front during mold filling, and that the model predicted flow-front patterns agreed well with the measured flow-front patterns.
Validation of Storm Water Management Model Storm Control Measures Modules
NASA Astrophysics Data System (ADS)
Simon, M. A.; Platz, M. C.
2017-12-01
EPA's Storm Water Management Model (SWMM) is a computational code heavily relied upon by industry for the simulation of wastewater and stormwater infrastructure performance. Many municipalities are relying on SWMM results to design multi-billion-dollar, multi-decade infrastructure upgrades. Since the 1970's, EPA and others have developed five major releases, the most recent ones containing storm control measures modules for green infrastructure. The main objective of this study was to quantify the accuracy with which SWMM v5.1.10 simulates the hydrologic activity of previously monitored low impact developments. Model performance was evaluated with a mathematical comparison of outflow hydrographs and total outflow volumes, using empirical data and a multi-event, multi-objective calibration method. The calibration methodology utilized PEST++ Version 3, a parameter estimation tool, which aided in the selection of unmeasured hydrologic parameters. From the validation study and sensitivity analysis, several model improvements were identified to advance SWMM LID Module performance for permeable pavements, infiltration units and green roofs, and these were performed and reported herein. Overall, it was determined that SWMM can successfully simulate low impact development controls given accurate model confirmation, parameter measurement, and model calibration.
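One common way to quantify the hydrograph comparison described here is the Nash-Sutcliffe efficiency; the paper does not name its specific goodness-of-fit metric, so the sketch below is only an illustrative choice, and the example series are fabricated.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1.0 indicates a perfect fit."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Hypothetical 5-minute outflow series (L/s) from a monitored LID site and its SWMM simulation.
obs = np.array([0.0, 0.4, 1.2, 2.5, 1.8, 0.9, 0.3, 0.1])
sim = np.array([0.0, 0.3, 1.0, 2.8, 2.0, 0.8, 0.2, 0.1])
print(nash_sutcliffe(obs, sim))
```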
Six-axis orthodontic force and moment sensing system for dentist technique training.
Midorikawa, Yoshiyuki; Takemura, Hiroshi; Mizoguchi, Hiroshi; Soga, Kohei; Kamimura, Masao; Suga, Kazuhiro; Wei-Jen Lai; Kanno, Zuisei; Uo, Motohiro
2016-08-01
The purpose of this study is to develop a sensing system device that measures three-axis orthodontic forces and three-axis orthodontic moments for dentist training. The developed sensing system is composed of six-axis force sensors, action sticks, sliders, and tooth models. The developed system also simulates various types of tooth row shape patterns in orthodontic operations, and measures a 14 × 6 axis orthodontic force and moment from tooth models simultaneously. The average force and moment error per loaded axis were 2.06 % and 2.00 %, respectively.
Global and regional ecosystem modeling: comparison of model outputs and field measurements
NASA Astrophysics Data System (ADS)
Olson, R. J.; Hibbard, K.
2003-04-01
The Ecosystem Model-Data Intercomparison (EMDI) Workshops provide a venue for global ecosystem modeling groups to compare model outputs against measurements of net primary productivity (NPP). The objective of EMDI Workshops is to evaluate model performance relative to observations in order to improve confidence in global model projections of terrestrial carbon cycling. The questions addressed by EMDI include: How does the simulated NPP compare with the field data across biome and environmental gradients? How sensitive are models to site-specific climate? Does additional mechanistic detail in models result in a better match with field measurements? How useful are the measures of NPP for evaluating model predictions? How well do models represent regional patterns of NPP? Initial EMDI results showed general agreement between model predictions and field measurements but with obvious differences that indicated areas for potential data and model improvement. The effort was built on the development and compilation of complete and consistent databases for model initialization and comparison. Database development improves the data as well as models; however, there is a need to incorporate additional observations and model outputs (LAI, hydrology, etc.) for comprehensive analyses of biogeochemical processes and their relationships to ecosystem structure and function. EMDI initialization and NPP data sets are available from the Oak Ridge National Laboratory Distributed Active Archive Center http://www.daac.ornl.gov/. Acknowledgements: This work was partially supported by the International Geosphere-Biosphere Programme - Data and Information System (IGBP-DIS); the IGBP-Global Analysis, Interpretation and Modelling Task Force (GAIM); the National Center for Ecological Analysis and Synthesis (NCEAS); and the National Aeronautics and Space Administration (NASA) Terrestrial Ecosystem Program. Oak Ridge National Laboratory is managed by UT-Battelle LLC for the U.S. Department of Energy under contract DE-AC05-00OR22725
Development of a Measurement Instrument to Assess Students' Electrolyte Conceptual Understanding
ERIC Educational Resources Information Center
Lu, Shanshan; Bi, Hualin
2016-01-01
To assess students' conceptual understanding levels and diagnose alternative frameworks of the electrolyte concept, a measurement instrument was developed using the Rasch model. This paper reports the use of the measurement instrument to assess 559 students from grade 10 to grade 12 in two cities. The results provided both diagnostic and summative…
Acoustic evaluation of standing trees : recent research development
Xiping Wang; Robert J. Ross; Peter Carter
2005-01-01
This paper presents some research results from recent trial studies on measuring acoustic velocities on standing trees of five softwood species. The relationships between tree velocities measured by time of flight method and log velocities measured by resonance method were evaluated. Theoretical and empirical models were developed for adjusting observed tree velocity...
Model-Based Reasoning in Upper-division Lab Courses
NASA Astrophysics Data System (ADS)
Lewandowski, Heather
2015-05-01
Modeling, which includes developing, testing, and refining models, is a central activity in physics. Well-known examples from AMO physics include everything from the Bohr model of the hydrogen atom to the Bose-Hubbard model of interacting bosons in a lattice. Modeling, while typically considered a theoretical activity, is most fully represented in the laboratory where measurements of real phenomena intersect with theoretical models, leading to refinement of models and experimental apparatus. However, experimental physicists use models in complex ways and the process is often not made explicit in physics laboratory courses. We have developed a framework to describe the modeling process in physics laboratory activities. The framework attempts to abstract and simplify the complex modeling process undertaken by expert experimentalists. The framework can be applied to understand typical processes such as the modeling of measurement tools, the modeling of ``black boxes,'' and signal processing. We demonstrate that the framework captures several important features of model-based reasoning in a way that can reveal common student difficulties in the lab and guide the development of curricula that emphasize modeling in the laboratory. We also use the framework to examine troubleshooting in the lab and guide students to effective methods and strategies.
Multidirectional mobilities: Advanced measurement techniques and applications
NASA Astrophysics Data System (ADS)
Ivarsson, Lars Holger
Today, high noise-and-vibration comfort has become a mark of quality for products in sectors such as the automotive, aircraft, component, household, and manufacturing industries. Consequently, already in the design phase of products, tools are required to predict the final vibration and noise levels. These tools have to be applicable over a wide frequency range with sufficient accuracy. During recent decades a variety of tools have been developed, such as transfer path analysis (TPA), input force estimation, substructuring, coupling by frequency response functions (FRF) and hybrid modelling. While these methods have a well-developed theoretical basis, their application combined with experimental data often suffers from a lack of information concerning rotational DOFs. In order to measure response in all 6 DOFs (including rotation), a sensor has been developed, whose special features are discussed in the thesis. This transducer simplifies the response measurements, although in practice the excitation of moments appears to be more difficult. Several excitation techniques have been developed to enable measurement of multidirectional mobilities. For rapid and simple measurement of the loaded mobility matrix, a MIMO (Multiple Input Multiple Output) technique is used. The technique has been tested and validated on several structures of different complexity. A second technique for measuring the loaded 6-by-6 mobility matrix has been developed. This technique employs a model of the excitation set-up, and with this model the mobility matrix is determined from sequential measurements. Measurements on ``real'' structures show that both techniques give results of similar quality, and both are recommended for practical use. As a further step, a technique for measuring the unloaded mobilities is presented. It employs the measured loaded mobility matrix in order to calculate compensation forces and moments, which are later applied in order to compensate for the loading of the measurement equipment. The developed measurement techniques have been used in a hybrid coupling of a plate-and-beam structure to study different aspects of the coupling technique. Results show that RDOFs are crucial and have to be included in this case. The importance of stiffness residuals when mobilities are estimated from modal superposition is demonstrated. Finally, it is shown that proper curve fitting can correct errors from inconsistently measured data.
NASA Technical Reports Server (NTRS)
Myint, S. W.; Walker, N. D.
2002-01-01
The ability to quantify suspended sediment concentrations accurately over both time and space using satellite data has been a goal of many environmental researchers over the past few decades. This study utilizes data acquired by the NOAA Advanced Very High Resolution Radiometer (AVHRR) and the Orbview-2 Sea-viewing Wide Field-of-view (SeaWiFS) ocean colour sensor, coupled with field measurements, to develop statistical models for the estimation of near-surface suspended sediment and suspended solids. "Ground truth" water samples were obtained via helicopter, small boat, and automatic water sampler within a few hours of satellite overpasses. The NOAA AVHRR atmospheric correction was modified for the high levels of turbidity along the Louisiana coast. Models were developed based on the field measurements and reflectance/radiance measurements in the visible and near-infrared channels of NOAA-14 and Orbview-2 SeaWiFS. The best models for predicting surface suspended sediment concentrations were obtained with a NOAA AVHRR Channel 1 (580-680 nm) cubic model, a Channel 2 (725-1100 nm) linear model, and a SeaWiFS Channel 6 (660-680 nm) power model. The suspended sediment models developed using SeaWiFS Channel 5 (545-565 nm) were inferior, a result that we attribute mainly to the atmospheric correction technique, the shallow depth of the water samples, and absorption effects from non-sediment water constituents.
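A rough sketch of how the three model forms mentioned above (linear, cubic, and power) might be fit to ground-truth suspended sediment concentrations against a single channel's reflectance; the data are synthetic and this is not the authors' code or calibration.

```python
# Illustrative sketch (not the authors' code) of fitting the three model forms
# described above -- cubic, linear, and power -- to ground-truth suspended
# sediment concentrations versus single-channel reflectance. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
reflectance = np.linspace(0.01, 0.15, 40)                        # channel reflectance
ssc = 2500.0 * reflectance ** 1.3 * rng.lognormal(0.0, 0.1, 40)  # mg/L, synthetic

# Linear and cubic fits (e.g., the AVHRR Channel 2 / Channel 1 style models)
lin = np.polyfit(reflectance, ssc, 1)
cub = np.polyfit(reflectance, ssc, 3)

# Power-law fit (e.g., the SeaWiFS Channel 6 style model): log-log linear regression
b, log_a = np.polyfit(np.log(reflectance), np.log(ssc), 1)
a = np.exp(log_a)

print("linear coefficients:", lin)
print("cubic coefficients: ", cub)
print("power model: SSC = %.1f * R^%.2f" % (a, b))
```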
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kropka, Jamie Michael; Stavig, Mark E.; Arechederra, Gabe Kenneth
Develop an understanding of the evolution of glassy polymer mechanical response during aging and the mechanisms associated with that evolution. That understanding will be used to develop constitutive models to assess the impact of stress evolution in encapsulants on NW designs.
Non-isothermal processes during the drying of bare soil: Model Development and Validation
NASA Astrophysics Data System (ADS)
Sleep, B.; Talebi, A.; O'Carrol, D. M.
2017-12-01
Several coupled liquid water, water vapor, and heat transfer models have been developed either to study non-isothermal processes in the subsurface immediately below the ground surface, or to predict the evaporative flux from the ground surface. Equilibrium phase change between water and gas phases is typically assumed in these models. Recently, a few studies have questioned this assumption and proposed a coupled model considering kinetic phase change. However, none of these models were validated against real field data. In this study, a non-isothermal coupled model incorporating kinetic phase change was developed and examined against the measured data from a green roof test module. The model also incorporated a new surface boundary condition for water vapor transport at the ground surface. The measured field data included soil moisture content and temperature at different depths up to the depth of 15 cm below the ground surface. Lysimeter data were collected to determine the evaporation rates. Short- and long-wave radiation, wind velocity, ambient air temperature, and relative humidity were measured and used as model input. Field data were collected for a period of three months during the warm seasons in southeastern Canada. The model was calibrated using one drying period and then several other drying periods were simulated. In general, the model underestimated the evaporation rates in the early stage of the drying period; however, the cumulative evaporation was in good agreement with the field data. The model predicted the trends in temperature and moisture content at the different depths in the green roof module. The simulated temperature was lower than the measured temperature for most of the simulation time with a maximum difference of 5 °C. The simulated moisture content changes had the same temporal trend as the lysimeter data for the events simulated.
Tolerance of uncertainty: Conceptual analysis, integrative model, and implications for healthcare.
Hillen, Marij A; Gutheil, Caitlin M; Strout, Tania D; Smets, Ellen M A; Han, Paul K J
2017-05-01
Uncertainty tolerance (UT) is an important, well-studied phenomenon in health care and many other important domains of life, yet its conceptualization and measurement by researchers in various disciplines have varied substantially and its essential nature remains unclear. The objectives of this study were to: 1) analyze the meaning and logical coherence of UT as conceptualized by developers of UT measures, and 2) develop an integrative conceptual model to guide future empirical research regarding the nature, causes, and effects of UT. A narrative review and conceptual analysis of 18 existing measures of Uncertainty and Ambiguity Tolerance was conducted, focusing on how measure developers in various fields have defined both the "uncertainty" and "tolerance" components of UT-both explicitly through their writings and implicitly through the items constituting their measures. Both explicit and implicit conceptual definitions of uncertainty and tolerance vary substantially and are often poorly and inconsistently specified. A logically coherent, unified understanding or theoretical model of UT is lacking. To address these gaps, we propose a new integrative definition and multidimensional conceptual model that construes UT as the set of negative and positive psychological responses-cognitive, emotional, and behavioral-provoked by the conscious awareness of ignorance about particular aspects of the world. This model synthesizes insights from various disciplines and provides an organizing framework for future research. We discuss how this model can facilitate further empirical and theoretical research to better measure and understand the nature, determinants, and outcomes of UT in health care and other domains of life. Uncertainty tolerance is an important and complex phenomenon requiring more precise and consistent definition. An integrative definition and conceptual model, intended as a tentative and flexible point of departure for future research, adds needed breadth, specificity, and precision to efforts to conceptualize and measure UT. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Hein, Misty J.; Waters, Martha A.; Ruder, Avima M.; Stenzel, Mark R.; Blair, Aaron; Stewart, Patricia A.
2010-01-01
Objectives: Occupational exposure assessment for population-based case–control studies is challenging due to the wide variety of industries and occupations encountered by study participants. We developed and evaluated statistical models to estimate the intensity of exposure to three chlorinated solvents—methylene chloride, 1,1,1-trichloroethane, and trichloroethylene—using a database of air measurement data and associated exposure determinants. Methods: A measurement database was developed after an extensive review of the published industrial hygiene literature. The database of nearly 3000 measurements or summary measurements included sample size, measurement characteristics (year, duration, and type), and several potential exposure determinants associated with the measurements: mechanism of release (e.g. evaporation), process condition, temperature, usage rate, type of ventilation, location, presence of a confined space, and proximity to the source. The natural log-transformed measurement levels in the exposure database were modeled as a function of the measurement characteristics and exposure determinants using maximum likelihood methods. Assuming a single lognormal distribution of the measurements, an arithmetic mean exposure intensity level was estimated for each unique combination of exposure determinants and decade. Results: The proportions of variability in the measurement data explained by the modeled measurement characteristics and exposure determinants were 36, 38, and 54% for methylene chloride, 1,1,1-trichloroethane, and trichloroethylene, respectively. Model parameter estimates for the exposure determinants were in the anticipated direction. Exposure intensity estimates were plausible and exhibited internal consistency, but the ability to evaluate validity was limited. Conclusions: These prediction models can be used to estimate chlorinated solvent exposure intensity for jobs reported by population-based case–control study participants that have sufficiently detailed information regarding the exposure determinants. PMID:20418277
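A simplified sketch of the general modeling idea described above: regress log-transformed concentrations on exposure determinants and convert the fitted lognormal parameters to an arithmetic-mean intensity via AM = exp(mu + sigma^2/2). The determinants, coefficients, and data below are invented for illustration, and ordinary least squares stands in for the paper's maximum likelihood treatment of summary measurements.

```python
# Hypothetical sketch: regress ln(concentration) on exposure determinants, then
# convert the fitted lognormal parameters to an arithmetic mean intensity,
# AM = exp(mu + sigma^2 / 2). Synthetic data only; OLS stands in for the
# maximum likelihood approach used with summary measurements.
import numpy as np

rng = np.random.default_rng(1)
n = 200
local_exhaust = rng.integers(0, 2, n)      # 1 = local exhaust ventilation present
confined_space = rng.integers(0, 2, n)     # 1 = confined space
decade = rng.integers(0, 4, n)             # 0 = 1960s ... 3 = 1990s (coded)

# Synthetic "measurements": ln(ppm) as a linear function of determinants + noise
log_c = 2.0 - 0.8 * local_exhaust + 1.2 * confined_space - 0.3 * decade \
        + rng.normal(0.0, 0.7, n)

X = np.column_stack([np.ones(n), local_exhaust, confined_space, decade])
beta, *_ = np.linalg.lstsq(X, log_c, rcond=None)
resid = log_c - X @ beta
sigma2 = resid.var(ddof=X.shape[1])

# Arithmetic-mean exposure intensity for one determinant combination
x_new = np.array([1.0, 0.0, 1.0, 2.0])     # no exhaust, confined space, 1980s
am = np.exp(x_new @ beta + 0.5 * sigma2)
print(f"estimated arithmetic mean intensity: {am:.2f} ppm")
```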
Bio-Optical Measurement and Modeling of the California Current and Southern Oceans
NASA Technical Reports Server (NTRS)
Mitchell, B. Gregg; Mitchell, B. Greg
2003-01-01
The SIMBIOS project's principal goals are to validate standard or experimental ocean color products through detailed bio-optical and biogeochemical measurements, and to combine ocean optical observations with modeling to contribute to satellite vicarious radiometric calibration and algorithm development.
ERIC Educational Resources Information Center
Nickerson, Carol A.; McClelland, Gary H.
1988-01-01
A methodology is developed based on axiomatic conjoint measurement to accompany a fertility decision-making model. The usefulness of the model is then demonstrated via an application to a study of contraceptive choice (N=100 male and female family-planning clinic clients). Finally, the validity of the model is evaluated. (TJH)
Ozone measurement systems improvements studies
NASA Technical Reports Server (NTRS)
Thomas, R. W.; Guard, K.; Holland, A. C.; Spurling, J. F.
1974-01-01
Results are summarized of an initial study of techniques for measuring atmospheric ozone, carried out as the first phase of a program to improve ozone measurement techniques. The study concentrated on two measurement systems, the electrochemical cell (ECC) ozonesonde and the Dobson ozone spectrophotometer, and consisted of two tasks. The first task consisted of error modeling and system error analysis of the two measurement systems. Under the second task, a Monte Carlo model of the Dobson ozone measurement technique was developed and programmed for computer operation.
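A minimal Monte Carlo error-propagation sketch in the spirit of the second task: perturb the inputs of a simple two-wavelength ozone retrieval with assumed error distributions and examine the spread of the retrieved column. The retrieval expression and the uncertainty values are illustrative stand-ins, not the actual Dobson algorithm or its error budget.

```python
# Minimal Monte Carlo error-propagation sketch for a simple two-wavelength
# ozone retrieval. Nominal values and uncertainties are illustrative only.
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000

# Hypothetical nominal values for a two-wavelength ratio measurement
n_nominal = 0.90          # measured log intensity ratio N
mu = 2.0                  # relative optical air mass
alpha = 1.8               # differential ozone absorption coefficient, (atm-cm)^-1

# Assumed (illustrative) 1-sigma measurement/parameter uncertainties
n_samples = rng.normal(n_nominal, 0.01, n_trials)
mu_samples = rng.normal(mu, 0.02, n_trials)
alpha_samples = rng.normal(alpha, 0.03, n_trials)

ozone = n_samples / (alpha_samples * mu_samples)   # retrieved column (atm-cm)
print(f"ozone = {ozone.mean():.4f} +/- {ozone.std():.4f} atm-cm")
```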
High Frequency Acoustic Reflection and Transmission in Ocean Sediments
2003-09-30
Development of a physical model of high-frequency acoustic interaction with the ocean floor, including penetration through and reflection from smooth and...experiments and additional laboratory measurements in the ARL:UT sand tank, an improved model of sediment acoustics will be developed that is...distinct areas of concentration: development of a broadband theoretical model to describe the acoustic interaction with the ocean floor in littoral
Ocean Raman Scattering in Satellite Backscatter UV Measurements
NASA Technical Reports Server (NTRS)
Vasilkov, Alexander P.; Joiner, Joanna; Gleason, James; Bhartia, Pawan; Bhartia, P. K. (Technical Monitor)
2002-01-01
Ocean Raman scattering significantly contributes to the filling-in of solar Fraunhofer lines measured by satellite backscatter ultraviolet (BUV) instruments in the cloudless atmosphere over clear ocean waters. A model accounting for this effect in BUV measurements is developed and compared with observations from the Global Ozone Monitoring Experiment (GOME). The model extends existing models for ocean Raman scattering to the UV spectral range. Ocean Raman scattering radiance is propagated through the atmosphere using a concept of the Lambert equivalent reflectivity and an accurate radiative transfer model for Rayleigh scattering. The model and observations can be used to evaluate laboratory measurements of pure water absorption in the UV. The good agreement between model and observations suggests that BUV instruments may be useful for estimating chlorophyll content.
Measurements of electrostatic double layer potentials with atomic force microscopy
NASA Astrophysics Data System (ADS)
Giamberardino, Jason
The aim of this thesis is to provide a thorough description of the development of theory and experiment pertaining to the electrostatic double layer (EDL) in aqueous electrolytic systems. The EDL is an important physical element of many systems and its behavior has been of interest to scientists for many decades. Because many areas of science and engineering move to test, build, and understand systems at smaller and smaller scales, this work focuses on nanoscopic experimental investigations of the EDL. In that vein, atomic force microscopy (AFM) will be introduced and discussed as a tool for making high spatial resolution measurements of the solid-liquid interface, culminating in a description of the development of a method for completely characterizing the EDL. This thesis first explores, in a semi-historical fashion, the development of the various models and theories that are used to describe the electrostatic double layer. Later, various experimental techniques and ideas are addressed as ways to make measurements of interesting characteristics of the EDL. Finally, a newly developed approach to measuring the EDL system with AFM is introduced. This approach relies on both implementation of existing theoretical models with slight modifications as well as a unique experimental measurement scheme. The model proposed clears up previous ambiguities in definitions of various parameters pertaining to measurements of the EDL and also can be used to fully characterize the system in a way not yet demonstrated.
NASA Astrophysics Data System (ADS)
Franck, Charmaine C.; Lee, Dave; Espinola, Richard L.; Murrill, Steven R.; Jacobs, Eddie L.; Griffin, Steve T.; Petkie, Douglas T.; Reynolds, Joe
2007-04-01
This paper describes the design and performance of the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate's (NVESD) active 0.640-THz imaging testbed, developed in support of the Defense Advanced Research Projects Agency's (DARPA) Terahertz Imaging Focal-Plane Technology (TIFT) program. The laboratory measurements and standoff images were acquired during the development of an NVESD and Army Research Laboratory terahertz imaging performance model. The imaging testbed is based on a 12-inch-diameter Off-Axis Elliptical (OAE) mirror designed with one focal length at 1 m and the other at 10 m. This paper will describe the design considerations of the OAE-mirror, dual-capability, active imaging testbed, as well as measurement/imaging results used to further develop the model.
Modeling of nonequilibrium CO Fourth-Positive and CN Violet emission in CO2-N2 gases
NASA Astrophysics Data System (ADS)
Johnston, C. O.; Brandis, A. M.
2014-12-01
This work develops a chemical kinetic rate model for simulating nonequilibrium radiation from CO2-N2 gases, representative of Mars or Venus entry shock layers. Using recent EAST shock tube measurements of nonequilibrium CO 4th Positive and CN Violet emission at pressures and velocities ranging from 0.10 to 1.0 Torr and 6 to 8 km/s, the rate model is developed through an optimization procedure that minimizes the disagreement between the measured and simulated nonequilibrium radiance profiles. Only the dissociation rates of CO2, CO, and NO, along with the CN + O and CO + N rates, were treated as unknown in this optimization procedure, as the nonequilibrium radiance was found to be most sensitive to them. The other rates were set to recent values from the literature. Increases of over a factor of 5 in the CO dissociation rate relative to the previously widely used value were found to provide the best agreement with measurements, while the CO2 rate was not changed. The developed model is found to capture the measured nonequilibrium radiance of CO 4th Positive and CN Violet within error bars of ±30%.
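The optimization idea can be sketched as follows: treat a few rate multipliers as free parameters and minimize the mismatch between simulated and measured nonequilibrium radiance profiles. The forward model below is a toy placeholder rather than a shock-layer radiation solver, and the rate names and values are hypothetical.

```python
# Toy sketch of the calibration idea: treat selected rate multipliers as free
# parameters and minimize the mismatch between simulated and "measured"
# nonequilibrium radiance profiles. The forward model is a placeholder, not an
# actual shock-layer radiation solver.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 5.0, 100)                       # distance behind shock (cm)

def simulated_radiance(rate_multipliers, t):
    k_diss, k_exchange = rate_multipliers
    # Placeholder: rise and decay controlled by the two rate multipliers
    return np.exp(-k_exchange * t) * (1.0 - np.exp(-k_diss * t))

# Synthetic "measurement" generated with known multipliers plus noise
rng = np.random.default_rng(0)
measured = simulated_radiance((5.0, 0.8), t) + rng.normal(0.0, 0.01, t.size)

def misfit(p):
    return np.sum((simulated_radiance(p, t) - measured) ** 2)

result = minimize(misfit, x0=[1.0, 1.0], method="Nelder-Mead")
print("recovered rate multipliers:", result.x)
```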
Cardiovascular oscillations: in search of a nonlinear parametric model
NASA Astrophysics Data System (ADS)
Bandrivskyy, Andriy; Luchinsky, Dmitry; McClintock, Peter V.; Smelyanskiy, Vadim; Stefanovska, Aneta; Timucin, Dogan
2003-05-01
We suggest a fresh approach to the modeling of the human cardiovascular system. Taking advantage of a new Bayesian inference technique, able to deal with stochastic nonlinear systems, we show that one can estimate parameters for models of the cardiovascular system directly from measured time series. We present preliminary results of inference of parameters of a model of coupled oscillators from measured cardiovascular data addressing cardiorespiratory interaction. We argue that the inference technique offers a very promising modeling tool, able to contribute significantly towards the solution of a long-standing challenge: the development of new diagnostic techniques based on noninvasive measurements.
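A heavily simplified sketch of the inference idea: estimate a coupling parameter of a stochastic phase-oscillator model directly from a simulated time series using a grid posterior over the drift term. The one-way-coupled model, noise level, and frequencies are illustrative stand-ins for the full cardiorespiratory model and Bayesian technique discussed above.

```python
# Much-simplified sketch of inferring a coupling parameter of a stochastic
# phase-oscillator model from a time series via a grid posterior. The model
# and all parameter values are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(3)
dt, n = 0.01, 5000
omega_c, omega_r, eps_true, noise = 2 * np.pi * 1.1, 2 * np.pi * 0.25, 0.6, 0.3

# Simulate a "cardiac" phase driven by a "respiratory" phase
phi_r = omega_r * dt * np.arange(n)
phi_c = np.zeros(n)
for k in range(n - 1):
    drift = omega_c + eps_true * np.sin(phi_r[k] - phi_c[k])
    phi_c[k + 1] = phi_c[k] + drift * dt + noise * np.sqrt(dt) * rng.normal()

# Gaussian log-likelihood of the observed increments on a grid of coupling values
d_phi = np.diff(phi_c)
eps_grid = np.linspace(0.0, 1.5, 301)
log_post = np.array([
    -0.5 * np.sum((d_phi - (omega_c + e * np.sin(phi_r[:-1] - phi_c[:-1])) * dt) ** 2)
    / (noise ** 2 * dt)
    for e in eps_grid
])
print("posterior-mode coupling strength:", eps_grid[np.argmax(log_post)])
```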
Surface temperatures and glassy state investigations in tribology
NASA Technical Reports Server (NTRS)
Bair, S.; Winer, W. O.
1979-01-01
The limiting shear stress shear rheological model was applied to property measurements pursuant to the use of the constitutive equation and its application to elastohydrodynamic (EHD) traction. Experimental techniques were developed to subject materials to isothermal compression, which is similar to the history the materials are subjected to in EHD contacts. In addition, an apparatus was developed for measuring the shear stress-strain behavior of solid lubricating materials. Four commercially available materials were examined under pressure. They exhibit elastic and limiting shear stress behavior similar to that of liquid lubricants. The application of the limiting shear stress model to traction predictions was extended employing the primary materials properties measured in the laboratory. The shear rheological model was also applied to a Grubin-like EHD inlet analysis for predicting film thicknesses when employing the limiting shear stress model material behavior.
New technique for oil backstreaming contamination measurements
NASA Technical Reports Server (NTRS)
Alterovitz, S. A.; Speier, H. J.; Sieg, R. M.; Drotos, M. N.; Dunning, J. E.
1992-01-01
The backstreaming contamination in the Space Power Facility, Ohio, was measured using small size clean silicon wafers as contamination sensors placed at all measurement sites. Two ellipsometric models were developed to measure the oil film with the contamination film refractive index of DC 705: a continuous, homogeneous film and islands of oil with the islands varying in coverage fraction and height. The island model improved the ellipsometric analysis quality parameter by up to two orders of magnitude. The continuous film model overestimated the oil volume by about 50 percent.
Fischer-Tropsch Slurry Reactor modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soong, Y.; Gamwo, I.K.; Harke, F.W.
1995-12-31
This paper reports experimental and theoretical results on hydrodynamic studies. The experiments were conducted in a hot-pressurized Slurry-Bubble Column Reactor (SBCR). It includes experimental results of a Drakeol-10 oil/nitrogen/glass beads hydrodynamic study and the development of an ultrasonic technique for measuring solids concentration. A model to describe the flow behavior in reactors was developed. The hydrodynamic properties in a 10.16 cm diameter bubble column with a perforated-plate gas distributor were studied at pressures ranging from 0.1 to 1.36 MPa, and at temperatures from 20 to 200 °C, using a dual hot-wire probe with nitrogen, glass beads, and Drakeol-10 oil as the gas, solid, and liquid phase, respectively. It was found that the addition of 20 oil wt% glass beads in the system has a slight effect on the average gas holdup and bubble size. A well-posed three-dimensional model for bed dynamics was developed from an ill-posed model. The new model has computed solid holdup distributions consistent with experimental observations with no artificial "fountain" as predicted by the earlier model. The model can be applied to a variety of multiphase flows of practical interest. An ultrasonic technique is being developed to measure solids concentration in a three-phase slurry reactor. Preliminary measurements have been made on slurries consisting of molten paraffin wax, glass beads, and nitrogen bubbles at 180 °C and 0.1 MPa. The data show that both the sound speed and attenuation are well-defined functions of both the solid and gas concentrations in the slurries. The results suggest possibilities to directly measure solids concentration during the operation of an autoclave reactor containing molten wax.
Evaluation of a locally homogeneous flow model of spray combustion
NASA Technical Reports Server (NTRS)
Mao, C. P.; Szekely, G. A., Jr.; Faeth, G. M.
1980-01-01
A model of spray combustion which employs a second-order turbulence model was developed. The assumption of locally homogeneous flow is made, implying infinitely fast transport rates between the phases. Measurements to test the model were completed for a gaseous n-propane flame and an air atomized n-pentane spray flame, burning in stagnant air at atmospheric pressure. Profiles of mean velocity and temperature, as well as velocity fluctuations and Reynolds stress, were measured in the flames. The predictions for the gas flame were in excellent agreement with the measurements. The predictions for the spray were qualitatively correct, but effects of finite rate interphase transport were evident, resulting in an overestimation of the rate of development of the flow. Predictions of spray penetration length at high pressures, including supercritical combustion conditions, were also completed for comparison with earlier measurements. Test conditions involved a pressure atomized n-pentane spray, burning in stagnant air at pressures of 3, 5, and 9 MPa. The comparison between predictions and measurements was fair. This is not a very sensitive test of the model, however, and further high pressure experimental and theoretical results are needed before a satisfactory assessment of the locally homogeneous flow approximation can be made.
A simple-source model of military jet aircraft noise
NASA Astrophysics Data System (ADS)
Morgan, Jessica; Gee, Kent L.; Neilsen, Tracianne; Wall, Alan T.
2010-10-01
The jet plumes produced by military jet aircraft radiate significant amounts of noise. A need to better understand the characteristics of the turbulence-induced aeroacoustic sources has motivated the present study. The purpose of the study is to develop a simple-source model of jet noise that can be compared to the measured data. The study is based on acoustic data collected near a tied-down F-22 Raptor. The simplest model consisted of adjusting the origin of a monopole above a rigid planar reflector until the locations of the predicted and measured interference nulls matched. The model has since developed into an extended Rayleigh distribution of partially correlated monopoles, which fits the measured data from the F-22 significantly better. The results and basis for the model match the current prevailing theory that jet noise consists of both correlated and uncorrelated sources. In addition, this simple-source model conforms to the theory that the peak source location moves upstream with increasing frequency and lower engine conditions.
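The simplest version of the model described above (a single monopole over a rigid plane, with direct and image paths interfering at a microphone) can be sketched as follows; the geometry and frequency range are arbitrary, and the snippet only locates interference nulls rather than implementing the authors' extended Rayleigh-distribution model.

```python
# Sketch of the simplest model described above: a single monopole above a
# rigid ground plane, observed at a microphone where the direct and image
# (reflected) paths interfere. Geometry values are arbitrary.
import numpy as np

c = 343.0                                  # speed of sound, m/s
src = np.array([0.0, 0.0, 2.0])            # source 2 m above the rigid plane z = 0
mic = np.array([10.0, 0.0, 1.5])           # microphone position
img = src * np.array([1.0, 1.0, -1.0])     # image source below the plane

r_direct = np.linalg.norm(mic - src)
r_reflect = np.linalg.norm(mic - img)

freqs = np.linspace(50.0, 2000.0, 2000)
k = 2 * np.pi * freqs / c
# Complex pressure: direct path plus fully reflected image path (unit amplitude)
p = np.exp(-1j * k * r_direct) / r_direct + np.exp(-1j * k * r_reflect) / r_reflect
spl_rel = 20 * np.log10(np.abs(p) / np.abs(p).max())

# Local minima more than 15 dB below the maximum mark the interference nulls
is_min = (spl_rel[1:-1] < spl_rel[:-2]) & (spl_rel[1:-1] < spl_rel[2:])
null_freqs = freqs[1:-1][is_min & (spl_rel[1:-1] < -15.0)]
print("approximate interference-null frequencies (Hz):", np.round(null_freqs, 1))
```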
Temperature Measurement and Numerical Prediction in Machining Inconel 718.
Díaz-Álvarez, José; Tapetado, Alberto; Vázquez, Carmen; Miguélez, Henar
2017-06-30
Thermal issues are critical when machining Ni-based superalloy components designed for high temperature applications. The low thermal conductivity and extreme strain hardening of this family of materials result in elevated temperatures around the cutting area. This elevated temperature could lead to machining-induced damage such as phase changes and residual stresses, resulting in reduced service life of the component. Measurement of temperature during machining is crucial in order to control the cutting process, avoiding workpiece damage. On the other hand, the development of predictive tools based on numerical models helps in the definition of machining processes and the estimation of difficult-to-measure parameters such as the penetration of the heated layer. However, the validation of numerical models strongly depends on the accurate measurement of physical parameters such as temperature, ensuring the calibration of the model. This paper focuses on the measurement and prediction of temperature during the machining of Ni-based superalloys. The temperature sensor was based on a fiber-optic two-color pyrometer developed for localized temperature measurements in turning of Inconel 718. The sensor is capable of measuring temperature in the range of 250 to 1200 °C. Temperature evolution is recorded in a lathe at different feed rates and cutting speeds. Measurements were used to calibrate a simplified numerical model for prediction of temperature fields during turning.
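The basic two-color (ratio) pyrometry relation that such a sensor exploits can be illustrated with the Wien approximation and a greybody assumption, under which emissivity cancels in the ratio of two spectral radiances; the wavelengths and temperature below are assumed values, not the sensor's actual calibration.

```python
# Sketch of the basic two-color (ratio) pyrometry relation: in the Wien
# approximation and for a greybody, emissivity cancels in the ratio of two
# spectral radiances and temperature follows from that ratio alone.
# Wavelengths and the trial temperature are illustrative assumptions.
import numpy as np

C2 = 1.4388e-2                     # second radiation constant, m*K
lam1, lam2 = 1.30e-6, 1.55e-6      # two detection wavelengths (m), assumed

def wien_radiance(lam, T):
    """Spectral radiance in the Wien approximation (constant factor omitted)."""
    return lam ** -5 * np.exp(-C2 / (lam * T))

def temperature_from_ratio(ratio):
    """Invert the two-color ratio R = L(lam1, T) / L(lam2, T) for temperature."""
    return -C2 * (1.0 / lam1 - 1.0 / lam2) / np.log(ratio * (lam1 / lam2) ** 5)

T_true = 900.0 + 273.15            # 900 degC cutting-zone temperature (assumed)
ratio = wien_radiance(lam1, T_true) / wien_radiance(lam2, T_true)
print(f"recovered temperature: {temperature_from_ratio(ratio) - 273.15:.1f} degC")
```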
NASA Astrophysics Data System (ADS)
Dutton, Andrew William
1993-12-01
A combined numerical and experimental system for tissue heat transfer analysis was developed. The goal was to develop an integrated set of tools for studying the problem of providing accurate temperature estimation for use in hyperthermia treatment planning in a clinical environment. The completed system combines (1) Magnetic Resonance Angiography (MRA) to non-destructively measure the velocity field in situ, (2) the Streamwise Upwind Petrov-Galerkin finite element solution to the 3D steady state convective energy equation (CEE), (3) a medical image based automatic 3D mesh generator, and (4) a Gaussian type estimator to determine unknown thermal model parameters such as thermal conductivity, blood perfusion, and blood velocities from measured temperature data. The system was capable of using any combination of three thermal models: (1) the Convective Energy Equation (CEE), (2) the Bioheat Transfer Equation (BHTE), and (3) the Effective Thermal Conductivity Equation (ETCE). Incorporation of the theoretically correct CEE was a significant theoretical advance over approximate models, made possible by the use of MRA to directly measure the 3D velocity field in situ. Experiments were carried out in a perfused alcohol fixed canine liver with hyperthermia induced through scanned focused ultrasound. Velocity fields were measured using Phase Contrast Angiography. The complete system was then used to (1) develop a 3D finite element model based upon user traced outlines over a series of MR images of the liver and (2) simulate temperatures at steady state using the CEE, BHTE, and ETCE thermal models in conjunction with the Gaussian estimator. Results of using the system on an in vitro liver preparation indicate the need for improved accuracy in the MRA scans and accurate spatial registration between the thermocouple junctions, the measured velocity field, and the scanned ultrasound power. No individual thermal model was able to meet the desired accuracy of 0.5 deg C, the resolution desired for prognostic evaluation of a treatment. However, the CEE model did produce the expected asymmetric results while the BHTE and ETCE, used in their simplest forms of homogeneous properties, produced symmetric results. Experimental measurements tended to show marked asymmetries, which suggests further development of the CEE thermal model to be the most promising.
A comprehensive constitutive law for waxy crude oil: a thixotropic yield stress fluid.
Dimitriou, Christopher J; McKinley, Gareth H
2014-09-21
Guided by a series of discriminating rheometric tests, we develop a new constitutive model that can quantitatively predict the key rheological features of waxy crude oils. We first develop a series of model crude oils, which are characterized by a complex thixotropic and yielding behavior that strongly depends on the shear history of the sample. We then outline the development of an appropriate preparation protocol for carrying out rheological measurements, to ensure consistent and reproducible initial conditions. We use RheoPIV measurements of the local kinematics within the fluid under imposed deformations in order to validate the selection of a particular protocol. Velocimetric measurements are also used to document the presence of material instabilities within the model crude oil under conditions of imposed steady shearing. These instabilities are a result of the underlying non-monotonic steady flow curve of the material. Three distinct deformation histories are then used to probe the material's constitutive response. These deformations are steady shear, transient response to startup of steady shear with different aging times, and large amplitude oscillatory shear (LAOS). The material response to these three different flows is used to motivate the development of an appropriate constitutive model. This model (termed the IKH model) is based on a framework adopted from plasticity theory and implements an additive strain decomposition into characteristic reversible (elastic) and irreversible (plastic) contributions, coupled with the physical processes of isotropic and kinematic hardening. Comparisons of experimental to simulated response for all three flows show good quantitative agreement, validating the chosen approach for developing constitutive models for this class of materials.
Patel, Shyamal; McGinnis, Ryan S; Silva, Ikaro; DiCristofaro, Steve; Mahadevan, Nikhil; Jortberg, Elise; Franco, Jaime; Martin, Albert; Lust, Joseph; Raj, Milan; McGrane, Bryan; DePetrillo, Paolo; Aranyosi, A J; Ceruolo, Melissa; Pindado, Jesus; Ghaffari, Roozbeh
2016-08-01
Wearable sensors have the potential to enable clinical-grade ambulatory health monitoring outside the clinic. Technological advances have enabled development of devices that can measure vital signs with great precision and significant progress has been made towards extracting clinically meaningful information from these devices in research studies. However, translating measurement accuracies achieved in the controlled settings such as the lab and clinic to unconstrained environments such as the home remains a challenge. In this paper, we present a novel wearable computing platform for unobtrusive collection of labeled datasets and a new paradigm for continuous development, deployment and evaluation of machine learning models to ensure robust model performance as we transition from the lab to home. Using this system, we train activity classification models across two studies and track changes in model performance as we go from constrained to unconstrained settings.
To maximize the value of toxicological data in development of human health risk assessment models of inhaled elongated mineral particles, improvements in human dosimetry modeling are needed. In order to extend the dosimetry model of deposited fibers (Asgharian et al., Johnson 201...
Measurement model as a means for studying the process of emotion origination
NASA Astrophysics Data System (ADS)
Taymanov, R.; Baksheeva, Iu; Sapozhnikova, K.; Chunovkina, A.
2016-11-01
In the last edition of the International Vocabulary of Metrology the concept of “measurement” was extended beyond the field of physical quantities. This fact makes it relevant to analyze the experience of developing the models of multidimensional quantity measurements. The model of measurements of expected emotions caused by musical and other acoustic impacts is considered. The model relies upon a hypothesis of a nonlinear conversion of acoustic signals to a neurophysiological reaction giving rise to emotion. Methods for checking this hypothesis as well as experimental results are given.
Chromatic Image Analysis For Quantitative Thermal Mapping
NASA Technical Reports Server (NTRS)
Buck, Gregory M.
1995-01-01
Chromatic image analysis system (CIAS) developed for use in noncontact measurements of temperatures on aerothermodynamic models in hypersonic wind tunnels. Based on concept of temperature coupled to shift in color spectrum for optical measurement. Video camera images fluorescence emitted by phosphor-coated model at two wavelengths. Temperature map of model then computed from relative brightnesses in video images of model at those wavelengths. Eliminates need for intrusive, time-consuming, contact temperature measurements by gauges, making it possible to map temperatures on complex surfaces in timely manner and at reduced cost.
NASA Technical Reports Server (NTRS)
1992-01-01
The objectives, status, and accomplishments of the research tasks supported under the NASA Upper Atmosphere Research Program (UARP) are presented. The topics covered include the following: balloon-borne in situ measurements; balloon-borne remote measurements; ground-based measurements; aircraft-borne measurements; rocket-borne measurements; instrument development; reaction kinetics and photochemistry; spectroscopy; stratospheric dynamics and related analysis; stratospheric chemistry, analysis, and related modeling; and global chemical modeling.
Measurement-based reliability prediction methodology. M.S. Thesis
NASA Technical Reports Server (NTRS)
Linn, Linda Shen
1991-01-01
In the past, analytical and measurement based models were developed to characterize computer system behavior. An open issue is how these models can be used, if at all, for system design improvement. The issue is addressed here. A combined statistical/analytical approach to use measurements from one environment to model the system failure behavior in a new environment is proposed. A comparison of the predicted results with the actual data from the new environment shows a close correspondence.
NASA Technical Reports Server (NTRS)
Hope, Allen S.
1993-01-01
The broad goal of the research summarized in this report was 'To facilitate the evaluation of regional evapotranspiration (ET) through the combined use of solar reflective and thermal infrared radiance observations.' The specific objectives stated by Goward and Hope (1986) were to: (1) investigate the nature of the relationship between surface temperature (T(sub S)) and the normalized difference vegetation index (NDVI) and develop an understanding of this relationship in terms of energy exchange processes, particularly latent heat flux (LE); (2) develop procedures to estimate large area LE using combined T(sub S) and NDVI observations obtained from AVHRR data; and (3) determine whether measurements derived from satellite observations relate directly to measurements made at the surface or from aircraft platforms. Both empirical and modeling studies were used to develop an understanding of the T(sub S)-NDVI relationship. Most of the modeling was based on the Tergra model as originally proposed by Goward. This model, and modified versions developed in this project, simulates the flows of water and energy in the soil-plant-atmosphere system using meteorological, soil and vegetation inputs. Model outputs are the diurnal course of soil moisture, T(sub S), LE and the other individual components of the surface energy balance.
Perkins, Kim S.
2008-01-01
Sediments are believed to comprise as much as 50 percent of the Snake River Plain aquifer thickness in some locations within the Idaho National Laboratory. However, the hydraulic properties of these deep sediments have not been well characterized and they are not represented explicitly in the current conceptual model of subregional scale ground-water flow. The purpose of this study is to evaluate the nature of the sedimentary material within the aquifer and to test the applicability of a site-specific property-transfer model developed for the sedimentary interbeds of the unsaturated zone. Saturated hydraulic conductivity (Ksat) was measured for 10 core samples from sedimentary interbeds within the Snake River Plain aquifer and also estimated using the property-transfer model. The property-transfer model for predicting Ksat was previously developed using a multiple linear-regression technique with bulk physical-property measurements (bulk density [pbulk], the median particle diameter, and the uniformity coefficient) as the explanatory variables. The model systematically underestimates Ksat, typically by about a factor of 10, which likely is due to higher bulk-density values for the aquifer samples compared to the samples from the unsaturated zone upon which the model was developed. Linear relations between the logarithm of Ksat and pbulk also were explored for comparison.
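An illustrative sketch of a property-transfer model of the stated form: log10(Ksat) regressed on bulk density, median particle diameter, and uniformity coefficient. The coefficients are fit to synthetic data (this is not the published regression), and the final lines mirror the factor-of-10 comparison made in the study.

```python
# Illustrative property-transfer regression: log10(Ksat) on bulk density,
# median particle diameter, and uniformity coefficient. Synthetic data only;
# not the published model coefficients.
import numpy as np

rng = np.random.default_rng(7)
n = 60
rho_bulk = rng.uniform(1.2, 1.9, n)        # bulk density, g/cm^3
d50 = rng.uniform(0.05, 0.5, n)            # median particle diameter, mm
cu = rng.uniform(2.0, 15.0, n)             # uniformity coefficient

# Synthetic "measured" log10(Ksat) in cm/s
log_ksat = -1.0 - 2.0 * rho_bulk + 1.5 * np.log10(d50) - 0.02 * cu \
           + rng.normal(0.0, 0.2, n)

X = np.column_stack([np.ones(n), rho_bulk, np.log10(d50), cu])
beta, *_ = np.linalg.lstsq(X, log_ksat, rcond=None)

# Predict Ksat for a new (denser) aquifer sample and compare with a "measured" value
x_new = np.array([1.0, 1.85, np.log10(0.2), 6.0])
ksat_pred = 10 ** (x_new @ beta)
ksat_meas = 10 * ksat_pred                  # e.g., model underestimates by ~10x
print(f"predicted/measured Ksat ratio: {ksat_pred / ksat_meas:.2f}")
```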
Evaluating digital libraries in the health sector. Part 1: measuring inputs and outputs.
Cullen, Rowena
2003-12-01
This is the first part of a two-part paper which explores methods that can be used to evaluate digital libraries in the health sector. In this first part, some approaches to evaluation that have been proposed for mainstream digital information services are examined for their suitability to provide models for the health sector. The paper summarizes some major national and collaborative initiatives to develop measures for digital libraries, and analyses these approaches in terms of their relationship to traditional measures of library performance, which are focused on inputs and outputs, and their relevance to current debates among health information specialists. The second part* looks more specifically at evaluative models based on outcomes, and models being developed in the health sector.
The development of a short domain-general measure of working memory capacity.
Oswald, Frederick L; McAbee, Samuel T; Redick, Thomas S; Hambrick, David Z
2015-12-01
Working memory capacity is one of the most frequently measured individual difference constructs in cognitive psychology and related fields. However, implementation of complex span and other working memory measures is generally time-consuming for administrators and examinees alike. Because researchers often must manage the tension between limited testing time and measuring numerous constructs reliably, a short and effective measure of working memory capacity would often be a major practical benefit in future research efforts. The current study developed a shortened computerized domain-general measure of working memory capacity by representatively sampling items from three existing complex working memory span tasks: operation span, reading span, and symmetry span. Using a large archival data set (Study 1, N = 4,845), we developed and applied a principled strategy for developing the reduced measure, based on testing a series of confirmatory factor analysis models. Adequate fit indices from these models lent support to this strategy. The resulting shortened measure was then administered to a second independent sample (Study 2, N = 172), demonstrating that the new measure saves roughly 15 min (30%) of testing time on average, and even up to 25 min depending on the test-taker. On the basis of these initial promising findings, several directions for future research are discussed.
ERIC Educational Resources Information Center
Vezeau, Susan Lynn; Powell, Robert B.; Stern, Marc J.; Moore, D. DeWayne; Wright, Brett A.
2017-01-01
This investigation examines the development of two scales that measure elaboration and behaviors associated with stewardship in children. The scales were developed using confirmatory factor analysis to investigate their construct validity, reliability, and psychometric properties. Results suggest that a second-order factor model structure provides…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ashjaee, M.; Roomina, M.R.; Ghafouri-Azar, R.
1993-05-01
Two computational methods for calculating hourly, daily, and monthly average values of direct, diffuse, and global solar radiation on horizontal collectors have been presented in this article for locations with different latitude, altitude, and atmospheric conditions in Iran. These methods were developed using two different independent sets of measured data from the Iranian Meteorological Organization (IMO) for two cities in Iran (Tehran and Isfahan) during 14 years of measurement for Tehran and 4 years of measurement for Isfahan. Comparison of calculated monthly average global solar radiation, using the two models for Tehran and Isfahan, with measured data from the IMO has indicated good agreement between them. These developed methods were then extended to another location (the city of Bandar-Abbas), where measured data are not available but where the work of Daneshyar predicts the monthly global radiation. A maximum discrepancy of 7% between the developed models and the work of Daneshyar was observed.
A Model for the Education of Gifted Learners in Lebanon
ERIC Educational Resources Information Center
Sarouphim, Ketty M.
2010-01-01
The purpose of this paper is to present a model for developing a comprehensive system of education for gifted learners in Lebanon. The model consists of three phases and includes key elements for establishing gifted education in the country, such as raising community awareness, adopting valid identification measures, and developing effective…
A survival model for individual shortleaf pine trees in even-aged natural stands
Thomas B. Lynch; Michael M. Huebschmann; Paul A. Murphy
2000-01-01
A model was developed that predicts the probability of survival for individual shortleaf pine (Pinus echinata Mill.) trees growing in even-aged natural stands. Data for model development were obtained from the first two measurements of permanently established plots located in naturally occurring shortleaf pine forests on the Ouachita and...
A Measure for Evaluating the Effectiveness of Teen Pregnancy Prevention Programs.
ERIC Educational Resources Information Center
Somers, Cheryl L.; Johnson, Stephanie A.; Sawilowksy, Shlomo S.
2002-01-01
The Teen Attitude Pregnancy Scale (TAPS) was developed to measure teen attitudes and intentions regarding teenage pregnancy. The model demonstrated good internal consistency and concurrent validity for the samples in this study. Analysis revealed evidence of validity for this model. (JDM)
F-106 data summary and model results relative to threat criteria and protection design analysis
NASA Technical Reports Server (NTRS)
Pitts, F. L.; Finelli, G. B.; Perala, R. A.; Rudolph, T. H.
1986-01-01
The NASA F-106 has acquired considerable data on the rates-of-change of electromagnetic parameters on the aircraft surface during 690 direct lightning strikes while penetrating thunderstorms at altitudes ranging from 15,000 to 40,000 feet. These in-situ measurements have provided the basis for the first statistical quantification of the lightning electromagnetic threat to aircraft appropriate for determining lightning indirect effects on aircraft. The data are presently being used in updating previous lightning criteria and standards developed over the years from ground-based measurements. The new lightning standards will, therefore, be the first which reflect actual aircraft responses measured at flight altitudes. The modeling technique developed to interpret and understand the direct strike electromagnetic data acquired on the F-106 provides a means to model the interaction of the lightning channel with the F-106. The reasonable results obtained with the model, compared to measured responses, yield confidence that the model may be credibly applied to other aircraft types and uses in the prediction of internal coupling effects in the design of lightning protection for new aircraft.
NASA Astrophysics Data System (ADS)
Medina, Tait Runnfeldt
The increasing global reach of survey research provides sociologists with new opportunities to pursue theory building and refinement through comparative analysis. However, comparison across a broad array of diverse contexts introduces methodological complexities related to the development of constructs (i.e., measurement modeling) that if not adequately recognized and properly addressed undermine the quality of research findings and cast doubt on the validity of substantive conclusions. The motivation for this dissertation arises from a concern that the availability of cross-national survey data has outpaced sociologists' ability to appropriately analyze and draw meaningful conclusions from such data. I examine the implicit assumptions and detail the limitations of three commonly used measurement models in cross-national analysis---summative scale, pooled factor model, and multiple-group factor model with measurement invariance. Using the orienting lens of the double tension I argue that a new approach to measurement modeling that incorporates important cross-national differences into the measurement process is needed. Two such measurement models---multiple-group factor model with partial measurement invariance (Byrne, Shavelson and Muthen 1989) and the alignment method (Asparouhov and Muthen 2014; Muthen and Asparouhov 2014)---are discussed in detail and illustrated using a sociologically relevant substantive example. I demonstrate that the former approach is vulnerable to an identification problem that arbitrarily impacts substantive conclusions. I conclude that the alignment method is built on model assumptions that are consistent with theoretical understandings of cross-national comparability and provides an approach to measurement modeling and construct development that is uniquely suited for cross-national research. The dissertation makes three major contributions: First, it provides theoretical justification for a new cross-national measurement model and explicates a link between theoretical conceptions of cross-national comparability and a statistical method. Second, it provides a clear and detailed discussion of model identification in multiple-group confirmatory factor analysis that is missing from the literature. This discussion sets the stage for the introduction of the identification problem within multiple-group confirmatory factor analysis with partial measurement invariance and the alternative approach to model identification employed by the alignment method. Third, it offers the first pedagogical presentation of the alignment method using a sociologically relevant example.
Signal Recovery and System Calibration from Multiple Compressive Poisson Measurements
Wang, Liming; Huang, Jiaji; Yuan, Xin; ...
2015-09-17
The measurement matrix employed in compressive sensing typically cannot be known precisely a priori and must be estimated via calibration. One may take multiple compressive measurements, from which the measurement matrix and underlying signals may be estimated jointly. This is of interest as well when the measurement matrix may change as a function of the details of what is measured. This problem has been considered recently for Gaussian measurement noise, and here we develop this idea with application to Poisson systems. A collaborative maximum likelihood algorithm and alternating proximal gradient algorithm are proposed, and associated theoretical performance guarantees are established based on newly derived concentration-of-measure results. A Bayesian model is then introduced, to improve flexibility and generality. Connections between the maximum likelihood methods and the Bayesian model are developed, and example results are presented for a real compressive X-ray imaging system.
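A compact sketch of the underlying Poisson inverse problem: recover a nonnegative signal x from counts y ~ Poisson(Ax + b). For simplicity it uses the classical ML-EM (Richardson-Lucy) multiplicative update with a known sensing matrix, rather than the paper's collaborative maximum-likelihood, proximal-gradient, or Bayesian calibration algorithms.

```python
# Poisson inverse-problem sketch: recover nonnegative x from y ~ Poisson(Ax + b)
# using the classical ML-EM (Richardson-Lucy) update. A is assumed known here
# (no joint calibration), unlike the setting of the paper.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 120, 5                       # signal length, measurements, sparsity
A = rng.uniform(0.0, 1.0, (m, n))           # nonnegative sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(2.0, 5.0, k)
b = 0.1                                     # known background rate
y = rng.poisson(A @ x_true + b)

x = np.ones(n)                              # strictly positive initialization
sensitivity = A.sum(axis=0)                 # A^T 1
for _ in range(500):
    x *= (A.T @ (y / (A @ x + b))) / sensitivity

# Note: in the compressive regime (m < n) accurate recovery would additionally
# require a sparsity prior, which this plain ML-EM iteration omits.
err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```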
Prediction models for transfer of arsenic from soil to corn grain (Zea mays L.).
Yang, Hua; Li, Zhaojun; Long, Jian; Liang, Yongchao; Xue, Jianming; Davis, Murray; He, Wenxiang
2016-04-01
In this study, the transfer of arsenic (As) from soil to corn grain was investigated in 18 soils collected from throughout China. The soils were treated with three concentrations of As, and the transfer characteristics were investigated in the corn grain cultivar Zhengdan 958 in a greenhouse experiment. Through stepwise multiple-linear regression analysis, prediction models were developed combining the As bioconcentration factor (BCF) of Zhengdan 958 with soil pH, organic matter (OM) content, and cation exchange capacity (CEC). The possibility of applying the Zhengdan 958 model to other cultivars was tested through a cross-cultivar extrapolation approach. The results showed that the As concentration in corn grain was positively correlated with soil pH. When the prediction model was applied to non-model cultivars, the ratios between predicted and measured BCF values fell within a twofold interval and were close to a 1:1 relationship. It was also found that the prediction model (Log [BCF] = 0.064 pH - 2.297) could effectively reduce the measured BCF variability for all non-model corn cultivars. This is the first such prediction model developed for As concentration in crop grain from soil, and it will be useful for understanding As risk in the soil environment.
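A small worked example applying the reported regression Log[BCF] = 0.064 pH - 2.297, assuming a base-10 logarithm; the soil pH, soil As concentration, and the "measured" grain value are illustrative, chosen only to show the twofold-interval check.

```python
# Worked example using the reported regression Log[BCF] = 0.064*pH - 2.297,
# assuming a base-10 logarithm. Soil As and the "measured" grain value are
# illustrative, not data from the study.

def predict_bcf(soil_ph):
    return 10 ** (0.064 * soil_ph - 2.297)

soil_ph = 7.5
soil_as = 20.0                     # mg/kg As in soil (illustrative)
bcf = predict_bcf(soil_ph)
grain_as_pred = bcf * soil_as      # predicted As in corn grain, mg/kg

grain_as_meas = 0.25               # hypothetical measured grain concentration
ratio = grain_as_pred / grain_as_meas
print(f"BCF = {bcf:.4f}, predicted grain As = {grain_as_pred:.3f} mg/kg, "
      f"pred/meas ratio = {ratio:.2f} (within twofold if 0.5 <= ratio <= 2)")
```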
Process modelling for materials preparation experiments
NASA Technical Reports Server (NTRS)
Rosenberger, Franz; Alexander, J. Iwan D.
1992-01-01
The development of mathematical tools and the measurement of transport properties necessary for high-fidelity modeling of crystal growth from the melt and solution are examined, in particular for the Bridgman-Stockbarger growth of mercury cadmium telluride (MCT) and the solution growth of triglycine sulphate (TGS). The tasks include development of a spectral code for moving boundary problems, kinematic viscosity measurements on liquid MCT at temperatures close to the melting point, and diffusivity measurements on concentrated and supersaturated TGS solutions. A detailed description is given of the work performed for these tasks, together with a summary of the resulting publications and presentations.
Dynamic electrical response of solar cells
NASA Technical Reports Server (NTRS)
Catani, J. P.
1981-01-01
The dynamic response of a solar generator is of primary importance as much for the design and development of electrical power conditioning hardware as for the analysis of electromagnetic compatibility. A mathematical model of photo-batteries (solar cells) was developed on the basis of impedance measurements performed under differing conditions of temperature and light intensity, both before and after irradiation. This model was compared with that derived from PN junction theory and with static measurements. These dynamic measurements enabled the refinement of an integration method capable of determining, under normal laboratory conditions, the dynamic response of a generator to operational lighting conditions.
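A minimal small-signal sketch of the kind of lumped dynamic model such impedance measurements support: a series resistance in series with the junction's dynamic resistance and capacitance in parallel, Z(w) = Rs + Rd/(1 + j w Rd C). The component values are assumed, not taken from the measurements.

```python
# Lumped small-signal impedance sketch for a solar cell:
# Z(w) = Rs + Rd / (1 + j*w*Rd*C). Component values are illustrative assumptions.
import numpy as np

Rs, Rd, C = 0.05, 2.0, 1.0e-6      # ohms, ohms, farads (assumed values)

def cell_impedance(freq_hz):
    w = 2 * np.pi * freq_hz
    return Rs + Rd / (1.0 + 1j * w * Rd * C)

for f in (1e2, 1e4, 1e6):
    z = cell_impedance(f)
    print(f"f = {f:8.0f} Hz  |Z| = {abs(z):6.3f} ohm  "
          f"phase = {np.angle(z, deg=True):6.1f} deg")
```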
Comparison of SeaWiFS measurements of the Moon with the U.S. Geological Survey lunar model.
Barnes, Robert A; Eplee, Robert E; Patt, Frederick S; Kieffer, Hugh H; Stone, Thomas C; Meister, Gerhard; Butler, James J; McClain, Charles R
2004-11-01
The Sea-Viewing Wide-Field-of-View Sensor (SeaWiFS) has made monthly observations of the Moon since 1997. Using 66 monthly measurements, the SeaWiFS calibration team has developed a correction for the instrument's on-orbit response changes. Concurrently, a lunar irradiance model has been developed by the U.S. Geological Survey (USGS) from extensive Earth-based observations of the Moon. The lunar irradiances measured by SeaWiFS are compared with the USGS model. The comparison shows essentially identical response histories for SeaWiFS, with differences from the model of less than 0.05% per thousand days in the long-term trends. From the SeaWiFS experience we have learned that it is important to view the entire lunar image at a constant phase angle from measurement to measurement and to understand, as best as possible, the size of each lunar image. However, a constant phase angle is not required for using the USGS model. With a long-term satellite lunar data set it is possible to determine instrument changes at a quality level approximating that from the USGS lunar model. However, early in a mission, when the dependence on factors such as phase and libration cannot be adequately determined from satellite measurements alone, the USGS model is critical to an understanding of trends in instruments that use the Moon for calibration. This is the case for SeaWiFS.
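The kind of trend comparison described above can be sketched by fitting a linear drift to the ratio of sensor-measured lunar irradiance to a reference model and expressing the slope in percent per thousand days; the 66-point time series below is synthetic.

```python
# Sketch of the trend comparison: fit a linear drift to the ratio of measured
# lunar irradiance to a reference model and express the slope in percent per
# thousand days. The time series is synthetic.
import numpy as np

rng = np.random.default_rng(5)
days = np.arange(0, 66) * 30.0                     # ~monthly lunar views
true_drift = -0.02 / 1000.0                        # -2% per thousand days
ratio = 1.0 + true_drift * days + rng.normal(0.0, 0.002, days.size)

slope, intercept = np.polyfit(days, ratio, 1)
print(f"fitted response drift: {slope * 1000 * 100:.3f} % per thousand days")
```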
NASA Astrophysics Data System (ADS)
Arai, Yukiko; Aoki, Hitoshi; Abe, Fumitaka; Todoroki, Shunichiro; Khatami, Ramin; Kazumi, Masaki; Totsuka, Takuya; Wang, Taifeng; Kobayashi, Haruo
2015-04-01
1/f noise is one of the most important characteristics for designing analog/RF circuits including operational amplifiers and oscillators. We have analyzed and developed a novel 1/f noise model in the strong inversion, saturation, and sub-threshold regions based on the SPICE2-type model used in the public metal-oxide-semiconductor field-effect transistor (MOSFET) models developed by the University of California, Berkeley. Our model contains two noise generation mechanisms: mobility fluctuation and interface trap number fluctuation. Noise variability dependent on gate voltage is also newly implemented in our model. The proposed model has been implemented in the BSIM4 model of a SPICE3 compatible circuit simulator. Parameters of the proposed model are extracted with 1/f noise measurements for simulation verification. The simulation results show excellent agreement between measurements and simulations.
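For reference, a sketch of the SPICE2-type flicker-noise expression that such models build on, S_id(f) = KF * Id^AF / (Cox * Leff^2 * f^EF); the exact normalization varies between simulators, and the parameter values here are placeholders rather than extracted values from the paper.

```python
# Sketch of the SPICE2-type flicker-noise expression,
# S_id(f) = KF * Id^AF / (Cox * Leff^2 * f^EF).
# Parameter values are placeholders; normalization conventions vary by simulator.
import numpy as np

KF, AF, EF = 1.0e-25, 1.0, 1.0        # flicker noise coefficient and exponents
cox = 8.6e-3                          # gate oxide capacitance per area, F/m^2
leff = 0.18e-6                        # effective channel length, m
i_d = 100e-6                          # drain current, A

def s_id(freq_hz):
    """Drain-current noise PSD (A^2/Hz) in the SPICE2-type 1/f model."""
    return KF * i_d ** AF / (cox * leff ** 2 * freq_hz ** EF)

for f in (10.0, 1e3, 1e5):
    print(f"f = {f:8.0f} Hz  S_id = {s_id(f):.3e} A^2/Hz")
```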
Computational modeling of the obstructive lung diseases asthma and COPD
2014-01-01
Asthma and chronic obstructive pulmonary disease (COPD) are characterized by airway obstruction and airflow limitation and pose a huge burden to society. These obstructive lung diseases impact the lung physiology across multiple biological scales. Environmental stimuli are introduced via inhalation at the organ scale, and consequently impact upon the tissue, cellular and sub-cellular scale by triggering signaling pathways. These changes are propagated upwards to the organ level again and vice versa. In order to understand the pathophysiology behind these diseases, we need to integrate and understand changes occurring across these scales, and this is the driving force for multiscale computational modeling. There is an urgent need for improved diagnosis and assessment of obstructive lung diseases. Standard clinical measures are based on global function tests which ignore the highly heterogeneous regional changes that are characteristic of obstructive lung disease pathophysiology. Advances in scanning technology such as hyperpolarized gas MRI have led to new regional measurements of ventilation, perfusion and gas diffusion in the lungs, while new image processing techniques allow these measures to be combined with information from structural imaging such as Computed Tomography (CT). However, it is not yet known how to derive clinical measures for obstructive diseases from this wealth of new data. Computational modeling offers a powerful approach for investigating this relationship between imaging measurements and disease severity, and understanding the effects of different disease subtypes, which is key to developing improved diagnostic methods. Gaining an understanding of a system as complex as the respiratory system is difficult if not impossible via experimental methods alone. Computational models offer a complementary method to unravel the structure-function relationships occurring within a multiscale, multiphysics system such as this. Here we review the current state-of-the-art in techniques developed for pulmonary image analysis, development of structural models of the respiratory system and predictions of function within these models. We discuss application of modeling techniques to obstructive lung diseases, namely asthma and emphysema, and the use of models to predict response to therapy. Finally, we introduce a large European project, AirPROM, that is developing multiscale models to investigate structure-function relationships in asthma and COPD. PMID:25471125
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, John R.; Brooks, Dusty Marie
In pressurized water reactors, the prevention, detection, and repair of cracks within dissimilar metal welds is essential to ensure proper plant functionality and safety. Weld residual stresses, which are difficult to model and cannot be directly measured, contribute to the formation and growth of cracks due to primary water stress corrosion cracking. Additionally, the uncertainty in weld residual stress measurements and modeling predictions is not well understood, further complicating the prediction of crack evolution. The purpose of this document is to develop methodology to quantify the uncertainty associated with weld residual stress that can be applied to modeling predictions and experimental measurements. Ultimately, the results can be used to assess the current state of uncertainty and to build confidence in both modeling and experimental procedures. The methodology consists of statistically modeling the variation in the weld residual stress profiles using functional data analysis techniques. Uncertainty is quantified using statistical bounds (e.g. confidence and tolerance bounds) constructed with a semi-parametric bootstrap procedure. Such bounds describe the range in which quantities of interest, such as means, are expected to lie as evidenced by the data. The methodology is extended to provide direct comparisons between experimental measurements and modeling predictions by constructing statistical confidence bounds for the average difference between the two quantities. The statistical bounds on the average difference can be used to assess the level of agreement between measurements and predictions. The methodology is applied to experimental measurements of residual stress obtained using two strain relief measurement methods and predictions from seven finite element models developed by different organizations during a round robin study.
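The semi-parametric bootstrap bounds described above are specific to the report; as a rough illustration of the general idea, the sketch below (Python, synthetic data, hypothetical profile values) computes nonparametric pointwise bootstrap confidence bounds on the mean difference between predicted and measured residual stress profiles.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_mean_bounds(profiles, n_boot=2000, alpha=0.05):
    """Pointwise bootstrap confidence bounds for the mean of a set of
    stress-difference profiles (rows = models/specimens, columns = depths)."""
    n = profiles.shape[0]
    boot_means = np.empty((n_boot, profiles.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)        # resample profiles with replacement
        boot_means[b] = profiles[idx].mean(axis=0)
    lower = np.percentile(boot_means, 100 * alpha / 2, axis=0)
    upper = np.percentile(boot_means, 100 * (1 - alpha / 2), axis=0)
    return profiles.mean(axis=0), lower, upper

# Synthetic example: 7 model predictions minus the measurement average,
# evaluated at 50 normalized weld depths (hypothetical numbers).
depth = np.linspace(0, 1, 50)
differences = rng.normal(0, 40, size=(7, 50)) + 30 * np.sin(2 * np.pi * depth)
mean_diff, lo, hi = bootstrap_mean_bounds(differences)
# Where the pointwise band [lo, hi] excludes zero, measurements and
# predictions disagree on average at that depth.
```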
Samuel V. Glass; Charles R. Boardman; Samuel L. Zelinka
2017-01-01
Recently, the dynamic vapor sorption (DVS) technique has been used to measure sorption isotherms and develop moisture-mechanics models for wood and cellulosic materials. This method typically involves measuring the time-dependent mass response of a sample following step changes in relative humidity (RH), fitting a kinetic model to the data, and extrapolating the...
ERIC Educational Resources Information Center
Torres, Sergio; Powers, Judith L.
2009-01-01
In the exciting, "out of this world" activity described here, students measure the Earth using meter sticks while measuring their shadows in two distant locations. To obtain the size of the Earth, students discover the connection between the measurements of the shadows and a model of the spherical Earth following the method developed by…
The Challenge and Opportunity of Parental Involvement in Juvenile Justice Services.
Burke, Jeffrey D; Mulvey, Edward P; Schubert, Carol A; Garbin, Sara R
2014-04-01
The active involvement of parents - whether as recipients, extenders, or managers of services - during their youth's experience with the juvenile justice system is widely assumed to be crucial. Parents and family advocacy groups note persisting concerns with the degree to which successful parental involvement is achieved. Justice system providers are highly motivated and actively working to make improvements. These coalescing interests provide a strong motivation for innovation and improvement regarding family involvement, but the likely success of these efforts is severely limited by the absence of any detailed definition of parental involvement or validated measure of this construct. Determining whether and how parental involvement works in juvenile justice services depends on the development of clear models and sound measurement. Efforts in other child serving systems offer guidance to achieve this goal. A multidimensional working model developed with parents involved in child protective services is presented as a template for developing a model for parental involvement in juvenile justice. Features of the model requiring changes to make it more adaptable to juvenile justice are identified. A systematic research agenda for developing methods and measures to meet the present demands for enhanced parental involvement in juvenile justice services is presented.
The Challenge and Opportunity of Parental Involvement in Juvenile Justice Services
Burke, Jeffrey D.; Mulvey, Edward P.; Schubert, Carol A.; Garbin, Sara R.
2014-01-01
The active involvement of parents – whether as recipients, extenders, or managers of services - during their youth’s experience with the juvenile justice system is widely assumed to be crucial. Parents and family advocacy groups note persisting concerns with the degree to which successful parental involvement is achieved. Justice system providers are highly motivated and actively working to make improvements. These coalescing interests provide a strong motivation for innovation and improvement regarding family involvement, but the likely success of these efforts is severely limited by the absence of any detailed definition of parental involvement or validated measure of this construct. Determining whether and how parental involvement works in juvenile justice services depends on the development of clear models and sound measurement. Efforts in other child serving systems offer guidance to achieve this goal. A multidimensional working model developed with parents involved in child protective services is presented as a template for developing a model for parental involvement in juvenile justice. Features of the model requiring changes to make it more adaptable to juvenile justice are identified. A systematic research agenda for developing methods and measures to meet the present demands for enhanced parental involvement in juvenile justice services is presented. PMID:24748704
ECO-DRIVING MODELING ENVIRONMENT
DOT National Transportation Integrated Search
2015-11-01
This research project aims to examine the eco-driving modeling capabilities of different traffic modeling tools available and to develop a driver-simulator-based eco-driving modeling tool to evaluate driver behavior and to reliably estimate or measur...
A continuous-wave ultrasound system for displacement amplitude and phase measurement.
Finneran, James J; Hastings, Mardi C
2004-06-01
A noninvasive, continuous-wave ultrasonic technique was developed to measure the displacement amplitude and phase of mechanical structures. The measurement system was based on a method developed by Rogers and Hastings ["Noninvasive vibration measurement system and method for measuring amplitude of vibration of tissue in an object being investigated," U.S. Patent No. 4,819,643 (1989)] and expanded to include phase measurement. A low-frequency sound source was used to generate harmonic vibrations in a target of interest. The target was simultaneously insonified by a low-power, continuous-wave ultrasonic source. Reflected ultrasound was phase modulated by the target motion and detected with a separate ultrasonic transducer. The target displacement amplitude was obtained directly from the received ultrasound frequency spectrum by comparing the carrier and sideband amplitudes. Phase information was obtained by demodulating the received signal using a double-balanced mixer and low-pass filter. A theoretical model for the ultrasonic receiver field is also presented. This model coupled existing models for focused piston radiators and for pulse-echo ultrasonic fields. Experimental measurements of the resulting receiver fields compared favorably with theoretical predictions.
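For context, the standard phase-modulation relation behind this kind of measurement is that the echo carrier scales as J0(beta) and the first sideband as J1(beta), where beta is the round-trip modulation index 4*pi*d/lambda. The sketch below is a minimal illustration of inverting that ratio for displacement amplitude; it assumes the simple single-reflection case and uses hypothetical numbers, not the cited system's actual processing.

```python
import numpy as np
from scipy.special import jv
from scipy.optimize import brentq

def displacement_from_spectrum(sideband_amp, carrier_amp, f_ultrasound, c=1500.0):
    """Infer target displacement amplitude from a phase-modulated CW
    ultrasound echo spectrum: carrier ~ J0(beta), first sideband ~ J1(beta),
    with beta = 4*pi*d/lambda for the round trip."""
    ratio = sideband_amp / carrier_amp
    # Invert J1(beta)/J0(beta) = ratio below the first zero of J0
    beta = brentq(lambda b: jv(1, b) / jv(0, b) - ratio, 1e-9, 2.3)
    wavelength = c / f_ultrasound
    return beta * wavelength / (4 * np.pi)

# Example (hypothetical values): 1 MHz ultrasound in water, sideband 20 dB below carrier
d = displacement_from_spectrum(sideband_amp=0.1, carrier_amp=1.0, f_ultrasound=1.0e6)
print(f"displacement amplitude ~ {d * 1e6:.1f} micrometres")
```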
NASA Astrophysics Data System (ADS)
Abiri, Olufunminiyi; Twala, Bhekisipho
2017-08-01
In this paper, a constitutive model based on a multilayer feedforward neural network with Bayesian regularization is developed for alloy 316L during high strain rate and high temperature plastic deformation. The input variables are strain rate, temperature and strain, while the output value is the flow stress of the material. The results show that the use of the Bayesian regularization technique reduces the potential for overfitting and overtraining. The prediction quality of the model is thereby improved. The model predictions are in good agreement with experimental measurements. The measurement data used for the network training and model comparison were taken from relevant literature. The developed model is robust as it can be generalized to deformation conditions slightly below or above the training dataset.
NASA Astrophysics Data System (ADS)
Ni, C.; Huang, Y.; Lu, C.
2012-12-01
Pumping-induced land subsidence events are typically found in coastal aquifers in Taiwan, especially in the areas of the lower alluvial fans. Previous investigations have recognized that aquifer deformation is irreversible even if the pumped water is significantly reduced or stopped. Long-term monitoring projects on land subsidence in the Choshui alluvial fan in central Taiwan have improved the understanding of the deformations in the aquifer system. To characterize the detailed land subsidence mechanism, this study develops an inverse numerical model to estimate the deformation parameters such as the specific storage (Ss) and vertical hydraulic conductivity (Kv) for interbeds. Similar to the concept of hydraulic tomography surveys (HTS), the developed model employs the iterative cokriging estimator to improve the accuracy of estimating deformation parameters. A one-dimensional numerical example is employed to assess the accuracy of the developed inverse model. The developed model is then applied to field-scale data from compaction monitoring wells (CMW) installed in the lower Choshui River fan. Results of the synthetic example show that the developed inverse model can reproduce well the predefined geologic features of the synthetic aquifer. The model provides better estimations of Kv patterns and magnitudes. Slightly less detail of the Ss was obtained due to the insensitivity of transient stresses for specified sampling times. Without prior information from field measurements, the developed model associated with deformation measurements from CMW can estimate Kv and Ss fields with great spatial resolution.
Calibrating Charisma: The many-facet Rasch model for leader measurement and automated coaching
NASA Astrophysics Data System (ADS)
Barney, Matt
2016-11-01
No one is a leader unless others follow. Consequently, leadership is fundamentally a social judgment construct, and may be best measured via a Many Facet Rasch Model designed for this purpose. Uniquely, the MFRM allows for objective, accurate and precise estimation of leader attributes, along with identification of rater biases and other distortions of the available information. This presentation will outline a mobile computer-adaptive measurement system that measures and develops charisma, among others. Uniquely, the approach calibrates and mass-personalizes artificially intelligent, Rasch-calibrated electronic coaching that is neither too hard nor too easy but “just right” to help each unique leader develop improved charisma.
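As an illustration of the facet structure, a dichotomous many-facet Rasch model places leader ability, item difficulty, and rater severity on one logit scale; the sketch below (assumed simplest dichotomous form, illustrative parameter values) shows how rater severity shifts the expected rating for the same leader and item.

```python
import numpy as np

def mfrm_probability(theta, delta, rho):
    """Probability of a 'successful' rating under a dichotomous many-facet
    Rasch model: logit P = theta (leader ability) - delta (item difficulty)
    - rho (rater severity)."""
    return 1.0 / (1.0 + np.exp(-(theta - delta - rho)))

# A lenient rater (rho = -0.5) vs. a severe rater (rho = +1.0) scoring the
# same leader (theta = 0.8) on the same charisma item (delta = 0.2):
print(mfrm_probability(0.8, 0.2, -0.5))   # ~0.75
print(mfrm_probability(0.8, 0.2, 1.0))    # ~0.40
```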
NASA Astrophysics Data System (ADS)
Beecken, B. P.; Fossum, E. R.
1996-07-01
Standard statistical theory is used to calculate how the accuracy of a conversion-gain measurement depends on the number of samples. During the development of a theoretical basis for this calculation, a model is developed that predicts how the noise levels from different elements of an ideal detector array are distributed. The model can also be used to determine what dependence the accuracy of measured noise has on the size of the sample. These features have been confirmed by experiment, thus enhancing the credibility of the method for calculating the uncertainty of a measured conversion gain. Keywords: detector-array uniformity, charge-coupled device, active pixel sensor.
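The paper's derivation is analytical; as a rough companion, the sketch below simulates the common mean-variance (photon transfer) estimate of conversion gain and shows empirically how the spread of the estimate shrinks with the number of samples. The shot-noise-only pixel model and the parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_conversion_gain(frames_dn):
    """Mean-variance (photon transfer) estimate of conversion gain in e-/DN.
    For shot-noise-limited illumination, var(DN) = mean(DN) / gain."""
    return frames_dn.mean() / frames_dn.var(ddof=1)

# Simulate a pixel with a true gain of 2.0 e-/DN under flat illumination
true_gain, n_electrons = 2.0, 5000.0
def simulate(n_samples):
    electrons = rng.poisson(n_electrons, size=n_samples)
    return electrons / true_gain            # digitized output in DN (read noise ignored)

# The empirical spread of the estimate shrinks roughly as 1/sqrt(n_samples)
for n in (50, 500, 5000):
    estimates = [estimate_conversion_gain(simulate(n)) for _ in range(200)]
    print(n, np.std(estimates))
```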
Tulsky, David S.; Jette, Alan; Kisala, Pamela A.; Kalpakjian, Claire; Dijkers, Marcel P.; Whiteneck, Gale; Ni, Pengsheng; Kirshblum, Steven; Charlifue, Susan; Heinemann, Allen W.; Forchheimer, Martin; Slavin, Mary; Houlihan, Bethlyn; Tate, Denise; Dyson-Hudson, Trevor; Fyffe, Denise; Williams, Steve; Zanca, Jeanne
2012-01-01
Objective: To develop a comprehensive set of patient reported items to assess multiple aspects of physical functioning relevant to the lives of people with spinal cord injury (SCI) and to evaluate the underlying structure of physical functioning. Design: Cross-sectional. Setting: Inpatient and community. Participants: Item pools of physical functioning were developed, refined and field tested in a large sample of 855 individuals with traumatic spinal cord injury stratified by diagnosis, severity, and time since injury. Interventions: None. Main Outcome Measure: SCI-FI measurement system. Results: Confirmatory factor analysis (CFA) indicated that a 5-factor model, including basic mobility, ambulation, wheelchair mobility, self care, and fine motor, had the best model fit and was most closely aligned conceptually with feedback received from individuals with SCI and SCI clinicians. When just the items making up basic mobility were tested in CFA, the fit statistics indicate strong support for a unidimensional model. Similar results were demonstrated for each of the other four factors, indicating unidimensional models. Conclusions: Though unidimensional or 2-factor (mobility and upper extremity) models of physical functioning make up outcomes measures in the general population, the underlying structure of physical function in SCI is more complex. A 5-factor solution allows for comprehensive assessment of key domain areas of physical functioning. These results informed the structure and development of the SCI-FI measurement system of physical functioning. PMID:22609299
2013-01-01
Background: The integration of behavioral health services into primary care is increasingly popular, yet fidelity of implementation in this area has been infrequently assessed due to the few measurement tools available. A sentinel indicator of fidelity of implementation is provider adherence, or utilization of prescribed procedures and engagement in model-specific behaviors. This study aimed to develop the first self-report measure of behavioral health provider adherence for co-located, collaborative care, a commonly adopted model of behavioral health service delivery in primary care. Methods: A preliminary 56-item measure was developed by the research team to represent critical components of adherence among behavioral health providers. To ensure the content validity of the measure, a modified Delphi study was conducted using a panel of co-located, collaborative care model experts. During three rounds of emailed surveys, panel members provided qualitative feedback regarding item content while rating each item’s relevance for behavioral health provider practice. Items with consensus ratings of 80% or greater were included in the final adherence measure. Results: The panel consisted of 25 experts representing the Department of Veterans Affairs, the Department of Defense, and academic and community health centers (total study response rate of 76%). During the Delphi process, two new items were added to the measure, four items were eliminated, and a high level of consensus was achieved on the remaining 54 items. Experts identified 38 items essential for model adherence, six items compatible (although not essential) for model adherence, and 10 items that represented prohibited behaviors. Item content addressed several domains, but primarily focused on behaviors related to employing a time-limited, brief treatment model, the scope of patient concerns addressed, and interventions used by providers. Conclusions: This study yielded the first content valid self-report measure of critical components of collaborative care adherence for use by behavioral health providers in primary care. Although additional psychometric evaluation is necessary, this measure may assist implementation researchers in clarifying how provider behaviors contribute to clinical outcomes. This measure may also assist clinical stakeholders in monitoring implementation and identifying ways to support frontline providers in delivering high quality services. PMID:23406425
ERIC Educational Resources Information Center
Alonzo, Julie; Tindal, Gerald; McCoy, Jan
2005-01-01
This technical report describes the development, pilot testing, and revision of a survey instrument designed to measure secondary school teachers' perceptions of their efficacy working with students from diverse backgrounds. A brief review of relevant literature frames the current study in the context of survey development that is technically…
Reverberant acoustic energy in auditoria that comprise systems of coupled rooms
NASA Astrophysics Data System (ADS)
Summers, Jason E.
2003-11-01
A frequency-dependent model for reverberant energy in coupled rooms is developed and compared with measurements for a 1:10 scale model and for Bass Hall, Ft. Worth, TX. At high frequencies, prior statistical-acoustics models are improved by geometrical-acoustics corrections for decay within sub-rooms and for energy transfer between sub-rooms. Comparisons of computational geometrical-acoustics predictions based on beam-axis tracing with scale model measurements indicate errors resulting from tail-correction assuming constant quadratic growth of reflection density. Using ray tracing in the late part corrects this error. For mid-frequencies, the models are modified to account for wave effects at coupling apertures by including power transmission coefficients. Similarly, statistical-acoustics models are improved through more accurate estimates of power transmission measurements. Scale model measurements are in accord with the predicted behavior. The edge-diffraction model is adapted to study transmission through apertures. Multiple-order scattering is theoretically and experimentally shown to be inaccurate due to neglect of slope diffraction. At low frequencies, perturbation models qualitatively explain scale model measurements. Measurements confirm the relation of coupling strength to the unperturbed pressure distribution on coupling surfaces. Measurements in Bass Hall exhibit effects of the coupled stage house. High-frequency predictions of statistical-acoustics and geometrical-acoustics models and predictions for coupling apertures all agree with measurements.
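For orientation, the classical statistical-acoustics starting point that the work above refines is a pair of coupled energy-balance equations for the two sub-rooms; the sketch below integrates that linear system for an illustrative hall/chamber pairing (all dimensions hypothetical) and reproduces the characteristic double-slope decay.

```python
import numpy as np

def coupled_room_decay(t, V1, V2, A1, A2, S, c=343.0, E0=(1.0, 0.0)):
    """Statistical-acoustics model of two rooms coupled through an aperture
    of area S. A_i are absorption areas (m^2, Sabine); returns the energy
    histories E1(t), E2(t) of the linear system dE/dt = M E."""
    M = np.array([[-c * (A1 + S) / (4 * V1),  c * S / (4 * V1)],
                  [ c * S / (4 * V2),        -c * (A2 + S) / (4 * V2)]])
    lam, P = np.linalg.eig(M)
    coef = np.linalg.solve(P, np.asarray(E0, dtype=float))
    E = (P @ (coef[:, None] * np.exp(lam[:, None] * t))).real
    return E[0], E[1]

# A relatively dead main hall coupled to a reverberant chamber: the late
# decay bends toward the slower eigenvalue, giving the double-slope curve.
t = np.linspace(0, 3, 600)
E1, E2 = coupled_room_decay(t, V1=20000.0, V2=3000.0, A1=2500.0, A2=60.0, S=40.0)
decay_db = 10 * np.log10(E1 / E1[0])
```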
Incorporating uncertainty in predictive species distribution modelling.
Beale, Colin M; Lennon, Jack J
2012-01-19
Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which are often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.
Modelling dimercaptosuccinic acid (DMSA) plasma kinetics in humans.
van Eijkeren, Jan C H; Olie, J Daniël N; Bradberry, Sally M; Vale, J Allister; de Vries, Irma; Meulenbelt, Jan; Hunault, Claudine C
2016-11-01
No kinetic models presently exist which simulate the effect of chelation therapy on lead blood concentrations in lead poisoning. Our aim was to develop a kinetic model that describes the kinetics of dimercaptosuccinic acid (DMSA; succimer), a commonly used chelating agent, that could be used in developing a lead chelating model. This was a kinetic modelling study. We used a two-compartment model, with a non-systemic gastrointestinal compartment (gut lumen) and the whole body as one systemic compartment. The only data available from the literature were used to calibrate the unknown model parameters. The calibrated model was then validated by comparing its predictions with measured data from three different experimental human studies. The model predicted total DMSA plasma and urine concentrations measured in three healthy volunteers after ingestion of DMSA 10 mg/kg. The model was then validated by using data from three other published studies; it predicted concentrations within a factor of two, representing inter-human variability. A simple kinetic model simulating the kinetics of DMSA in humans has been developed and validated. The interest of this model lies in the future potential to use it to predict blood lead concentrations in lead-poisoned patients treated with DMSA.
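The structure described (a non-systemic gut compartment feeding a single systemic compartment) has the classic first-order absorption and elimination solution; the sketch below evaluates that solution with illustrative parameter values, not the calibrated values from the study.

```python
import numpy as np

def dmsa_plasma_conc(t_h, dose_mg, f_abs, ka, ke, vd_l):
    """One systemic compartment with first-order absorption from the gut
    lumen (the classic Bateman solution). Parameters are illustrative:
    f_abs = absorbed fraction, ka/ke = absorption/elimination rates [1/h],
    vd_l = volume of distribution [L]."""
    return (f_abs * dose_mg * ka / (vd_l * (ka - ke))
            * (np.exp(-ke * t_h) - np.exp(-ka * t_h)))

# 10 mg/kg oral dose for a 70 kg subject, hypothetical parameter values
t = np.linspace(0, 24, 200)                       # hours
c = dmsa_plasma_conc(t, dose_mg=700.0, f_abs=0.2, ka=0.8, ke=0.25, vd_l=15.0)
t_peak = t[np.argmax(c)]                          # time of peak plasma concentration
```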
Development and Validation of a 3-Dimensional CFB Furnace Model
NASA Astrophysics Data System (ADS)
Vepsäläinen, Ari; Myöhänen, Kari; Hyppänen, Timo; Leino, Timo; Tourunen, Antti
At Foster Wheeler, a three-dimensional CFB furnace model is an essential part of knowledge development of the CFB furnace process regarding solid mixing, combustion, emission formation and heat transfer. Results of laboratory and pilot-scale phenomenon research are utilized in the development of sub-models. Analyses of field-test results in industrial-scale CFB boilers, including furnace profile measurements, are carried out simultaneously with the development of three-dimensional process modeling, which provides a chain of knowledge that is utilized as feedback for phenomenon research. Knowledge gathered by model validation studies and up-to-date parameter databases are utilized in performance prediction and design development of CFB boiler furnaces. This paper reports recent development steps related to modeling of combustion and formation of char and volatiles of various fuel types in CFB conditions. Also a new model for predicting the formation of nitrogen oxides is presented. Validation of mixing and combustion parameters for solids and gases is based on test balances at several large-scale CFB boilers combusting coal, peat and bio-fuels. Field tests, including lateral and vertical furnace profile measurements and characterization of solid materials, provide a window for characterization of fuel-specific mixing and combustion behavior in the CFB furnace at different loads and operation conditions. Measured horizontal gas profiles are a projection of the balance between fuel mixing and reactions in the lower part of the furnace, and are used together with lateral temperature profiles at the bed and upper parts of the furnace for determination of solid mixing and combustion model parameters. Modeling of char- and volatile-based formation of NO profiles is followed by analysis of the oxidizing and reducing regions formed due to the lower furnace design and the mixing characteristics of fuel and combustion air, which affect the formation of the NO furnace profile through reduction and volatile-nitrogen reactions. This paper presents CFB process analysis focused on combustion and NO profiles in pilot- and industrial-scale bituminous coal combustion.
Les Houches 2015: Physics at TeV Colliders Standard Model Working Group Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andersen, J.R.; et al.
This Report summarizes the proceedings of the 2015 Les Houches workshop on Physics at TeV Colliders. Session 1 dealt with (I) new developments relevant for high precision Standard Model calculations, (II) the new PDF4LHC parton distributions, (III) issues in the theoretical description of the production of Standard Model Higgs bosons and how to relate experimental measurements, (IV) a host of phenomenological studies essential for comparing LHC data from Run I with theoretical predictions and projections for future measurements in Run II, and (V) new developments in Monte Carlo event generators.
Application of remote sensing for prediction and detection of thermal pollution
NASA Technical Reports Server (NTRS)
Veziroglu, T. N.; Lee, S. S.
1974-01-01
The first phase of a three-year project for the development of a mathematical model for predicting thermal pollution by use of remote sensing measurements is described. A rigid-lid model was developed, and results were obtained for different wind conditions at Biscayne Bay in South Florida. The design of the measurement system was completed, and instruments needed for the first stage of the experiment were acquired, tested, and calibrated. A preliminary research flight was conducted.
ERIC Educational Resources Information Center
Brody, David; Hadar, Linor
2011-01-01
This study explores trajectories of professional growth by teacher educators participating in a professional development community on teaching thinking. Qualitative measures revealed a four stage model of personal professional trajectories: anticipation/curiosity, withdrawal, awareness and change. The model delineates passages traversed by teacher…
Modeling Pubertal Timing and Tempo and Examining Links to Behavior Problems
ERIC Educational Resources Information Center
Beltz, Adriene M.; Corley, Robin P.; Bricker, Josh B.; Wadsworth, Sally J.; Berenbaum, Sheri A.
2014-01-01
Research on the role of puberty in adolescent psychological development requires attention to the meaning and measurement of pubertal development. Particular questions concern the utility of self-report, the need for complex models to describe pubertal development, the psychological significance of pubertal timing vs. tempo, and sex differences in…
An Analytic Hierarchy Process for School Quality and Inspection: Model Development and Application
ERIC Educational Resources Information Center
Al Qubaisi, Amal; Badri, Masood; Mohaidat, Jihad; Al Dhaheri, Hamad; Yang, Guang; Al Rashedi, Asma; Greer, Kenneth
2016-01-01
Purpose: The purpose of this paper is to develop an analytic hierarchy planning-based framework to establish criteria weights and to develop a school performance system commonly called school inspections. Design/methodology/approach: The analytic hierarchy process (AHP) model uses pairwise comparisons and a measurement scale to generate the…
ERIC Educational Resources Information Center
Lahey, Benjamin B.; Applegate, Brooks; Chronis, Andrea M.; Jones, Heather A.; Williams, Stephanie Hall; Loney, Jan; Waldman, Irwin D.
2008-01-01
Lahey and Waldman proposed a developmental propensity model in which three dimensions of children's emotional dispositions are hypothesized to transact with the environment to influence risk for conduct disorder, heterogeneity in conduct disorder, and comorbidity with other disorders. To prepare for future tests of this model, a new measure of…
Scattering of Acoustic Waves from Ocean Boundaries
2013-09-30
of predictive models that can account for all of the physical processes and variability of acoustic propagation and scattering in ocean...collaboration with Dr. Nicholas Chotiros, particularly for theoretical development of bulk acoustic/sediment modeling and laser roughness measurements...G. Potty and J. Miller. Measurement and modeling of Scholte wave dispersion in coastal waters. In Proc. of Third Int. Conf. on Ocean Acoustics
Thermoviscoplastic model with application to copper
NASA Technical Reports Server (NTRS)
Freed, Alan D.
1988-01-01
A viscoplastic model is developed which is applicable to anisothermal, cyclic, and multiaxial loading conditions. Three internal state variables are used in the model; one to account for kinematic effects, and the other two to account for isotropic effects. One of the isotropic variables is a measure of yield strength, while the other is a measure of limit strength. Each internal state variable evolves through a process of competition between strain hardening and recovery. There is no explicit coupling between dynamic and thermal recovery in any evolutionary equation, which is a useful simplification in the development of the model. The thermodynamic condition of intrinsic dissipation constrains the thermal recovery function of the model. Application of the model is made to copper, and cyclic experiments under isothermal, thermomechanical, and nonproportional loading conditions are considered. Correlations and predictions of the model are representative of observed material behavior.
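The hardening-versus-recovery competition described above can be illustrated with a single internal state variable whose rate combines strain hardening with uncoupled dynamic and thermal recovery terms; the sketch below integrates such an evolution equation under cyclic straining with made-up constants, not the copper parameters of the paper.

```python
import numpy as np

def evolve_state(times, strain_rate, h=10_000.0, r_dyn=50.0, r_th=1e-3):
    """Explicit-Euler integration of one internal state variable whose rate
    is a competition between strain hardening and (uncoupled) dynamic and
    thermal recovery:  dX/dt = h*ep_dot - r_dyn*|ep_dot|*X - r_th*X.
    Constants are illustrative only."""
    x = np.zeros_like(times)
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        ep_dot = strain_rate(times[i - 1])
        x_dot = h * ep_dot - r_dyn * abs(ep_dot) * x[i - 1] - r_th * x[i - 1]
        x[i] = x[i - 1] + dt * x_dot
    return x

# Fully reversed cyclic plastic straining at 1e-3 /s: the variable tends
# toward +/- h/r_dyn (about 200 here) as hardening balances dynamic recovery.
t = np.linspace(0, 400, 40_001)
back_stress = evolve_state(t, lambda tt: 1e-3 * np.sign(np.sin(2 * np.pi * tt / 100)))
```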
Assessment of Traffic-Related Noise in Three Cities in the United States
Lee, Eunice Y.; Jerrett, Michael; Ross, Zev; Coogan, Patricia F.; Seto, Edmund Y. W.
2014-01-01
Background: Traffic-related noise is a growing public health concern in developing and developed countries due to increasing vehicle traffic. Epidemiological studies have reported associations between noise exposure and high blood pressure, increased risk of hypertension and heart disease, and stress induced by sleep disturbance and annoyance. These findings motivate the need for regular noise assessments within urban areas. This paper assesses the relationships between traffic and noise in three US cities. Methods: Noise measurements were conducted in downtown areas in three cities in the United States: Atlanta, Los Angeles, and New York City. For each city, we measured ambient noise levels, and assessed their correlation with simultaneously measured vehicle counts, and with traffic data provided by local Metropolitan Planning Organizations (MPO). Additionally, measured noise levels were compared to noise levels predicted by the Federal Highway Administration’s Traffic Noise Model using (1) simultaneously measured traffic counts or (2) MPO traffic data sources as model input. Results: We found substantial variations in traffic and noise within and between cities. Total number of vehicle counts explained a substantial amount of variation in measured ambient noise in Atlanta (78%), Los Angeles (58%), and New York City (62%). Modeled noise levels were moderately correlated with measured noise levels when observed traffic counts were used as model input. Weaker correlations were found when MPO traffic data was used as model input. Conclusions: Ambient noise levels measured in all three cities were correlated with traffic data, highlighting the importance of traffic planning in mitigating noise-related health effects. Model performance was sensitive to the traffic data used as input. Future noise studies that use modeled noise estimates should evaluate traffic data quality and should ideally include other factors, such as local roadway, building, and meteorological characteristics. PMID:24792415
Ellison, Christopher A.; Groten, Joel T.; Lorenz, David L.; Koller, Karl S.
2016-10-27
Consistent and reliable sediment data are needed by Federal, State, and local government agencies responsible for monitoring water quality, planning river restoration, quantifying sediment budgets, and evaluating the effectiveness of sediment reduction strategies. Heightened concerns about excessive sediment in rivers and the challenge to reduce costs and eliminate data gaps has guided Federal and State interests in pursuing alternative methods for measuring suspended and bedload sediment. Simple and dependable data collection and estimation techniques are needed to generate hydraulic and water-quality information for areas where data are unavailable or difficult to collect.The U.S. Geological Survey, in cooperation with the Minnesota Pollution Control Agency and the Minnesota Department of Natural Resources, completed a study to evaluate the use of dimensionless sediment rating curves (DSRCs) to accurately predict suspended-sediment concentrations (SSCs), bedload, and annual sediment loads for selected rivers and streams in Minnesota based on data collected during 2007 through 2013. This study included the application of DSRC models developed for a small group of streams located in the San Juan River Basin near Pagosa Springs in southwestern Colorado to rivers in Minnesota. Regionally based DSRC models for Minnesota also were developed and compared to DSRC models from Pagosa Springs, Colorado, to evaluate which model provided more accurate predictions of SSCs and bedload in Minnesota.Multiple measures of goodness-of-fit were developed to assess the effectiveness of DSRC models in predicting SSC and bedload for rivers in Minnesota. More than 600 dimensionless ratio values of SSC, bedload, and streamflow were evaluated and delineated according to Pfankuch stream stability categories of “good/fair” and “poor” to develop four Minnesota-based DSRC models. The basis for Pagosa Springs and Minnesota DSRC model effectiveness was founded on measures of goodness-of-fit that included proximity of the model(s) fitted line to the 95-percent confidence intervals of the site-specific model, Nash-Sutcliffe Efficiency values, model biases, and deviation of annual sediment loads from each model to the annual sediment loads calculated from measured data.Composite plots comparing Pagosa Springs DSRCs, Minnesota DSRCs, site-specific regression models, and measured data indicated that regionally developed DSRCs (Minnesota DSRC models) more closely approximated measured data for nearly every site. Pagosa Springs DSRC models had markedly larger exponents (slopes) when compared to the Minnesota DSRC models and the site-specific regression models, and over-represent SSC and bedload at streamflows exceeding bankfull. The Nash-Sutcliffe Efficiency values for the Minnesota DSRC model for suspended-sediment concentrations closely matched Nash-Sutcliffe Efficiency values of the site-specific regression models for 12 out of 16 sites. Nash-Sutcliffe Efficiency values associated with Minnesota DSRCs were greater than those associated with Pagosa Springs DSRCs for every site except the Whitewater River near Beaver, Minnesota site. Pagosa Springs DSRC models were less accurate than the mean of the measured data at predicting SSC values for one-half of the sites for good/fair stability sites and one-half of the sites for poor stability sites. 
Relative model biases were calculated and determined to be substantial (greater than 5 percent) for Pagosa Springs and Minnesota models, with Minnesota models having a lower mean model bias. For predicted annual suspended-sediment loads (SSL), the Minnesota DSRC models for good/fair and poor stream stability sites more closely approximated the annual SSLs calculated from the measured data as compared to the Pagosa Springs DSRC model.Results of data analyses indicate that DSRC models developed using data collected in Minnesota were more effective at compensating for differences in individual stream characteristics across a variety of basin sizes and flow regimes than DSRC models developed using data collected for Pagosa Springs, Colorado. Minnesota DSRC models retained a substantial portion of the unique sediment signatures for most rivers, although deviations were observed for streams with limited sediment supply and for rivers in southeastern Minnesota, which had markedly larger regression exponents. Compared to Pagosa Springs DSRC models, Minnesota DSRC models had regression slopes that more closely matched the slopes of site-specific regression models, had greater Nash-Sutcliffe Efficiency values, had lower model biases, and approximated measured annual sediment loads more closely. The results presented in this report indicate that regionally based DSRCs can be used to estimate reasonably accurate values of SSC and bedload.Practitioners are cautioned that DSRC reliability is dependent on representative measures of bankfull streamflow, SSC, and bedload. It is, therefore, important that samples of SSC and bedload, which will be used for estimating SSC and bedload at the bankfull streamflow, are collected over a range of conditions that includes the ascending and descending limbs of the event hydrograph. The use of DSRC models may have substantial limitations for certain conditions. For example, DSRC models should not be used to predict SSC and sediment loads for extreme streamflows, such as those that exceed twice the bankfull streamflow value because this constitutes conditions beyond the realm of current (2016) empirical modeling capability. Also, if relations between SSC and streamflow and between bedload and streamflow are not statistically significant, DSRC models should not be used to predict SSC or bedload, as this could result in large errors. For streams that do not violate these conditions, DSRC estimates of SSC and bedload can be used for stream restoration planning and design, and for estimating annual sediment loads for streams where little or no sediment data are available.
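Two of the quantitative ingredients used throughout the study, a dimensionless sediment rating curve rescaled to a site's bankfull reference values and the Nash-Sutcliffe Efficiency used to judge it, are easy to sketch; the example below uses hypothetical streamflow, SSC, and curve coefficients purely for illustration.

```python
import numpy as np

def nse(observed, predicted):
    """Nash-Sutcliffe Efficiency: 1 is perfect, 0 is no better than the mean
    of the observations, negative is worse than the mean."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return 1.0 - np.sum((observed - predicted) ** 2) / np.sum((observed - observed.mean()) ** 2)

def dsrc_predict(q, q_bankfull, ssc_bankfull, a, b):
    """Dimensionless sediment rating curve SSC* = a*(Q*)^b, rescaled to a
    site with known bankfull streamflow and bankfull SSC (a, b illustrative)."""
    return ssc_bankfull * a * (q / q_bankfull) ** b

# Hypothetical site: bankfull flow 85 m^3/s, bankfull SSC 250 mg/L
q_meas = np.array([12.0, 30.0, 55.0, 80.0, 120.0])      # m^3/s
ssc_meas = np.array([18.0, 60.0, 140.0, 240.0, 420.0])  # mg/L
ssc_pred = dsrc_predict(q_meas, 85.0, 250.0, a=1.0, b=1.6)
print(round(nse(ssc_meas, ssc_pred), 2))
```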
USDA-ARS?s Scientific Manuscript database
Measures of animal movement versus consumption rates can provide valuable, ecologically relevant information on feeding preference, specifically estimates of attraction rate, leaving rate, tenure time, or measures of flight/walking path. Here, we develop a simple biostatistical model to analyze repe...
DOT National Transportation Integrated Search
2016-11-01
The Federal Motor Carrier Safety Administration (FMCSA), in cooperation with the John A. Volpe National Transportation Systems Center (Volpe), has developed a quantitative model to measure the effectiveness of motor carrier interventions in terms of ...
DOT National Transportation Integrated Search
2018-04-01
The Federal Motor Carrier Safety Administration (FMCSA), in cooperation with the John A. Volpe National Transportation Systems Center (Volpe), has developed a quantitative model to measure the effectiveness of motor carrier interventions in terms of ...
DOT National Transportation Integrated Search
2017-04-01
The Federal Motor Carrier Safety Administration (FMCSA), in cooperation with the John A. Volpe National Transportation Systems Center (Volpe), has developed a quantitative model to measure the effectiveness of motor carrier interventions in terms of ...
DOT National Transportation Integrated Search
2015-01-01
The Federal Motor Carrier Safety Administration (FMCSA), in cooperation with the John A. Volpe National : Transportation Systems Center (Volpe), has developed a quantitative model to measure the effectiveness of motor : carrier interventions in terms...
DOT National Transportation Integrated Search
2017-04-01
The Federal Motor Carrier Safety Administration (FMCSA), in cooperation with the John A. Volpe National Transportation Systems Center (Volpe), has developed a quantitative model to measure the effectiveness of motor carrier interventions in terms of ...
NASA Astrophysics Data System (ADS)
Gu, Fengshou; Yesilyurt, Isa; Li, Yuhua; Harris, Georgina; Ball, Andrew
2006-08-01
In order to discriminate small changes for early fault diagnosis of rotating machines, condition monitoring demands that the measurement of instantaneous angular speed (IAS) of the machines be as accurate as possible. This paper develops the theoretical basis and practical implementation of IAS data acquisition and IAS estimation when noise influence is included. IAS data is modelled as a frequency modulated signal of which the signal-to-noise ratio can be improved by using a high-resolution encoder. From this signal model and analysis, optimal configurations for IAS data collection are addressed for high accuracy IAS measurement. Simultaneously, a method based on analytic signal concept and fast Fourier transform is also developed for efficient and accurate estimation of IAS. Finally, a fault diagnosis is carried out on an electric induction motor driving system using IAS measurement. The diagnosis results show that using a high-resolution encoder and a long data stream can achieve noise reduction by more than 10 dB in the frequency range of interest, validating the model and algorithm developed. Moreover, the results demonstrate that IAS measurement outperforms conventional vibration in diagnosis of incipient faults of motor rotor bar defects and shaft misalignment.
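A common way to realize the analytic-signal IAS estimator described above is to Hilbert-transform the encoder output, unwrap the phase, and differentiate; the sketch below applies that recipe to a synthetic encoder signal (assumed parameters, not the paper's test rig).

```python
import numpy as np
from scipy.signal import hilbert

def estimate_ias(encoder_signal, fs, pulses_per_rev):
    """Instantaneous angular speed from an encoder pulse train treated as a
    frequency-modulated carrier: analytic signal -> unwrapped phase ->
    differentiated phase. Returns speed in revolutions per second."""
    analytic = hilbert(encoder_signal - np.mean(encoder_signal))
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.gradient(phase) * fs / (2 * np.pi)   # carrier frequency in Hz
    return inst_freq / pulses_per_rev

# Synthetic test: 25 rev/s shaft with 2% speed ripple, 360-line encoder, fs = 100 kHz
fs, ppr = 100_000, 360
t = np.arange(0, 1.0, 1 / fs)
speed_true = 25.0 * (1 + 0.02 * np.sin(2 * np.pi * 5 * t))          # rev/s
carrier_phase = 2 * np.pi * ppr * np.cumsum(speed_true) / fs
signal = np.cos(carrier_phase) + 0.05 * np.random.default_rng(2).standard_normal(t.size)
speed_est = estimate_ias(signal, fs, ppr)
```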
A novel client service quality measuring model and an eHealthcare mitigating approach.
Cheng, L M; Choi, Wai Ping; Wong, Anita Yiu Ming
2016-07-01
Facing population ageing in Hong Kong, the demand for long-term elderly health care services is increasing. The challenge is to support a good quality of service under the constraints imposed by the recent shortage of nursing and care services professionals, without redesigning the workflow operated in the existing elderly health care industry. The total QoS measure based on a finite capacity queuing model is a reliable and effective measurement of quality of service. The value is useful for measuring the staffing level and offers a measurement for efficiency enhancement when incorporating new technologies such as ICT. The implemented system has improved the quality of service by more than 14%, and the released manpower resource will allow clinical care providers to offer further value-added services without actually increasing head count. We have developed a novel quality of service measurement for clinical care services based on multiple queues using the finite capacity queue model M/M/c/K/n, and the measurement is useful for estimating the shortage of staff resources in a caring institution. It is essential for future integration with the existing widely used assessment model to develop reliable measuring limits which allow an effective measurement of public funds used in the health care industry. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
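The paper defines its own total QoS measure; as a generic illustration of the queueing model class it names, the sketch below computes the steady-state distribution of an M/M/c/K/n system (finite client population n, c carers, capacity K) as a birth-death chain and derives two simple service-quality indicators from it. All rates are hypothetical.

```python
import numpy as np

def mmckn_distribution(lam, mu, c, K, n):
    """Steady-state distribution of an M/M/c/K/n queue treated as a
    birth-death chain: birth rate (n - k)*lam and death rate min(k, c)*mu
    in state k (requires K <= n)."""
    p = np.ones(K + 1)
    for k in range(1, K + 1):
        p[k] = p[k - 1] * (n - (k - 1)) * lam / (min(k, c) * mu)
    return p / p.sum()

def service_quality(lam, mu, c, K, n):
    """Illustrative QoS indicators: probability the system is full and the
    mean number of clients waiting for a carer."""
    p = mmckn_distribution(lam, mu, c, K, n)
    k = np.arange(K + 1)
    return p[K], np.sum(np.maximum(k - c, 0) * p)

# 30 residents each generating care requests at 0.5/h, 4 carers each handling
# 3 requests/h, at most 10 requests held at once (hypothetical figures)
print(service_quality(lam=0.5, mu=3.0, c=4, K=10, n=30))
```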
2013-09-30
flow models, such as Delft3D, with our developed Boussinesq-type model. The vision of this project is to develop an operational tool for the...situ measurements or large-scale wave models. This information will be used to drive the offshore wave boundary condition. • Execute the Boussinesq ...model to match with the Boussinesq-type theory would be one which can simulate sheared and stratified currents due to large-scale (non-wave) forcings
Error analysis of Dobson spectrophotometer measurements of the total ozone content
NASA Technical Reports Server (NTRS)
Holland, A. C.; Thomas, R. W. L.
1975-01-01
A study of techniques for measuring atmospheric ozone is reported. This study represents the second phase of a program designed to improve techniques for the measurement of atmospheric ozone. This phase of the program studied the sensitivity of Dobson direct sun measurements, and the ozone amounts inferred from those measurements, to variation in the atmospheric temperature profile. The study used the plane-parallel Monte Carlo model developed and tested under the initial phase of this program, and a series of standard model atmospheres.
Satellite rainfall retrieval by logistic regression
NASA Technical Reports Server (NTRS)
Chiu, Long S.
1986-01-01
The potential use of logistic regression in rainfall estimation from satellite measurements is investigated. Satellite measurements provide covariate information in terms of radiances from different remote sensors. The logistic regression technique can effectively accommodate many covariates and test their significance in the estimation. The outcome from the logistic model is the probability that the rainrate of a satellite pixel is above a certain threshold. By varying the thresholds, a rainrate histogram can be obtained, from which the mean and the variance can be estimated. A logistic model is developed and applied to rainfall data collected during GATE, using as covariates the fractional rain area and a radiance measurement which is deduced from a microwave temperature-rainrate relation. It is demonstrated that the fractional rain area is an important covariate in the model, consistent with the use of the so-called Area Time Integral in estimating total rain volume in other studies. To calibrate the logistic model, simulated rain fields generated by rainfield models with prescribed parameters are needed. A stringent test of the logistic model is its ability to recover the prescribed parameters of simulated rain fields. A rain field simulation model which preserves the fractional rain area and lognormality of rainrates as found in GATE is developed. A stochastic regression model of branching and immigration whose solutions are lognormally distributed in some asymptotic limits has also been developed.
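A minimal sketch of the thresholded logistic-regression idea, with synthetic pixels and made-up covariate relationships rather than GATE data: fit one logistic model per rain-rate threshold and difference the exceedance probabilities to approximate a rain-rate histogram.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Synthetic pixels: covariates are fractional rain area and a microwave-based
# radiance proxy; the target is whether the pixel rain rate exceeds a threshold.
n = 2000
frac_area = rng.uniform(0, 1, n)
radiance = rng.normal(0, 1, n)
rain_rate = np.exp(1.5 * frac_area + 0.6 * radiance + rng.normal(0, 0.5, n)) - 1.0
X = np.column_stack([frac_area, radiance])

thresholds = [0.5, 1.0, 2.0, 4.0, 8.0]           # mm/h, illustrative
exceedance = []
for thr in thresholds:
    model = LogisticRegression(max_iter=500).fit(X, (rain_rate > thr).astype(int))
    exceedance.append(model.predict_proba(X)[:, 1].mean())
# Differencing the exceedance probabilities across thresholds recovers an
# approximate rain-rate histogram, from which mean and variance follow.
```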
Optical modeling of stratospheric aerosols - Present status
NASA Technical Reports Server (NTRS)
Rosen, J. M.; Hofmann, D. J.
1986-01-01
A stratospheric aerosol optical model is developed which is based on a size distribution conforming to direct measurements. Additional constraints are consistent with large data sets of independently measured macroscopic aerosol properties such as mass and backscatter. The period under study covers background as well as highly disturbed volcanic conditions and an altitude interval ranging from the tropopause to about 30 km. The predictions of the model are used to form a basis for interpreting and intercomparing several diverse types of stratospheric aerosol measurement.
Control of experimental uncertainties in filtered Rayleigh scattering measurements
NASA Technical Reports Server (NTRS)
Forkey, Joseph N.; Finkelstein, N. D.; Lempert, Walter R.; Miles, Richard B.
1995-01-01
Filtered Rayleigh Scattering is a technique which allows for measurement of velocity, temperature, and pressure in unseeded flows, spatially resolved in two dimensions. We present an overview of the major components of a Filtered Rayleigh Scattering system. In particular, we develop and discuss a detailed theoretical model along with associated model parameters and related uncertainties. Based on this model, we then present experimental results for ambient room air and for a Mach 2 free jet, including spatially resolved measurements of velocity, temperature, and pressure.
Radiation Environment Modeling for Spacecraft Design: New Model Developments
NASA Technical Reports Server (NTRS)
Barth, Janet; Xapsos, Mike; Lauenstein, Jean-Marie; Ladbury, Ray
2006-01-01
A viewgraph presentation on various new space radiation environment models for spacecraft design is described. The topics include: 1) The Space Radiation Environment; 2) Effects of Space Environments on Systems; 3) Space Radiation Environment Model Use During Space Mission Development and Operations; 4) Space Radiation Hazards for Humans; 5) "Standard" Space Radiation Environment Models; 6) Concerns about Standard Models; 7) Inadequacies of Current Models; 8) Development of New Models; 9) New Model Developments: Proton Belt Models; 10) Coverage of New Proton Models; 11) Comparison of TPM-1, PSB97, AP-8; 12) New Model Developments: Electron Belt Models; 13) Coverage of New Electron Models; 14) Comparison of "Worst Case" POLE, CRESELE, and FLUMIC Models with the AE-8 Model; 15) New Model Developments: Galactic Cosmic Ray Model; 16) Comparison of NASA, MSU, CIT Models with ACE Instrument Data; 17) New Model Developments: Solar Proton Model; 18) Comparison of ESP, JPL91, King/Stassinopoulos, and PSYCHIC Models; 19) New Model Developments: Solar Heavy Ion Model; 20) Comparison of CREME96 to CREDO Measurements During 2000 and 2002; 21) PSYCHIC Heavy Ion Model; 22) Model Standardization; 23) Working Group Meeting on New Standard Radiation Belt and Space Plasma Models; and 24) Summary.
Hennegriff, W
2007-01-01
The climatic conditions in Southern Germany have changed noticeably in the 20th century, especially during the last three decades. Both in specific regions and interannually, the trends found exceed the natural margins of deviation previously known from long measurement series for some measured quantities. The mean and also the extreme floods are expected to increase significantly, although the results of the model chain (global model - regional climate models - water balance models) are still uncertain. As a precaution, an adaptation strategy has been developed for the field of flood protection which takes into consideration the possible development over the next decades and also takes into account the uncertainties.
Time-Series Forecast Modeling on High-Bandwidth Network Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Wucherl; Sim, Alex
With the increasing number of geographically distributed scientific collaborations and the growing sizes of scientific data, it has become challenging for users to achieve the best possible network performance on a shared network. In this paper, we have developed a model to forecast expected bandwidth utilization on high-bandwidth wide area networks. The forecast model can improve the efficiency of the resource utilization and scheduling of data movements on high-bandwidth networks to accommodate ever increasing data volume for large-scale scientific data applications. A univariate time-series forecast model is developed with the Seasonal decomposition of Time series by Loess (STL) and the AutoRegressive Integrated Moving Average (ARIMA) on Simple Network Management Protocol (SNMP) path utilization measurement data. Compared with the traditional approach such as the Box-Jenkins methodology to train the ARIMA model, our forecast model reduces computation time by up to 92.6%. It also shows resilience against abrupt network usage changes. Finally, our forecast model conducts a large number of multi-step forecasts, and the forecast errors are within the mean absolute deviation (MAD) of the monitored measurements.
Time-Series Forecast Modeling on High-Bandwidth Network Measurements
Yoo, Wucherl; Sim, Alex
2016-06-24
With the increasing number of geographically distributed scientific collaborations and the growing sizes of scientific data, it has become challenging for users to achieve the best possible network performance on a shared network. In this paper, we have developed a model to forecast expected bandwidth utilization on high-bandwidth wide area networks. The forecast model can improve the efficiency of the resource utilization and scheduling of data movements on high-bandwidth networks to accommodate ever increasing data volume for large-scale scientific data applications. A univariate time-series forecast model is developed with the Seasonal decomposition of Time series by Loess (STL) and the AutoRegressive Integrated Moving Average (ARIMA) on Simple Network Management Protocol (SNMP) path utilization measurement data. Compared with the traditional approach such as the Box-Jenkins methodology to train the ARIMA model, our forecast model reduces computation time by up to 92.6%. It also shows resilience against abrupt network usage changes. Finally, our forecast model conducts a large number of multi-step forecasts, and the forecast errors are within the mean absolute deviation (MAD) of the monitored measurements.
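A minimal sketch of the STL-plus-ARIMA recipe on synthetic hourly path-utilization data (statsmodels, with illustrative model orders and seasonal period rather than the paper's configuration): deseasonalize with STL, fit ARIMA to the remainder, and add the seasonal cycle back for a multi-step forecast.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL
from statsmodels.tsa.arima.model import ARIMA

# Synthetic SNMP-style path utilization: daily cycle plus slow trend plus noise
rng = np.random.default_rng(4)
idx = pd.date_range("2016-01-01", periods=24 * 28, freq="h")   # 28 days, hourly
util = (40 + 10 * np.sin(2 * np.pi * np.arange(idx.size) / 24)
        + 0.01 * np.arange(idx.size) + rng.normal(0, 3, idx.size))
series = pd.Series(util, index=idx)

# STL removes the daily seasonality; ARIMA models the deseasonalized remainder
stl_fit = STL(series, period=24, robust=True).fit()
deseasonalized = series - stl_fit.seasonal
arima_fit = ARIMA(deseasonalized, order=(1, 1, 1)).fit()

# Multi-step forecast: add the last seasonal cycle back onto the ARIMA forecast
horizon = 24
forecast = arima_fit.forecast(horizon) + stl_fit.seasonal[-24:].to_numpy()
mad = np.mean(np.abs(series - series.mean()))     # MAD of the monitored measurements
```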
Hyland, Philip; Shevlin, Mark; Adamson, Gary; Boduszek, Daniel
2014-01-01
The Attitudes and Belief Scale-2 (ABS-2: DiGiuseppe, Leaf, Exner, & Robin, 1988. The development of a measure of rational/irrational thinking. Paper presented at the World Congress of Behavior Therapy, Edinburgh, Scotland.) is a 72-item self-report measure of evaluative rational and irrational beliefs widely used in Rational Emotive Behavior Therapy research contexts. However, little psychometric evidence exists regarding the measure's underlying factor structure. Furthermore, given the length of the ABS-2 there is a need for an abbreviated version that can be administered when there are time demands on the researcher, such as in clinical settings. This study sought to examine a series of theoretical models hypothesized to represent the latent structure of the ABS-2 within an alternative models framework using traditional confirmatory factor analysis as well as utilizing a bifactor modeling approach. Furthermore, this study also sought to develop a psychometrically sound abbreviated version of the ABS-2. Three hundred and thirteen (N = 313) active emergency service personnel completed the ABS-2. Results indicated that for each model, the application of bifactor modeling procedures improved model fit statistics, and a novel eight-factor intercorrelated solution was identified as the best fitting model of the ABS-2. However, the observed fit indices failed to satisfy commonly accepted standards. A 24-item abbreviated version was thus constructed and an intercorrelated eight-factor solution yielded satisfactory model fit statistics. Current results support the use of a bifactor modeling approach to determining the factor structure of the ABS-2. Furthermore, results provide empirical support for the psychometric properties of the newly developed abbreviated version.
Spacecraft Internal Acoustic Environment Modeling
NASA Technical Reports Server (NTRS)
Chu, S. Reynold; Allen, Chris
2009-01-01
The objective of the project is to develop an acoustic modeling capability, based on commercial off-the-shelf software, to be used as a tool for oversight of the future manned Constellation vehicles. The use of such a model will help ensure compliance with acoustic requirements. Also, this project includes modeling validation and development feedback via building physical mockups and conducting acoustic measurements to compare with the predictions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huerta, Gabriel
The objective of the project is to develop strategies for better representing scientific sensibilities within statistical measures of model skill that then can be used within a Bayesian statistical framework for data-driven climate model development and improved measures of model scientific uncertainty. One of the thorny issues in model evaluation is quantifying the effect of biases on climate projections. While any bias is not desirable, only those biases that affect feedbacks affect scatter in climate projections. The effort at the University of Texas is to analyze previously calculated ensembles of CAM3.1 with perturbed parameters to discover how biases affect projections of global warming. The hypothesis is that compensating errors in the control model can be identified by their effect on a combination of processes and that developing metrics that are sensitive to dependencies among state variables would provide a way to select versions of climate models that may reduce scatter in climate projections. Gabriel Huerta at the University of New Mexico is responsible for developing statistical methods for evaluating these field dependencies. The UT effort will incorporate these developments into MECS, which is a set of python scripts being developed at the University of Texas for managing the workflow associated with data-driven climate model development over HPC resources. This report reflects the main activities at the University of New Mexico, where the PI (Huerta) and the Postdocs (Nosedal, Hattab and Karki) worked on the project.
RECENT ENHANCEMENTS TO THE DIETARY EXPOSURE POTENTIAL MODEL
Presentation describes recent enhancements & new applications of the Dietary Exposure Potential Model (DEPM), a model developed to assist in design & interpretation of dietary exposure measurements. Model is an interactive system that provides dietary exposure estimates using dat...
DeMier, R L; Hynan, M T; Hatfield, R F; Varner, M W; Harris, H B; Manniello, R L
2000-01-01
A measurement model of perinatal stressors was first evaluated for reliability and then used to identify risk factors for postnatal emotional distress in high-risk mothers. In Study 1, six measures (gestational age of the baby, birthweight, length of the baby's hospitalization, a postnatal complications rating for the infant, and Apgar scores at 1 and 5 min) were obtained from chart reviews of preterm births at two different hospitals. Confirmatory factor analyses revealed that the six measures could be accounted for by three factors: (a) Infant Maturity, (b) Apgar Ratings, and (c) Complications. In Study 2, a modified measurement model indicated that Infant Maturity and Complications were significant predictors of postnatal emotional distress in an additional sample of mothers. This measurement model may also be useful in predicting (a) other measures of psychological distress in parents, and (b) measures of cognitive and motor development in infants.
Near-infrared diffuse reflection systems for chlorophyll content of tomato leaves measurement
NASA Astrophysics Data System (ADS)
Jiang, Huanyu; Ying, Yibin; Lu, Huishan
2006-10-01
In this study, two measuring systems for the chlorophyll content of tomato leaves were developed based on near-infrared spectral techniques. The systems mainly consist of an FT-IR spectrum analyzer, fiber-optic diffuse reflection accessories, and a data acquisition card. Diffuse reflectance of intact tomato leaves was measured with a fiber-optic diffuse reflection accessory and a smart diffuse reflection accessory. Calibration models were developed from spectral and constituent measurements: 90 samples served as the calibration set and 30 samples served as the validation set. Partial least squares (PLS) and principal component regression (PCR) techniques were used to develop the prediction models under different data preprocessing. The best model for chlorophyll content had a high correlation coefficient of 0.9348 and a low root-mean-square error of prediction (RMSEP) of 4.79, obtained using the full range (12,500-4,000 cm-1) with multiplicative scatter correction (MSC) path-length correction applied to log(1/R) spectra. The results of this study suggest that the FT-NIR method is feasible for detecting the chlorophyll content of tomato leaves rapidly and nondestructively.
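The calibration workflow described here (separate calibration and validation sets, PLS regression, RMSEP and correlation as figures of merit) can be sketched with scikit-learn. The snippet below is a minimal illustration on synthetic data; the array names, the 90/30 split sizes, and the choice of eight latent variables are placeholders, not details taken from the study.

```python
# Hedged sketch: PLS calibration of leaf chlorophyll from NIR spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 200))                               # 120 spectra x 200 wavenumbers (synthetic)
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=120)    # synthetic chlorophyll values

X_cal, y_cal = X[:90], y[:90]            # 90 calibration samples
X_val, y_val = X[90:], y[90:]            # 30 validation samples

pls = PLSRegression(n_components=8)      # number of latent variables is a tuning choice
pls.fit(X_cal, y_cal)
y_pred = pls.predict(X_val).ravel()

rmsep = np.sqrt(mean_squared_error(y_val, y_pred))
print(f"r = {np.corrcoef(y_val, y_pred)[0, 1]:.3f}, RMSEP = {rmsep:.3f}")
```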
NASA Technical Reports Server (NTRS)
Mack, R. A.; Wylie, D. P.
1982-01-01
A technique was developed for estimating the condensation rates of convective storms using satellite measurements of cirrus anvil expansion rates and radiosonde measurements of environmental water vapor. Three cases of severe convection in Oklahoma were studied and a diagnostic model was developed for integrating radiosonde data with satellite data. Two methods were used to measure the anvil expansion rates - the expansion of isotherm contours on infrared images, and the divergent motions of small brightness anomalies tracked on the visible images. The differences between the two methods were large as the storms developed, but these differences became small in the latter stage of all three storms. A comparison between the three storms indicated that the available moisture in the lowest levels greatly affected the rain rates of the storms. This was evident from both the measured rain rates of the storms and the condensation rates estimated by the model. The possibility of using this diagnostic model for estimating the intensities of convective storms also is discussed.
Process-based model with flood control measures towards more realistic global flood modeling
NASA Astrophysics Data System (ADS)
Tang, Q.; Zhang, X.; Wang, Y.; Mu, M.; Lv, A.; Li, Z.
2017-12-01
In the profoundly human-influenced era, the Anthropocene, an increasing amount of land has been developed in floodplains, and many flood control measures have been implemented to protect people and infrastructure placed in flood-prone areas. These human influences (for example, dams and dykes) have altered peak streamflow and flood risk, and are already an integral part of flooding. However, most process-based flood models have yet to take these human influences into account. In this study, we used a hydrological model together with an advanced hydrodynamic model to assess flood risk in the Baiyangdian catchment. Baiyangdian Lake is the largest shallow freshwater lake in North China, and it was used as a flood storage area in the past. A new development hub for the Beijing-Tianjin-Hebei economic triangle, the Xiongan New Area, was recently established in the flood-prone area around the lake. The Shuttle Radar Topography Mission (SRTM) digital elevation model (DEM) was used to parameterize the hydrodynamic model simulation, and the inundation estimates were compared with published flood maps and observed inundation areas during extreme historical flood events. A simple scheme was used to consider the impacts of flood control measures, including the reservoirs in the headwaters and the dykes to be built. By comparing model simulations with and without the influences of flood control measures, we demonstrated the importance of human influences in altering the inundated area and depth under design flood conditions. Based on the SRTM DEM and dam and reservoir data in the Global Reservoir and Dam (GRanD) database, we further discuss the potential to develop a global flood model with human influences.
Stein, George Juraj; Múcka, Peter; Chmúrny, Rudolf; Hinz, Barbara; Blüthner, Ralph
2007-01-01
For modelling purposes and for evaluation of driver's seat performance in the vertical direction, various mechano-mathematical models of the seated human body have been developed and standardized by the ISO. No such models have existed hitherto for a human body sitting upright on a cushioned seat upper part, as used in industrial environments, where fore-and-aft vibrations play an important role. The interaction with the steering wheel has to be taken into consideration, as well as the position of the upper torso with respect to the cushioned seat back, as observed in real driving conditions. This complex problem has to be simplified first to arrive at manageable simpler models that still reflect the main problem features. In a laboratory study, accelerations and forces in the x-direction were measured at the seat base during whole-body vibration in the fore-and-aft direction (random signal in the frequency range between 0.3 and 30 Hz, vibration magnitudes 0.28, 0.96, and 2.03 m s(-2) unweighted rms). Thirteen male subjects with body masses between 62.2 and 103.6 kg were chosen for the tests. They sat on a cushioned driver seat with hands on a support and backrest contact in the lumbar region only. Based on these laboratory measurements, a linear model of the system of seated human body and cushioned seat in the fore-and-aft direction has been developed. The model accounts for the reaction from the steering wheel. Model parameters were identified for each subject from the measured apparent mass values (modulus and phase). The developed model structure and the averaged parameters can be used for further biodynamical research in this field.
Development of a working Hovercraft model
NASA Astrophysics Data System (ADS)
Noor, S. H. Mohamed; Syam, K.; Jaafar, A. A.; Mohamad Sharif, M. F.; Ghazali, M. R.; Ibrahim, W. I.; Atan, M. F.
2016-02-01
This paper presents the development process used to fabricate a working hovercraft model. The purpose of this study is to design and investigate a fully functional hovercraft based on the studies that had been done. Different hovercraft model designs were made and tested, but only one of the models is presented in this paper. The weight, thrust, lift, and drag force of the model were measured, and the electrical and mechanical parts are also presented. The processing unit of this model is an Arduino Uno, with a PSP2 (PlayStation 2) controller used as the control input. Since the prototype should function on all kinds of terrain, the model was also tested under different floor conditions, including water, grass, cement, and tile. In each case the speed of the model was measured as the response variable, with current (I) as the manipulated variable and voltage (V) as the constant variable.
Effective Biot theory and its generalization to poroviscoelastic models
NASA Astrophysics Data System (ADS)
Liu, Xu; Greenhalgh, Stewart; Zhou, Bing; Greenhalgh, Mark
2018-02-01
A method is suggested to express the effective bulk modulus of the solid frame of a poroelastic material as a function of the saturated bulk modulus. This method enables effective Biot theory to be described through the use of seismic dispersion measurements or other models developed for the effective saturated bulk modulus. The effective Biot theory is generalized to a poroviscoelastic model of which the moduli are represented by the relaxation functions of the generalized fractional Zener model. The latter covers the general Zener and the Cole-Cole models as special cases. A global search method is described to determine the parameters of the relaxation functions, and a simple deterministic method is also developed to find the defining parameters of the single Cole-Cole model. These methods enable poroviscoelastic models to be constructed, which are based on measured seismic attenuation functions, and ensure that the model dispersion characteristics match the observations.
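One common parameterization of the fractional Zener relaxation behavior mentioned above expresses the complex modulus as M(ω) = M_R (1 + (iωτ_ε)^α) / (1 + (iωτ_σ)^α), which reduces to the classical Zener and Cole-Cole forms for particular parameter choices. The sketch below evaluates that form; this parameterization and all numerical values are illustrative assumptions and are not taken from the paper.

```python
# Hedged sketch: a fractional Zener (Cole-Cole type) complex modulus.
import numpy as np

def fractional_zener_modulus(omega, M_R, tau_eps, tau_sig, alpha):
    """Complex modulus of a fractional Zener element at angular frequency omega."""
    iw = 1j * omega
    return M_R * (1.0 + (iw * tau_eps) ** alpha) / (1.0 + (iw * tau_sig) ** alpha)

omega = 2 * np.pi * np.logspace(0, 4, 5)      # 1 Hz to 10 kHz (illustrative)
M = fractional_zener_modulus(omega, M_R=9.0e9, tau_eps=1.2e-3, tau_sig=1.0e-3, alpha=0.7)
Q_inv = np.imag(M) / np.real(M)               # attenuation (1/Q) implied by the modulus
print(np.round(Q_inv, 4))
```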
Statistical analysis of target acquisition sensor modeling experiments
NASA Astrophysics Data System (ADS)
Deaver, Dawne M.; Moyer, Steve
2015-05-01
The U.S. Army RDECOM CERDEC NVESD Modeling and Simulation Division is charged with the development and advancement of military target acquisition models to estimate expected soldier performance when using all types of imaging sensors. Two elements of sensor modeling are (1) laboratory-based psychophysical experiments used to measure task performance and calibrate the various models and (2) field-based experiments used to verify the model estimates for specific sensors. In both types of experiments, it is common practice to control or measure environmental, sensor, and target physical parameters in order to minimize uncertainty of the physics based modeling. Predicting the minimum number of test subjects required to calibrate or validate the model should be, but is not always, done during test planning. The objective of this analysis is to develop guidelines for test planners which recommend the number and types of test samples required to yield a statistically significant result.
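As a rough illustration of the test-planning question raised here (how many subjects are needed for a statistically significant result), a standard two-sample power calculation can be run with statsmodels. The effect size, significance level, and power target below are assumed values for illustration only, not figures from the NVESD experiments.

```python
# Hedged sketch: sample-size estimate for detecting a given difference in task
# performance between two observer groups, via a two-sample t-test power analysis.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Approx. subjects per group: {n_per_group:.1f}")
```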
Tests of wildlife habitat models to evaluate oak-mast production
Schroeder, R.L.; Vangilder, L.D.
1997-01-01
We measured oak-mast production and forest structure and composition in the Ozark Mountains of Missouri and tested the accuracy of oak-mast prediction variables from 5 Habitat Suitability Index (HSI) species models. Acorn production was positively associated with several measures of abundance and canopy cover of oak trees, and with an index of mast production for all 5 HSI models. We developed 2 modified oak-mast models, based on inputs related to either oak tree density or oak canopy cover and diversity of oak tree species. The revised models accounted for 22-32% of the variance associated with acorn abundance. Future tests of HSI models should consider: (1) the concept of upper limits imposed by habitat and the effects of nonhabitat factors; (2) the benefits of a top-down approach to model development; and (3) testing models across broad geographic regions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosby, W. R.; Jensen, B. A.
2002-05-31
In recent years there has been a trend towards storage of Irradiated Nuclear Fuel (INF) in dry conditions rather than in underwater environments. At the same time, the Department of Energy (DOE) has begun encouraging custodians of INF to perform measurements on INF for which no recent fissile contents measurement data exists. INF, in the form of spent fuel from Experimental Breeder Reactor 2 (EBR-II), has been stored in close-fitting, dry underground storage locations at the Radioactive Scrap and Waste Facility (RSWF) at Argonne National Laboratory-West (ANL-W) for many years. In Fiscal Year 2000, funding was obtained from the DOE Office of Safeguards and Security Technology Development Program to develop and prepare for deployment a Shielded Measurement System (SMS) to perform fissile content measurements on INF stored in the RSWF. The SMS is equipped to lift an INF item out of its storage location, perform scanning neutron coincidence and high-resolution gamma-ray measurements, and restore the item to its storage location. The neutron and gamma-ray measurement results are compared to predictions based on isotope depletion and Monte Carlo neutral-particle transport models to provide confirmation of the accuracy of the models and hence of the fissile material contents of the item as calculated by the same models. This paper describes the SMS and discusses the results of the first calibration and validation measurements performed with the SMS.
Saraswat, Prabhav; MacWilliams, Bruce A; Davis, Roy B; D'Astous, Jacques L
2013-01-01
Several multisegment foot models have been proposed and some have been used to study foot pathologies. These models have been tested and validated in typically developing populations; however, application of such models to feet with significant deformities presents an additional set of challenges. For the first time, in this study, a multisegment foot model is tested for repeatability in a population of children with symptomatic abnormal feet. The results from this population are compared to the same metrics collected from an age-matched (8-14 years) typically developing population. The modified Shriners Hospitals for Children, Greenville (mSHCG) foot model was applied to ten typically developing children and eleven children with planovalgus feet by two clinicians. Five subjects in each group were retested by both clinicians after 4-6 weeks. Both intra-clinician and inter-clinician repeatability were evaluated using static and dynamic measures. A plaster mold method was used to quantify variability arising from marker placement error. Dynamic variability was measured by examining trial differences from the same subjects when multiple clinicians carried out the data collection multiple times. For hindfoot and forefoot angles, static and dynamic variability in both groups was found to be less than 4° and 6°, respectively. The mSHCG model strategy of minimal reliance on anatomical markers for dynamic measures, and the inherent flexibility enabled by separate anatomical and technical coordinate systems, resulted in a model equally repeatable in typically developing and planovalgus populations.
A SCR Model Calibration Approach with Spatially Resolved Measurements and NH3 Storage Distributions
Song, Xiaobo; Parker, Gordon G.; Johnson, John H.; ...
2014-11-27
Selective catalytic reduction (SCR) is a technology used for reducing NOx emissions in heavy-duty diesel (HDD) engine exhaust. In this study, the spatially resolved capillary inlet infrared spectroscopy (Spaci-IR) technique was used to study the gas concentration and NH3 storage distributions in an SCR catalyst, and to provide data for developing an SCR model to analyze the axial gaseous concentrations and axial distributions of NH3 storage. A two-site SCR model is described for simulating the reaction mechanisms. The model equations and a calculation method were developed using the Spaci-IR measurements to determine the NH3 storage capacity and the relationships between certain kinetic parameters of the model. A calibration approach was then applied for tuning the kinetic parameters using the spatial gaseous measurements and calculated NH3 storage as a function of axial position, instead of inlet and outlet gaseous concentrations of NO, NO2, and NH3. The equations and the approach for determining the NH3 storage capacity of the catalyst, and a method of dividing the NH3 storage capacity between the two storage sites, are presented. It was determined that the kinetic parameters of the adsorption and desorption reactions have to follow certain relationships for the model to simulate the experimental data. Finally, the modeling results served as a basis for developing full model calibrations to SCR lab reactor and engine data and for state estimator development as described in the references (Song et al. 2013a, b; Surenahalli et al. 2013).
Development of assessment tools to measure organizational support for employee health.
Golaszewski, Thomas; Barr, Donald; Pronk, Nico
2003-01-01
To develop systems that measure and effect organizational support for employee health. Multiple studies and developmental projects were reviewed that show the process of instrument development, metric quality testing, utilization within intervention studies, and prediction modeling efforts. Demographic patterns indicate high support levels and relationships of subsections to various employee health risks. Successes with the initial version have given rise to 2 additional evaluation tools. The availability of these systems illustrates how ecological models can be practically applied. Such efforts contribute to the paradigm shift in worksite health promotion that focuses on the organization as the target of intervention.
NASA Astrophysics Data System (ADS)
Lafontaine, J.; Hay, L.; Markstrom, S. L.
2016-12-01
The United States Geological Survey (USGS) has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development, and facilitate the application of hydrologic simulations within the conterminous United States (CONUS). As many stream reaches in the CONUS are either not gaged, or are substantially impacted by water use or flow regulation, ancillary information must be used to determine reasonable parameter estimations for streamflow simulations. Hydrologic models for 1,576 gaged watersheds across the CONUS were developed to test the feasibility of improving streamflow simulations linking physically-based hydrologic models with remotely-sensed data products (i.e. snow water equivalent). Initially, the physically-based models were calibrated to measured streamflow data to provide a baseline for comparison across multiple calibration strategy tests. In addition, not all ancillary datasets are appropriate for application to all parts of the CONUS (e.g. snow water equivalent in the southeastern U.S., where snow is a rarity). As it is not expected that any one data product or model simulation will be sufficient for representing hydrologic behavior across the entire CONUS, a systematic evaluation of which data products improve hydrologic simulations for various regions across the CONUS was performed. The resulting portfolio of calibration strategies can be used to guide selection of an appropriate combination of modeled and measured information for hydrologic model development and calibration. In addition, these calibration strategies have been developed to be flexible so that new data products can be assimilated. This analysis provides a foundation to understand how well models work when sufficient streamflow data are not available and could be used to further inform hydrologic model parameter development for ungaged areas.
Winfield, Kari A.
2005-01-01
Because characterizing the unsaturated hydraulic properties of sediments over large areas or depths is costly and time consuming, development of models that predict these properties from more easily measured bulk-physical properties is desirable. At the Idaho National Engineering and Environmental Laboratory, the unsaturated zone is composed of thick basalt flow sequences interbedded with thinner sedimentary layers. Determining the unsaturated hydraulic properties of sedimentary layers is one step in understanding water flow and solute transport processes through this complex unsaturated system. Multiple linear regression was used to construct simple property-transfer models for estimating the water-retention curve and saturated hydraulic conductivity of deep sediments at the Idaho National Engineering and Environmental Laboratory. The regression models were developed from 109 core sample subsets with laboratory measurements of hydraulic and bulk-physical properties. The core samples were collected at depths of 9 to 175 meters at two facilities within the southwestern portion of the Idaho National Engineering and Environmental Laboratory: the Radioactive Waste Management Complex, and the Vadose Zone Research Park southwest of the Idaho Nuclear Technology and Engineering Center. Four regression models were developed using bulk-physical property measurements (bulk density, particle density, and particle size) as the potential explanatory variables. Three representations of the particle-size distribution were compared: (1) textural-class percentages (gravel, sand, silt, and clay), (2) geometric statistics (mean and standard deviation), and (3) graphical statistics (median and uniformity coefficient). The four response variables, estimated from linear combinations of the bulk-physical properties, included saturated hydraulic conductivity and three parameters that define the water-retention curve. For each core sample, values of each water-retention parameter were estimated from the appropriate regression equation and used to calculate an estimated water-retention curve. The degree to which the estimated curve approximated the measured curve was quantified using a goodness-of-fit indicator, the root-mean-square error. Comparison of the root-mean-square-error distributions for each alternative particle-size model showed that the estimated water-retention curves were insensitive to the way the particle-size distribution was represented. Bulk density, the median particle diameter, and the uniformity coefficient were chosen as input parameters for the final models. The property-transfer models developed in this study allow easy determination of hydraulic properties without need for their direct measurement. Additionally, the models provide the basis for development of theoretical models that rely on physical relationships between the pore-size distribution and the bulk-physical properties of the media. With this adaptation, the property-transfer models should have greater application throughout the Idaho National Engineering and Environmental Laboratory and other geographic locations.
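A property-transfer regression of the kind described (predicting a water-retention parameter from bulk density, median particle diameter, and uniformity coefficient, then scoring the estimate with a root-mean-square error) might look like the following sketch. The synthetic data and the unspecified retention parameter are assumptions for illustration; they do not reproduce the report's regressions.

```python
# Hedged sketch: multiple linear regression from bulk-physical properties to a
# water-retention parameter, with an RMSE goodness-of-fit indicator.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 109
X = np.column_stack([
    rng.uniform(1.2, 1.9, n),      # bulk density (g/cm^3)
    rng.uniform(0.01, 2.0, n),     # median particle diameter (mm)
    rng.uniform(2.0, 40.0, n),     # uniformity coefficient
])
param_meas = 0.05 + 0.02 * X[:, 1] - 0.01 * X[:, 0] + rng.normal(0, 0.005, n)  # synthetic

model = LinearRegression().fit(X, param_meas)
param_est = model.predict(X)
rmse = np.sqrt(np.mean((param_est - param_meas) ** 2))
print(f"RMSE of estimated retention parameter: {rmse:.4f}")
```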
A new fiber optic sensor for inner surface roughness measurement
NASA Astrophysics Data System (ADS)
Xu, Xiaomei; Liu, Shoubin; Hu, Hong
2009-11-01
In order to measure the inner surface roughness of small holes nondestructively, a new fiber optic sensor was researched and developed. Firstly, a new model for surface roughness measurement is proposed, based on intensity-modulated fiber optic sensors and scattering modeling of rough surfaces. Secondly, a fiber optic measurement system is designed and set up. With the help of new techniques, the fiber optic sensor can be miniaturized. Furthermore, the use of a micro prism turns the light by 90 degrees, so the roughness of the inner side surface of small holes can be measured. Thirdly, the fiber optic sensor is calibrated against standard surface roughness specimens, and a series of measurement experiments have been done. The measurement results are compared with those obtained by a TR220 Surface Roughness Instrument and a Form Talysurf Laser 635, and the validity of the developed fiber optic sensor is verified. Finally, the precision and influence factors of the fiber optic sensor are analyzed.
Dry deposition of reduced and reactive nitrogen: A surrogate surfaces approach
NASA Astrophysics Data System (ADS)
Shahin, Usama Mohammed
Nitrogen deposition constitutes an important component of acidic deposition to terrestrial surfaces. However, deposition flux and ambient concentration measurement methods are still under development. A new sampler using water as a surrogate surface was developed in the Department of Environmental Engineering at Illinois Institute of Technology. This study investigated nitrate and ammonia dry deposition to the water surface sampler (WSS), a Nylasorb filter, a citric acid impregnated filter, and a greased strip on the dry deposition plate. The nitrogen-containing species that may be responsible for nitrate dry deposition to the WSS include nitrogen monoxide (NO), nitrogen dioxide (NO2), peroxyacetyl nitrate (PAN), nitrous acid (HNO2), nitric acid (HNO3), and particulate nitrate. The experimental measurements showed that HNO3 and particulate nitrate are the major nitrate contributors to the WSS. Ammonia sources to the water surface are ammonia gas (NH3) and ammonium (NH4+). The experimental results showed that these two species are the sole sources of ammonium deposition. Comparison between the measured deposition velocities of SO2 and HNO3 shows that their dry deposition velocities are statistically the same at the 95% confidence level, and the NH3 deposition velocity and the water evaporation rate are also the same. It was also shown that the air-side MTC of two different compounds was correlated with the square root of the inverse of the molecular weight of the compounds. The measured MTC was tested by the application of two models, the resistance model and the water evaporation model. The resistance model prediction of the MTC was very close to the measured value, but the evaporation model prediction was not. This result is compatible with the finding of Yi (1997), who used the same WSS for measurements of SO2. The experimental data collected in this research project were used to develop an empirical model for the MTC of the form kl/D = 0.0426 (lvρ/μ)^0.8 (μ/ρD)...
Feedbacks between Reservoir Operation and Floodplain Development
NASA Astrophysics Data System (ADS)
Wallington, K.; Cai, X.
2017-12-01
The increased connectedness of socioeconomic and natural systems warrants the study of them jointly as Coupled Natural-Human Systems (CNHS) (Liu et al., 2007). One such CNHS given significant attention in recent years has been the coupled sociological-hydrological system of floodplains. Di Baldassarre et al. (2015) developed a model coupling floodplain development and levee heightening, a flood control measure, which demonstrated the "levee effect" and "adaptation effect" seen in observations. Here, we adapt the concepts discussed by Di Baldassarre et al. (2015) and apply them to floodplains in which the primary flood control measure is reservoir storage, rather than levee construction, to study the role of feedbacks between reservoir operation and floodplain development. Specifically, we investigate the feedback between floodplain development and optimal management of trade-offs between flood water conservation and flood control. By coupling a socio-economic model based on that of Di Baldassarre et al. (2015) with a reservoir optimization model based on that discussed in Ding et al. (2017), we show that reservoir operation rules can co-evolve with floodplain development. Furthermore, we intend to demonstrate that the model results are consistent with real-world data for reservoir operating curves and floodplain development. This model will help explain why some reservoirs are currently operated for purposes which they were not originally intended and thus inform reservoir design and construction.
Development of the Artistic Supervision Model Scale (ASMS)
ERIC Educational Resources Information Center
Kapusuzoglu, Saduman; Dilekci, Umit
2017-01-01
The purpose of the study is to develop the Artistic Supervision Model Scale in accordance with the perceptions of inspectors and of elementary and secondary school teachers regarding artistic supervision. The lack of a measuring instrument related to the model of artistic supervision in the literature reveals the necessity of such a study. 290…
A survival model for individual shortleaf pine trees in even-aged natural stands
Thomas B. Lynch; Michael M. Huebschmann; Paul A. Murphy
2000-01-01
A model was developed that predicts the probability of survival for individual shortleaf pine (Pinus echinata Mill.) trees growing in even-aged natural stands. Data for model development were obtained from the first two measurements of permanently established plots located in naturally occurring shortleaf pine forests on the Ouachita and Ozark...
The Teaching-Research Gestalt: The Development of a Discipline-Based Scale
ERIC Educational Resources Information Center
Duff, Angus; Marriott, Neil
2017-01-01
This paper reports the development and empirical testing of a model of the factors that influence the teaching-research nexus. No prior work has attempted to create a measurement model of the nexus. The conceptual model is derived from 19 propositions grouped into four sets of factors relating to: rewards, researchers, curriculum, and students.…
Evangelista, P.; Kumar, S.; Stohlgren, T.J.; Crall, A.W.; Newman, G.J.
2007-01-01
Predictive models of aboveground biomass of nonnative Tamarix ramosissima of various sizes were developed using destructive sampling techniques on 50 individuals and four 100-m2 plots. Each sample was measured for average height (m) of stems and canopy area (m2) prior to cutting, drying, and weighing. Five competing regression models (P < 0.05) were developed to estimate aboveground biomass of T. ramosissima using average height and/or canopy area measurements and were evaluated using Akaike's Information Criterion corrected for small sample size (AICc). Our best model (AICc = -148.69, ΔAICc = 0) successfully predicted T. ramosissima aboveground biomass (R2 = 0.97) and used average height and canopy area as predictors. Our 2nd-best model, using the same predictors, was also successful in predicting aboveground biomass (R2 = 0.97, AICc = -131.71, ΔAICc = 16.98). A 3rd model demonstrated high correlation between only aboveground biomass and canopy area (R2 = 0.95), while 2 additional models found high correlations between aboveground biomass and average height measurements only (R2 = 0.90 and 0.70, respectively). These models illustrate how simple field measurements, such as height and canopy area, can be used in allometric relationships to accurately predict aboveground biomass of T. ramosissima. Although a correction factor may be necessary for predictions at larger scales, the models presented will prove useful for many research and management initiatives.
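Model ranking by AICc, as used above, follows directly from the residual sum of squares of each least-squares fit. The sketch below applies the standard small-sample correction to three candidate allometric predictors; the synthetic data and the candidate set are illustrative assumptions, not the study's measurements.

```python
# Hedged sketch: comparing least-squares allometric models with AICc.
import numpy as np

def aicc_ls(rss, n, k):
    """AICc for a least-squares fit with n samples and k estimated parameters
    (regression coefficients plus the error variance)."""
    aic = n * np.log(rss / n) + 2 * k
    return aic + (2 * k * (k + 1)) / (n - k - 1)

rng = np.random.default_rng(2)
n = 54
height = rng.uniform(0.5, 4.0, n)                  # average height (m), synthetic
canopy = rng.uniform(0.2, 12.0, n)                 # canopy area (m2), synthetic
biomass = 1.5 * height * canopy + rng.normal(0, 1.0, n)

candidates = {"height+canopy": np.column_stack([height, canopy]),
              "canopy only": canopy[:, None],
              "height only": height[:, None]}
for name, X in candidates.items():
    A = np.column_stack([np.ones(n), X])           # add intercept column
    _, rss, *_ = np.linalg.lstsq(A, biomass, rcond=None)
    k = A.shape[1] + 1                             # coefficients + error variance
    print(f"{name:14s} AICc = {aicc_ls(float(rss[0]), n, k):8.2f}")
```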
Kinetic Model of Growth of Arthropoda Populations
NASA Astrophysics Data System (ADS)
Ershov, Yu. A.; Kuznetsov, M. A.
2018-05-01
Kinetic equations were derived for calculating the growth of crustacean (Crustacea) populations based on the biological growth model suggested earlier using shrimp (Caridea) populations as an example. The development cycle of successive stages for populations can be represented in the form of quasi-chemical equations. The kinetic equations that describe the development cycle of crustaceans allow quantitative prediction of the development of populations depending on conditions. In contrast to extrapolation-simulation models, in the developed kinetic model of biological growth the kinetic parameters are the experimental characteristics of population growth. Verification and parametric identification of the developed model on the basis of the experimental data showed agreement with experiment within the error of the measurement technique.
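A stage-structured, quasi-chemical kinetic model of the sort described can be written as a small system of first-order rate equations and integrated numerically. The sketch below assumes a hypothetical egg-larva-juvenile-adult chain with illustrative rate constants; it is not the authors' parameterization.

```python
# Hedged sketch: stage-structured kinetic (ODE) population model,
# egg -> larva -> juvenile -> adult, with adult reproduction and mortality.
import numpy as np
from scipy.integrate import solve_ivp

k12, k23, k34 = 0.30, 0.20, 0.10     # stage-transition rates (1/day), illustrative
b, m = 0.50, 0.05                    # adult egg-production and mortality rates

def rhs(t, y):
    egg, larva, juv, adult = y
    return [b * adult - k12 * egg - m * egg,
            k12 * egg - k23 * larva - m * larva,
            k23 * larva - k34 * juv - m * juv,
            k34 * juv - m * adult]

sol = solve_ivp(rhs, (0, 120), [100, 0, 0, 10], t_eval=np.linspace(0, 120, 7))
print(np.round(sol.y[3], 1))          # adult population over time
```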
Phase 2 development of Great Lakes algorithms for Nimbus-7 coastal zone color scanner
NASA Technical Reports Server (NTRS)
Tanis, Fred J.
1984-01-01
A series of experiments have been conducted in the Great Lakes designed to evaluate the application of the NIMBUS-7 Coastal Zone Color Scanner (CZCS). Atmospheric and water optical models were used to relate surface and subsurface measurements to satellite measured radiances. Absorption and scattering measurements were reduced to obtain a preliminary optical model for the Great Lakes. Algorithms were developed for geometric correction, correction for Rayleigh and aerosol path radiance, and prediction of chlorophyll-a pigment and suspended mineral concentrations. The atmospheric algorithm developed compared favorably with existing algorithms and was the only algorithm found to adequately predict the radiance variations in the 670 nm band. The atmospheric correction algorithm developed was designed to extract needed algorithm parameters from the CZCS radiance values. The Gordon/NOAA ocean algorithms could not be demonstrated to work for Great Lakes waters. Predicted values of chlorophyll-a concentration compared favorably with expected and measured data for several areas of the Great Lakes.
Classen, Sherrilene; Winter, Sandra M.; Velozo, Craig A.; Bédard, Michel; Lanford, Desiree N.; Brumback, Babette; Lutz, Barbara J.
2010-01-01
OBJECTIVE We report on item development and validity testing of a self-report older adult safe driving behaviors measure (SDBM). METHOD On the basis of theoretical frameworks (Precede–Proceed Model of Health Promotion, Haddon’s matrix, and Michon’s model), existing driving measures, and previous research and guided by measurement theory, we developed items capturing safe driving behavior. Item development was further informed by focus groups. We established face validity using peer reviewers and content validity using expert raters. RESULTS Peer review indicated acceptable face validity. Initial expert rater review yielded a scale content validity index (CVI) rating of 0.78, with 44 of 60 items rated ≥0.75. Sixteen unacceptable items (≤0.5) required major revision or deletion. The next CVI scale average was 0.84, indicating acceptable content validity. CONCLUSION The SDBM has relevance as a self-report to rate older drivers. Future pilot testing of the SDBM comparing results with on-road testing will define criterion validity. PMID:20437917
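The content validity index reported here is computed from expert relevance ratings: an item-level CVI is the proportion of experts rating the item relevant, and the scale CVI averages the item values. A minimal sketch with a synthetic ratings matrix is shown below; the numbers of experts and items are placeholders, not the study's 60-item panel.

```python
# Hedged sketch: item-level and scale-average content validity indices (CVI).
import numpy as np

rng = np.random.default_rng(3)
ratings = rng.integers(1, 5, size=(6, 10))    # 6 experts x 10 items, 4-point relevance scale

i_cvi = (ratings >= 3).mean(axis=0)           # proportion of experts rating item 3 or 4
s_cvi_ave = i_cvi.mean()                      # scale CVI, averaging method

print(np.round(i_cvi, 2), f"S-CVI/Ave = {s_cvi_ave:.2f}")
print("items needing major revision or deletion:", np.where(i_cvi <= 0.5)[0])
```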
Enhanced data validation strategy of air quality monitoring network.
Harkat, Mohamed-Faouzi; Mansouri, Majdi; Nounou, Mohamed; Nounou, Hazem
2018-01-01
Quick validation and detection of faults in measured air quality data is a crucial step towards achieving the objectives of air quality networks. Therefore, the objectives of this paper are threefold: (i) to develop a modeling technique that can be used to predict the normal behavior of air quality variables and help provide an accurate reference for monitoring purposes; (ii) to develop a fault detection method that can effectively and quickly detect any anomalies in measured air quality data. For this purpose, a new fault detection method based on the combination of the generalized likelihood ratio test (GLRT) and the exponentially weighted moving average (EWMA) is developed. GLRT is a well-known statistical fault detection method that relies on maximizing the detection probability for a given false alarm rate. In this paper, we propose a GLRT-based EWMA fault detection method that is able to detect changes in the values of certain air quality variables; (iii) to develop a fault isolation and identification method that allows defining the fault source(s) in order to properly apply appropriate corrective actions. A reconstruction approach based on a Midpoint-Radii Principal Component Analysis (MRPCA) model is developed to handle the types of data and models associated with air quality monitoring networks. All air quality modeling, fault detection, fault isolation and reconstruction methods developed in this paper are validated using real air quality data (such as particulate matter, ozone, nitrogen and carbon oxide measurements).
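The EWMA component of the detector described above can be illustrated with a simple recursive chart applied to model residuals. The sketch below uses synthetic residuals with an injected mean shift; the smoothing constant, control-limit width, and fault size are illustrative choices, and the GLRT stage is omitted.

```python
# Hedged sketch: EWMA monitoring of model residuals for fault detection.
import numpy as np

rng = np.random.default_rng(4)
residuals = rng.normal(0, 1, 300)
residuals[200:] += 1.5                      # simulated sensor fault (mean shift)

lam, L = 0.2, 3.0                           # smoothing constant and control-limit width
sigma = residuals[:100].std(ddof=1)         # noise level estimated from fault-free data
limit = L * sigma * np.sqrt(lam / (2 - lam))

z, alarms = 0.0, []
for t, r in enumerate(residuals):
    z = lam * r + (1 - lam) * z             # EWMA update
    if abs(z) > limit:
        alarms.append(t)

print("first alarm at sample:", alarms[0] if alarms else None)
```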
Jason M. Forthofer; Bret W. Butler; Natalie S. Wagenbrenner
2014-01-01
For this study three types of wind models have been defined for simulating surface wind flow in support of wildland fire management: (1) a uniform wind field (typically acquired from coarse-resolution (~4 km) weather service forecast models); (2) a newly developed mass-conserving model and (3) a newly developed mass- and momentum-conserving model (referred to as the...
Laser triangulation method for measuring the size of parking claw
NASA Astrophysics Data System (ADS)
Liu, Bo; Zhang, Ming; Pang, Ying
2017-10-01
With the development of science and technology and the maturity of measurement techniques, 3D profile measurement technology has developed rapidly. Three-dimensional measurement technology is widely used in mold manufacturing, industrial inspection, and automated processing and manufacturing. In many situations in scientific research and industrial production, it is necessary to transform original mechanical parts into a 3D data model on the computer quickly and accurately. At present, many methods have been developed to measure contour size; laser triangulation is one of the most widely used.
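The basic single-point triangulation relation underlying such sensors expresses range as a function of the laser-camera baseline, the lens focal length, and the imaged spot offset. A minimal sketch follows; the perpendicular-baseline geometry and the numbers are illustrative assumptions, not the system described in the paper.

```python
# Hedged sketch: the simple laser triangulation range relation z = f * b / x.
def triangulation_range(baseline_mm, focal_mm, spot_offset_mm):
    """Range for a perpendicular-baseline triangulation geometry, where x is the
    laser-spot offset from the optical axis on the detector."""
    return focal_mm * baseline_mm / spot_offset_mm

b, f = 50.0, 16.0                       # baseline and focal length (mm), illustrative
for x in (0.8, 1.0, 1.2):               # spot offsets on the detector (mm)
    print(f"spot offset {x:.1f} mm -> range {triangulation_range(b, f, x):.1f} mm")
```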
A Subject-Specific Acoustic Model of the Upper Airway for Snoring Sounds Generation
Saha, Shumit; Bradley, T. Douglas; Taheri, Mahsa; Moussavi, Zahra; Yadollahi, Azadeh
2016-01-01
Monitoring variations in upper airway narrowing during sleep is invasive and expensive. Since snoring sounds are generated by air turbulence and vibrations of the upper airway due to its narrowing, they may be used as a non-invasive technique to assess upper airway narrowing. Our goal was to develop a subject-specific acoustic model of the upper airway to investigate the impacts of upper airway anatomy, e.g. length, wall thickness and cross-sectional area, on snoring sound features. To have a subject-specific model for snoring generation, we used measurements of the upper airway length, cross-sectional area and wall thickness from every individual to develop the model. To validate the proposed model, in 20 male individuals, the intensity and resonant frequencies of modeled snoring sounds were compared with those measured from recorded snoring sounds during sleep. Based on both modeled and measured results, we found the only factor that may positively and significantly contribute to snoring intensity was narrowing of the upper airway. Furthermore, measured resonant frequencies of snoring were inversely correlated with the upper airway length, which is a risk factor for upper airway collapsibility. These results encourage the use of snoring sound analysis to assess upper airway anatomy during sleep. PMID:27210576
NASA Astrophysics Data System (ADS)
Shen, Xiaoteng; Maa, Jerome P.-Y.
2017-11-01
In estuaries and coastal waters, floc size and its statistical distributions of cohesive sediments are of primary importance, due to their effects on the settling velocity and thus deposition rates of cohesive aggregates. The development of a robust flocculation model that includes the predictions of floc size distributions (FSDs), however, is still in a research stage. In this study, a one-dimensional longitudinal (1-DL) flocculation model along a streamtube is developed. This model is based on solving the population balance equation to find the FSDs by using the quadrature method of moments. To validate this model, a laboratory experiment is carried out to produce an advection transport-dominant environment in a cylindrical tank. The flow field is generated by a marine pump mounted at the bottom center, with its outlet facing upward. This setup generates an axially symmetric flow which is measured by an acoustic Doppler velocimeter (ADV). The measurement results provide the hydrodynamic input data required for this 1-DL model. The other measurement results, the FSDs, are acquired by using an automatic underwater camera system and the resulting images are analyzed to validate the predicted FSDs. This study shows that the FSDs as well as their representative sizes can be efficiently and reasonably simulated by this 1-DL model.
Bess, John D.; Fujimoto, Nozomu
2014-10-09
Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully-loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely-detailed models of the HTTR as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9% and 2.7% greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
A Note on Verification of Computer Simulation Models
ERIC Educational Resources Information Center
Aigner, Dennis J.
1972-01-01
Establishes an argument that questions the validity of one "test" of goodness-of-fit (the extent to which a series of obtained measures agrees with a series of theoretical measures) for the simulated time path of a simple endogenous (internally developed) variable in a simultaneous, perhaps dynamic, econometric model. (Author)
Developmental Sentence Scoring for Japanese
ERIC Educational Resources Information Center
Miyata, Susanne; MacWhinney, Brian; Otomo, Kiyoshi; Sirai, Hidetosi; Oshima-Takane, Yuriko; Hirakawa, Makiko; Shirai, Yasuhiro; Sugiura, Masatoshi; Itoh, Keiko
2013-01-01
This article reports on the development and use of the Developmental Sentence Scoring for Japanese (DSSJ), a new morpho-syntactical measure for Japanese constructed after the model of Lee's English Developmental Sentence Scoring model. Using this measure, the authors calculated DSSJ scores for 84 children divided into six age groups between 2;8…
ERIC Educational Resources Information Center
Fischer, Gerhard H.
1987-01-01
A natural parameterization and formalization of the problem of measuring change in dichotomous data is developed. Mathematically-exact definitions of specific objectivity are presented, and the basic structures of the linear logistic test model and the linear logistic model with relaxed assumptions are clarified. (SLD)
Estimating Infiltration Rates for a Loessal Silt Loam Using Soil Properties
M. Dean Knighton
1978-01-01
Soil properties were related to infiltration rates as measured by single-ring, steady-head infiltrometers. The properties showing strong simple correlations were identified. Regression models were developed to estimate infiltration rate from several soil properties. The best model gave fair agreement with measured rates at another location.
An Investigation of Calculus Learning Using Factorial Modeling.
ERIC Educational Resources Information Center
Dick, Thomas P.; Balomenos, Richard H.
Structural covariance models that would explain the correlations observed among mathematics achievement and participation measures and related cognitive and affective variables were developed. A sample of college calculus students (N=268; 124 females and 144 males) was administered a battery of cognitive tests (including measures of spatial-visual…
Comparison of modeled traffic exposure zones using on-road air pollution measurements
Modeled traffic data were used to develop traffic exposure zones (TEZs) such as traffic delay, high volume, and transit routes in the Research Triangle area of North Carolina (USA). On-road air pollution measurements of nitrogen dioxide (NO2), carbon monoxide (CO), carbon dioxid...
NASA Astrophysics Data System (ADS)
El Akbar, R. Reza; Anshary, Muhammad Adi Khairul; Hariadi, Dennis
2018-02-01
Model MACP for HE ver.1 is a model that describes how to perform performance measurement and monitoring for higher education. Based on a review of the research related to the model, several components of the model remain to be developed in further research, so this research has four main objectives. The first objective is to differentiate the CSF (critical success factor) components of the previous model; the second is to explore the KPIs (key performance indicators) in the previous model; the third, building on the previous objectives, is to design a new and more detailed model. The final, fourth objective is to design a prototype application for performance measurement in higher education based on the new model. The methods used are exploratory research and application design using the prototype method. The first result of this study is a new, more detailed model for measurement and monitoring of performance in higher education, obtained by differentiation and exploration of Model MACP for HE ver.1. The second result is a dictionary of college performance measurement compiled by re-evaluating the existing indicators. The third result is the design of a prototype application for performance measurement in higher education.
NASA Astrophysics Data System (ADS)
Janpaule, Inese; Haritonova, Diana; Balodis, Janis; Zarins, Ansis; Silabriedis, Gunars; Kaminskis, Janis
2015-03-01
Development of a digital zenith telescope prototype, improved zenith camera construction, and analysis of experimental vertical deflection measurements for the improvement of the Latvian geoid model have been performed at the Institute of Geodesy and Geoinformatics (GGI), University of Latvia. GOCE satellite data were used to compute a geoid model for the Riga region, and the European gravimetric geoid model EGG97 and 102 GNSS/levelling data points were used as input data in the calculation of the Latvian geoid model.
Lionberger, Megan A.; Schoellhamer, David H.; Shellenbarger, Gregory; Orlando, James L.; Ganju, Neil K.
2007-01-01
This report documents the development and application of a box model to simulate water level, salinity, and temperature of the Alviso Salt Pond Complex in South San Francisco Bay. These ponds were purchased for restoration in 2003 and currently are managed by the U.S. Fish and Wildlife Service to maintain existing wildlife habitat and prevent a build-up of salt during the development of a long-term restoration plan. The model was developed for the purpose of aiding pond managers during the current interim management period to achieve these goals. A previously developed box model of a salt pond, SPOOM, which calculates daily pond volume and salinity, was reconfigured to simulate multiple connected ponds, and a temperature subroutine was added. The updated model simulates rainfall, evaporation, water flowing between the ponds and the adjacent tidal slough network, and water flowing from one pond to the next by gravity and pumps. Theoretical and measured relations between discharge and corresponding differences in water level are used to simulate most flows between ponds and between ponds and sloughs. The principle of conservation of mass is used to calculate daily pond volume and salinity. The model configuration includes management actions specified in the Interim Stewardship Plan for the ponds. The temperature subroutine calculates hourly net heat transfer to or from a pond, resulting in a rise or drop in pond temperature, and daily average, minimum, and maximum pond temperatures are recorded. Simulated temperature was compared with hourly measured data from pond 3 of the Napa-Sonoma Salt Pond Complex and monthly measured data from pond A14 of the Alviso Salt Pond Complex. Comparison showed good agreement of measured and simulated pond temperature on the daily and monthly time scales.
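The daily conservation-of-mass bookkeeping such a box model performs can be sketched for a single pond: salt mass is carried only by flows, while rainfall and evaporation change the water volume. The function and rates below are illustrative assumptions; the actual model routes water through multiple connected ponds under management rules.

```python
# Hedged sketch: daily mass-balance update of pond volume and salinity.
def step_pond(volume, salinity, q_in, s_in, q_out, rain, evap):
    """Advance one day. Flows and rain/evaporation in m^3/day; salinity as a mass fraction."""
    salt_mass = volume * salinity
    salt_mass += q_in * s_in - q_out * salinity        # salt carried by inflow and outflow
    volume += q_in - q_out + rain - evap               # rain and evaporation move water only
    return volume, salt_mass / volume

vol, sal = 5.0e5, 0.03                                 # initial volume (m^3) and salinity
for day in range(30):
    vol, sal = step_pond(vol, sal, q_in=2.0e4, s_in=0.025, q_out=1.5e4,
                         rain=0.0, evap=6.0e3)
print(f"volume = {vol:.3e} m^3, salinity = {sal:.4f}")
```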
Pitman, A; Jones, D N; Stuart, D; Lloydhope, K; Mallitt, K; O'Rourke, P
2009-10-01
The study reports on the evolution of the Australian radiologist relative value unit (RVU) model for measuring radiologist reporting workloads in teaching hospital departments, and aims to outline a way forward for the development of a broad national safety, quality and performance framework that enables value mapping, measurement and benchmarking. The Radiology International Benchmarking Project of Queensland Health provided a suitable high-level national forum where the existing Pitman-Jones RVU model was applied to contemporaneous data, and its shortcomings and potential avenues for future development were analysed. Application of the Pitman-Jones model to Queensland data and also to a Victorian benchmark showed that the original recommendation of 40,000 crude RVU per full-time equivalent consultant radiologist (97-98 baseline level) has risen only moderately, to now lie around 45,000 crude RVU per full-time equivalent. Notwithstanding this, the model has a number of weaknesses and is becoming outdated, as it cannot capture newer time-consuming examinations, particularly in CT. A significant re-evaluation of the value of medical imaging is required, and is now occurring. We must rethink how we measure, benchmark, display and continually improve medical imaging safety, quality and performance, throughout the imaging care cycle and beyond. It will be necessary to ensure alignment with patient needs, as well as clinical and organisational objectives. Clear recommendations for the development of an updated national reporting workload RVU system are available, and an opportunity now exists for developing a much broader national model.
Ansari, Mozafar; Othman, Faridah; Abunama, Taher; El-Shafie, Ahmed
2018-04-01
The function of a sewage treatment plant is to treat the sewage to acceptable standards before being discharged into the receiving waters. To design and operate such plants, it is necessary to measure and predict the influent flow rate. In this research, the influent flow rate of a sewage treatment plant (STP) was modelled and predicted by autoregressive integrated moving average (ARIMA), nonlinear autoregressive network (NAR) and support vector machine (SVM) regression time series algorithms. To evaluate the models' accuracy, the root mean square error (RMSE) and coefficient of determination (R2) were calculated as initial assessment measures, while relative error (RE), peak flow criterion (PFC) and low flow criterion (LFC) were calculated as final evaluation measures to demonstrate the detailed accuracy of the selected models. An integrated model was developed based on the individual models' prediction ability for low, average and peak flow. An initial assessment of the results showed that the ARIMA model was the least accurate and the NAR model was the most accurate. The RE results also prove that the SVM model's frequency of errors above 10% or below -10% was greater than the NAR model's. The influent was also forecasted up to 44 weeks ahead by both models. The graphical results indicate that the NAR model made better predictions than the SVM model. The final evaluation of NAR and SVM demonstrated that SVM made better predictions at peak flow and NAR fit well for low and average inflow ranges. The integrated model developed includes the NAR model for low and average influent and the SVM model for peak inflow.
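The initial assessment measures named here (RMSE and R2) are straightforward to compute once a candidate forecaster is fitted. The sketch below uses a support vector regression on lagged values of a synthetic weekly influent series; the series, lag structure, and SVR settings are illustrative assumptions, not the study's configuration.

```python
# Hedged sketch: fitting an SVR influent-flow forecaster and scoring it with RMSE and R^2.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(5)
t = np.arange(260)                                            # weekly influent record (synthetic)
flow = 1000 + 150 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 30, t.size)

lags = 4                                                      # predict this week from the last four
X = np.column_stack([flow[i:t.size - lags + i] for i in range(lags)])
y = flow[lags:]

split = 200
model = SVR(kernel="rbf", C=100.0, epsilon=5.0).fit(X[:split], y[:split])
pred = model.predict(X[split:])

rmse = np.sqrt(mean_squared_error(y[split:], pred))
print(f"RMSE = {rmse:.1f}, R^2 = {r2_score(y[split:], pred):.3f}")
```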
Clinical outcome measurement: Models, theory, psychometrics and practice.
McClimans, Leah; Browne, John; Cano, Stefan
In the last decade much has been made of the role that models play in the epistemology of measurement. Specifically, philosophers have been interested in the role of models in producing measurement outcomes. This discussion has proceeded largely within the context of the physical sciences, with notable exceptions considering measurement in economics. However, models also play a central role in the methods used to develop instruments that purport to quantify psychological phenomena. These methods fall under the umbrella term 'psychometrics'. In this paper, we focus on Clinical Outcome Assessments (COAs) and discuss two measurement theories and their associated models: Classical Test Theory (CTT) and Rasch Measurement Theory. We argue that models have an important role to play in coordinating theoretical terms with empirical content, but to do so they must serve: 1) as a representation of the measurement interaction; and 2) in conjunction with a theory of the attribute in which we are interested. We conclude that Rasch Measurement Theory is a more promising approach than CTT in these regards despite the latter's popularity with health outcomes researchers.
Alden, Dana L; Do, Mai Hoa; Bhawuk, Dharm
2004-12-01
Health-care managers are increasingly interested in client perceptions of clinic service quality and satisfaction. While tremendous progress has occurred, additional perspectives on the conceptualization, modeling and measurement of these constructs may further assist health-care managers seeking to provide high-quality care. To that end, this study draws on theories from business and health to develop an integrated model featuring antecedents to and consequences of reproductive health-care client satisfaction. In addition to developing a new model, this study contributes by testing how well Western-based theories of client satisfaction hold in a developing, Asian country. Applied to urban, reproductive health clinic users in Hanoi, Vietnam, test results suggest that hypothesized antecedents such as pre-visit expectations, perceived clinic performance and how much performance exceeds expectations impact client satisfaction. However, the relative importance of these predictors appears to vary depending on a client's level of service-related experience. Finally, higher levels of client satisfaction are positively related to future clinic use intentions. This study demonstrates the value of: (1) incorporating theoretical perspectives from multiple disciplines to model processes underlying health-care satisfaction and (2) field testing those models before implementation. It also furthers research designed to provide health-care managers with actionable measures of the complex processes related to their clients' satisfaction.
Impact Testing of Aluminum 2024 and Titanium 6Al-4V for Material Model Development
NASA Technical Reports Server (NTRS)
Pereira, J. Michael; Revilock, Duane M.; Lerch, Bradley A.; Ruggeri, Charles R.
2013-01-01
One of the difficulties with developing and verifying accurate impact models is that parameters such as high strain rate material properties, failure modes, static properties, and impact test measurements are often obtained from a variety of different sources using different materials, with little control over consistency among the different sources. In addition there is often a lack of quantitative measurements in impact tests to which the models can be compared. To alleviate some of these problems, a project is underway to develop a consistent set of material property data, impact test data, and failure analyses for a variety of aircraft materials that can be used to develop improved impact failure and deformation models. This project is jointly funded by the NASA Glenn Research Center and the FAA William J. Hughes Technical Center. Unique features of this set of data are that all material property data and impact test data are obtained using identical material, the test methods and procedures are extensively documented, and all of the raw data are available. Four parallel efforts are currently underway: measurement of material deformation and failure response over a wide range of strain rates and temperatures and failure analysis of material property specimens and impact test articles, conducted by The Ohio State University; development of improved numerical modeling techniques for deformation and failure, conducted by The George Washington University; and impact testing of flat panels and substructures, conducted by NASA Glenn Research Center. This report describes impact testing which has been done on aluminum (Al) 2024 and titanium (Ti) 6Al-4V sheet and plate samples of different thicknesses and with different types of projectiles, one a regular cylinder and one with a more complex geometry incorporating features representative of a jet engine fan blade. Data from this testing will be used in validating material models developed under this program. The material tests and the material models developed in this program will be published in separate reports.
NASA Astrophysics Data System (ADS)
Torres Irribarra, D.; Freund, R.; Fisher, W.; Wilson, M.
2015-02-01
Computer-based, online assessments modelled, designed, and evaluated for adaptively administered invariant measurement are uniquely suited to defining and maintaining traceability to standardized units in education. An assessment of this kind is embedded in the Assessing Data Modeling and Statistical Reasoning (ADM) middle school mathematics curriculum. Diagnostic information about middle school students' learning of statistics and modeling is provided via computer-based formative assessments for seven constructs that comprise a learning progression for statistics and modeling from late elementary through the middle school grades. The seven constructs are: Data Display, Meta-Representational Competence, Conceptions of Statistics, Chance, Modeling Variability, Theory of Measurement, and Informal Inference. The end product is a web-delivered system built with Ruby on Rails for use by curriculum development teams working with classroom teachers in designing, developing, and delivering formative assessments. The online accessible system allows teachers to accurately diagnose students' unique comprehension and learning needs in a common language of real-time assessment, logging, analysis, feedback, and reporting.
Wind noise measured at the ground surface.
Yu, Jiao; Raspet, Richard; Webster, Jeremy; Abbott, Johnpaul
2011-02-01
Measurements of wind noise at the ground surface outdoors are analyzed using the mirror flow model of anisotropic turbulence by Kraichnan [J. Acoust. Soc. Am. 28(3), 378-390 (1956)]. Predictions of the behavior of the turbulence spectrum with height are developed, as well as predictions of the turbulence-shear interaction pressure at the surface for different wind velocity profiles and microphone mounting geometries. The theoretical results for the behavior of the velocity spectra with height are compared to measurements to demonstrate the applicability of the mirror flow model to outdoor turbulence. The use of a logarithmic wind velocity profile for analysis is tested using meteorological models for wind velocity profiles under different stability conditions. Next, calculations of the turbulence-shear interaction pressure are compared to flush microphone measurements at the surface and to microphone measurements with a foam covering flush with the surface. The measurements underneath the thin layers of foam agree closely with the predictions, indicating that the turbulence-shear interaction pressure is the dominant source of wind noise at the surface. The flush microphone measurements are intermittently larger than the predictions, which may indicate other contributions not accounted for by the turbulence-shear interaction pressure.
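The logarithmic wind velocity profile referred to above is the standard neutral-stability log law. A minimal Python sketch of that profile is given below for orientation only; the friction velocity, roughness length, and heights are placeholder values, not those of the measurement site in the paper.

import numpy as np

# Neutral-stability logarithmic wind profile: u(z) = (u_star / kappa) * ln(z / z0)
KAPPA = 0.4          # von Karman constant
u_star = 0.3         # friction velocity, m/s (placeholder)
z0 = 0.01            # aerodynamic roughness length, m (placeholder)

def log_wind_profile(z, u_star=u_star, z0=z0, kappa=KAPPA):
    """Mean wind speed (m/s) at height z (m) above the surface, for z > z0."""
    z = np.asarray(z, dtype=float)
    return (u_star / kappa) * np.log(z / z0)

heights = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])   # example heights, m
for z, u in zip(heights, log_wind_profile(heights)):
    print(f"z = {z:5.1f} m   u = {u:5.2f} m/s")

Stability corrections of the kind mentioned in the abstract would add diabatic terms to this expression; they are omitted here.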
NASA Technical Reports Server (NTRS)
Bekey, G. A.
1971-01-01
Studies are summarized on the application of advanced analytical and computational methods to the development of mathematical models of human controllers in multiaxis manual control systems. Specific accomplishments include the following: (1) The development of analytical and computer methods for the measurement of random parameters in linear models of human operators. (2) Discrete models of human operator behavior in a multiple display situation were developed. (3) Sensitivity techniques were developed which make possible the identification of unknown sampling intervals in linear systems. (4) The adaptive behavior of human operators following particular classes of vehicle failures was studied and a model structure proposed.
Measurement of positive direct current corona pulse in coaxial wire-cylinder gap
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yin, Han, E-mail: hanyin1986@gmail.com; Zhang, Bo, E-mail: shizbcn@mail.tsinghua.edu.cn; He, Jinliang, E-mail: hejl@tsinghua.edu.cn
In this paper, a system is designed and developed to measure the positive corona current in coaxial wire-cylinder gaps. The characteristic parameters of corona current pulses, such as the amplitude, rise time, half-wave time, and repetition frequency, are statistically analyzed and a new set of empirical formulas is derived by numerical fitting. The influence of space charges on corona currents is tested by using three corona cages with different radii. A numerical method is used to solve a simplified ion-flow model to explain the influence of space charges. Based on the statistical results, a stochastic model is developed to simulate the corona pulse trains. This model is verified by comparing the simulated frequency-domain responses with the measured ones.
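The paper's empirical formulas and fitted distributions are not reproduced here. Purely to illustrate how a stochastic pulse-train model of this kind can be assembled, the Python sketch below draws pulse parameters from placeholder lognormal distributions and uses a generic double-exponential pulse shape; every numerical value is an assumption, not a result from the paper.

import numpy as np

rng = np.random.default_rng(0)
fs = 200e6                          # sampling rate, Hz (placeholder)
duration = 2e-4                     # simulated record length, s (placeholder)
t = np.arange(0, duration, 1/fs)
signal = np.zeros_like(t)

mean_rate = 50e3                    # mean pulse repetition rate, Hz (placeholder)
n_pulses = rng.poisson(mean_rate * duration)
arrival_times = rng.uniform(0, duration, n_pulses)

for t0 in arrival_times:
    amp = rng.lognormal(np.log(5e-3), 0.4)        # pulse amplitude, A (placeholder)
    tau_r = rng.lognormal(np.log(30e-9), 0.2)     # rise time constant, s (placeholder)
    tau_f = rng.lognormal(np.log(150e-9), 0.2)    # decay time constant, s (placeholder)
    dt = t - t0
    mask = dt >= 0
    # Generic double-exponential pulse added at each arrival time
    signal[mask] += amp * (np.exp(-dt[mask]/tau_f) - np.exp(-dt[mask]/tau_r))

# Frequency-domain response of the simulated pulse train, for comparison with measurement
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1/fs)
peak = freqs[1 + np.argmax(spectrum[1:])]
print(f"{n_pulses} pulses simulated; spectral peak near {peak/1e6:.2f} MHz")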
Electromagnetic properties of ice coated surfaces
NASA Technical Reports Server (NTRS)
Dominek, A.; Walton, E.; Wang, N.; Beard, L.
1989-01-01
The electromagnetic scattering from ice coated structures is examined. The influence of ice is shown from a measurement standpoint and related to a simple analytical model. A hardware system for the realistic measurement of ice coated structures is also being developed for use in an existing NASA Lewis icing tunnel. Presently, initial measurements have been performed with a simulated tunnel to aid in the development.
ERIC Educational Resources Information Center
Graham, Karen
2012-01-01
This study attempted development and validation of a measure of "intention to stay in academia" for physician assistant (PA) faculty in order to determine if the construct could be measured in a way that had both quantitative and qualitative meaning. Adopting both the methodologic framework of the Rasch model and the theoretical framework…
Engineering technology for networks
NASA Technical Reports Server (NTRS)
Paul, Arthur S.; Benjamin, Norman
1991-01-01
Space Network (SN) modeling and evaluation are presented. The following tasks are included: Network Modeling (developing measures and metrics for SN, modeling of the Network Control Center (NCC), using knowledge acquired from the NCC to model the SNC, and modeling the SN); and Space Network Resource scheduling.
RESULTS FROM THE NORTH AMERICAN MERCURY MODEL INTER-COMPARISON STUDY (NAMMIS)
A North American Mercury Model Intercomparison Study (NAMMIS) has been conducted to build upon the findings from previous mercury model intercomparisons in Europe. In the absence of mercury measurement networks sufficient for model evaluation, model developers continue to rely on...
Cho, Sun-Joo; Athay, Michele; Preacher, Kristopher J
2013-05-01
Even though many educational and psychological tests are known to be multidimensional, little research has been done to address how to measure individual differences in change within an item response theory framework. In this paper, we suggest a generalized explanatory longitudinal item response model to measure individual differences in change. New longitudinal models for multidimensional tests and existing models for unidimensional tests are presented within this framework and implemented with software developed for generalized linear models. In addition to the measurement of change, the longitudinal models we present can also be used to explain individual differences in change scores for person groups (e.g., learning disabled students versus non-learning disabled students) and to model differences in item difficulties across item groups (e.g., number operation, measurement, and representation item groups in a mathematics test). An empirical example illustrates the use of the various models for measuring individual differences in change when there are person groups and multiple skill domains which lead to multidimensionality at a time point. © 2012 The British Psychological Society.
NASA Astrophysics Data System (ADS)
Valdes, Raymond
The characterization of thermal barrier coating (TBC) systems is increasingly important because they enable gas turbine engines to operate at high temperatures and efficiency. Phase of photothermal emission analysis (PopTea) has been developed to analyze the thermal behavior of the ceramic top-coat of TBCs, as a nondestructive and noncontact method for measuring thermal diffusivity and thermal conductivity. Most TBC applications are on actively-cooled high-temperature turbine blades, which makes it difficult to precisely model heat transfer in the metallic subsystem. This reduces the ability of rote thermal modeling to reflect the actual physical conditions of the system and can lead to higher uncertainty in measured thermal properties. This dissertation investigates fundamental issues underpinning robust thermal property measurements that are adaptive to non-specific, complex, and evolving system characteristics using the PopTea method. A generic and adaptive subsystem PopTea thermal model was developed to account for complex geometry beyond a well-defined coating and substrate system. Without a priori knowledge of the subsystem characteristics, two different measurement techniques were implemented using the subsystem model. In the first technique, the properties of the subsystem were resolved as part of the PopTea parameter estimation algorithm; the second technique independently resolved the subsystem properties using a differential "bare" subsystem. The confidence in thermal properties measured using the generic subsystem model is similar to that from a standard PopTea measurement on a "well-defined" TBC system. Non-systematic bias-error on experimental observations in PopTea measurements due to generic thermal model discrepancies was also mitigated using a regression-based sensitivity analysis. The sensitivity analysis reported measurement uncertainty and was developed into a data reduction method to filter out these "erroneous" observations. It was found that the adverse impact of bias-error can be greatly reduced, leaving measurement observations with only random Gaussian noise in PopTea thermal property measurements. Quantifying the influence of the coating-substrate interface in PopTea measurements is important to resolving the thermal conductivity of the coating. However, the reduced significance of this interface in thicker coating systems can give rise to large uncertainties in thermal conductivity measurements. A first step towards improving PopTea measurements for such circumstances has been taken by implementing absolute temperature measurements using harmonically-sustained two-color pyrometry. Although promising, even small uncertainties in thermal emission observations were found to lead to significant noise in temperature measurements. However, PopTea analysis on bulk graphite samples was able to resolve their thermal conductivity to the expected literature values.
A scheme for parameterizing ice cloud water content in general circulation models
NASA Technical Reports Server (NTRS)
Heymsfield, Andrew J.; Donner, Leo J.
1989-01-01
A method for specifying ice water content in GCMs is developed, based on theory and in-cloud measurements. A theoretical development of the conceptual precipitation model is given, and the aircraft flights used to characterize the ice mass distribution in deep ice clouds are discussed. Ice water content values derived from the theoretical parameterization are compared with the measured values. The results demonstrate that a simple parameterization for atmospheric ice content can account for ice contents observed in several synoptic contexts.
Emerson, Lingamdenne Paul; Job, Anand; Abraham, Vinod
2013-01-01
Hearing loss is a major handicap in developing countries with paucity of trained audiologists and limited resources. In this pilot study trained community health workers were used to provide comprehensive hearing aid services in the community. One hundred and eleven patients were fitted with semi-digital hearing aid and were evaluated over a period of six months. They were assessed using self-report outcome measure APHAB. Results show that trained CHWs are effective in detecting disabling hearing loss and in providing HAs. APHAB can identify and pick up significant improvements in communication in daily activities and provides a realistic expectation of the benefits of a hearing aid. The model of using trained CHWs to provide rehabilitative services in audiology along with self-report outcome measures can be replicated in other developing countries. PMID:23724277
NASA Astrophysics Data System (ADS)
Bravo, Agustín; Barham, Richard; Ruiz, Mariano; López, Juan Manuel; De Arcas, Guillermo; Alonso, Jesus
2012-12-01
In part I, the feasibility of using three-dimensional (3D) finite elements (FEs) to model the acoustic behaviour of the IEC 60318-1 artificial ear was studied and the numerical approach compared with classical lumped elements modelling. It was shown that by using a more complex acoustic model that took account of thermo-viscous effects, geometric shapes and dimensions, it was possible to develop a realistic model. This model then had clear advantages in comparison with the models based on equivalent circuits using lumped parameters. In fact results from FE modelling produce a better understanding about the physical phenomena produced inside ear simulator couplers, facilitating spatial and temporal visualization of the sound fields produced. The objective of this study (part II) is to extend the investigation by validating the numerical calculations against measurements on an ear simulator conforming to IEC 60318-1. For this purpose, an appropriate commercially available device is taken and a complete 3D FE model developed for it. The numerical model is based on key dimensional data obtained with a non-destructive x-ray inspection technique. Measurements of the acoustic transfer impedance have been carried out on the same device at a national measurement institute using the method embodied in IEC 60318-1. Having accounted for the actual device dimensions, the thermo-viscous effects inside narrow slots and holes and environmental conditions, the results of the numerical modelling were found to be in good agreement with the measured values.
A Conceptual Measurement Model for eHealth Readiness: a Team Based Perspective
Phillips, James; Poon, Simon K.; Yu, Dan; Lam, Mary; Hines, Monique; Brunner, Melissa; Power, Emma; Keep, Melanie; Shaw, Tim; Togher, Leanne
2017-01-01
Despite the shift towards collaborative healthcare and the increase in the use of eHealth technologies, there does not currently exist a model for the measurement of eHealth readiness in interdisciplinary healthcare teams. This research aims to address this gap in the literature through the development of a three phase methodology incorporating qualitative and quantitative methods. We propose a conceptual measurement model consisting of operationalized themes affecting readiness across four factors: (i) Organizational Capabilities, (ii) Team Capabilities, (iii) Patient Capabilities, and (iv) Technology Capabilities. The creation of this model will allow for the measurement of the readiness of interdisciplinary healthcare teams to use eHealth technologies to improve patient outcomes. PMID:29854207
Health literacy and public health: a systematic review and integration of definitions and models.
Sørensen, Kristine; Van den Broucke, Stephan; Fullam, James; Doyle, Gerardine; Pelikan, Jürgen; Slonska, Zofia; Brand, Helmut
2012-01-25
Health literacy concerns the knowledge and competences of persons to meet the complex demands of health in modern society. Although its importance is increasingly recognised, there is no consensus about the definition of health literacy or about its conceptual dimensions, which limits the possibilities for measurement and comparison. The aim of the study is to review definitions and models on health literacy to develop an integrated definition and conceptual model capturing the most comprehensive evidence-based dimensions of health literacy. A systematic literature review was performed to identify definitions and conceptual frameworks of health literacy. A content analysis of the definitions and conceptual frameworks was carried out to identify the central dimensions of health literacy and develop an integrated model. The review resulted in 17 definitions of health literacy and 12 conceptual models. Based on the content analysis, an integrative conceptual model was developed containing 12 dimensions referring to the knowledge, motivation and competencies of accessing, understanding, appraising and applying health-related information within the healthcare, disease prevention and health promotion setting, respectively. Based upon this review, a model is proposed integrating medical and public health views of health literacy. The model can serve as a basis for developing health literacy enhancing interventions and provide a conceptual basis for the development and validation of measurement tools, capturing the different dimensions of health literacy within the healthcare, disease prevention and health promotion settings.
Thomas, Hannah J; Scott, James G; Coates, Jason M; Connor, Jason P
2018-05-03
Intervention on adolescent bullying is reliant on valid and reliable measurement of victimization and perpetration experiences across different behavioural expressions. This study developed and validated a survey tool that integrates measurement of both traditional and cyber bullying to test a theoretically driven multi-dimensional model. Adolescents from 10 mainstream secondary schools completed a baseline and follow-up survey (N = 1,217; M age = 14 years; 66.2% male). The Bullying and cyberbullying Scale for Adolescents (BCS-A) developed for this study comprised parallel victimization and perpetration subscales, each with 20 items. Additional measures of bullying (Olweus Global Bullying and the Forms of Bullying Scale [FBS]), as well as measures of internalizing and externalizing problems, school connectedness, social support, and personality, were used to further assess validity. Factor structure was determined, and then, the suitability of items was assessed according to the following criteria: (1) factor interpretability, (2) item correlations, (3) model parsimony, and (4) measurement equivalence across victimization and perpetration experiences. The final models comprised four factors: physical, verbal, relational, and cyber. The final scale was revised to two 13-item subscales. The BCS-A demonstrated acceptable concurrent and convergent validity (internalizing and externalizing problems, school connectedness, social support, and personality), as well as predictive validity over 6 months. The BCS-A has sound psychometric properties. This tool establishes measurement equivalence across types of involvement and behavioural forms common among adolescents. An improved measurement method could add greater rigour to the evaluation of intervention programmes and also enable interventions to be tailored to subscale profiles. © 2018 The British Psychological Society.
NASA Astrophysics Data System (ADS)
Made Tirta, I.; Anggraeni, Dian
2018-04-01
Statistical models have been developed rapidly in various directions to accommodate various types of data. Data collected from longitudinal, repeated-measures, or clustered designs (whether continuous, binary, count, or ordinal) are likely to be correlated. Therefore statistical models for independent responses, such as the Generalized Linear Model (GLM) and Generalized Additive Model (GAM), are not appropriate. Several models are available for correlated responses, including GEEs (Generalized Estimating Equations) for marginal models and various mixed effect models such as GLMM (Generalized Linear Mixed Models) and HGLM (Hierarchical Generalized Linear Models) for subject-specific models. These models are available in the free open-source software R, but they can only be accessed through a command line interface (using scripts). On the other hand, most practical researchers rely heavily on menu-based or Graphical User Interface (GUI) tools. We develop, using the Shiny framework, a standard pull-down menu Web-GUI that unifies most models for correlated responses. The Web-GUI accommodates almost all needed features. It enables users to carry out and compare various models for repeated-measures data (GEE, GLMM, HGLM, GEE for nominal responses) much more easily through online menus. This paper discusses the features of the Web-GUI and illustrates their use. In general we find that GEE, GLMM and HGLM give very similar results.
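The Web-GUI described wraps R fitting routines; for readers more familiar with Python, a roughly analogous marginal (GEE) fit can be sketched with statsmodels, as below. The data here are simulated purely for illustration, statsmodels is an assumed substitute rather than the software used by the authors, and all model settings are placeholders.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_subjects, n_times = 50, 4
subject = np.repeat(np.arange(n_subjects), n_times)
time = np.tile(np.arange(n_times), n_subjects)
treat = np.repeat(rng.integers(0, 2, n_subjects), n_times)
# Simulated correlated count response (placeholder data-generating process)
subj_effect = np.repeat(rng.normal(0, 0.3, n_subjects), n_times)
rate = np.exp(0.5 + 0.2 * time + 0.4 * treat + subj_effect)
y = rng.poisson(rate)

df = pd.DataFrame({"y": y, "time": time, "treat": treat, "subject": subject})

# Marginal model for correlated counts: GEE with an exchangeable working correlation
model = sm.GEE.from_formula("y ~ time + treat", groups="subject", data=df,
                            family=sm.families.Poisson(),
                            cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(result.summary())

A subject-specific (GLMM/HGLM-style) fit would instead introduce random effects for each subject; the marginal GEE shown here is only one of the model families unified by the Web-GUI.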
Jiao, Jialong; Ren, Huilong; Adenya, Christiaan Adika; Chen, Chaohe
2017-01-01
Wave-induced motion and load responses are important criteria for ship performance evaluation. Physical experiments have long been an indispensable tool in the prediction of a ship's navigation state, speed, motions, accelerations, sectional loads and wave impact pressure. Currently, the majority of experiments are conducted in a laboratory tank environment, where the wave environments differ from realistic sea waves. In this paper, a laboratory tank testing system for ship motion and load measurement is reviewed and reported first. Then, a novel large-scale model measurement technique is developed on the basis of the laboratory testing foundations to obtain accurate motion and load responses of ships in realistic sea conditions. For this purpose, a suite of advanced remote control and telemetry experimental systems was developed in-house to allow for the implementation of large-scale model seakeeping measurements at sea. The experimental system includes a series of sensors, e.g., the Global Position System/Inertial Navigation System (GPS/INS) module, course top, optical fiber sensors, strain gauges, pressure sensors and accelerometers. The developed measurement system was tested by field experiments in coastal seas, which indicate that the proposed large-scale model testing scheme is capable and feasible. Meaningful data including ocean environment parameters, ship navigation state, motions and loads were obtained through the sea trial campaign. PMID:29109379
DOE Office of Scientific and Technical Information (OSTI.GOV)
Louge, M. Y.; Jenkins, J. T.
The main objective of this work is to develop probes for local measurements of solid velocity and holdup in dense gas-solid flows. In particular, capacitance probes are designed to measure local, time-dependent particle concentrations. In addition, a new optical fiber probe based on laser-induced-phosphorescence is developed to measure particle velocities. The principles for the capacitance and optical diagnostics were given in our first and second quarterly reports. In this reporting period, we have demonstrated with success the feasibility of the optical fiber probe. Another objective of this work is to develop a model of dense-phase conveying and to test this model in a setup that incorporates our diagnostics. In this period, as a prelude to these modeling efforts scheduled for the third year of the contract, we have carried out additional computer simulations of rapid granular flows to verify the theories of Jenkins and Richman (1988) on the anisotropy of the second moment in simple shear. 2 refs., 5 figs.
Models of filter-based particle light absorption measurements
NASA Astrophysics Data System (ADS)
Hamasha, Khadeejeh M.
Light absorption by aerosol is very important in the visible, near-UV, and near-IR regions of the electromagnetic spectrum. Aerosol particles in the atmosphere have a great influence on the flux of solar energy, and also impact health in a negative sense when they are breathed into the lungs. Aerosol absorption measurements are usually performed by filter-based methods that are derived from the change in light transmission through a filter where particles have been deposited. These methods suffer from interference between light-absorbing and light-scattering aerosol components. The Aethalometer is the most commonly used filter-based instrument for aerosol light absorption measurement. This dissertation describes new understanding of aerosol light absorption obtained by the filter method. The theory uses a multiple scattering model for the combination of filter and particle optics. The theory is evaluated using Aethalometer data from laboratory and ambient measurements in comparison with photoacoustic measurements of aerosol light absorption. Two models were developed to calculate aerosol light absorption coefficients from the Aethalometer data, and were compared to the in-situ aerosol light absorption coefficients. The first is an approximate model and the second is a "full" model. In the approximate model, two extreme cases of aerosol optics were used to develop a model-based calibration scheme for the 7-wavelength Aethalometer. These cases include those of very strongly scattering aerosols (an ammonium sulfate sample) and very absorbing aerosols (a kerosene soot sample). In the strong multiple scattering limit, the attenuation is shown to vary with the square root of the total absorption optical depth rather than linearly with optical depth, as is commonly assumed with Beer's law. Two-stream radiative transfer theory was used to develop the full model to calculate the aerosol light absorption coefficients from the Aethalometer data. This comprehensive model allows for studying very general cases of particles of various sizes embedded on arbitrary filter media. Application of this model to the Reno Aerosol Optics Study (laboratory data) shows that the aerosol light absorption coefficients are about half of the Aethalometer attenuation coefficients, and there is reasonable agreement between the model-calculated absorption coefficients at 521 nm and the measured photoacoustic absorption coefficients at 532 nm. For ambient data obtained during the Las Vegas study, the model absorption coefficients at 521 nm are larger than the photoacoustic coefficients at 532 nm. Use of the two-stream model shows that particle penetration depth into the filter has a strong influence on the interpretation of filter-based aerosol light absorption measurements. This is a likely explanation for the difference found between model results for filter-based aerosol light absorption and those from photoacoustic measurements for ambient and laboratory aerosol.
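The qualitative difference between the Beer's-law and square-root interpretations of filter attenuation can be illustrated numerically. In the sketch below the proportionality constant C is a placeholder, not the calibration coefficient derived in the dissertation, and the transmission values are invented examples.

import numpy as np

# Beer's law:                ATN = -ln(T) = tau_abs
# Strong multiple scattering: ATN = -ln(T) = C * sqrt(tau_abs)
C = 4.0   # filter/particle-dependent constant (placeholder, not the dissertation's value)

transmission = np.array([0.95, 0.85, 0.70, 0.50, 0.30])   # example filter transmissions
atn = -np.log(transmission)

tau_beer = atn                   # naive Beer's-law interpretation
tau_sqrt = (atn / C) ** 2        # square-root (multiple-scattering) interpretation

for T, tb, ts in zip(transmission, tau_beer, tau_sqrt):
    print(f"T = {T:.2f}   tau(Beer) = {tb:.3f}   tau(sqrt model) = {ts:.4f}")

The comparison shows why a Beer's-law reading of attenuation systematically overstates the absorption optical depth when multiple scattering in the filter is strong.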
ERIC Educational Resources Information Center
Page, Jeremy Dale
2010-01-01
The purpose of this study was to examine the relationship between participation in student activism and leadership development among college students. This study applied the social change model of leadership development (SCM) as the theoretical model used to measure socially responsible leadership capacity in students. The study utilized data…
Correction of electronic record for weighing bucket precipitation gauge measurements
USDA-ARS?s Scientific Manuscript database
Electronic sensors generate valuable streams of forcing and validation data for hydrologic models, but are often subject to noise, which must be removed as part of model input and testing database development. We developed the Automated Precipitation Correction Program (APCP) for weighing bucket preci...
Measuring Language Dominance and Bilingual Proficiency Development of Tarahumara Children.
ERIC Educational Resources Information Center
Paciotto, Carla
This paper examines the language dominance and oral bilingual proficiency of Tarahumara-Spanish speaking students from Chihuahua, Mexico, within the framework of Cummins' model of bilingual proficiency development. Cummins' model distinguishes between basic interpersonal communicative skills (BICS) and cognitive academic language proficiency…
A physiologically based pharmacokinetic (PBPK) model was developed for the conazole fungicide triadimefon and its primary metabolite, triadimenol. Rat tissue:blood partition coefficients and metabolic constants were measured in vitro for both compounds. Kinetic time course data...
A physiologically based pharmacokinetic (PBPK) model was developed for the conazole fungicide triadimefon and its primary metabolite, triadimenol. Rat tissue:blood partition coefficients and metabolic constants were measured in vitro for both compounds. Pharmacokinetic data for par...
Temperature Measurement and Numerical Prediction in Machining Inconel 718
Tapetado, Alberto; Vázquez, Carmen; Miguélez, Henar
2017-01-01
Thermal issues are critical when machining Ni-based superalloy components designed for high-temperature applications. The low thermal conductivity and extreme strain hardening of this family of materials result in elevated temperatures around the cutting area. This elevated temperature could lead to machining-induced damage such as phase changes and residual stresses, resulting in reduced service life of the component. Measurement of temperature during machining is crucial in order to control the cutting process, avoiding workpiece damage. On the other hand, the development of predictive tools based on numerical models helps in the definition of machining processes and the determination of difficult-to-measure parameters such as the penetration of the heated layer. However, the validation of numerical models strongly depends on the accurate measurement of physical parameters such as temperature, ensuring the calibration of the model. This paper focuses on the measurement and prediction of temperature during the machining of Ni-based superalloys. The temperature sensor was based on a fiber-optic two-color pyrometer developed for localized temperature measurements in turning of Inconel 718. The sensor is capable of measuring temperature in the range of 250 to 1200 °C. Temperature evolution is recorded in a lathe at different feed rates and cutting speeds. Measurements were used to calibrate a simplified numerical model for prediction of temperature fields during turning. PMID:28665312
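Two-color (ratio) pyrometry of the kind used for this sensor recovers temperature from the ratio of two spectral signals; under the Wien approximation and a greybody assumption the standard relation is shown in the Python sketch below. The wavelengths and signal values are placeholders for illustration, not the instrument's actual calibration.

import numpy as np

C2 = 1.4388e-2                     # second radiation constant, m*K
lam1, lam2 = 1.30e-6, 1.55e-6      # detection wavelengths, m (placeholder values)

def ratio_pyrometry_temperature(signal1, signal2):
    """Greybody temperature (K) from the ratio of two Wien-approximation signals.

    Assumes equal emissivity at both wavelengths and signals proportional to
    spectral radiance; neither assumption is specific to the cited sensor.
    """
    ratio = signal1 / signal2
    return C2 * (1/lam1 - 1/lam2) / (5*np.log(lam2/lam1) - np.log(ratio))

# Synthetic check: generate Wien-law signals for a known temperature and invert them.
T_true = 1000.0   # K
s1 = lam1**-5 * np.exp(-C2 / (lam1 * T_true))
s2 = lam2**-5 * np.exp(-C2 / (lam2 * T_true))
print(f"recovered temperature: {ratio_pyrometry_temperature(s1, s2):.1f} K")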
NASA Technical Reports Server (NTRS)
Schweikhard, W. G.; Dennon, S. R.
1986-01-01
A review of the Melick method of inlet flow dynamic distortion prediction by statistical means is provided. Developments reviewed include the general Melick approach with full dynamic measurements, a limited dynamic measurement approach, and a turbulence modelling approach which requires no dynamic rms pressure fluctuation measurements. These modifications are evaluated by comparing predicted and measured peak instantaneous distortion levels from provisional inlet data sets. A nonlinear mean-line following vortex model is proposed and evaluated as a potential criterion for improving the peak instantaneous distortion map generated from the conventional linear vortex of the Melick method. The model is simplified to a series of linear vortex segments which lie along the mean line. Maps generated with this new approach are compared with conventionally generated maps, as well as measured peak instantaneous maps. Inlet data sets include subsonic, transonic, and supersonic inlets under various flight conditions.
Physician outcome measurement: review and proposed model.
Siha, S
1998-01-01
As health care moves from a fee-for-service environment to a capitated arena, outcome measurements must change. ABC Children's Medical Center is challenged with developing comprehensive outcome measures for an employed physician group. An extensive literature review validates that physician outcomes must move beyond revenue production and measure all aspects of care delivery. The proposed measurement model for this physician group is a trilogy model. It includes measures of cost, quality, and service. While these measures can be examined separately, it is imperative to understand their integration in determining an organization's competitive advantage. The recommended measurements for the physician group must be consistent with the overall organizational goals. The long-term impact will be better utilization of resources. This will result in the most cost-effective, quality care for the health care consumer.
Spacecraft Internal Acoustic Environment Modeling
NASA Technical Reports Server (NTRS)
Chu, Shao-sheng R.; Allen, Christopher S.
2009-01-01
Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources and later with experimental data. In FY09, the physical mockup developed in FY08, with an interior geometric shape similar to the Orion CM (Crew Module) IML (Interior Mold Line), was used to validate SEA (Statistical Energy Analysis) acoustic model development with realistic ventilation fan sources. The sound power levels of these sources were unknown a priori, as opposed to previous studies in which an RSS (Reference Sound Source) with known sound power level was used. The modeling results were evaluated based on comparisons to measurements of sound pressure levels over a wide frequency range, including the frequency range where SEA gives good results. Sound intensity measurement was performed over a rectangular-shaped grid system enclosing the ventilation fan source. Sound intensities were measured at the top, front, back, right, and left surfaces of the grid system. Sound intensity at the bottom surface was not measured, but sound blocking material was placed under the bottom surface to reflect most of the incident sound energy back to the remaining measured surfaces. Integrating the measured sound intensities over the measured surfaces yields the estimated sound power of the source. The reverberation time T60 of the mockup interior had been modified to match the reverberation levels of the ISS US Lab interior for the speech frequency bands, i.e., 0.5k, 1k, 2k, and 4 kHz, by attaching appropriately sized Thinsulate sound absorption material to the interior wall of the mockup. Sound absorption of Thinsulate was modeled by three methods: the Sabine equation with measured mockup interior reverberation time T60, a layup model based on past impedance tube testing, and the layup model plus an air absorption correction. The evaluation/validation was carried out by acquiring octave band microphone data simultaneously at ten fixed locations throughout the mockup. SPLs (Sound Pressure Levels) predicted by our SEA model match well with measurements for our CM mockup, with a more complicated shape. Additionally in FY09, background NC (Noise Criterion) noise simulation and MRT (Modified Rhyme Test) were developed and performed in the mockup to determine the maximum noise level in the CM habitable volume for fair crew voice communications. Numerous demonstrations of the simulated noise environment in the mockup and associated SIL (Speech Interference Level) via MRT were performed for various communities, including members from NASA and Orion prime-/sub-contractors. Also, a new HSIR (Human-Systems Integration Requirement) for limiting pre- and post-landing SIL was proposed.
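Two steps in this abstract lend themselves to a short numerical sketch: integrating measured sound intensity over the enclosing grid surfaces to estimate source sound power, and the Sabine relation between interior absorption and reverberation time. The areas, intensities, volume, and T60 below are placeholder numbers, not the mockup's measured values.

import numpy as np

# Sound power from intensity: W = sum_i I_i * A_i over the five measured faces
# (top, front, back, right, left); the bottom face was blocked rather than measured.
areas = np.array([0.50, 0.40, 0.40, 0.30, 0.30])                    # face areas, m^2 (placeholders)
intensities = np.array([2.1e-4, 1.5e-4, 1.6e-4, 1.2e-4, 1.1e-4])    # W/m^2 (placeholders)
sound_power = np.sum(intensities * areas)
Lw = 10 * np.log10(sound_power / 1e-12)                             # sound power level re 1 pW
print(f"estimated sound power: {sound_power:.2e} W  ({Lw:.1f} dB re 1 pW)")

# Sabine equation: equivalent absorption area A_sab = 0.161 * V / T60 (SI units)
V = 10.0     # interior volume, m^3 (placeholder)
T60 = 0.35   # reverberation time, s (placeholder)
A_sab = 0.161 * V / T60
print(f"equivalent absorption area from Sabine: {A_sab:.2f} m^2")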
A Maneuvering Flight Noise Model for Helicopter Mission Planning
NASA Technical Reports Server (NTRS)
Greenwood, Eric; Rau, Robert; May, Benjamin; Hobbs, Christopher
2015-01-01
A new model for estimating the noise radiation during maneuvering flight is developed in this paper. The model applies the Quasi-Static Acoustic Mapping (Q-SAM) method to a database of acoustic spheres generated using the Fundamental Rotorcraft Acoustics Modeling from Experiments (FRAME) technique. A method is developed to generate a realistic flight trajectory from a limited set of waypoints and is used to calculate the quasi-static operating condition and corresponding acoustic sphere for the vehicle throughout the maneuver. By using a previously computed database of acoustic spheres, the acoustic impact of proposed helicopter operations can be rapidly predicted for use in mission-planning. The resulting FRAME-QS model is applied to near-horizon noise measurements collected for the Bell 430 helicopter undergoing transient pitch up and roll maneuvers, with good agreement between the measured data and the FRAME-QS model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faught, A; University of Texas Health Science Center Houston, Graduate School of Biomedical Sciences, Houston, TX; Davidson, S
2014-06-01
Purpose: To develop a comprehensive end-to-end test for Varian's TrueBeam linear accelerator for head and neck IMRT using a custom phantom designed to utilize multiple dosimetry devices. Purpose: To commission a multiple-source Monte Carlo model of Elekta linear accelerator beams of nominal energies 6MV and 10MV. Methods: A three-source Monte Carlo model of Elekta 6 and 10MV therapeutic x-ray beams was developed. Energy spectra of two photon sources corresponding to primary photons created in the target and scattered photons originating in the linear accelerator head were determined by an optimization process that fit the relative fluence of 0.25 MeV energy bins to the product of Fatigue-Life and Fermi functions to match calculated percent depth dose (PDD) data with that measured in a water tank for a 10x10cm2 field. Off-axis effects were modeled by a 3rd degree polynomial used to describe the off-axis half-value layer as a function of off-axis angle and fitting the off-axis fluence to a piecewise linear function to match calculated dose profiles with measured dose profiles for a 40×40cm2 field. The model was validated by comparing calculated PDDs and dose profiles for field sizes ranging from 3×3cm2 to 30×30cm2 to those obtained from measurements. A benchmarking study compared calculated data to measurements for IMRT plans delivered to anthropomorphic phantoms. Results: Along the central axis of the beam, 99.6% and 99.7% of all data passed the 2%/2mm gamma criterion for the 6 and 10MV models, respectively. Dose profiles at depths from dmax through 25 cm agreed with measured data for 99.4% and 99.6% of data tested for the 6 and 10MV models, respectively. A comparison of calculated dose to film measurement in a head and neck phantom showed an average of 85.3% and 90.5% of pixels passing a 3%/2mm gamma criterion for the 6 and 10MV models, respectively. Conclusion: A Monte Carlo multiple-source model for Elekta 6 and 10MV therapeutic x-ray beams has been developed as a quality assurance tool for clinical trials.
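The photon spectra were parameterized as the product of a Fatigue-Life distribution and a Fermi cutoff fitted over 0.25 MeV bins. The Python sketch below shows only that generic functional form, using scipy's fatiguelife distribution and a logistic Fermi factor; all parameter values are made-up placeholders, and the actual optimization against measured PDDs is not reproduced.

import numpy as np
from scipy.stats import fatiguelife

def spectrum_weight(E, c, loc, scale, E_f, kT):
    """Relative photon fluence per energy bin: Fatigue-Life pdf times a Fermi cutoff.

    All parameter values used below are illustrative placeholders, not the
    commissioned 6 MV / 10 MV model parameters.
    """
    fermi = 1.0 / (1.0 + np.exp((E - E_f) / kT))
    return fatiguelife.pdf(E, c, loc=loc, scale=scale) * fermi

# 0.25 MeV bins up to the nominal 6 MV end-point energy
edges = np.arange(0.25, 6.25, 0.25)
weights = spectrum_weight(edges, c=0.8, loc=0.0, scale=1.5, E_f=5.5, kT=0.3)
weights /= weights.sum()           # normalize to unit total fluence
for E, w in zip(edges, weights):
    print(f"{E:4.2f} MeV  {w:6.4f}")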
Yasuda, Tomomi; Yonemura, Seiichiro; Tani, Akira
2012-01-01
Many sensors have to be used simultaneously for multipoint carbon dioxide (CO(2)) observation. All the sensors should be calibrated in advance, but this is a time-consuming process. To seek a simplified calibration method, we used four commercial CO(2) sensor models and characterized their output tendencies against ambient temperature and length of use, in addition to offset characteristics. We used four samples of standard gas with different CO(2) concentrations (0, 407, 1,110, and 1,810 ppm). The outputs of K30 and AN100 models showed linear relationships with temperature and length of use. Calibration coefficients for sensor models were determined using the data from three individual sensors of the same model to minimize the relative RMS error. When the correction was applied to the sensors, the accuracy of measurements improved significantly in the case of the K30 and AN100 units. In particular, in the case of K30 the relative RMS error decreased from 24% to 4%. Hence, we have chosen K30 for developing a portable CO(2) measurement device (10 × 10 × 15 cm, 900 g). Data of CO(2) concentration, measurement time and location, temperature, humidity, and atmospheric pressure can be recorded onto a Secure Digital (SD) memory card. The CO(2) concentration in a high-school lecture room was monitored with this device. The CO(2) data, when corrected for simultaneously measured temperature, water vapor partial pressure, and atmospheric pressure, showed a good agreement with the data measured by a highly accurate CO(2) analyzer, LI-6262. This indicates that acceptable accuracy can be realized using the calibration method developed in this study.
Yasuda, Tomomi; Yonemura, Seiichiro; Tani, Akira
2012-01-01
Many sensors have to be used simultaneously for multipoint carbon dioxide (CO2) observation. All the sensors should be calibrated in advance, but this is a time-consuming process. To seek a simplified calibration method, we used four commercial CO2 sensor models and characterized their output tendencies against ambient temperature and length of use, in addition to offset characteristics. We used four samples of standard gas with different CO2 concentrations (0, 407, 1,110, and 1,810 ppm). The outputs of K30 and AN100 models showed linear relationships with temperature and length of use. Calibration coefficients for sensor models were determined using the data from three individual sensors of the same model to minimize the relative RMS error. When the correction was applied to the sensors, the accuracy of measurements improved significantly in the case of the K30 and AN100 units. In particular, in the case of K30 the relative RMS error decreased from 24% to 4%. Hence, we have chosen K30 for developing a portable CO2 measurement device (10 × 10 × 15 cm, 900 g). Data of CO2 concentration, measurement time and location, temperature, humidity, and atmospheric pressure can be recorded onto a Secure Digital (SD) memory card. The CO2 concentration in a high-school lecture room was monitored with this device. The CO2 data, when corrected for simultaneously measured temperature, water vapor partial pressure, and atmospheric pressure, showed a good agreement with the data measured by a highly accurate CO2 analyzer, LI-6262. This indicates that acceptable accuracy can be realized using the calibration method developed in this study. PMID:22737029
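The calibration described corrects each sensor's raw output with linear terms in ambient temperature and length of use plus an offset, with coefficients chosen to minimize the relative RMS error against standard gases. A least-squares version of such a correction is sketched below on synthetic data; the readings and fitted coefficients are placeholders, not those reported for the K30 or AN100 units, and only the standard-gas concentrations (0, 407, 1,110, 1,810 ppm) are taken from the abstract.

import numpy as np

# Synthetic calibration points: raw sensor reading (ppm), ambient temperature (deg C),
# length of use (days), and the known standard-gas concentration (ppm).
raw   = np.array([ 20.0, 430.0, 1140.0, 1850.0,  25.0, 440.0, 1160.0, 1880.0])
temp  = np.array([ 20.0,  20.0,   20.0,   20.0,  30.0,  30.0,   30.0,   30.0])
age   = np.array([ 10.0,  10.0,   10.0,   10.0, 100.0, 100.0,  100.0,  100.0])
truth = np.array([  0.0, 407.0, 1110.0, 1810.0,   0.0, 407.0, 1110.0, 1810.0])

# Correction model: CO2_true ~ a*raw + b*temp + c*age + d. Ordinary least squares is
# used here; targeting relative RMS error, as in the paper, would weight each row
# by the inverse concentration.
X = np.column_stack([raw, temp, age, np.ones_like(raw)])
coef, *_ = np.linalg.lstsq(X, truth, rcond=None)
corrected = X @ coef

nz = truth > 0
rel_rms = np.sqrt(np.mean(((corrected[nz] - truth[nz]) / truth[nz]) ** 2))
print("coefficients a, b, c, d:", np.round(coef, 4))
print(f"relative RMS error after correction: {100*rel_rms:.2f}%")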
NASA Technical Reports Server (NTRS)
West, Jeff; Strutzenberg, Louise L.; Putnam, Gabriel C.; Liever, Peter A.; Williams, Brandon R.
2012-01-01
This paper presents development efforts to establish modeling capabilities for launch vehicle liftoff acoustics and ignition transient environment predictions. Peak acoustic loads experienced by the launch vehicle occur during liftoff with strong interaction between the vehicle and the launch facility. Acoustic prediction engineering tools based on empirical models are of limited value in efforts to proactively design and optimize launch vehicles and launch facility configurations for liftoff acoustics. Modeling approaches are needed that capture the important details of the plume flow environment including the ignition transient, identify the noise generation sources, and allow assessment of the effects of launch pad geometric details and acoustic mitigation measures such as water injection. This paper presents a status of the CFD tools developed by the MSFC Fluid Dynamics Branch featuring advanced multi-physics modeling capabilities developed towards this goal. Validation and application examples are presented along with an overview of application in the prediction of liftoff environments and the design of targeted mitigation measures such as launch pad configuration and sound suppression water placement.
NASA Astrophysics Data System (ADS)
Casas-Mulet, R.; Alfredsen, K. T.
2016-12-01
The dewatering of salmon spawning redds due to hydropeaking operations can lead to early life-stage mortality, with a higher impact on the alevin stage, which has a lower tolerance to dewatering than the eggs. Targeted flow-related mitigation measures can reduce such mortality, but it is essential to understand how hydropeaking changes thermal regimes in rivers and may impact embryo development; only then can optimal measures be implemented at the right development stage. We present a set of experimental approaches and modelling tools for the estimation of hatch and swim-up dates based on water temperature data in the river Lundesokna (Norway). We identified critical periods for gravel-stage survival and, by comparing hydropeaking against unregulated thermal and hydrological regimes, established potential flow-release measures to minimise mortality. Modelling outcomes were then used to assess the cost-efficiency of each measure. The combination of modelling tools used in this study was overall satisfactory, and its application can be useful especially in systems where little field data is available. Targeted measures built on well-informed modelling approaches can be pre-tested for their efficiency in mitigating dewatering effects against the hydropower system's capacity to release or conserve water for power production. Overall, environmental flow releases targeting specific ecological objectives can provide more cost-effective options than conventional operational rules complying with general legislation.
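Hatch and swim-up dates in studies of this kind are commonly projected from accumulated thermal units (degree-days) of the water temperature record; the specific models used in the paper are not reproduced here. The sketch below shows only that generic accumulation idea, with an invented temperature series and placeholder degree-day thresholds.

import numpy as np

rng = np.random.default_rng(2)
# Synthetic daily mean water temperature (deg C) from spawning onward
n_days = 300
temps = 6.0 + 3.0*np.sin(2*np.pi*(np.arange(n_days) - 60)/365.0) + rng.normal(0, 0.3, n_days)
temps = np.clip(temps, 0.0, None)     # thermal units accumulate only above 0 deg C

HATCH_DDAYS = 450.0     # placeholder thermal-unit threshold for hatch
SWIMUP_DDAYS = 900.0    # placeholder threshold for swim-up (emergence)

cum_ddays = np.cumsum(temps)
hatch_day  = int(np.argmax(cum_ddays >= HATCH_DDAYS))
swimup_day = int(np.argmax(cum_ddays >= SWIMUP_DDAYS))
print(f"projected hatch:   day {hatch_day} after spawning")
print(f"projected swim-up: day {swimup_day} after spawning")

Running such a projection on regulated and unregulated temperature series is one way to expose how hydropeaking shifts the critical windows for gravel-stage survival.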
Design and Use of Microphone Directional Arrays for Aeroacoustic Measurements
NASA Technical Reports Server (NTRS)
Humphreys, William M., Jr.; Brooks, Thomas F.; Hunter, William W., Jr.; Meadows, Kristine R.
1998-01-01
An overview of the development of two microphone directional arrays for aeroacoustic testing is presented. These arrays were specifically developed to measure airframe noise in the NASA Langley Quiet Flow Facility. A large aperture directional array using 35 flush-mounted microphones was constructed to obtain high resolution noise localization maps around airframe models. This array possesses a maximum diagonal aperture size of 34 inches. A unique logarithmic spiral layout design was chosen for the targeted frequency range of 2-30 kHz. Complementing the large array is a small aperture directional array, constructed to obtain spectra and directivity information from regions on the model. This array, possessing 33 microphones with a maximum diagonal aperture size of 7.76 inches, is easily moved about the model in elevation and azimuth. Custom microphone shading algorithms have been developed to provide a frequency- and position-invariant sensing area from 10-40 kHz with an overall targeted frequency range for the array of 5-60 kHz. Both arrays are employed in acoustic measurements of a 6-percent-of-full-scale airframe model consisting of a main-element NACA 632-215 wing section with a 30 percent chord half-span flap. Representative data obtained from these measurements are presented, along with details of the array calibration and data post-processing procedures.
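A logarithmic spiral layout of the kind mentioned for the large array can be generated with a few lines of code. The sketch below is illustrative only: the 35-microphone count comes from the abstract, but the spiral constants and resulting aperture are placeholder values, not the Langley array's actual geometry.

import numpy as np

def log_spiral_array(n_mics=35, a=0.02, b=0.25, turns=3.0):
    """Microphone (x, y) positions on a logarithmic spiral r = a*exp(b*theta).

    The spiral constants and number of turns are placeholders for illustration,
    not the design parameters of the array described in the paper.
    """
    theta = np.linspace(0.0, 2*np.pi*turns, n_mics)
    r = a * np.exp(b * theta)
    return np.column_stack([r*np.cos(theta), r*np.sin(theta)])

xy = log_spiral_array()
aperture = np.max(np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1))
print(f"{len(xy)} microphones, maximum aperture {aperture:.3f} m")

Spiral layouts are often favored for broadband beamforming because the non-redundant microphone spacings suppress spatial aliasing sidelobes over a wide frequency range.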
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stormont, John; Lampe, Brandon; Mills, Melissa
The goal of this project is to improve the understanding of key aspects of the coupled thermal-mechanical-hydrologic response of granular (or crushed) salt used as a seal material for shafts, drifts, and boreholes in mined repositories in salt. The project is organized into three tasks to accomplish this goal: laboratory measurements of granular salt consolidation (Task 1), microstructural observations on consolidated samples (Task 2), and constitutive model development and evaluation (Task 3). Task 1 involves laboratory measurements of salt consolidation along with thermal properties and permeability measurements conducted under a range of temperatures and stresses expected for potential mined repositories in salt. Testing focused on the role of moisture, temperature and stress state on the hydrologic (permeability) and thermal properties of consolidating granular salt at high fractional densities. Task 2 consists of microstructural observations made on samples after they have been consolidated to interpret deformation mechanisms and evaluate the ability of the constitutive model to predict operative mechanisms under different conditions. Task 3 concerns the development of the coupled thermal-mechanical-hydrologic constitutive model for granular salt consolidation. The measurements and observations in Tasks 1 and 2 were used to develop a thermal-mechanical constitutive model. Accomplishments and status from each of these efforts are reported in subsequent sections of this report.
Risbrough, Victoria B; Glenn, Daniel E; Baker, Dewleen G
The use of quantitative, laboratory-based measures of threat in humans for proof-of-concept studies and target development for novel drug discovery has grown tremendously in the last 2 decades. In particular, in the field of posttraumatic stress disorder (PTSD), human models of fear conditioning have been critical in shaping our theoretical understanding of fear processes and importantly, validating findings from animal models of the neural substrates and signaling pathways required for these complex processes. Here, we will review the use of laboratory-based measures of fear processes in humans including cued and contextual conditioning, generalization, extinction, reconsolidation, and reinstatement to develop novel drug treatments for PTSD. We will primarily focus on recent advances in using behavioral and physiological measures of fear, discussing their sensitivity as biobehavioral markers of PTSD symptoms, their response to known and novel PTSD treatments, and in the case of d-cycloserine, how well these findings have translated to outcomes in clinical trials. We will highlight some gaps in the literature and needs for future research, discuss benefits and limitations of these outcome measures in designing proof-of-concept trials, and offer practical guidelines on design and interpretation when using these fear models for drug discovery.
Developing a hydrological model in the absence of field data
NASA Astrophysics Data System (ADS)
Sproles, E. A.; Orrego Nelson, C.; Kerr, T.; Lopez Aspe, D.
2014-12-01
We present two runoff models that use remotely-sensed snow cover products from the Moderate Resolution Imaging Spectrometer (MODIS) as the first order hydrologic input. These simplistic models are the first step in developing an operational model for the Elqui River watershed located in northern Central Chile (30°S). In this semi-arid region, snow and glacier melt are the dominant hydrologic inputs where annual precipitation is limited to three or four winter events. Unfortunately winter access to the Andean Cordillera where snow accumulates is limited. While a monitoring network to measure snow where it accumulates in the upper elevations is under development, management decisions regarding water resources cannot wait. The two models we present differ in structure. The first applies a Monte Carlo approach to determine relationships between lagged changes in monthly snow cover frequency and monthly discharge. The second is a modified degree-day melt model, utilizing the MODIS snow cover product to determine where and when snow melt occurs. These models are not watershed specific and are applicable in other regions where snow dominates hydrologic inputs, but measurements are minimal.
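The second model described above is a modified degree-day melt model driven by MODIS snow cover. A generic sketch of that idea (melt = degree-day factor × positive air temperature × snow-covered fraction) is given below with placeholder values; it is not the Elqui operational model itself, and the degree-day factor and example data are assumptions.

import numpy as np

DDF = 4.0   # degree-day melt factor, mm / (deg C * day) -- placeholder value

def daily_melt(temps_c, snow_cover_frac, ddf=DDF):
    """Basin-mean melt depth (mm/day) from air temperature and a MODIS-style
    snow-covered fraction; a generic degree-day formulation, not the paper's."""
    positive_degrees = np.maximum(np.asarray(temps_c), 0.0)
    return ddf * positive_degrees * np.asarray(snow_cover_frac)

# One week of synthetic basin-mean temperature (deg C) and snow-covered fraction
temps = np.array([ 2.0, 4.5, 6.0, 3.0, -1.0, 1.5, 5.0])
scf   = np.array([0.80, 0.78, 0.74, 0.72, 0.72, 0.70, 0.66])
melt = daily_melt(temps, scf)
print("daily melt (mm):", np.round(melt, 1))
print(f"weekly total: {melt.sum():.1f} mm")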
Wu, Huafeng; Mei, Xiaojun; Chen, Xinqiang; Li, Junjun; Wang, Jun; Mohapatra, Prasant
2018-07-01
Maritime search and rescue (MSR) plays a significant role in Safety of Life at Sea (SOLAS). However, it suffers in scenarios where the measurement information is inaccurate due to the wave shadow effect when utilizing Wireless Sensor Network (WSN) technology in MSR. In this paper, we develop a Novel Cooperative Localization Algorithm (NCLA) for MSR by using an enhanced particle filter method to reduce measurement errors in the observation model caused by the wave shadow effect. First, we take into account the mobility of nodes at sea to develop a motion model, a Lagrangian model. Furthermore, we introduce both a state model and an observation model to constitute a system model for the particle filter (PF). To address the impact of the wave shadow effect on the observation model, we derive an optimal parameter via the Kullback-Leibler divergence (KLD) to mitigate the error. After the optimal parameter is acquired, an improved likelihood function is presented. Finally, the estimated position is acquired. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
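The algorithm builds on a standard particle filter over a motion model and an observation model. A generic bootstrap particle-filter skeleton for 2-D position tracking with noisy range measurements is sketched below; the Lagrangian motion model, the KLD-derived correction parameter, and the improved likelihood function from the paper are not reproduced, and every number here is a placeholder.

import numpy as np

rng = np.random.default_rng(3)
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])   # reference node positions, m (placeholders)
n_particles, n_steps, dt = 500, 20, 1.0
range_sigma = 3.0        # range-measurement noise std, m (placeholder)

true_pos, true_vel = np.array([40.0, 60.0]), np.array([1.0, -0.5])
particles = true_pos + rng.normal(0, 10.0, (n_particles, 2))   # initial particle cloud
weights = np.full(n_particles, 1.0/n_particles)

for _ in range(n_steps):
    true_pos = true_pos + true_vel*dt
    # Propagate particles: known drift plus process noise (simplified stand-in for the motion model)
    particles += true_vel*dt + rng.normal(0, 1.0, particles.shape)
    # Simulated noisy ranges to the anchors (stand-in for the degraded observations)
    z = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, range_sigma, len(anchors))
    # Weight update from a Gaussian range likelihood
    pred = np.linalg.norm(anchors[None, :, :] - particles[:, None, :], axis=2)
    weights *= np.exp(-0.5*np.sum(((z - pred)/range_sigma)**2, axis=1))
    weights /= weights.sum()
    # Multinomial resampling when the effective sample size collapses
    if 1.0/np.sum(weights**2) < n_particles/2:
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles, weights = particles[idx], np.full(n_particles, 1.0/n_particles)

estimate = np.average(particles, axis=0, weights=weights)
print("true position:", np.round(true_pos, 1), " PF estimate:", np.round(estimate, 1))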
Ara, Perzila; Cheng, Shaokoon; Heimlich, Michael; Dutkiewicz, Eryk
2015-01-01
Recent developments in capsule endoscopy have highlighted the need for accurate techniques to estimate the location of a capsule endoscope. Highly accurate location estimation of a capsule endoscope in the gastrointestinal (GI) tract, to within several millimeters, is a challenging task. This is mainly because the radio-frequency signals encounter high loss and a highly dynamic channel propagation environment. Therefore, an accurate path-loss model is required for the development of accurate localization algorithms. This paper presents an in-body path-loss model for the human abdomen region at 2.4 GHz. To develop the path-loss model, electromagnetic simulations using the Finite-Difference Time-Domain (FDTD) method were carried out on two different anatomical human models. A mathematical expression for the path-loss model was proposed based on analysis of the simulated loss at different capsule locations inside the small intestine. The proposed path-loss model is a good approximation for modeling in-body RF propagation, since real in-body measurements are largely infeasible for capsule endoscopy subjects.
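Path-loss models of this kind are typically cast in log-distance form, PL(d) = PL(d0) + 10 n log10(d/d0), possibly with a shadowing term. The sketch below fits that generic form to synthetic loss samples; the log-distance form is an assumption for illustration, and none of the coefficients reproduce those reported in the paper.

import numpy as np

rng = np.random.default_rng(4)
d0 = 0.05                                   # reference distance, m (placeholder)
true_PL0, true_n, sigma = 50.0, 4.5, 3.0    # placeholder "truth" for the synthetic data

# Synthetic capsule-to-receiver distances (m) and simulated path-loss samples (dB)
d = rng.uniform(0.05, 0.25, 40)
pl = true_PL0 + 10*true_n*np.log10(d/d0) + rng.normal(0, sigma, d.size)

# Least-squares fit of the log-distance model PL(d) = PL0 + 10*n*log10(d/d0)
X = np.column_stack([np.ones_like(d), 10*np.log10(d/d0)])
(PL0_hat, n_hat), *_ = np.linalg.lstsq(X, pl, rcond=None)
print(f"fitted PL(d0) = {PL0_hat:.1f} dB, path-loss exponent n = {n_hat:.2f}")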
Anatomic modeling using 3D printing: quality assurance and optimization.
Leng, Shuai; McGee, Kiaran; Morris, Jonathan; Alexander, Amy; Kuhlmann, Joel; Vrieze, Thomas; McCollough, Cynthia H; Matsumoto, Jane
2017-01-01
The purpose of this study is to provide a framework for the development of a quality assurance (QA) program for use in medical 3D printing applications. An interdisciplinary QA team was built with expertise from all aspects of 3D printing. A systematic QA approach was established to assess the accuracy and precision of each step during the 3D printing process, including: image data acquisition, segmentation and processing, and 3D printing and cleaning. Validation of printed models was performed by qualitative inspection and quantitative measurement. The latter was achieved by scanning the printed model with a high resolution CT scanner to obtain images of the printed model, which were registered to the original patient images and the distance between them was calculated on a point-by-point basis. A phantom-based QA process, with two QA phantoms, was also developed. The phantoms went through the same 3D printing process as that of the patient models to generate printed QA models. Physical measurement, fit tests, and image based measurements were performed to compare the printed 3D model to the original QA phantom, with its known size and shape, providing an end-to-end assessment of errors involved in the complete 3D printing process. Measured differences between the printed model and the original QA phantom ranged from -0.32 mm to 0.13 mm for the line pair pattern. For a radial-ulna patient model, the mean distance between the original data set and the scanned printed model was -0.12 mm (ranging from -0.57 to 0.34 mm), with a standard deviation of 0.17 mm. A comprehensive QA process from image acquisition to completed model has been developed. Such a program is essential to ensure the required accuracy of 3D printed models for medical applications.
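The quantitative validation above registers a CT scan of the printed model to the original patient images and reports point-by-point distances. A generic nearest-neighbor surface-distance sketch (after registration, on synthetic point clouds) is shown below using scipy's KD-tree; it is not the registration and measurement pipeline used in the paper, and the 0.17 mm scatter is borrowed from the reported standard deviation only to make the example concrete.

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(5)
# Synthetic "reference" surface points (mm) and a "printed model" surface with small errors
reference = rng.uniform(0, 50, (2000, 3))
printed = reference + rng.normal(0, 0.17, reference.shape)   # ~0.17 mm scatter, as an example

# Report unsigned nearest-neighbor distances from each printed-model point to the reference.
tree = cKDTree(reference)
dist, _ = tree.query(printed)
print(f"mean deviation {dist.mean():.3f} mm, max {dist.max():.3f} mm, std {dist.std():.3f} mm")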
Zebrafish model systems for developmental neurobehavioral toxicology.
Bailey, Jordan; Oliveri, Anthony; Levin, Edward D
2013-03-01
Zebrafish offer many advantages that complement classic mammalian models for the study of normal development as well as for the teratogenic effects of exposure to hazardous compounds. The clear chorion and embryo of the zebrafish allow for continuous visualization of the anatomical changes associated with development, which, along with short maturation times and the capability of complex behavior, makes this model particularly useful for measuring changes to the developing nervous system. Moreover, the rich array of developmental, behavioral, and molecular benefits offered by the zebrafish have contributed to an increasing demand for the use of zebrafish in behavioral teratology. Essential for this endeavor has been the development of a battery of tests to evaluate a spectrum of behavior in zebrafish. Measures of sensorimotor plasticity, emotional function, cognition and social interaction have been used to characterize the persisting adverse effects of developmental exposure to a variety of chemicals including therapeutic drugs, drugs of abuse and environmental toxicants. In this review, we present and discuss such tests and data from a range of developmental neurobehavioral toxicology studies using zebrafish as a model. Zebrafish provide a key intermediate model between high throughput in vitro screens and the classic mammalian models as they have the accessibility of in vitro models and the complex functional capabilities of mammalian models. Copyright © 2013 Wiley Periodicals, Inc.
Zebrafish Model Systems for Developmental Neurobehavioral Toxicology
Bailey, Jordan; Oliveri, Anthony; Levin, Edward D.
2014-01-01
Zebrafish offer many advantages that complement classic mammalian models for the study of normal development as well as for the teratogenic effects of exposure to hazardous compounds. The clear chorion and embryo of the zebrafish allow for continuous visualization of the anatomical changes associated with development, which, along with short maturation times and the capability of complex behavior, makes this model particularly useful for measuring changes to the developing nervous system. Moreover, the rich array of developmental, behavioral, and molecular benefits offered by the zebrafish have contributed to an increasing demand for the use of zebrafish in behavioral teratology. Essential for this endeavor has been the development of a battery of tests to evaluate a spectrum of behavior in zebrafish. Measures of sensorimotor plasticity, emotional function, cognition and social interaction have been used to characterize the persisting adverse effects of developmental exposure to a variety of chemicals including therapeutic drugs, drugs of abuse and environmental toxicants. In this review, we present and discuss such tests and data from a range of developmental neurobehavioral toxicology studies using zebrafish as a model. Zebrafish provide a key intermediate model between high throughput in vitro screens and the classic mammalian models as they have the accessibility of in vitro models and the complex functional capabilities of mammalian models. PMID:23723169
NASA Astrophysics Data System (ADS)
Ammouri, Aymen; Ben Salah, Walid; Khachroumi, Sofiane; Ben Salah, Tarek; Kourda, Ferid; Morel, Hervé
2014-05-01
Design of integrated power converters requires prototype-less approaches. Specific simulations are required for the investigation and validation process. Simulation relies on active and passive device models. Models of planar devices, for instance, are still not available in power simulator tools. There is, thus, a specific limitation during the simulation process of integrated power systems. The paper focuses on the development of a physically-based planar inductor model and its validation inside a power converter during transient switching. The planar inductor remains a complex device to model, particularly when skin, proximity, and parasitic capacitance effects are taken into account. A heterogeneous simulation scheme, including circuit and device models, is successfully implemented in the VHDL-AMS language and simulated on the Simplorer platform. The mixed simulation results have been tested and compare favorably with practical measurements. It is found that the multi-domain simulation results and measurement data are in close agreement.
Models, Measurements, and Local Decisions: Assessing and ...
This presentation includes a combination of modeling and measurement results to characterize near-source air quality in Newark, New Jersey, with consideration of how this information could be used to inform decision making to reduce the risk of health impacts. Decisions could include either exposure or emissions reductions and could involve a host of stakeholders, including residents, academics, NGOs, and local and federal agencies. This presentation includes results from the C-PORT modeling system and from a citizen science project in the local area. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, process models, and decision support tools for use both within and outside of EPA.
NASA Astrophysics Data System (ADS)
Ylilammi, Markku; Ylivaara, Oili M. E.; Puurunen, Riikka L.
2018-05-01
The conformality of thin films grown by atomic layer deposition (ALD) is studied using all-silicon test structures with long narrow lateral channels. A diffusion model, developed in this work, is used for studying the propagation of ALD growth in narrow channels. The diffusion model takes into account the gas transportation at low pressures, the dynamic Langmuir adsorption model for the film growth and the effect of channel narrowing due to film growth. The film growth is calculated by solving the diffusion equation with surface reactions. An efficient analytic approximate solution of the diffusion equation is developed for fitting the model to the measured thickness profile. The fitting gives the equilibrium constant of adsorption and the sticking coefficient. This model and Gordon's plug flow model are compared. The simulations predict the experimental measurement results quite well for Al2O3 and TiO2 ALD processes.
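As a rough illustration of the kind of model described above, the dimensionless sketch below couples one-dimensional diffusion along the channel with a Langmuir-type adsorption sink on the walls; the grid, time step, and Damköhler number are illustrative placeholders, not the fitted parameters or the analytic approximation developed in the paper.

```python
import numpy as np

# Dimensionless sketch: precursor diffuses down a closed-end channel and is
# consumed by Langmuir-type adsorption on the walls until the surface
# saturates, so the coverage (growth) front advances with increasing dose.
nx = 100                  # grid points along the channel
dz = 1.0 / nx             # dimensionless position step
dt = 2.0e-5               # dimensionless time step (stable: dt < 0.5*dz**2)
Da = 50.0                 # Damkohler number: wall adsorption rate / diffusion rate
n_steps = 25000           # total dimensionless exposure time = n_steps * dt

c = np.zeros(nx)          # gas-phase precursor concentration along the channel
theta = np.zeros(nx)      # fractional surface coverage (~ relative film thickness)

for _ in range(n_steps):
    c[0] = 1.0                                    # channel mouth held at the inlet value
    lap = np.zeros_like(c)
    lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dz**2
    sink = Da * c * (1.0 - theta)                 # Langmuir adsorption sink on the walls
    c += dt * (lap - sink)
    c[-1] = c[-2]                                 # zero-flux (closed) far end
    theta += dt * Da * c * (1.0 - theta)          # coverage grows where precursor arrives
    np.clip(theta, 0.0, 1.0, out=theta)

print(theta[::10].round(2))                       # saturation/thickness profile along the channel
```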
NASA Astrophysics Data System (ADS)
Letu, H.; Nagao, T. M.; Nakajima, T. Y.; Ishimoto, H.; Riedi, J.; Shang, H.
2017-12-01
Ice cloud property products derived from satellite measurements are applicable to climate change studies, numerical weather prediction, and atmospheric research. Ishimoto et al. (2010) and Letu et al. (2016) developed the single-scattering properties of a highly irregular ice particle model, called the Voronoi model, for the ice cloud product of the GCOM-C satellite program. Comparison with other well-known scattering models showed that the Voronoi model performs well in retrievals of ice cloud properties. The cloud property algorithm of the GCOM-C satellite program (Nakajima et al., 1995; Ishida and Nakajima, 2009; Ishimoto et al., 2009; Letu et al., 2012, 2014, 2016) is improved to produce the Himawari-8/AHI cloud products, accounting for the variation of the solar zenith angle. Himawari-8 is the new-generation geostationary meteorological satellite launched successfully by the Japan Meteorological Agency (JMA) on 7 October 2014. In this study, ice cloud optical and microphysical properties are simulated with the RSTAR radiative transfer code using various scattering models. The scattering properties of the Voronoi model are investigated for developing the AHI ice cloud products. Furthermore, optical and microphysical properties of ice clouds are retrieved from Himawari-8/AHI satellite measurements. Finally, retrieval results from Himawari-8/AHI are compared to MODIS-C6 cloud property products for validation of the AHI cloud products.
A kinetic energy model of two-vehicle crash injury severity.
Sobhani, Amir; Young, William; Logan, David; Bahrololoom, Sareh
2011-05-01
An important part of any model of vehicle crashes is the development of a procedure to estimate crash injury severity. After reviewing existing models of crash severity, this paper outlines the development of a modelling approach aimed at measuring the injury severity of people in two-vehicle road crashes. This model can be incorporated into a discrete event traffic simulation model, using simulation model outputs as its input. The model can then serve as an integral part of a simulation model estimating the crash potential of components of the traffic system. The model is developed using Newtonian Mechanics and Generalised Linear Regression. The factors contributing to the speed change (ΔV(s)) of a subject vehicle are identified using the law of conservation of momentum. A Log-Gamma regression model is fitted to measure speed change (ΔV(s)) of the subject vehicle based on the identified crash characteristics. The kinetic energy applied to the subject vehicle is calculated by the model, which in turn uses a Log-Gamma Regression Model to estimate the Injury Severity Score of the crash from the calculated kinetic energy, crash impact type, presence of airbag and/or seat belt and occupant age. Copyright © 2010 Elsevier Ltd. All rights reserved.
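As a worked illustration of the momentum-conservation step, the sketch below computes the speed change of the subject vehicle for the simplest case of a collinear, fully plastic impact; the vehicle masses and speeds are illustrative, and the paper's regression-based treatment of impact configuration and injury severity is not reproduced here.

```python
def delta_v_subject(m_subject, m_other, v_subject, v_other):
    """Speed change of the subject vehicle for a collinear, fully plastic
    impact (both vehicles reach a common post-impact velocity), from
    conservation of momentum: v_common = (m_s*v_s + m_o*v_o) / (m_s + m_o)."""
    v_common = (m_subject * v_subject + m_other * v_other) / (m_subject + m_other)
    return abs(v_common - v_subject)

# Illustrative numbers only: a 1500 kg subject vehicle at 60 km/h struck
# head-on by a 2000 kg vehicle travelling at 40 km/h in the opposite direction.
dv = delta_v_subject(1500.0, 2000.0, 60.0 / 3.6, -40.0 / 3.6)
ke = 0.5 * 1500.0 * dv**2          # kinetic energy associated with the speed change (J)
print(f"delta-V = {dv:.1f} m/s, kinetic energy = {ke / 1000.0:.0f} kJ")
```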
Automated method for the systematic interpretation of resonance peaks in spectrum data
Damiano, B.; Wood, R.T.
1997-04-22
A method is described for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e., measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process, and, consequently, correspond to the physical condition of the process or system. 1 fig.
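The train-on-model, apply-to-measurements idea can be sketched briefly. In the sketch below the "mathematical model" is a placeholder single-resonance oscillator, and the network learns to map resonance-peak features back to physical parameters; the forward model, feature set, and parameter ranges are all illustrative, not those of the patented method.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def peak_features(k, c, m=1.0):
    """Placeholder forward model: resonance frequency (Hz) and half-power
    bandwidth (rad/s) of a single-degree-of-freedom oscillator."""
    wn = np.sqrt(k / m)
    zeta = c / (2.0 * np.sqrt(k * m))
    return [wn / (2.0 * np.pi), 2.0 * zeta * wn]

# Generate the training set from the model: sample physical parameters
# (stiffness, damping) and compute the spectral features they produce.
lo, hi = np.array([1.0e4, 1.0]), np.array([1.0e5, 50.0])
params = rng.uniform(lo, hi, size=(1000, 2))
features = np.array([peak_features(k, c) for k, c in params])
targets = (params - lo) / (hi - lo)               # normalized parameters

# Train the network to invert the mapping: spectral features -> parameters.
net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0))
net.fit(features, targets)

# "Monitor" a system: extract features from a measured spectrum (here simulated)
# and infer the physical condition they correspond to.
estimate = net.predict([peak_features(5.0e4, 20.0)])[0] * (hi - lo) + lo
print(estimate)                                    # should be close to [5e4, 20]
```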
Development and Validation of an NPSS Model of a Small Turbojet Engine
NASA Astrophysics Data System (ADS)
Vannoy, Stephen Michael
Recent studies have shown that integrated gas turbine engine (GT)/solid oxide fuel cell (SOFC) systems for combined propulsion and power on aircraft offer a promising method for more efficient onboard electrical power generation. However, it appears that nobody has actually attempted to construct a hybrid GT/SOFC prototype for combined propulsion and electrical power generation. This thesis contributes to this ambition by developing an experimentally validated thermodynamic model of a small gas turbine (~230 N thrust) platform for a bench-scale GT/SOFC system. The thermodynamic model is implemented in a NASA-developed software environment called Numerical Propulsion System Simulation (NPSS). An indoor test facility was constructed to measure the engine's performance parameters: thrust, air flow rate, fuel flow rate, engine speed (RPM), and all axial stage stagnation temperatures and pressures. The NPSS model predictions are compared to the measured performance parameters for steady state engine operation.
Fluid Flow and Solidification Under Combined Action of Magnetic Fields and Microgravity
NASA Technical Reports Server (NTRS)
Li, B. Q.; Shu, Y.; Li, K.; deGroh, H. C.
2002-01-01
Mathematical models, both 2-D and 3-D, are developed to represent g-jitter induced fluid flows and their effects on solidification under the combined action of magnetic fields and microgravity. The numerical model development is based on the finite element solution of governing equations describing the transient g-jitter driven fluid flows, heat transfer, and solutal transport during crystal growth with and without an applied magnetic field in space vehicles. To validate the model predictions, a ground-based g-jitter simulator is developed using oscillating wall temperatures, and the resulting time-oscillating fluid flows are measured using a laser PIV system. The measurements compare well with numerical results obtained from the numerical models. Results show that a combined action derived from magnetic damping and microgravity can be an effective means to control the melt flow and solutal transport in space single crystal growth systems.
Barlow, Paul M.; Dickerman, David C.
2001-01-01
This report describes the development, application, and evaluation of numerical-simulation and conjunctive-management models of the Hunt-Annaquatucket-Pettaquamscutt stream-aquifer system in central Rhode Island. Steady-state and transient numerical models were developed to improve the understanding of the hydrologic budget of the system, the interaction of ground-water and surface-water components of the system, and the contributing areas and sources of water to supply wells in the system. The numerical models were developed and calibrated on the basis of hydrologic data collected during this and previous investigations. These data include lithologic information for the aquifer; hydraulic properties of aquifer and streambed materials; recharge to the aquifer; water levels measured in wells, ponds, and streambed piezometers; streamflow measurements for various streams within the system; and ground-water withdrawal rates from, and wastewater discharge to, the aquifer.
Zhou, Haiying; Purdie, Jennifer; Wang, Tongtong; Ouyang, Anli
2010-01-01
The number of therapeutic proteins produced by cell culture in the pharmaceutical industry continues to increase. During the early stages of manufacturing process development, hundreds of clones and various cell culture conditions are evaluated to develop a robust process to identify and select cell lines with high productivity. It is highly desirable to establish a high throughput system to accelerate process development and reduce cost. Multiwell plates and shake flasks are widely used in the industry as the scale down model for large-scale bioreactors. However, one of the limitations of these two systems is the inability to measure and control pH in a high throughput manner. As pH is an important process parameter for cell culture, this could limit the applications of these scale down model vessels. An economical, rapid, and robust pH measurement method was developed at Eli Lilly and Company by employing SNARF-4F 5-(-and 6)-carboxylic acid. The method demonstrated the ability to measure the pH values of cell culture samples in a high throughput manner. Based upon the chemical equilibrium of CO(2), HCO(3)(-), and the buffer system, i.e., HEPES, we established a mathematical model to regulate pH in multiwell plates and shake flasks. The model calculates the required %CO(2) from the incubator and the amount of sodium bicarbonate to be added to adjust pH to a preset value. The model was validated by experimental data, and pH was accurately regulated by this method. The feasibility of studying the pH effect on cell culture in 96-well plates and shake flasks was also demonstrated in this study. This work shed light on mini-bioreactor scale down model construction and paved the way for cell culture process development to improve productivity or product quality using high throughput systems. Copyright 2009 American Institute of Chemical Engineers
2012-01-01
If public health agencies are to fulfill their overall mission, they need to have defined measurable targets and should structure services to reach these targets, rather than offer a combination of ill-targeted programs. In order to do this, it is essential that there be a clear definition of what public health should do: a definition that does not ebb and flow based upon the prevailing political winds, but rather is based upon professional standards and measurements. The establishment of the Essential Public Health Services framework in the U.S.A. was a major move in that direction, and the model, or revisions of the model, have been adopted beyond the borders of the U.S. This article reviews the U.S. public health system, the needs and processes which brought about the development of the 10 Essential Public Health Services (EPHS), and historical and contemporary applications of the model. It highlights the value of establishing a common delineation of public health activities such as those contained in the EPHS, and explores the validity of using the same process in other countries through a discussion of the development in Israel of a similar model, the 10 Public Health Essential Functions (PHEF), that describes the activities of Israel's public health system. The use of the same process and framework to develop similar yet distinct frameworks suggests that the process has wide applicability, and may be beneficial to any public health system. Once a model is developed, it can be used to measure public health performance and improve the quality of services delivered through the development of standards and measures based upon the model, which could, ultimately, improve the health of the communities that depend upon public health agencies to protect their well-being. PMID:23181452
Measuring housing quality in the absence of a monetized real estate market.
Rindfuss, Ronald R; Piotrowski, Martin; Thongthai, Varachai; Prasartkul, Pramote
2007-03-01
Measuring housing quality or value or both has been a weak component of demographic and development research in less developed countries that lack an active real estate (housing) market. We describe a new method based on a standardized subjective rating process. It is designed to be used in settings that do not have an active, monetized housing market. The method is applied in an ongoing longitudinal study in north-east Thailand and could be straightforwardly used in many other settings. We develop a conceptual model of the process whereby households come to reside in high-quality or low-quality housing units. We use this theoretical model in conjunction with longitudinal data to show that the new method of measuring housing quality behaves as theoretically expected, thus providing evidence of face validity.
NASA Astrophysics Data System (ADS)
Bhakti, Satria Seto; Samsudin, Achmad; Chandra, Didi Teguh; Siahaan, Parsaoran
2017-05-01
The aim of this research is to develop multiple-choice test items as tools for measuring science generic skills on the solar system. To achieve this aim, the researchers used the ADDIE model, consisting of Analysis, Design, Development, Implementation, and Evaluation, as the research method. The science generic skills were limited to five indicators: (1) indirect observation, (2) awareness of scale, (3) logical inference, (4) causal relations, and (5) mathematical modeling. The participants were 32 students at one of the junior high schools in Bandung. The results show that the constructed multiple-choice test items were declared valid by the expert validator, and subsequent testing showed that the developed multiple-choice test items are able to measure science generic skills on the solar system.
Development of an Austenitization Kinetics Model for 22MnB5 Steel
NASA Astrophysics Data System (ADS)
Di Ciano, M.; Field, N.; Wells, M. A.; Daun, K. J.
2018-03-01
This paper presents a first-order austenitization kinetics model for 22MnB5 steel, commonly used in hot forming die quenching. Model parameters are derived from constant heating rate dilatometry measurements. Vickers hardness measurements made on coupons that were quenched at intermediate stages of the process were used to verify the model, and the Ac1 and Ac3 temperatures inferred from dilatometry are consistent with correlations found in the literature. The austenitization model was extended to consider non-constant heating rates typical of industrial furnaces and again showed reasonable agreement between predictions and measurements. Finally, the model is used to predict latent heat evolution during industrial heating and is shown to be consistent with values inferred from thermocouple measurements of furnace-heated 22MnB5 coupons reported in the literature.
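A minimal sketch of what a first-order kinetics model of this kind can look like is given below, integrating dX/dt = k(T)(1 - X) with an Arrhenius rate over a constant heating ramp; the activation energy, pre-exponential factor, and transformation-start temperature are illustrative placeholders, not the fitted 22MnB5 parameters from the paper.

```python
import numpy as np

R = 8.314                  # gas constant, J/(mol K)
Q = 4.0e5                  # activation energy, J/mol (illustrative)
K0 = 3.0e19                # pre-exponential factor, 1/s (illustrative)
AC1 = 730.0 + 273.15       # assumed transformation-start temperature, K (illustrative)

def austenite_fraction(heating_rate, T_start=293.15, T_end=1200.0, dt=0.01):
    """Integrate dX/dt = k(T)*(1 - X), with k(T) = K0*exp(-Q/RT), at a constant
    heating rate (K/s); returns the temperature and austenite-fraction histories."""
    T, X = T_start, 0.0
    Ts, Xs = [T], [X]
    while T < T_end and X < 0.999:
        if T >= AC1:                              # transformation active above Ac1
            k = K0 * np.exp(-Q / (R * T))
            X = min(X + dt * k * (1.0 - X), 1.0)
        T += heating_rate * dt
        Ts.append(T)
        Xs.append(X)
    return np.array(Ts), np.array(Xs)

for rate in (5.0, 15.0, 50.0):                    # typical dilatometry heating rates, K/s
    T, X = austenite_fraction(rate)
    T50 = T[np.searchsorted(X, 0.5)]
    print(f"{rate:5.1f} K/s -> 50% austenite at about {T50 - 273.15:.0f} C")
```

Faster heating shifts the 50% transformation temperature upward, which is the qualitative behavior such a model is meant to capture.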
Modeling strength data for CREW CHIEF
NASA Technical Reports Server (NTRS)
Mcdaniel, Joe W.
1990-01-01
The Air Force has developed CREW CHIEF, a computer-aided design (CAD) tool for simulating and evaluating aircraft maintenance to determine if the required activities are feasible. CREW CHIEF gives the designer the ability to simulate maintenance activities with respect to reach, accessibility, strength, hand tool operation, and materials handling. While developing CREW CHIEF, extensive research was performed to describe workers' strength capabilities for using hand tools and manual handling of objects. More than 100,000 strength measures were collected and modeled for CREW CHIEF. These measures involved both male and female subjects in the 12 maintenance postures included in CREW CHIEF. The data collection and modeling effort are described.
NASA Astrophysics Data System (ADS)
Samanta, Suman; Patra, Pulak Kumar; Banerjee, Saon; Narsimhaiah, Lakshmi; Sarath Chandran, M. A.; Vijaya Kumar, P.; Bandyopadhyay, Sanjib
2018-06-01
In developing countries like India, global solar radiation (GSR) is measured at very few locations due to the non-availability of radiation measuring instruments. To overcome the inadequacy of GSR measurements, scientists have developed many empirical models to estimate location-wise GSR. In the present study, three simple forms of the Angstrom equation [Angstrom-Prescott (A-P), Ogelman, and Bahel] were used to estimate GSR at six geographically and climatologically different locations across India with the objective of finding a set of common constants usable for the whole country. Results showed that GSR values varied from 9.86 to 24.85 MJ m-2 day-1 for different stations. It was also observed that the A-P model showed smaller errors than the Ogelman and Bahel models. All the models estimated GSR well, with the 1:1 comparison between measured and estimated values giving Nash-Sutcliffe efficiency (NSE) values ≥ 0.81 for all locations. Measured GSR data pooled over the six selected locations were analyzed to obtain a new set of constants for the A-P equation applicable throughout the country. The set of constants (a = 0.29 and b = 0.40) was named "One India One Constant (OIOC)," and the model was named "MOIOC." Furthermore, the developed constants were validated statistically at another six locations in India and produced close estimates. High R2 values (≥ 76%) along with low mean bias error (MBE), ranging from -0.64 to 0.05 MJ m-2 day-1, revealed that the new constants are able to predict GSR with a small percentage error.
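The Angstrom-Prescott estimate with the pooled OIOC constants (a = 0.29, b = 0.40) reported above can be written out directly as H = (a + b n/N) Ra. The sketch below uses the standard FAO-56 expressions for extraterrestrial radiation Ra and day length N; the latitude, day of year, and sunshine hours in the example are illustrative, not values from the study.

```python
import numpy as np

GSC = 0.0820          # solar constant, MJ m-2 min-1

def extraterrestrial_radiation(lat_deg, doy):
    """Daily extraterrestrial radiation Ra (MJ m-2 day-1) and day length N (h),
    from the standard FAO-56 solar-geometry expressions."""
    phi = np.radians(lat_deg)
    dr = 1.0 + 0.033 * np.cos(2.0 * np.pi * doy / 365.0)        # inverse relative Earth-Sun distance
    delta = 0.409 * np.sin(2.0 * np.pi * doy / 365.0 - 1.39)    # solar declination (rad)
    ws = np.arccos(-np.tan(phi) * np.tan(delta))                # sunset hour angle (rad)
    ra = (24.0 * 60.0 / np.pi) * GSC * dr * (
        ws * np.sin(phi) * np.sin(delta) + np.cos(phi) * np.cos(delta) * np.sin(ws))
    return ra, 24.0 / np.pi * ws

def gsr_angstrom_prescott(lat_deg, doy, sunshine_hours, a=0.29, b=0.40):
    """Estimated daily GSR (MJ m-2 day-1): H = (a + b * n/N) * Ra, with the OIOC constants."""
    ra, n_max = extraterrestrial_radiation(lat_deg, doy)
    return (a + b * sunshine_hours / n_max) * ra

# e.g. ~22.5 deg N latitude, mid-April (day 105), 9 h of bright sunshine
print(round(gsr_angstrom_prescott(22.5, 105, 9.0), 2))   # ~22 MJ m-2 day-1
```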
Scattering measurements on natural and model trees
NASA Technical Reports Server (NTRS)
Rogers, James C.; Lee, Sung M.
1990-01-01
The acoustical back scattering from a simple scale model of a tree has been experimentally measured. The model consisted of a trunk and six limbs, each with 4 branches; no foliage or twigs were included. The data from the anechoic chamber measurements were then mathematically combined to construct the effective back scattering from groups of trees. Also, initial measurements have been conducted out-of-doors on a single tree in an open field in order to characterize its acoustic scattering as a function of azimuth angle. These measurements were performed in the spring, prior to leaf development. The data support a statistical model of forest scattering; the scattered signal spectrum is highly irregular but with a remarkable general resemblance to the incident signal spectrum. Also, the scattered signal's spectra showed little dependence upon scattering angle.
NASA Astrophysics Data System (ADS)
Nugrahani, F.; Jazaldi, F.; Noerhadi, N. A. I.
2017-08-01
The field of orthodontics is always evolving, and this includes the use of innovative technology. One example is the development of three-dimensional (3D) digital study models that replace conventional study models made of stone. This study aims to compare mesio-distal tooth width, intercanine width, and intermolar width measurements between a 3D digital study model and a conventional study model. Twelve sets of upper arch dental impressions were taken from subjects with non-crowded teeth. The impressions were taken twice, once with alginate and once with polyvinylsiloxane. The alginate impressions were used for the conventional study models, and the polyvinylsiloxane impressions were scanned to obtain the 3D digital study models. Scanning was performed using a laser triangulation scanner device assembled by the School of Electrical Engineering and Informatics at the Institut Teknologi Bandung and David Laser Scan software. For the conventional model, the mesio-distal width, intercanine width, and intermolar width were measured using digital calipers; in the 3D digital study model they were measured using software. There were no significant differences in the mesio-distal width, intercanine width, or intermolar width measurements between the conventional and 3D digital study models (p > 0.05). Thus, measurements using 3D digital study models are as accurate as those obtained from conventional study models.
ERIC Educational Resources Information Center
Chapman, Rebekah; Buckley, Lisa; Sheehan, Mary
2011-01-01
The Extended Adolescent Injury Checklist (E-AIC), a self-report measure of injury based on the model of the Adolescent Injury Checklist (AIC), was developed for use in the evaluation of school-based interventions. The three stages of this development involved focus groups with adolescents and consultations with medical staff, pilot testing of the…
Sun, Baozhou; Lam, Dao; Yang, Deshan; Grantham, Kevin; Zhang, Tiezhi; Mutic, Sasa; Zhao, Tianyu
2018-05-01
Clinical treatment planning systems for proton therapy currently do not calculate monitor units (MUs) in passive scatter proton therapy due to the complexity of the beam delivery systems. Physical phantom measurements are commonly employed to determine the field-specific output factors (OFs) but are often subject to limited machine time, measurement uncertainties and intensive labor. In this study, a machine learning-based approach was developed to predict output (cGy/MU) and derive MUs, incorporating the dependencies on gantry angle and field size for a single-room proton therapy system. The goal of this study was to develop a secondary check tool for OF measurements and eventually eliminate patient-specific OF measurements. The OFs of 1754 fields previously measured in a water phantom with calibrated ionization chambers and electrometers for patient-specific fields with various range and modulation width combinations for 23 options were included in this study. The training data sets for machine learning models in three different methods (Random Forest, XGBoost and Cubist) included 1431 (~81%) OFs. Ten-fold cross-validation was used to prevent "overfitting" and to validate each model. The remaining 323 (~19%) OFs were used to test the trained models. The difference between the measured and predicted values from machine learning models was analyzed. Model prediction accuracy was also compared with that of the semi-empirical model developed by Kooy (Phys. Med. Biol. 50, 2005). Additionally, gantry angle dependence of OFs was measured for three groups of options categorized by the selection of the second scatterers. Field size dependence of OFs was investigated for the measurements with and without patient-specific apertures. All three machine learning methods showed higher accuracy than the semi-empirical model, which showed discrepancies of up to 7.7% for the treatment fields with full range and full modulation width. The Cubist-based solution outperformed all other models (P < 0.001) with a mean absolute discrepancy of 0.62% and maximum discrepancy of 3.17% between the measured and predicted OFs. The OFs showed a small dependence on gantry angle for small and deep options while they were constant for large options. The OF decreased by 3%-4% as the field radius was reduced to 2.5 cm. Machine learning methods can be used to predict OF for double-scatter proton machines with greater prediction accuracy than the most popular semi-empirical prediction model. By incorporating the gantry angle dependence and field size dependence, the machine learning-based methods can be used for a sanity check of OF measurements and bear the potential to eliminate the time-consuming patient-specific OF measurements. © 2018 American Association of Physicists in Medicine.
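The regression set-up can be sketched briefly with one of the three methods (Random Forest). In the sketch below the feature set, the synthetic output-factor surface, and the MU calculation are illustrative stand-ins; the study itself trained on roughly 1431 measured output factors with ten-fold cross-validation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000

# Features: range (g/cm2), modulation width (g/cm2), gantry angle (deg), field radius (cm)
X = np.column_stack([
    rng.uniform(5.0, 25.0, n),
    rng.uniform(2.0, 20.0, n),
    rng.uniform(0.0, 360.0, n),
    rng.uniform(2.0, 12.5, n),
])

# Placeholder output-factor surface (cGy/MU) with mild gantry-angle and field-size effects
y = (1.0
     - 0.004 * X[:, 0] + 0.002 * X[:, 1]
     + 0.005 * np.cos(np.radians(X[:, 2]))
     - 0.04 * np.exp(-(X[:, 3] - 2.0))
     + rng.normal(0.0, 0.003, n))

model = RandomForestRegressor(n_estimators=200, random_state=0)
mae = -cross_val_score(model, X, y, cv=10, scoring="neg_mean_absolute_error").mean()
print(f"10-fold CV mean absolute error: {mae:.4f} cGy/MU")

model.fit(X, y)
field = [[15.0, 10.0, 180.0, 5.0]]        # one treatment field's parameters
output = model.predict(field)[0]          # predicted output (cGy/MU)
mu = 200.0 / output                       # monitor units needed to deliver 200 cGy
print(f"predicted output = {output:.3f} cGy/MU, MU for 200 cGy = {mu:.1f}")
```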
Tampa Bay Water Clarity Model (TBWCM): As a Predictive Tool
The Tampa Bay Water Clarity Model was developed as a predictive tool for estimating the impact of changing nutrient loads on water clarity as measured by secchi depth. The model combines a physical mixing model with an irradiance model and nutrient cycling model. A 10 segment bi...
Multiband radar characterization of forest biomes
NASA Technical Reports Server (NTRS)
Dobson, M. Craig; Ulaby, Fawwaz T.
1990-01-01
The utility of airborne and orbital SAR in classification, assessment, and monitoring of forest biomes is investigated through analysis of orbital synthetic aperture radar (SAR) and multifrequency and multipolarized airborne SAR imagery relying on image tone and texture. Preliminary airborne SAR experiments and truck-mounted scatterometer observations demonstrated that the three dimensional structural complexity of a forest, and the various scales of temporal dynamics in the microwave dielectric properties of both trees and the underlying substrate, would severely limit empirical or semi-empirical approaches. As a consequence, it became necessary to develop a more profound understanding of the electromagnetic properties of a forest scene and their temporal dynamics through controlled experimentation coupled with theoretical development and verification. The concatenation of various models into a physically-based composite model treating the entire forest scene became the major objective of the study, as this is the key to development of a series of robust retrieval algorithms for forest biophysical properties. In order to verify the performance of the component elements of the composite model, a series of controlled laboratory and field experiments were undertaken to: (1) develop techniques to measure the microwave dielectric properties of vegetation; (2) relate the microwave dielectric properties of vegetation to more readily measured characteristics such as density and moisture content; (3) calculate the radar cross-section of leaves and cylinders; (4) improve backscatter models for rough surfaces; and (5) relate attenuation and phase delays during propagation through canopies to canopy properties. These modeling efforts, as validated by the measurements, were incorporated within a larger model known as the Michigan Microwave Canopy Scattering (MIMICS) Model.
Emission of pesticides into the air
Van den Berg; Kubiak, R.; Benjey, W.G.; Majewski, M.S.; Yates, S.R.; Reeves, G.L.; Smelt, J.H.; Van Der Linden, A. M. A.
1999-01-01
During and after the application of a pesticide in agriculture, a substantial fraction of the dosage may enter the atmosphere and be transported over varying distances downwind of the target. The rate and extent of the emission during application, predominantly as spray particle drift, depends primarily on the application method (equipment and technique), the formulation and environmental conditions, whereas the emission after application depends primarily on the properties of the pesticide, soils, crops and environmental conditions. The fraction of the dosage that misses the target area may be high in some cases and more experimental data on this loss term are needed for various application types and weather conditions. Such data are necessary to test spray drift models, and for further model development and verification as well. Following application, the emission of soil fumigants and soil incorporated pesticides into the air can be measured and computed with reasonable accuracy, but further model development is needed to improve the reliability of the model predictions. For soil surface applied pesticides reliable measurement methods are available, but there is not yet a reliable model. Further model development is required which must be verified by field experiments. Few data are available on pesticide volatilization from plants and more field experiments are also needed to study the fate processes on the plants. Once this information is available, a model needs to be developed to predict the volatilization of pesticides from plants, which, again, should be verified with field measurements. For regional emission estimates, a link between data on the temporal and spatial pesticide use and a geographical information system for crops and soils with their characteristics is needed.
ERIC Educational Resources Information Center
Wright, John C.; And Others
A conceptual model of how children process televised information was developed with the goal of identifying those parameters of the process that are both measurable and manipulable in research settings. The model presented accommodates the nature of information processing both by the child and by the presentation by the medium. Presentation is…
ERIC Educational Resources Information Center
Roduta Roberts, Mary; Alves, Cecilia B.; Chu, Man-Wai; Thompson, Margaret; Bahry, Louise M.; Gotzmann, Andrea
2014-01-01
The purpose of this study was to evaluate the adequacy of three cognitive models, one developed by content experts and two generated from student verbal reports for explaining examinee performance on a grade 3 diagnostic mathematics test. For this study, the items were developed to directly measure the attributes in the cognitive model. The…
Spacecraft Internal Acoustic Environment Modeling
NASA Technical Reports Server (NTRS)
Allen, Christopher; Chu, S. Reynold
2008-01-01
The objective of the project is to develop an acoustic modeling capability, based on commercial off-the-shelf software, to be used as a tool for oversight of the future manned Constellation vehicles to ensure compliance with acoustic requirements and thus provide a safe and habitable acoustic environment for the crews. The developed models are validated by building physical mockups and conducting acoustic measurements.
Model-data integration for developing the Cropland Carbon Monitoring System (CCMS)
NASA Astrophysics Data System (ADS)
Jones, C. D.; Bandaru, V.; Pnvr, K.; Jin, H.; Reddy, A.; Sahajpal, R.; Sedano, F.; Skakun, S.; Wagle, P.; Gowda, P. H.; Hurtt, G. C.; Izaurralde, R. C.
2017-12-01
The Cropland Carbon Monitoring System (CCMS) has been initiated to improve regional estimates of carbon fluxes from croplands in the conterminous United States through integration of terrestrial ecosystem modeling, use of remote-sensing products and publicly available datasets, and development of improved landscape and management databases. In order to develop these improved carbon flux estimates, experimental datasets are essential for evaluating the skill of estimates, characterizing the uncertainty of these estimates, characterizing parameter sensitivities, and calibrating specific modeling components. Experiments were sought that included flux-tower measurement of CO2 fluxes under production of major agronomic crops. To date, data have been collected from 17 experiments comprising 117 site-years from 12 unique locations. Calibration of terrestrial ecosystem model parameters using available crop productivity and net ecosystem exchange (NEE) measurements resulted in improvements in RMSE of NEE predictions of between 3.78% and 7.67%, while improvements in RMSE for yield ranged from -1.85% to 14.79%. Model sensitivities were dominated by parameters related to leaf area index (LAI) and spring growth, demonstrating considerable capacity for model improvement through development and integration of remote-sensing products. Subsequent analyses will assess the impact of such integrated approaches on the skill of cropland carbon flux estimates.