33 CFR 385.33 - Revisions to models and analytical tools.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Management District, and other non-Federal sponsors shall rely on the best available science including models..., and assessment of projects. The selection of models and analytical tools shall be done in consultation... system-wide simulation models and analytical tools used in the evaluation and assessment of projects, and...
Stakeholder perspectives on decision-analytic modeling frameworks to assess genetic services policy.
Guzauskas, Gregory F; Garrison, Louis P; Stock, Jacquie; Au, Sylvia; Doyle, Debra Lochner; Veenstra, David L
2013-01-01
Genetic services policymakers and insurers often make coverage decisions in the absence of complete evidence of clinical utility and under budget constraints. We evaluated genetic services stakeholder opinions on the potential usefulness of decision-analytic modeling to inform coverage decisions, and asked them to identify genetic tests for decision-analytic modeling studies. We presented an overview of decision-analytic modeling to members of the Western States Genetic Services Collaborative Reimbursement Work Group and state Medicaid representatives and conducted directed content analysis and an anonymous survey to gauge their attitudes toward decision-analytic modeling. Participants also identified and prioritized genetic services for prospective decision-analytic evaluation. Participants expressed dissatisfaction with current processes for evaluating insurance coverage of genetic services. Some participants expressed uncertainty about their comprehension of decision-analytic modeling techniques. All stakeholders reported openness to using decision-analytic modeling for genetic services assessments. Participants were most interested in application of decision-analytic concepts to multiple-disorder testing platforms, such as next-generation sequencing and chromosomal microarray. Decision-analytic modeling approaches may provide a useful decision tool to genetic services stakeholders and Medicaid decision-makers.
Source-term development for a contaminant plume for use by multimedia risk assessment models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whelan, Gene; McDonald, John P.; Taira, Randal Y.
1999-12-01
Multimedia modelers from the U.S. Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: DOE's Multimedia Environmental Pollutant Assessment System (MEPAS), EPA's MMSOILS, EPA's PRESTO, and DOE's RESidual RADioactivity (RESRAD). These models represent typical analytically, semi-analytically, and empirically based tools that are utilized in human risk and endangerment assessments at installations containing radioactive and/or hazardous contaminants. Although the benchmarking exercise traditionally emphasizes the application and comparison of these models, the establishment of a Conceptual Site Model (CSM) should be viewed with equal importance. This paper reviews an approach for developing a CSM of an existing, real-world, Sr-90 plume at DOE's Hanford installation in Richland, Washington, for use in a multimedia-based benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. In an unconventional move for analytically based modeling, the benchmarking exercise will begin with the plume as the source of contamination. The source and release mechanism are developed and described within the context of performing a preliminary risk assessment utilizing these analytical models. By beginning with the plume as the source term, this paper reviews a typical process and procedure an analyst would follow in developing a CSM for use in a preliminary assessment with this class of analytical tool.
ERIC Educational Resources Information Center
Zhang, Zhidong
2016-01-01
This study explored an alternative assessment procedure to examine learning trajectories of matrix multiplication. It took rule-based analytical and cognitive task analysis methods specifically to break down operation rules for a given matrix multiplication. Based on the analysis results, a hierarchical Bayesian network, an assessment model,…
Flight simulator fidelity assessment in a rotorcraft lateral translation maneuver
NASA Technical Reports Server (NTRS)
Hess, R. A.; Malsbury, T.; Atencio, A., Jr.
1992-01-01
A model-based methodology for assessing flight simulator fidelity in closed-loop fashion is exercised in analyzing a rotorcraft low-altitude maneuver for which flight test and simulation results were available. The addition of a handling qualities sensitivity function to a previously developed model-based assessment criteria allows an analytical comparison of both performance and handling qualities between simulation and flight test. Model predictions regarding the existence of simulator fidelity problems are corroborated by experiment. The modeling approach is used to assess analytically the effects of modifying simulator characteristics on simulator fidelity.
Decision-analytic modeling studies: An overview for clinicians using multiple myeloma as an example.
Rochau, U; Jahn, B; Qerimi, V; Burger, E A; Kurzthaler, C; Kluibenschaedl, M; Willenbacher, E; Gastl, G; Willenbacher, W; Siebert, U
2015-05-01
The purpose of this study was to provide a clinician-friendly overview of decision-analytic models evaluating different treatment strategies for multiple myeloma (MM). We performed a systematic literature search to identify studies evaluating MM treatment strategies using mathematical decision-analytic models. We included studies that were published as full-text articles in English, and assessed relevant clinical endpoints, and summarized methodological characteristics (e.g., modeling approaches, simulation techniques, health outcomes, perspectives). Eleven decision-analytic modeling studies met our inclusion criteria. Five different modeling approaches were adopted: decision-tree modeling, Markov state-transition modeling, discrete event simulation, partitioned-survival analysis and area-under-the-curve modeling. Health outcomes included survival, number-needed-to-treat, life expectancy, and quality-adjusted life years. Evaluated treatment strategies included novel agent-based combination therapies, stem cell transplantation and supportive measures. Overall, our review provides a comprehensive summary of modeling studies assessing treatment of MM and highlights decision-analytic modeling as an important tool for health policy decision making. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Model and Analytic Processes for Export License Assessments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, Sandra E.; Whitney, Paul D.; Weimar, Mark R.
2011-09-29
This paper represents the Department of Energy Office of Nonproliferation Research and Development (NA-22) Simulations, Algorithms and Modeling (SAM) Program's first effort to identify and frame analytical methods and tools to aid export control professionals in effectively predicting proliferation intent, a complex, multi-step, and multi-agency process. The report focuses on analytical modeling methodologies that, alone or combined, may improve the proliferation export control license approval process. It is a follow-up to an earlier paper describing information sources and environments related to international nuclear technology transfer. This report describes the decision criteria used to evaluate modeling techniques and tools to determine which approaches will be investigated during the final two years of the project. The report also details the motivation for why new modeling techniques and tools are needed. The analytical modeling methodologies will enable analysts to evaluate the information environment for relevance to detecting proliferation intent, with specific focus on assessing risks associated with transferring dual-use technologies. Dual-use technologies can be used in both weapons and commercial enterprises. A decision framework was developed to evaluate which of the different analytical modeling methodologies would be most appropriate, conditional on the uniqueness of the approach, data availability, laboratory capabilities, relevance to NA-22 and Office of Arms Control and Nonproliferation (NA-24) research needs, and the impact if successful. Modeling methodologies were divided into whether they could help micro-level assessments (e.g., help improve individual license assessments) or macro-level assessments. Macro-level assessment focuses on suppliers, technology, consumers, economies, and proliferation context. Macro-level assessment technologies scored higher in the area of uniqueness because less work has been done at the macro level. An approach to developing testable hypotheses for the macro-level assessment methodologies is provided. The outcome of this work suggests that we should develop a Bayes Net for micro-level analysis and continue to focus on Bayes Net, System Dynamics, and Economic Input/Output models for assessing macro-level problems. Simultaneously, we need to develop metrics for assessing intent in export control, including the risks and consequences associated with all aspects of export control.
Useful measures and models for analytical quality management in medical laboratories.
Westgard, James O
2016-02-01
The 2014 Milan Conference "Defining analytical performance goals 15 years after the Stockholm Conference" initiated a new discussion of issues concerning goals for precision, trueness or bias, total analytical error (TAE), and measurement uncertainty (MU). Goal-setting models are critical for analytical quality management, along with error models, quality-assessment models, quality-planning models, as well as comprehensive models for quality management systems. There are also critical underlying issues, such as an emphasis on MU to the possible exclusion of TAE and a corresponding preference for separate precision and bias goals instead of a combined total error goal. This opinion recommends careful consideration of the differences in the concepts of accuracy and traceability and the appropriateness of different measures, particularly TAE as a measure of accuracy and MU as a measure of traceability. TAE is essential to manage quality within a medical laboratory and MU and trueness are essential to achieve comparability of results across laboratories. With this perspective, laboratory scientists can better understand the many measures and models needed for analytical quality management and assess their usefulness for practical applications in medical laboratories.
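As a worked illustration of how two of these measures are commonly computed in this framework (the values below are invented): TAE combines bias and imprecision, while the sigma metric gauges process capability against an allowable total error, TEa.

```python
# Worked example with invented values: total analytical error (TAE) via
# the conventional estimate |bias| + 1.65*SD, and the sigma metric
# relative to an allowable total error (TEa).
bias, sd, tea = 1.0, 1.5, 6.0     # all in analyte units, e.g., mg/dL

tae = abs(bias) + 1.65 * sd       # quality within the laboratory
sigma = (tea - abs(bias)) / sd    # process capability on the sigma scale
print(f"TAE = {tae:.2f}, sigma metric = {sigma:.1f}")
```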
Over the last 10 years the EPA has invested in analytic elements as a computational method used in public domain software supporting capture zone delineation for source water assessments and wellhead protection. The current release is called WhAEM2000 (wellhead analytic element ...
NASA Technical Reports Server (NTRS)
Wilson, Timmy R.; Beech, Geoffrey; Johnston, Ian
2009-01-01
The NESC Assessment Team reviewed a computer simulation of the LC-39 External Tank (ET) GH2 Vent Umbilical system developed by United Space Alliance (USA) for the Space Shuttle Program (SSP) and designated KSC Analytical Tool ID 451 (KSC AT-451). The team verified that the vent arm kinematics were correctly modeled, but noted that there were relevant system sensitivities. Also, the structural stiffness used in the math model varied somewhat from the analytic calculations. Results of the NESC assessment were communicated to the model developers.
ERIC Educational Resources Information Center
Shindler, John; Taylor, Clint; Cadenas, Herminia; Jones, Albert
This study was a pilot effort to examine the efficacy of an analytic trait scale school climate assessment instrument and democratic change system in two urban high schools. Pilot study results indicate that the instrument shows promising soundness in that it exhibited high levels of validity and reliability. In addition, the analytic trait format…
Analytical model for screening potential CO2 repositories
Okwen, R.T.; Stewart, M.T.; Cunningham, J.A.
2011-01-01
Assessing potential repositories for geologic sequestration of carbon dioxide using numerical models can be complicated, costly, and time-consuming, especially when faced with the challenge of selecting a repository from a multitude of potential repositories. This paper presents a set of simple analytical equations (model), based on the work of previous researchers, that could be used to evaluate the suitability of candidate repositories for subsurface sequestration of carbon dioxide. We considered the injection of carbon dioxide at a constant rate into a confined saline aquifer via a fully perforated vertical injection well. The validity of the analytical model was assessed via comparison with the TOUGH2 numerical model. The metrics used in comparing the two models include (1) spatial variations in formation pressure and (2) vertically integrated brine saturation profile. The analytical model and TOUGH2 show excellent agreement in their results when similar input conditions and assumptions are applied in both. The analytical model neglects capillary pressure and the pressure dependence of fluid properties. However, simulations in TOUGH2 indicate that little error is introduced by these simplifications. Sensitivity studies indicate that the agreement between the analytical model and TOUGH2 depends strongly on (1) the residual brine saturation, (2) the difference in density between carbon dioxide and resident brine (buoyancy), and (3) the relationship between relative permeability and brine saturation. The results achieved suggest that the analytical model is valid when the relationship between relative permeability and brine saturation is linear or quasi-linear and when the irreducible saturation of brine is zero or very small. © 2011 Springer Science+Business Media B.V.
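For a flavor of what such screening equations look like, a minimal sketch follows: the classical line-source (Theis) solution for pressure buildup around a constant-rate injection well in a confined aquifer. This is a standard building block rather than the authors' exact model, and all parameter values are assumptions.

```python
# Minimal screening sketch (not the paper's equations): line-source (Theis)
# pressure buildup for constant-rate injection into a confined saline
# aquifer. All parameter values below are illustrative assumptions.
import numpy as np
from scipy.special import exp1  # exponential integral E1(u)

def pressure_buildup(r, t, Q=0.05, k=1e-13, b=30.0, phi=0.2,
                     mu=5e-4, ct=1e-9):
    """Pressure increase (Pa) at radius r (m) and time t (s).

    Q: injection rate (m^3/s), k: permeability (m^2), b: thickness (m),
    phi: porosity, mu: brine viscosity (Pa*s), ct: compressibility (1/Pa).
    """
    u = (r**2 * phi * mu * ct) / (4.0 * k * t)
    return Q * mu / (4.0 * np.pi * k * b) * exp1(u)

# Pressure rise 500 m from the well after one year of injection (~3.7 MPa)
print(pressure_buildup(r=500.0, t=3.15e7))
```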
Analytical solutions describing the time-dependent DNAPL source-zone mass and contaminant discharge rate are used as a flux-boundary condition in a semi-analytical contaminant transport model. These analytical solutions assume a power relationship between the flow-averaged sourc...
A Meta-Analytic Investigation of Fiedler's Contingency Model of Leadership Effectiveness.
ERIC Educational Resources Information Center
Strube, Michael J.; Garcia, Joseph E.
According to Fiedler's Contingency Model of Leadership Effectiveness, group performance is a function of the leader-situation interaction. A review of past validations has found several problems associated with the model. Meta-analytic techniques were applied to the Contingency Model in order to assess the validation evidence quantitatively. The…
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.
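A minimal sketch of the statistical core of such an approach, assuming a simple stress-versus-strength limit state and lognormal parameter uncertainty; the distributions and values are invented and stand in for the analytical failure models the PFA methodology would use.

```python
# Minimal sketch (not the PFA software): Monte Carlo propagation of
# parameter uncertainty through a simple stress-versus-strength limit
# state to estimate the failure probability of one failure mode.
# Distributions and values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
stress   = rng.lognormal(mean=np.log(300e6), sigma=0.10, size=n)  # Pa
strength = rng.lognormal(mean=np.log(450e6), sigma=0.08, size=n)  # Pa

p_fail = np.mean(stress > strength)   # fraction of samples that fail
print(f"estimated failure probability: {p_fail:.1e}")
```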
Assessing and reducing hydrogeologic model uncertainty
USDA-ARS?s Scientific Manuscript database
NRC is sponsoring research that couples model abstraction techniques with model uncertainty assessment methods. Insights and information from this program will be useful in decision making by NRC staff, licensees and stakeholders in their assessment of subsurface radionuclide transport. All analytic...
2014-09-01
of the BRDF for the Body and Panel. In order to provide a continuously updated baseline, the Photometry Model application is performed using a...brightness to its predicted brightness. The brightness predictions can be obtained using any analytical model chosen by the user. The inference for a...the analytical model as possible; and to mitigate the effect of bias that could be introduced by the choice of analytical model. It considers that a
Framework for assessing key variable dependencies in loose-abrasive grinding and polishing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, J.S.; Aikens, D.M.; Brown, N.J.
1995-12-01
This memo describes a framework for identifying all key variables that determine the figuring performance of loose-abrasive lapping and polishing machines. This framework is intended as a tool for prioritizing R&D issues, assessing the completeness of process models and experimental data, and for providing a mechanism to identify any assumptions in analytical models or experimental procedures. Future plans for preparing analytical models or performing experiments can refer to this framework in establishing the context of the work.
Specialized data analysis of SSME and advanced propulsion system vibration measurements
NASA Technical Reports Server (NTRS)
Coffin, Thomas; Swanson, Wayne L.; Jong, Yen-Yi
1993-01-01
The basic objectives of this contract were to perform detailed analysis and evaluation of dynamic data obtained during Space Shuttle Main Engine (SSME) test and flight operations, including analytical/statistical assessment of component dynamic performance, and to continue the development and implementation of analytical/statistical models to effectively define nominal component dynamic characteristics, detect anomalous behavior, and assess machinery operational conditions. This study was to provide timely assessment of engine component operational status, identify probable causes of malfunction, and define feasible engineering solutions. The work was performed under three broad tasks: (1) Analysis, Evaluation, and Documentation of SSME Dynamic Test Results; (2) Data Base and Analytical Model Development and Application; and (3) Development and Application of Vibration Signature Analysis Techniques.
ERIC Educational Resources Information Center
Beretvas, S. Natasha; Furlow, Carolyn F.
2006-01-01
Meta-analytic structural equation modeling (MA-SEM) is increasingly being used to assess model-fit for variables' interrelations synthesized across studies. MA-SEM researchers have analyzed synthesized correlation matrices using structural equation modeling (SEM) estimation that is designed for covariance matrices. This can produce incorrect…
Probabilistic assessment methodology for continuous-type petroleum accumulations
Crovelli, R.A.
2003-01-01
The analytic resource assessment method, called ACCESS (Analytic Cell-based Continuous Energy Spreadsheet System), was developed to calculate estimates of petroleum resources for the geologic assessment model, called FORSPAN, in continuous-type petroleum accumulations. The ACCESS method is based upon mathematical equations derived from probability theory in the form of a computer spreadsheet system. © 2003 Elsevier B.V. All rights reserved.
Thermal Modeling of Resistance Spot Welding and Prediction of Weld Microstructure
NASA Astrophysics Data System (ADS)
Sheikhi, M.; Valaee Tale, M.; Usefifar, GH. R.; Fattah-Alhosseini, Arash
2017-11-01
The microstructure of nuggets in resistance spot welding can be influenced by the many variables involved. This study aimed at examining such a relationship and, consequently, put forward an analytical model to predict the thermal history and microstructure of the nugget zone. Accordingly, a number of numerical simulations and experiments were conducted and the accuracy of the model was assessed. The results of this assessment revealed that the proposed analytical model could accurately predict the cooling rate in the nugget and heat-affected zones. Moreover, both analytical and numerical models confirmed that sheet thickness and electrode-sheet interface temperature were the most important factors influencing the cooling rate at temperatures lower than about T_l/2. Decomposition of austenite is one of the most important transformations in steels occurring over this temperature range. Therefore, an easy-to-use map was designed against these parameters to predict the weld microstructure.
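For orientation only (this is not the paper's model), the leading term of the one-dimensional conduction solution for a sheet whose surface is held at the electrode-sheet interface temperature T_e already shows why thickness dominates the cooling rate:

\[
\frac{T(t)-T_e}{T_0-T_e}\;\approx\;\frac{4}{\pi}\,\exp\!\left(-\frac{\pi^2 \alpha}{4L^2}\,t\right)
\quad\Longrightarrow\quad
\left|\frac{dT}{dt}\right|\;\propto\;\frac{\alpha}{L^2}\,\bigl(T-T_e\bigr),
\]

where L is the sheet half-thickness and α the thermal diffusivity, so thinner sheets cool faster in proportion to α/L².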
Factor analytic tools such as principal component analysis (PCA) and positive matrix factorization (PMF), suffer from rotational ambiguity in the results: different solutions (factors) provide equally good fits to the measured data. The PMF model imposes non-negativity of both...
Multilayered Word Structure Model for Assessing Spelling of Finnish Children in Shallow Orthography
ERIC Educational Resources Information Center
Kulju, Pirjo; Mäkinen, Marita
2017-01-01
This study explores Finnish children's word-level spelling by applying a linguistically based multilayered word structure model for assessing spelling performance. The model contributes to the analytical qualitative assessment approach in order to identify children's spelling performance for enhancing writing skills. The children (N = 105)…
Pavement Performance : Approaches Using Predictive Analytics
DOT National Transportation Integrated Search
2018-03-23
Acceptable pavement condition is paramount to road safety. Using predictive analytics techniques, this project attempted to develop models that provide an assessment of pavement condition based on an array of indicators that include pavement distress,...
The role of decision analytic modeling in the health economic assessment of spinal intervention.
Edwards, Natalie C; Skelly, Andrea C; Ziewacz, John E; Cahill, Kevin; McGirt, Matthew J
2014-10-15
Narrative review. To review the common tenets, strengths, and weaknesses of decision modeling for health economic assessment and to review the use of decision modeling in the spine literature to date. For the majority of spinal interventions, well-designed prospective, randomized, pragmatic cost-effectiveness studies that address the specific decision-in-need are lacking. Decision analytic modeling allows for the estimation of cost-effectiveness based on data available to date. Given the rising demands for proven value in spine care, the use of decision analytic modeling is rapidly increasing by clinicians and policy makers. This narrative review discusses the general components of decision analytic models, how decision analytic models are populated and the trade-offs entailed, makes recommendations for how users of spine intervention decision models might go about appraising the models, and presents an overview of published spine economic models. A proper, integrated, clinical, and economic critical appraisal is necessary in the evaluation of the strength of evidence provided by a modeling evaluation. As is the case with clinical research, all options for collecting health economic or value data are not without their limitations and flaws. There is substantial heterogeneity across the 20 spine intervention health economic modeling studies summarized with respect to study design, models used, reporting, and general quality. There is sparse evidence for populating spine intervention models. Results mostly showed that interventions were cost-effective based on a $100,000/quality-adjusted life-year threshold. Spine care providers, as partners with their health economic colleagues, have unique clinical expertise and perspectives that are critical to interpret the strengths and weaknesses of health economic models. Health economic models must be critically appraised for both clinical validity and economic quality before altering health care policy, payment strategies, or patient care decisions. Level of Evidence: 4.
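The cost-effectiveness judgments summarized above reduce to simple arithmetic against a willingness-to-pay threshold; a minimal sketch with invented numbers:

```python
# Minimal sketch of the cost-effectiveness arithmetic behind the
# $100,000/QALY judgments reported above. All numbers are invented.
cost_new, cost_std = 45_000.0, 30_000.0   # lifetime costs ($)
qaly_new, qaly_std = 6.2, 6.0             # quality-adjusted life-years

icer = (cost_new - cost_std) / (qaly_new - qaly_std)
print(f"ICER = ${icer:,.0f}/QALY ->",
      "cost-effective" if icer < 100_000 else "not cost-effective")
```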
Higher Education Quality Assessment Model: Towards Achieving Educational Quality Standard
ERIC Educational Resources Information Center
Noaman, Amin Y.; Ragab, Abdul Hamid M.; Madbouly, Ayman I.; Khedra, Ahmed M.; Fayoumi, Ayman G.
2017-01-01
This paper presents a developed higher education quality assessment model (HEQAM) that can be applied for enhancement of university services. This is because there is no universal unified quality standard model that can be used to assess the quality criteria of higher education institutes. The analytical hierarchy process is used to identify the…
Thermoelastic damping in microrings with circular cross-section
NASA Astrophysics Data System (ADS)
Li, Pu; Fang, Yuming; Zhang, Jianrun
2016-01-01
Predicting thermoelastic damping (TED) is crucial in the design of high Q micro-resonators. Microrings are often critical components in many micro-resonators. Some analytical models for TED in microrings have already been developed in the past. However, the previous works are limited to the microrings with rectangular cross-section. The temperature field in the rectangular cross-section is one-dimensional. This paper deals with TED in the microrings with circular cross-section. The temperature field in the circular cross-section is two-dimensional. This paper first presents a 2-D analytical model for TED in the microrings with circular cross-section. Only the two-dimensional heat conduction in the circular cross-section is considered. The heat conduction along the circumferential direction of the microring is neglected in the 2-D model. Then the 2-D model has been extended to cover the circumferential heat conduction, and a 3-D analytical model for TED has been developed. The analytical results from the present 2-D and 3-D models show good agreement with the numerical results of FEM model. The limitations of the present 2-D analytical model are assessed.
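For orientation, the classic Zener expression for thermoelastic damping in a thin rectangular beam, which the 2-D and 3-D ring models above generalize to circular cross-sections, is

\[
Q^{-1}=\frac{E\,\alpha^{2}\,T_{0}}{C_{p}}\;\frac{\omega\tau}{1+(\omega\tau)^{2}},
\qquad
\tau=\frac{b^{2}}{\pi^{2}\chi},
\]

where E is Young's modulus, α the thermal-expansion coefficient, T₀ the ambient temperature, C_p the heat capacity per unit volume, b the beam thickness, and χ the thermal diffusivity; damping peaks when ωτ ≈ 1.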
Crovelli, Robert A.; revised by Charpentier, Ronald R.
2012-01-01
The U.S. Geological Survey (USGS) periodically assesses petroleum resources of areas within the United States and the world. The purpose of this report is to explain the development of an analytic probabilistic method and spreadsheet software system called Analytic Cell-Based Continuous Energy Spreadsheet System (ACCESS). The ACCESS method is based upon mathematical equations derived from probability theory. The ACCESS spreadsheet can be used to calculate estimates of the undeveloped oil, gas, and NGL (natural gas liquids) resources in a continuous-type assessment unit. An assessment unit is a mappable volume of rock in a total petroleum system. In this report, the geologic assessment model is defined first, the analytic probabilistic method is described second, and the spreadsheet ACCESS is described third. In this revised version of Open-File Report 00-044, the text has been updated to reflect modifications that were made to the ACCESS program. Two versions of the program are added as appendixes.
ANALYTICAL TOOLS FOR GROUNDWATER POLLUTION ASSESSMENT
This paper deals with the development of analytical screening-exposure models (indices) and their potential application to regulate the use of hazardous chemicals and the design of groundwater buffer strips. The indices describe the leaching of solutes below the root zone (mass f...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turcotte, Melissa; Moore, Juston Shane
User Behaviour Analytics is the tracking, collecting and assessing of user data and activities. The goal is to detect misuse of user credentials by developing models for the normal behaviour of user credentials within a computer network and detect outliers with respect to their baseline.
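A minimal sketch of the baseline-and-outlier idea described above, using a simple z-score on daily logon counts; the data, feature choice, and alert threshold are illustrative assumptions, not the authors' models.

```python
# Minimal sketch of the baseline-and-outlier idea: build a per-credential
# baseline of daily activity and flag strong deviations. Data, feature,
# and threshold are illustrative assumptions, not the authors' models.
import numpy as np

rng = np.random.default_rng(3)
history = rng.poisson(lam=40, size=60)   # daily logon counts, past 60 days
today = 95                               # today's count for this credential

mu, sigma = history.mean(), history.std(ddof=1)
z = (today - mu) / sigma
print(f"z-score = {z:.1f} -> {'alert' if abs(z) > 4 else 'normal'}")
```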
International Space Station Model Correlation Analysis
NASA Technical Reports Server (NTRS)
Laible, Michael R.; Fitzpatrick, Kristin; Hodge, Jennifer; Grygier, Michael
2018-01-01
This paper summarizes the on-orbit structural dynamic data and the related modal analysis, model validation and correlation performed for the International Space Station (ISS) configuration ISS Stage ULF7, 2015 Dedicated Thruster Firing (DTF). The objective of this analysis is to validate and correlate the analytical models used to calculate the ISS internal dynamic loads and compare the 2015 DTF with previous tests. During the ISS configurations under consideration, on-orbit dynamic measurements were collected using the three main ISS instrumentation systems; Internal Wireless Instrumentation System (IWIS), External Wireless Instrumentation System (EWIS) and the Structural Dynamic Measurement System (SDMS). The measurements were recorded during several nominal on-orbit DTF tests on August 18, 2015. Experimental modal analyses were performed on the measured data to extract modal parameters including frequency, damping, and mode shape information. Correlation and comparisons between test and analytical frequencies and mode shapes were performed to assess the accuracy of the analytical models for the configurations under consideration. These mode shapes were also compared to earlier tests. Based on the frequency comparisons, the accuracy of the mathematical models is assessed and model refinement recommendations are given. In particular, results of the first fundamental mode will be discussed, nonlinear results will be shown, and accelerometer placement will be assessed.
Bilcke, Joke; Beutels, Philippe; Brisson, Marc; Jit, Mark
2011-01-01
Accounting for uncertainty is now a standard part of decision-analytic modeling and is recommended by many health technology agencies and published guidelines. However, the scope of such analyses is often limited, even though techniques have been developed for presenting the effects of methodological, structural, and parameter uncertainty on model results. To help bring these techniques into mainstream use, the authors present a step-by-step guide that offers an integrated approach to account for different kinds of uncertainty in the same model, along with a checklist for assessing the way in which uncertainty has been incorporated. The guide also addresses special situations such as when a source of uncertainty is difficult to parameterize, resources are limited for an ideal exploration of uncertainty, or evidence to inform the model is not available or not reliable. Methods for identifying the sources of uncertainty that influence results most are also described. Besides guiding analysts, the guide and checklist may be useful to decision makers who need to assess how well uncertainty has been accounted for in a decision-analytic model before using the results to make a decision.
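One of the techniques such guides cover, probabilistic sensitivity analysis, can be sketched in a few lines: parameter uncertainty is expressed as distributions, and the model is re-evaluated across random draws. The two-strategy comparison and all values below are invented for illustration.

```python
# Minimal sketch of probabilistic sensitivity analysis for a two-strategy
# comparison: parameter uncertainty as beta (probabilities) and gamma
# (costs) distributions, re-evaluated across random draws. All values are
# invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
p_a = rng.beta(80, 20, n)                        # P(success), strategy A
p_b = rng.beta(70, 30, n)                        # P(success), strategy B
cost_a = rng.gamma(shape=100, scale=50, size=n)  # mean ~ 5000
cost_b = rng.gamma(shape=100, scale=30, size=n)  # mean ~ 3000
qaly_per_success = 0.2                           # assumed QALY gain
wtp = 50_000                                     # willingness to pay/QALY

inb = (p_a - p_b) * qaly_per_success * wtp - (cost_a - cost_b)
print("P(strategy A cost-effective):", np.mean(inb > 0))
```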
Wright, Aidan G C; Hallquist, Michael N
2014-01-01
Studying personality and its pathology as it changes, develops, or remains stable over time offers exciting insight into the nature of individual differences. Researchers interested in examining personal characteristics over time have a number of time-honored analytic approaches at their disposal. In recent years there have also been considerable advances in person-oriented analytic approaches, particularly longitudinal mixture models. In this methodological primer we focus on mixture modeling approaches to the study of normative and individual change in the form of growth mixture models and ipsative change in the form of latent transition analysis. We describe the conceptual underpinnings of each of these models, outline approaches for their implementation, and provide accessible examples for researchers studying personality and its assessment.
NASA Astrophysics Data System (ADS)
Li, Yingchun; Wu, Wei; Li, Bo
2018-05-01
Jointed rock masses during underground excavation are commonly located under the constant normal stiffness (CNS) condition. This paper presents an analytical formulation to predict the shear behaviour of rough rock joints under the CNS condition. The dilatancy and deterioration of two-order asperities are quantified by considering the variation of normal stress. We separately consider the dilation angles of waviness and unevenness, which decrease to zero as the normal stress approaches the transitional stress. The sinusoidal function naturally yields the decay of dilation angle as a function of relative normal stress. We assume that the magnitude of transitional stress is proportionate to the square root of asperity geometric area. The comparison between the analytical prediction and experimental data shows the reliability of the analytical model. All the parameters involved in the analytical model possess explicit physical meanings and are measurable from laboratory tests. The proposed model is potentially practicable for assessing the stability of underground structures at various field scales.
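One natural way to write a sinusoidal decay of the dilation angle with relative normal stress, consistent with the description above though not necessarily the authors' exact form, is

\[
\psi(\sigma_{n})=\psi_{0}\cos\!\left(\frac{\pi}{2}\,\frac{\sigma_{n}}{\sigma_{T}}\right),
\qquad
\sigma_{T}=\kappa\sqrt{A_{g}},
\]

where ψ₀ is the dilation angle at zero normal stress, σ_T the transitional stress, A_g the asperity geometric area, and κ a proportionality constant (a hypothetical symbol introduced here); ψ decays smoothly to zero as σ_n approaches σ_T.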
NASA Astrophysics Data System (ADS)
Laminack, William; Gole, James
2015-12-01
A unique MEMS/NEMS approach is presented for the modeling of a detection platform for mixed gas interactions. Mixed gas analytes interact with nanostructured decorating metal oxide island sites supported on a microporous silicon substrate. The Inverse Hard/Soft acid/base (IHSAB) concept is used to assess a diversity of conductometric responses for mixed gas interactions as a function of these nanostructured metal oxides. The analyte conductometric responses are well represented using a combination diffusion/absorption-based model for multi-gas interactions, where a newly developed response absorption isotherm, based on the Fermi distribution function, is applied. A further coupling of this model with the IHSAB concept describes the considerations in modeling multi-gas mixed analyte-interface and analyte-analyte interactions. Taking into account the molecular electronic interaction of both the analytes with each other and an extrinsic semiconductor interface, we demonstrate how the presence of one gas can enhance or diminish the reversible interaction of a second gas with the extrinsic semiconductor interface. These concepts demonstrate important considerations in array-based formats for multi-gas sensing and its applications.
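Schematically, a Fermi-distribution-based response isotherm has the sigmoidal form below; the symbols C_{1/2} (half-response concentration) and w (width parameter) are introduced here for illustration and are not taken from the paper:

\[
\theta(C)=\frac{1}{1+\exp\!\left[-\left(C-C_{1/2}\right)/w\right]},
\qquad
\Delta\sigma\;\propto\;\theta(C),
\]

where θ is the fractional response at analyte concentration C and Δσ the conductance change, saturating as interaction sites fill.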
Bayes Nets in Educational Assessment: Where Do the Numbers Come from? CSE Technical Report.
ERIC Educational Resources Information Center
Mislevy, Robert J.; Almond, Russell G.; Yan, Duanli; Steinberg, Linda S.
Educational assessments that exploit advances in technology and cognitive psychology can produce observations and pose student models that outstrip familiar test-theoretic models and analytic methods. Bayesian inference networks (BINs), which include familiar models and techniques as special cases, can be used to manage belief about students'…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thakur, Gautam; Olama, Mohammed M; McNair, Wade
Data-driven assessments and adaptive feedback are becoming a cornerstone research area in educational data analytics and involve developing methods for exploring the unique types of data that come from the educational context. For example, predicting college student performance is crucial for both the students and educational institutions. It can support timely intervention to prevent students from failing a course, increasing efficacy of advising functions, and improving course completion rate. In this paper, we present our efforts in using data analytics that enable educationists to design novel data-driven assessment and feedback mechanisms. In order to achieve this objective, we investigate temporal stability of students' grades and perform predictive analytics on academic data collected from 2009 through 2013 in one of the most commonly used learning management systems, called Moodle. First, we have identified the data features useful for assessments and predicting student outcomes, such as students' scores in homework assignments, quizzes, and exams, in addition to their activities in discussion forums and their total Grade Point Average (GPA) at the same term they enrolled in the course. Second, time series models in both frequency and time domains are applied to characterize the progression as well as overall projections of the grades. In particular, the models analyzed the stability as well as fluctuation of grades among students during the collegiate years (from freshman to senior) and across disciplines. Third, Logistic Regression and Neural Network predictive models are used to identify students as early as possible who are in danger of failing the course they are currently enrolled in. These models compute the likelihood of any given student failing (or passing) the current course. The time series analysis indicates that assessments and continuous feedback are more critical for freshmen and sophomores (even with easy courses) than for seniors, and those assessments may be provided using the predictive models. Numerical results are presented to evaluate and compare the performance of the developed models and their predictive accuracy. Our results show that there are strong ties associated with the first few weeks of coursework, and they have an impact on the design and distribution of individual modules.
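A minimal sketch of the micro-level prediction step, assuming a logistic-regression classifier over early-term features of the kind the study lists (homework and quiz scores, forum activity, GPA); the feature construction and data below are simulated stand-ins, not the study's Moodle dataset.

```python
# Minimal sketch of the micro-level prediction step: a logistic-regression
# classifier over early-term features. The features mirror those named in
# the abstract, but the data here are simulated stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 2_000
X = np.column_stack([
    rng.uniform(0, 100, n),  # homework average, weeks 1-4
    rng.uniform(0, 100, n),  # quiz average, weeks 1-4
    rng.poisson(5, n),       # forum posts
    rng.uniform(0, 4, n),    # GPA at enrollment
])
# Simulated ground truth: better early grades/GPA -> lower failure risk
logit = -4 + 0.03*X[:, 0] + 0.02*X[:, 1] + 0.05*X[:, 2] + 0.8*X[:, 3]
y = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(logit))  # 1 = fails course

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```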
Source-term development for a contaminant plume for use by multimedia risk assessment models
NASA Astrophysics Data System (ADS)
Whelan, Gene; McDonald, John P.; Taira, Randal Y.; Gnanapragasam, Emmanuel K.; Yu, Charley; Lew, Christine S.; Mills, William B.
2000-02-01
Multimedia modelers from the US Environmental Protection Agency (EPA) and US Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: MEPAS, MMSOILS, PRESTO, and RESRAD. These models represent typical analytically based tools that are used in human-risk and endangerment assessments at installations containing radioactive and hazardous contaminants. The objective is to demonstrate an approach for developing an adequate source term by simplifying an existing, real-world, 90Sr plume at DOE's Hanford installation in Richland, WA, for use in a multimedia benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. Source characteristics and a release mechanism are developed and described; also described is a typical process and procedure that an analyst would follow in developing a source term for using this class of analytical tool in a preliminary assessment.
Review and assessment of the HOST turbine heat transfer program
NASA Technical Reports Server (NTRS)
Gladden, Herbert J.
1988-01-01
The objectives of the HOST Turbine Heat Transfer subproject were to obtain a better understanding of the physics of the aerothermodynamic phenomena occurring in high-performance gas turbine engines and to assess and improve the analytical methods used to predict the fluid dynamics and heat transfer phenomena. At the time the HOST project was initiated, an across-the-board improvement in turbine design technology was needed. Therefore, a building-block approach was utilized, with research ranging from the study of fundamental phenomena and analytical modeling to experiments in simulated real-engine environments. Experimental research accounted for 75 percent of the project, and analytical efforts accounted for approximately 25 percent. Extensive experimental datasets were created depicting the three-dimensional flow field, high free-stream turbulence, boundary-layer transition, blade tip region heat transfer, film cooling effects in a simulated engine environment, rough-wall cooling enhancement in a rotating passage, and rotor-stator interaction effects. In addition, analytical modeling of these phenomena was initiated using boundary-layer assumptions as well as Navier-Stokes solutions.
An integrated model of clinical reasoning: dual-process theory of cognition and metacognition.
Marcum, James A
2012-10-01
Clinical reasoning is an important component for providing quality medical care. The aim of the present paper is to develop a model of clinical reasoning that integrates both the non-analytic and analytic processes of cognition, along with metacognition. The dual-process theory of cognition (system 1 non-analytic and system 2 analytic processes) and the metacognition theory are used to develop an integrated model of clinical reasoning. In the proposed model, clinical reasoning begins with system 1 processes in which the clinician assesses a patient's presenting symptoms, as well as other clinical evidence, to arrive at a differential diagnosis. Additional clinical evidence, if necessary, is acquired and analysed utilizing system 2 processes to assess the differential diagnosis, until a clinical decision is made diagnosing the patient's illness and then how best to proceed therapeutically. Importantly, the outcome of these processes feeds back, in terms of metacognition's monitoring function, either to reinforce or to alter cognitive processes, which, in turn, enhances synergistically the clinician's ability to reason quickly and accurately in future consultations. The proposed integrated model has distinct advantages over other models proposed in the literature for explicating clinical reasoning. Moreover, it has important implications for addressing the paradoxical relationship between experience and expertise, as well as for designing a curriculum to teach clinical reasoning skills. © 2012 Blackwell Publishing Ltd.
Austin, Peter C.; van Klaveren, David; Vergouwe, Yvonne; Nieboer, Daan; Lee, Douglas S.; Steyerberg, Ewout W.
2017-01-01
Objective: Validation of clinical prediction models traditionally refers to the assessment of model performance in new patients. We studied different approaches to geographic and temporal validation in the setting of multicenter data from two time periods. Study Design and Setting: We illustrated different analytic methods for validation using a sample of 14,857 patients hospitalized with heart failure at 90 hospitals in two distinct time periods. Bootstrap resampling was used to assess internal validity. Meta-analytic methods were used to assess geographic transportability. Each hospital was used once as a validation sample, with the remaining hospitals used for model derivation. Hospital-specific estimates of discrimination (c-statistic) and calibration (calibration intercepts and slopes) were pooled using random-effects meta-analysis methods. I² statistics and prediction interval width quantified geographic transportability. Temporal transportability was assessed using patients from the earlier period for model derivation and patients from the later period for model validation. Results: Estimates of reproducibility, pooled hospital-specific performance, and temporal transportability were on average very similar, with c-statistics of 0.75. Between-hospital variation was moderate according to I² statistics and prediction intervals for c-statistics. Conclusion: This study illustrates how performance of prediction models can be assessed in settings with multicenter data at different time periods. PMID: 27262237
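A minimal sketch of the pooling step, assuming DerSimonian-Laird random-effects weights over hospital-specific c-statistics with an I² heterogeneity summary; the inputs are simulated, not the study's data.

```python
# Minimal sketch of the pooling step: DerSimonian-Laird random-effects
# meta-analysis of hospital-specific c-statistics with an I^2 summary.
# Inputs are simulated, not the study's data.
import numpy as np

rng = np.random.default_rng(1)
k = 30                                  # hospitals
c = rng.normal(0.75, 0.02, k)           # hospital-specific c-statistics
v = rng.uniform(0.0004, 0.0016, k)      # within-hospital variances

w = 1 / v                               # fixed-effect weights
c_fe = np.sum(w * c) / np.sum(w)
Q = np.sum(w * (c - c_fe) ** 2)         # Cochran's Q
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1 / (v + tau2)                   # random-effects weights
c_pooled = np.sum(w_re * c) / np.sum(w_re)
i2 = max(0.0, (Q - (k - 1)) / Q) * 100  # heterogeneity, percent

print(f"pooled c = {c_pooled:.3f}, tau^2 = {tau2:.5f}, I^2 = {i2:.0f}%")
```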
Systematic Review of Model-Based Economic Evaluations of Treatments for Alzheimer's Disease.
Hernandez, Luis; Ozen, Asli; DosSantos, Rodrigo; Getsios, Denis
2016-07-01
Numerous economic evaluations using decision-analytic models have assessed the cost effectiveness of treatments for Alzheimer's disease (AD) in the last two decades. It is important to understand the methods used in the existing models of AD and how they could impact results, as they could inform new model-based economic evaluations of treatments for AD. The aim of this systematic review was to provide a detailed description of the relevant aspects and components of existing decision-analytic models of AD, identifying areas for improvement and future development, and to conduct a quality assessment of the included studies. We performed a systematic and comprehensive review of cost-effectiveness studies of pharmacological treatments for AD published in the last decade (January 2005 to February 2015) that used decision-analytic models, also including studies considering patients with mild cognitive impairment (MCI). The background information of the included studies and specific information on the decision-analytic models, including their approach and components, assumptions, data sources, analyses, and results, were obtained from each study. A description of how the modeling approaches and assumptions differ across studies, identifying areas for improvement and future development, is provided. At the end, we present our own view of the potential future directions of decision-analytic models of AD and the challenges they might face. The included studies present a variety of different approaches, assumptions, and scopes of decision-analytic models used in the economic evaluation of pharmacological treatments of AD. The major areas for improvement in future models of AD are to include domains of cognition, function, and behavior, rather than cognition alone; include a detailed description of how data used to model the natural course of disease progression were derived; state and justify the economic model selected and its structural assumptions and limitations; provide a detailed (rather than high-level) description of the cost components included in the model; and report on the face-, internal-, and cross-validity of the model to strengthen the credibility and confidence in model results. The quality scores of most studies were rated as fair to good (average 87.5, range 69.5-100, on a scale of 0-100). Despite the advancements in decision-analytic models of AD, there remain several areas of improvement that are necessary to more appropriately and realistically capture the broad nature of AD and the potential benefits of treatments in future models of AD.
Eric J. Gustafson; L. Jay Roberts; Larry A. Leefers
2006-01-01
Forest management planners require analytical tools to assess the effects of alternative strategies on the sometimes disparate benefits from forests such as timber production and wildlife habitat. We assessed the spatial patterns of alternative management strategies by linking two models that were developed for different purposes. We used a linear programming model (...
Turbofan forced mixer lobe flow modeling. 1: Experimental and analytical assessment
NASA Technical Reports Server (NTRS)
Barber, T.; Paterson, R. W.; Skebe, S. A.
1988-01-01
A joint analytical and experimental investigation of three-dimensional flowfield development within the lobe region of turbofan forced mixer nozzles is described. The objective was to develop a method for predicting the lobe exit flowfield. In the analytical approach, a linearized inviscid aerodynamical theory was used for representing the axial and secondary flows within the three-dimensional convoluted mixer lobes and three-dimensional boundary layer analysis was applied thereafter to account for viscous effects. The experimental phase of the program employed three planar mixer lobe models having different waveform shapes and lobe heights for which detailed measurements were made of the three-dimensional velocity field and total pressure field at the lobe exit plane. Velocity data was obtained using Laser Doppler Velocimetry (LDV) and total pressure probing and hot wire anemometry were employed to define exit plane total pressure and boundary layer development. Comparison of data and analysis was performed to assess analytical model prediction accuracy. As a result of this study a planar mixed geometry analysis was developed. A principal conclusion is that the global mixer lobe flowfield is inviscid and can be predicted from an inviscid analysis and Kutta condition.
Shin, Taeksoo; Kim, Chun-Bae; Ahn, Yang-Heui; Kim, Hyo-Youl; Cha, Byung Ho; Uh, Young; Lee, Joo-Heon; Hyun, Sook-Jung; Lee, Dong-Han; Go, Un-Yeong
2009-01-29
The purpose of this paper is to propose new evaluation criteria and an analytic hierarchy process (AHP) model to assess the expanded national immunization programs (ENIPs) and to evaluate two alternative health care policies. One of the alternative policies is that private clinics and hospitals would offer free vaccination services to children and the other of them is that public health centers would offer these free vaccination services. Our model to evaluate the ENIPs was developed using brainstorming, Delphi techniques, and the AHP model. We first used the brainstorming and Delphi techniques, as well as literature reviews, to determine 25 criteria with which to evaluate the national immunization policy; we then proposed a hierarchical structure of the AHP model to assess ENIPs. By applying the proposed AHP model to the assessment of ENIPs for Korean immunization policies, we show that free vaccination services should be provided by private clinics and hospitals rather than public health centers.
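The core AHP computation, deriving criterion weights from a Saaty-scale pairwise-comparison matrix via its principal eigenvector and checking consistency, can be sketched as follows; the 3x3 matrix is an invented example, not the study's 25-criterion hierarchy.

```python
# Minimal sketch of the core AHP computation: criterion weights from a
# Saaty-scale pairwise-comparison matrix via the principal eigenvector,
# plus a consistency check. The 3x3 matrix is an invented example.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])   # pairwise comparisons of three criteria

eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, i].real)
w /= w.sum()                      # normalized criterion weights

n = A.shape[0]
ci = (eigvals.real[i] - n) / (n - 1)  # consistency index
cr = ci / 0.58                        # random index for n = 3 is 0.58
print("weights:", np.round(w, 3), " consistency ratio:", round(cr, 3))
```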
Yang, Yong; Liu, Yongzhong; Yu, Bo; Ding, Tian
2016-06-01
Volatile contaminants may migrate with carbon dioxide (CO2) injection or leakage in subsurface formations, which leads to risks for CO2 storage and the ecological environment. This study aims to develop an analytical model that can predict the contaminant migration process induced by CO2 storage. The analytical model with two moving boundaries is obtained through simplification of the fully coupled model for the CO2-aqueous phase-stagnant phase displacement system. The analytical solutions are confirmed and assessed through comparison with numerical simulations of the fully coupled model. Then, some key variables in the analytical solutions, including the critical time, the locations of the dual moving boundaries, and the advance velocity, are discussed to present the characteristics of contaminant migration in the multi-phase displacement system. The results show that these key variables are determined by four dimensionless numbers, Pe, R_D, Sh, and R_F, which represent the effects of convection, dispersion, interphase mass transfer, and the retention factor of the contaminant, respectively. The proposed analytical solutions can be used for tracking the migration of the injected CO2 and the contaminants in subsurface formations, and also provide an analytical tool for other solute transport in multi-phase displacement systems. Copyright © 2016 Elsevier B.V. All rights reserved.
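For reference, the first two of these dimensionless groups have the standard forms below (with a characteristic length L, velocity v, dispersion coefficient D, and interphase mass-transfer coefficient k_c); R_D and R_F are model-specific quantities whose exact definitions are given in the paper:

\[
\mathrm{Pe}=\frac{vL}{D},
\qquad
\mathrm{Sh}=\frac{k_{c}\,L}{D}.
\]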
Impact and Penetration Simulations for Composite Wing-like Structures
NASA Technical Reports Server (NTRS)
Knight, Norman F.
1998-01-01
The goal of this research project was to develop methodologies for the analysis of wing-like structures subjected to impact loadings. Low-speed impact causing either no damage or only minimal damage and high-speed impact causing severe laminate damage and possible penetration of the structure were to be considered during this research effort. To address this goal, an assessment of current analytical tools for impact analysis was performed. The analytical tools were assessed for impact and penetration simulation with regard to accuracy, modeling capability, and damage modeling, as well as robustness, efficiency, and usability in a wing design environment. Following the qualitative assessment, selected quantitative evaluations were to be performed using the leading simulation tools. Based on this assessment, future research thrusts for impact and penetration simulation of composite wing-like structures were identified.
Johnson, Christie
2016-01-01
This poster presentation describes a content modeling strategy using the SNOMED CT Observable Model to represent large amounts of detailed clinical data in a consistent and computable manner that can support multiple use cases. Lightweight Expression of Granular Objects (LEGOs) represent question/answer pairs on clinical data collection forms, where a question is modeled by a (usually) post-coordinated SNOMED CT expression. LEGOs transform electronic patient data into a normalized consumable, which means that the expressions can be treated as extensions of the SNOMED CT hierarchies for the purpose of performing subsumption queries and other analytics. Utilizing the LEGO approach for modeling clinical data obtained from a nursing admission assessment provides a foundation for data exchange across disparate information systems and software applications. Clinical data exchange of computable LEGO patient information enables the development of more refined data analytics, data storage, and clinical decision support.
Farah, J; Bonfrate, A; De Marzi, L; De Oliveira, A; Delacroix, S; Martinetti, F; Trompier, F; Clairand, I
2015-05-01
This study focuses on the configuration and validation of an analytical model predicting leakage neutron doses in proton therapy. Using Monte Carlo (MC) calculations, a facility-specific analytical model was built to reproduce out-of-field neutron doses while separately accounting for the contribution of intra-nuclear cascade, evaporation, epithermal and thermal neutrons. This model was first trained to reproduce in-water neutron absorbed doses and in-air neutron ambient dose equivalents, H*(10), calculated using MCNPX. Its capacity in predicting out-of-field doses at any position not involved in the training phase was also checked. The model was next expanded to enable a full 3D mapping of H*(10) inside the treatment room, tested in a clinically relevant configuration and finally consolidated with experimental measurements. Following the literature approach, the work first proved that it is possible to build a facility-specific analytical model that efficiently reproduces in-water neutron doses and in-air H*(10) values with a maximum difference less than 25%. In addition, the analytical model succeeded in predicting out-of-field neutron doses in the lateral and vertical direction. Testing the analytical model in clinical configurations proved the need to separate the contribution of internal and external neutrons. The impact of modulation width on stray neutrons was found to be easily adjustable while beam collimation remains a challenging issue. Finally, the model performance agreed with experimental measurements with satisfactory results considering measurement and simulation uncertainties. Analytical models represent a promising solution that substitutes for time-consuming MC calculations when assessing doses to healthy organs. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Crovelli, R.A.
1988-01-01
The geologic appraisal model that is selected for a petroleum resource assessment depends upon purpose of the assessment, basic geologic assumptions of the area, type of available data, time available before deadlines, available human and financial resources, available computer facilities, and, most importantly, the available quantitative methodology with corresponding computer software and any new quantitative methodology that would have to be developed. Therefore, different resource assessment projects usually require different geologic models. Also, more than one geologic model might be needed in a single project for assessing different regions of the study or for cross-checking resource estimates of the area. Some geologic analyses used in the past for petroleum resource appraisal involved play analysis. The corresponding quantitative methodologies of these analyses usually consisted of Monte Carlo simulation techniques. A probabilistic system of petroleum resource appraisal for play analysis has been designed to meet the following requirements: (1) includes a variety of geologic models, (2) uses an analytic methodology instead of Monte Carlo simulation, (3) possesses the capacity to aggregate estimates from many areas that have been assessed by different geologic models, and (4) runs quickly on a microcomputer. Geologic models consist of four basic types: reservoir engineering, volumetric yield, field size, and direct assessment. Several case histories and present studies by the U.S. Geological Survey are discussed. © 1988 International Association for Mathematical Geology.
Nonlinear Acoustical Assessment of Precipitate Nucleation
NASA Technical Reports Server (NTRS)
Cantrell, John H.; Yost, William T.
2004-01-01
The purpose of the present work is to show that measurements of the acoustic nonlinearity parameter in heat treatable alloys as a function of heat treatment time can provide quantitative information about the kinetics of precipitate nucleation and growth in such alloys. Generally, information on the kinetics of phase transformations is obtained from time-sequenced electron microscopical examination and differential scanning microcalorimetry. The present nonlinear acoustical assessment of precipitation kinetics is based on the development of a multiparameter analytical model of the effects on the nonlinearity parameter of precipitate nucleation and growth in the alloy system. A nonlinear curve fit of the model equation to the experimental data is then used to extract the kinetic parameters related to the nucleation and growth of the targeted precipitate. The analytical model and curve fit is applied to the assessment of S' precipitation in aluminum alloy 2024 during artificial aging from the T4 to the T6 temper.
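The curve-fitting step described above can be illustrated with a short script. The sketch below fits a JMAK-style nucleation-and-growth expression to synthetic beta(t) data with scipy; the model form, parameter names, and values are illustrative assumptions, not the authors' actual multiparameter model for S' precipitation in 2024 aluminum.

```python
# A minimal sketch of the nonlinear curve-fitting step, under assumed
# JMAK-style kinetics; not the paper's actual model or data.
import numpy as np
from scipy.optimize import curve_fit

def beta_model(t, beta0, dbeta, k, n):
    """Acoustic nonlinearity parameter vs. aging time t (hours), JMAK form."""
    return beta0 + dbeta * (1.0 - np.exp(-(k * t) ** n))

# Synthetic "measurements" standing in for experimental beta(t) data.
t_data = np.linspace(0.5, 48.0, 20)
rng = np.random.default_rng(0)
y_data = beta_model(t_data, 5.0, 3.0, 0.15, 1.8) + rng.normal(0, 0.05, t_data.size)

popt, pcov = curve_fit(beta_model, t_data, y_data, p0=[5.0, 2.0, 0.1, 1.0])
print("fitted [beta0, dbeta, k, n]:", popt)
print("1-sigma uncertainties:", np.sqrt(np.diag(pcov)))
```

The covariance matrix returned by the fit gives a first-order uncertainty on each extracted kinetic parameter, which is the quantity of interest when comparing aging treatments.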
Diagnosing Alzheimer's disease: a systematic review of economic evaluations.
Handels, Ron L H; Wolfs, Claire A G; Aalten, Pauline; Joore, Manuela A; Verhey, Frans R J; Severens, Johan L
2014-03-01
The objective of this study is to systematically review the literature on economic evaluations of interventions for the early diagnosis of Alzheimer's disease (AD) and related disorders and to describe their general and methodological characteristics. We focused on the diagnostic aspects of the decision models to assess the applicability of existing decision models for the evaluation of the recently revised diagnostic research criteria for AD. PubMed and the National Institute for Health Research Economic Evaluation database were searched for English-language publications related to economic evaluations of diagnostic technologies. Trial-based economic evaluations were assessed using the Consensus on Health Economic Criteria list. Modeling studies were assessed using the framework for quality assessment of decision-analytic models. The search retrieved 2109 items, from which eight decision-analytic modeling studies and one trial-based economic evaluation met all eligibility criteria. Diversity among the study objectives and characteristics was considerable and, despite generally sound methodological quality, several flaws were identified. Recommendations focused on diagnostic aspects and the applicability of existing models for the evaluation of recently revised diagnostic research criteria for AD. Copyright © 2014 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
A Fuzzy-Based Decision Support Model for Selecting the Best Dialyser Flux in Haemodialysis.
Oztürk, Necla; Tozan, Hakan
2015-01-01
Decision making is an important procedure for every organization. The procedure is particularly challenging for complicated multi-criteria problems. Selection of dialyser flux is one of the decisions routinely made for haemodialysis treatment provided for chronic kidney failure patients. This study provides a decision support model for selecting the best dialyser flux between high-flux and low-flux dialyser alternatives. The preferences of decision makers were collected via a questionnaire. A total of 45 questionnaires filled in by dialysis physicians and nephrologists were assessed. A hybrid fuzzy-based decision support software that enables the use of the Analytic Hierarchy Process (AHP), Fuzzy Analytic Hierarchy Process (FAHP), Analytic Network Process (ANP), and Fuzzy Analytic Network Process (FANP) was used to evaluate the flux selection model. In conclusion, the results showed that a high-flux dialyser is the best option for haemodialysis treatment.
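For readers unfamiliar with the AHP machinery used in such flux-selection models, the sketch below derives priority weights from a single pairwise comparison matrix via the principal eigenvector, with a consistency check; the matrix values are hypothetical and do not come from the study's questionnaires.

```python
# A minimal AHP sketch, assuming a hypothetical 3-criterion pairwise
# comparison matrix on the 1-9 Saaty scale.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])  # reciprocal pairwise comparisons

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue index
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                # normalized priority weights

lam_max = eigvals.real[k]
n = A.shape[0]
CI = (lam_max - n) / (n - 1)                # consistency index
CR = CI / 0.58                              # random index RI = 0.58 for n = 3
print("weights:", w, "consistency ratio:", CR)
```

A consistency ratio below about 0.1 is conventionally taken to mean the pairwise judgments are acceptably coherent; the fuzzy variants (FAHP, FANP) replace the crisp matrix entries with fuzzy numbers but follow the same overall logic.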
Devriendt, Floris; Moldovan, Darie; Verbeke, Wouter
2018-03-01
Prescriptive analytics extends predictive analytics by estimating an outcome as a function of control variables, making it possible to establish the level of control variables required to realize a desired outcome. Uplift modeling is at the heart of prescriptive analytics and aims at estimating the net difference in an outcome resulting from a specific action or treatment that is applied. In this article, a structured and detailed literature survey on uplift modeling is provided by identifying and contrasting various groups of approaches. In addition, evaluation metrics for assessing the performance of uplift models are reviewed. An experimental evaluation on four real-world data sets provides further insight into their use. Uplift random forests are found to be consistently among the best performing techniques in terms of the Qini and Gini measures, although considerable variability in performance across the various data sets of the experiments is observed. In addition, uplift models are frequently observed to be unstable and display a strong variability in terms of performance across different folds in the cross-validation experimental setup. This potentially threatens their actual use for business applications. Moreover, it is found that the available evaluation metrics do not provide an intuitively understandable indication of the actual use and performance of a model. Specifically, existing evaluation metrics do not facilitate a comparison of uplift models and predictive models and evaluate performance either at an arbitrary cutoff or over the full spectrum of potential cutoffs. In conclusion, we highlight the instability of uplift models and the need for an application-oriented approach to assess uplift models as prime topics for further research.
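The Qini measure mentioned above can be computed from ranked uplift scores. The sketch below is one generic formulation (cumulative incremental responders, with the control arm rescaled to the treated arm), applied to simulated data; it is an assumption-laden illustration, not the exact implementation used in the survey.

```python
# A minimal Qini-curve sketch, assuming arrays of predicted uplift scores,
# treatment flags, and binary outcomes; a generic formulation only.
import numpy as np

def qini_curve(uplift, treated, outcome):
    order = np.argsort(-uplift)                  # descending predicted uplift
    t, y = treated[order], outcome[order]
    n_t = np.cumsum(t)                           # treated units seen so far
    n_c = np.cumsum(1 - t)                       # control units seen so far
    r_t = np.cumsum(y * t)                       # treated responders
    r_c = np.cumsum(y * (1 - t))                 # control responders
    # Incremental responders: treated minus controls rescaled to treated size.
    return r_t - r_c * np.where(n_c > 0, n_t / np.maximum(n_c, 1), 0.0)

rng = np.random.default_rng(1)
n = 1000
treated = rng.integers(0, 2, n)
uplift = rng.normal(size=n)
outcome = rng.binomial(1, 0.1 + 0.05 * treated * (uplift > 0))
q = qini_curve(uplift.astype(float), treated, outcome)
print("Qini value at full targeting depth:", q[-1])
```

Plotting the curve against targeting depth, and comparing its area with that of a random-targeting baseline, yields the Qini coefficient used to rank models in the experiments described above.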
An Analytical Performance Assessment of a Fuel Cell-powered, Small Electric Airplane
NASA Technical Reports Server (NTRS)
Berton, Jeffrey J.; Freeh, Joshua E.; Wickenheiser, Timothy J.
2003-01-01
Rapidly emerging fuel cell power technologies may be used to launch a new revolution of electric propulsion systems for light aircraft. Future small electric airplanes using fuel cell technologies hold the promise of high reliability, low maintenance, low noise, and, with the exception of water vapor, zero emissions. This paper describes an analytical feasibility and performance assessment conducted by NASA's Glenn Research Center of a fuel cell-powered, propeller-driven, small electric airplane based on a model of the MCR 01 two-place kitplane.
NASA Astrophysics Data System (ADS)
Quinta-Nova, Luis; Fernandez, Paulo; Pedro, Nuno
2017-12-01
This work focuses on the development of a decision support system based on multicriteria spatial analysis to assess the potential for generating biomass residues from forestry sources in a region of Portugal (Beira Baixa). A set of environmental, economic and social criteria was defined, evaluated and weighted in the context of Saaty's analytic hierarchies. The best alternatives were obtained after applying the Analytic Hierarchy Process (AHP). The model was applied to the central region of Portugal, where forest and agriculture are the most representative land uses. Finally, a sensitivity analysis of the set of factors and their associated weights was performed to test the robustness of the model. The proposed evaluation model provides a valuable reference for decision makers in establishing a standardized means of selecting the optimal location for new biomass plants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinn, John J.; Greer, Christopher B.; Carr, Adrianne E.
2014-10-01
The purpose of this study is to update a one-dimensional analytical groundwater flow model to examine the influence of potential groundwater withdrawal in support of utility-scale solar energy development at the Afton Solar Energy Zone (SEZ) as a part of the Bureau of Land Management’s (BLM’s) Solar Energy Program. This report describes the modeling for assessing the drawdown associated with SEZ groundwater pumping rates for a 20-year duration considering three categories of water demand (high, medium, and low) based on technology-specific considerations. The 2012 modeling effort published in the Final Programmatic Environmental Impact Statement for Solar Energy Development in Six Southwestern States (Solar PEIS; BLM and DOE 2012) has been refined based on additional information described below in an expanded hydrogeologic discussion.
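As an illustration of the kind of analytical drawdown calculation such an assessment involves, the sketch below evaluates the classical Theis solution for a single pumping well; the aquifer parameters and pumping rate are invented, and the SEZ report's actual model is a different one-dimensional formulation.

```python
# Illustrative only: a standard Theis analytical drawdown calculation for a
# pumping well. All aquifer parameters below are assumptions.
import numpy as np
from scipy.special import exp1  # exponential integral E1

def theis_drawdown(r, t, Q, T, S):
    """Drawdown s(r, t) for pumping rate Q, transmissivity T, storativity S."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# 20-year drawdown 500 m from a well pumping 2000 m^3/day,
# with T = 500 m^2/day and S = 1e-3 (assumed values).
t = 20 * 365.25  # days
print(theis_drawdown(r=500.0, t=t, Q=2000.0, T=500.0, S=1e-3), "m")
```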
Koch, Cosima; Posch, Andreas E; Goicoechea, Héctor C; Herwig, Christoph; Lendl, Bernhard
2014-01-07
This paper presents the quantification of Penicillin V and phenoxyacetic acid, a precursor, inline during Penicillium chrysogenum fermentations by FTIR spectroscopy with partial least squares (PLS) regression and multivariate curve resolution - alternating least squares (MCR-ALS). First, the applicability of an attenuated total reflection FTIR fiber optic probe was assessed offline by measuring standards of the analytes of interest and investigating matrix effects of the fermentation broth. Then measurements were performed inline during four fed-batch fermentations, with online HPLC determination of Penicillin V and phenoxyacetic acid as the reference analysis. PLS and MCR-ALS models were built using these data and validated by comparison of single analyte spectra with the selectivity ratio of the PLS models and the extracted spectral traces of the MCR-ALS models, respectively. The achieved root mean square errors of cross-validation for the PLS regressions were 0.22 g L(-1) for Penicillin V and 0.32 g L(-1) for phenoxyacetic acid, and the root mean square errors of prediction for MCR-ALS were 0.23 g L(-1) for Penicillin V and 0.15 g L(-1) for phenoxyacetic acid. A general work-flow for building and assessing chemometric regression models for the quantification of multiple analytes in bioprocesses by FTIR spectroscopy is given. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
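The PLS portion of this work-flow can be sketched in a few lines. The example below builds a PLS model on synthetic "spectra" and reports a cross-validated RMSECV, mirroring the error metric quoted above; the data dimensions and component count are assumptions, not the paper's settings.

```python
# A minimal PLS cross-validation sketch on synthetic spectra; illustrative
# of the RMSECV work-flow only.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 200))               # 60 "spectra", 200 wavenumbers
true_coef = np.zeros(200)
true_coef[50:60] = 0.3                       # a narrow absorbing band
y = X @ true_coef + rng.normal(0, 0.1, 60)   # reference concentrations (g/L)

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=10)   # 10-fold cross-validation
rmsecv = np.sqrt(np.mean((y - y_cv.ravel()) ** 2))
print(f"RMSECV: {rmsecv:.3f} g/L")
```

In practice the number of latent components would be selected by scanning RMSECV over a range of component counts, exactly the kind of model-assessment step the paper's general work-flow formalizes.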
Foster, Katherine T; Beltz, Adriene M
2018-08-01
Ambulatory assessment (AA) methodologies have the potential to increase understanding and treatment of addictive behavior in seemingly unprecedented ways, due in part, to their emphasis on intensive repeated assessments of an individual's addictive behavior in context. But, many analytic techniques traditionally applied to AA data - techniques that average across people and time - do not fully leverage this potential. In an effort to take advantage of the individualized, temporal nature of AA data on addictive behavior, the current paper considers three underutilized person-oriented analytic techniques: multilevel modeling, p-technique, and group iterative multiple model estimation. After reviewing prevailing analytic techniques, each person-oriented technique is presented, AA data specifications are mentioned, an example analysis using generated data is provided, and advantages and limitations are discussed; the paper closes with a brief comparison across techniques. Increasing use of person-oriented techniques will substantially enhance inferences that can be drawn from AA data on addictive behavior and has implications for the development of individualized interventions. Copyright © 2017. Published by Elsevier Ltd.
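Of the three person-oriented techniques named above, multilevel modeling is the most widely supported in standard software. The sketch below fits a random-intercept, random-slope model to simulated AA-style data with statsmodels; the variable names (stress, craving) and data-generating values are hypothetical.

```python
# A minimal multilevel-model sketch for AA-style data: repeated craving
# ratings nested within persons. All variables are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_people, n_obs = 30, 40
rows = []
for pid in range(n_people):
    intercept = rng.normal(5, 1)      # person-specific baseline craving
    slope = rng.normal(0.5, 0.2)      # person-specific stress effect
    stress = rng.normal(size=n_obs)
    craving = intercept + slope * stress + rng.normal(0, 0.5, n_obs)
    rows += [{"person": pid, "stress": s, "craving": c}
             for s, c in zip(stress, craving)]
df = pd.DataFrame(rows)

# Random intercept and random stress slope per person.
model = smf.mixedlm("craving ~ stress", df, groups=df["person"],
                    re_formula="~stress")
print(model.fit().summary())
```

The random-effects variances in the output quantify exactly the between-person heterogeneity that person-averaged techniques obscure, which is the paper's central argument for these methods.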
Bridging analytical approaches for low-carbon transitions
NASA Astrophysics Data System (ADS)
Geels, Frank W.; Berkhout, Frans; van Vuuren, Detlef P.
2016-06-01
Low-carbon transitions are long-term multi-faceted processes. Although integrated assessment models have many strengths for analysing such transitions, their mathematical representation requires a simplification of the causes, dynamics and scope of such societal transformations. We suggest that integrated assessment model-based analysis should be complemented with insights from socio-technical transition analysis and practice-based action research. We discuss the underlying assumptions, strengths and weaknesses of these three analytical approaches. We argue that full integration of these approaches is not feasible, because of foundational differences in philosophies of science and ontological assumptions. Instead, we suggest that bridging, based on sequential and interactive articulation of different approaches, may generate a more comprehensive and useful chain of assessments to support policy formation and action. We also show how these approaches address knowledge needs of different policymakers (international, national and local), relate to different dimensions of policy processes and speak to different policy-relevant criteria such as cost-effectiveness, socio-political feasibility, social acceptance and legitimacy, and flexibility. A more differentiated set of analytical approaches thus enables a more differentiated approach to climate policy making.
Validation of urban freeway models. [supporting datasets
DOT National Transportation Integrated Search
2015-01-01
The goal of the SHRP 2 Project L33 Validation of Urban Freeway Models was to assess and enhance the predictive travel time reliability models developed in the SHRP 2 Project L03, Analytic Procedures for Determining the Impacts of Reliability Mitigati...
Analytical flow duration curves for summer streamflow in Switzerland
NASA Astrophysics Data System (ADS)
Santos, Ana Clara; Portela, Maria Manuela; Rinaldo, Andrea; Schaefli, Bettina
2018-04-01
This paper proposes a systematic assessment of the performance of an analytical modeling framework for streamflow probability distributions for a set of 25 Swiss catchments. These catchments show a wide range of hydroclimatic regimes, notably including snow-influenced streamflows. The model parameters are calculated from a spatially averaged gridded daily precipitation data set and from observed daily discharge time series, both in a forward estimation mode (direct parameter calculation from observed data) and in an inverse estimation mode (maximum likelihood estimation). The performance of the linear and the nonlinear model versions is assessed in terms of reproducing observed flow duration curves and their natural variability. Overall, the nonlinear model version outperforms the linear model for all regimes, but the linear model shows a notable performance increase with catchment elevation. More importantly, the obtained results demonstrate that the analytical model performs well for summer discharge for all analyzed streamflow regimes, ranging from rainfall-driven regimes with summer low flow to snow and glacier regimes with summer high flow. These results suggest that the model's encoding of discharge-generating events based on stochastic soil moisture dynamics is more flexible than previously thought. As shown in this paper, the presence of snowmelt or ice melt is accommodated by a relative increase in the discharge-generating frequency, a key parameter of the model. Explicit quantification of this frequency increase as a function of mean catchment meteorological conditions is left for future research.
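For context, a flow duration curve is simply the empirical exceedance distribution of discharge. The sketch below constructs one from a synthetic daily discharge record and reads off Q95; it illustrates the observed curves the analytical model is benchmarked against, not the analytical model itself.

```python
# A minimal empirical flow duration curve sketch; the synthetic lognormal
# discharge record stands in for an observed series.
import numpy as np

rng = np.random.default_rng(4)
q = rng.lognormal(mean=1.0, sigma=0.8, size=365)     # daily discharge (m^3/s)

q_sorted = np.sort(q)[::-1]                          # descending flows
exceedance = np.arange(1, q.size + 1) / (q.size + 1) # Weibull plotting position

# e.g. Q95, the flow equalled or exceeded 95% of the time:
print("Q95:", np.interp(0.95, exceedance, q_sorted), "m^3/s")
```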
[Real-time detection of quality of Chinese materia medica: strategy of NIR model evaluation].
Wu, Zhi-sheng; Shi, Xin-yuan; Xu, Bing; Dai, Xing-xing; Qiao, Yan-jiang
2015-07-01
The definition of critical quality attributes of Chinese materia medica (CMM) is put forward based on a top-level design concept. Supported by the development of rapid analytical science, rapid assessment of the critical quality attributes of CMM has emerged as a secondary discipline branch of CMM. Taking near infrared (NIR) spectroscopy, a rapid analytical technology applied to pharmaceutical processes over the past decade, as an example, the chemometric parameters used in NIR model evaluation are systematically reviewed. Given the complexity of CMM and the demands of trace-component analysis, a multi-source information fusion strategy for NIR modeling was developed for the assessment of the critical quality attributes of CMM. This strategy provides a guideline for reliable NIR analysis of the critical quality attributes of CMM.
Assessment of eutrophication in estuaries: Pressure-state-response and source apportionment
David Whitall; Suzanne Bricker
2006-01-01
The National Estuarine Eutrophication Assessment (NEEA) Update Program is a management oriented program designed to improve monitoring and assessment efforts through the development of type specific classification of estuaries that will allow improved assessment methods and development of analytical and research models and tools for managers which will help guide and...
Hess, Jeremy J.; Ebi, Kristie L.; Markandya, Anil; Balbus, John M.; Wilkinson, Paul; Haines, Andy; Chalabi, Zaid
2014-01-01
Background: Policy decisions regarding climate change mitigation are increasingly incorporating the beneficial and adverse health impacts of greenhouse gas emission reduction strategies. Studies of such co-benefits and co-harms involve modeling approaches requiring a range of analytic decisions that affect the model output. Objective: Our objective was to assess analytic decisions regarding model framework, structure, choice of parameters, and handling of uncertainty when modeling health co-benefits, and to make recommendations for improvements that could increase policy uptake. Methods: We describe the assumptions and analytic decisions underlying models of mitigation co-benefits, examining their effects on modeling outputs, and consider tools for quantifying uncertainty. Discussion: There is considerable variation in approaches to valuation metrics, discounting methods, uncertainty characterization and propagation, and assessment of low-probability/high-impact events. There is also variable inclusion of adverse impacts of mitigation policies, and limited extension of modeling domains to include implementation considerations. Going forward, co-benefits modeling efforts should be carried out in collaboration with policy makers; these efforts should include the full range of positive and negative impacts and critical uncertainties, as well as a range of discount rates, and should explicitly characterize uncertainty. We make recommendations to improve the rigor and consistency of modeling of health co-benefits. Conclusion: Modeling health co-benefits requires systematic consideration of the suitability of model assumptions, of what should be included and excluded from the model framework, and how uncertainty should be treated. Increased attention to these and other analytic decisions has the potential to increase the policy relevance and application of co-benefits modeling studies, potentially helping policy makers to maximize mitigation potential while simultaneously improving health. Citation: Remais JV, Hess JJ, Ebi KL, Markandya A, Balbus JM, Wilkinson P, Haines A, Chalabi Z. 2014. Estimating the health effects of greenhouse gas mitigation strategies: addressing parametric, model, and valuation challenges. Environ Health Perspect 122:447–455; http://dx.doi.org/10.1289/ehp.1306744 PMID:24583270
Determination of Uncertainties for the New SSME Model
NASA Technical Reports Server (NTRS)
Coleman, Hugh W.; Hawk, Clark W.
1996-01-01
This report discusses the uncertainty analysis performed in support of a new test analysis and performance prediction model for the Space Shuttle Main Engine. The new model utilizes uncertainty estimates for experimental data and for the analytical model to obtain the most plausible operating condition for the engine system. This report discusses the development of the data sets and uncertainty estimates to be used in the development of the new model. It also presents the application of uncertainty analysis to analytical models, including the uncertainty analysis of the conservation of mass and energy balance relations. A new methodology for the assessment of the uncertainty associated with linear regressions is presented.
A dashboard-based system for supporting diabetes care.
Dagliati, Arianna; Sacchi, Lucia; Tibollo, Valentina; Cogni, Giulia; Teliti, Marsida; Martinez-Millana, Antonio; Traver, Vicente; Segagni, Daniele; Posada, Jorge; Ottaviano, Manuel; Fico, Giuseppe; Arredondo, Maria Teresa; De Cata, Pasquale; Chiovato, Luca; Bellazzi, Riccardo
2018-05-01
To describe the development, as part of the European Union MOSAIC (Models and Simulation Techniques for Discovering Diabetes Influence Factors) project, of a dashboard-based system for the management of type 2 diabetes and assess its impact on clinical practice. The MOSAIC dashboard system is based on predictive modeling, longitudinal data analytics, and the reuse and integration of data from hospitals and public health repositories. Data are merged into an i2b2 data warehouse, which feeds a set of advanced temporal analytic models, including temporal abstractions, care-flow mining, drug exposure pattern detection, and risk-prediction models for type 2 diabetes complications. The dashboard has 2 components, designed for (1) clinical decision support during follow-up consultations and (2) outcome assessment on populations of interest. To assess the impact of the clinical decision support component, a pre-post study was conducted considering visit duration, number of screening examinations, and lifestyle interventions. A pilot sample of 700 Italian patients was investigated. Judgments on the outcome assessment component were obtained via focus groups with clinicians and health care managers. The use of the decision support component in clinical activities produced a reduction in visit duration (P < .01) and an increase in the number of screening exams for complications (P < .01). We also observed a relevant, although nonstatistically significant, increase in the proportion of patients receiving lifestyle interventions (from 69% to 77%). Regarding the outcome assessment component, focus groups highlighted the system's capability of identifying and understanding the characteristics of patient subgroups treated at the center. Our study demonstrates that decision support tools based on the integration of multiple-source data and visual and predictive analytics do improve the management of a chronic disease such as type 2 diabetes by enacting a successful implementation of the learning health care system cycle.
Autonomous Soil Assessment System: A Data-Driven Approach to Planetary Mobility Hazard Detection
NASA Astrophysics Data System (ADS)
Raimalwala, K.; Faragalli, M.; Reid, E.
2018-04-01
The Autonomous Soil Assessment System predicts mobility hazards for rovers. Its development and performance are presented, with focus on its data-driven models, machine learning algorithms, and real-time sensor data fusion for predictive analytics.
Deformation of Polymer Composites in Force Protection Systems
NASA Astrophysics Data System (ADS)
Nazarian, Oshin
Systems used for protecting personnel, vehicles and infrastructure from ballistic and blast threats derive their performance from a combination of the intrinsic properties of the constituent materials and the way in which the materials are arranged and attached to one another. The present work addresses outstanding issues in both the intrinsic properties of high-performance fiber composites and the consequences of how such composites are integrated into force protection systems. One aim is to develop a constitutive model for the large-strain intralaminar shear deformation of an ultra-high molecular weight polyethylene (UHMWPE) fiber-reinforced composite. To this end, an analytical model based on a binary representation of the constituent phases is developed and validated using finite element analyses. The model is assessed through comparisons with experimental measurements on cross-ply composite specimens in the +/-45° orientation. The hardening behavior and the limiting tensile strain are attributable to rotations of fibers in the plastic domain and the effects of these rotations on the internal stress state. The model is further assessed through quasi-static punch experiments and dynamic impact tests using metal foam projectiles. A finite element model based on this constitutive model accurately captures both the back-face deflection-time history and the final plate profile (especially the changes caused by fiber pull-in). A separate analytical framework for describing the accelerations caused by head impact during, for example, the secondary collision of a vehicle occupant with the cabin interior during an external event is also presented. The severity of impact, characterized by the Head Injury Criterion (HIC), is used to assess the efficacy of crushable foams in mitigating head injury. The framework is used to identify the optimal foam strength that minimizes the HIC for prescribed mass and velocity, subject to constraints on foam thickness. The predictive capability of the model is evaluated through comparisons with a series of experimental measurements from impacts of an instrumented headform onto several commercial foams. Additional comparisons are made with the results of finite element simulations. An analytical model for the planar impact of a cylindrical mass on a foam is also developed. This model sets a theoretical bound for the reduction in HIC by utilizing a "plate-on-foam" design. Experimental results of impact tests on foams coupled with stiff composite plates are presented, with comparisons to the theoretical limits predicted by the analytical model. Design maps are developed from the analytical models, illustrating the variations in the HIC with foam strength and impact velocity.
Gravity Field Recovery from the Cartwheel Formation by the Semi-analytical Approach
NASA Astrophysics Data System (ADS)
Li, Huishu; Reubelt, Tilo; Antoni, Markus; Sneeuw, Nico; Zhong, Min; Zhou, Zebing
2016-04-01
Past and current gravimetric satellite missions have contributed greatly to our knowledge of the Earth's gravity field. Nevertheless, several geoscience disciplines push for even higher requirements on accuracy, homogeneity and time- and space-resolution of the Earth's gravity field. Apart from better instruments or new observables, alternative satellite formations could improve the signal and error structure. With respect to other methods, one significant advantage of the semi-analytical approach is its effective pre-mission error assessment for gravity field missions. The semi-analytical approach builds a linear analytical relationship between the Fourier spectrum of the observables and the spherical harmonic spectrum of the gravity field. The spectral link between observables and gravity field parameters is given by the transfer coefficients, which constitute the observation model. In connection with a stochastic model, it can be used for pre-mission error assessment of gravity field missions. The cartwheel formation is formed by two satellites on elliptic orbits in the same plane. The time-dependent ranging is considered in the transfer coefficients via a convolution that includes the series expansion of the eccentricity functions. The transfer coefficients are applied to assess the error patterns caused by different orientations of the cartwheel for range-rate and range-acceleration observables. This work presents the isotropy and magnitude of the formal errors of the gravity field coefficients for different orientations of the cartwheel.
An Improved Analytic Model for Microdosimeter Response
NASA Technical Reports Server (NTRS)
Shinn, Judy L.; Wilson, John W.; Xapsos, Michael A.
2001-01-01
An analytic model used to predict energy deposition fluctuations in a microvolume by ions through direct events is improved to include indirect delta ray events. The new model can now account for the increase in flux at low lineal energy when the ions are of very high energy. Good agreement is obtained between the calculated results and available data for laboratory ion beams. Comparison of GCR (galactic cosmic ray) flux between Shuttle TEPC (tissue equivalent proportional counter) flight data and current calculations draws a different assessment of developmental work required for the GCR transport code (HZETRN) than previously concluded.
Pasta, D J; Taylor, J L; Henning, J M
1999-01-01
Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
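A bootstrap-based probabilistic sensitivity analysis of the kind described can be implemented in standard software in a few lines. The sketch below bootstraps patient-level cost and effect samples for two strategies and summarizes the incremental cost-effectiveness ratio (ICER) distribution; all data and the two-strategy setup are invented for illustration, not the H. pylori model itself.

```python
# A minimal bootstrap PSA sketch for a two-strategy decision model; all
# costs, effects, and sample sizes are assumptions.
import numpy as np

rng = np.random.default_rng(5)
# Patient-level (cost, effect) samples per strategy, standing in for data.
cost_a, eff_a = rng.normal(1200, 300, 200), rng.normal(0.70, 0.10, 200)
cost_b, eff_b = rng.normal(1500, 350, 200), rng.normal(0.78, 0.10, 200)

n_boot, icers = 2000, []
for _ in range(n_boot):
    ia = rng.integers(0, 200, 200)      # bootstrap resample indices, arm A
    ib = rng.integers(0, 200, 200)      # bootstrap resample indices, arm B
    d_cost = cost_b[ib].mean() - cost_a[ia].mean()
    d_eff = eff_b[ib].mean() - eff_a[ia].mean()
    icers.append(d_cost / d_eff)

print("median ICER:", np.median(icers))
print("95% interval:", np.percentile(icers, [2.5, 97.5]))
```

Resampling the observed data directly, rather than drawing from assumed theoretical distributions, is precisely the advantage of the bootstrap that the authors emphasize.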
Geue, Claudia; Wu, Olivia; Xin, Yiqiao; Heggie, Robert; Hutchinson, Sharon; Martin, Natasha K.; Fenwick, Elisabeth; Goldberg, David
2015-01-01
Introduction: Studies evaluating the cost-effectiveness of screening for Hepatitis B Virus (HBV) and Hepatitis C Virus (HCV) are generally heterogeneous in terms of risk groups, settings, screening intervention, outcomes and the economic modelling framework. It is therefore difficult to compare cost-effectiveness results between studies. This systematic review aims to summarise and critically assess existing economic models for HBV and HCV in order to identify the main methodological differences in modelling approaches. Methods: A structured search strategy was developed and a systematic review carried out. A critical assessment of the decision-analytic models was carried out according to the guidelines and framework developed for assessment of decision-analytic models in Health Technology Assessment of health care interventions. Results: The overall approach to analysing the cost-effectiveness of screening strategies was found to be broadly consistent for HBV and HCV. However, modelling parameters and related structure differed between models, producing different results. More recent publications performed better against a performance matrix, evaluating model components and methodology. Conclusion: When assessing screening strategies for HBV and HCV infection, the focus should be on more recent studies, which applied the latest treatment regimes, test methods and had better and more complete data on which to base their models. In addition to parameter selection and associated assumptions, careful consideration of dynamic versus static modelling is recommended. Future research may want to focus on these methodological issues. In addition, the ability to evaluate screening strategies for multiple infectious diseases (HCV and HIV at the same time) might prove important for decision makers. PMID:26689908
An analytical poroelastic model for ultrasound elastography imaging of tumors
NASA Astrophysics Data System (ADS)
Tauhidul Islam, Md; Chaudhry, Anuj; Unnikrishnan, Ginu; Reddy, J. N.; Righetti, Raffaella
2018-01-01
The mechanical behavior of biological tissues has been studied using a number of mechanical models. Due to the relatively high fluid content and mobility, many biological tissues have been modeled as poroelastic materials. Diseases such as cancers are known to alter the poroelastic response of a tissue. Tissue poroelastic properties such as compressibility, interstitial permeability and fluid pressure also play a key role for the assessment of cancer treatments and for improved therapies. At the present time, however, a limited number of poroelastic models for soft tissues are retrievable in the literature, and the ones available are not directly applicable to tumors as they typically refer to uniform tissues. In this paper, we report the analytical poroelastic model for a non-uniform tissue under stress relaxation. Displacement, strain and fluid pressure fields in a cylindrical poroelastic sample containing a cylindrical inclusion during stress relaxation are computed. Finite element simulations are then used to validate the proposed theoretical model. Statistical analysis demonstrates that the proposed analytical model matches the finite element results with less than 0.5% error. The availability of the analytical model and solutions presented in this paper may be useful to estimate diagnostically relevant poroelastic parameters such as interstitial permeability and fluid pressure, and, in general, for a better interpretation of clinically-relevant ultrasound elastography results.
A methodology for the assessment of manned flight simulator fidelity
NASA Technical Reports Server (NTRS)
Hess, Ronald A.; Malsbury, Terry N.
1989-01-01
A relatively simple analytical methodology for assessing the fidelity of manned flight simulators for specific vehicles and tasks is offered. The methodology is based upon an application of a structural model of the human pilot, including motion cue effects. In particular, predicted pilot/vehicle dynamic characteristics are obtained with and without simulator limitations. A procedure for selecting model parameters can be implemented, given a probable pilot control strategy. In analyzing a pair of piloting tasks for which flight and simulation data are available, the methodology correctly predicted the existence of simulator fidelity problems. The methodology permitted the analytical evaluation of a change in simulator characteristics and indicated that a major source of the fidelity problems was a visual time delay in the simulation.
The geospatial modeling interface (GMI) framework for deploying and assessing environmental models
USDA-ARS?s Scientific Manuscript database
Geographical information systems (GIS) software packages have been used for close to three decades as analytical tools in environmental management for geospatial data assembly, processing, storage, and visualization of input data and model output. However, with increasing availability and use of ful...
Energy Analytics Campaign: Assessment of Automated M&V Methods. Granderson, J., et al., Lawrence Berkeley National Laboratory. Related report: Performance Metrics and Objective Testing Methods for Energy Baseline Modeling Software.
78 FR 1856 - Availability of Draft Chemical Risk Assessments; Public Comment Opportunity
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-09
... bioaccumulation, environmental risk assessment (aquatic and terrestrial), and analytical chemistry of organic... organics, experts on use of volatiles as solvent degreasers and in the arts/crafts field, chemical...: Exposure modeling, aquatic ecotoxicology, terrestrial ecotoxicology, inorganic chemistry addressing water...
42 CFR 493.1289 - Standard: Analytic systems quality assessment.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 5 2010-10-01 2010-10-01 false Standard: Analytic systems quality assessment. 493... Nonwaived Testing Analytic Systems § 493.1289 Standard: Analytic systems quality assessment. (a) The... through 493.1283. (b) The analytic systems quality assessment must include a review of the effectiveness...
Petersen, Per H; Lund, Flemming; Fraser, Callum G; Sölétormos, György
2016-11-01
Background: The distributions of within-subject biological variation are usually described as coefficients of variation, as are analytical performance specifications for bias, imprecision and other characteristics. Estimation of the specifications required for reference change values is traditionally done using the relationship between the batch-related changes during routine performance, described as Δbias, and the coefficients of variation for analytical imprecision (CV_A): the original theory is based on standard deviations or coefficients of variation calculated as if distributions were Gaussian. Methods: The distribution of between-subject biological variation can generally be described as log-Gaussian. Moreover, recent analyses of within-subject biological variation suggest that many measurands have log-Gaussian distributions. In consequence, we generated a model for the estimation of analytical performance specifications for the reference change value, combining Δbias and CV_A based on log-Gaussian distributions of CV_I expressed as natural logarithms. The model was tested using plasma prolactin and glucose as examples. Results: Analytical performance specifications for the reference change value generated using the new model based on log-Gaussian distributions were practically identical with those from the traditional model based on Gaussian distributions. Conclusion: The traditional and simple-to-apply model used to generate analytical performance specifications for the reference change value, based on the use of coefficients of variation and assuming Gaussian distributions for both CV_I and CV_A, is generally useful.
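The contrast between the two modeling assumptions can be made concrete with the standard reference change value (RCV) formulas. The sketch below implements the classic Gaussian RCV alongside an asymmetric log-normal variant following the usual literature forms; the CV values are assumptions, not the paper's prolactin or glucose data.

```python
# A minimal RCV sketch contrasting Gaussian and log-normal assumptions;
# formulas follow standard literature forms, inputs are illustrative.
import math

def rcv_gaussian(cv_a, cv_i, z=1.96):
    """Classic symmetric reference change value (as a fraction)."""
    return z * math.sqrt(2.0) * math.sqrt(cv_a**2 + cv_i**2)

def rcv_lognormal(cv_a, cv_i, z=1.96):
    """Asymmetric RCV assuming log-Gaussian variation (upward, downward)."""
    sigma = math.sqrt(math.log(cv_a**2 + 1.0) + math.log(cv_i**2 + 1.0))
    up = math.exp(z * math.sqrt(2.0) * sigma) - 1.0
    down = math.exp(-z * math.sqrt(2.0) * sigma) - 1.0
    return up, down

cv_a, cv_i = 0.05, 0.10   # 5% analytical CV, 10% within-subject CV (assumed)
print("Gaussian RCV: +/-", rcv_gaussian(cv_a, cv_i))
print("log-normal RCV (up, down):", rcv_lognormal(cv_a, cv_i))
```

For small CVs the two forms give nearly the same limits, which is consistent with the paper's finding that the resulting performance specifications were practically identical.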
Development of a robust space power system decision model
NASA Astrophysics Data System (ADS)
Chew, Gilbert; Pelaccio, Dennis G.; Jacobs, Mark; Stancati, Michael; Cataldo, Robert
2001-02-01
NASA continues to evaluate power systems to support human exploration of the Moon and Mars. The system(s) would address all power needs of surface bases and on-board power for space transfer vehicles. Prior studies have examined both solar and nuclear-based alternatives with respect to individual issues such as sizing or cost. What has not been addressed is a comprehensive look at the risks and benefits of the options that could serve as the analytical framework to support a system choice that best serves the needs of the exploration program. This paper describes the SAIC-developed Space Power System Decision Model, which uses a formal Two-step Analytical Hierarchy Process (TAHP) methodology to clearly distinguish candidate power systems in terms of benefits, safety, and risk. TAHP is a decision-making process based on the Analytical Hierarchy Process: it employs a hierarchic approach that structures decision factors by weights and ranks system design options relative to one another on a consistent basis. This decision process also includes a level of data gathering and organization that produces a consistent, well-documented assessment, from which the capability of each power system option to meet top-level goals can be prioritized. The model defined in this effort focuses on the comparative assessment of candidate power system options for Mars surface application(s). This paper describes the principles of this approach, the assessment criteria and weighting procedures, and the tools to capture and assess the expert knowledge associated with space power system evaluation.
ICDA: A Platform for Intelligent Care Delivery Analytics
Gotz, David; Stavropoulos, Harry; Sun, Jimeng; Wang, Fei
2012-01-01
The identification of high-risk patients is a critical component in improving patient outcomes and managing costs. This paper describes the Intelligent Care Delivery Analytics platform (ICDA), a system which enables risk assessment analytics that process large collections of dynamic electronic medical data to identify at-risk patients. ICDA works by ingesting large volumes of data into a common data model, then orchestrating a collection of analytics that identify at-risk patients. It also provides an interactive environment through which users can access and review the analytics results. In addition, ICDA provides APIs via which analytics results can be retrieved to surface in external applications. A detailed review of ICDA’s architecture is provided. Descriptions of four use cases are included to illustrate ICDA’s application within two different data environments. These use cases showcase the system’s flexibility and exemplify the types of analytics it enables. PMID:23304296
The Theory and Practice of Estimating the Accuracy of Dynamic Flight-Determined Coefficients
NASA Technical Reports Server (NTRS)
Maine, R. E.; Iliff, K. W.
1981-01-01
Means of assessing the accuracy of maximum likelihood parameter estimates obtained from dynamic flight data are discussed. The most commonly used analytical predictors of accuracy are derived and compared from both statistical and simplified geometric standpoints. The accuracy predictions are evaluated with real and simulated data, with an emphasis on practical considerations, such as modeling error. Improved computations of the Cramer-Rao bound to correct large discrepancies due to colored noise and modeling error are presented. The corrected Cramer-Rao bound is shown to be the best available analytical predictor of accuracy, and several practical examples of the use of the Cramer-Rao bound are given. Engineering judgement, aided by such analytical tools, is the final arbiter of accuracy estimation.
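As a reminder of what the Cramer-Rao bound computes, the sketch below forms the Fisher information for a linear-in-parameters measurement model with Gaussian noise and inverts it; the dynamic model and noise level are illustrative, not the flight-data setup of the report.

```python
# A minimal Cramer-Rao bound sketch for y = X @ theta + noise, with an
# assumed sensitivity matrix X and noise level.
import numpy as np

t = np.linspace(0, 10, 200)
X = np.column_stack([np.ones_like(t), t, np.sin(t)])  # sensitivity matrix
sigma = 0.2                                           # measurement noise std

# Fisher information for Gaussian noise; the CRB is its inverse.
F = X.T @ X / sigma**2
crb = np.linalg.inv(F)
print("CRB standard deviations:", np.sqrt(np.diag(crb)))
```

Colored noise and modeling error violate the white-Gaussian assumption behind this computation, which is exactly why the report develops corrected versions of the bound.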
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kleimann, Jens; Fichtner, Horst; Röken, Christian, E-mail: jk@tp4.rub.de, E-mail: hf@tp4.rub.de, E-mail: christian.roeken@mathematik.uni-regensburg.de
A previously published analytical magnetohydrodynamic model for the local interstellar magnetic field in the vicinity of the heliopause (Röken et al. 2015) is extended from incompressible to compressible, yet predominantly subsonic flow, considering both isothermal and adiabatic equations of state. Exact expressions and suitable approximations for the density and the flow velocity are derived and discussed. In addition to the stationary induction equation, these expressions also satisfy the momentum balance equation along stream lines. The practical usefulness of the corresponding, still exact, analytical magnetic field solution is assessed by comparing it quantitatively to results from a fully self-consistent magnetohydrodynamic simulation of the interstellar magnetic field draping around the heliopause.
THREAT ANTICIPATION AND DECEPTIVE REASONING USING BAYESIAN BELIEF NETWORKS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allgood, Glenn O; Olama, Mohammed M; Lake, Joe E
Recent events highlight the need for tools to anticipate threats posed by terrorists. Assessing these threats requires combining information from disparate data sources such as analytic models, simulations, historical data, sensor networks, and user judgments. These disparate data can be combined in a coherent, analytically defensible, and understandable manner using a Bayesian belief network (BBN). In this paper, we develop a BBN threat anticipatory model based on a deceptive reasoning algorithm using a network engineering process that treats the probability distributions of the BBN nodes within the broader context of the system development process.
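A toy version of such a network makes the combination mechanism concrete. The sketch below hand-rolls a two-evidence BBN (a threat node with sensor-alert and analyst-report children) and performs inference by enumeration; the structure and all probabilities are invented, and the paper's anticipatory model is far richer.

```python
# A minimal hand-rolled BBN sketch: Threat -> {SensorAlert, AnalystReport},
# with posterior inference by enumeration. All probabilities are assumed.
P_threat = 0.05
P_alert_given = {True: 0.80, False: 0.10}    # P(sensor alert | threat)
P_report_given = {True: 0.60, False: 0.05}   # P(analyst report | threat)

def posterior_threat(alert: bool, report: bool) -> float:
    """P(threat | evidence) via Bayes' rule, assuming the two evidence
    sources are conditionally independent given the threat state."""
    def joint(threat: bool) -> float:
        pa = P_alert_given[threat] if alert else 1 - P_alert_given[threat]
        pr = P_report_given[threat] if report else 1 - P_report_given[threat]
        prior = P_threat if threat else 1 - P_threat
        return pa * pr * prior
    num = joint(True)
    return num / (num + joint(False))

print("P(threat | alert, report) =", posterior_threat(True, True))
```

Scaling this idea up, each disparate data source (simulation output, historical data, user judgment) becomes a node with its own conditional probability table, which is what makes the combined assessment coherent and analytically defensible.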
Jones, Graham R D; Albarede, Stephanie; Kesseler, Dagmar; MacKenzie, Finlay; Mammen, Joy; Pedersen, Morten; Stavelin, Anne; Thelen, Marc; Thomas, Annette; Twomey, Patrick J; Ventura, Emma; Panteghini, Mauro
2017-06-27
External Quality Assurance (EQA) is vital to ensure acceptable analytical quality in medical laboratories. A key component of an EQA scheme is an analytical performance specification (APS) for each measurand that a laboratory can use to assess the extent of deviation of the obtained results from the target value. A consensus conference held in Milan in 2014 has proposed three models to set APS and these can be applied to setting APS for EQA. A goal arising from this conference is the harmonisation of EQA APS between different schemes to deliver consistent quality messages to laboratories irrespective of location and the choice of EQA provider. At this time there are wide differences in the APS used in different EQA schemes for the same measurands. Contributing factors to this variation are that the APS in different schemes are established using different criteria, applied to different types of data (e.g. single data points, multiple data points), used for different goals (e.g. improvement of analytical quality; licensing), and with the aim of eliciting different responses from participants. This paper provides recommendations from the European Federation of Laboratory Medicine (EFLM) Task and Finish Group on Performance Specifications for External Quality Assurance Schemes (TFG-APSEQA) and on clear terminology for EQA APS. The recommended terminology covers six elements required to understand APS: 1) a statement on the EQA material matrix and its commutability; 2) the method used to assign the target value; 3) the data set to which APS are applied; 4) the applicable analytical property being assessed (i.e. total error, bias, imprecision, uncertainty); 5) the rationale for the selection of the APS; and 6) the type of the Milan model(s) used to set the APS. The terminology is required for EQA participants and other interested parties to understand the meaning of meeting or not meeting APS.
Ritrovato, Matteo; Faggiano, Francesco C; Tedesco, Giorgia; Derrico, Pietro
2015-06-01
This article outlines the Decision-Oriented Health Technology Assessment: a new implementation of the European network for Health Technology Assessment Core Model, integrating multicriteria decision-making analysis by using the analytic hierarchy process to introduce a standardized methodological approach as a valued and shared tool to support health care decision making within a hospital. Following the Core Model as guidance (European network for Health Technology Assessment. HTA core model for medical and surgical interventions. Available from: http://www.eunethta.eu/outputs/hta-core-model-medical-and-surgical-interventions-10r. [Accessed May 27, 2014]), it is possible to apply the analytic hierarchy process to break down a problem into its constituent parts and identify priorities (i.e., assigning a weight to each part) in a hierarchical structure. Thus, it quantitatively compares the importance of multiple criteria in assessing health technologies and how the alternative technologies perform in satisfying these criteria. The verbal ratings are translated into a quantitative form by using the Saaty scale (Saaty TL. Decision making with the analytic hierarchy process. Int J Serv Sci 2008;1:83-98). An eigenvector analysis is used for deriving the weights' systems (i.e., local and global weights' systems) that reflect the importance assigned to the criteria and the priorities related to the performance of the alternative technologies. Compared with the Core Model, this methodological approach supplies more timely and contextualized evidence for a specific technology, making it possible to obtain data that are more relevant and easier to interpret, and therefore more useful for decision makers to make investment choices with greater awareness. We reached the conclusion that although there may be scope for improvement, this implementation is a step forward toward the goal of building a "solid bridge" between the scientific evidence and the final decision maker's choice. Copyright © 2015 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Participatory flood vulnerability assessment: a multi-criteria approach
NASA Astrophysics Data System (ADS)
Madruga de Brito, Mariana; Evers, Mariele; Delos Santos Almoradie, Adrian
2018-01-01
This paper presents a participatory multi-criteria decision-making (MCDM) approach for flood vulnerability assessment while considering the relationships between vulnerability criteria. The applicability of the proposed framework is demonstrated in the municipalities of Lajeado and Estrela, Brazil. The model was co-constructed by 101 experts from governmental organizations, universities, research institutes, NGOs, and private companies. Participatory methods such as the Delphi survey, focus groups, and workshops were applied. A participatory problem structuration, in which the modellers work closely with end users, was used to establish the structure of the vulnerability index. The preferences of each participant regarding the criteria importance were spatially modelled through the analytical hierarchy process (AHP) and analytical network process (ANP) multi-criteria methods. Experts were also involved at the end of the modelling exercise for validation. The final product is a set of individual and group flood vulnerability maps. Both AHP and ANP proved to be effective for flood vulnerability assessment; however, ANP is preferred as it considers the dependences among criteria. The participatory approach enabled experts to learn from each other and acknowledge different perspectives towards social learning. The findings highlight that to enhance the credibility and deployment of model results, multiple viewpoints should be integrated without forcing consensus.
Analytical Round Robin for Elastic-Plastic Analysis of Surface Cracked Plates: Phase I Results
NASA Technical Reports Server (NTRS)
Wells, D. N.; Allen, P. A.
2012-01-01
An analytical round robin for the elastic-plastic analysis of surface cracks in flat plates was conducted with 15 participants. Experimental results from a surface crack tension test in 2219-T8 aluminum plate provided the basis for the inter-laboratory study (ILS). The study proceeded in a blind fashion given that the analysis methodology was not specified to the participants, and key experimental results were withheld. This approach allowed the ILS to serve as a current measure of the state of the art for elastic-plastic fracture mechanics analysis. The analytical results and the associated methodologies were collected for comparison, and sources of variability were studied and isolated. The results of the study revealed that the J-integral analysis methodology using the domain integral method is robust, providing reliable J-integral values without being overly sensitive to modeling details. General modeling choices such as analysis code, model size (mesh density), crack tip meshing, or boundary conditions, were not found to be sources of significant variability. For analyses controlled only by far-field boundary conditions, the greatest source of variability in the J-integral assessment is introduced through the constitutive model. This variability can be substantially reduced by using crack mouth opening displacements to anchor the assessment. Conclusions provide recommendations for analysis standardization.
Ben-Assuli, Ofir; Leshno, Moshe
2016-09-01
In the last decade, health providers have implemented information systems to improve accuracy in medical diagnosis and decision-making. This article evaluates the impact of an electronic health record on emergency department physicians' diagnosis and admission decisions. A decision analytic approach using a decision tree was constructed to model the admission decision process to assess the added value of medical information retrieved from the electronic health record. Using a Bayesian statistical model, this method was evaluated on two coronary artery disease scenarios. The results show that the cases of coronary artery disease were better diagnosed when the electronic health record was consulted and led to more informed admission decisions. Furthermore, the value of medical information required for a specific admission decision in emergency departments could be quantified. The findings support the notion that physicians and patient healthcare can benefit from implementing electronic health record systems in emergency departments. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Barrett, Steven R. H.; Britter, Rex E.
Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for assessment of a site. While this may be acceptable for assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point source run from an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms. The parameterization method combined with the analytical solutions for long-term mean dispersion is shown to produce results several orders of magnitude more efficiently, with a loss of accuracy small compared to the absolute accuracy of advanced dispersion models near sources. The method can be readily incorporated into existing dispersion models, and may allow for additional computation time to be expended on modelling dispersion processes more accurately in future, rather than on accounting for source geometry.
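The analytical building block being generalized here is the Gaussian point-source solution. The sketch below evaluates a steady-state Gaussian plume with ground reflection, using crude power-law dispersion coefficients as a stand-in for stability-dependent parameterizations; it does not reproduce the paper's hypergeometric line- and area-source solutions.

```python
# A minimal Gaussian point-source plume sketch; dispersion coefficients and
# all inputs are crude assumptions for illustration only.
import numpy as np

def gaussian_plume(x, y, z, Q, u, H):
    """Steady-state concentration (g/m^3) at (x, y, z) downwind of a point
    source of strength Q (g/s) in wind speed u (m/s), release height H (m)."""
    # Power-law dispersion coefficients in downwind distance x (m); real
    # models use stability-dependent parameterizations instead.
    sigma_y = 0.08 * x ** 0.9
    sigma_z = 0.06 * x ** 0.85
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))  # ground reflection
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level concentration 500 m downwind of a ground-level source.
print(gaussian_plume(x=500.0, y=0.0, z=0.0, Q=1.0, u=5.0, H=0.0), "g/m^3")
```

Integrating this kernel over a line or an area, and then over a long-term wind climatology, is the expensive step that the paper's analytical solutions replace.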
The rate of bubble growth in a superheated liquid in pool boiling
NASA Astrophysics Data System (ADS)
Abdollahi, Mohammad Reza; Jafarian, Mehdi; Jamialahmadi, Mohammad
2017-12-01
A semi-empirical model for the estimation of the rate of bubble growth in nucleate pool boiling is presented, considering a new equation to estimate the temperature history of the bubble in the bulk of the liquid. The conservation equations of energy, mass and momentum were first derived and solved analytically. The present analytical model predicts that the radius of the bubble grows as √t·erf(N√t), whereas in previous studies the bubble growth rate has mainly been correlated with √t alone. In the next step, the analytical solutions were used to develop a new semi-empirical equation. To achieve this, the analytical solutions were first non-dimensionalised and then experimental data available in the literature were applied to tune the dimensionless coefficients appearing in the dimensionless equation. Finally, the reliability of the proposed semi-empirical model was assessed through comparison of the model predictions with experimental data available in the literature that were not applied in the tuning of the dimensionless parameters of the model. A comparison of the model predictions with other models proposed in the literature was also performed. These comparisons show that the model enables more accurate predictions than previously proposed models, with a deviation of less than 10% over a wide range of operating conditions.
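The quoted growth law is easy to evaluate numerically. The sketch below compares R(t) ∝ √t·erf(N√t) with the classical √t scaling; the prefactor and N are arbitrary illustrative constants, not values fitted in the paper.

```python
# A minimal numerical comparison of the quoted growth law with the
# classical sqrt(t) scaling; A and N are assumed constants.
import numpy as np
from scipy.special import erf

t = np.linspace(1e-6, 1e-2, 100)        # time (s)
A, N = 1.0e-3, 50.0                      # assumed model constants

r_present = A * np.sqrt(t) * erf(N * np.sqrt(t))   # present growth law
r_classic = A * np.sqrt(t)                          # classical scaling

# The ratio erf(N*sqrt(t)) shows the early-time departure from sqrt(t).
print("ratio at t = 1 ms:", (r_present / r_classic)[np.searchsorted(t, 1e-3)])
```

Since erf(N√t) tends to 1 for large t, the two laws coincide at late times and differ mainly in the early stage of growth, where the bubble's temperature history matters most.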
Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi; Richardson, Jacob A.; Cashman, Katharine V.
2017-01-01
Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, designing flow mitigation measures, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics (CFD) models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, COMSOL, and MOLASSES. We model viscous, cooling, and solidifying flows over horizontal planes, sloping surfaces, and into topographic obstacles. We compare model results to physical observations made during well-controlled analogue and molten basalt experiments, and to analytical theory when available. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and OpenFOAM and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We assess the goodness-of-fit of the simulation results and the computational cost. Our results guide the selection of numerical simulation codes for different applications, including inferring emplacement conditions of past lava flows, modeling the temporal evolution of ongoing flows during eruption, and probabilistic assessment of lava flow hazard prior to eruption. Finally, we outline potential experiments and desired key observational data from future flows that would extend existing benchmarking data sets.
Vibrations and structureborne noise in space station
NASA Technical Reports Server (NTRS)
Vaicaitis, R.
1985-01-01
Theoretical models capable of predicting structural response and noise transmission due to random point mechanical loads were developed. Fiber-reinforced composite and aluminum materials were considered. Cylindrical shells and circular plates were taken as typical representatives of structural components for space station habitability modules. Analytical formulations include double-wall and single-wall constructions. Pressurized and unpressurized models were considered. Parametric studies were conducted to determine the effects on structural response and noise transmission of fiber orientation, point load location, damping in the core and the main load-carrying structure, pressurization, interior acoustic absorption, etc. These analytical models could serve as preliminary tools for assessing noise-related problems in space station applications.
Thermoelastic damping in thin microrings with two-dimensional heat conduction
NASA Astrophysics Data System (ADS)
Fang, Yuming; Li, Pu
2015-05-01
Accurate determination of thermoelastic damping (TED) is very challenging in the design of micro-resonators. Microrings are widely used in many micro-resonators. Several analytical models have previously been developed for the TED effect in microrings; however, in those works the governing heat conduction equation is solved only for one-dimensional heat conduction along the radial thickness of the microring. This paper presents a simple analytical model for TED in microrings in which two-dimensional heat conduction, accounting for thermoelastic temperature gradients along both the radial thickness and the circumferential direction, is considered. A two-dimensional heat conduction equation is developed, and its solution is represented by the product of an assumed sine series along the radial thickness and an assumed trigonometric series along the circumferential direction. The analytical results obtained with the present 2-D model show good agreement with numerical (FEM) results, and the limitations of the previous 1-D models are assessed.
Review and assessment of the database and numerical modeling for turbine heat transfer
NASA Technical Reports Server (NTRS)
Gladden, H. J.; Simoneau, R. J.
1988-01-01
The objectives of the HOST Turbine Heat Transfer subproject were to obtain a better understanding of the physics of the aerothermodynamic phenomena and to assess and improve the analytical methods used to predict the flow and heat transfer in high-temperature gas turbines. At the time the HOST project was initiated, an across-the-board improvement in turbine design technology was needed. A building-block approach was utilized and the research ranged from the study of fundamental phenomena and modeling to experiments in simulated real engine environments. Experimental research accounted for approximately 75 percent of the funding while the analytical efforts were approximately 25 percent. A healthy government/industry/university partnership, with industry providing almost half of the research, was created to advance the turbine heat transfer design technology base.
Rotor/Wing Interactions in Hover
NASA Technical Reports Server (NTRS)
Young, Larry A.; Derby, Michael R.
2002-01-01
Hover predictions of tiltrotor aircraft are hampered by the lack of accurate and computationally efficient models for rotor/wing interactional aerodynamics. This paper summarizes the development of an approximate, potential flow solution for the rotor-on-rotor and wing-on-rotor interactions. This analysis is based on actuator disk and vortex theory and the method of images. The analysis is applicable for out-of-ground-effect predictions. The analysis is particularly suited for aircraft preliminary design studies. Flow field predictions from this simple analytical model are validated against experimental data from previous studies. The paper concludes with an analytical assessment of the influence of rotor-on-rotor and wing-on-rotor interactions. This assessment examines the effect of rotor-to-wing offset distance, wing sweep, wing span, and flaperon incidence angle on tiltrotor inflow and performance.
Cunningham, Virginia L; D'Aco, Vincent J; Pfeiffer, Danielle; Anderson, Paul D; Buzby, Mary E; Hannah, Robert E; Jahnke, James; Parke, Neil J
2012-07-01
This article presents the capability expansion of the PhATE™ (pharmaceutical assessment and transport evaluation) model to predict concentrations of trace organics in sludges and biosolids from municipal wastewater treatment plants (WWTPs). PhATE was originally developed as an empirical model to estimate potential concentrations of active pharmaceutical ingredients (APIs) in US surface and drinking waters that could result from patient use of medicines. However, many compounds, including pharmaceuticals, are not completely transformed in WWTPs and remain in biosolids that may be applied to land as a soil amendment. This practice leads to concerns about potential exposures of people who may come into contact with amended soils, and also about potential effects on plants and animals living in or contacting such soils. The model estimates the mass of API in WWTP influent based on the population served, the per capita use of the API, and the potential loss of the compound associated with human use (e.g., metabolism). The mass of API on the treated biosolids is then estimated based on partitioning to primary and secondary solids, potential loss due to biodegradation in secondary treatment (e.g., activated sludge), and potential loss during sludge treatment (e.g., aerobic digestion, anaerobic digestion, composting). Simulations using 2 surrogate compounds show that predicted environmental concentrations (PECs) generated by PhATE are in very good agreement with measured concentrations, i.e., well within 1 order of magnitude. Model simulations were then carried out for 18 APIs representing a broad range of chemical and use characteristics. These simulations yielded 4 categories of results: 1) PECs are in good agreement with measured data for 9 compounds with high analytical detection frequencies, 2) PECs are greater than measured data for 3 compounds with high analytical detection frequencies, possibly as a result of as yet unidentified depletion mechanisms, 3) PECs are less than analytical reporting limits for 5 compounds with low analytical detection frequencies, and 4) the PEC is greater than the analytical method reporting limit for 1 compound with a low analytical detection frequency, possibly again as a result of insufficient depletion data. Overall, these results demonstrate that PhATE has the potential to be a very useful tool in the evaluation of APIs in biosolids. Possible applications include prioritizing APIs for assessment even in the absence of analytical methods; evaluating sludge processing scenarios to explore potential mitigation approaches; use in risk assessments; and developing realistic nationwide concentrations, because PECs can be represented as a cumulative probability distribution. Finally, comparison of PECs to measured concentrations can also be used to identify the need for fate studies of compounds of interest in biosolids. Copyright © 2011 SETAC.
Doggui, Radhouene; El Ati-Hellal, Myriam; Traissac, Pierre; El Ati, Jalila
2018-03-26
Urinary iodine concentration (UIC) is commonly used to assess the iodine status of subjects in epidemiological surveys. As pre-analytical factors are an important source of measurement error and studies of this phase are scarce, our objective was to assess the influence of urine sampling conditions on UIC, i.e., whether the child ate breakfast or not, the urine void rank of the day, and the time span between the last meal and urine collection. A nationwide, two-stage, stratified, cross-sectional study including 1560 children (6-12 years) was performed in 2012. UIC was determined by the Sandell-Kolthoff method. Pre-analytical factors were assessed from children's mothers by using a questionnaire. Associations between iodine status and pre-analytical factors were adjusted for one another and for socio-economic characteristics by multivariate linear and multinomial regression models (RPR: relative prevalence ratios). Skipping breakfast prior to morning urine sampling decreased UIC by 40 to 50 μg/L, and the proportion of UIC < 100 μg/L was higher among children who had skipped breakfast (RPR = 3.2[1.0-10.4]). In unadjusted analyses, UIC was lower among children sampled more than 5 h after their last meal. UIC decreased with the rank of urine void (e.g., first vs. second, P < 0.001); the proportion of UIC < 100 μg/L was also greater among 4th-rank samples (vs. second, RPR = 2.1[1.1-4.0]). Subjects' breakfast status and urine void rank should be accounted for when assessing iodine status. Providing recommendations to standardize pre-analytical factors is a key step toward improving the accuracy and comparability of survey results for assessing iodine status from spot urine samples. These recommendations have to be evaluated by future research.
AN INTERDISCIPLINARY APPROACH TO VALUING WATER FROM BRUSH CONTROL
An analytical methodology utilizing models from three disciplines is developed to assess the viability of brush control for water yield in the Frio River Basin, TX. Ecological, hydrologic, and economic models are used to portray changes in forage production and water supply result...
Bagnasco, Lucia; Cosulich, M Elisabetta; Speranza, Giovanna; Medini, Luca; Oliveri, Paolo; Lanteri, Silvia
2014-08-15
The relationships between sensory attributes and analytical measurements, performed by electronic tongue (ET) and near-infrared spectroscopy (NIRS), were investigated in order to develop a rapid method for the assessment of umami taste. Commercially available umami products and some amino acids were submitted to sensory analysis. Results were analysed in comparison with the outcomes of the analytical measurements. Multivariate exploratory analysis was performed by principal component analysis (PCA). Calibration models for prediction of umami taste on the basis of ET and NIR signals were obtained using partial least squares (PLS) regression. Different approaches for merging data from the two analytical instruments were considered. Both techniques proved to provide information related to umami taste; in particular, ET signals showed the higher correlation with the umami attribute. Data fusion was found to be only slightly beneficial, not significantly enough to justify the coupled use of the two analytical techniques. Copyright © 2014 Elsevier Ltd. All rights reserved.
The Measure of Adolescent Heterosocial Competence: Development and Initial Validation
ERIC Educational Resources Information Center
Grover, Rachel L.; Nangle, Douglas W.; Zeff, Karen R.
2005-01-01
We developed and began construct validation of the Measure of Adolescent Heterosocial Competence (MAHC), a self-report instrument assessing the ability to negotiate effectively a range of challenging other-sex social interactions. Development followed the Goldfried and D'Zurilla (1969) behavioral-analytic model for assessing competence.…
NASA Astrophysics Data System (ADS)
Irwanto, Rohaeti, Eli; LFX, Endang Widjajanti; Suyanta
2017-05-01
This research aims to develop an instrument and determine the characteristics of an integrated assessment instrument. The research uses the 4-D model, which includes define, design, develop, and disseminate. The primary product was validated by expert judgment, its readability was tested by students, and its feasibility was assessed by chemistry teachers. The research involved 246 students of grade XI from four senior high schools in Yogyakarta, Indonesia. Data collection techniques included interview, questionnaire, and test; the instruments included an interview guideline, an item validation sheet, a users' response questionnaire, an instrument readability questionnaire, and an essay test. The results show that the integrated assessment instrument has an Aiken validity value of 0.95; item reliability was 0.99 and person reliability was 0.69. Teachers' response to the integrated assessment instrument was very good. Therefore, the integrated assessment instrument is feasible for measuring students' analytical thinking and science process skills.
Analytical Problems and Suggestions in the Analysis of Behavioral Economic Demand Curves.
Yu, Jihnhee; Liu, Liu; Collins, R Lorraine; Vincent, Paula C; Epstein, Leonard H
2014-01-01
Behavioral economic demand curves (Hursh, Raslear, Shurtleff, Bauman, & Simmons, 1988) are innovative approaches to characterizing the relationship between consumption of a substance and its price. In this article, we investigate common analytical issues in the use of behavioral economic demand curves, which can cause inconsistent interpretations of demand curves, and we then provide methodological suggestions to address those issues. We first demonstrate that log transformation with different added values for handling zeros changes model parameter estimates dramatically. Second, demand curves are often analyzed using an overparameterized model that results in an inefficient use of the available data and a lack of assessment of the variability among individuals. To address these issues, we apply a nonlinear mixed effects model based on multivariate error structures that has not previously been used to analyze behavioral economic demand curves in the literature. We also propose analytical formulas for the relevant standard errors of derived values such as Pmax, Omax, and elasticity. The proposed model stabilizes the derived values regardless of the added increment used and provides substantially smaller standard errors. We illustrate the data analysis procedure using data from a relative reinforcement efficacy study of simulated marijuana purchasing.
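A small sketch of the first issue raised above, that the added value used to log-transform zero consumption shifts parameter estimates: an exponential demand curve (in the Hursh-Silberberg form, with k fixed for simplicity) is refit to invented data under different added values. The data and starting values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

prices = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
consumption = np.array([10.0, 9.0, 7.0, 4.0, 1.0, 0.0])  # zero at top price

def exp_demand(c, log_q0, alpha, k=2.0):
    # Hursh-Silberberg exponential demand in log10 space; k fixed here.
    q0 = 10.0 ** log_q0
    return log_q0 + k * (np.exp(-alpha * q0 * c) - 1.0)

for add in (0.1, 0.5, 1.0):  # candidate added values for handling zeros
    y = np.log10(consumption + add)
    (log_q0, alpha), _ = curve_fit(exp_demand, prices, y, p0=(1.0, 0.01))
    print(f"added value {add}: Q0 = {10**log_q0:.2f}, alpha = {alpha:.4f}")
```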
Ekwunife, Obinna I; Grote, Andreas Gerber; Mosch, Christoph; O'Mahony, James F; Lhachimi, Stefan K
2015-05-12
Cervical cancer poses a huge health burden, both to developed and developing nations, making prevention and control strategies necessary. However, the challenges of designing and implementing prevention strategies differ for low- and middle-income countries (LMICs) as compared to countries with fully developed health care systems. Moreover, for many LMICs, much of the data needed for decision analytic modelling, such as prevalence, will most likely only be partly available or measured with much larger uncertainty. Lastly, imperfect implementation of human papillomavirus (HPV) vaccination may influence the effectiveness of cervical cancer prevention in unpredictable ways. This systematic review aims to assess how decision analytic modelling studies of HPV cost-effectiveness in LMICs accounted for the particular challenges faced in such countries. Specifically, the study will assess the following: (1) whether the existing literature on cost-effectiveness modelling of HPV vaccines acknowledges the distinct challenges of LMICs, (2) how these challenges were accommodated in the models, (3) whether certain parameters systematically exhibited large degrees of uncertainty due to lack of data and how influential these parameters were on model-based recommendations, and (4) whether the choice of modelling herd immunity influences model-based recommendations, especially when coverage of an HPV vaccination program is not optimal. We will conduct a systematic review to identify suitable studies from MEDLINE (via PubMed), EMBASE, NHS Economic Evaluation Database (NHS EED), EconLit, Web of Science, and CEA Registry. Searches will be conducted for studies of interest published since 2006. The searches will be supplemented by hand searching of the most relevant papers found in the search. Studies will be critically appraised using the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement checklist. We will undertake a descriptive, narrative, and interpretative synthesis of data to address the study objectives. The proposed systematic review will assess how cost-effectiveness studies of HPV vaccines accounted for the distinct challenges of LMICs. The gaps identified will expose areas for additional research as well as challenges that need to be accounted for in future modelling studies. PROSPERO CRD42015017870.
NASA Astrophysics Data System (ADS)
Buddendorf, B.; Fabris, L.; Malcolm, I.; Lazzaro, G.; Tetzlaff, D.; Botter, G.; Soulsby, C.
2016-12-01
Wild Atlantic salmon populations in Scottish rivers constitute an important economic and recreational resource, as well as being a key component of biodiversity. Salmon have specific habitat requirements at different life stages, and their distribution is therefore strongly influenced by a complex suite of biological and physical controls. Stream hydrodynamics have a strong influence on habitat quality and affect the distribution and density of juvenile salmon. As stream hydrodynamics directly relate to stream flow variability and channel morphology, the effects of hydroclimatic drivers on the spatial and temporal variability of habitat suitability can be assessed. Critical Displacement Velocity (CDV), which describes the velocity at which fish can no longer hold station, is one potential approach for characterising habitat suitability. CDV is obtained using an empirical formula that depends on fish size and stream temperature; by characterising the proportion of a reach below CDV, it is possible to assess the suitable area. We demonstrate that a generic analytical approach based on field survey and hydraulic modelling can provide insights into the interactions between flow regime and average suitable area (SA) for juvenile salmon, and could be extended to other aquatic species. Analytical functions are used to model the pdf of stream flow, p(q), and the relationship between flow and suitable area, SA(q). Theoretically these functions can assume any form; here we use a gamma distribution to model p(q) and a gamma function to model SA(q). Integrating the product of these functions, we obtain an analytical expression for SA. Since the parameters of p(q) can be estimated from meteorological and flow measurements, they can be used directly to predict the effect of flow regime on SA. We show the utility of the approach with reference to 6 electrofishing sites in a single river system where long-term (50 years) data on spatially distributed juvenile salmon densities are available.
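The closed-form construction described, SA as the integral of SA(q) weighted by the flow pdf p(q), can be sketched numerically; the gamma pdf parameters and the decay form of SA(q) below are invented stand-ins, not the fitted functions from the study.

```python
import numpy as np
from scipy import stats, integrate

k, theta = 2.0, 0.5   # invented gamma parameters for the flow pdf p(q)
a, b = 0.8, 1.5       # invented decay of suitable-area fraction with flow

def p_q(q):
    return stats.gamma.pdf(q, k, scale=theta)

def sa_q(q):
    # Stand-in for the fitted SA(q) relation (fraction of reach below CDV)
    return a * np.exp(-b * q)

# Long-term mean suitable area: SA = integral of SA(q) p(q) dq over all flows
sa_bar, _ = integrate.quad(lambda q: sa_q(q) * p_q(q), 0.0, np.inf)
print(f"mean suitable-area fraction: {sa_bar:.3f}")
```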
Development of computer-based analytical tool for assessing physical protection system
NASA Astrophysics Data System (ADS)
Mardhi, Alim; Pengvanich, Phongphaeth
2016-01-01
Assessment of physical protection system effectiveness is a priority for ensuring optimum protection against unlawful acts at a nuclear facility, such as unauthorized removal of nuclear materials and sabotage of the facility itself. Since an assessment based on real exercise scenarios is costly and time-consuming, a computer-based analytical tool offers a practical way to evaluate likely threat scenarios. Several tools, such as EASI and SAPE, are readily available; however, for our research purposes a tool that can be customized and enhanced further is more suitable. In this work, we have developed a computer-based analytical tool that uses a network methodology to model adversary paths. The inputs are the multiple security elements used to evaluate the effectiveness of the system's detection, delay, and response functions. The tool can identify the most critical path and quantify the probability of effectiveness of the system as a performance measure.
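A toy sketch of the network approach to adversary paths described above, reduced to detection probabilities only (the actual tool also models delay and response timing); the graph topology and probabilities are invented.

```python
import networkx as nx

# Toy adversary-path network: nodes are protection layers, edge attributes
# are detection probabilities. Layout and numbers are invented.
G = nx.DiGraph()
G.add_edge("offsite", "fence", p_detect=0.5)
G.add_edge("fence", "door", p_detect=0.7)
G.add_edge("offsite", "gate", p_detect=0.9)
G.add_edge("gate", "door", p_detect=0.6)
G.add_edge("door", "target", p_detect=0.8)

def p_undetected(path):
    # Probability the adversary traverses the whole path undetected
    prob = 1.0
    for u, v in zip(path, path[1:]):
        prob *= 1.0 - G[u][v]["p_detect"]
    return prob

# The most critical path is the one that maximizes non-detection
worst = max(nx.all_simple_paths(G, "offsite", "target"), key=p_undetected)
print(worst, "P(effectiveness) =", 1.0 - p_undetected(worst))
```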
Analytical and simulator study of advanced transport
NASA Technical Reports Server (NTRS)
Levison, W. H.; Rickard, W. W.
1982-01-01
An analytic methodology, based on the optimal-control pilot model, was demonstrated for assessing longitudinal-axis handling qualities of transport aircraft in final approach. Calibration of the methodology is largely in terms of closed-loop performance requirements, rather than specific vehicle response characteristics, and is based on a combination of published criteria, pilot preferences, physical limitations, and engineering judgment. Six longitudinal-axis approach configurations were studied, covering a range of handling qualities problems, including the presence of flexible aircraft modes. The analytical procedure was used to obtain predictions of Cooper-Harper ratings, a scalar quadratic performance index, and rms excursions of important system variables.
Verification of Decision-Analytic Models for Health Economic Evaluations: An Overview.
Dasbach, Erik J; Elbasha, Elamin H
2017-07-01
Decision-analytic models for cost-effectiveness analysis are developed in a variety of software packages where the accuracy of the computer code is seldom verified. Although modeling guidelines recommend using state-of-the-art quality assurance and control methods for software engineering to verify models, the fields of pharmacoeconomics and health technology assessment (HTA) have yet to establish and adopt guidance on how to verify health and economic models. The objective of this paper is to introduce to our field the variety of methods the software engineering field uses to verify that software performs as expected. We identify how many of these methods can be incorporated in the development process of decision-analytic models in order to reduce errors and increase transparency. Given the breadth of methods used in software engineering, we recommend a more in-depth initiative to be undertaken (e.g., by an ISPOR-SMDM Task Force) to define the best practices for model verification in our field and to accelerate adoption. Establishing a general guidance for verifying models will benefit the pharmacoeconomics and HTA communities by increasing accuracy of computer programming, transparency, accessibility, sharing, understandability, and trust of models.
ERIC Educational Resources Information Center
Barboza, Gustavo A.; Pesek, James
2012-01-01
Assessment of the business curriculum and its learning goals and objectives has become a major field of interest for business schools. The exploratory results of the authors' model using a sample of 173 students show robust support for the hypothesis that high marks in course-embedded assessment on business-specific analytical skills positively…
Analysis of gene network robustness based on saturated fixed point attractors
2014-01-01
The analysis of gene network robustness to noise and mutation is important for fundamental and practical reasons. Robustness refers to the stability of the equilibrium expression state of a gene network to variations of the initial expression state and network topology. Numerical simulation of these variations is commonly used for the assessment of robustness. Since there exists a great number of possible gene network topologies and initial states, even millions of simulations may be still too small to give reliable results. When the initial and equilibrium expression states are restricted to being saturated (i.e., their elements can only take values 1 or −1 corresponding to maximum activation and maximum repression of genes), an analytical gene network robustness assessment is possible. We present this analytical treatment based on determination of the saturated fixed point attractors for sigmoidal function models. The analysis can determine (a) for a given network, which and how many saturated equilibrium states exist and which and how many saturated initial states converge to each of these saturated equilibrium states and (b) for a given saturated equilibrium state or a given pair of saturated equilibrium and initial states, which and how many gene networks, referred to as viable, share this saturated equilibrium state or the pair of saturated equilibrium and initial states. We also show that the viable networks sharing a given saturated equilibrium state must follow certain patterns. These capabilities of the analytical treatment make it possible to properly define and accurately determine robustness to noise and mutation for gene networks. Previous network research conclusions drawn from performing millions of simulations follow directly from the results of our analytical treatment. Furthermore, the analytical results provide criteria for the identification of model validity and suggest modified models of gene network dynamics. The yeast cell-cycle network is used as an illustration of the practical application of this analytical treatment. PMID:24650364
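The saturated fixed-point determination described above reduces, in the saturating limit of a sigmoidal model, to a sign-consistency check; a minimal sketch on an invented three-gene network (the paper's model details may differ):

```python
import numpy as np
from itertools import product

# Toy 3-gene network: W[i, j] is the regulatory effect of gene j on gene i
# (genes 1 and 2 activate each other, both repress gene 3); weights invented.
W = np.array([[ 0.0,  1.0, -1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0, -1.0,  0.0]])

def is_saturated_fixed_point(s, W):
    # In the saturating limit, a +/-1 state is an equilibrium iff every
    # gene's net input has the same sign as its own state.
    return np.all(np.sign(W @ s) == s)

attractors = [s for s in product((-1, 1), repeat=3)
              if is_saturated_fixed_point(np.array(s), W)]
print(attractors)   # -> [(-1, -1, 1), (1, 1, -1)]
```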
Roberts, James H.; Anderson, Gregory B.; Angermeier, Paul
2016-01-01
Projects to assess environmental impact or restoration success in rivers focus on project-specific questions but can also provide valuable insights for future projects. Both restoration actions and impact assessments can become “adaptive” by using the knowledge gained from long-term monitoring and analysis to revise the actions, monitoring, conceptual model, or interpretation of findings so that subsequent actions or assessments are better informed. Assessments of impact or restoration success are especially challenging when the indicators of interest are imperiled species and/or the impacts being addressed are complex. From 1997 to 2015, we worked closely with two federal agencies to monitor habitat availability for and population density of Roanoke logperch (Percina rex), an endangered fish, in a 24-km-long segment of the upper Roanoke River, VA. We primarily used a Before-After-Control-Impact analytical framework to assess potential impacts of a river channelization project on the P. rex population. In this paper, we summarize how our extensive monitoring facilitated the evolution of our (a) conceptual understanding of the ecosystem and fish population dynamics; (b) choices of ecological indicators and analytical tools; and (c) conclusions regarding the magnitude, mechanisms, and significance of observed impacts. Our experience with this case study taught us important lessons about how to adaptively develop and conduct a monitoring program, which we believe are broadly applicable to assessments of environmental impact and restoration success in other rivers. In particular, we learned that (a) pre-treatment planning can enhance monitoring effectiveness, help avoid unforeseen pitfalls, and lead to more robust conclusions; (b) developing adaptable conceptual and analytical models early was crucial to organizing our knowledge, guiding our study design, and analyzing our data; (c) catchment-wide processes that we did not monitor, or initially consider, had profound implications for interpreting our findings; and (d) using multiple analytical frameworks, with varying assumptions, led to clearer interpretation of findings than the use of a single framework alone. Broader integration of these guiding principles into monitoring studies, though potentially challenging, could lead to more scientifically defensible assessments of project effects.
Replica Analysis for Portfolio Optimization with Single-Factor Model
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2017-06-01
In this paper, we use replica analysis to investigate the influence of correlation among the return rates of assets on the solution of the portfolio optimization problem. We consider the behavior of the optimal solution for the case where the return rate is described by a single-factor model, and compare the findings obtained from our proposed method for correlated return rates with those obtained for independent return rates. We then analytically assess the increase in investment risk when correlation is included. Furthermore, we compare our approach with analytical procedures from operations research for minimizing the investment risk.
2013-01-01
Background Proper evaluation of new diagnostic tests is required to reduce overutilization and to limit potential negative health effects and costs related to testing. A decision analytic modelling approach may be worthwhile when a diagnostic randomized controlled trial is not feasible. We demonstrate this by assessing the cost-effectiveness of modified transesophageal echocardiography (TEE) compared with manual palpation for the detection of atherosclerosis in the ascending aorta. Methods Based on a previous diagnostic accuracy study, actual Dutch reimbursement data, and evidence from literature we developed a Markov decision analytic model. Cost-effectiveness of modified TEE was assessed for a life time horizon and a health care perspective. Prevalence rates of atherosclerosis were age-dependent and low as well as high rates were applied. Probabilistic sensitivity analysis was applied. Results The model synthesized all available evidence on the risk of stroke in cardiac surgery patients. The modified TEE strategy consistently resulted in more adapted surgical procedures and, hence, a lower risk of stroke and a slightly higher number of life-years. With 10% prevalence of atherosclerosis the incremental cost-effectiveness ratio was €4,651 and €481 per quality-adjusted life year in 55-year-old men and women, respectively. In all patients aged 65 years or older the modified TEE strategy was cost saving and resulted in additional health benefits. Conclusions Decision analytic modelling to assess the cost-effectiveness of a new diagnostic test based on characteristics, costs and effects of the test itself and of the subsequent treatment options is both feasible and valuable. Our case study on modified TEE suggests that it may reduce the risk of stroke in cardiac surgery patients older than 55 years at acceptable cost-effectiveness levels. PMID:23368927
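A highly simplified cohort-model sketch of the kind of Markov decision analysis described above; every state, transition probability, cost, and utility below is invented for illustration and is unrelated to the study's actual inputs.

```python
import numpy as np

# Two-strategy Markov cohort sketch with annual cycles over states
# (well, stroke, dead); all probabilities, costs, and utilities invented.
def run(p_stroke, cost_test, cycles=30, disc=0.03):
    P = np.array([[1.0 - p_stroke - 0.02, p_stroke, 0.02],
                  [0.0, 0.90, 0.10],
                  [0.0, 0.00, 1.00]])
    utils = np.array([0.85, 0.60, 0.0])    # QALY weight per state-year
    costs = np.array([0.0, 5000.0, 0.0])   # annual cost per state
    x = np.array([1.0, 0.0, 0.0])          # cohort starts in "well"
    qaly, cost = 0.0, cost_test
    for t in range(cycles):
        x = x @ P                          # one annual transition
        qaly += (x @ utils) / (1.0 + disc) ** t
        cost += (x @ costs) / (1.0 + disc) ** t
    return qaly, cost

q_pal, c_pal = run(p_stroke=0.020, cost_test=0.0)    # palpation only
q_tee, c_tee = run(p_stroke=0.015, cost_test=300.0)  # modified TEE
print("ICER:", (c_tee - c_pal) / (q_tee - q_pal), "per QALY gained")
```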
Development of a category 2 approach system model
NASA Technical Reports Server (NTRS)
Johnson, W. A.; Mcruer, D. T.
1972-01-01
An analytical model is presented which provides, as its primary output, the probability of a successful Category II approach. Typical applications are included using several example systems (manual and automatic) which are subjected to random gusts and deterministic wind shear. The primary purpose of the approach system model is to establish a structure containing the system elements, command inputs, disturbances, and their interactions in an analytical framework so that the relative effects of changes in the various system elements on precision of control and available margins of safety can be estimated. The model is intended to provide insight for the design and integration of suitable autopilot, display, and navigation elements; and to assess the interaction of such elements with the pilot/copilot.
Limitations of bootstrap current models
Belli, Emily A.; Candy, Jefferey M.; Meneghini, Orso; ...
2014-03-27
We assess the accuracy and limitations of two analytic models of the tokamak bootstrap current: (1) the well-known Sauter model and (2) a recent modification of the Sauter model by Koh et al. For this study, we use simulations from the first-principles kinetic code NEO as the baseline to which the models are compared. Tests are performed using both theoretical parameter scans as well as core-to-edge scans of real DIII-D and NSTX plasma profiles. The effects of extreme aspect ratio, large impurity fraction, energetic particles, and high collisionality are studied. In particular, the error in neglecting cross-species collisional coupling – an approximation inherent to both analytic models – is quantified. Moreover, the implications of the corrections from kinetic NEO simulations for MHD equilibrium reconstructions are studied via integrated modeling with kinetic EFIT.
Methodology for assessing the safety of Hydrogen Systems: HyRAM 1.1 technical reference manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groth, Katrina; Hecht, Ethan; Reynolds, John Thomas
The HyRAM software toolkit provides a basis for conducting quantitative risk assessment and consequence modeling for hydrogen infrastructure and transportation systems. HyRAM is designed to facilitate the use of state-of-the-art science and engineering models to conduct robust, repeatable assessments of hydrogen safety, hazards, and risk. HyRAM is envisioned as a unifying platform combining validated, analytical models of hydrogen behavior, a standardized, transparent QRA approach, and engineering models and generic data for hydrogen installations. HyRAM is being developed at Sandia National Laboratories for the U.S. Department of Energy to increase access to technical data about hydrogen safety and to enable the use of that data to support development and revision of national and international codes and standards. This document provides a description of the methodology and models contained in HyRAM version 1.1. HyRAM 1.1 includes generic probabilities for hydrogen equipment failures, probabilistic models for the impact of heat flux on humans and structures, and computationally and experimentally validated analytical and first-order models of hydrogen release and flame physics. HyRAM 1.1 integrates deterministic and probabilistic models for quantifying accident scenarios, predicting physical effects, characterizing hydrogen hazards (thermal effects from jet fires, overpressure effects from deflagrations), and assessing impact on people and structures. HyRAM is a prototype software in active development and thus the models and data may change. This report will be updated at appropriate developmental intervals.
Box-wing model approach for solar radiation pressure modelling in a multi-GNSS scenario
NASA Astrophysics Data System (ADS)
Tobias, Guillermo; Jesús García, Adrián
2016-04-01
The solar radiation pressure force is the largest orbital perturbation after gravitational effects and the major error source affecting GNSS satellites. A wide range of approaches have been developed over the years for modelling this non-gravitational effect as part of the orbit determination process. These approaches are commonly divided into empirical, semi-analytical and analytical, their main difference lying in the amount of a-priori physical information used about the properties of the satellites (materials and geometry) and their attitude. It has been shown in the past that pre-launch analytical models fail to achieve the desired accuracy, mainly due to difficulties in extrapolating the in-orbit optical and thermal properties, perturbations in the nominal attitude law, and the aging of the satellite's surfaces, whereas empirical models' accuracies depend strongly on the amount of tracking data used to derive them, and their performance is reduced as the area-to-mass ratio of the GNSS satellites increases, as happens for upcoming constellations such as BeiDou and Galileo. This paper proposes a basic box-wing model for Galileo, complemented with empirical parameters, based on the limited available information about the Galileo satellites' geometry. The satellite is modelled as a box, representing the satellite bus, and a wing, representing the solar panel. The performance of the model is assessed for the GPS, GLONASS and Galileo constellations, with results analyzed over a one-year period. To assess the results, two different SRP models have been used: firstly, the proposed box-wing model and, secondly, the new CODE empirical model, ECOM2. The orbit performances of both models are assessed using Satellite Laser Ranging (SLR) measurements, together with an evaluation of orbit prediction accuracy. This comparison shows the advantages and disadvantages of taking the physical interactions between satellite and solar radiation into account in an empirical model with respect to a pure empirical model.
Expressivism, Relativism, and the Analytic Equivalence Test
Frápolli, Maria J.; Villanueva, Neftalí
2015-01-01
The purpose of this paper is to show that, pace Field (2009), MacFarlane's assessment relativism and expressivism should be sharply distinguished. We do so by arguing that relativism and expressivism exemplify two very different approaches to context-dependence. Relativism, on the one hand, shares with other contemporary approaches a bottom-up, building-block model, while expressivism is part of a different tradition, one that might include Lewis' epistemic contextualism and Frege's content individuation, with which it shares an organic model for dealing with context-dependence. The building-block model and the organic model, and thus relativism and expressivism, are set apart with the aid of a particular test: only the building-block model is compatible with the idea that there might be analytically equivalent, and yet different, propositions. PMID:26635690
Sam Rossman; Charles B. Yackulic; Sarah P. Saunders; Janice Reid; Ray Davis; Elise F. Zipkin
2016-01-01
Occupancy modeling is a widely used analytical technique for assessing species distributions and range dynamics. However, occupancy analyses frequently ignore variation in the abundance of occupied sites, even though site abundances affect many of the parameters being estimated (e.g., extinction, colonization, detection probability). We introduce a new model ("dynamic…
2009-09-01
nuclear industry for conducting performance assessment calculations. The analytical FORTRAN code for the DNAPL source function, REMChlor, was...project. The first was to apply existing deterministic codes, such as T2VOC and UTCHEM, to the DNAPL source zone to simulate the remediation processes...but describe the spatial variability of source zones, unlike one-dimensional flow and transport codes that assume homogeneity. The Lagrangian models
Review and assessment of the database and numerical modeling for turbine heat transfer
NASA Technical Reports Server (NTRS)
Gladden, H. J.; Simoneau, R. J.
1989-01-01
The objectives of the NASA Hot Section Technology (HOST) Turbine Heat Transfer subproject were to obtain a better understanding of the physics of the aerothermodynamic phenomena and to assess and improve the analytical methods used to predict the flow and heat transfer in high-temperature gas turbines. At the time the HOST project was initiated, an across-the-board improvement in turbine design technology was needed. A building-block approach was utilized and the research ranged from the study of fundamental phenomena and modeling to experiments in simulated real engine environments. Experimental research accounted for approximately 75 percent of the funding while the analytical efforts were approximately 25 percent. A healthy government/industry/university partnership, with industry providing almost half of the research, was created to advance the turbine heat transfer design technology base.
Schaafsma, Joanna D; van der Graaf, Yolanda; Rinkel, Gabriel J E; Buskens, Erik
2009-12-01
The lack of a standard methodology in diagnostic research impedes adequate evaluation before implementation of constantly developing diagnostic techniques. We discuss the methodology of diagnostic research and underscore the relevance of decision analysis in the process of evaluation of diagnostic tests. Overview and conceptual discussion. Diagnostic research requires a stepwise approach comprising assessment of test characteristics followed by evaluation of added value, clinical outcome, and cost-effectiveness. These multiple goals are generally incompatible with a randomized design. Decision-analytic models provide an important alternative through integration of the best available evidence. Thus, critical assessment of clinical value and efficient use of resources can be achieved. Decision-analytic models should be considered part of the standard methodology in diagnostic research. They can serve as a valid alternative to diagnostic randomized clinical trials (RCTs).
IT vendor selection model by using structural equation model & analytical hierarchy process
NASA Astrophysics Data System (ADS)
Maitra, Sarit; Dominic, P. D. D.
2012-11-01
Selecting and evaluating the right vendors is imperative for an organization's competitiveness in the global marketplace. Improper selection and evaluation of potential vendors can dwarf an organization's supply chain performance. Numerous studies have demonstrated that firms consider multiple criteria when selecting key vendors. This research develops a new hybrid model for the vendor selection process to support better decision making. The proposed model provides a suitable tool for assisting decision makers and managers in making the right decisions and selecting the most suitable vendor. This paper proposes a hybrid model based on the Structural Equation Model (SEM) and the Analytical Hierarchy Process (AHP) for long-term strategic vendor selection problems. The five-step framework of the model was designed after a thorough literature study. The proposed hybrid model is applied to a real-life case study to assess its effectiveness, and what-if analysis is used for model validation.
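The AHP half of such a hybrid could, for example, derive criterion weights as the principal eigenvector of a pairwise comparison matrix; a minimal sketch with an invented three-criterion matrix (the SEM step is omitted here):

```python
import numpy as np

# Invented 3-criterion pairwise comparison matrix on the Saaty 1-9 scale;
# A[i, j] encodes how strongly criterion i is preferred to criterion j.
A = np.array([[1.0, 3.0, 5.0],
              [1.0 / 3.0, 1.0, 2.0],
              [1.0 / 5.0, 1.0 / 2.0, 1.0]])

vals, vecs = np.linalg.eig(A)
i = np.argmax(vals.real)
w = vecs[:, i].real
w = w / w.sum()                   # AHP priority vector

lam = vals.real[i]                # principal eigenvalue
ci = (lam - 3.0) / (3.0 - 1.0)    # consistency index
cr = ci / 0.58                    # random index RI = 0.58 for n = 3
print("weights:", np.round(w, 3), "consistency ratio:", round(cr, 3))
```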
Environmental probabilistic quantitative assessment methodologies
Crovelli, R.A.
1995-01-01
In this paper, four petroleum resource assessment methodologies are presented as possible pollution assessment methodologies, even though petroleum as a resource is desirable whereas pollution is undesirable. A methodology is defined in this paper to consist of a probability model and a probabilistic method, where the method is used to solve the model. The following four basic types of probability models are considered: 1) direct assessment, 2) accumulation size, 3) volumetric yield, and 4) reservoir engineering. Three of the four petroleum resource assessment methodologies were written as microcomputer systems, viz. TRIAGG for direct assessment, APRAS for accumulation size, and FASPU for reservoir engineering. A fourth microcomputer system termed PROBDIST supports the three assessment systems. The three assessment systems have different probability models but the same type of probabilistic method. The advantages of the analytic method are computational speed and flexibility, making it ideal for a microcomputer.
Performance Assessment in Serious Games: Compensating for the Effects of Randomness
ERIC Educational Resources Information Center
Westera, Wim
2016-01-01
This paper is about performance assessment in serious games. We conceive serious gaming as a process of player-led decision making. Starting from combinatorics and item-response theory, we provide an analytical model that makes explicit to what extent observed player performances (decisions) are blurred by chance processes (guessing behaviors). We…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Optis, Michael; Scott, George N.; Draxl, Caroline
The goal of this analysis was to assess the wind power forecast accuracy of the Vermont Weather Analytics Center (VTWAC) forecast system and to identify potential improvements to the forecasts. Based on the analysis at Georgia Mountain, the following recommendations for improving forecast performance were made: 1. Resolve the significant negative forecast bias in February-March 2017 (50% underprediction on average); 2. Improve the ability of the forecast model to capture the strong diurnal cycle of wind power; 3. Add the ability for the forecast model to assess internal wake loss, particularly at sites where strong diurnal shifts in wind direction are present. Data availability and quality limited the robustness of this forecast assessment. A more thorough analysis would be possible given a longer period of record for the data (at least one full year), detailed supervisory control and data acquisition data for each wind plant, and more detailed information on the forecast system input data and methodologies.
A model system to mimic environmentally active surface film roughness and hydrophobicity.
Grant, Jacob S; Shaw, Scott K
2017-10-01
This work presents the development and initial assessment of a laboratory platform for quantitative studies of model urban films. The platform consists of stearic acid and eicosane mixtures that are solution-deposited from hexanes onto smooth, solid substrates. We show that this model has distinctive capabilities to better mimic a naturally occurring film's morphology and hydrophobicity, two important parameters that have not previously been incorporated into model film systems. The physical and chemical properties of the model films are assessed using a variety of analytical instruments. The film thickness and roughness are probed via atomic force microscopy, while the film composition, wettability, and water uptake are analyzed by Fourier transform infrared spectroscopy, contact angle goniometry, and quartz crystal microbalance, respectively. Simulated environmental maturation is achieved by exposing the film to regulated amounts of UV/ozone. Oxidation of the film, monitored by the analytical techniques mentioned above, proceeds as expected to produce a useful model film system. Including variable roughness and tunable surface coverage results in several key advantages over prior model systems and will more accurately represent native urban film behavior. Copyright © 2017 Elsevier Ltd. All rights reserved.
Analytical and numerical analysis of frictional damage in quasi-brittle materials
NASA Astrophysics Data System (ADS)
Zhu, Q. Z.; Zhao, L. Y.; Shao, J. F.
2016-07-01
Frictional sliding and crack growth are the two main dissipation processes in quasi-brittle materials. Frictional sliding along closed cracks is the origin of macroscopic plastic deformation, while crack growth induces material damage. The main difficulty of modeling is to account for the inherent coupling between these two processes. Various models and associated numerical algorithms have been proposed, but there are so far no analytical solutions, even for simple loading paths, for the validation of such algorithms. In this paper, we first present a micro-mechanical model taking into account the damage-friction coupling for a large class of quasi-brittle materials. The model is formulated by combining a linear homogenization procedure with the Mori-Tanaka scheme and the irreversible thermodynamics framework. As an original contribution, a series of analytical solutions of stress-strain relations are developed for various loading paths. Based on the micro-mechanical model, two numerical integration algorithms are exploited. The first involves a coupled friction/damage correction scheme, which is consistent with the coupled nature of the constitutive model. The second contains a friction/damage decoupling scheme with two consecutive steps: the friction correction followed by the damage correction. With the analytical solutions as reference results, the two algorithms are assessed through a series of numerical tests. It is found that the decoupled correction scheme is efficient and guarantees systematic numerical convergence.
Evaluation of simplified stream-aquifer depletion models for water rights administration
Sophocleous, Marios; Koussis, Antonis; Martin, J.L.; Perkins, S.P.
1995-01-01
We assess the predictive accuracy of Glover's (1974) stream-aquifer analytical solutions, which are commonly used in administering water rights, and evaluate the impact of the assumed idealizations on administrative and management decisions. To achieve these objectives, we evaluate the predictive capabilities of the Glover stream-aquifer depletion model against the MODFLOW numerical standard, which, unlike the analytical model, can handle increasing hydrogeologic complexity. We rank-order and quantify the relative importance of the various assumptions on which the analytical model is based, the three most important being: (1) streambed clogging as quantified by streambed-aquifer hydraulic conductivity contrast; (2) degree of stream partial penetration; and (3) aquifer heterogeneity. These three factors relate directly to the multidimensional nature of the aquifer flow conditions. From these considerations, future efforts to reduce the uncertainty in stream depletion-related administrative decisions should primarily address these three factors in characterizing the stream-aquifer process. We also investigate the impact of progressively coarser model grid size on numerically estimating stream leakage and conclude that grid size effects are relatively minor. Therefore, when modeling is required, coarser model grids could be used thus minimizing the input data requirements.
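For reference, Glover's idealized solution has a compact closed form; the sketch below assumes the standard Glover-Balmer expression for a fully penetrating stream with no streambed clogging, and the parameter values are illustrative only.

```python
import numpy as np
from scipy.special import erfc

def glover_depletion_fraction(t, d, T, S):
    # Fraction of the pumping rate supplied by stream depletion at time t,
    # for a fully penetrating stream, homogeneous aquifer, and no streambed
    # clogging. t: time [d]; d: well-stream distance [m];
    # T: transmissivity [m^2/d]; S: storativity [-].
    return erfc(np.sqrt(d ** 2 * S / (4.0 * T * t)))

# Illustrative values: well 300 m from the stream, 100 days of pumping
print(glover_depletion_fraction(t=100.0, d=300.0, T=500.0, S=0.1))
```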
Integrated corridor management analysis, modeling and simulation (AMS) methodology.
DOT National Transportation Integrated Search
2008-03-01
This AMS Methodologies Document provides a discussion of potential ICM analytical approaches for the assessment of generic corridor operations. The AMS framework described in this report identifies strategies and procedures for tailoring AMS general ...
Veyrand, Julien; Marin-Kuan, Maricel; Bezencon, Claudine; Frank, Nancy; Guérin, Violaine; Koster, Sander; Latado, Hélia; Mollergues, Julie; Patin, Amaury; Piguet, Dominique; Serrant, Patrick; Varela, Jesus; Schilter, Benoît
2017-10-01
Food contact materials (FCM) contain chemicals which can migrate into food and result in human exposure. Although it is mandatory to ensure that migration does not endanger human health, there is still no consensus on how to pragmatically assess the safety of FCM, since traditional approaches would require extensive toxicological and analytical testing, which is expensive and time consuming. Recently, the combination of bioassays, analytical chemistry and risk assessment has been promoted as a new paradigm to identify toxicologically relevant molecules and address safety issues. However, there has been debate on the actual value of bioassays in that framework. In the present work, an FCM anticipated to release the endocrine-active chemical 4-nonylphenol (4NP) was used as a model. In a migration study, the leaching of 4NP was confirmed by LC-MS/MS and GC-MS. This was correlated with an increase in both estrogenic and anti-androgenic activities as measured with bioassays. A standard risk assessment indicated that, according to the food intake scenario applied, the measured level of 4NP was lower than, close to, or slightly above the acceptable daily intake. Altogether these results show that bioassays can reveal the presence of an endocrine-active chemical in a real-case FCM migration study, at levels relevant for safety assessment. This work also highlighted that bioactivity measured in a migrate does not necessarily represent a safety issue. In conclusion, together with analytics, bioassays contribute to identifying toxicologically relevant molecules leaching from FCM and enable improved safety assessment.
A Decision Analytic Approach to Exposure-Based Chemical Prioritization
Mitchell, Jade; Pabon, Nicolas; Collier, Zachary A.; Egeghy, Peter P.; Cohen-Hubal, Elaine; Linkov, Igor; Vallero, Daniel A.
2013-01-01
The manufacture of novel synthetic chemicals has increased in volume and variety, but often the environmental and health risks are not fully understood in terms of toxicity and, in particular, exposure. While efforts to assess risks have generally been effective when sufficient data are available, the hazard and exposure data necessary to assess risks adequately are unavailable for the vast majority of chemicals in commerce. The US Environmental Protection Agency has initiated the ExpoCast Program to develop tools for rapid chemical evaluation based on potential for exposure. In this context, a model is presented in which chemicals are evaluated based on inherent chemical properties and behaviorally-based usage characteristics over the chemical’s life cycle. These criteria are assessed and integrated within a decision analytic framework, facilitating rapid assessment and prioritization for future targeted testing and systems modeling. A case study outlines the prioritization process using 51 chemicals. The results show a preliminary relative ranking of chemicals based on exposure potential. The strength of this approach is the ability to integrate relevant statistical and mechanistic data with expert judgment, allowing for an initial tier assessment that can further inform targeted testing and risk management strategies. PMID:23940664
Chung, Chi-Jung; Kuo, Yu-Chen; Hsieh, Yun-Yu; Li, Tsai-Chung; Lin, Cheng-Chieh; Liang, Wen-Miin; Liao, Li-Na; Li, Chia-Ing; Lin, Hsueh-Chun
2017-11-01
This study applied open-source technology to establish a subject-enabled analytics model that can enhance measurement statistics of case studies with public health data in cloud computing. The infrastructure of the proposed model comprises three domains: 1) the health measurement data warehouse (HMDW) for the case study repository, 2) the self-developed modules of online health risk information statistics (HRIStat) for cloud computing, and 3) the prototype of a Web-based process automation system in statistics (PASIS) for the health risk assessment of case studies with subject-enabled evaluation. The system design employed freeware including Java applications, MySQL, and R packages to drive a health risk expert system (HRES). In the design, the HRIStat modules enforce the typical analytics methods for biomedical statistics, and the PASIS interfaces enable process automation of the HRES for cloud computing. The Web-based model supports two modes, step-by-step analysis and an auto-computing process, for preliminary evaluation and real-time computation, respectively. The proposed model was evaluated by recomputing prior research on the epidemiological measurement of diseases caused either by heavy metal exposures in the environment or by clinical complications in hospital. The simulation validity was confirmed against commercial statistics software. The model was installed on a stand-alone computer and on a cloud-server workstation to verify computing performance for a data amount of more than 230K sets; both setups reached an efficiency of about 10^5 sets per second. The Web-based PASIS interface can be used for cloud computing, and the HRIStat module can be flexibly expanded with advanced subjects for measurement statistics. The analytics procedure of the HRES prototype is capable of providing assessment criteria prior to estimating the potential risk to public health. Copyright © 2017 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Jamieson, Joan; Poonpon, Kornwipa
2013-01-01
Research and development of a new type of scoring rubric for the integrated speaking tasks of "TOEFL iBT"® are described. These "analytic rating guides" could be helpful if tasks modeled after those in TOEFL iBT were used for formative assessment, a purpose which is different from TOEFL iBT's primary use for admission…
Defining Higher-Order Turbulent Moment Closures with an Artificial Neural Network and Random Forest
NASA Astrophysics Data System (ADS)
McGibbon, J.; Bretherton, C. S.
2017-12-01
Unresolved turbulent advection and clouds must be parameterized in atmospheric models. Modern higher-order closure schemes depend on analytic moment closure assumptions that diagnose higher-order moments in terms of lower-order ones. These assumptions are then tested against higher-order moment relations derived from Large-Eddy Simulation (LES). However, these relations may not be neatly analytic in nature. Rather than rely on an analytic higher-order moment closure, can we use machine learning on LES data itself to define a higher-order moment closure? We assess the ability of a deep artificial neural network (NN) and random forest (RF) to perform this task using a set of observationally-based LES runs from the MAGIC field campaign. By training on a subset of 12 simulations and testing on the remaining simulations, we avoid over-fitting the training data. Performance of the NN and RF will be assessed and compared to the Analytic Double Gaussian 1 (ADG1) closure assumed by Cloudy Layers Unified By Binormals (CLUBB), a higher-order turbulence closure currently used in the Community Atmosphere Model (CAM). We will show that the RF outperforms the NN and the ADG1 closure for the MAGIC cases within this diagnostic framework. Progress and challenges in using a diagnostic machine learning closure within a prognostic cloud and turbulence parameterization will also be discussed.
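The closure-learning experiment has a natural minimal analogue: fit a random forest and a neural network to map lower-order moments to a higher-order moment and compare held-out skill. The sketch below uses synthetic stand-in data, not the MAGIC LES output, and scikit-learn defaults rather than the authors' configurations.

```python
# Minimal analogue of the closure-learning experiment: map lower-order moments
# to a higher-order moment with a random forest and a neural network, then
# compare held-out skill. Features and target are synthetic stand-ins, not
# the MAGIC LES variables.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))                        # stand-ins for lower-order moments
y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=5000)   # synthetic higher-order moment

# Hold out whole blocks rather than random samples, loosely mirroring the
# train-on-12-simulations / test-on-the-rest strategy described above.
X_train, X_test, y_train, y_test = X[:4000], X[4000:], y[:4000], y[4000:]

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                  random_state=0).fit(X_train, y_train)
print("RF R^2:", rf.score(X_test, y_test))
print("NN R^2:", nn.score(X_test, y_test))
```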
FASP, an analytic resource appraisal program for petroleum play analysis
Crovelli, R.A.; Balay, R.H.
1986-01-01
An analytic probabilistic methodology for resource appraisal of undiscovered oil and gas resources in play analysis is presented in a FORTRAN program termed FASP. This play-analysis methodology is a geostochastic system for petroleum resource appraisal in explored as well as frontier areas. An established geologic model considers both the uncertainty of the presence of the assessed hydrocarbon and its amount if present. The program FASP produces resource estimates of crude oil, nonassociated gas, dissolved gas, and gas for a geologic play in terms of probability distributions. The analytic method is based upon conditional probability theory and the laws of expectation and variance. © 1986.
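The conditional-probability core of such an appraisal can be written down directly. The sketch below assumes, for illustration, a Bernoulli presence indicator and a lognormal size distribution; the numbers are not calibrated to any real play.

```python
# Conditional-probability core of an analytic appraisal: unconditional resource
# R = B * S, with B ~ Bernoulli(p) for presence and S the size if present
# (lognormal here). Mean and variance follow from the laws of expectation and
# variance cited above. All numbers are illustrative, not a calibrated play.
import math

p = 0.35              # marginal probability that the hydrocarbon is present
mu, sigma = 2.0, 0.8  # lognormal parameters of size, given presence

mean_size = math.exp(mu + sigma**2 / 2)                          # E[S | present]
var_size = (math.exp(sigma**2) - 1) * math.exp(2*mu + sigma**2)  # Var[S | present]

mean_resource = p * mean_size                             # E[R] = p * E[S]
var_resource = p * var_size + p * (1 - p) * mean_size**2  # law of total variance
print(f"E[R] = {mean_resource:.2f}, Var[R] = {var_resource:.2f}")
```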
Warfighter decision making performance analysis as an investment priority driver
NASA Astrophysics Data System (ADS)
Thornley, David J.; Dean, David F.; Kirk, James C.
2010-04-01
Estimating the relative value of alternative tactics, techniques and procedures (TTP) and information systems requires measures of the costs and benefits of each, and methods for combining and comparing those measures. The NATO Code of Best Practice for Command and Control Assessment explains that decision making quality would ideally be best assessed on outcomes. Lessons learned in practice can be assessed statistically to support this, but experimentation with alternative measures in live conflict is undesirable. To this end, the development of practical experimentation to parameterize effective constructive simulation and analytic modelling for system utility prediction is desirable. The Land Battlespace Systems Department of Dstl has modeled human development of situational awareness to support constructive simulation by empirically discovering how evidence is weighed according to circumstance, personality, training and briefing. The human decision maker (DM) provides the backbone of the information processing activity associated with military engagements because of the inherent uncertainty associated with combat operations. To represent this process, and thereby assess equipment and non-technological interventions such as training and TTPs, we are developing modular timed analytic stochastic model components and instruments as part of a framework to support quantitative assessment of intelligence production and consumption methods in a human decision-maker-centric mission space. In this paper, we formulate an abstraction of the human intelligence fusion process from the Defence Science and Technology Laboratory's (Dstl's) INCIDER model to include in our framework, and synthesize relevant cost and benefit characteristics.
Alcohol expectancy multiaxial assessment: a memory network-based approach.
Goldman, Mark S; Darkes, Jack
2004-03-01
Despite several decades of activity, alcohol expectancy research has yet to merge measurement approaches with developing memory theory. This article offers an expectancy assessment approach built on a conceptualization of expectancy as an information processing network. The authors began with multidimensional scaling models of expectancy space, which served as heuristics to suggest confirmatory factor analytic dimensional models for entry into covariance structure predictive models. It is argued that this approach permits a relatively thorough assessment of the broad range of potential expectancy dimensions in a format that is very flexible in terms of instrument length and specificity versus breadth of focus. ((c) 2004 APA, all rights reserved)
Modeling Selection and Extinction Mechanisms of Biological Systems
NASA Astrophysics Data System (ADS)
Amirjanov, Adil
In this paper, the behavior of a genetic algorithm is modeled to enhance its applicability as a modeling tool for biological systems. A new descriptive model of the selection mechanism is introduced which operates on a portion of the individuals in the population. The extinction and recolonization mechanism is modeled, and solving the dynamics analytically shows that genetic drift in a population with extinction/recolonization is doubled. A mathematical analysis of the interaction between the selection and extinction/recolonization processes is carried out to assess the dynamics of the macroscopic statistical properties of the population. Computer simulations confirm that the theoretical predictions of the described models are good approximations. A mathematical model of GA dynamics describing anti-predator vigilance in an animal group was also examined against a known analytical solution of the problem, and showed good agreement in identifying the evolutionarily stable strategies.
Qiu, Chenchen; Li, Yande
2017-01-01
China is a country with vast territory, but economic development and population growth have reduced the usable land resources in recent years. Therefore, reclamation by pumping and filling is carried out in the eastern coastal regions of China in order to meet the needs of urbanization, and large areas of reclaimed land require rapid drainage consolidation treatment. Building on past research on improving the treatment efficiency of soft clay using vacuum preloading combined with electro-osmosis, a two-dimensional drainage plane model was proposed according to the Terzaghi and Esrig consolidation theory; however, an analytical solution for this two-dimensional plane model had not previously been derived, so existing analytical solutions could not provide a thorough theoretical analysis of practical engineering cases or offer relevant guidance. Considering the smear effect and the rectangular arrangement pattern, an analytical solution is derived to describe the behavior of pore water and the consolidation process when using EKG (electro-kinetic geosynthetics) materials, which provide drainage, electrical conduction and corrosion resistance. Comparison with test results is carried out to verify the analytical solution. It is found that the measured value is larger than the applied vacuum degree because of the stacking effect of vacuum preloading and electro-osmosis, and the trends of the mean measured and mean analytical values are comparable. Therefore, the consolidation model can accurately assess the change in pore-water pressure and the consolidation process during vacuum preloading combined with electro-osmosis. PMID:28771496
Baciocchi, Renato; Berardi, Simona; Verginelli, Iason
2010-09-15
Clean-up of contaminated sites is usually based on a risk-based approach for the definition of the remediation goals, which relies on the well-known ASTM-RBCA standard procedure. In this procedure, migration of contaminants is described through simple analytical models and the source contaminants' concentration is assumed constant throughout the entire exposure period, i.e. 25-30 years. The latter assumption may often prove over-protective of human health, leading to unrealistically low remediation goals. The aim of this work is to propose an alternative model taking into account source depletion, while keeping the original simplicity and analytical form of the ASTM-RBCA approach. The results obtained by the application of this model are compared with those provided by the traditional ASTM-RBCA approach, by a model based on the source depletion algorithm of the RBCA ToolKit software and by a numerical model, allowing assessment of its feasibility for inclusion in risk analysis procedures. The results discussed in this work are limited to on-site exposure to contaminated water by ingestion, but the proposed approach can be extended to other exposure pathways. Copyright 2010 Elsevier B.V. All rights reserved.
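The effect of relaxing the constant-source assumption is easy to see with a first-order depletion term. The sketch below assumes an exponential source decay purely for illustration; the paper derives its depletion algorithm from a mass balance rather than from a fixed rate like this one.

```python
# Why the constant-source assumption is conservative: with first-order source
# depletion, C(t) = C0 * exp(-k t), the time-averaged concentration over the
# exposure window is well below C0. The decay constant here is hypothetical;
# the paper derives depletion from a mass balance instead.
import math

C0 = 1.0   # initial source concentration (normalized)
k = 0.10   # first-order depletion rate, 1/year (illustrative)
T = 25.0   # exposure duration, years

avg_depleting = C0 * (1 - math.exp(-k * T)) / (k * T)  # (1/T) * integral of C(t)
print(f"constant-source average:  {C0:.2f}")
print(f"depleting-source average: {avg_depleting:.2f}")
```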
A modeling approach to compare ΣPCB concentrations between congener-specific analyses
Gibson, Polly P.; Mills, Marc A.; Kraus, Johanna M.; Walters, David M.
2017-01-01
Changes in analytical methods over time pose problems for assessing long-term trends in environmental contamination by polychlorinated biphenyls (PCBs). Congener-specific analyses vary widely in the number and identity of the 209 distinct PCB chemical configurations (congeners) that are quantified, leading to inconsistencies among summed PCB concentrations (ΣPCB) reported by different studies. Here we present a modeling approach using linear regression to compare ΣPCB concentrations derived from different congener-specific analyses measuring different co-eluting groups. The approach can be used to develop a specific conversion model between any two sets of congener-specific analytical data from similar samples (similar matrix and geographic origin). We demonstrate the method by developing a conversion model for an example data set that includes data from two different analytical methods, a low resolution method quantifying 119 congeners and a high resolution method quantifying all 209 congeners. We used the model to show that the 119-congener set captured most (93%) of the total PCB concentration (i.e., Σ209PCB) in sediment and biological samples. ΣPCB concentrations estimated using the model closely matched measured values (mean relative percent difference = 9.6). General applications of the modeling approach include (a) generating comparable ΣPCB concentrations for samples that were analyzed for different congener sets; and (b) estimating the proportional contribution of different congener sets to ΣPCB. This approach may be especially valuable for enabling comparison of long-term remediation monitoring results even as analytical methods change over time.
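The conversion model itself is an ordinary least-squares fit between paired sums, which is easy to sketch. The data below are synthetic stand-ins: the 93% capture fraction is borrowed from the abstract, and everything else is invented for illustration.

```python
# The conversion model is an ordinary least-squares fit between paired sums.
# Data are synthetic stand-ins: the partial congener set is assumed to capture
# about 93% of the full sum (the fraction reported above); noise is invented.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
sum_209 = rng.lognormal(mean=3.0, sigma=0.5, size=40)       # full 209-congener sums
sum_119 = 0.93 * sum_209 * rng.normal(1.0, 0.05, size=40)   # partial 119-congener sums

model = LinearRegression().fit(sum_119.reshape(-1, 1), sum_209)  # conversion model
predicted = model.predict(sum_119.reshape(-1, 1))
rpd = 100 * np.abs(predicted - sum_209) / ((predicted + sum_209) / 2)
print(f"slope = {model.coef_[0]:.2f}, mean RPD = {rpd.mean():.1f}%")
```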
The Effects of Measurement Error on Statistical Models for Analyzing Change. Final Report.
ERIC Educational Resources Information Center
Dunivant, Noel
The results of six major projects are discussed including a comprehensive mathematical and statistical analysis of the problems caused by errors of measurement in linear models for assessing change. In a general matrix representation of the problem, several new analytic results are proved concerning the parameters which affect bias in…
Flat-plate solar array project. Volume 8: Project analysis and integration
NASA Technical Reports Server (NTRS)
Mcguire, P.; Henry, P.
1986-01-01
Project Analysis and Integration (PA&I) performed planning and integration activities to support management of the various Flat-Plate Solar Array (FSA) Project R&D activities. Technical and economic goals were established by PA&I for each R&D task within the project to coordinate the thrust toward the National Photovoltaic Program goals. A sophisticated computer modeling capability was developed to assess technical progress toward meeting the economic goals. These models included a manufacturing facility simulation, a photovoltaic power station simulation and a decision aid model incorporating uncertainty. This family of analysis tools was used to track the progress of the technology and to explore the effects of alternative technical paths. Numerous studies conducted by PA&I signaled the achievement of milestones or were the foundation of major FSA project and national program decisions. The most important PA&I activities during the project history are summarized. The PA&I planning function and its relation to project direction are discussed, and important analytical models developed by PA&I for its analysis and assessment activities are reviewed.
Peters, J.G.
1987-01-01
The Indiana Department of Natural Resources (IDNR) is developing water-management policies designed to assess the effects of irrigation and other water uses on water supply in the basin. In support of this effort, the USGS, in cooperation with IDNR, began a study to evaluate appropriate methods for analyzing the effects of pumping on ground-water levels and streamflow in the basin's glacial aquifer systems. Four analytical models describe drawdown for a nonleaky, confined aquifer and fully penetrating well; a leaky, confined aquifer and fully penetrating well; a leaky, confined aquifer and partially penetrating well; and an unconfined aquifer and partially penetrating well. Analytical equations, simplifying assumptions, and methods of application are described for each model. In addition to these four models, several other analytical models were used to predict the effects of ground-water pumping on water levels in the aquifer and on streamflow in local areas with up to two pumping wells. Analytical models for a variety of other hydrogeologic conditions are cited. A digital ground-water flow model was used to describe how a numerical model can be applied to a glacial aquifer system. The numerical model was used to predict the effects of six pumping plans in a 46.5-sq-mi area with as many as 150 wells. Water budgets for the six pumping plans were used to estimate the effect of pumping on streamflow reduction. Results of the analytical and numerical models indicate that, in general, the glacial aquifers in the basin are highly permeable. Radial hydraulic conductivity calculated by the analytical models ranged from 280 to 600 ft/day, compared to 210 and 360 ft/day used in the numerical model. Maximum seasonal pumping for irrigation produced a maximum calculated drawdown of only one-fourth of the available drawdown and reduced streamflow by as much as 21%. Analytical models are useful for estimating aquifer properties and predicting local effects of pumping in areas with simple lithology and boundary conditions and with few pumping wells. Numerical models are useful in regional areas with complex hydrogeology and many pumping wells, and provide detailed water budgets useful for estimating the sources of water in pumping simulations. Numerical models are also useful for constructing flow nets. The choice of which type of model to use is also based on the nature and scope of the questions to be answered and on the degree of accuracy required. (Author's abstract)
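The first of the four analytical models named above (nonleaky confined aquifer, fully penetrating well) is the classical Theis solution, which is compact enough to sketch. Parameter values below are illustrative, not from the basin study.

```python
# Sketch of the first analytical model named above: the Theis solution for a
# nonleaky confined aquifer and fully penetrating well,
#   s = (Q / (4 pi T)) * W(u),  u = r^2 S / (4 T t),
# where W(u) is the exponential-integral well function. Values are illustrative.
import math
from scipy.special import exp1  # W(u) = E1(u)

Q = 500.0   # pumping rate, m^3/day
T = 400.0   # transmissivity, m^2/day
S = 2e-4    # storativity (dimensionless)
r = 100.0   # radial distance from the well, m
t = 30.0    # time since pumping began, days

u = r**2 * S / (4 * T * t)
drawdown = Q / (4 * math.pi * T) * exp1(u)
print(f"u = {u:.2e}, drawdown = {drawdown:.2f} m")
```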
Disturbance characteristics of half-selected cells in a cross-point resistive switching memory array
NASA Astrophysics Data System (ADS)
Chen, Zhe; Li, Haitong; Chen, Hong-Yu; Chen, Bing; Liu, Rui; Huang, Peng; Zhang, Feifei; Jiang, Zizhen; Ye, Hongfei; Gao, Bin; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng; Wong, H.-S. Philip; Yu, Shimeng
2016-05-01
Disturbance characteristics of cross-point resistive random access memory (RRAM) arrays are comprehensively studied in this paper. An analytical model based on physical understanding is developed to quantify the number of pulses (#Pulse) a cell can bear before disturbance occurs under various sub-switching voltage stresses. An evaluation methodology is proposed to assess the disturb behavior of half-selected (HS) cells in cross-point RRAM arrays by combining the analytical model and SPICE simulation. The characteristics of cross-point RRAM arrays, such as energy consumption, reliable operating cycles and total error bits, are evaluated with this methodology. A possible solution to mitigate disturbance is proposed.
ERIC Educational Resources Information Center
Li, Tiandong
2012-01-01
In large-scale assessments, such as the National Assessment of Educational Progress (NAEP), plausible values based on Multiple Imputations (MI) have been used to estimate population characteristics for latent constructs under complex sample designs. Mislevy (1991) derived a closed-form analytic solution for a fixed-effect model in creating…
Two-factor theory – at the intersection of health care management and patient satisfaction
Bohm, Josef
2012-01-01
Using data obtained from the 2004 Joint Canadian/United States Survey of Health, an analytic model using principles derived from Herzberg’s motivational hygiene theory was developed for evaluating patient satisfaction with health care. The analysis sought to determine whether survey variables associated with consumer satisfaction act as Hertzberg factors and contribute to survey participants’ self-reported levels of health care satisfaction. To validate the technique, data from the survey were analyzed using logistic regression methods and then compared with results obtained from the two-factor model. The findings indicate a high degree of correlation between the two methods. The two-factor analytical methodology offers advantages due to its ability to identify whether a factor assumes a motivational or hygienic role and assesses the influence of a factor within select populations. Its ease of use makes this methodology well suited for assessment of multidimensional variables. PMID:23055755
An analytical solution for predicting the transient seepage from a subsurface drainage system
NASA Astrophysics Data System (ADS)
Xin, Pei; Dan, Han-Cheng; Zhou, Tingzhang; Lu, Chunhui; Kong, Jun; Li, Ling
2016-05-01
Subsurface drainage systems have been widely used to deal with soil salinization and waterlogging problems around the world. In this paper, a mathematical model was introduced to quantify the transient behavior of the groundwater table and the seepage from a subsurface drainage system. Based on the assumption of a hydrostatic pressure distribution, the model considered the pore-water flow in both the phreatic and vadose soil zones. An approximate analytical solution for the model was derived to quantify the drainage of soils which were initially water-saturated. The analytical solution was validated against laboratory experiments and a 2-D Richards equation-based model, and found to predict well the transient water seepage from the subsurface drainage system. A saturated flow-based model was also tested and found to over-predict the time required for drainage and the total water seepage by nearly one order of magnitude, in comparison with the experimental results and the present analytical solution. During drainage, a vadose zone with a significant water storage capacity developed above the phreatic surface. A considerable amount of water still remained in the vadose zone at the steady state with the water table situated at the drain bottom. Sensitivity analyses demonstrated that effects of the vadose zone were intensified with an increased thickness of capillary fringe, capillary rise and/or burying depth of drains, in terms of the required drainage time and total water seepage. The analytical solution provides guidance for assessing the capillary effects on the effectiveness and efficiency of subsurface drainage systems for combating soil salinization and waterlogging problems.
NASA Technical Reports Server (NTRS)
Saravanos, Dimitris A.
1997-01-01
The development of aeropropulsion components that incorporate "smart" composite laminates with embedded piezoelectric actuators and sensors is expected to ameliorate critical problems in advanced aircraft engines related to vibration, noise emission, and thermal stability. To facilitate the analytical needs of this effort, the NASA Lewis Research Center has developed mechanics and multidisciplinary computational models to analyze the complicated electromechanical behavior of realistic smart-structure configurations operating in combined mechanical, thermal, and acoustic environments. The models have been developed to accommodate the particular geometries, environments, and technical challenges encountered in advanced aircraft engines, yet their unique analytical features are expected to facilitate application of this new technology in a variety of commercial applications.
An overview of selected NASP aeroelastic studies at the NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Spain, Charles V.; Soistmann, David L.; Parker, Ellen C.; Gibbons, Michael D.; Gilbert, Michael G.
1990-01-01
Following an initial discussion of the NASP flight environment, the results of recent aeroelastic testing of NASP-type highly swept delta-wing models in Langley's Transonic Dynamics Tunnel (TDT) are summarized. Subsonic and transonic flutter characteristics of a variety of these models are described, and several analytical codes used to predict flutter of these models are evaluated. These codes generally provide good, but conservative predictions of subsonic and transonic flutter. Also, test results are presented on a nonlinear transonic phenomenon known as aileron buzz which occurred in the wind tunnel on highly swept delta wings with full-span ailerons. An analytical procedure which assesses the effects of hypersonic heating on aeroelastic instabilities (aerothermoelasticity) is also described. This procedure accurately predicted flutter of a heated aluminum wing on which experimental data exist. Results are presented on the application of this method to calculate the flutter characteristics of a finite-element model of a generic NASP configuration. Finally, it is demonstrated analytically that active controls can be employed to improve the aeroelastic stability and ride quality of a generic NASP vehicle flying at hypersonic speeds.
Analytical method of waste allocation in waste management systems: Concept, method and case study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergeron, Francis C., E-mail: francis.b.c@videotron.ca
Waste is no longer merely a rejected item to be disposed of but increasingly a secondary resource to be exploited, which influences waste allocation among treatment operations in a waste management (WM) system. The aim of this methodological paper is to present a new method for the assessment of the WM system, the “analytical method of the waste allocation process” (AMWAP), based on the concept of the “waste allocation process” defined as the aggregation of all processes of apportioning waste among alternative waste treatment operations inside or outside the spatial borders of a WM system. AMWAP contains a conceptual framework and an analytical approach. The conceptual framework includes, firstly, a descriptive model that focuses on the description and classification of the WM system. It includes, secondly, an explanatory model that serves to explain and to predict the operation of the WM system. The analytical approach consists of a step-by-step analysis for the empirical implementation of the conceptual framework. With its multiple purposes, AMWAP provides an innovative and objective modular method to analyse a WM system, which may be integrated in the framework of impact assessment methods and environmental systems analysis tools. Its originality lies in the interdisciplinary analysis of the waste allocation process used to develop the conceptual framework. AMWAP is applied in the framework of an illustrative case study on the household WM system of Geneva (Switzerland), demonstrating that the method provides in-depth and contextual knowledge of WM. - Highlights: • The study presents a new analytical method based on the waste allocation process. • The method provides an in-depth and contextual knowledge of the waste management system. • The paper provides a reproducible procedure for professionals, experts and academics. • It may be integrated into impact assessment or environmental system analysis tools. • An illustrative case study is provided based on household waste management in Geneva.
TU-F-17A-03: An Analytical Respiratory Perturbation Model for Lung Motion Prediction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, G; Yuan, A; Wei, J
2014-06-15
Purpose: Breathing irregularity is common, causing unreliable prediction of tumor motion for correlation-based surrogates. Both tidal volume (TV) and breathing pattern (BP=ΔVthorax/TV, where TV=ΔVthorax+ΔVabdomen) affect lung motion in the anterior-posterior and superior-inferior directions. We developed a novel respiratory motion perturbation (RMP) model in analytical form to account for changes in TV and BP in motion prediction from simulation to treatment. Methods: The RMP model is an analytical function of patient-specific anatomic and physiologic parameters. It contains a base-motion trajectory d(x,y,z) derived from a 4-dimensional computed tomography (4DCT) at simulation and a perturbation term Δd(ΔTV,ΔBP) accounting for deviation at treatment from simulation. The perturbation is dependent on tumor-specific location and patient-specific anatomy. Eleven patients with simulation and treatment 4DCT images were used to assess the RMP method in motion prediction from 4DCT1 to 4DCT2, and vice versa. For each patient, ten motion trajectories of corresponding points in the lower lobes were measured in both 4DCTs: one served as the base-motion trajectory and the other as the ground truth for comparison. In total, 220 motion trajectory predictions were assessed. The motion discrepancy between the two 4DCTs for each patient served as a control. An established 5D motion model was used for comparison. Results: The average absolute error of the RMP model prediction in the superior-inferior direction is 1.6±1.8 mm, similar to 1.7±1.6 mm from the 5D model (p=0.98). Some uncertainty is associated with limited spatial resolution (2.5 mm slice thickness) and temporal resolution (10 phases). The non-corrected motion discrepancy between the two 4DCTs is 2.6±2.7 mm, with a maximum of ±20 mm, so correction is necessary (p=0.01). Conclusion: The analytical motion model predicts lung motion with accuracy similar to the 5D model. The analytical model is based on physical relationships, requires no training, and is therefore potentially more resilient to breathing irregularities. On-going investigation introduces airflow into the RMP model for improvement. This research is in part supported by NIH (U54CA137788/132378). AY would like to thank the MSKCC summer medical student research program supported by the National Cancer Institute and hosted by the Department of Medical Physics at MSKCC.
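The structure of the RMP prediction, a base trajectory plus a perturbation term, can be sketched compactly. The linear form and coefficients below are assumptions for illustration; the paper's perturbation function is anatomy- and tumor-location-specific.

```python
# Illustrative sketch of the RMP structure: predicted motion = base trajectory
# from the simulation 4DCT + perturbation driven by changes in tidal volume
# (dTV) and breathing pattern (dBP) at treatment. The linear form and the
# coefficients a, b are hypothetical; the paper's analytical perturbation is
# anatomy- and tumor-location-specific.
import numpy as np

d_base = np.array([0.0, 2.0, 5.0, 8.0, 5.0, 2.0])  # SI motion over breathing phases, mm

def predict_motion(d_base, dTV, dBP, a=4.0, b=2.0):
    # delta_d = a*dTV + b*dBP, added uniformly to every phase (an assumption).
    return d_base + a * dTV + b * dBP

print(predict_motion(d_base, dTV=0.2, dBP=-0.1))  # perturbed trajectory, mm
```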
NASA Astrophysics Data System (ADS)
Parinov, A. V.; Korotkikh, L. P.; Desyatov, D. B.; Stepanov, L. V.
2018-03-01
The unique information-processing mechanisms of special-purpose infocommunication systems, together with the increased interest of intruders, make the problems associated with protecting these systems increasingly relevant. The paper considers the construction of risk models for violations of the relevance and value of information in special-purpose infocommunication systems. Special attention is paid to the connection between the relevance and the value of the information produced by the operation of such systems. Analytical expressions for the risk and damage functions over time are obtained, which can serve as a mathematical basis for risk assessment. Further, an analytical expression is obtained for assessing the chance of obtaining up-to-date information from an infocommunication system up to the time its information quality is violated; this expression can be used to calculate the effectiveness of a special-purpose infocommunication system.
Assessment of PWR Steam Generator modelling in RELAP5/MOD2. International Agreement Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Putney, J.M.; Preece, R.J.
1993-06-01
An assessment of Steam Generator (SG) modelling in the PWR thermal-hydraulic code RELAP5/MOD2 is presented. The assessment is based on a review of code assessment calculations performed in the UK and elsewhere, detailed calculations against a series of commissioning tests carried out on the Wolf Creek PWR and analytical investigations of the phenomena involved in normal and abnormal SG operation. A number of modelling deficiencies are identified and their implications for PWR safety analysis are discussed -- including methods for compensating for the deficiencies through changes to the input deck. Consideration is also given as to whether the deficiencies will still be present in the successor code RELAP5/MOD3.
Alfredsson, Jayne; Plichart, Patrick; Zary, Nabil
2012-01-01
Research on computer-supported scoring of assessments in health care education has mainly focused on automated scoring. Little attention has been given to how informatics can support the currently predominant human-based grading approach. This paper reports steps taken to develop a model for a computer-supported scoring process that focuses on optimizing a task previously undertaken without computer support. The model was also implemented in the open source assessment platform TAO in order to study its benefits. The ability to score test takers anonymously, analytics on grader reliability and a more time-efficient process are examples of the observed benefits. Computer-supported scoring can thus increase the quality of assessment results.
Assessing data and modeling needs for urban transport : an Australian perspective
DOT National Transportation Integrated Search
2000-04-01
Managing the transport assets of an urban economy and ensuring that change is in accordance with suitable performance measures requires continuing improvement in analytical power and empirical information. One crucial input for improving planning and...
Zechmeister-Koss, Ingrid; Schnell-Inderst, Petra; Zauner, Günther
2014-04-01
An increasing number of evidence sources are relevant for populating decision analytic models. What is needed is detailed methodological advice on which type of data is to be used for what type of model parameter. We aim to identify standards in health technology assessment manuals and economic (modeling) guidelines on appropriate evidence sources and on the role different types of data play within a model. Documents were identified via a call among members of the International Network of Agencies for Health Technology Assessment and by hand search. We included documents from Europe, the United States, Canada, Australia, and New Zealand as well as transnational guidelines written in English or German. We systematically summarized in a narrative manner information on appropriate evidence sources for model parameters, their advantages and limitations, data identification methods, and data quality issues. A large variety of evidence sources for populating models are mentioned in the 28 documents included. They comprise research- and non-research-based sources. Valid and less appropriate sources are identified for informing different types of model parameters, such as clinical effect size, natural history of disease, resource use, unit costs, and health state utility values. Guidelines do not provide structured and detailed advice on this issue. The article does not include information from guidelines in languages other than English or German, and the information is not tailored to specific modeling techniques. The usability of guidelines and manuals for modeling could be improved by addressing the issue of evidence sources in a more structured and comprehensive format.
Uncertainty in the Bayesian meta-analysis of normally distributed surrogate endpoints
Thompson, John R; Spata, Enti; Abrams, Keith R
2015-01-01
We investigate the effect of the choice of parameterisation of meta-analytic models and related uncertainty on the validation of surrogate endpoints. Different meta-analytical approaches take into account different levels of uncertainty which may impact on the accuracy of the predictions of treatment effect on the target outcome from the treatment effect on a surrogate endpoint obtained from these models. A range of Bayesian as well as frequentist meta-analytical methods are implemented using illustrative examples in relapsing–remitting multiple sclerosis, where the treatment effect on disability worsening is the primary outcome of interest in healthcare evaluation, while the effect on relapse rate is considered as a potential surrogate to the effect on disability progression, and in gastric cancer, where the disease-free survival has been shown to be a good surrogate endpoint to the overall survival. Sensitivity analysis was carried out to assess the impact of distributional assumptions on the predictions. Also, sensitivity to modelling assumptions and performance of the models were investigated by simulation. Although different methods can predict mean true outcome almost equally well, inclusion of uncertainty around all relevant parameters of the model may lead to less certain and hence more conservative predictions. When investigating endpoints as candidate surrogate outcomes, a careful choice of the meta-analytical approach has to be made. Models underestimating the uncertainty of available evidence may lead to overoptimistic predictions which can then have an effect on decisions made based on such predictions. PMID:26271918
Durning, Steven J; Graner, John; Artino, Anthony R; Pangaro, Louis N; Beckman, Thomas; Holmboe, Eric; Oakes, Terrance; Roy, Michael; Riedy, Gerard; Capaldi, Vincent; Walter, Robert; van der Vleuten, Cees; Schuwirth, Lambert
2012-09-01
Clinical reasoning is essential to medical practice, but because it entails internal mental processes, it is difficult to assess. Functional magnetic resonance imaging (fMRI) and think-aloud protocols may improve understanding of clinical reasoning, as these methods can more directly assess these processes. The objective of our study was to use a combination of fMRI and think-aloud procedures to examine fMRI correlates of a leading theoretical model of clinical reasoning based on experimental findings to date: analytic (i.e., actively comparing and contrasting diagnostic entities) and nonanalytic (i.e., pattern recognition) reasoning. We hypothesized that there would be functional neuroimaging differences between analytic and nonanalytic reasoning. Seventeen board-certified experts in internal medicine answered and reflected on validated U.S. Medical Licensing Exam and American Board of Internal Medicine multiple-choice questions (easy and difficult) during an fMRI scan. This procedure was followed by completion of a formal think-aloud procedure. fMRI findings provide some support for the presence of analytic and nonanalytic reasoning systems. Statistically significant activation of the prefrontal cortex distinguished answering incorrectly versus correctly (p < 0.01), whereas activation of the precuneus and midtemporal gyrus distinguished not guessing from guessing (p < 0.01). We found limited fMRI evidence to support analytic and nonanalytic reasoning theory, as our results indicate functional differences with correct vs. incorrect answers and guessing vs. not guessing. However, our findings did not suggest one consistent fMRI activation pattern of internal medicine expertise. This model of employing fMRI correlates offers opportunities to enhance our understanding of theory, as well as to improve our teaching and assessment of clinical reasoning, a key outcome of medical education.
Application of capability indices and control charts in the analytical method control strategy.
Oliva, Alexis; Llabres Martinez, Matías
2017-08-01
In this study, we assessed the usefulness of control charts in combination with the process capability indices Cpm and Cpk in the control strategy of an analytical method. The traditional X-chart and moving range chart were used to monitor the analytical method over a 2-year period. The results confirmed that the analytical method is in-control and stable. Different criteria were used to establish the specification limits (i.e. analyst requirements) for fixed method performance (i.e. method requirements). If the specification limits and control limits are equal in breadth, the method can be considered "capable" (Cpm = 1), but it does not satisfy the minimum method capability requirements proposed by Pearn and Shu (2003). Similar results were obtained using the Cpk index. The method capability was also assessed as a function of method performance for fixed analyst requirements. The results indicate that the method does not meet the requirements of the analytical target approach. Real data from a size-exclusion chromatography (SEC) method with light-scattering detection were used as a model case, and previously published data were used to illustrate the applicability of the proposed approach. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
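Both indices have standard closed forms, so the assessment step can be sketched directly. The specification limits, target, and assay values below are invented for illustration.

```python
# Sketch of the two capability indices for an in-control analytical method.
# Specification limits, target, and assay results are invented for illustration.
import numpy as np

x = np.array([99.8, 100.1, 100.4, 99.9, 100.2, 100.0, 99.7, 100.3])  # assay results, %
LSL, USL, target = 98.0, 102.0, 100.0

mu, sigma = x.mean(), x.std(ddof=1)
cpk = min(USL - mu, mu - LSL) / (3 * sigma)                      # location-sensitive
cpm = (USL - LSL) / (6 * np.sqrt(sigma**2 + (mu - target)**2))   # Taguchi index
print(f"Cpk = {cpk:.2f}, Cpm = {cpm:.2f}")
```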
Tshitenge, Dieudonné Tshitenge; Ioset, Karine Ndjoko; Lami, José Nzunzu; Ndelo-di-Phanzu, Josaphat; Mufusama, Jean-Pierre Koy Sita; Bringmann, Gerhard
2016-04-01
Herbal medicines are the most widely used type of medicine globally. Their high cultural acceptability rests on the safety and efficacy experienced over centuries of use. Many of them remain phytochemically under-investigated and are used without standardization or quality control. Choosing SIROP KILMA, an authorized Congolese antimalarial phytomedicine, as a model case, our study describes an interdisciplinary approach for the rational quality assessment of herbal drugs in general. It combines an authentication step for the herbal remedy prior to any fingerprinting, the isolation of the major constituents, the development and validation of an HPLC-DAD analytical method with internal markers, and the application of the method to several batches of the herbal medicine (here KILMA), thus permitting the establishment of a quantitative fingerprint. From the constitutive plants of KILMA, acteoside, isoacteoside, stachannin A, and pectolinarigenin-7-O-glucoside were isolated, and acteoside was used as the prime marker for the validation of the analytical method. This study contributes to the efforts of the WHO to establish standards enabling the analytical evaluation of herbal materials. Moreover, the paper is the first phytochemical and analytical report on a marketed Congolese phytomedicine. Copyright © 2016 Elsevier B.V. All rights reserved.
Integrating Computer Content into Social Work Curricula: A Model for Planning
ERIC Educational Resources Information Center
Beaulaurier, Richard L.
2005-01-01
While recent CSWE standards focus on the need for including more relevant technological content in social work curricula, they do not offer guidance regarding how it is to be assessed and selected. Social work educators are in need of an analytic model of computerization to help them understand which technologies are most appropriate and relevant…
USDA-ARS?s Scientific Manuscript database
Non-point source pollution from agricultural fields is a critical problem associated with water quality impairment in the USA and a low-oxygen environment in the Gulf of Mexico. The use, development and enhancement of qualitative and quantitative models or tools for assessing agricultural runoff qua...
Tomaselli Muensterman, Elena; Tisdale, James E
2018-06-08
Prolongation of the heart rate-corrected QT (QTc) interval increases the risk for torsades de pointes (TdP), a potentially fatal arrhythmia. The likelihood of TdP is higher in patients with risk factors, which include female sex, older age, heart failure with reduced ejection fraction, hypokalemia, hypomagnesemia, concomitant administration of ≥ 2 QTc interval-prolonging medications, among others. Assessment and quantification of risk factors may facilitate prediction of patients at highest risk for developing QTc interval prolongation and TdP. Investigators have utilized the field of predictive analytics, which generates predictions using techniques including data mining, modeling, machine learning, and others, to develop methods of risk quantification and prediction of QTc interval prolongation. Predictive analytics have also been incorporated into clinical decision support (CDS) tools to alert clinicians regarding patients at increased risk of developing QTc interval prolongation. The objectives of this paper are to assess the effectiveness of predictive analytics for identification of patients at risk of drug-induced QTc interval prolongation, and to discuss the efficacy of incorporation of predictive analytics into CDS tools in clinical practice. A systematic review of English language articles (human subjects only) was performed, yielding 57 articles, with an additional 4 articles identified from other sources; a total of 10 articles were included in this review. Risk scores for QTc interval prolongation have been developed in various patient populations including those in cardiac intensive care units (ICUs) and in broader populations of hospitalized or health system patients. One group developed a risk score that includes information regarding genetic polymorphisms; this score significantly predicted TdP. Development of QTc interval prolongation risk prediction models and incorporation of these models into CDS tools reduces the risk of QTc interval prolongation in cardiac ICUs and identifies health-system patients at increased risk for mortality. The impact of these QTc interval prolongation predictive analytics on overall patient safety outcomes, such as TdP and sudden cardiac death relative to the cost of development and implementation, requires further study. This article is protected by copyright. All rights reserved.
Analytical Algorithms to Quantify the Uncertainty in Remaining Useful Life Prediction
NASA Technical Reports Server (NTRS)
Sankararaman, Shankar; Saxena, Abhinav; Daigle, Matthew; Goebel, Kai
2013-01-01
This paper investigates the use of analytical algorithms to quantify the uncertainty in the remaining useful life (RUL) estimate of components used in aerospace applications. The prediction of RUL is affected by several sources of uncertainty, and it is important to systematically quantify their combined effect by computing the uncertainty in the RUL prediction in order to aid risk assessment, risk mitigation, and decision-making. While sampling-based algorithms have been conventionally used for quantifying the uncertainty in RUL, analytical algorithms are computationally cheaper and sometimes better suited for online decision-making. While exact analytical algorithms are available only for certain special cases (e.g., linear models with Gaussian variables), effective approximations can be made using the first-order second moment method (FOSM), the first-order reliability method (FORM), and the inverse first-order reliability method (Inverse FORM). These methods can be used not only to calculate the entire probability distribution of RUL but also to obtain probability bounds on RUL. This paper explains these three methods in detail and illustrates them using the state-space model of a lithium-ion battery.
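Of the three methods, FOSM is the simplest to illustrate: propagate input means and covariances through a first-order Taylor expansion of the RUL function. The toy RUL function and input statistics below are assumptions, not the paper's battery model.

```python
# FOSM sketch: approximate the mean and variance of RUL by a first-order
# Taylor expansion around the input means. The RUL function and input
# statistics are toy assumptions for illustration.
import numpy as np

def rul(theta):
    # Toy RUL model: remaining life falls with load and internal-resistance growth.
    load, resistance = theta
    return 100.0 / (load * (1.0 + resistance))

mean = np.array([2.0, 0.5])   # input means
cov = np.diag([0.04, 0.01])   # input covariance (independent inputs)

eps = 1e-6
grad = np.array([(rul(mean + eps * e) - rul(mean - eps * e)) / (2 * eps)
                 for e in np.eye(2)])  # finite-difference gradient at the mean

rul_mean = rul(mean)          # first-order mean estimate
rul_var = grad @ cov @ grad   # first-order variance estimate
print(f"RUL ~ {rul_mean:.1f} +/- {np.sqrt(rul_var):.1f}")
```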
Experimental and analytical characterization of triaxially braided textile composites
NASA Technical Reports Server (NTRS)
Masters, John E.; Fedro, Mark J.; Ifju, Peter G.
1993-01-01
There were two components, experimental and analytical, to this investigation of triaxially braided textile composite materials. The experimental portion of the study centered on measuring the materials' longitudinal and transverse tensile moduli, Poisson's ratio, and strengths. The identification of the damage mechanisms exhibited by these materials was also a prime objective of the experimental investigation. The analytical portion of the investigation utilized the Textile Composites Analysis (TECA) model to predict modulus and strength. The analytical and experimental results were compared to assess the effectiveness of the analysis. The figures contained in this paper reflect the presentation made at the conference and may be divided into four sections: a definition of the material system tested; a series of figures summarizing the experimental results, containing a Moire interferometry study of the strain distribution in the material, examples and descriptions of the types of damage encountered in these materials, and a summary of the measured properties; a description of the TECA model, including a series of predicted results and a comparison with measured values; and, finally, a brief summary that completes the paper.
A theoretical framework for analyzing the effect of external change on tidal dynamics in estuaries
NASA Astrophysics Data System (ADS)
CAI, H.; Savenije, H.; Toffolon, M.
2013-12-01
The most densely populated areas of the world are usually located in coastal areas near estuaries. As a result, estuaries are often subject to intense human interventions, such as dredging for navigation, dam construction, and fresh water withdrawal, which in some areas have led to serious deterioration of invaluable ecosystems. Hence it is important to understand the influence of such interventions on tidal dynamics in these areas. In this study, we present one consistent theoretical framework for tidal hydrodynamics, which can be used as a rapid assessment technique that assists policy makers and managers in making considered decisions for the protection and management of the estuarine environment when assessing the effect of human interventions in estuaries. Analytical solutions to the one-dimensional St. Venant equations for tidal hydrodynamics in convergent unbounded estuaries with negligible river discharge can be cast in the form of a set of four implicit dimensionless equations for phase lag, velocity amplitude, damping, and wave celerity, as a function of two localized parameters describing friction and convergence. This method allows for the comparison of different analytical approaches by rewriting the different solutions in the same format. In this study, classical and more recent formulations are compared, showing the differences and similarities associated with their specific simplifications. The envelope method, which is based on consideration of the dynamics at high water and low water, can be used to derive damping equations that use different friction approximations. This yields as many analytical solutions, and thereby allows one to build a consistent theoretical framework. Analysis of the asymptotic behaviour of the equations shows that an equilibrium tidal amplitude exists, reflecting the balance between friction and channel convergence. The framework is subsequently extended to take into account the effect of river discharge, so that the analytical solutions are applicable even in the upstream part of an estuary, where the influence of river discharge is considerable. The proposed analytical solutions are transparent and practical, allowing a quantitative and qualitative assessment of human interventions (e.g., dredging, flow reduction) on tidal dynamics. Moreover, they are rapid assessment techniques that enable users to set up a simple model and to understand the functioning of the system with a minimum of required information. The analytical model is illustrated in three large-scale estuaries significantly influenced by human activities, i.e., the Scheldt estuary in the Netherlands and the Modaomen and Yangtze estuaries in China. In these estuaries, the correspondence with observations is good, which suggests that the proposed model is a useful, realistic and reliable instrument for quick detection of the effect of human interventions on tidal dynamics and subsequent environmental issues, such as salt intrusion.
Albuquerque De Almeida, Fernando; Al, Maiwenn; Koymans, Ron; Caliskan, Kadir; Kerstens, Ankie; Severens, Johan L
2018-04-01
Describing the general and methodological characteristics of decision-analytical models used in the economic evaluation of early warning systems for the management of chronic heart failure patients, and performing a quality assessment of their methodological characteristics, is expected to provide concise and useful insight to inform the future development of decision-analytical models in the field of heart failure management. Areas covered: The literature on decision-analytical models for the economic evaluation of early warning systems for the management of chronic heart failure patients was systematically reviewed. Nine electronic databases were searched through the combination of synonyms for heart failure and sensitive filters for cost-effectiveness and early warning systems. Expert commentary: The retrieved models show some variability with regard to their general study characteristics. Overall, they display satisfactory methodological quality, even though some points could be improved, namely the consideration and discussion of competing theories regarding model structure and disease progression, the identification of key parameters and the use of expert opinion, and uncertainty analyses. A comprehensive definition of early warning systems and further research under this label should be pursued. To improve the transparency of economic evaluation publications, authors should make available detailed technical information regarding the published models.
von Oertzen, Timo; Brandmaier, Andreas M
2013-06-01
Structural equation models have become a broadly applied data-analytic framework. Among them, latent growth curve models have become a standard method in longitudinal research. However, researchers often rely solely on rules of thumb about statistical power in their study designs. The theory of power equivalence provides an analytical answer to the question of how design factors, for example, the number of observed indicators and the number of time points assessed in repeated measures, trade off against each other while holding the power for likelihood-ratio tests on the latent structure constant. In this article, we present applications of power-equivalent transformations on a model with data from a previously published study on cognitive aging, and highlight consequences of participant attrition on power. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Assessment of analytical techniques for predicting solid propellant exhaust plumes
NASA Technical Reports Server (NTRS)
Tevepaugh, J. A.; Smith, S. D.; Penny, M. M.
1977-01-01
The calculation of solid propellant exhaust plume flow fields is addressed. Two major areas covered are: (1) the applicability of empirical data currently available to define particle drag coefficients, heat transfer coefficients, mean particle size and particle size distributions, and (2) thermochemical modeling of the gaseous phase of the flow field. Comparisons of experimentally measured and analytically predicted data are made. The experimental data were obtained for subscale solid propellant motors with aluminum loadings of 2, 10 and 15%. Analytical predictions were made using a fully coupled two-phase numerical solution. Data comparisons will be presented for radial distributions at plume axial stations of 5, 12, 16 and 20 diameters.
An analytical probabilistic model of the quality efficiency of a sewer tank
NASA Astrophysics Data System (ADS)
Balistrocchi, Matteo; Grossi, Giovanna; Bacchi, Baldassare
2009-12-01
The assessment of the efficiency of a storm water storage facility devoted to sewer overflow control in urban areas depends strictly on the ability to model the main features of the rainfall-runoff routing process and the related wet-weather pollution delivery. In this paper, the possibility of applying the analytical probabilistic approach to develop a tank design method whose potential is similar to that of continuous simulation is demonstrated. The derivation of the model incorporates the water-quality behavior of such devices. The formulation is based on a Weibull probabilistic model of the main characteristics of the rainfall process and on a power law describing the relationship between the dimensionless storm water cumulative runoff volume and the dimensionless cumulative pollutograph. Following this approach, efficiency indexes were established. The proposed model was verified by comparing its results to those obtained by continuous simulations; satisfactory agreement is shown for the proposed efficiency indexes.
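The flavor of the analytical probabilistic approach can be conveyed with its simplest special case: event runoff volumes that are exponential (a Weibull with shape parameter 1) and a tank of fixed volume, for which the volumetric efficiency has a closed form. The shape restriction and the parameter values are illustrative simplifications, not the paper's formulation.

```python
# Closed-form special case of the analytical probabilistic approach: if event
# runoff volume V is exponential with mean mu (a Weibull with shape 1) and the
# tank holds v0, the expected spilled fraction is
#   E[max(V - v0, 0)] / E[V] = exp(-v0 / mu),
# so a volumetric efficiency index is 1 - exp(-v0 / mu). Values are illustrative.
import math

mu = 5.0  # mean event runoff volume (e.g., mm over the catchment)
for v0 in (2.0, 5.0, 10.0):
    efficiency = 1.0 - math.exp(-v0 / mu)
    print(f"tank volume {v0:>4.1f} -> volumetric efficiency {efficiency:.2f}")
```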
ERIC Educational Resources Information Center
Bock, H. Darrell
The hardware and software system used to create the National Opinion Research Center/Center for Research on Evaluation, Standards, and Student Testing (NORC/CRESST) item databases and test booklets for the 12th-grade science assessment are described. A general description of the capabilities of the system is given, with some specific information…
Analytical Plan for Roman Glasses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strachan, Denis M.; Buck, Edgar C.; Mueller, Karl T.
Roman glasses that have been in the sea or underground for about 1800 years can serve as the independent “experiment” that is needed for validation of codes and models that are used in performance assessment. Two sets of Roman-era glasses have been obtained for this purpose. One set comes from the sunken vessel the Iulia Felix; the second from recently excavated glasses from a Roman villa in Aquileia, Italy. The specimens contain glass artifacts and attached sediment or soil. In the case of the Iulia Felix glasses, quite a lot of analytical work has been completed at the University of Padova, but from an archaeological perspective. The glasses from Aquileia have not been so carefully analyzed, but they are similar to other Roman glasses. Both glass and sediment or soil need to be analyzed and are the subject of this analytical plan. The glasses need to be analyzed with the goal of validating the model used to describe glass dissolution. The sediment and soil need to be analyzed to determine the profile of elements released from the glass. This latter need represents a significant analytical challenge because of the trace quantities that need to be analyzed. Both pieces of information will yield important information useful in the validation of the glass dissolution model and the chemical transport code(s) used to determine the migration of elements once released from the glass. In this plan, we outline the analytical techniques that should be useful in obtaining the needed information and suggest a useful starting point for this analytical effort.
Modeling Demand-Responsive Feeder Systems in the UTPS Framework
DOT National Transportation Integrated Search
1978-07-01
For the transit planner considering alternative future transit designs, there has been little in the way of analytical tools available to assess the impact of demand-responsive transportation (DRT) systems. The intent of this report is to provide the...
Foreign body impact event damage formation in composite structures
NASA Technical Reports Server (NTRS)
Bucinell, Ronald B.
1994-01-01
This report discusses a methodology that can be used to assess the effect of foreign body impacts on composite structural integrity. The described effort focuses on modeling the effect of a central impact on a 5 3/4 inch filament wound test article. The discussion will commence with details of the material modeling that was used to establish the input properties for the analytical model. This discussion is followed by an overview of the impact assessment methodology. The progress on this effort to date is reviewed along with a discussion of tasks that have yet to be completed.
Patient or physician preferences for decision analysis: the prenatal genetic testing decision.
Heckerling, P S; Verp, M S; Albert, N
1999-01-01
The choice between amniocentesis and chorionic villus sampling for prenatal genetic testing involves tradeoffs of the benefits and risks of the tests. Decision analysis is a method of explicitly weighing such tradeoffs. The authors examined the relationship between prenatal test choices made by patients and the choices prescribed by decision-analytic models based on their preferences, and separate models based on the preferences of their physicians. Preferences were assessed using written scenarios describing prenatal testing outcomes, and were recorded on linear rating scales. After adjustment for sociodemographic and obstetric confounders, test choice was significantly associated with the choice of decision models based on patient preferences (odds ratio 4.44; CI, 2.53 to 7.78), but not with the choice of models based on the preferences of the physicians (odds ratio 1.60; CI, 0.79 to 3.26). Agreement between decision analyses based on patient preferences and on physician preferences was little better than chance (kappa = 0.085 ± 0.063). These results were robust both to changes in the decision-analytic probabilities and to changes in the model structure itself to simulate non-expected utility decision rules. The authors conclude that patient but not physician preferences, incorporated in decision models, correspond to the choice of amniocentesis or chorionic villus sampling made by the patient. Nevertheless, because patient preferences were assessed after referral for genetic testing, prospective preference-assessment studies will be necessary to confirm this association.
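The core of such a decision-analytic comparison is an expected-utility calculation over test outcomes. A minimal sketch follows; the probabilities and utilities are hypothetical placeholders, not values from the study.

    # Minimal expected-utility comparison of the two tests. All probabilities
    # and utilities are hypothetical placeholders, not values from the study.
    tests = {
        "amniocentesis": {"p_loss": 0.005, "p_detect": 0.995},
        "chorionic villus sampling": {"p_loss": 0.010, "p_detect": 0.980},
    }
    p_affected = 0.01    # hypothetical prior risk of the condition
    u = {"loss": 0.0, "missed_affected": 0.2, "detected": 0.8, "unaffected": 1.0}

    def expected_utility(t):
        p_loss = t["p_loss"]
        p_det = (1 - p_loss) * p_affected * t["p_detect"]
        p_miss = (1 - p_loss) * p_affected * (1 - t["p_detect"])
        p_unaf = (1 - p_loss) * (1 - p_affected)
        return (p_loss * u["loss"] + p_miss * u["missed_affected"]
                + p_det * u["detected"] + p_unaf * u["unaffected"])

    for name, t in tests.items():
        print(name, round(expected_utility(t), 4))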
Modeling and Analysis of Large Amplitude Flight Maneuvers
NASA Technical Reports Server (NTRS)
Anderson, Mark R.
2004-01-01
Analytical methods for stability analysis of large amplitude aircraft motion have been slow to develop because many nonlinear system stability assessment methods are restricted to a state-space dimension of less than three. The proffered approach is to create regional cell-to-cell maps for strategically located two-dimensional subspaces within the higher-dimensional model state-space. These regional solutions capture nonlinear behavior better than linearized point solutions. They also avoid the computational difficulties that emerge when attempting to create a cell map for the entire state-space. Example stability results are presented for a general aviation aircraft and a micro-aerial vehicle configuration. The analytical results are consistent with characteristics that were discovered during previous flight-testing.
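The cell-to-cell mapping idea can be illustrated on a toy two-dimensional system: discretize a window of the state-space into cells, map each cell center forward through the dynamics, and inspect the resulting cell transitions. The sketch below uses a damped pendulum as a stand-in for the aircraft dynamics; all values are illustrative.

    import numpy as np

    # Toy dynamics standing in for the aircraft model: a damped pendulum,
    # state x = [angle, rate], integrated over one mapping interval.
    def step(x, dt=0.05, n=40):
        for _ in range(n):
            x = x + dt * np.array([x[1], -0.3 * x[1] - np.sin(x[0])])
        return x

    # Discretize a 2-D window of the state-space into 30 x 30 cells.
    edges = np.linspace(-3.0, 3.0, 31)
    centers = 0.5 * (edges[:-1] + edges[1:])

    def cell_of(x):
        i = int(np.clip(np.searchsorted(edges, x[0]) - 1, 0, 29))
        j = int(np.clip(np.searchsorted(edges, x[1]) - 1, 0, 29))
        return i * 30 + j

    # Cell-to-cell map: where does each cell center go after one interval?
    image = np.empty(900, dtype=int)
    for i, a in enumerate(centers):
        for j, b in enumerate(centers):
            image[i * 30 + j] = cell_of(step(np.array([a, b])))

    # Cells that map into themselves approximate equilibrium/attractor cells.
    print("self-mapping cells:", np.flatnonzero(image == np.arange(900)))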
Postbuckling and Growth of Delaminations in Composite Plates Subjected to Axial Compression
NASA Technical Reports Server (NTRS)
Reeder, James R.; Chunchu, Prasad B.; Song, Kyongchan; Ambur, Damodar R.
2002-01-01
The postbuckling response and growth of circular delaminations in flat and curved plates are investigated as part of a study to identify the criticality of delamination locations through the laminate thickness. The experimental results from tests on delaminated plates are compared with finite element analysis results generated using shell models. The analytical prediction of delamination growth is obtained by assessing the strain energy release rate results from the finite element model and comparing them to a mixed-mode fracture toughness failure criterion. The analytical results for onset of delamination growth compare well with experimental results generated using a 3-dimensional displacement visualization system. The record of delamination progression measured in this study has resulted in a fully 3-dimensional test case with which progressive failure models can be validated.
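The abstract does not state which mixed-mode criterion was used; one common choice is the Benzeggagh-Kenane (B-K) criterion, sketched below with hypothetical toughness values and energy release rates.

    def bk_toughness(g1, g2, g1c, g2c, eta):
        """Benzeggagh-Kenane mixed-mode toughness for mode mix GII/GT."""
        gt = g1 + g2
        beta = g2 / gt if gt > 0.0 else 0.0
        return g1c + (g2c - g1c) * beta ** eta

    # Hypothetical toughness data (J/m^2) and energy release rates, e.g. as
    # extracted from a shell finite element model.
    g1c, g2c, eta = 200.0, 800.0, 2.0
    g1, g2 = 150.0, 120.0

    gc = bk_toughness(g1, g2, g1c, g2c, eta)
    print("growth predicted" if g1 + g2 >= gc else "no growth", round(gc, 1))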
NASA Astrophysics Data System (ADS)
Zhang, Shulei; Yang, Yuting; McVicar, Tim R.; Yang, Dawen
2018-01-01
Vegetation change is a critical factor that profoundly affects the terrestrial water cycle. Here we derive an analytical solution for the impact of vegetation changes on hydrological partitioning within the Budyko framework. This is achieved by deriving an analytical expression between leaf area index (LAI) change and the Budyko land surface parameter (n) change, through the combination of a steady state ecohydrological model with an analytical carbon cost-benefit model for plant rooting depth. Using China, where vegetation coverage has experienced dramatic changes over the past two decades, as a study case, we quantify the impact of LAI changes on the hydrological partitioning during 1982-2010 and predict the future influence of these changes for the 21st century using climate model projections. Results show that LAI change exhibits an increasing importance on altering hydrological partitioning as climate becomes drier. In semiarid and arid China, increased LAI has led to substantial streamflow reductions over the past three decades (on average -8.5% in the 1990s and -11.7% in the 2000s compared to the 1980s baseline), and this decreasing trend in streamflow is projected to continue toward the end of this century due to predicted LAI increases. Our results call for caution regarding the large-scale revegetation activities currently being implemented in arid and semiarid China, which may lead to serious water scarcity issues in these regions in the future. The analytical model developed here is physically based and suitable for simultaneously assessing both vegetation change and climate change induced changes to streamflow globally.
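A common analytical form of the Budyko curve with land surface parameter n is the Choudhury-Yang equation; the sketch below shows how an increase in n (here standing in for an LAI-driven change) shifts partitioning from streamflow to evapotranspiration. Values are illustrative, not the study's.

    def streamflow(P, E0, n):
        """Q = P - E with the Choudhury-Yang form of the Budyko curve."""
        E = P * E0 / (P ** n + E0 ** n) ** (1.0 / n)
        return P - E

    P, E0 = 450.0, 900.0          # mm/yr precipitation and potential ET (semiarid)
    for n in (1.8, 2.0, 2.2):     # greening (higher LAI) maps to a larger n
        print(f"n = {n}: Q = {streamflow(P, E0, n):5.1f} mm/yr")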
Vandekerckhove, Kristof; Seidl, Andreas; Gutka, Hiten; Kumar, Manish; Gratzl, Gyöngyi; Keire, David; Coffey, Todd; Kuehne, Henriette
2018-05-10
Leading regulatory agencies recommend biosimilar assessment to proceed in a stepwise fashion, starting with a detailed analytical comparison of the structural and functional properties of the proposed biosimilar and reference product. The degree of analytical similarity determines the degree of residual uncertainty that must be addressed through downstream in vivo studies. Substantive evidence of similarity from comprehensive analytical testing may justify a targeted clinical development plan, and thus enable a shorter path to licensing. The importance of a careful design of the analytical similarity study program therefore should not be underestimated. Designing a state-of-the-art analytical similarity study meeting current regulatory requirements in regions such as the USA and EU requires a methodical approach, consisting of specific steps that far precede the work on the actual analytical study protocol. This white paper discusses scientific and methodological considerations on the process of attribute and test method selection, criticality assessment, and subsequent assignment of analytical measures to US FDA's three tiers of analytical similarity assessment. Case examples of selection of critical quality attributes and analytical methods for similarity exercises are provided to illustrate the practical implementation of the principles discussed.
Coates, James; Jeyaseelan, Asha K; Ybarra, Norma; David, Marc; Faria, Sergio; Souhami, Luis; Cury, Fabio; Duclos, Marie; El Naqa, Issam
2015-04-01
We explore analytical and data-driven approaches to investigate the integration of genetic variations (single nucleotide polymorphisms [SNPs] and copy number variations [CNVs]) with dosimetric and clinical variables in modeling radiation-induced rectal bleeding (RB) and erectile dysfunction (ED) in prostate cancer patients. Sixty-two patients who underwent curative hypofractionated radiotherapy (66 Gy in 22 fractions) between 2002 and 2010 were retrospectively genotyped for CNV and SNP rs5489 in the xrcc1 DNA repair gene. Fifty-four patients had full dosimetric profiles. Two parallel modeling approaches were compared to assess the risk of severe RB (Grade ⩾ 3) and ED (Grade ⩾ 1): maximum-likelihood-estimated generalized Lyman-Kutcher-Burman (LKB) modeling and logistic regression. Statistical resampling based on cross-validation was used to evaluate model predictive power and generalizability to unseen data. Integration of the biological variables xrcc1 CNV and SNP improved the fit of the RB and ED analytical and data-driven models. Cross-validation of the generalized LKB models yielded increases in classification performance of 27.4% for RB and 14.6% for ED when xrcc1 CNV and SNP were included, respectively. Biological variables added to logistic regression modeling improved classification performance over standard dosimetric models by 33.5% for RB and 21.2% for ED models. As a proof-of-concept, we demonstrated that the combination of genetic and dosimetric variables can provide significant improvement in NTCP prediction using analytical and data-driven approaches. The improvement in prediction performance was more pronounced in the data-driven approaches. Moreover, we have shown that CNVs, in addition to SNPs, may be useful structural genetic variants in predicting radiation toxicities. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
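The generalized LKB model named here maps a dose-volume histogram to an NTCP through the generalized equivalent uniform dose (gEUD); a minimal sketch with the standard equations follows. The DVH and parameter values are hypothetical, loosely in the range reported in the literature for rectum.

    import numpy as np
    from scipy.stats import norm

    def ntcp_lkb(dose, vol, n, m, td50):
        """Generalized Lyman-Kutcher-Burman NTCP from a differential DVH
        (vol holds fractional volumes summing to 1)."""
        geud = np.sum(vol * dose ** (1.0 / n)) ** n
        return norm.cdf((geud - td50) / (m * td50))

    dose = np.array([20.0, 40.0, 60.0, 70.0])   # hypothetical dose bins (Gy)
    vol = np.array([0.4, 0.3, 0.2, 0.1])        # hypothetical fractional volumes
    print(round(ntcp_lkb(dose, vol, n=0.09, m=0.13, td50=76.9), 4))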
NASA Technical Reports Server (NTRS)
Lam, N.; Qiu, H.-I.; Quattrochi, Dale A.; Zhao, Wei
1997-01-01
With the rapid increase in spatial data, especially in the NASA-EOS (Earth Observing System) era, it is necessary to develop efficient and innovative tools to handle and analyze these data so that environmental conditions can be assessed and monitored. A main difficulty facing geographers and environmental scientists in environmental assessment and measurement is that spatial analytical tools are not easily accessible. We have recently developed a remote sensing/GIS software module called Image Characterization and Modeling System (ICAMS) to provide specialized spatial analytical tools for the measurement and characterization of satellite and other forms of spatial data. ICAMS runs on both the Intergraph-MGE and Arc/info UNIX and Windows-NT platforms. The main techniques in ICAMS include fractal measurement methods, variogram analysis, spatial autocorrelation statistics, textural measures, aggregation techniques, normalized difference vegetation index (NDVI), and delineation of land/water and vegetated/non-vegetated boundaries. In this paper, we demonstrate the main applications of ICAMS on the Intergraph-MGE platform using Landsat Thematic Mapper images from the city of Lake Charles, Louisiana. While the utilities of ICAMS' spatial measurement methods (e.g., fractal indices) in assessing environmental conditions remain to be researched, making the software available to a wider scientific community can permit the techniques in ICAMS to be evaluated and used for a diversity of applications. The findings from these various studies should lead to improved algorithms and more reliable models for environmental assessment and monitoring.
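Of the techniques listed, NDVI is simple enough to state directly; a minimal sketch for computing NDVI from red and near-infrared bands and deriving a vegetated/non-vegetated mask (the threshold is illustrative and the band arrays are random stand-ins for TM tiles).

    import numpy as np

    def ndvi(red, nir, eps=1e-9):
        """Normalized difference vegetation index from reflectance bands."""
        red = red.astype(float)
        nir = nir.astype(float)
        return (nir - red) / (nir + red + eps)

    rng = np.random.default_rng(0)
    red = rng.random((256, 256))      # stand-in for a TM band 3 (red) tile
    nir = rng.random((256, 256))      # stand-in for a TM band 4 (NIR) tile

    v = ndvi(red, nir)
    vegetated = v > 0.3               # illustrative vegetated/non-vegetated threshold
    print(f"vegetated fraction: {vegetated.mean():.2f}")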
De Paolis, Annalisa; Bikson, Marom; Nelson, Jeremy T; de Ru, J Alexander; Packer, Mark; Cardoso, Luis
2017-06-01
Hearing is an extremely complex phenomenon, involving a large number of interrelated variables that are difficult to measure in vivo. In order to investigate such a process under simplified and well-controlled conditions, models of sound transmission have been developed through many decades of research. The value of modeling the hearing system lies not only in explaining the normal function of the hearing system and accounting for experimental and clinical observations, but also in simulating a variety of pathological conditions that lead to hearing damage and hearing loss, as well as in supporting the development of auditory implants, effective ear protection and auditory hazard countermeasures. In this paper, we provide a review of the strategies used to model the auditory function of the external, middle, and inner ear, and the micromechanics of the organ of Corti, along with some of the key results obtained from such modeling efforts. Recent analytical and numerical approaches have incorporated the nonlinear behavior of some parameters and structures into their models. Few models of the integrated hearing system exist; in particular, we describe the evolution of the Auditory Hazard Assessment Algorithm for Human (AHAAH) model, used for prediction of hearing damage due to high intensity sound pressure. Unlike the AHAAH model, 3D finite element models of the entire hearing system are not yet able to predict auditory risk and threshold shifts. It is expected that both AHAAH and FE models will evolve towards a more accurate assessment of threshold shifts and hearing loss under a variety of stimuli conditions and pathologies. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Norman, Stephanie A; Beckett, Laurel A; Miller, Woutrina A; St Leger, Judy; Hobbs, Roderick C
2013-06-01
Blood analytes are critical for evaluating the general health of cetacean populations, so it is important to understand the intrinsic variability of hematology and serum chemistry values. Previous studies have reported data for follow-up periods of several years in managed and wild populations, but studies over long periods of time (> 20 yr) have not been reported. The study objective was to identify the influences of partitioning characteristics on hematology and serum chemistry analytes of apparently healthy managed beluga (Delphinapterus leucas). Blood values from 31 managed belugas, at three facilities, collected over 22 yr, were assessed for seasonal variation and aging trends, and evaluated for biologic variation among and within individuals. Linear mixed effects models assessed the relationship between the analytes and sex, age, season, facility location, ambient air temperature, and photoperiod. Sex differences in analytes and associations with increasing age were observed. Seasonal variation was observed for hemoglobin, hematocrit, mean corpuscular volume, monocytes, alkaline phosphatase, total bilirubin, cholesterol, and triglycerides. Facilities were associated with larger effects on analyte values compared to other covariates, whereas age, sex, and ambient temperature had smaller effects compared to facility and season. Present findings provide important baseline information for future health monitoring efforts. Interpretation of blood analytes and animal health in managed and wild populations over time is aided by having available typical levels for the species and reference intervals for the degree to which individual animals vary from the species average and from their own baseline levels during long-term monitoring.
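A linear mixed effects model of the kind described, with repeated blood draws nested within animals, might be specified as follows; the file and column names are hypothetical stand-ins for the study's analyte values and covariates.

    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per blood draw; "beluga_bloodwork.csv" and all column names
    # are hypothetical stand-ins for the study's variables.
    df = pd.read_csv("beluga_bloodwork.csv")

    model = smf.mixedlm(
        "hemoglobin ~ sex + age + C(season) + C(facility) + air_temp",
        data=df,
        groups=df["whale_id"],        # random intercept per animal
    )
    print(model.fit().summary())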
DOT National Transportation Integrated Search
2017-08-30
Transit oriented development (TOD) has emerged in recent years as a promising paradigm to promote public transportation, increase active transportation usage, mitigate congestion, and alleviate air pollution. However, there is a lack of analytic stud...
Structural Uncertainties in Numerical Induction Models
2006-07-01
“divide and conquer” modelling approach. Analytical inputs are then assessments, quantitative or qualitative, of the value, performance, or some... said to be naïve because it relies heavily on the inductive method itself. Sophisticated Induction (Logical Positivism): This form of induction... falters. Popper’s Falsification: Karl Popper around 1959 introduced a variant to the above Logical Positivism, known as the inductive-hypothetico...
A geometric ultraviolet-B radiation transfer model applied to vegetation canopies
Wei Gao; Richard H. Grant; Gordon M. Heisler; James R. Slusser
2002-01-01
The decrease in stratospheric ozone (O3) has prompted continued efforts to assess the potential damage to plant and animal life due to enhanced levels of solar ultraviolet (UV)-B (280-320 nm) radiation. The objective of this study was to develop and evaluate an analytical model to simulate the UV-B irradiance loading on horizontal below-canopy...
Kelly, Colleen K; Bowler, Michael; Breden, Felix
2006-01-01
The potential effects of ‘escape’ of genetically modified material (transgenes) into natural communities are a major concern in their use. These effects may be limited in the first instance by limiting the proportion of transgene-carrying plants in the natural community. We previously presented an analytical model of the ecological processes governing the relative abundance and persistence of insect resistance (IR) transgenes in a natural community. In that paper, we illustrated the case in which the transgene is input into the community in a single season using data from oilseed rape (OSR) and its known herbivore, Plutella macropennis. We found that the transgene is unlikely to have a great impact on the natural community. Here, we extend the model for repeated input of crop pollen carrying the transgene. We show the model output, again using OSR, for continuous input of the transgene over 10 years, the projected commercial lifetime of a transgene without associated undesirable agronomic effects. Our results do not change our original conclusion that the IR transgene need not have a large impact on the natural community, and our suggestions for assessing and mitigating any threat still stand. PMID:17148386
Methodology, status and plans for development and assessment of Cathare code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bestion, D.; Barre, F.; Faydide, B.
1997-07-01
This paper presents the methodology, status and plans for the development, assessment and uncertainty evaluation of the Cathare code. Cathare is a thermalhydraulic code developed by CEA (DRN), IPSN, EDF and FRAMATOME for PWR safety analysis. First, the status of the code development and assessment is presented. The general strategy used for the development and the assessment of the code is presented. Analytical experiments with separate effect tests, and component tests, are used for the development and the validation of closure laws. Successive Revisions of constitutive laws are implemented in successive Versions of the code and assessed. System tests or integral tests are used to validate the general consistency of the Revision. Each delivery of a code Version + Revision is fully assessed and documented. A methodology is being developed to determine the uncertainty on all constitutive laws of the code using calculations of many analytical tests and applying the Discrete Adjoint Sensitivity Method (DASM). Finally, the plans for the future developments of the code are presented. They concern the optimization of the code performance through parallel computing - the code will be used for real time full scope plant simulators - the coupling with many other codes (neutronic codes, severe accident codes), and the application of the code for containment thermalhydraulics. Also, physical improvements are required in the field of low pressure transients and in the 3-D modeling.
Assessment of the ecological impacts of macroroughness elements in stream flows
NASA Astrophysics Data System (ADS)
Niayifar, Amin; Oldroyd, Holly J.; Perona, Paolo
2017-04-01
The environmental suitability of flow release rules is often assessed for different fish species by modeling Weighted Usable Area (WUA) curves (e.g., with CASiMir and PHABSIM). However, these models are not able to resolve the hydrodynamics at small scales, e.g., those induced by the presence of macroroughness elements (e.g., single stones), which nevertheless generate relatively large wakes that may contribute significantly to habitat suitability. The presence of stones generates sheltered zones (i.e., the wake), which are typically temporary stationary points for many fish species. By resting in these low velocity regions, fish minimize energy expenditure and can quickly move to nearby fast water to feed (Hayes and Jowett, 1994). Following the analytical model proposed by Negretti et al. (2006), we developed an analytical solution for the wake area behind macroroughness elements. The total wake area in the monitored river reach is a function of the streamflow, Q, and constitutes an additional usable area for fish that can be used to correct, at each flow rate, the area computed by classic software such as PHABSIM or CASiMir. By quantifying these wake areas we can therefore assess how the physical properties and number of such zones change in response to the changing hydrologic regime. To validate the concept, we selected a 400 m reach of the Aare river in central Switzerland. The statistical distribution of macroroughness elements was obtained from orthorectified aerial photographs taken during drone surveys under low flow conditions. The distribution of the wakes is then obtained analytically as a derived distribution. This methodology saves computational cost and the time required for detailed field surveys.
NASA Astrophysics Data System (ADS)
Safeeq, M.; Grant, G. E.; Lewis, S. L.; Kramer, M. G.; Staab, B.
2014-09-01
Summer streamflows in the Pacific Northwest are largely derived from melting snow and groundwater discharge. As the climate warms, diminishing snowpack and earlier snowmelt will cause reductions in summer streamflow. Most regional-scale assessments of climate change impacts on streamflow use downscaled temperature and precipitation projections from general circulation models (GCMs) coupled with large-scale hydrologic models. Here we develop and apply an analytical hydrogeologic framework for characterizing summer streamflow sensitivity to a change in the timing and magnitude of recharge in a spatially explicit fashion. In particular, we incorporate the role of deep groundwater, which large-scale hydrologic models generally fail to capture, into streamflow sensitivity assessments. We validate our analytical streamflow sensitivities against two empirical measures of sensitivity derived using historical observations of temperature, precipitation, and streamflow from 217 watersheds. In general, empirically and analytically derived streamflow sensitivity values correspond. Although the selected watersheds cover a range of hydrologic regimes (e.g., rain-dominated, mixture of rain and snow, and snow-dominated), sensitivity validation was primarily driven by the snow-dominated watersheds, which are subjected to a wider range of change in recharge timing and magnitude as a result of increased temperature. Overall, two patterns emerge from this analysis: First, areas with high streamflow sensitivity also have higher summer streamflows as compared to low-sensitivity areas. Second, the level of sensitivity and spatial extent of highly sensitive areas diminishes over time as the summer progresses. Results of this analysis point to a robust, practical, and scalable approach that can help assess risk at the landscape scale, complement the downscaling approach, be applied to any climate scenario of interest, and provide a framework to assist land and water managers in adapting to an uncertain and potentially challenging future.
Roy, Rajarshi; Desai, Jaydev P.
2016-01-01
This paper outlines a comprehensive parametric approach for quantifying mechanical properties of spatially heterogeneous thin biological specimens such as human breast tissue using contact-mode Atomic Force Microscopy. Using inverse finite element (FE) analysis of spherical nanoindentation, the force response from hyperelastic material models is compared with the predicted force response from existing analytical contact models, and a sensitivity study is carried out to assess uniqueness of the inverse FE solution. Furthermore, an automation strategy is proposed to analyze AFM force curves with varying levels of material nonlinearity with minimal user intervention. Implementation of our approach on an elastic map acquired from raster AFM indentation of breast tissue specimens indicates that a judicious combination of analytical and numerical techniques allows more accurate interpretation of AFM indentation data than relying on purely analytical contact models, while keeping the computational cost associated with an inverse FE solution within reasonable limits. The results reported in this study have several implications for performing unsupervised data analysis on AFM indentation measurements on a wide variety of heterogeneous biomaterials. PMID:25015130
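One of the existing analytical contact models such force responses are typically compared against is Hertzian spherical contact; a minimal fitting sketch on synthetic data follows (tip radius, Poisson ratio and modulus are illustrative assumptions).

    import numpy as np
    from scipy.optimize import curve_fit

    R, nu = 2.5e-6, 0.5           # tip radius (m) and assumed Poisson ratio

    def hertz(delta, E):
        """Hertz force for a rigid sphere indenting an elastic half-space."""
        return (4.0 / 3.0) * (E / (1.0 - nu ** 2)) * np.sqrt(R) * delta ** 1.5

    rng = np.random.default_rng(2)
    delta = np.linspace(0.0, 500e-9, 50)                       # indentation depth (m)
    force = hertz(delta, 2.0e3) + 5e-11 * rng.normal(size=50)  # ~2 kPa "tissue"

    (E_fit,), _ = curve_fit(hertz, delta, force, p0=[1.0e3])
    print(f"fitted Young's modulus: {E_fit:.0f} Pa")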
Simple Analytic Model for Nanowire Array Polarizers
NASA Astrophysics Data System (ADS)
Pelletier, Vincent; Asakawa, Koji; Wu, Mingshaw; Register, Richard; Chaikin, Paul
2006-03-01
Cylinder-forming diblock copolymers can be used to pattern nanowire arrays on a transparent substrate to be used as a polarizer, as described by Koji Asakawa in a complementary talk at this meeting. With a 33 nm period, these wire arrays can polarize UV radiation, which is of great interest in lithography, astronomy and other areas. One can gain an analytical understanding of such an array made of non-perfectly conducting material of finite thickness using a model with an appropriately averaged complex dielectric function in an infinite wavelength approximation. This analysis predicts that the grid can go from an E-polarizer to an H-polarizer as the wavelength decreases below a cross-over wavelength, and experimental data confirm this cross-over. The overall response of the polarizing grid depends primarily on the plasma frequency of the metal used and the volume fraction of the nanowires, as well as the grid thickness. A numerical approach is also used to confirm the analytical model and assess departure from infinite wavelength effects. For our array dimensions, the polarization is only slightly different from this approximation for wavelengths down to 150 nm.
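In the infinite-wavelength (effective medium) picture, the wire grid behaves as a uniaxial medium whose two principal dielectric functions are the arithmetic and harmonic means of the wire and matrix responses; the sketch below locates the polarization cross-over for a Drude metal. Parameter values are illustrative, not those of the talk.

    import numpy as np

    wp, gam = 1.37e16, 8.0e13     # Drude plasma/damping rates (rad/s), Al-like guess
    f = 0.3                       # nanowire volume fraction (illustrative)
    eps_d = 1.0                   # transparent matrix / vacuum

    wl = np.linspace(100e-9, 600e-9, 500)
    w = 2.0 * np.pi * 3.0e8 / wl
    eps_m = 1.0 - wp ** 2 / (w * (w + 1j * gam))       # Drude metal

    eps_par = f * eps_m + (1.0 - f) * eps_d            # E parallel to wires
    eps_perp = 1.0 / (f / eps_m + (1.0 - f) / eps_d)   # E perpendicular to wires

    # The grid stops acting metallic for the parallel polarization where
    # Re(eps_par) crosses zero: the diluted plasma frequency.
    i = np.argmin(np.abs(eps_par.real))
    print(f"cross-over wavelength ~ {wl[i] * 1e9:.0f} nm")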
Decision analytic models for Alzheimer's disease: state of the art and future directions.
Cohen, Joshua T; Neumann, Peter J
2008-05-01
Decision analytic policy models for Alzheimer's disease (AD) enable researchers and policy makers to investigate questions about the costs and benefits of a wide range of existing and potential screening, testing, and treatment strategies. Such models permit analysts to compare existing alternatives, explore hypothetical scenarios, and test the strength of underlying assumptions in an explicit, quantitative, and systematic way. Decision analytic models can best be viewed as complementing clinical trials both by filling knowledge gaps not readily addressed by empirical research and by extrapolating beyond the surrogate markers recorded in a trial. We identified and critiqued 13 distinct AD decision analytic policy models published since 1997. Although existing models provide useful insights, they also have a variety of limitations. (1) They generally characterize disease progression in terms of cognitive function and do not account for other distinguishing features, such as behavioral symptoms, functional performance, and the emotional well-being of AD patients and caregivers. (2) Many describe disease progression in terms of a limited number of discrete states, thus constraining the level of detail that can be used to characterize both changes in patient status and the relationships between disease progression and other factors, such as residential status, that influence outcomes of interest. (3) They have focused almost exclusively on evaluating drug treatments, thus neglecting other disease management strategies and combinations of pharmacologic and nonpharmacologic interventions. Future AD models should facilitate more realistic and compelling evaluations of various interventions to address the disease. An improved model will allow decision makers to better characterize the disease, to better assess the costs and benefits of a wide range of potential interventions, and to better evaluate the incremental costs and benefits of specific interventions used in conjunction with other disease management strategies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barani, T.; Bruschi, E.; Pizzocri, D.
The modelling of fission gas behaviour is a crucial aspect of nuclear fuel analysis in view of the related effects on the thermo-mechanical performance of the fuel rod, which can be particularly significant during transients. Experimental observations indicate that substantial fission gas release (FGR) can occur on a small time scale during transients (burst release). To accurately reproduce the rapid kinetics of burst release in fuel performance calculations, a model that accounts for non-diffusional mechanisms such as fuel micro-cracking is needed. In this work, we present and assess a model for transient fission gas behaviour in oxide fuel, which is applied as an extension of diffusion-based models to allow for the burst release effect. The concept and governing equations of the model are presented, and the effect of the newly introduced parameters is evaluated through an analytic sensitivity analysis. Then, the model is assessed for application to integral fuel rod analysis. The approach that we take for model assessment involves implementation in two structurally different fuel performance codes, namely, BISON (multi-dimensional finite element code) and TRANSURANUS (1.5D semi-analytic code). The model is validated against 19 Light Water Reactor fuel rod irradiation experiments from the OECD/NEA IFPE (International Fuel Performance Experiments) database, all of which are simulated with both codes. The results point out an improvement in both the qualitative representation of the FGR kinetics and the quantitative predictions of integral fuel rod FGR, relative to the canonical, purely diffusion-based models, with both codes. The overall quantitative improvement of the FGR predictions in the two codes is comparable. Furthermore, calculated radial profiles of xenon concentration are investigated and compared to experimental data, demonstrating the representation of the underlying mechanisms of burst release by the new model.
Stajkovic, Alexander D; Lee, Dongseop; Nyberg, Anthony J
2009-05-01
The authors examined relationships among collective efficacy, group potency, and group performance. Meta-analytic results (based on 6,128 groups, 31,019 individuals, 118 correlations adjusted for dependence, and 96 studies) reveal that collective efficacy was significantly related to group performance (.35). In the proposed nested 2-level model, collective efficacy assessment (aggregation and group discussion) was tested as the 1st-level moderator. It showed significantly different average correlations with group performance (.32 vs. .45), but the group discussion assessment was homogeneous, whereas the aggregation assessment was heterogeneous. Consequently, there was no 2nd-level moderation for the group discussion, and heterogeneity in the aggregation group was accounted for by the 2nd-level moderator, task interdependence (high, moderate, and low levels were significant; the higher the level, the stronger the relationship). The 2nd and 3rd meta-analyses indicated that group potency was related to group performance (.29) and to collective efficacy (.65). When tested in a structural equation modeling analysis based on meta-analytic findings, collective efficacy fully mediated the relationship between group potency and group performance. The authors suggest future research and convert their findings to a probability of success index to help facilitate practice. (c) 2009 APA, all rights reserved.
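The basic pooling step behind such meta-analytic correlations is Fisher's z transform with inverse-variance weights; a minimal fixed-effect sketch on placeholder data follows (the study's own procedure additionally adjusted correlations for dependence and tested moderators).

    import numpy as np

    r = np.array([0.42, 0.31, 0.28, 0.45, 0.36])   # per-study correlations (placeholders)
    n = np.array([120, 85, 200, 60, 150])          # per-study sample sizes (placeholders)

    z = np.arctanh(r)                 # Fisher z transform
    w = n - 3                         # inverse-variance weights, Var(z) = 1/(n - 3)
    z_bar = np.sum(w * z) / np.sum(w)
    se = 1.0 / np.sqrt(np.sum(w))

    lo, hi = np.tanh(z_bar - 1.96 * se), np.tanh(z_bar + 1.96 * se)
    print(f"pooled r = {np.tanh(z_bar):.3f} (95% CI {lo:.3f} to {hi:.3f})")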
Analytical assessment of some characteristic ratios for s-wave superconductors
NASA Astrophysics Data System (ADS)
Gonczarek, Ryszard; Krzyzosiak, Mateusz; Gonczarek, Adam; Jacak, Lucjan
2018-04-01
We evaluate some thermodynamic quantities and characteristic ratios that describe low- and high-temperature s-wave superconducting systems. Based on a set of fundamental equations derived within the conformal transformation method, a simple model is proposed and studied analytically. After including a one-parameter class of fluctuations in the density of states, the mathematical structure of the s-wave superconducting gap, the free energy difference, and the specific heat difference is found and discussed in an analytic manner. Both the zero-temperature limit T = 0 and the subcritical temperature range T ≲ Tc are discussed using the method of successive approximations. The equation for the ratio R1, relating the zero-temperature energy gap and the critical temperature, is formulated and solved numerically for various values of the model parameter. Other thermodynamic quantities are analyzed, including a characteristic ratio R2, quantifying the dynamics of the specific heat jump at the critical temperature. It is shown that the obtained model results coincide with experimental data for low-Tc superconductors. The prospect of application of the presented model in studies of high-Tc superconductors and other superconducting systems of the new generation is also discussed.
NASA Technical Reports Server (NTRS)
Little, B. H., Jr.; Tomlin, K. H.; Aljabri, A. S.; Mason, C. A.
1988-01-01
One-ninth scale wind tunnel model tests of the Propfan Test Assessment (PTA) aircraft were performed in three different NASA facilities. Wing and propfan nacelle static pressures, model forces and moments, and flow field at the propfan plane were measured in these tests. Tests started in June 1985 and were completed in January 1987. These data were needed to assure PTA safety of flight, predict PTA performance, and validate analytical codes that will be used to predict flow fields in which the propfan will operate.
Studzinski, J
2017-06-01
The Digital Imaging Adoption Model (DIAM) has been jointly developed by HIMSS Analytics and the European Society of Radiology (ESR). It helps evaluate the maturity of IT-supported processes in medical imaging, particularly in radiology. This eight-stage maturity model drives organisational, strategic and tactical alignment in imaging-IT planning. The key audience for the model comprises hospitals with imaging centers, as well as external imaging centers that collaborate with hospitals. The assessment focuses on different dimensions relevant to digital imaging, such as software infrastructure and usage, workflow security, clinical documentation and decision support, data exchange and analytical capabilities. With its standardised approach, it enables regional, national and international benchmarking. All DIAM participants receive a structured report that can be used as a basis for presentations, e.g. for budget planning and investment decisions at management level.
Reliability and maintainability assessment factors for reliable fault-tolerant systems
NASA Technical Reports Server (NTRS)
Bavuso, S. J.
1984-01-01
A long term goal of the NASA Langley Research Center is the development of a reliability assessment methodology of sufficient power to enable the credible comparison of the stochastic attributes of one ultrareliable system design against others. This methodology, developed over a 10 year period, is a combined analytic and simulative technique. An analytic component is the Computer Aided Reliability Estimation capability, third generation, or simply CARE III. A simulative component is the Gate Logic Software Simulator capability, or GLOSS. This paper describes the numerous factors that potentially degrade system reliability, and the ways in which these factors, which are peculiar to highly reliable fault-tolerant systems, are accounted for in credible reliability assessments. Also presented are the modeling difficulties that result from their inclusion and the ways in which CARE III and GLOSS mitigate the intractability of the heretofore unworkable mathematics.
Kipper, Karin; Barker, Charlotte I S; Standing, Joseph F; Sharland, Mike; Johnston, Atholl
2018-01-01
Penicillins are widely used to treat infections in children; however, the evidence is continuing to evolve in defining the optimal dosing. Modern pediatric pharmacokinetic study protocols frequently favor opportunistic, "scavenged" sampling. This study aimed to develop a small-volume single assay for five major penicillins and to assess the influence of sample degradation on inferences made using pharmacokinetic modeling, to investigate the suitability of scavenged sampling strategies. Using a rapid ultrahigh-performance liquid chromatographic-tandem mass spectrometric method, an assay for five penicillins (amoxicillin, ampicillin, benzylpenicillin, piperacillin, and flucloxacillin) in blood plasma was developed and validated. Penicillin stabilities were evaluated under different conditions. Using these data, the impact of drug degradation on inferences made during pharmacokinetic modeling was evaluated. All evaluated penicillins indicated good stability at room temperature (23 ± 2°C) over 1 h, remaining in the range of 98 to 103% of the original concentration. More-rapid analyte degradation had already occurred after 4 h, with stability ranging from 68% to 99%. Stability over longer periods declined: degradation of up to 60% was observed with delayed sample processing of up to 24 h. Modeling showed that analyte degradation can lead to a 30% and 28% bias in clearance and volume of distribution, respectively, and falsely show nonlinearity in clearance. Five common penicillins can now be measured in a single low-volume blood sample. Beta-lactam chemical instability in plasma can cause misleading pharmacokinetic modeling results, which could impact upon model-based dosing recommendations and the forthcoming era of beta-lactam therapeutic drug monitoring. Copyright © 2017 American Society for Microbiology.
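The modeling point, that degradation during variable bench delays biases estimated pharmacokinetic parameters, can be reproduced with a toy simulation; all rates, times and the one-compartment setup below are hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)
    CL, V, dose = 5.0, 20.0, 500.0            # true clearance (L/h), volume (L), mg
    k = CL / V

    t = np.array([0.5, 1.0, 2.0, 4.0, 6.0])   # nominal sampling times (h)
    true_conc = (dose / V) * np.exp(-k * t)

    delay = rng.uniform(0.0, 24.0, t.size)    # bench delay before processing (h)
    k_deg = 0.02                              # hypothetical degradation rate (1/h)
    obs = true_conc * np.exp(-k_deg * delay)

    # A naive log-linear fit that ignores degradation recovers biased parameters.
    slope, intercept = np.polyfit(t, np.log(obs), 1)
    V_hat = dose / np.exp(intercept)
    CL_hat = -slope * V_hat
    print(f"true CL = {CL:.1f} L/h, naive estimate = {CL_hat:.2f} L/h")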
PFLOTRAN Verification: Development of a Testing Suite to Ensure Software Quality
NASA Astrophysics Data System (ADS)
Hammond, G. E.; Frederick, J. M.
2016-12-01
In scientific computing, code verification ensures the reliability and numerical accuracy of a model simulation by comparing the simulation results to experimental data or known analytical solutions. The model is typically defined by a set of partial differential equations with initial and boundary conditions, and verification ensures whether the mathematical model is solved correctly by the software. Code verification is especially important if the software is used to model high-consequence systems which cannot be physically tested in a fully representative environment [Oberkampf and Trucano (2007)]. Justified confidence in a particular computational tool requires clarity in the exercised physics and transparency in its verification process with proper documentation. We present a quality assurance (QA) testing suite developed by Sandia National Laboratories that performs code verification for PFLOTRAN, an open source, massively-parallel subsurface simulator. PFLOTRAN solves systems of generally nonlinear partial differential equations describing multiphase, multicomponent and multiscale reactive flow and transport processes in porous media. PFLOTRAN's QA test suite compares the numerical solutions of benchmark problems in heat and mass transport against known, closed-form, analytical solutions, including documentation of the exercised physical process models implemented in each PFLOTRAN benchmark simulation. The QA test suite development strives to follow the recommendations given by Oberkampf and Trucano (2007), which describes four essential elements in high-quality verification benchmark construction: (1) conceptual description, (2) mathematical description, (3) accuracy assessment, and (4) additional documentation and user information. Several QA tests within the suite will be presented, including details of the benchmark problems and their closed-form analytical solutions, implementation of benchmark problems in PFLOTRAN simulations, and the criteria used to assess PFLOTRAN's performance in the code verification procedure. References Oberkampf, W. L., and T. G. Trucano (2007), Verification and Validation Benchmarks, SAND2007-0853, 67 pgs., Sandia National Laboratories, Albuquerque, NM.
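A minimal instance of the kind of check such a QA suite automates: an explicit finite-difference solution of 1-D diffusion compared against the closed-form erfc solution for a step inlet condition, with an error norm as the pass/fail metric. The tolerance and parameter values are illustrative, not PFLOTRAN's.

    import numpy as np
    from scipy.special import erfc

    D, L, T = 1.0e-9, 0.05, 1.0e5        # diffusivity (m^2/s), domain (m), end time (s)
    nx = 201
    x = np.linspace(0.0, L, nx)
    dx = x[1] - x[0]
    dt = 0.4 * dx ** 2 / D               # stable explicit time step
    c = np.zeros(nx)
    c[0] = 1.0                           # unit step concentration at the inlet

    for _ in range(int(T / dt)):
        c[1:-1] += D * dt / dx ** 2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
        c[0], c[-1] = 1.0, 0.0

    exact = erfc(x / (2.0 * np.sqrt(D * T)))   # semi-infinite analytical solution
    err = np.sqrt(np.mean((c - exact) ** 2))
    print("PASS" if err < 1e-2 else "FAIL", f"L2 error = {err:.2e}")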
Assurance of Learning in the MIS Program
ERIC Educational Resources Information Center
Harper, Jeffrey S.; Harder, Joseph T.
2009-01-01
This article describes the development of a systematic and practical methodology for assessing program effectiveness and monitoring student development in undergraduate decision sciences programs. The model we present is based on a student's progression through learning stages associated with four key competencies: technical, analytical,…
DOT National Transportation Integrated Search
2012-11-01
The effects of ASR/DEF on the D-regions of structures are investigated by means of a dual experimental and analytical modeling program. Four near full scale specimens that represent cantilever and straddle pier bents, that are representative of t...
Modeling and evaluating user behavior in exploratory visual analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reda, Khairi; Johnson, Andrew E.; Papka, Michael E.
Empirical evaluation methods for visualizations have traditionally focused on assessing the outcome of the visual analytic process as opposed to characterizing how that process unfolds. There are only a handful of methods that can be used to systematically study how people use visualizations, making it difficult for researchers to capture and characterize the subtlety of cognitive and interaction behaviors users exhibit during visual analysis. To validate and improve visualization design, however, it is important for researchers to be able to assess and understand how users interact with visualization systems under realistic scenarios. This paper presents a methodology for modeling and evaluating the behavior of users in exploratory visual analysis. We model visual exploration using a Markov chain process comprising transitions between mental, interaction, and computational states. These states and the transitions between them can be deduced from a variety of sources, including verbal transcripts, videos and audio recordings, and log files. This model enables the evaluator to characterize the cognitive and computational processes that are essential to insight acquisition in exploratory visual analysis, and reconstruct the dynamics of interaction between the user and the visualization system. We illustrate this model with two exemplar user studies, and demonstrate the qualitative and quantitative analytical tools it affords.
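The Markov chain component of such a model can be estimated directly from coded event logs by tallying and row-normalizing observed transitions; a minimal sketch with hypothetical state labels follows.

    import numpy as np

    # Coded observation sequence from one analysis session (hypothetical labels).
    log = ["observe", "hypothesize", "interact", "observe", "interact",
           "observe", "hypothesize", "insight", "observe", "interact"]

    states = sorted(set(log))
    idx = {s: i for i, s in enumerate(states)}
    counts = np.zeros((len(states), len(states)))

    for a, b in zip(log[:-1], log[1:]):              # tally observed transitions
        counts[idx[a], idx[b]] += 1

    P = counts / counts.sum(axis=1, keepdims=True)   # row-normalized transition matrix
    print(states)
    print(P.round(2))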
The Analysis of Adhesively Bonded Advanced Composite Joints Using Joint Finite Elements
NASA Technical Reports Server (NTRS)
Stapleton, Scott E.; Waas, Anthony M.
2012-01-01
The design and sizing of adhesively bonded joints has always been a major bottleneck in the design of composite vehicles. Dense finite element (FE) meshes are required to capture the full behavior of a joint numerically, but these dense meshes are impractical in vehicle-scale models where a coarse mesh is more desirable to make quick assessments and comparisons of different joint geometries. Analytical models are often helpful in sizing, but difficulties arise in coupling these models with full-vehicle FE models. Therefore, a joint FE was created which can be used within structural FE models to make quick assessments of bonded composite joints. The shape functions of the joint FE were found by solving the governing equations for a structural model for a joint. By analytically determining the shape functions of the joint FE, the complex joint behavior can be captured with very few elements. This joint FE was modified and used to consider adhesives with functionally graded material properties to reduce the peel stress concentrations located near adherend discontinuities. Several practical concerns impede the actual use of such adhesives. These include increased manufacturing complications, alterations to the grading due to adhesive flow during manufacturing, and whether changing the loading conditions significantly impacts the effectiveness of the grading. An analytical study is conducted to address these three concerns. Furthermore, proof-of-concept testing is conducted to show the potential advantages of functionally graded adhesives. In this study, grading is achieved by strategically placing glass beads within the adhesive layer at different densities along the joint. Furthermore, the capability to model non-linear adhesive constitutive behavior with large rotations was developed, and progressive failure of the adhesive was modeled by re-meshing the joint as the adhesive fails. Results predicted using the joint FE were compared with experimental results for various joint configurations, including double cantilever beam and single lap joints.
Olariu, Elena; Cadwell, Kevin K; Hancock, Elizabeth; Trueman, David; Chevrou-Severac, Helene
2017-01-01
Although Markov cohort models represent one of the most common forms of decision-analytic models used in health care decision-making, correct implementation of such models requires reliable estimation of transition probabilities. This study sought to identify consensus statements or guidelines that detail how such transition probability matrices should be estimated. A literature review was performed to identify relevant publications in the following databases: Medline, Embase, the Cochrane Library, and PubMed. Electronic searches were supplemented by manual searches of health technology assessment (HTA) websites in Australia, Belgium, Canada, France, Germany, Ireland, Norway, Portugal, Sweden, and the UK. One reviewer assessed studies for eligibility. Of the 1,931 citations identified in the electronic searches, no studies met the inclusion criteria for full-text review, and no guidelines on transition probabilities in Markov models were identified. Manual searching of the websites of HTA agencies identified ten guidelines on economic evaluations (Australia, Belgium, Canada, France, Germany, Ireland, Norway, Portugal, Sweden, and UK). All identified guidelines provided general guidance on how to develop economic models, but none provided guidance on the calculation of transition probabilities. One relevant publication was identified following review of the reference lists of HTA agency guidelines: the International Society for Pharmacoeconomics and Outcomes Research taskforce guidance. This provided limited guidance on the use of rates and probabilities. There is limited formal guidance available on the estimation of transition probabilities for use in decision-analytic models. Given the increasing importance of cost-effectiveness analysis in the decision-making processes of HTA bodies and other medical decision-makers, there is a need for additional guidance to inform a more consistent approach to decision-analytic modeling. Further research should be done to develop more detailed guidelines on the estimation of transition probabilities.
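The one relevant guidance identified (the ISPOR task force material on rates and probabilities) concerns conversions that are short enough to state directly; a sketch under the usual constant-rate assumption.

    import math

    def prob_from_rate(rate, t):
        """Probability of an event within time t for a constant rate."""
        return 1.0 - math.exp(-rate * t)

    def rescale_prob(p, t_from, t_to):
        """Rescale a transition probability to a new cycle length via the
        underlying constant rate (never by simple division)."""
        rate = -math.log(1.0 - p) / t_from
        return prob_from_rate(rate, t_to)

    p_annual = 0.20
    print(f"monthly: {rescale_prob(p_annual, 12.0, 1.0):.4f}")  # ~0.0184, not 0.2/12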
Using a dyadic logistic multilevel model to analyze couple data.
Preciado, Mariana A; Krull, Jennifer L; Hicks, Andrew; Gipson, Jessica D
2016-02-01
There is growing recognition within the sexual and reproductive health field of the importance of incorporating both partners' perspectives when examining sexual and reproductive health behaviors. Yet, analytical approaches to couple data have not been readily integrated and utilized within the demographic and public health literature. This paper provides readers unfamiliar with such approaches an applied example of dyadic logistic multilevel modeling, a useful approach for analyzing couple data to assess the individual, partner and couple characteristics that are related to individuals' reproductively relevant beliefs, attitudes and behaviors. The use of multilevel models in reproductive health research can help researchers develop a more comprehensive picture of the way in which individuals' reproductive health outcomes are situated in a larger relationship and cultural context. Copyright © 2016 Elsevier Inc. All rights reserved.
A model to inform management actions as a response to chytridiomycosis-associated decline
Converse, Sarah J.; Bailey, Larissa L.; Mosher, Brittany A.; Funk, W. Chris; Gerber, Brian D.; Muths, Erin L.
2017-01-01
Decision-analytic models provide forecasts of how systems of interest will respond to management. These models can be parameterized using empirical data, but sometimes require information elicited from experts. When evaluating the effects of disease in species translocation programs, expert judgment is likely to play a role because complete empirical information will rarely be available. We illustrate development of a decision-analytic model built to inform decision-making regarding translocations and other management actions for the boreal toad (Anaxyrus boreas boreas), a species with declines linked to chytridiomycosis caused by Batrachochytrium dendrobatidis (Bd). Using the model, we explored the management implications of major uncertainties in this system, including whether there is a genetic basis for resistance to pathogenic infection by Bd, how translocation can best be implemented, and the effectiveness of efforts to reduce the spread of Bd. Our modeling exercise suggested that while selection for resistance to pathogenic infection by Bd could increase numbers of sites occupied by toads, and translocations could increase the rate of toad recovery, efforts to reduce the spread of Bd may have little effect. We emphasize the need to continue developing and parameterizing models necessary to assess management actions for combating chytridiomycosis-associated declines.
Raman spectroscopy for the analytical quality control of low-dose break-scored tablets.
Gómez, Diego A; Coello, Jordi; Maspoch, Santiago
2016-05-30
Quality control of solid dosage forms involves the analysis of end products according to well-defined criteria, including the assessment of the uniformity of dosage units (UDU). However, in the case of break-scored tablets, given that tablet splitting is widespread as a means to adjust doses, the uniform distribution of the active pharmaceutical ingredient (API) in all the possible fractions of the tablet must be assessed. A general procedure to address both issues, using Raman spectroscopy, is presented. It is based on the acquisition of a collection of spectra in different regions of the tablet, which can later be selected to determine the amount of API in the potential fractions that can result after splitting. The procedure has been applied to two commercial products, Sintrom 1 and Sintrom 4, with API (acenocoumarol) mass proportions of 2% and 0.7%, respectively. Partial Least Squares (PLS) calibration models were constructed for the quantification of acenocoumarol in whole tablets using HPLC as a reference analytical method. Once validated, the calibration models were used to determine the API content in the different potential fragments of the scored Sintrom 4 tablets. Fragment mass measurements were also performed to estimate the range of masses of the halves and quarters that could result after tablet splitting. The results show that Raman spectroscopy can be an alternative analytical procedure to assess the uniformity of content, both in whole tablets and in their potential fragments, and that Sintrom 4 tablets can be reliably split into halves, but some caution is needed when considering fragmentation into quarters. A practical alternative to the use of the UDU test for the assessment of tablet fragments is proposed. Copyright © 2016 Elsevier B.V. All rights reserved.
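The PLS calibration step might look as follows in scikit-learn; the spectra and reference values below are synthetic stand-ins for the Raman measurements and HPLC assays.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 600))     # stand-in Raman spectra (40 tablets, 600 shifts)
    y = 0.7 + 0.05 * X[:, 100] + 0.01 * rng.normal(size=40)  # stand-in API % by HPLC

    pls = PLSRegression(n_components=3)   # latent variables chosen by CV in practice
    pls.fit(X[:30], y[:30])               # calibration set
    pred = pls.predict(X[30:]).ravel()    # validation set
    rmsep = np.sqrt(np.mean((pred - y[30:]) ** 2))
    print(f"RMSEP = {rmsep:.4f} (API mass %)")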
An interactive website for analytical method comparison and bias estimation.
Bahar, Burak; Tuncel, Ayse F; Holmes, Earle W; Holmes, Daniel T
2017-12-01
Regulatory standards mandate that laboratories perform studies to ensure the accuracy and reliability of their test results. Method comparison and bias estimation are important components of these studies. We developed an interactive website for evaluating the relative performance of two analytical methods using R programming language tools. The website can be accessed at https://bahar.shinyapps.io/method_compare/. The site has an easy-to-use interface that allows both copy-pasting and manual entry of data. It also allows selection of a regression model and creation of regression and difference plots. Available regression models include Ordinary Least Squares, Weighted-Ordinary Least Squares, Deming, Weighted-Deming, Passing-Bablok and Passing-Bablok for large datasets. The server processes the data and generates downloadable reports in PDF or HTML format. Our website provides clinical laboratories with a practical way to assess the relative performance of two analytical methods. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
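Of the regression models offered by the site, Deming regression is compact enough to sketch; a minimal implementation for a given error-variance ratio, applied to synthetic method-comparison data.

    import numpy as np

    def deming(x, y, lam=1.0):
        """Deming regression; lam is the ratio var(error_y)/var(error_x)."""
        mx, my = x.mean(), y.mean()
        sxx = np.sum((x - mx) ** 2)
        syy = np.sum((y - my) ** 2)
        sxy = np.sum((x - mx) * (y - my))
        b = ((syy - lam * sxx) + np.sqrt((syy - lam * sxx) ** 2
             + 4.0 * lam * sxy ** 2)) / (2.0 * sxy)
        return b, my - b * mx

    rng = np.random.default_rng(3)
    truth = rng.uniform(1.0, 10.0, 60)
    x = truth + rng.normal(0.0, 0.2, 60)               # method 1 with random error
    y = 1.05 * truth - 0.1 + rng.normal(0.0, 0.2, 60)  # method 2 with proportional bias

    slope, intercept = deming(x, y)
    print(f"slope = {slope:.3f}, intercept = {intercept:.3f}")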
NASA Astrophysics Data System (ADS)
Sen, Osman Taha; Dreyer, Jason T.; Singh, Rajendra
2014-12-01
In this article, a feasibility study of controlling the low frequency torque response of a disc brake system with modulated actuation pressure (in the open loop mode) is conducted. First, a quasi-linear model of the torsional system is introduced, and analytical solutions are proposed to incorporate the modulation effect. Tractable expressions for three different modulation schemes are obtained, and conditions that would lead to a reduction in the oscillatory amplitudes are identified. Second, these conditions are evaluated with a numerical model of the torsional system with clearance nonlinearity, and analytical solutions are verified in terms of the trends observed. Finally, a laboratory experiment with a solenoid valve is built to modulate actuation pressure with a constant duty cycle, and time-frequency domain data are acquired. Measurements are utilized to assess analytical observations, and all methods show that the speed-dependent brake torque amplitudes can be altered with an appropriate modulation of actuation pressure.
Torsional vibration of a cracked rod by variational formulation and numerical analysis
NASA Astrophysics Data System (ADS)
Chondros, T. G.; Labeas, G. N.
2007-04-01
The torsional vibration of a circumferentially cracked cylindrical shaft is studied through an "exact" analytical solution and a numerical finite element (FE) analysis. The Hu-Washizu-Barr variational formulation is used to develop the differential equation and the boundary conditions of the cracked rod. The equations of motion for a uniform cracked rod in torsional vibration are derived and solved, and the Rayleigh quotient is used to further approximate the natural frequencies of the cracked rod. Results for the problem of the torsional vibration of a cylindrical shaft with a peripheral crack are provided through an analytical solution based on variational formulation to derive the equation of motion and a numerical analysis utilizing a parametric three-dimensional (3D) solid FE model of the cracked rod. The crack is modelled as a continuous flexibility based on fracture mechanics principles. The variational formulation results are compared with the FE alternative. The sensitivity of the FE discretization with respect to the analytical results is assessed.
NASA Astrophysics Data System (ADS)
Cvetkovic, V.; Molin, S.
2012-02-01
We present a methodology that combines numerical simulations of groundwater flow and advective transport in heterogeneous porous media with analytical retention models for computing the infection risk probability from pathogens in aquifers. The methodology is based on the analytical results presented in [1,2] for utilising the colloid filtration theory in a time-domain random walk (TDRW) framework. It is shown that in uniform flow, the numerical simulations of advection yield results comparable to those of the analytical TDRW model for generating advection segments. We also show that spatial variability of the attachment rate may be significant; however, it appears to affect risk in a different manner depending on whether the flow is uniform or radially converging. Although numerous issues remain open regarding pathogen transport in aquifers on the field scale, the methodology presented here may be useful for screening purposes, and may also serve as a basis for future studies that would include greater complexity.
A non-grey analytical model for irradiated atmospheres. II. Analytical vs. numerical solutions
NASA Astrophysics Data System (ADS)
Parmentier, Vivien; Guillot, Tristan; Fortney, Jonathan J.; Marley, Mark S.
2015-02-01
Context. The recent discovery and characterization of the diversity of the atmospheres of exoplanets and brown dwarfs calls for the development of fast and accurate analytical models. Aims: We wish to assess the goodness of the different approximations used to solve the radiative transfer problem in irradiated atmospheres analytically, and we aim to provide a useful tool for a fast computation of analytical temperature profiles that remains correct over a wide range of atmospheric characteristics. Methods: We quantify the accuracy of the analytical solution derived in paper I for an irradiated, non-grey atmosphere by comparing it to a state-of-the-art radiative transfer model. Then, using a grid of numerical models, we calibrate the different coefficients of our analytical model for irradiated solar-composition atmospheres of giant exoplanets and brown dwarfs. Results: We show that the so-called Eddington approximation used to solve the angular dependency of the radiation field leads to relative errors of up to ~5% on the temperature profile. For grey or semi-grey atmospheres (i.e., when the visible and thermal opacities, respectively, can be considered independent of wavelength), we show that the presence of a convective zone has a limited effect on the radiative atmosphere above it and leads to modifications of the radiative temperature profile of ~2%. However, for realistic non-grey planetary atmospheres, the presence of a convective zone that extends to optical depths smaller than unity can lead to changes in the radiative temperature profile on the order of 20% or more. When the convective zone is located at deeper levels (such as for strongly irradiated hot Jupiters), its effect on the radiative atmosphere is again on the same order (~2%) as in the semi-grey case. We show that the temperature inversion induced by a strong absorber in the optical, such as TiO or VO, is mainly due to non-grey thermal effects reducing the ability of the upper atmosphere to cool down rather than an enhanced absorption of the stellar light as previously thought. Finally, we provide a functional form for the coefficients of our analytical model for solar-composition giant exoplanets and brown dwarfs. This leads to fully analytical pressure-temperature profiles for irradiated atmospheres with a relative accuracy better than 10% for gravities between 2.5 m s^-2 and 250 m s^-2 and effective temperatures between 100 K and 3000 K. This is a great improvement over the commonly used Eddington boundary condition. A FORTRAN implementation of the analytical model is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/574/A35 or at http://www.oca.eu/parmentier/nongrey. Appendix A is available in electronic form at http://www.aanda.org
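For readers without access to the FORTRAN code, the semi-grey limit of such profiles can be written down directly. The sketch below implements the widely used semi-grey irradiated profile of Guillot (2010), the case that the non-grey model generalizes; the parameter values are illustrative assumptions, not fitted coefficients from this paper.

    import numpy as np

    def T_semigrey(tau, T_int, T_irr, gamma, mu=1 / np.sqrt(3)):
        # Semi-grey temperature profile of an irradiated atmosphere
        # (Guillot 2010): tau is the thermal optical depth, gamma the
        # visible-to-thermal opacity ratio, mu the incidence cosine.
        t4 = (0.75 * T_int**4 * (2.0 / 3.0 + tau)
              + 0.75 * T_irr**4 * mu * (2.0 / 3.0 + mu / gamma
                  + (gamma / (3 * mu) - mu / gamma)
                  * np.exp(-gamma * tau / mu)))
        return t4 ** 0.25

    tau = np.logspace(-4, 2, 7)
    print(np.round(T_semigrey(tau, T_int=200.0, T_irr=1500.0, gamma=0.5), 1))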
Aziza, Fanny; Mettler, Eric; Daudin, Jean-Jacques; Sanaa, Moez
2006-06-01
Cheese smearing is a complex process and the potential for cross-contamination with pathogenic or undesirable microorganisms is critical. During ripening, cheeses are salted and washed with brine to develop flavor and remove molds that could develop on the surfaces. Considering the potential for cross-contamination of this process in quantitative risk assessments could contribute to a better understanding of this phenomenon and, eventually, improve its control. The purpose of this article is to model the cross-contamination of smear-ripened cheeses due to the smearing operation under industrial conditions. A compartmental, dynamic, and stochastic model is proposed for mechanical brush smearing. This model has been developed to describe the exchange of microorganisms between compartments. Based on the analytical solution of the model equations and on experimental data collected with an industrial smearing machine, we assessed the values of the transfer parameters of the model. Monte Carlo simulations, using the distributions of transfer parameters, provide the final number of contaminated products in a batch and their final level of contamination for a given scenario taking into account the initial number of contaminated cheeses of the batch and their contaminant load. Based on analytical results, the model provides indicators for smearing efficiency and propensity of the process for cross-contamination. Unlike traditional approaches in mechanistic models, our approach captures the variability and uncertainty inherent in the process and the experimental data. More generally, this model could represent a generic base to use in modeling similar processes prone to cross-contamination.
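The compartmental exchange idea is easy to illustrate with Monte Carlo: each cheese in the batch exchanges cells with the brush in sequence, and the two transfer fractions are drawn per run. In the sketch below, the Beta distributions and batch parameters are hypothetical stand-ins for the fitted parameter distributions, not the authors' values.

    import numpy as np

    rng = np.random.default_rng(7)

    def smear_batch(n_cheese=200, n_contam=5, load=1e4, n_runs=1000):
        # Monte Carlo over batches: brush <-> cheese exchange of cells.
        final_counts = []
        for _ in range(n_runs):
            p_cb = rng.beta(2, 18)   # cheese -> brush transfer fraction
            p_bc = rng.beta(2, 38)   # brush -> cheese transfer fraction
            brush = 0.0
            cheese = np.zeros(n_cheese)
            cheese[rng.choice(n_cheese, n_contam, replace=False)] = load
            for i in range(n_cheese):          # cheeses smeared in sequence
                to_brush = p_cb * cheese[i]
                to_cheese = p_bc * brush
                cheese[i] += to_cheese - to_brush
                brush += to_brush - to_cheese
            final_counts.append(np.sum(cheese > 1.0))
        return np.mean(final_counts)

    print("mean contaminated cheeses per batch:", smear_batch())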
Curtis A. Collins; David L. Evans; Keith L. Belli; Patrick A. Glass
2010-01-01
Hurricane Katrina's passage through south Mississippi on August 29, 2005, which damaged or destroyed thousands of hectares of forest land, was followed by massive salvage, cleanup, and assessment efforts. An initial assessment by the Mississippi Forestry Commission estimated that over $1 billion in raw wood material was downed by the storm, with county-level damage...
Probabilistic seismic vulnerability and risk assessment of stone masonry structures
NASA Astrophysics Data System (ADS)
Abo El Ezz, Ahmad
Earthquakes represent major natural hazards that regularly impact the built environment in seismic-prone areas worldwide and cause considerable social and economic losses. The high losses incurred following past destructive earthquakes promoted the need for assessment of the seismic vulnerability and risk of existing buildings. Many historic buildings in the old urban centers of Eastern Canada, such as Old Quebec City, are built of stone masonry and represent immeasurable architectural and cultural heritage. These buildings were built to resist gravity loads only and generally offer poor resistance to lateral seismic loads. Seismic vulnerability assessment of stone masonry buildings is therefore the first necessary step in developing seismic retrofitting and pre-disaster mitigation plans. The objective of this study is to develop a set of probability-based analytical tools for efficient seismic vulnerability and uncertainty analysis of stone masonry buildings. A simplified probabilistic analytical methodology for vulnerability modelling of stone masonry buildings, with systematic treatment of uncertainties throughout the modelling process, is developed in the first part of this study. Building capacity curves are developed using a simplified mechanical model. A displacement-based procedure is used to develop damage-state fragility functions in terms of spectral displacement response based on drift thresholds of stone masonry walls. A simplified probabilistic seismic demand analysis is proposed to capture the combined uncertainty in capacity and demand on fragility functions. In the second part, a robust analytical procedure for the development of seismic-hazard-compatible fragility and vulnerability functions is proposed. The results are given by sets of seismic-hazard-compatible vulnerability functions in terms of a structure-independent intensity measure (e.g. spectral acceleration) that can be used for seismic risk analysis. The procedure is very efficient for conducting rapid vulnerability assessment of stone masonry buildings; with modification of the input structural parameters, it can be adapted and applied to any other building class. A sensitivity analysis of the seismic vulnerability modelling is conducted to quantify the uncertainties associated with each of the input parameters. The proposed methodology was validated for a scenario-based seismic risk assessment of existing buildings in Old Quebec City. The procedure for hazard-compatible vulnerability modelling was used to develop seismic fragility functions in terms of spectral acceleration representative of the inventoried buildings. A total of 1220 buildings were considered. The assessment was performed for a scenario event of magnitude 6.2 at a distance of 15 km, with a probability of exceedance of 2% in 50 years. The study showed that most of the expected damage is concentrated in the old brick and stone masonry buildings.
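The damage-state fragility functions described here are conventionally lognormal in the intensity measure; a minimal sketch follows, with hypothetical median capacities and dispersions for a masonry building class (the study's fitted values are not reproduced).

    import numpy as np
    from scipy.stats import norm

    def fragility(im, theta, beta):
        # P(damage state reached or exceeded | intensity measure im),
        # with median capacity theta and lognormal dispersion beta.
        return norm.cdf(np.log(im / theta) / beta)

    sa = np.linspace(0.05, 1.5, 6)   # spectral acceleration (g)
    states = {"slight": (0.15, 0.6), "moderate": (0.30, 0.6),
              "extensive": (0.55, 0.6), "complete": (0.90, 0.6)}
    for name, (theta, beta) in states.items():
        print(f"{name:>9}:", np.round(fragility(sa, theta, beta), 3))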
A Criterion-Related Validation Study of the Army Core Leader Competency Model
2007-04-01
2004). Transformational and transactional leadership: A meta-analytic test of their relative validity. Journal of Applied Psychology, 89, 755-768. ... performance criteria in an attempt to adjust ratings for this influence. Leader survey materials were developed and pilot tested at Ft. Drum and Ft. ... psychological constructs in the behavioral science realm. Numerous theories, popular literature, websites, assessments, and competency models are
Belcour, Laurent; Pacanowski, Romain; Delahaie, Marion; Laville-Geay, Aude; Eupherte, Laure
2014-12-01
We compare the performance of various analytical retroreflecting bidirectional reflectance distribution function (BRDF) models to assess how accurately they reproduce measured data of retroreflecting materials. We introduce a new parametrization, the back vector parametrization, to analyze retroreflecting data, and we show that this parametrization better preserves the isotropy of the data. Furthermore, we update existing BRDF models to improve the representation of retroreflective data.
Using constraints and their value for optimization of large ODE systems
Domijan, Mirela; Rand, David A.
2015-01-01
We provide analytical tools to facilitate a rigorous assessment of the quality and value of the fit of a complex model to data. We use this to provide approaches to model fitting, parameter estimation, the design of optimization functions and experimental optimization. This is in the context where multiple constraints are used to select or optimize a large model defined by differential equations. We illustrate the approach using models of circadian clocks and the NF-κB signalling system. PMID:25673300
Thermal Barrier Coatings. Abstracts and figures
NASA Technical Reports Server (NTRS)
1985-01-01
The Thermal Barrier Coatings Workshop was held May 21 and 22, 1985, at the NASA Lewis Research Center in Cleveland, Ohio. Six sessions covered Failure Mechanisms and Life Modeling, Effects of Oxidation and Creep, Phase Stability and Microstructural Aspects, Nondestructive and Analytical Assessment, Coating Development, and Alternative Applications.
Web portal on environmental sciences "ATMOS''
NASA Astrophysics Data System (ADS)
Gordov, E. P.; Lykosov, V. N.; Fazliev, A. Z.
2006-06-01
The web portal ATMOS (http://atmos.iao.ru and http://atmos.scert.ru), developed under an INTAS grant, makes available to the international research community, environmental managers, and the interested public a bilingual information source for the domain of Atmospheric Physics and Chemistry and the related application domain of air quality assessment and management. It offers access to integrated thematic information, experimental data, analytical tools and models, case studies, and related information and educational resources compiled, structured, and edited by the partners into a coherent and consistent thematic information resource. While offering the usual components of a thematic site, such as link collections, user group registration, a discussion forum, and a news section, the site is distinguished by its scientific information services and tools: on-line models and analytical tools, and data collections and case studies together with tutorial material. The portal is organized as a set of interrelated scientific sites addressing basic branches of the Atmospheric Sciences and Climate Modeling as well as the applied domains of Air Quality Assessment and Management, Modeling, and Environmental Impact Assessment. Each scientific site is an information-computational system open for external access, realized by means of Internet technologies. The main basic science topics are devoted to Atmospheric Chemistry, Atmospheric Spectroscopy and Radiation, Atmospheric Aerosols, and Atmospheric Dynamics and Atmospheric Models, including climate models. The portal ATMOS reflects the current transformation of the environmental sciences into exact (quantitative) sciences and is an effective example of the integration of modern information technologies with the environmental sciences. This makes the portal both an auxiliary instrument supporting interdisciplinary projects on the regional environment and an extensive educational resource in this important domain.
Performance of a Fuel-Cell-Powered, Small Electric Airplane Assessed
NASA Technical Reports Server (NTRS)
Berton, Jeffrey J.
2004-01-01
Rapidly emerging fuel-cell-power technologies may be used to launch a new revolution of electric propulsion systems for light aircraft. Future small electric airplanes using fuel cell technologies hold the promise of high reliability, low maintenance, low noise, and - with the exception of water vapor - zero emissions. An analytical feasibility and performance assessment was conducted by NASA Glenn Research Center's Airbreathing Systems Analysis Office of a fuel-cell-powered, propeller-driven, small electric airplane based on a model of the MCR-01 two-place kitplane (Dyn'Aero, Darois, France). This assessment was conducted in parallel with an ongoing effort by the Advanced Technology Products Corporation and the Foundation for Advancing Science and Technology Education. Their project - partially funded by a NASA grant - is to design, build, and fly the first manned, continuously propelled, nongliding electric airplane. In our study, an analytical performance model of a proton exchange membrane (PEM) fuel cell propulsion system was developed and applied to a notional, two-place light airplane modeled after the MCR-01 kitplane. The PEM fuel cell stack was fed pure hydrogen fuel and humidified ambient air via a small automotive centrifugal supercharger. The fuel cell performance models were based on chemical reaction analyses calibrated with published data from the fledgling U.S. automotive fuel cell industry. Electric propeller motors, rated at two shaft power levels in separate assessments, were used to directly drive a two-bladed, variable-pitch propeller. Fuel sources considered were compressed hydrogen gas and cryogenic liquid hydrogen. Both of these fuel sources provided pure, contaminant-free hydrogen for the PEM cells.
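The heart of such an assessment is a polarization curve converting current density into cell voltage and stack power. The sketch below uses a generic empirical PEM model (activation, ohmic and concentration losses, in the spirit of Larminie and Dicks); every parameter value is an illustrative placeholder, not the calibrated model of the NASA study.

    import numpy as np

    def pem_stack(i, n_cells=220, area_cm2=300.0,
                  E0=1.0, A=0.05, i0=1e-3, r=0.10, m=3e-5, n=5.0):
        # Cell voltage (V) and stack power (W); i is current density in
        # A/cm^2. Losses: activation (log term), ohmic (r in ohm*cm^2)
        # and concentration (exponential term).
        i = np.asarray(i, float)
        v = E0 - A * np.log(i / i0) - r * i - m * np.exp(n * i)
        return v, v * i * area_cm2 * n_cells

    for i in (0.05, 0.3, 0.6, 0.9, 1.2):
        v, p = pem_stack(i)
        print(f"i={i:4.2f} A/cm^2  V={v:5.3f} V  P={p / 1e3:6.1f} kW")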
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gastelum, Zoe N.; Whitney, Paul D.; White, Amanda M.
2013-07-15
Pacific Northwest National Laboratory has spent several years researching, developing, and validating large Bayesian network models to support integration of open source data sets for nuclear proliferation research. Our current work focuses on generating a set of interrelated models for multi-source assessment of nuclear programs, as opposed to a single comprehensive model. By using this approach, we can break down the models to cover logical sub-problems that can utilize different expertise and data sources. This approach allows researchers to utilize the models individually or in combination to detect and characterize a nuclear program and identify data gaps. The models operate at various levels of granularity, covering a combination of state-level assessments with more detailed models of site or facility characteristics. This paper will describe the current open source-driven, nuclear nonproliferation models under development, the pros and cons of the analytical approach, and areas for additional research.
ERIC Educational Resources Information Center
Sawaki, Yasuyo
2007-01-01
This is a construct validation study of a second language speaking assessment that reported a language profile based on analytic rating scales and a composite score. The study addressed three key issues: score dependability, convergent/discriminant validity of analytic rating scales and the weighting of analytic ratings in the composite score.…
Statistically Qualified Neuro-Analytic System and Method for Process Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
1998-11-04
An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.
Evolution of an Implementation-Ready Interprofessional Pain Assessment Reference Model
Collins, Sarah A; Bavuso, Karen; Swenson, Mary; Suchecki, Christine; Mar, Perry; Rocha, Roberto A.
2017-01-01
Standards to increase the consistency of comprehensive pain assessments are important for safety, quality, and analytics activities, including meeting Joint Commission requirements and learning the best management strategies and interventions for the current prescription opioid epidemic. In this study, we describe the development and validation of a Pain Assessment Reference Model ready for implementation in EHR forms and flowsheets. Our process resulted in 5 successive revisions of the reference model, which more than doubled the number of data elements to 47. The organization of the model evolved during validation sessions with panels totaling 48 subject matter experts (SMEs) to include 9 sets of data elements, with one set recommended as a minimal data set. The reference model also evolved when implemented in EHR forms and flowsheets, indicating specifications, such as cascading logic, that are important to inform secondary use of data. PMID:29854125
Highlights of Transient Plume Impingement Model Validation and Applications
NASA Technical Reports Server (NTRS)
Woronowicz, Michael
2011-01-01
This paper describes highlights of an ongoing validation effort conducted to assess the viability of applying a set of analytic point source transient free molecule equations to model behavior ranging from molecular effusion to rocket plumes. The validation effort includes encouraging comparisons to both steady and transient studies involving experimental data and direct simulation Monte Carlo results. Finally, this model is applied to describe features of two exotic transient scenarios involving NASA Goddard Space Flight Center satellite programs.
2013-06-01
realistically representing the world in a simulation environment. There are six... changes in use of technology (Ryan & Jons, 1992). Cost effectiveness and operational effectiveness are important, and it is extremely hard to achieve... effectiveness of ships using simulation and analytical models, to create a ship synthesis model, and most importantly, to develop decision making tools
Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian
2014-01-01
Assessment of the total uncertainty of analytical methods for the measurement of drugs in human hair has mainly been derived from the analytical variation. However, in hair analysis several other sources of uncertainty will contribute to the total uncertainty. Particularly in segmental hair analysis, pre-analytical variations associated with the sampling and segmentation may be significant factors in the assessment of the total uncertainty budget. The aim of this study was to develop and validate a method for the analysis of 31 common drugs in hair using ultra-high-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) with focus on the assessment of both the analytical and pre-analytical sampling variations. The validated method was specific, accurate (80-120%), and precise (CV≤20%) across a wide linear concentration range from 0.025 to 25 ng/mg for most compounds. The analytical variation was estimated to be less than 15% for almost all compounds. The method was successfully applied to 25 segmented hair specimens from deceased drug addicts showing a broad pattern of poly-drug use. The pre-analytical sampling variation was estimated from genuine duplicate measurements of two bundles of hair collected from each subject, after subtraction of the analytical component. For the most frequently detected analytes, the pre-analytical variation was estimated to be 26-69%. Thus, the pre-analytical variation was 3-7-fold larger than the analytical variation (7-13%) and hence the dominant component of the total variation (29-70%). The present study demonstrated the importance of including the pre-analytical variation in the assessment of the total uncertainty budget and in the setting of the 95% uncertainty interval (±2CVT).
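The decomposition used is the standard one: the pre-analytical variance is whatever remains of the duplicate-bundle (total) variance once the analytical variance is subtracted. A minimal sketch with invented duplicate concentrations:

    import numpy as np

    def cv_components(dup_a, dup_b, cv_analytical):
        # Within-pair SD from duplicates is sqrt(sum(d^2) / (2n));
        # the pre-analytical CV follows by variance subtraction.
        a, b = np.asarray(dup_a, float), np.asarray(dup_b, float)
        mean = np.mean((a + b) / 2)
        cv_total = np.sqrt(np.sum((a - b) ** 2) / (2 * len(a))) / mean
        cv_pre = np.sqrt(max(cv_total**2 - cv_analytical**2, 0.0))
        return cv_total, cv_pre

    a = [0.8, 2.1, 0.35, 5.0, 1.2]   # bundle 1 (ng/mg), illustrative
    b = [1.1, 1.5, 0.50, 3.6, 1.9]   # bundle 2 (ng/mg), illustrative
    cv_t, cv_p = cv_components(a, b, cv_analytical=0.10)
    print(f"CV_total={cv_t:.0%}  CV_pre={cv_p:.0%}  95% interval=+/-{2 * cv_t:.0%}")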
Sequential extractions can provide analytical constraints on the identification of mineral phases that control arsenic speciation in sediments. Model solids were used in this study to evaluate different solutions designed to extract arsenic from relatively labile solid phases. ...
DOT National Transportation Integrated Search
2016-09-01
This project applies a decision analytic methodology that takes considerations of extreme weather events to quantify and assess canopy investment options. The project collected data for two cases studies in two different transit agencies: Chicago Tra...
Research and management issues in large-scale fire modeling
David L. Peterson; Daniel L. Schmoldt
2000-01-01
In 1996, a team of North American fire scientists and resource managers convened to assess the effects of fire disturbance on ecosystems and to develop scientific recommendations for future fire research and management activities. These recommendations - elicited with the Analytic Hierarchy Process - include numerically ranked scientific and managerial questions and...
NASA Astrophysics Data System (ADS)
Hreniuc, V.; Hreniuc, A.; Pescaru, A.
2017-08-01
Solving a general strength problem of a ship hull may be done using analytical approaches, which are useful to deduce the buoyancy force distribution, the weight force distribution along the hull, and the geometrical characteristics of the sections. These data are used to draw the free-body diagrams and to compute the stresses. General strength problems require a large amount of calculation; it is therefore of interest how a computer may be used to solve such problems. Using computer programming, an engineer may conceive software instruments based on analytical approaches. However, before developing the computer code the research topic must be thoroughly analysed, thereby reaching a meta-level of understanding of the problem. The following stage is to conceive an appropriate development strategy for the original software instruments, enabling rapid development of computer-aided analytical models. The geometrical characteristics of the sections may be computed using a Boolean algebra that operates with ‘simple’ geometrical shapes. By ‘simple’ we mean that for the corresponding shapes we have direct calculus relations. The set of ‘simple’ shapes also includes geometrical entities bounded by curves approximated as spline functions or as polygons. To conclude, computer programming offers the necessary support to solve general strength ship hull problems using analytical methods.
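The add/subtract shape algebra for section properties can be sketched compactly: shapes carry a sign flag and compose into overall area, centroid and second moment via the parallel-axis theorem. The box-section dimensions below are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Rect:
        # Axis-aligned rectangle; sign=-1 turns the shape into a cut-out.
        b: float
        h: float
        zc: float          # vertical centroid position
        sign: int = 1

        @property
        def A(self):
            return self.sign * self.b * self.h

        @property
        def Iy_own(self):  # second moment about the shape's own centroid
            return self.sign * self.b * self.h ** 3 / 12

    def section_properties(shapes):
        A = sum(s.A for s in shapes)
        zc = sum(s.A * s.zc for s in shapes) / A
        Iy = sum(s.Iy_own + s.A * (s.zc - zc) ** 2 for s in shapes)
        return A, zc, Iy

    # Hypothetical box section of a hull girder: outer shell minus void.
    outer = Rect(b=2.0, h=1.0, zc=0.5)
    void = Rect(b=1.9, h=0.9, zc=0.5, sign=-1)
    print(section_properties([outer, void]))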
Technosocial Modeling of IED Threat Scenarios and Attacks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitney, Paul D.; Brothers, Alan J.; Coles, Garill A.
2009-03-23
This paper describes an approach for integrating sociological and technical models to develop more complete threat assessments. Current approaches to analyzing and addressing threats tend to focus on the technical factors. This paper addresses the development of predictive models that encompass behavioral as well as technical factors. Using improvised explosive device (IED) attacks as motivation, this model supports identification of intervention activities 'left of boom' as well as prioritizing attack modalities. We show how Bayes nets integrate social factors associated with IED attacks into a general threat model containing technical and organizational steps from planning through obtaining the IED to initiation of the attack. The social models are computationally based representations of relevant social science literature that describes human decision making and physical factors. When combined with technical models, the resulting model provides improved knowledge integration into threat assessment for monitoring. This paper discusses the construction of IED threat scenarios, integration of diverse factors into an analytical framework for threat assessment, indicator identification for future threats, and future research directions.
Seismic assessment of a multi-span steel railway bridge in Turkey based on nonlinear time history
NASA Astrophysics Data System (ADS)
Yılmaz, Mehmet F.; Çağlayan, Barlas Ö.
2018-01-01
Many research studies have shown that bridges are vulnerable to earthquakes, graphically confirmed by incidents such as the San Fernando (1971, USA), Northridge (1994, USA), Great Hanshin (1995, Japan), and Chi-Chi (1999, Taiwan) earthquakes, amongst many others. The studies show that fragility curves, which can be generated empirically or analytically, are useful tools for bridge seismic risk assessment. Empirical fragility curves can be generated where damage reports from past earthquakes are available; otherwise, analytical fragility curves can be generated from structural seismic response analysis. Earthquake damage data in Turkey are very limited, hence this study employed an analytical method to generate fragility curves for the Alasehir bridge. The Alasehir bridge is part of the Manisa-Uşak-Dumlupınar-Afyon railway line, which is very important for human and freight transportation, and since most of the country is seismically active, it is essential to assess the bridge's vulnerability. The bridge consists of six 30 m truss spans with a total length of 189 m, supported by two abutments and five truss piers (12.5, 19, 26, 33, and 40 m). SAP2000 software was used to model the Alasehir bridge; the model was refined using field measurements, and the effects of 60 selected real earthquake records were analyzed with the refined model, considering material and geometric nonlinearity. Thus, the seismic behavior of the Alasehir railway bridge was determined, and truss pier reactions and displacements were used to determine its seismic performance. Different intensity measures were compared for efficiency, practicality, and sufficiency, and component and system fragility curves were derived.
Horbowy, Jan; Tomczak, Maciej T
2017-01-01
Biomass reconstructions to pre-assessment periods for commercially important and exploitable fish species are important tools for understanding long-term processes and fluctuations at the stock and ecosystem levels. For some stocks, only fisheries statistics and fishery-dependent data are available for the periods before surveys were conducted. Methods for the backward extension of analytical biomass assessments to years for which only total catch volumes are available were developed and tested in this paper. Two of the approaches developed apply the concept of the surplus production rate (SPR), which is shown to be stock-density dependent if stock dynamics are governed by classical stock-production models. The other approach uses a modified form of the Schaefer production model that allows for backward biomass estimation. The performance of the methods was tested on the Arctic cod and North Sea herring stocks, for which analytical biomass estimates extend back to the late 1940s. Next, the methods were applied to extend biomass estimates of the North-east Atlantic mackerel from the 1970s (for which analytical biomass estimates are available) back to the 1950s, for which only total catch volumes were available. For comparison, a method that employs a constant SPR, estimated as an average of the observed values, was also applied. The analyses showed that the performance of the methods is stock- and data-specific; methods that work well for one stock may fail for others. The constant SPR method is not recommended in cases where the SPR is relatively high and the catch volumes in the reconstructed period are low.
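The backward form of the Schaefer model is a one-line inversion: the forward update B[t+1] = B[t] + r*B[t]*(1 - B[t]/K) - C[t] is quadratic in B[t], so the earlier biomass follows from the later one and the known catch. A sketch with hypothetical stock parameters (the paper's fitted values are not reproduced):

    import numpy as np

    def schaefer_backward(B_next, C, r, K):
        # Solve (r/K)*B^2 - (1+r)*B + (B_next + C) = 0 for the earlier
        # biomass B; the smaller root is the physically relevant one.
        a = r / K
        disc = (1 + r) ** 2 - 4 * a * (B_next + C)
        if disc < 0:
            raise ValueError("catch/biomass inconsistent with r and K")
        return ((1 + r) - np.sqrt(disc)) / (2 * a)

    r, K = 0.4, 5.0e6                          # tonnes, illustrative
    B = 2.0e6                                  # earliest analytical estimate
    for C in [3.0e5, 2.5e5, 2.8e5, 3.2e5]:     # catches, latest first
        B = schaefer_backward(B, C, r, K)
        print(f"reconstructed biomass: {B:,.0f} t")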
Multi-hazard national-level risk assessment in Africa using global approaches
NASA Astrophysics Data System (ADS)
Fraser, Stuart; Jongman, Brenden; Simpson, Alanna; Murnane, Richard
2016-04-01
In recent years, Sub-Saharan Africa has been characterized by unprecedented opportunity for transformation and sustained growth. However, natural disasters such as droughts, floods, cyclones, earthquakes, landslides, volcanic eruptions and extreme temperatures cause significant economic and human losses, and major development challenges. Quantitative disaster risk assessments are an important basis for governments to understand disaster risk in their country, and to develop effective risk management and risk financing solutions. However, the data-scarce nature of many Sub-Saharan African countries, as well as a lack of financing for risk assessments, has long prevented detailed analytics. Recent advances in globally applicable disaster risk modelling practices and data availability offer new opportunities. In December 2013, the European Union approved a €60 million contribution to support the development of an analytical basis for risk financing and to accelerate the effective implementation of comprehensive disaster risk reduction. The World Bank's Global Facility for Disaster Reduction and Recovery (GFDRR) was selected as the implementing partner for Result Area 5 of the program: the "Africa Disaster Risk Assessment and Financing Program." As part of this effort, the GFDRR is overseeing the production of national-level multi-hazard risk profiles for a range of countries in Sub-Saharan Africa, using a combination of national and global datasets and state-of-the-art hazard and risk assessment methodologies. In this presentation, we will highlight the analytical approach behind these assessments and show results for the first five countries for which the assessment has been completed (Kenya, Uganda, Senegal, Niger and Ethiopia). The presentation will also demonstrate how the risk assessments are visualized in understandable and visually attractive risk profile documents.
Foil system fatigue load environments for commercial hydrofoil operation
NASA Technical Reports Server (NTRS)
Graves, D. L.
1979-01-01
The hydrofoil fatigue load environment in the open sea is examined. The random nature of wave orbital velocities, periods and heights, plus boat heading, speed and control system design, are considered in the assessment of structural fatigue requirements. Major nonlinear load events such as hull slamming and foil unwetting are included in the fatigue environment. Full-scale rough-water load tests and field experience, plus analytical loads work on the Model 929 Jetfoil commercial hydrofoil, are discussed. The problem of developing an overall sea environment for design is defined. State-of-the-art analytical approaches are examined.
Median of patient results as a tool for assessment of analytical stability.
Jørgensen, Lars Mønster; Hansen, Steen Ingemann; Petersen, Per Hyltoft; Sölétormos, György
2015-06-15
In spite of the well-established external quality assessment and proficiency testing surveys of analytical quality performance in laboratory medicine, a simple tool to monitor long-term analytical stability as a supplement to the internal control procedures is often needed. Patient data from daily internal control schemes were used for monthly appraisal of analytical stability. This was accomplished by using the monthly medians of patient results to disclose deviations from analytical stability, and by comparing divergences with the quality specifications for allowable analytical bias based on biological variation. Seventy-five percent of the twenty analytes run on two COBAS INTEGRA 800 instruments performed in accordance with the optimum and desirable specifications for bias. Patient results applied in analytical quality performance control procedures are the most reliable source of material, as they represent the genuine substance of the measurements and therefore circumvent the problems associated with non-commutable materials in external assessment. Patient medians in the monthly monitoring of analytical stability in laboratory medicine are an inexpensive, simple and reliable tool to monitor the steadiness of the analytical practice.
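A minimal sketch of the procedure, assuming a pandas series of patient results and an allowable-bias specification from biological variation (the target median and the 5% limit are illustrative):

    import numpy as np
    import pandas as pd

    def monthly_median_monitor(df, target_median, allowable_bias=0.05):
        # Flag months whose patient-result median deviates from the
        # target by more than the allowable analytical bias.
        m = (df.set_index("date")["result"]
               .resample("MS").median().to_frame("median"))
        m["bias"] = m["median"] / target_median - 1
        m["flag"] = m["bias"].abs() > allowable_bias
        return m

    rng = np.random.default_rng(3)
    dates = pd.date_range("2024-01-01", periods=500, freq="D")
    df = pd.DataFrame({"date": dates,
                       "result": rng.lognormal(np.log(4.5), 0.25, dates.size)})
    print(monthly_median_monitor(df, target_median=4.5))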
Early-Life Nutrition and Neurodevelopment: Use of the Piglet as a Translational Model
Mudd, Austin T
2017-01-01
Optimal nutrition early in life is critical to ensure proper structural and functional development of infant organ systems. Although pediatric nutrition historically has emphasized research on the relation between nutrition, growth rates, and gastrointestinal maturation, efforts increasingly have focused on how nutrition influences neurodevelopment. The provision of human milk is considered the gold standard in pediatric nutrition; thus, there is interest in understanding how functional nutrients and bioactive components in milk may modulate developmental processes. The piglet has emerged as an important translational model for studying neurodevelopmental outcomes influenced by pediatric nutrition. Given the comparable nutritional requirements and strikingly similar brain developmental patterns between young pigs and humans, the piglet is being used increasingly in developmental nutritional neuroscience studies. The piglet primarily has been used to assess the effects of dietary fatty acids and their accretion in the brain throughout neurodevelopment. However, recent research indicates that other dietary components, including choline, iron, cholesterol, gangliosides, and sialic acid, among other compounds, also affect neurodevelopment in the pig model. Moreover, novel analytical techniques, including but not limited to MRI, behavioral assessments, and molecular quantification, allow for a more holistic understanding of how nutrition affects neurodevelopmental patterns. By combining early-life nutritional interventions with innovative analytical approaches, opportunities abound to quantify factors affecting neurodevelopmental trajectories in the neonate. This review discusses research using the translational pig model with primary emphasis on early-life nutrition interventions assessing neurodevelopment outcomes, while also discussing nutritionally-sensitive methods to characterize brain maturation. PMID:28096130
NSTS Orbiter auxiliary power unit turbine wheel cracking risk assessment
NASA Technical Reports Server (NTRS)
Cruse, T. A.; Mcclung, R. C.; Torng, T. Y.
1992-01-01
The present investigation of cracking problems in the Space Shuttle Orbiter's hydrazine-fueled auxiliary power unit (APU) turbine wheel has indicated the efficacy of systematic probabilistic risk assessment in flight certification and safety resolution. Nevertheless, real crack-initiation and propagation problems do not lend themselves to purely analytical studies. The high-cycle fatigue problem is noted to be generally unsuited to probabilistic modeling, due to its extremely high degree of intrinsic scatter. In the case treated, the cracks appear to trend toward crack arrest in a low-cycle fatigue mode, due to a detuning of the resonance mode.
Solute source depletion control of forward and back diffusion through low-permeability zones
NASA Astrophysics Data System (ADS)
Yang, Minjune; Annable, Michael D.; Jawitz, James W.
2016-10-01
Solute diffusive exchange between low-permeability aquitards and high-permeability aquifers acts as a significant mediator of long-term contaminant fate. Aquifer contaminants diffuse into aquitards, but as contaminant sources are depleted, aquifer concentrations decline, triggering back diffusion from aquitards. The dynamics of the contaminant source depletion, or the source strength function, controls the timing of the transition of aquitards from sinks to sources. Here, we experimentally evaluate three archetypical transient source depletion models (step-change, linear, and exponential), and we use novel analytical solutions to accurately account for dynamic aquitard-aquifer diffusive transfer. Laboratory diffusion experiments were conducted using a well-controlled flow chamber to assess solute exchange between sand aquifer and kaolinite aquitard layers. Solute concentration profiles in the aquitard were measured in situ using electrical conductivity. Back diffusion was shown to begin earlier and produce larger mass flux for rapidly depleting sources. The analytical models showed very good correspondence with measured aquifer breakthrough curves and aquitard concentration profiles. The modeling approach links source dissolution and back diffusion, enabling assessment of human exposure risk and calculation of the back diffusion initiation time, as well as the resulting plume persistence.
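For the step-change source the aquitard profile takes the classical complementary-error-function form, and source removal is handled by superposition, after which the same expression yields the back-diffusion stage. A sketch with illustrative diffusion parameters (not the experimental kaolinite values):

    import numpy as np
    from scipy.special import erfc

    def aquitard_profile(z, t, C0, D, t_off=None):
        # 1-D diffusion into a semi-infinite aquitard from a step-change
        # interface concentration C0; superposition removes the source at
        # t_off, after which the layer back-diffuses.
        c = C0 * erfc(z / (2 * np.sqrt(D * t)))
        if t_off is not None and t > t_off:
            c -= C0 * erfc(z / (2 * np.sqrt(D * (t - t_off))))
        return c

    D = 3.15e-3                      # m^2/yr, effective diffusivity (assumed)
    z = np.linspace(0, 0.5, 6)       # depth into the clay layer (m)
    for t in (5, 10, 20):            # years; source removed after year 10
        print(t, np.round(aquitard_profile(z, t, C0=1.0, D=D, t_off=10), 3))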
Hybrid-Wing-Body Vehicle Composite Fuselage Analysis and Case Study
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
2014-01-01
Recent progress in the structural analysis of a Hybrid Wing-Body (HWB) fuselage concept is presented, with the objective of structural weight reduction under a set of critical design loads. This pressurized, efficient HWB fuselage design is presently being investigated by the NASA Environmentally Responsible Aviation (ERA) project in collaboration with the Boeing Company, Huntington Beach. The Pultruded Rod-Stiffened Efficient Unitized Structure (PRSEUS) composite concept, developed at the Boeing Company, is approximately modeled for an analytical study and finite element analysis. Stiffened plate linear theories are employed for a parametric case study. Maximum deflection and stress levels are obtained with appropriate assumptions for a set of feasible stiffened panel configurations. An analytical parametric case study is presented to examine the effects of discrete stiffener spacing and skin thickness on structural weight, deflection and stress. A finite element model (FEM) of an integrated fuselage section with bulkhead is developed for an independent assessment. Stress analysis and scenario-based case studies are conducted for design improvement. The specific weight of the improved fuselage concept is computed from the FEM and compared with previous studies, in order to assess the relative weight and strength advantages of this advanced composite airframe technology.
Dai, James Y.; Hughes, James P.
2012-01-01
The meta-analytic approach to evaluating surrogate end points assesses the predictiveness of treatment effect on the surrogate toward treatment effect on the clinical end point based on multiple clinical trials. Definition and estimation of the correlation of treatment effects were developed in linear mixed models and later extended to binary or failure time outcomes on a case-by-case basis. In a general regression setting that covers nonnormal outcomes, we discuss in this paper several metrics that are useful in the meta-analytic evaluation of surrogacy. We propose a unified 3-step procedure to assess these metrics in settings with binary end points, time-to-event outcomes, or repeated measures. First, the joint distribution of estimated treatment effects is ascertained by an estimating equation approach; second, the restricted maximum likelihood method is used to estimate the means and the variance components of the random treatment effects; finally, confidence intervals are constructed by a parametric bootstrap procedure. The proposed method is evaluated by simulations and applications to 2 clinical trials. PMID:22394448
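A schematic sketch of the trial-level surrogacy idea follows, using simple 2x2-table log-odds-ratio effects, a weighted least-squares fit, and a nonparametric bootstrap over trials in place of the paper's estimating-equation, REML and parametric-bootstrap machinery; all simulation settings are invented.

    import numpy as np

    rng = np.random.default_rng(42)

    def trial_effects(K=20, n=400):
        # Simulate K two-arm trials with binary surrogate S and clinical T
        # endpoints; return per-trial log-odds-ratio effects and variances.
        a_true = rng.normal(0.6, 0.3, K)
        b_true = 0.8 * a_true + rng.normal(0, 0.1, K)
        rows = []
        for a, b in zip(a_true, b_true):
            row = []
            for p0, eff in ((0.3, a), (0.2, b)):
                p1 = 1 / (1 + np.exp(-(np.log(p0 / (1 - p0)) + eff)))
                y0, y1 = rng.binomial(n, p0), rng.binomial(n, p1)
                row += [np.log(y1 * (n - y0) / (y0 * (n - y1))),
                        1 / y0 + 1 / (n - y0) + 1 / y1 + 1 / (n - y1)]
            rows.append(row)
        return np.array(rows)       # columns: S effect, var, T effect, var

    e = trial_effects()
    slope = np.polyfit(e[:, 0], e[:, 2], 1, w=1 / np.sqrt(e[:, 3]))[0]
    r2 = np.corrcoef(e[:, 0], e[:, 2])[0, 1] ** 2
    boot = [np.corrcoef(*e[rng.integers(0, len(e), len(e))][:, [0, 2]].T)[0, 1] ** 2
            for _ in range(2000)]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"trial-level slope={slope:.2f}  R^2={r2:.2f}  95% CI=({lo:.2f}, {hi:.2f})")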
Statistically qualified neuro-analytic failure detection method and system
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
2002-03-02
An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic modification of the deterministic model. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
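The two-stage structure can be miniaturized as follows: an analytic model is augmented with a learned residual (a polynomial least-squares fit stands in here for the neural network), and the residual statistics feed a sequential probability ratio test. All constants are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stage 1, deterministic adaptation: analytic model + learned residual.
    def analytic_model(x):            # known process physics (illustrative)
        return 2.0 * x

    x = rng.uniform(0, 1, 200)
    y = 2.0 * x + 0.3 * np.sin(6 * x) + rng.normal(0, 0.02, x.size)
    w = np.linalg.lstsq(np.vander(x, 6), y - analytic_model(x), rcond=None)[0]
    model = lambda u: analytic_model(u) + np.vander(np.atleast_1d(u), 6) @ w

    # Stage 2, stochastic qualification: residual statistics feed an SPRT.
    sigma = (y - model(x)).std()

    def sprt(resid, shift=3.0, alpha=0.01, beta=0.01):
        # Sequential test for a mean shift of `shift` standard deviations.
        mu1 = shift * sigma
        llr = np.cumsum((resid * mu1 - mu1**2 / 2) / sigma**2)
        A = np.log((1 - beta) / alpha)
        return int(np.argmax(llr > A)) if np.any(llr > A) else None

    online = rng.normal(0, sigma, 50)
    online[25:] += 3 * sigma          # simulated process change
    print("change declared at sample:", sprt(online))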
Coetzee, L M; Cassim, N; Glencross, D K
2015-12-16
The CD4 integrated service delivery model (ITSDM) provides for reasonable access to pathology services across South Africa (SA) by offering three new service tiers that extend services into remote, under-serviced areas. ITSDM identified Pixley ka Seme as such an under-serviced district. To address the poor service delivery in this area, a new ITSDM community (tier 3) laboratory was established in De Aar, SA. Laboratory performance and turnaround time (TAT) were monitored post-implementation to assess the impact on local service delivery. Using the National Health Laboratory Service Corporate Data Warehouse, CD4 data were extracted for the period April 2012-July 2013 (n=11,964). Total mean TAT (in hours) was calculated, and the pre-analytical and analytical components were assessed. Ongoing testing volumes, as well as external quality assessment performance across ten trials, were used to indicate post-implementation success. Data were analysed using Stata 12. Prior to the implementation of CD4 testing at De Aar, the total mean TAT was 20.5 hours. This fell to 8.2 hours post-implementation, predominantly as a result of the pre-analytical mean TAT reducing from 18.9 to 1.8 hours. The analytical testing TAT remained unchanged after implementation, and monthly test volumes increased by up to 20%. External quality assessment indicated adequate performance. Although subjective, responses to questionnaires sent to facilities indicated improved service delivery. Establishing CD4 testing in a remote community laboratory substantially reduces overall TAT. Additional community CD4 laboratories should be established in under-serviced areas, especially where laboratory infrastructure is already in place.
Dubský, Pavel; Müllerová, Ludmila; Dvořák, Martin; Gaš, Bohuslav
2015-03-06
The model of electromigration of a multivalent weak acidic/basic/amphoteric analyte that undergoes complexation with a mixture of selectors is introduced. The model provides an extension of the series of models starting with the single-selector model without dissociation by Wren and Rowe in 1992, continuing with the monovalent weak analyte/single-selector model by Rawjee, Williams and Vigh in 1993 and that by Lelièvre in 1994, and ending with the multi-selector overall model without dissociation developed by our group in 2008. The new multivalent analyte multi-selector model shows that the effective mobility of the analyte obeys the original Wren and Rowe formula. The overall complexation constant, the mobility of the free analyte and the mobility of the complex can be measured and used in the standard way. The mathematical expressions for the overall parameters are provided. We further demonstrate mathematically that the pH-dependent parameters for weak analytes can simply be used as an input into the multi-selector overall model and, in reverse, the multi-selector overall parameters can serve as an input into the pH-dependent models for weak analytes. These findings can greatly simplify rational method development in analytical electrophoresis, specifically enantioseparations.
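The practical upshot, that a selector mixture collapses into a single virtual selector through overall parameters which then enter the classic single-selector mobility formula, can be sketched directly; the constants and mobilities below are invented for illustration.

    import numpy as np

    def mu_eff(c_total, x, K, mu_free, mu_cplx):
        # Overall parameters for a selector mixture: K_ov = sum(x_i*K_i),
        # complex mobility weighted by x_i*K_i; then the single-selector
        # (Wren-Rowe) expression applies to the mixture as a whole.
        x, K, mu_cplx = map(np.asarray, (x, K, mu_cplx))
        K_ov = np.sum(x * K)
        mu_c_ov = np.sum(x * K * mu_cplx) / K_ov
        return (mu_free + mu_c_ov * K_ov * c_total) / (1 + K_ov * c_total)

    # Two selectors mixed 70:30; K in 1/mM, mobilities in 1e-9 m^2/V/s.
    for c in (0.1, 1.0, 5.0, 20.0):   # total selector concentration (mM)
        print(c, round(float(mu_eff(c, x=[0.7, 0.3], K=[0.8, 2.5],
                                    mu_free=20.0, mu_cplx=[5.0, -2.0])), 3))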
Correlation of ground tests and analyses of a dynamically scaled Space Station model configuration
NASA Technical Reports Server (NTRS)
Javeed, Mehzad; Edighoffer, Harold H.; Mcgowan, Paul E.
1993-01-01
Verification of analytical models through correlation with ground test results of a complex space truss structure is demonstrated. A multi-component, dynamically scaled space station model configuration is the focus structure for this work. Previously established test/analysis correlation procedures are used to develop improved component analytical models. Integrated system analytical models, consisting of updated component analytical models, are compared with modal test results to establish the accuracy of system-level dynamic predictions. Design sensitivity model updating methods are shown to be effective for providing improved component analytical models. Also, the effects of component model accuracy and interface modeling fidelity on the accuracy of integrated model predictions are examined.
Assessment of Critical-Analytic Thinking
ERIC Educational Resources Information Center
Brown, Nathaniel J.; Afflerbach, Peter P.; Croninger, Robert G.
2014-01-01
National policy and standards documents, including the National Assessment of Educational Progress frameworks, the "Common Core State Standards" and the "Next Generation Science Standards," assert the need to assess critical-analytic thinking (CAT) across subject areas. However, assessment of CAT poses several challenges for…
Analytical assessment of woven fabrics under vertical stabbing - The role of protective clothing.
Hejazi, Sayyed Mahdi; Kadivar, Nastaran; Sajjadi, Ali
2016-02-01
Knives are being used more commonly in street fights and muggings. Therefore, this work presents an analytical model for woven fabrics under vertical stabbing loads. The model is based on the energy method, and the fabric is assumed to be unidirectional and composed of N layers. Thus, the ultimate stab resistance of the fabric was determined from the structural parameters of the fabric and the geometrical characteristics of the blade. Moreover, protective clothing is nowadays considered a strategic branch of the technical textile industry. The main idea of the present work is to improve the stab resistance of woven textiles by using a metal coating method. Finally, a series of vertical stabbing tests was conducted on cotton, polyester and polyamide fabrics. Consequently, it was found that the model predicts the ultimate stab resistance of the sample fabrics with good accuracy.
Melo-Bernal, W; Chernov, V; Chernov, G; Barboza-Flores, M
2018-08-01
In this study, an analytical model for the assessment of the modification of cell culture survival under ionizing radiation assisted by nanoparticles (NPs) is presented. The model starts from the radial dose deposition around a single NP, which is used to describe the dose deposition in a cell structure with embedded NPs and, in turn, to evaluate the number of lesions formed by ionizing radiation. The model is applied to the calculation of relative biological effectiveness values for cells exposed to 0.5 mg/g of uniformly dispersed NPs with a radius of 10 nm made of Fe, I, Gd, Hf, Pt and Au and irradiated with X-rays of energies 20 keV higher than the element's K-shell binding energy.
Note: Model identification and analysis of bivalent analyte surface plasmon resonance data.
Tiwari, Purushottam Babu; Üren, Aykut; He, Jin; Darici, Yesim; Wang, Xuewen
2015-10-01
Surface plasmon resonance (SPR) is a widely used, affinity based, label-free biophysical technique to investigate biomolecular interactions. The extraction of rate constants requires accurate identification of the particular binding model. The bivalent analyte model involves coupled non-linear differential equations. No clear procedure to identify the bivalent analyte mechanism has been established. In this report, we propose a unique signature for the bivalent analyte model. This signature can be used to distinguish the bivalent analyte model from other biphasic models. The proposed method is demonstrated using experimentally measured SPR sensorgrams.
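The coupled ODEs of the bivalent analyte model are straightforward to integrate numerically, which is how such signatures are typically explored. A sketch with illustrative rate constants (statistical-factor conventions vary between software packages and are omitted here):

    import numpy as np
    from scipy.integrate import solve_ivp

    def bivalent_sensorgram(t_inj, t_end, C, Rmax=100.0,
                            ka1=1e5, kd1=1e-2, ka2=1e-4, kd2=1e-3):
        # A + L <-> AL (ka1, kd1); AL + L <-> AL2 (ka2, kd2).
        # Analyte is present only during injection; response = AL + AL2.
        def rhs(t, yv):
            AL, AL2 = yv
            Lfree = Rmax - AL - 2 * AL2
            c = C if t < t_inj else 0.0
            dAL = ka1 * c * Lfree - kd1 * AL - ka2 * AL * Lfree + kd2 * AL2
            dAL2 = ka2 * AL * Lfree - kd2 * AL2
            return [dAL, dAL2]
        t = np.linspace(0, t_end, 600)
        sol = solve_ivp(rhs, (0, t_end), [0.0, 0.0], t_eval=t, max_step=1.0)
        return t, sol.y[0] + sol.y[1]

    t, R = bivalent_sensorgram(t_inj=120, t_end=600, C=50e-9)
    print(np.round(R[::100], 2))   # slow, biphasic dissociation is the signature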
NAPL source zone depletion model and its application to railroad-tank-car spills.
Marruffo, Amanda; Yoon, Hongkyu; Schaeffer, David J; Barkan, Christopher P L; Saat, Mohd Rapik; Werth, Charles J
2012-01-01
We developed a new semi-analytical source zone depletion model (SZDM) for multicomponent light nonaqueous phase liquids (LNAPLs) and incorporated this into an existing screening model for estimating cleanup times for chemical spills from railroad tank cars that previously considered only single-component LNAPLs. Results from the SZDM compare favorably to those from a three-dimensional numerical model, and from another semi-analytical model that does not consider source zone depletion. The model was used to evaluate groundwater contamination and cleanup times for four complex mixtures of concern in the railroad industry. Among the petroleum hydrocarbon mixtures considered, the cleanup time of diesel fuel was much longer than those of E95, gasoline, and crude oil. This is mainly due to the high fraction of low-solubility components in diesel fuel. The results demonstrate that the updated screening model with the newly developed SZDM is computationally efficient, and provides valuable comparisons of cleanup times that can be used in assessing the health and financial risk associated with chemical mixture spills from railroad-tank-car accidents.
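The core of a multicomponent depletion calculation is Raoult's law: each constituent dissolves at its mole-fraction-weighted solubility, so soluble components wash out first and the effluent composition shifts over time. A deliberately simplified sketch (the SZDM itself is not reproduced; units and rates are illustrative):

    import numpy as np

    def deplete_source(moles, solubilities, q=5.0, dt=1.0, t_max=5000.0):
        # Raoult's law: effective concentration C_i = x_i * S_i, removed
        # by water flowing through the source at volumetric rate q.
        moles = np.array(moles, float)
        S = np.asarray(solubilities, float)
        t, history = 0.0, []
        while t < t_max and moles.sum() > 1e-9:
            conc = (moles / moles.sum()) * S
            moles = np.maximum(moles - q * conc * dt, 0.0)
            history.append((t, conc))
            t += dt
        return history

    # Two components: one soluble, one sparingly soluble (mol, mol/L).
    hist = deplete_source(moles=[10.0, 10.0], solubilities=[1e-2, 1e-4])
    for t, c in hist[::1000]:
        print(f"t={t:6.0f}  effective concentrations={np.round(c, 6)}")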
Analytical-HZETRN Model for Rapid Assessment of Active Magnetic Radiation Shielding
NASA Technical Reports Server (NTRS)
Washburn, S. A.; Blattnig, S. R.; Singleterry, R. C.; Westover, S. C.
2014-01-01
The use of active radiation shielding designs has the potential to reduce the radiation exposure received by astronauts on deep-space missions at a significantly lower mass penalty than designs utilizing only passive shielding. Unfortunately, the determination of the radiation exposure inside these shielded environments often involves lengthy and computationally intensive Monte Carlo analysis. In order to evaluate the large trade space of design parameters associated with a magnetic radiation shield design, an analytical model was developed for the determination of flux inside a solenoid magnetic field due to the Galactic Cosmic Radiation (GCR) radiation environment. This analytical model was then coupled with NASA's radiation transport code, HZETRN, to account for the effects of passive/structural shielding mass. The resulting model can rapidly obtain results for a given configuration and can therefore be used to analyze an entire trade space of potential variables in less time than is required for even a single Monte Carlo run. Analyzing this trade space for a solenoid magnetic shield design indicates that active shield bending powers greater than 15 Tm and passive/structural shielding thicknesses greater than 40 g/cm2 have a limited impact on reducing dose equivalent values. Also, it is shown that higher magnetic field strengths are more effective than thicker magnetic fields at reducing dose equivalent.
NASA Astrophysics Data System (ADS)
Davis, Paul B.; Abidi, Mongi A.
1989-05-01
PET is the only imaging modality that provides doctors with early analytic and quantitative biochemical assessment and precise localization of pathology. In PET images, boundary information as well as local pixel intensity are both crucial for manual and/or automated feature tracing, extraction, and identification. Unfortunately, present PET technology does not provide the image quality necessary for such precise analytic and quantitative measurements. PET images suffer from significantly high levels of radial noise present in the form of streaks caused by the inexactness of the models used in image reconstruction. In this paper, our objective is to model PET noise and remove it without altering dominant features in the image. The ultimate goal is to enhance these dominant features to allow automatic computer interpretation and classification of PET images by developing techniques that take into consideration PET signal characteristics, data collection, and data reconstruction. We have modeled the noise streaks in PET images in both rectangular and polar representations and have shown, both analytically and through computer simulation, that the noise exhibits consistent mapping patterns. A class of filters was designed and applied successfully. Visual inspection of the filtered images shows clear enhancement over the original images.
Assessment and prediction of drying shrinkage cracking in bonded mortar overlays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beushausen, Hans, E-mail: hans.beushausen@uct.ac.za; Chilwesa, Masuzyo
2013-11-15
Restrained drying shrinkage cracking was investigated on composite beams consisting of substrate concrete and bonded mortar overlays, and compared to the performance of the same mortars when subjected to the ring test. Stress development and cracking in the composite specimens were analytically modeled and predicted based on the measurement of relevant time-dependent material properties such as drying shrinkage, elastic modulus, tensile relaxation and tensile strength. Overlay cracking in the composite beams could be very well predicted with the analytical model. The ring test provided a useful qualitative comparison of the cracking performance of the mortars. The duration of curing was found to have only a minor influence on crack development. This was ascribed to the fact that prolonged curing has a beneficial effect on tensile strength at the onset of stress development, but at the same time is not beneficial to the values of tensile relaxation and elastic modulus. Highlights: • Parameter study on material characteristics influencing overlay cracking. • Analytical model gives good quantitative indication of overlay cracking. • Ring test presents good qualitative indication of overlay cracking. • Curing duration has little effect on overlay cracking.
Assessment of passive drag in swimming by numerical simulation and analytical procedure.
Barbosa, Tiago M; Ramos, Rui; Silva, António J; Marinho, Daniel A
2018-03-01
The aim was to compare the passive drag-gliding underwater by a numerical simulation and an analytical procedure. An Olympic swimmer was scanned by computer tomography and modelled gliding at a 0.75-m depth in the streamlined position. Steady-state computational fluid dynamics (CFD) analyses were performed on Fluent. A set of analytical procedures was selected concurrently. Friction drag (Df), pressure drag (Dpr), total passive drag force (Df+pr) and drag coefficient (CD) were computed between 1.3 and 2.5 m·s⁻¹ by both techniques. Df+pr ranged from 45.44 to 144.06 N with CFD and from 46.03 to 167.06 N with the analytical procedure (differences: 1.28% to 13.77%). CD ranged between 0.698 and 0.622 by CFD, and between 0.657 and 0.644 by analytical procedures (differences: 0.40-6.30%). Linear regression models showed a very high association for Df+pr plotted in absolute values (R² = 0.98) and after log-log transformation (R² = 0.99). CD also obtained a very high adjustment for both absolute (R² = 0.97) and log-log plots (R² = 0.97). The bias for Df+pr was 8.37 N, and 0.076 N after logarithmic transformation. Df represented between 15.97% and 18.82% of Df+pr by CFD, and between 14.66% and 16.21% by the analytical procedures. Therefore, despite the bias, analytical procedures offer a feasible way of gathering insight on one's hydrodynamic characteristics.
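For orientation, the analytical side of such comparisons reduces to the quadratic drag law D = 0.5·ρ·CD·A·v²; the sketch below reproduces the order of magnitude of the reported forces with a hypothetical drag coefficient and reference area tuned for illustration.

```python
import numpy as np

# Passive drag from the standard quadratic law D = 0.5 * rho * Cd * A * v^2.
# Cd and area are illustrative; the velocity range follows the abstract.
rho = 998.0   # water density, kg/m^3
Cd = 0.66     # drag coefficient (order of the reported values)
A = 0.08      # characteristic area, m^2 (hypothetical, chosen for illustration)

v = np.linspace(1.3, 2.5, 7)        # m/s
D = 0.5 * rho * Cd * A * v**2       # N
for vi, Di in zip(v, D):
    print(f"v = {vi:.1f} m/s  ->  D = {Di:6.1f} N")
```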
NASA Astrophysics Data System (ADS)
Jankovic, I.; Barnes, R. J.; Soule, R.
2001-12-01
The analytic element method is used to model local three-dimensional flow in the vicinity of partially penetrating wells. The flow domain is bounded by an impermeable horizontal base, a phreatic surface with recharge and a cylindrical lateral boundary. The analytic element solution for this problem contains (1) a fictitious source technique to satisfy the head and discharge conditions along the phreatic surface, (2) a fictitious source technique to satisfy specified head conditions along the cylindrical boundary, (3) a method of imaging to satisfy the no-flow condition across the impermeable base, (4) the classical analytic solution for a well and (5) spheroidal harmonics to account for the influence of inhomogeneities in hydraulic conductivity. Temporal variations of the flow system due to time-dependent recharge and pumping are represented by combining the analytic element method with a finite difference method: the analytic element method represents spatial changes in head and discharge, while the finite difference method represents temporal variations. The solution provides a very detailed description of local groundwater flow with an arbitrary number of wells of any orientation and an arbitrary number of ellipsoidal inhomogeneities of any size and conductivity. These inhomogeneities may be used to model local hydrogeologic features (such as gravel packs and clay lenses) that significantly influence the flow in the vicinity of partially penetrating wells. Several options for specifying head values along the lateral domain boundary are available. These options allow for inclusion of the model into steady and transient regional groundwater models. The head values along the lateral domain boundary may be specified directly (as time series), or assigned by specifying the water-table gradient and a head value at a single point (as time series). A case study is included to demonstrate the application of the model in local modeling of groundwater flow. Transient three-dimensional capture zones are delineated for a site on Prairie Island, MN. Prairie Island is located on the Mississippi River 40 miles south of the Twin Cities metropolitan area. The case study focuses on a well that has been known to contain viral DNA. The objective of the study was to assess the potential for pathogen migration toward the well.
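A drastically simplified two-dimensional analogue of the superposition idea, with Thiem well solutions added to a uniform regional flow (all values hypothetical), may help readers unfamiliar with analytic elements:

```python
import numpy as np

# Toy 2-D analytic element superposition: uniform flow plus pumping wells
# (Thiem solutions). A drastic simplification of the 3-D model described
# above, meant only to illustrate superposition; all numbers hypothetical.

T = 500.0                      # transmissivity, m^2/d
q0 = 0.001                     # regional hydraulic gradient
wells = [((0.0, 0.0), 1000.0), ((150.0, 50.0), 500.0)]  # (location, Q in m^3/d)

def head(x, y, h_ref=100.0, r_ref=1000.0):
    h = h_ref - q0 * x                       # uniform regional flow
    for (xw, yw), Q in wells:
        r = np.hypot(x - xw, y - yw)
        # drawdown s = Q/(2*pi*T) * ln(r_ref / r), clipped near the well bore
        h -= Q / (2.0 * np.pi * T) * np.log(r_ref / np.maximum(r, 0.1))
    return h

print(head(50.0, 20.0))
```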
On the Complexity of Item Response Theory Models.
Bonifay, Wes; Cai, Li
2017-01-01
Complexity in item response theory (IRT) has traditionally been quantified by simply counting the number of freely estimated parameters in the model. However, complexity is also contingent upon the functional form of the model. We examined four popular IRT models (exploratory factor analytic, bifactor, DINA, and DINO) with different functional forms but the same number of free parameters. In comparison, a simpler (unidimensional 3PL) model was specified such that it had 1 more parameter than the previous models. All models were then evaluated according to the minimum description length principle. Specifically, each model was fit to 1,000 data sets that were randomly and uniformly sampled from the complete data space and then assessed using global and item-level fit and diagnostic measures. The findings revealed that the factor analytic and bifactor models possess a strong tendency to fit any possible data. The unidimensional 3PL model displayed minimal fitting propensity, despite the fact that it included an additional free parameter. The DINA and DINO models did not demonstrate a proclivity to fit any possible data, but they did fit well to distinct data patterns. Applied researchers and psychometricians should therefore consider functional form, and not goodness-of-fit alone, when selecting an IRT model.
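For reference, the unidimensional 3PL model mentioned above is the standard three-parameter logistic item response function; a minimal sketch with illustrative item parameters:

```python
import numpy as np

# Three-parameter logistic (3PL) item response function:
# P(theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))
# a: discrimination, b: difficulty, c: guessing. Values are illustrative.

def p_3pl(theta, a=1.2, b=0.0, c=0.2):
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

for theta in (-2.0, 0.0, 2.0):
    print(theta, round(float(p_3pl(theta)), 3))
```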
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Yu; Sengupta, Manajit; Dooraghi, Mike
2018-03-20
Development of accurate transposition models to simulate plane-of-array (POA) irradiance from horizontal measurements or simulations is a complex process, mainly because of the anisotropic distribution of diffuse solar radiation in the atmosphere. The limited availability of reliable POA measurements at large temporal and spatial scales leads to difficulties in the comprehensive evaluation of transposition models. This paper proposes new algorithms to assess the uncertainty of transposition models using both surface-based observations and modeling tools. We reviewed the analytical derivation of POA irradiance and the approximation of isotropic diffuse radiation that simplifies the computation. Two transposition models are evaluated against the computation by the rigorous analytical solution. We proposed a new algorithm to evaluate transposition models using the clear-sky measurements at the National Renewable Energy Laboratory's (NREL's) Solar Radiation Research Laboratory (SRRL) and a radiative transfer model that integrates diffuse radiances over various sky-viewing angles. We found that the radiative transfer model and a transposition model based on empirical regressions are superior to the isotropic models when compared to measurements. We further compared the radiative transfer model to the transposition models under an extensive range of idealized conditions. Our results suggest that the empirical transposition model yields slightly higher cloudy-sky POA irradiance than the radiative transfer model, but performs better than the isotropic models under clear-sky conditions. Significantly smaller POA irradiances computed by the transposition models are observed when the photovoltaic (PV) panel deviates from the azimuthal direction of the sun. The new algorithms developed in the current study open the door to a more comprehensive evaluation of transposition models for various atmospheric conditions and solar and PV orientations.
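The isotropic transposition model referred to above combines beam, sky-diffuse, and ground-reflected terms; a minimal sketch with illustrative inputs:

```python
import numpy as np

# Isotropic-sky transposition of horizontal irradiance to plane-of-array:
# POA = DNI*cos(AOI) + DHI*(1 + cos(tilt))/2 + GHI*albedo*(1 - cos(tilt))/2.
# Input values are illustrative.

def poa_isotropic(dni, dhi, ghi, aoi_deg, tilt_deg, albedo=0.2):
    aoi, tilt = np.radians(aoi_deg), np.radians(tilt_deg)
    beam = dni * max(np.cos(aoi), 0.0)                    # direct component
    sky = dhi * (1.0 + np.cos(tilt)) / 2.0                # isotropic sky diffuse
    ground = ghi * albedo * (1.0 - np.cos(tilt)) / 2.0    # ground reflection
    return beam + sky + ground

print(poa_isotropic(dni=800.0, dhi=120.0, ghi=650.0, aoi_deg=30.0, tilt_deg=35.0))
```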
Buckling Imperfection Sensitivity of Axially Compressed Orthotropic Cylinders
NASA Technical Reports Server (NTRS)
Schultz, Marc R.; Nemeth, Michael P.
2010-01-01
Structural stability is a major consideration in the design of lightweight shell structures. However, theoretical predictions for geometrically perfect structures often considerably overpredict the buckling loads of inherently imperfect real structures. It is reasonably well understood how shell geometry affects the imperfection sensitivity of axially compressed cylindrical shells; however, the effects of shell anisotropy on imperfection sensitivity are less well understood. In the present paper, the development of an analytical model for assessing the imperfection sensitivity of axially compressed orthotropic cylinders is discussed. Results from the analytical model for four shell designs are compared with those from a general-purpose finite-element code, and good qualitative agreement is found. Reasons for discrepancies are examined, and potential design implications of this line of research are discussed.
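For context, the classical empirical treatment of this problem for isotropic cylinders is the NASA SP-8007 knockdown factor; the sketch below shows how strongly it penalizes thin shells, though the paper's orthotropic model is more refined than this rule of thumb.

```python
import math

# NASA SP-8007 empirical knockdown factor for axially compressed isotropic
# cylinders: gamma = 1 - 0.901 * (1 - exp(-phi)), phi = sqrt(R/t) / 16.
# Shown only to illustrate imperfection sensitivity versus R/t.

def knockdown_sp8007(R_over_t):
    phi = math.sqrt(R_over_t) / 16.0
    return 1.0 - 0.901 * (1.0 - math.exp(-phi))

for r_t in (100, 500, 1000):
    print(r_t, round(knockdown_sp8007(r_t), 3))
```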
Whelan, Jessica; Craven, Stephen; Glennon, Brian
2012-01-01
In this study, the application of Raman spectroscopy to the simultaneous quantitative determination of glucose, glutamine, lactate, ammonia, glutamate, total cell density (TCD), and viable cell density (VCD) in a CHO fed-batch process was demonstrated in situ in 3 L and 15 L bioreactors. Spectral preprocessing and partial least squares (PLS) regression were used to correlate spectral data with off-line reference data. Separate PLS calibration models were developed for each analyte at the 3 L laboratory bioreactor scale before assessing their transferability to the same bioprocess conducted at the 15 L pilot scale. PLS calibration models were successfully developed for all analytes bar VCD and transferred to the 15 L scale. Copyright © 2012 American Institute of Chemical Engineers (AIChE).
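A minimal sketch of the PLS calibration step, using synthetic data in place of the Raman spectra and off-line reference values (scikit-learn is assumed):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# PLS calibration sketch: regress off-line reference values on (preprocessed)
# spectra. Synthetic data stand in for the Raman spectra and, e.g., glucose.

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))                           # 60 spectra x 500 channels
true_w = rng.normal(size=500)
y = X @ true_w * 0.01 + rng.normal(scale=0.1, size=60)   # "glucose" reference

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
print("R^2 on held-out spectra:", round(pls.score(X_te, y_te), 3))
```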
Du, Lihong; White, Robert L
2009-02-01
A previously proposed partition equilibrium model for quantitative prediction of analyte response in electrospray ionization mass spectrometry is modified to yield an improved linear relationship. Analyte mass spectrometer response is modeled by a competition mechanism between analyte and background electrolytes that is based on partition equilibrium considerations. The correlation between analyte response and solution composition is described by the linear model over a wide concentration range and the improved model is shown to be valid for a wide range of experimental conditions. The behavior of an analyte in a salt solution, which could not be explained by the original model, is correctly predicted. The ion suppression effects of 16:0 lysophosphatidylcholine (LPC) on analyte signals are attributed to a combination of competition for excess charge and reduction of total charge due to surface tension effects. In contrast to the complicated mathematical forms that comprise the original model, the simplified model described here can more easily be employed to predict analyte mass spectrometer responses for solutions containing multiple components. Copyright (c) 2008 John Wiley & Sons, Ltd.
[Environmental quality assessment of regional agro-ecosystem in Loess Plateau].
Wang, Limei; Meng, Fanping; Zheng, Jiyong; Wang, Zhonglin
2004-03-01
Based on the detection and analysis of the contamination status of an agro-ecosystem with apple-crop intercropping as the dominant cropping model in the Loess Plateau, individual-factor and comprehensive environmental quality were assessed by a multilevel fuzzy synthetic evaluation model, the analytic hierarchy process (AHP), and an improved standard-weight determination method. The results showed that the quality of soil, water and agricultural products was grade I, the socio-economic environmental quality was grade II, the ecological environmental quality was grade III, and the comprehensive environmental quality was grade I. The regional agro-ecosystem dominated by apple-crop intercropping was not the best model in terms of ecological benefits, but had better socio-economic benefits.
Load Diffusion in Composite Structures
NASA Technical Reports Server (NTRS)
Horgan, Cornelius O.; Simmonds, J. G.
2000-01-01
This research has been concerned with load diffusion in composite structures. Fundamental solid mechanics studies were carried out to provide a basis for assessing the complicated modeling necessary for the large-scale structures used by NASA. An understanding of the fundamental mechanisms of load diffusion in composite subcomponents is essential in developing primary composite structures. Analytical models of load diffusion behavior are extremely valuable in building an intuitive base for developing refined modeling strategies and assessing results from finite element analyses. The decay behavior of stresses and other field quantities provides a significant aid in this process. The results are also amenable to parameter studies over a large parameter space and should be useful in structural tailoring studies.
Assessing the Effectiveness of Ramp-Up During Sonar Operations Using Exposure Models.
von Benda-Beckmann, Alexander M; Wensveen, Paul J; Kvadsheim, Petter H; Lam, Frans-Peter A; Miller, Patrick J O; Tyack, Peter L; Ainslie, Michael A
2016-01-01
Ramp-up procedures are used to mitigate the impact of sound on marine mammals. Sound exposure models combined with observations of marine mammals responding to sound can be used to assess the effectiveness of ramp-up procedures. We found that ramp-up procedures before full-level sonar operations can reduce the risk of hearing threshold shifts in marine mammals, but their effectiveness depends strongly on the responsiveness of the animals. In this paper, we investigate the effect of sonar parameters (source level, pulse-repetition time, ship speed) on sound exposure using a simple analytical model and highlight the mechanisms that limit the effectiveness of ramp-up procedures.
NASA Astrophysics Data System (ADS)
Zarlenga, Antonio; de Barros, Felipe; Fiori, Aldo
2017-04-01
Predicting solute displacement in ecosystems is a complex task because of the heterogeneity of hydrogeological properties and limited financial resources for characterization. As a consequence, solute transport model predictions are subject to uncertainty and probabilistic methods are invoked. Despite significant theoretical advances in subsurface hydrology, there is a compelling need to transfer this specialized know-how into an easy-to-use practical tool. The deterministic approach is able to capture some features of the transport behavior, but its adoption in practical applications (e.g. remediation strategies or health risk assessment) is often inadequate because of its inability to accurately model the phenomena triggered by spatial heterogeneity. The rigorous evaluation of the local contaminant concentration in natural aquifers requires an accurate estimate of the domain properties and huge computational times; those aspects limit the adoption of fully 3D numerical models. In this presentation, we illustrate a physically based methodology to analytically estimate the statistics of the solute concentration in natural aquifers and the related health risk. Our methodology aims to provide a simple tool for a quick assessment of the contamination level in aquifers as a function of a few relevant, physically based parameters such as the log-conductivity variance, the mean flow velocity and the Péclet number. Solutions of the 3D analytical model build on the results of previous works: the transport model is based on the solutions proposed by Zarlenga and Fiori (2013, 2014), where semi-analytical relations for the statistics of the local contaminant concentration are derived through a Lagrangian first-order model. As suggested in de Barros and Fiori (2014), the Beta distribution is assumed for the concentration cumulative distribution function (CDF). We illustrate the use of the closed-form equations for the probability of local contaminant concentration and health risk in a series of problems of practical relevance. The basic scenario is a steady-state plume in a 3D heterogeneous formation. In this case, non-reactive transport is ruled by the interplay of spreading (lateral and vertical) and dilution. The second scenario considers two different degradation dynamics, in aerobic and anaerobic conditions, which allow contaminant abatement. The final example links the environmental concentration to adverse health effects; for this case, additional information on toxicological and behavioral parameters is required. Despite the simplifying assumptions adopted, the proposed solutions are appealing in applications due to their simplicity and the fact that they allow uncertainty from different sources to be easily propagated into the final risk endpoint. de Barros, F.P., Fiori, A., 2014. First-order based cumulative distribution function for solute concentration in heterogeneous aquifers: theoretical analysis and implications for human health risk assessment. Water Resour. Res. 50, 4018-4037. Zarlenga, A., Fiori, A., 2013. Steady plumes in heterogeneous porous formations: a stochastic lagrangian approach. Water Resour. Res. 49, 864-873. Zarlenga, A., Fiori, A., 2014. Stochastic analytical modeling of the biodegradation of steady plumes. J. Contam. Hydrol. 157, 106-116.
2011-01-01
Background: While many pandemic preparedness plans have promoted disease control efforts to lower and delay an epidemic peak, analytical methods for determining the required control effort and making statistical inferences have yet to be developed. As a first step to address this issue, we present a theoretical basis on which to assess the impact of an early intervention on the epidemic peak, employing a simple epidemic model. Methods: We focus on estimating the impact of an early control effort (e.g. unsuccessful containment), assuming that the transmission rate abruptly increases when control is discontinued. We provide analytical expressions for the magnitude and time of the epidemic peak, employing approximate logistic and logarithmic-form solutions for the latter. Empirical influenza data (H1N1-2009) from Japan are analyzed to estimate the effect of the summer holiday period in lowering and delaying the peak in 2009. Results: Our model estimates that the epidemic peak of the 2009 pandemic was delayed by 21 days due to the summer holiday. The decline in the peak appears to be a nonlinear function of the control-associated reduction in the reproduction number. Peak delay is shown to depend critically on the fraction of initially immune individuals. Conclusions: The proposed modeling approaches offer methodological avenues to assess empirical data and to objectively estimate the required control effort to lower and delay an epidemic peak. Analytical findings support a critical need to conduct population-wide serological surveys as a prior requirement for estimating the time of peak. PMID:21269441
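A toy version of the underlying idea, assuming a simple SIR model whose transmission rate jumps back up when an early control period ends (all parameters hypothetical), is sketched below:

```python
import numpy as np

# Toy SIR model with a transmission rate that rises when an early control
# period ends, echoing the paper's setup. All parameters are hypothetical.

def sir_peak(beta0, beta1, t_control, gamma=1/3, s0=0.99, i0=0.01,
             dt=0.05, t_max=300.0):
    s, i, t_hist, i_hist = s0, i0, [], []
    for step in range(int(t_max / dt)):
        t = step * dt
        beta = beta0 if t < t_control else beta1   # control lifted at t_control
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s, i = s + ds * dt, i + di * dt
        t_hist.append(t); i_hist.append(i)
    k = int(np.argmax(i_hist))
    return t_hist[k], i_hist[k]                    # peak time and magnitude

print("no control:   peak (t, I) =", sir_peak(0.5, 0.5, 0.0))
print("early control:", sir_peak(0.25, 0.5, 40.0))  # lower, later peak
```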
Dynamic mobility applications analytical needs assessment.
DOT National Transportation Integrated Search
2012-07-01
Dynamic Mobility Applications Analytical Needs Assessment was a one-year project (July 2011 to July 2012) to develop a strategy for assessing the potential impact of twenty-eight applications for improved mobility across national transportation syste...
Hunt, R.J.; Anderson, M.P.; Kelson, V.A.
1998-01-01
This paper demonstrates that analytic element models have potential as powerful screening tools that can facilitate or improve calibration of more complicated finite-difference and finite-element models. We demonstrate how a two-dimensional analytic element model was used to identify errors in a complex three-dimensional finite-difference model caused by incorrect specification of boundary conditions. An improved finite-difference model was developed using boundary conditions developed from a far-field analytic element model. Calibration of a revised finite-difference model was achieved using fewer zones of hydraulic conductivity and lake bed conductance than the original finite-difference model. Calibration statistics were also improved in that simulated base-flows were much closer to measured values. The improved calibration is due mainly to improved specification of the boundary conditions made possible by first solving the far-field problem with an analytic element model.
NAFLA - A Simulation Tool for the Analytical Estimation of Contaminant Plume Lengths
NASA Astrophysics Data System (ADS)
Kumar Yadav, Prabhas; Händel, Falk; Müller, Christian; Liedl, Rudolf; Dietrich, Peter
2013-03-01
Groundwater pollution with organic contaminants remains a worldwide problem. Before selecting any remediation technique, it is important to pre-assess contaminated sites with respect to their hazard. For this purpose, several analytical and numerical approaches have been used, and for an initial assessment of contaminated sites the MS-Excel© tool "NAFLA" was developed. "NAFLA" allows a quick and straightforward calculation and comparison of several analytical approaches for the estimation of maximum plume length under steady-state conditions. These approaches differ from each other in source geometry, model domain orientation, and in the consideration of (bio)chemical reactions within the domain. In this communication, we provide details about the development of "NAFLA", its possible usage, and information for users. The tool is especially designed for application in student education and by authorities and consultants.
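As a taste of what such plume-length estimates involve, the simplest possible case (steady advection with first-order decay, ignoring the dispersion and reactant mixing that the approaches compared in NAFLA account for) gives L = (v/λ)·ln(C0/Ct):

```python
import math

# Simplest steady-state plume-length estimate: pure advection with
# first-order decay, C(x) = C0 * exp(-lam * x / v). Real screening formulas
# add dispersion and reaction-partner mixing; all numbers are hypothetical.

v = 0.5       # groundwater velocity, m/d
lam = 0.01    # first-order decay rate, 1/d
C0 = 10.0     # source concentration, mg/L
C_t = 0.01    # threshold concentration, mg/L

L_max = (v / lam) * math.log(C0 / C_t)
print(f"L_max = {L_max:.0f} m")
```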
NASA Astrophysics Data System (ADS)
Rozov, V.; Alekseev, A.
2015-08-01
The necessity to address a wide spectrum of engineering problems in ITER determined the need for efficient tools for modeling the magnetic environment and force interactions between the main components of the magnet system. The assessment of the operating window for the machine, determined by the electromagnetic (EM) forces, and the check of feasibility of particular scenarios play an important role in ensuring safe operation. Such analysis-powered prevention of damage forms an element of the Machine Operations and Investment Protection strategy. The corresponding analysis is a necessary step in preparing the commissioning that finalizes the construction phase, and it must be supported by the development of efficient and robust simulators and by multi-physics/multi-system integration of models. The developed numerical model of interactions in the ITER magnet system, based on the use of pre-computed influence matrices, facilitates immediate and complete assessment and systematic specification of EM loads on magnets in all foreseen operating regimes, their maximum values, envelopes and the most critical scenarios. The common principles of interaction in typical bilateral configurations have been generalized to asymmetric conditions, caused both by the plasma and by the hardware, including asymmetric plasma events and magnet-system fault cases. The specification of loads is supported by functional approximation of nodal and distributed data with continuous patterns/analytical interpolants. The global model of interactions, together with the mesh-independent analytical format of the output, provides a source of self-consistent and transferable data on the spatial distribution of the system of forces for assessments of the structural performance of components, assemblies and supporting structures. The numerical model used is fully parametrized, which makes it very suitable for multi-variant and sensitivity studies (positioning, off-normal events, asymmetry, etc.). The obtained results and matrices form a basis for a relatively simple and robust force processor as a specialized module of a global simulator for diagnostics, operational instrumentation, monitoring and control, as well as a scenario assessment tool. This paper gives an overview of the model, the applied technique, the assessed problems and the qualitative and quantitative results obtained.
Rapid B-rep model preprocessing for immersogeometric analysis using analytic surfaces
Wang, Chenglong; Xu, Fei; Hsu, Ming-Chen; Krishnamurthy, Adarsh
2017-01-01
Computational fluid dynamics (CFD) simulations of flow over complex objects have traditionally been performed using fluid-domain meshes that conform to the shape of the object. However, creating shape-conforming meshes for complicated geometries like automobiles requires extensive geometry preprocessing. This process is usually tedious and requires modifying the geometry, including specialized operations such as defeaturing and filling of small gaps. Hsu et al. (2016) developed a novel immersogeometric fluid-flow method that does not require the generation of a boundary-fitted mesh for the fluid domain. However, their method used the NURBS parameterization of the surfaces to generate the surface quadrature points that enforce the boundary conditions, which required the B-rep model to be converted completely to NURBS before analysis could be performed. This conversion usually leads to poorly parameterized NURBS surfaces and can produce poorly trimmed or missing surface features. In addition, converting simple geometries such as cylinders to NURBS imposes a performance penalty, since these geometries have to be treated as rational splines. As a result, the geometry has to be inspected again after conversion to ensure analysis compatibility, which can increase the computational cost. In this work, we have extended the immersogeometric method to generate surface quadrature points directly from analytic surfaces. We have developed quadrature rules for four kinds of analytic surfaces: planes, cones, spheres, and toroids. We have also developed methods for performing adaptive quadrature on trimmed analytic surfaces. Since analytic surfaces are frequently used in constructing solid models, this method also generates quadrature points on real-world geometries faster than using only NURBS surfaces. To assess the accuracy of the proposed method, we perform simulations of a benchmark problem of flow over a torpedo shape made of analytic surfaces and compare those to immersogeometric simulations of the same model with NURBS surfaces. We also compare the results of our immersogeometric method with those obtained using boundary-fitted CFD of a tessellated torpedo shape, and quantities of interest such as drag coefficient are in good agreement. Finally, we demonstrate the effectiveness of our immersogeometric method for high-fidelity industrial-scale simulations by performing an aerodynamic analysis of a truck that has a large percentage of analytic surfaces. Using analytic surfaces instead of NURBS avoids unnecessary surface-type conversion and significantly reduces model-preprocessing time, while providing the same accuracy for the aerodynamic quantities of interest. PMID:29051678
Pfeiffer, Christine M; Looker, Anne C
2017-12-01
Biochemical assessment of iron status relies on serum-based indicators, such as serum ferritin (SF), transferrin saturation, and soluble transferrin receptor (sTfR), as well as erythrocyte protoporphyrin. These indicators present challenges for clinical practice and national nutrition surveys, and often iron status interpretation is based on the combination of several indicators. The diagnosis of iron deficiency (ID) through SF concentration, the most commonly used indicator, is complicated by concomitant inflammation. sTfR concentration is an indicator of functional ID that is not an acute-phase reactant, but challenges in its interpretation arise because of the lack of assay standardization, common reference ranges, and common cutoffs. It is unclear which indicators are best suited to assess excess iron status. The value of hepcidin, non-transferrin-bound iron, and reticulocyte indexes is being explored in research settings. Serum-based indicators are generally measured on fully automated clinical analyzers available in most hospitals. Although international reference materials have been available for years, the standardization of immunoassays is complicated by the heterogeneity of antibodies used and the absence of physicochemical reference methods to establish "true" concentrations. From 1988 to 2006, the assessment of iron status in NHANES was based on the multi-indicator ferritin model. However, the model did not indicate the severity of ID and produced categorical estimates. More recently, iron status assessment in NHANES has used the total body iron stores (TBI) model, in which the log ratio of sTfR to SF is assessed. Together, sTfR and SF concentrations cover the full range of iron status. The TBI model better predicts the absence of bone marrow iron than SF concentration alone, and TBI can be analyzed as a continuous variable. Additional consideration of methodologies, interpretation of indicators, and analytic standardization is important for further improvements in iron status assessment. © 2017 American Society for Nutrition.
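A sketch of the log-ratio total-body-iron computation referred to above, using the regression constants reported by Cook et al. (2003); the constants and unit conventions should be treated as illustrative rather than authoritative:

```python
import math

# Total body iron (TBI) from the log ratio of sTfR to serum ferritin,
# body iron (mg/kg) = -(log10(ratio) - 2.8229) / 0.1207,
# with the ratio conventionally formed from sTfR in ug/L over ferritin in
# ug/L (hence the factor 1000 when sTfR is reported in mg/L). Constants as
# reported by Cook et al. (2003); treat as illustrative.

def total_body_iron(stfr_mg_l, ferritin_ug_l):
    ratio = (stfr_mg_l * 1000.0) / ferritin_ug_l
    return -(math.log10(ratio) - 2.8229) / 0.1207

print(round(total_body_iron(stfr_mg_l=5.0, ferritin_ug_l=30.0), 1), "mg/kg")
```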
NASA Astrophysics Data System (ADS)
Phanikumar, Mantha S.; McGuire, Jennifer T.
2010-08-01
Push-pull tests are a popular technique to investigate various aquifer properties and microbial reaction kinetics in situ. Most previous studies have interpreted push-pull test data using approximate analytical solutions to estimate (generally first-order) reaction rate coefficients. Though useful, these analytical solutions may not be able to describe important complexities in rate data. This paper reports the development of a multi-species, radial-coordinate numerical model (PPTEST) that includes the effects of sorption, reaction lag time and arbitrary reaction order kinetics to estimate rates in the presence of mixing interfaces such as those created between injected "push" water and native aquifer water. The model has the ability to describe an arbitrary number of species and user-defined reaction rate expressions, including Monod/Michaelis-Menten kinetics. The FORTRAN code uses a finite-difference numerical scheme based on the advection-dispersion-reaction equation and was developed to describe the radial flow and transport during a push-pull test. The accuracy of the numerical solutions was assessed by comparing numerical results with analytical solutions and field data available in the literature. The model described the observed breakthrough data for tracers (chloride and iodide-131) and reactive components (sulfate and strontium-85) well and was found to be useful for testing hypotheses related to the complex set of processes operating near mixing interfaces.
On the Use of the Beta Distribution in Probabilistic Resource Assessments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olea, Ricardo A., E-mail: olea@usgs.gov
2011-12-15
The triangular distribution is a popular choice when it comes to modeling bounded continuous random variables. Its wide acceptance derives mostly from its simple analytic properties and the ease with which modelers can specify its three parameters through the extremes and the mode. On the negative side, hardly any real process follows a triangular distribution, which from the outset puts at a disadvantage any model employing triangular distributions. At a time when numerical techniques such as the Monte Carlo method are displacing analytic approaches in stochastic resource assessments, easy specification remains the most attractive characteristic of the triangular distribution. The beta distribution is another continuous distribution defined within a finite interval offering wider flexibility in style of variation, thus allowing consideration of models in which the random variables closely follow the observed or expected styles of variation. Despite its more complex definition, generation of values following a beta distribution is as straightforward as generating values following a triangular distribution, leaving the selection of parameters as the main impediment to practically considering beta distributions. This contribution intends to promote the acceptance of the beta distribution by explaining its properties and offering several suggestions to facilitate the specification of its two shape parameters. In general, given the same distributional parameters, use of the beta distribution in stochastic modeling may yield significantly different results, yet better estimates, than the triangular distribution.
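One widely used recipe for specifying the two shape parameters from the same three judgments used for a triangular distribution is the PERT convention; a minimal sketch (values hypothetical):

```python
import numpy as np

# PERT specification of a beta distribution from (min, mode, max):
# alpha = 1 + 4*(m - a)/(b - a), beta = 1 + 4*(b - m)/(b - a),
# then rescale samples from [0, 1] to [a, b].

def pert_samples(a, m, b, size=100_000, seed=0):
    rng = np.random.default_rng(seed)
    alpha = 1.0 + 4.0 * (m - a) / (b - a)
    beta = 1.0 + 4.0 * (b - m) / (b - a)
    return a + (b - a) * rng.beta(alpha, beta, size=size)

x = pert_samples(a=10.0, m=25.0, b=80.0)   # e.g. a resource volume, units arbitrary
print(round(x.mean(), 2), round(x.std(), 2))
```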
Rao, Shalinee; Masilamani, Suresh; Sundaram, Sandhya; Duvuru, Prathiba; Swaminathan, Rajendiran
2016-01-01
Quality monitoring in a histopathology unit is categorized into three phases, pre-analytical, analytical and post-analytical, to cover the various steps in the entire test cycle. A review of the literature on quality evaluation studies pertaining to histopathology revealed that earlier reports mainly focused on analytical aspects, with limited studies assessing the pre-analytical phase. The pre-analytical phase encompasses several processing steps and handling of the specimen/sample by multiple individuals, thus allowing ample scope for errors. Due to its critical nature and the limited studies in the past assessing quality in the pre-analytical phase, it deserves more attention. This study was undertaken to analyse and assess the quality parameters of the pre-analytical phase in a histopathology laboratory. This was a retrospective study of pre-analytical parameters in the histopathology laboratory of a tertiary care centre on 18,626 tissue specimens received in 34 months. Registers and records were checked for efficiency and errors for the pre-analytical quality variables: specimen identification, specimen in appropriate fixative, lost specimens, daily internal quality control performance on staining, performance in an inter-laboratory quality assessment program {External quality assurance program (EQAS)} and evaluation of internal non-conformities (NC) for other errors. The study revealed incorrect specimen labelling in 0.04%, 0.01% and 0.01% of specimens in 2007, 2008 and 2009 respectively. About 0.04%, 0.07% and 0.18% of specimens were not sent in fixative in 2007, 2008 and 2009 respectively. There was no incidence of lost specimens. A total of 113 non-conformities were identified, of which 92.9% belonged to the pre-analytical phase. The predominant NC (any deviation from the normal standard which may generate an error and compromise quality standards) identified was wrong labelling of slides. Performance in EQAS for the pre-analytical phase was satisfactory in 6 of 9 cycles. A low incidence of errors in the pre-analytical phase implies that a satisfactory level of quality standards was being practiced, with still some scope for improvement.
MPD Thruster Performance Analytic Models
NASA Technical Reports Server (NTRS)
Gilland, James; Johnston, Geoffrey
2003-01-01
Magnetoplasmadynamic (MPD) thrusters are capable of accelerating quasi-neutral plasmas to high exhaust velocities using megawatts (MW) of electric power. These characteristics make such devices worthy of consideration for demanding, far-term missions such as the human exploration of Mars or beyond. Assessment of MPD thrusters at the system and mission level is often difficult because they remain ongoing experimental research topics rather than developed thrusters. However, in order to assess the utility of MPD thrusters in later missions, some adequate characterization of performance (or, more exactly, projected performance) and a system-level definition are required for use in analyses. The most recent physical models of self-field MPD thrusters have been examined, assessed, and reconfigured for use by systems and mission analysts. The physical models allow rational projections of thruster performance based on physical parameters that can be measured in the laboratory. The models and their implications for the design of future MPD thrusters are presented.
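One of the classical physics-based scaling laws on which such performance models build is the Maecker self-field thrust relation; a hedged sketch with illustrative geometry and current follows.

```python
import math

# Maecker self-field thrust relation for MPD thrusters:
# T = (mu0 * J^2 / (4*pi)) * (ln(r_anode / r_cathode) + 3/4).
# Geometry and current below are illustrative only.

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def maecker_thrust(current_a, r_anode, r_cathode):
    return (MU0 * current_a**2 / (4.0 * math.pi)) * \
           (math.log(r_anode / r_cathode) + 0.75)

print(round(maecker_thrust(20e3, r_anode=0.05, r_cathode=0.01), 1), "N")  # 20 kA
```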
WHAEM: PROGRAM DOCUMENTATION FOR THE WELLHEAD ANALYTIC ELEMENT MODEL
The Wellhead Analytic Element Model (WhAEM) demonstrates a new technique for the definition of time-of-travel capture zones in relatively simple geohydrologic settings. he WhAEM package includes an analytic element model that uses superposition of (many) analytic solutions to gen...
Two-condition within-participant statistical mediation analysis: A path-analytic framework.
Montoya, Amanda K; Hayes, Andrew F
2017-03-01
Researchers interested in testing mediation often use designs where participants are measured on a dependent variable Y and a mediator M in both of 2 different circumstances. The dominant approach to assessing mediation in such a design, proposed by Judd, Kenny, and McClelland (2001), relies on a series of hypothesis tests about components of the mediation model and is not based on an estimate of or formal inference about the indirect effect. In this article we recast Judd et al.'s approach in the path-analytic framework that is now commonly used in between-participant mediation analysis. By so doing, it is apparent how to estimate the indirect effect of a within-participant manipulation on some outcome through a mediator as the product of paths of influence. This path-analytic approach eliminates the need for discrete hypothesis tests about components of the model to support a claim of mediation, as Judd et al.'s method requires, because it relies only on an inference about the product of paths-the indirect effect. We generalize methods of inference for the indirect effect widely used in between-participant designs to this within-participant version of mediation analysis, including bootstrap confidence intervals and Monte Carlo confidence intervals. Using this path-analytic approach, we extend the method to models with multiple mediators operating in parallel and serially and discuss the comparison of indirect effects in these more complex models. We offer macros and code for SPSS, SAS, and Mplus that conduct these analyses. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
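A minimal sketch of the Monte Carlo confidence interval for an indirect effect a·b, the style of inference the article generalizes (path estimates and standard errors below are hypothetical):

```python
import numpy as np

# Monte Carlo confidence interval for the indirect effect a*b: draw the two
# path estimates from normal distributions defined by their estimates and
# standard errors, multiply, and take percentiles of the product.

rng = np.random.default_rng(0)
a_hat, se_a = 0.40, 0.10   # X -> M path (hypothetical)
b_hat, se_b = 0.50, 0.15   # M -> Y path, controlling for X (hypothetical)

ab = rng.normal(a_hat, se_a, 100_000) * rng.normal(b_hat, se_b, 100_000)
lo, hi = np.percentile(ab, [2.5, 97.5])
print(f"indirect effect = {a_hat * b_hat:.3f}, 95% MC CI = [{lo:.3f}, {hi:.3f}]")
```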
Strategic analytics: towards fully embedding evidence in healthcare decision-making.
Garay, Jason; Cartagena, Rosario; Esensoy, Ali Vahit; Handa, Kiren; Kane, Eli; Kaw, Neal; Sadat, Somayeh
2015-01-01
Cancer Care Ontario (CCO) has implemented multiple information technology solutions and collected health-system data to support its programs. There is now an opportunity to leverage these data and perform advanced end-to-end analytics that inform decisions around improving health-system performance. In 2014, CCO engaged in an extensive assessment of its current data capacity and capability, with the intent to drive increased use of data for evidence-based decision-making. The breadth and volume of data at CCO uniquely position the organization to contribute not only to system-wide operational reporting, but also to more advanced modelling of current and future state system management and planning. In 2012, CCO established a strategic analytics practice to help the agency's programs contextualize and inform key business decisions and to provide support through innovative predictive analytics solutions. This paper describes the organizational structure, services and supporting operations that have enabled progress to date, and discusses the next steps towards the vision of embedding evidence fully into healthcare decision-making. Copyright © 2014 Longwoods Publishing.
The use of analytical models in human-computer interface design
NASA Technical Reports Server (NTRS)
Gugerty, Leo
1993-01-01
Recently, a large number of human-computer interface (HCI) researchers have investigated building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential of improving early designs and replacing some of the early phases of usability testing, thus reducing the cost of interface design. This paper describes some of the many analytical models that are currently being developed and evaluates the usefulness of analytical models for human-computer interface design. This paper will focus on computational, analytical models, such as the GOMS model, rather than less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The paper also discusses some of the practical requirements for using analytical models in complex design organizations such as NASA.
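As a concrete example of this class of model, the Keystroke-Level Model (KLM) from the GOMS family estimates task time by summing operator times; the sketch below uses the commonly cited Card, Moran and Newell operator estimates and a hypothetical task sequence.

```python
# Keystroke-Level Model (KLM) sketch: one of the simplest analytical models
# in the GOMS family. Operator times are the commonly cited Card, Moran &
# Newell estimates; the task sequence is hypothetical.

KLM_TIMES = {"K": 0.28,   # keystroke (average typist), s
             "P": 1.10,   # point with mouse, s
             "H": 0.40,   # home hands to device, s
             "M": 1.35}   # mental preparation, s

def klm_estimate(ops):
    return sum(KLM_TIMES[op] for op in ops)

# "Move hand to mouse, point at a field, mentally prepare, type 5 characters"
task = ["H", "P", "M"] + ["K"] * 5
print(f"{klm_estimate(task):.2f} s")
```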
Validation of the replica trick for simple models
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2018-04-01
We discuss the replica analytic continuation using several simple models in order to prove mathematically the validity of the replica analysis, which is used in a wide range of fields related to large-scale complex systems. While replica analysis consists of two analytical techniques—the replica trick (or replica analytic continuation) and the thermodynamical limit (and/or order parameter expansion)—we focus our study on replica analytic continuation, which is the mathematical basis of the replica trick. We apply replica analysis to solve a variety of analytical models, and examine the properties of replica analytic continuation. Based on the positive results for these models we propose that replica analytic continuation is a robust procedure in replica analysis.
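For readers unfamiliar with the identity in question, the replica trick recovers the quenched average ⟨ln Z⟩ from the integer-n moments ⟨Z^n⟩ by analytic continuation toward n → 0:

\[
\langle \ln Z \rangle \;=\; \lim_{n \to 0} \frac{\langle Z^n \rangle - 1}{n}
\;=\; \lim_{n \to 0} \frac{\partial}{\partial n} \ln \langle Z^n \rangle .
\]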
Longitudinal Cross-Gender Factorial Invariance of the Academic Motivation Scale
ERIC Educational Resources Information Center
Grouzet, Frederick M. E.; Otis, Nancy; Pelletier, Luc G.
2006-01-01
This study examined the measurement and latent construct invariance of the Academic Motivation Scale (Vallerand, Blais, Brier, & Pelletier, 1989; Vallerand et al., 1992, 1993) across both gender and time. An integrative analytical strategy was used to assess in one set of nested models both longitudinal and cross-gender invariance, and…
Tests of Measurement Invariance without Subgroups: A Generalization of Classical Methods
ERIC Educational Resources Information Center
Merkle, Edgar C.; Zeileis, Achim
2013-01-01
The issue of measurement invariance commonly arises in factor-analytic contexts, with methods for assessment including likelihood ratio tests, Lagrange multiplier tests, and Wald tests. These tests all require advance definition of the number of groups, group membership, and offending model parameters. In this paper, we study tests of measurement…
ERIC Educational Resources Information Center
Podschuweit, Sören; Bernholt, Sascha; Brückmann, Maja
2016-01-01
Background: Complexity models have provided a suitable framework in various domains to assess students' educational achievement. Complexity is often used as the analytical focus when regarding learning outcomes, i.e. when analyzing written tests or problem-centered interviews. Numerous studies reveal negative correlations between the complexity of…
Note: equation of state and the freezing point in the hard-sphere model.
Robles, Miguel; López de Haro, Mariano; Santos, Andrés
2014-04-07
The merits of different analytical equations of state for the hard-sphere system with respect to the recently computed high-accuracy value of the freezing-point packing fraction are assessed. It is found that the Carnahan-Starling-Kolafa and the branch-point approximant equations of state yield the best performance.
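For reference, the Carnahan-Starling compressibility factor is closed-form and easy to evaluate at the freezing-point packing fraction (η ≈ 0.494):

```python
# Carnahan-Starling compressibility factor for hard spheres:
# Z = (1 + eta + eta^2 - eta^3) / (1 - eta)^3,
# evaluated here at the freezing-point packing fraction (~0.494).

def z_carnahan_starling(eta):
    return (1.0 + eta + eta**2 - eta**3) / (1.0 - eta) ** 3

eta_f = 0.494
print(round(z_carnahan_starling(eta_f), 2))   # ~12.5
```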
Dynamic Assessment of Water Quality Based on a Variable Fuzzy Pattern Recognition Model
Xu, Shiguo; Wang, Tianxiang; Hu, Suduan
2015-01-01
Water quality assessment is an important foundation of water resource protection and is affected by many indicators. The dynamic and fuzzy changes of water quality lead to problems for proper assessment. This paper explores a method which is in accordance with the water quality changes. The proposed method is based on the variable fuzzy pattern recognition (VFPR) model and combines the analytic hierarchy process (AHP) model with the entropy weight (EW) method. The proposed method was applied to dynamically assess the water quality of Biliuhe Reservoir (Dalian, China). The results show that the water quality level is between levels 2 and 3 and is worse in August or September, owing to increasing water temperature and rainfall. Weights and methods are compared, and random errors of the indicator values are analyzed. It is concluded that the proposed method has the advantages of dynamism, fuzzification and stability by considering the interval influence of multiple indicators and using the average level characteristic values of four models as results. PMID:25689998
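A minimal sketch of the entropy weight (EW) step combined with AHP above: indicators whose values vary more across samples carry more information and receive larger weights (the data matrix is hypothetical, and the usual min-max normalization is omitted for brevity):

```python
import numpy as np

# Entropy weight method: compute Shannon entropy of each indicator across
# samples; weights are proportional to 1 - entropy. Data are hypothetical
# (rows = monitoring samples, columns = water quality indicators).

def entropy_weights(X):
    P = X / X.sum(axis=0)                                   # column-normalize
    n = X.shape[0]
    E = -(P * np.log(np.clip(P, 1e-12, None))).sum(axis=0) / np.log(n)
    d = 1.0 - E                                             # diversification degree
    return d / d.sum()

X = np.array([[2.1, 7.0, 0.30],
              [2.3, 6.5, 0.28],
              [3.9, 6.8, 0.55]])
print(entropy_weights(X).round(3))
```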
Wang, Xiao; Esquerre, Carlos; Downey, Gerard; Henihan, Lisa; O'Callaghan, Donal; O'Donnell, Colm
2018-06-01
In this study, visible and near-infrared (Vis-NIR), mid-infrared (MIR) and Raman process analytical technologies were investigated for assessment of infant formula quality and compositional parameters, namely preheat temperature, storage temperature, storage time, fluorescence of advanced Maillard products and soluble tryptophan (FAST) index, and soluble protein, fat and surface free fat (SFF) content. PLS-DA models developed using spectral data with appropriate data pre-treatment and significant variables selected using Martens' uncertainty test had good accuracy for the discrimination of preheat temperature (92.3-100%) and storage temperature (91.7-100%). The best PLS regression models developed yielded values for the ratio of prediction error to deviation (RPD) of 3.6-6.1, 2.1-2.7, 1.7-2.9, 1.6-2.6 and 2.5-3.0 for storage time, FAST index, soluble protein, fat and SFF content prediction respectively. Vis-NIR, MIR and Raman were demonstrated to be potential PAT tools for process control and quality assurance applications in infant formula and dairy ingredient manufacture. Copyright © 2018 Elsevier B.V. All rights reserved.
Lin, Hsueh-Chun; Hong, Yao-Ming; Kan, Yao-Chiang
2012-01-01
The groundwater level represents a critical factor in evaluating hillside landslides. A monitoring system built on a real-time prediction platform with online analytical functions is important for forecasting the groundwater level from instantaneously monitored data when heavy precipitation raises the groundwater level under the hillslope and causes instability. This study designs the backend of an environmental monitoring system with efficient algorithms for machine learning and a knowledge bank for predicting groundwater level fluctuations. A Web-based platform with a model-view-controller-based architecture is established using Web services technology and an engineering data warehouse to support online analytical processing and to feed back risk assessment parameters for real-time prediction. The proposed system incorporates models of hydrological computation, machine learning, Web services, and online prediction to satisfy a variety of risk assessment requirements and approaches to hazard prevention. The rainfall data monitored from the potential landslide areas at Lu-Shan, Nantou and Li-Shan, Taichung, in Taiwan, are applied to examine the system design.
2017-10-24
The Food and Drug Administration (FDA or we) is classifying the device to detect and measure non-microbial analyte(s) in human clinical specimens to aid in assessment of patients with suspected sepsis into class II (special controls). The special controls that apply to the device type are identified in this order and will be part of the codified language for the device to detect and measure non-microbial analyte(s) in human clinical specimens to aid in assessment of patients with suspected sepsis's classification. We are taking this action because we have determined that classifying the device into class II (special controls) will provide a reasonable assurance of safety and effectiveness of the device. We believe this action will also enhance patients' access to beneficial innovative devices, in part by reducing regulatory burdens.
NASA Astrophysics Data System (ADS)
Zuiker, Steven; Reid Whitaker, J.
2014-04-01
This paper describes the 5E+I/A inquiry model and reports a case study of one curricular enactment by a US fifth-grade classroom. A literature review establishes the model's conceptual adequacy with respect to longstanding research related to both the 5E inquiry model and multiple, incremental innovations of it. As a collective line of research, the review highlights a common emphasis on formative assessment, at times coupled either with differentiated instruction strategies or with activities that target the generalization of learning. The 5E+I/A model contributes a multi-level assessment strategy that balances formative and summative functions of multiple forms of assessment in order to support classroom participation while still attending to individual achievement. The case report documents the enactment of a weeklong 5E+I/A curricular design as a preliminary account of the model's empirical adequacy. A descriptive and analytical narrative illustrates variable ways that multi-level assessment makes student thinking visible and pedagogical decision-making more powerful. In light of both, it also documents productive adaptations to a flexible curricular design and considers future research to advance this collective line of inquiry.
Geobiochemistry of metabolism: Standard state thermodynamic properties of the citric acid cycle
NASA Astrophysics Data System (ADS)
Canovas, Peter A.; Shock, Everett L.
2016-12-01
Integrating microbial metabolism into geochemical modeling allows assessments of energy and mass transfer between the geosphere and the microbial biosphere. Energy and power supplies and demands can be assessed from analytical geochemical data given thermodynamic data for compounds involved in catabolism and anabolism. Results are reported here from a critique of the available standard state thermodynamic data for organic acids and acid anions involved in the citric acid cycle (also known as the tricarboxylic acid cycle or the Krebs cycle). The development of methods for estimating standard state data unavailable from experiments is described, together with methods to predict corresponding values at elevated temperatures and pressures using the revised Helgeson-Kirkham-Flowers (HKF) equation of state for aqueous species. Internal consistency is maintained with standard state thermodynamic data for organic and inorganic aqueous species commonly used in geochemical modeling efforts. Standard state data and revised-HKF parameters are used to predict equilibrium dissociation constants for the organic acids in the citric acid cycle, and to assess standard Gibbs energies of reactions for each step in the cycle at elevated temperatures and pressures. The results presented here can be used with analytical data from natural and experimental systems to assess the energy and power demands of microorganisms throughout the habitable ranges of pressure and temperature, and to assess the consequences of abiotic organic compound alteration processes at conditions of subsurface aquifers, sedimentary basins, hydrothermal systems, meteorite parent bodies, and ocean worlds throughout the solar system.
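The bridge from such standard state data to analytical geochemical measurements is the standard Gibbs energy relation below; this is a generic statement of the thermodynamic bookkeeping, not a formulation specific to this paper:

\Delta G_r = \Delta G_r^{\circ}(T, P) + R\,T \ln Q_r

where \Delta G_r^{\circ}(T, P) is obtained from the revised-HKF equation of state at the temperature and pressure of interest and Q_r is the activity product evaluated from analytical data; negative values indicate an energy supply for that step of the cycle.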
Philips, Z; Ginnelly, L; Sculpher, M; Claxton, K; Golder, S; Riemsma, R; Woolacoot, N; Glanville, J
2004-09-01
To identify existing guidelines and develop a synthesised guideline plus accompanying checklist. In addition, to provide guidance on key theoretical, methodological and practical issues, and to consider the implications of this research for what might be expected of future decision-analytic models. Data sources: electronic databases. A systematic review of existing good practice guidelines was undertaken to identify and summarise guidelines currently available for assessing the quality of decision-analytic models that have been undertaken for health technology assessment. A synthesised good practice guidance and accompanying checklist was developed. Two specific methods areas in decision modelling were considered. The first topic is the identification of parameter estimates from the published literature. Parameter searches were developed and piloted using a case-study model. The second topic relates to bias in parameter estimates; that is, how to adjust estimates of treatment effect from observational studies where there are risks of selection bias. A systematic literature review was conducted to identify those studies looking at quantification of bias in parameter estimates and the implication of this bias. Fifteen studies met the inclusion criteria and were reviewed and consolidated into a single set of brief statements of good practice. From this, a checklist was developed and applied to three independent decision-analytic models. Although the checklist provided excellent guidance on some key issues for model evaluation, it was too general to pick up on the specific nuances of each model. The searches that were developed helped to identify important data for inclusion in the model. However, the quality of life searches proved to be problematic: the published search filters did not focus on those measures specific to cost-effectiveness analysis and, although the strategies developed as part of this project were more successful, few data were found. Of the 11 studies meeting the criteria on the effect of selection bias, five concluded that a non-randomised trial design is associated with bias and six studies found 'similar' estimates of treatment effects from observational studies or non-randomised clinical trials and randomised controlled trials (RCTs). One purpose of developing the synthesised guideline and checklist was to provide a framework for critical appraisal by the various parties involved in the health technology assessment process. First, the guideline and checklist can be used by groups that are reviewing other analysts' models and, secondly, the guideline and checklist could be used by the various analysts as they develop their models (to use it as a check on how they are developing and reporting their analyses). The Expert Advisory Group (EAG) that was convened to discuss the potential role of the guidance and checklist felt that, in general, the guidance and checklist would be a useful tool, although the checklist is not meant to be used exclusively to determine a model's quality, and so should not be used as a substitute for critical appraisal. The review of current guidelines showed that although authors may provide a consistent message regarding some aspects of modelling, in other areas conflicting attributes are presented in different guidelines. In general, the checklist appears to perform well, in terms of identifying those aspects of the model that should be of particular concern to the reader.
The checklist cannot, however, provide answers regarding the appropriateness of the model structure and structural assumptions; this is a general problem with generic checklists rather than a shortcoming of the synthesised guidance and checklist developed here. The assessment of the checklist, as well as feedback from the EAG, indicated the importance of its use in conjunction with a more general checklist or guidelines on economic evaluation. Further methods research into the following areas would be valuable: the quantification of selection bias in non-controlled studies and in controlled observational studies; the level of bias in the different non-RCT study designs; a comparison of results from RCTs with those from other non-randomised studies; assessment of the strengths and weaknesses of alternative ways to adjust for bias in a decision model; and how to prioritise searching for parameter estimates.
Survey of NASA research on crash dynamics
NASA Technical Reports Server (NTRS)
Thomson, R. G.; Carden, H. D.; Hayduk, R. J.
1984-01-01
Ten years of structural crash dynamics research activities conducted on general aviation aircraft by the National Aeronautics and Space Administration (NASA) are described. Thirty-two full-scale crash tests were performed at Langley Research Center, and pertinent data on airframe and seat behavior were obtained. Concurrent with the experimental program, analytical methods were developed to help predict structural behavior during impact. The effects of flight parameters at impact on cabin deceleration pulses at the seat/occupant interface, experimental and analytical correlation of data on load-limiting subfloor and seat configurations, airplane section test results for computer modeling validation, and data from emergency-locator-transmitter (ELT) investigations to determine probable cause of false alarms and nonactivations are assessed. Computer programs which provide designers with analytical methods for predicting accelerations, velocities, and displacements of collapsing structures are also discussed.
Analysis of transient fission gas behaviour in oxide fuel using BISON and TRANSURANUS
Barani, T.; Bruschi, E.; Pizzocri, D.; ...
2017-01-03
The modelling of fission gas behaviour is a crucial aspect of nuclear fuel analysis in view of the related effects on the thermo-mechanical performance of the fuel rod, which can be particularly significant during transients. Experimental observations indicate that substantial fission gas release (FGR) can occur on a small time scale during transients (burst release). To accurately reproduce the rapid kinetics of burst release in fuel performance calculations, a model that accounts for non-diffusional mechanisms such as fuel micro-cracking is needed. In this work, we present and assess a model for transient fission gas behaviour in oxide fuel, which is applied as an extension of diffusion-based models to allow for the burst release effect. The concept and governing equations of the model are presented, and the effect of the newly introduced parameters is evaluated through an analytic sensitivity analysis. Then, the model is assessed for application to integral fuel rod analysis. The approach that we take for model assessment involves implementation in two structurally different fuel performance codes, namely, BISON (multi-dimensional finite element code) and TRANSURANUS (1.5D semi-analytic code). The model is validated against 19 Light Water Reactor fuel rod irradiation experiments from the OECD/NEA IFPE (International Fuel Performance Experiments) database, all of which are simulated with both codes. The results point out an improvement in both the qualitative representation of the FGR kinetics and the quantitative predictions of integral fuel rod FGR, relative to the canonical, purely diffusion-based models, with both codes. The overall quantitative improvement of the FGR predictions in the two codes is comparable. Furthermore, calculated radial profiles of xenon concentration are investigated and compared to experimental data, demonstrating the representation of the underlying mechanisms of burst release by the new model.
Using Learning Analytics to Assess Student Learning in Online Courses
ERIC Educational Resources Information Center
Martin, Florence; Ndoye, Abdou
2016-01-01
Learning analytics can be used to enhance student engagement and performance in online courses. Using learning analytics, instructors can collect and analyze data about students and improve the design and delivery of instruction to make it more meaningful for them. In this paper, the authors review different categories of online assessments and…
An analytic performance model of disk arrays and its application
NASA Technical Reports Server (NTRS)
Lee, Edward K.; Katz, Randy H.
1991-01-01
As disk arrays become widely used, tools for understanding and analyzing their performance become increasingly important. In particular, performance models can be invaluable in both configuring and designing disk arrays. Accurate analytic performance models are desirable over other types of models because they can be quickly evaluated, are applicable under a wide range of system and workload parameters, and can be manipulated by a range of mathematical techniques. Unfortunately, analytical performance models of disk arrays are difficult to formulate due to the presence of queuing and fork-join synchronization; a disk array request is broken up into independent disk requests which must all complete to satisfy the original request. We develop, validate, and apply an analytic performance model for disk arrays. We derive simple equations for approximating their utilization, response time, and throughput. We then validate the analytic model via simulation and investigate the accuracy of each approximation used in deriving the analytical model. Finally, we apply the analytical model to derive an equation for the optimal unit of data striping in disk arrays.
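As an illustration of the modeling style, the single-disk building block of such a model can be written as a simple M/M/1 queue; this sketch is not the paper's derived disk-array equations, which must additionally approximate the fork-join synchronization step:

def mm1_metrics(arrival_rate, service_rate):
    # Single-server queue with Poisson arrivals and exponential service.
    # A disk-array model layers a fork-join approximation on top of this,
    # since the fork-join queue has no exact closed-form solution.
    assert arrival_rate < service_rate, "queue must be stable"
    utilization = arrival_rate / service_rate
    response_time = 1.0 / (service_rate - arrival_rate)
    throughput = arrival_rate
    return utilization, response_time, throughput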
On synchronization in power-grids modelled as networks of second-order Kuramoto oscillators
NASA Astrophysics Data System (ADS)
Grzybowski, J. M. V.; Macau, E. E. N.; Yoneyama, T.
2016-11-01
This work concerns analytical results on the role of coupling strength in the phenomenon of onset of complete frequency locking in power-grids modelled as a network of second-order Kuramoto oscillators. Those results allow estimation of the coupling strength for the onset of complete frequency locking and assessment of the features of the network and oscillators that favor synchronization. The analytical results are evaluated using an order parameter defined as the normalized sum of absolute values of phase deviations of the oscillators over time. The investigation of the frequency synchronization within the subsets of the parameter space involved in the synchronization problem is also carried out. It is shown that the analytical results are in good agreement with those observed in the numerical simulations. In order to illustrate the methodology, a case study is presented, involving the Brazilian high-voltage transmission system under a load peak condition to study the effect of load on the synchronizability of the grid. The results show that both the load and the centralized generation may have contributed to the 2014 blackout.
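A minimal numerical sketch of the model class, with the order parameter computed as the time-normalized sum of absolute phase deviations described above; the network, power injections, and constants below are illustrative assumptions, not the Brazilian-grid data:

import numpy as np

def order_parameter(A, P, K, M=1.0, D=0.5, dt=0.01, steps=5000):
    # Second-order Kuramoto network:
    #   M * theta_i'' = -D * theta_i' + P_i + K * sum_j A_ij * sin(theta_j - theta_i)
    n = len(P)
    theta = np.zeros(n)
    omega = np.zeros(n)
    total_dev = 0.0
    for _ in range(steps):
        coupling = K * (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        omega += dt * (P - D * omega + coupling) / M
        theta += dt * omega
        total_dev += np.abs(theta - theta.mean()).sum()
    return total_dev / (steps * n)  # small values indicate frequency locking

# Two generators (P > 0) and two loads (P < 0) on a four-node ring.
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
P = np.array([1.0, -1.0, 1.0, -1.0])
print(order_parameter(A, P, K=2.0))

Sweeping K downward in such a sketch reproduces the loss of frequency locking below a critical coupling strength, which is the quantity the analytical results estimate.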
Modeling Radionuclide Decay Chain Migration Using HYDROGEOCHEM
NASA Astrophysics Data System (ADS)
Lin, T. C.; Tsai, C. H.; Lai, K. H.; Chen, J. S.
2014-12-01
Nuclear technology has been employed for energy production for several decades. Although people receive many benefits from nuclear energy, releases of radioactive materials from nuclear waste disposed of in geological repositories, or accidental releases of radionuclides from nuclear facilities, inevitably pose environmental pollution and human health threats. Theoretical studies have been undertaken to understand the transport of radionuclides in subsurface environments because radionuclide transport in groundwater is one of the main pathways in exposure scenarios for the intake of radionuclides. Radionuclide transport in groundwater can be predicted using analytical solutions as well as numerical models. In this study, we simulate the transport of a radionuclide decay chain using HYDROGEOCHEM. The simulated results are verified against an analytical solution available in the literature. Excellent agreement between the numerical simulations and the analytical solutions is observed over a wide spectrum of concentrations. HYDROGEOCHEM is a useful tool for assessing the ecological and environmental impact of accidental radionuclide releases such as the Fukushima nuclear disaster, where multiple radionuclides leaked from the reactor, subsequently contaminating the local groundwater and ocean seawater in the vicinity of the nuclear plant.
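The decay-chain part of such a verification rests on the Bateman solution; here is a sketch for the no-transport limit (the advection and dispersion terms handled by HYDROGEOCHEM are omitted, and the decay constants must be pairwise distinct):

import numpy as np

def bateman_chain(N0, lam, t):
    # Number of atoms of each chain member at time t, starting from
    # N0 atoms of species 0 and none of its daughters.
    n = len(lam)
    N = np.zeros(n)
    for i in range(n):
        coeff = N0 * np.prod(lam[:i])
        total = 0.0
        for j in range(i + 1):
            denom = np.prod([lam[k] - lam[j] for k in range(i + 1) if k != j])
            total += np.exp(-lam[j] * t) / denom
        N[i] = coeff * total
    return N

print(bateman_chain(1.0e6, np.array([3e-2, 1e-3, 5e-4]), t=100.0))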
NASA Astrophysics Data System (ADS)
Chen, J. S.; Chiang, S. Y.; Liang, C. P.
2017-12-01
It is essential to develop multispecies transport analytical models, based on a set of advection-dispersion equations (ADEs) coupled with sequential first-order decay reactions, for the synchronous prediction of the plume migration of both parent and daughter species of decaying contaminants such as radionuclides, dissolved chlorinated organic compounds, pesticides and nitrogen. Although several analytical models for multispecies transport have already been reported, those currently available in the literature have primarily been derived based on ADEs with constant dispersion coefficients. However, a number of studies have demonstrated that the dispersion coefficients increase with the solute travel distance as a consequence of variation in the hydraulic properties of the porous media. This study presents novel analytical models for multispecies transport with distance-dependent dispersion coefficients. The correctness of the derived analytical models is confirmed by comparing them against numerical models; the results show perfect agreement between the analytical and numerical models. Comparison of the new analytical model for multispecies transport with scale-dependent dispersion against an analytical model with constant dispersion illustrates the effects of the dispersion coefficients on the multispecies transport of decaying contaminants.
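In LaTeX form, the governing system sketched above reads as follows; the linear growth law D(x) = a x v is one commonly assumed distance-dependent form and is stated here as an assumption, together with unit parent-to-daughter yield:

R_i \frac{\partial C_i}{\partial t} = \frac{\partial}{\partial x}\!\left( D(x)\, \frac{\partial C_i}{\partial x} \right) - v\, \frac{\partial C_i}{\partial x} + \lambda_{i-1} R_{i-1} C_{i-1} - \lambda_i R_i C_i, \qquad D(x) = a\, x\, v,

with C_0 \equiv 0 for the parent species, retardation factors R_i, first-order decay constants \lambda_i, pore velocity v, and dispersivity-growth coefficient a.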
Tsao, C C; Liou, J U; Wen, P H; Peng, C C; Liu, T S
2013-01-01
Aim: To develop analytical models and analyse the stress distribution and flexibility of nickel–titanium (NiTi) instruments subject to bending forces. Methodology: An analytical method was used to analyse the behaviour of NiTi instruments under bending forces. Two NiTi instruments (RaCe and Mani NRT) with different cross-sections and geometries were considered. Analytical results were derived using Euler–Bernoulli nonlinear differential equations that took into account the screw pitch variation of these NiTi instruments. In addition, nonlinear deformation analyses based on the analytical model and on nonlinear finite element analysis were carried out, with numerical results obtained by the finite element method. Results: According to the analytical results, the maximum curvature of the instrument occurs near the instrument tip. The finite element analysis revealed that the position of maximum von Mises stress was also near the instrument tip. Therefore, the proposed analytical model can be used to predict the position of maximum curvature in the instrument, where fracture may occur. The analytical and numerical results were compatible. Conclusion: The proposed analytical model, validated by the numerical results, is useful in the design and analysis of NiTi instruments and effective in studying their flexibility. Compared with the finite element method, the analytical model deals conveniently and effectively with the bending behaviour of rotary NiTi endodontic instruments.
A flow resistance model for assessing the impact of vegetation on flood routing mechanics
NASA Astrophysics Data System (ADS)
Katul, Gabriel G.; Poggi, Davide; Ridolfi, Luca
2011-08-01
The specification of a flow resistance factor to account for vegetative effects in the Saint-Venant equation (SVE) remains uncertain and is a subject of active research in flood routing mechanics. Here, an analytical model for the flow resistance factor is proposed for submerged vegetation, where the water depth is commensurate with the canopy height and the roughness Reynolds number is sufficiently large so as to ignore viscous effects. The analytical model predicts that the resistance factor varies with three canonical length scales: the adjustment length scale that depends on the foliage drag and leaf area density, the canopy height, and the water level. These length scales can reasonably be inferred from a range of remote sensing products making the proposed flow resistance model eminently suitable for operational flood routing. Despite the numerous simplifications, agreement between measured and modeled resistance factors and bulk velocities is reasonable across a range of experimental and field studies. The proposed model asymptotically recovers the flow resistance formulation when the water depth greatly exceeds the canopy height. This analytical treatment provides a unifying framework that links the resistance factor to a number of concepts and length scales already in use to describe canopy turbulence. The implications of the coupling between the resistance factor and the water depth on solutions to the SVE are explored via a case study, which shows a reasonable match between empirical design standard and theoretical predictions.
Aronsson, T; Bjørnstad, P; Leskinen, E; Uldall, A; de Verdier, C H
1984-01-01
The aim of this investigation was primarily to assess analytical quality expressed as between-laboratory, within-laboratory, and total imprecision, not in order to detect laboratories with poor performance, but in the positive sense to provide data for improving critical steps in analytical methodology. The aim was also to establish the present state of the art in comparison with earlier investigations to see if improvement in analytical quality could be observed.
Daniels, Vijay John; Harley, Dwight
2017-07-01
Although previous research has compared checklists to rating scales for assessing communication, the purpose of this study was to compare the effect on reliability and sensitivity to level of training of an analytic, a holistic, and a combined analytic-holistic rating scale in assessing communication skills. The University of Alberta Internal Medicine Residency runs OSCEs for postgraduate year (PGY) 1 and 2 residents and another for PGY-4 residents. Communication stations were scored with an analytic scale (empathy, non-verbal skills, verbal skills, and coherence subscales) and a holistic scale. Authors analyzed reliability of individual and combined scales using generalizability theory and evaluated each scale's sensitivity to level of training. For analytic, holistic, and combined scales, 12, 12, and 11 stations respectively yielded a Phi of 0.8 for the PGY-1,2 cohort, and 16, 16, and 14 stations yielded a Phi of 0.8 for the PGY-4 cohort. PGY-4 residents scored higher on the combined scale, the analytic rating scale, and the non-verbal and coherence subscales. A combined analytic-holistic rating scale increased score reliability and was sensitive to level of training. Given increased validity evidence, OSCE developers should consider combining analytic and holistic scales when assessing communication skills.
Analytical Model for Mean Flow and Fluxes of Momentum and Energy in Very Large Wind Farms
NASA Astrophysics Data System (ADS)
Markfort, Corey D.; Zhang, Wei; Porté-Agel, Fernando
2018-01-01
As wind-turbine arrays continue to be installed and the array size continues to grow, there is an increasing need to represent very large wind-turbine arrays in numerical weather prediction models, for wind-farm optimization, and for environmental assessment. We propose a simple analytical model for boundary-layer flow in fully-developed wind-turbine arrays, based on the concept of sparsely-obstructed shear flows. In describing the vertical distribution of the mean wind speed and shear stress within wind farms, our model estimates the mean kinetic energy harvested from the atmospheric boundary layer, and determines the partitioning between the wind power captured by the wind turbines and that absorbed by the underlying land or water. A length scale based on the turbine geometry, spacing, and performance characteristics is able to estimate the asymptotic limit for the fully-developed flow through wind-turbine arrays, and thereby determine if the wind-farm flow is fully developed for very large turbine arrays. Our model is validated using data collected in controlled wind-tunnel experiments, and its usefulness for the prediction of wind-farm performance and optimization of turbine-array spacing are described. Our model may also be useful for assessing the extent to which the extraction of wind power affects the land-atmosphere coupling or air-water exchange of momentum, with implications for the transport of heat, moisture, trace gases such as carbon dioxide, methane, and nitrous oxide, and ecologically important oxygen.
Scientific white paper on concentration-QTc modeling.
Garnett, Christine; Bonate, Peter L; Dang, Qianyu; Ferber, Georg; Huang, Dalong; Liu, Jiang; Mehrotra, Devan; Riley, Steve; Sager, Philip; Tornoe, Christoffer; Wang, Yaning
2018-06-01
The International Council for Harmonisation revised the E14 guideline through the questions and answers process to allow concentration-QTc (C-QTc) modeling to be used as the primary analysis for assessing the QTc interval prolongation risk of new drugs. A well-designed and conducted QTc assessment based on C-QTc modeling in early phase 1 studies can be an alternative approach to a thorough QT study for some drugs to reliably exclude clinically relevant QTc effects. This white paper provides recommendations on how to plan and conduct a definitive QTc assessment of a drug using C-QTc modeling in early phase clinical pharmacology and thorough QT studies. Topics included are: important study design features in a phase 1 study; modeling objectives and approach; exploratory plots; the pre-specified linear mixed effects model; general principles for model development and evaluation; and expectations for modeling analysis plans and reports. The recommendations are based on current best modeling practices, scientific literature and personal experiences of the authors. These recommendations are expected to evolve as their implementation during drug development provides additional data and with advances in analytical methodology.
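A minimal sketch of the core of such a C-QTc fit on synthetic data; the column names here are hypothetical, and the pre-specified model described in the paper additionally carries treatment, time, baseline-QTc, and random-slope terms:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_obs = 24, 6
subj = np.repeat(np.arange(n_subj), n_obs)
conc = rng.uniform(0.0, 100.0, n_subj * n_obs)      # drug concentration
dqtc = (1.0 + 0.05 * conc                           # fixed intercept and slope
        + rng.normal(0.0, 2.0, n_subj)[subj]        # subject random intercept
        + rng.normal(0.0, 5.0, conc.size))          # residual noise
df = pd.DataFrame({"subject": subj, "conc": conc, "dQTc": dqtc})

fit = smf.mixedlm("dQTc ~ conc", df, groups=df["subject"]).fit()
print(fit.params)  # the slope estimate feeds the QTc-effect exclusion test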
Modelling of light pollution in suburban areas using remotely sensed imagery and GIS.
Chalkias, C; Petrakis, M; Psiloglou, B; Lianou, M
2006-04-01
This paper describes a methodology for modelling light pollution using geographical information systems (GIS) and remote sensing (RS) technology. The proposed approach attempts to address the issue of environmental assessment in sensitive suburban areas. The modern way of life in developing countries is conducive to environmental degradation in urban and suburban areas. One specific parameter for this degradation is light pollution due to intense artificial night lighting. This paper aims to assess this parameter for the Athens metropolitan area, using modern analytical and data capturing technologies. For this purpose, night-time satellite images and analogue maps have been used in order to create the spatial database of the GIS for the study area. Using GIS advanced analytical functionality, visibility analysis was implemented. The outputs for this analysis are a series of maps reflecting direct and indirect light pollution around the city of Athens. Direct light pollution corresponds to optical contact with artificial night light sources, while indirect light pollution corresponds to optical contact with the sky glow above the city. Additionally, the assessment of light pollution in different periods allows for dynamic evaluation of the phenomenon. The case study demonstrates high levels of light pollution in Athens suburban areas and its increase over the last decade.
Crovelli, R.A.; Balay, R.H.
1991-01-01
A general risk-analysis method was developed for petroleum-resource assessment and other applications. The triangular probability distribution is used as a model with an analytic aggregation methodology based on probability theory rather than Monte-Carlo simulation. Among the advantages of the analytic method are its computational speed and flexibility, and the saving of time and cost on a microcomputer. The input into the model consists of a set of components (e.g. geologic provinces) and, for each component, three potential resource estimates: minimum, most likely (mode), and maximum. Assuming a triangular probability distribution, the mean, standard deviation, and seven fractiles (F100, F95, F75, F50, F25, F5, and F0) are computed for each component, where for example, the probability of more than F95 is equal to 0.95. The components are aggregated by combining the means, standard deviations, and respective fractiles under three possible situations: (1) perfect positive correlation, (2) complete independence, and (3) any degree of dependence between these two polar situations. A package of computer programs named the TRIAGG system was written in the Turbo Pascal 4.0 language for performing the analytic probabilistic methodology. The system consists of a program for processing triangular probability distribution assessments and aggregations, and a separate aggregation routine for aggregating aggregations. The user's documentation and program diskette of the TRIAGG system are available from USGS Open File Services. TRIAGG requires an IBM-PC/XT/AT compatible microcomputer with 256 kbytes of main memory, MS-DOS 3.1 or later, either two diskette drives or a fixed disk, and a 132 column printer. A graphics adapter and color display are optional.
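The triangular-distribution quantities TRIAGG manipulates have closed forms. Here is a sketch of the independence case (aggregation under partial dependence is not reproduced), with the paper's F95 read as the value exceeded with probability 0.95, i.e. the 0.05 quantile:

import math

def tri_mean_var(a, m, b):
    # Mean and variance of a triangular(min=a, mode=m, max=b) density.
    mean = (a + m + b) / 3.0
    var = (a * a + m * m + b * b - a * m - a * b - m * b) / 18.0
    return mean, var

def tri_quantile(a, m, b, p):
    # Inverse CDF of the triangular distribution.
    if p <= (m - a) / (b - a):
        return a + math.sqrt(p * (b - a) * (m - a))
    return b - math.sqrt((1.0 - p) * (b - a) * (b - m))

# Aggregate two provinces assuming complete independence: means and
# variances add (the estimates below are invented for illustration).
provinces = [(1.0, 3.0, 8.0), (0.5, 2.0, 5.0)]
agg_mean = sum(tri_mean_var(*p)[0] for p in provinces)
agg_sd = math.sqrt(sum(tri_mean_var(*p)[1] for p in provinces))
print(agg_mean, agg_sd, tri_quantile(1.0, 3.0, 8.0, 0.05))  # F95 of province 1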
Big Data and Predictive Analytics: Applications in the Care of Children.
Suresh, Srinivasan
2016-04-01
Emerging changes in the United States' healthcare delivery model have led to renewed interest in data-driven methods for managing quality of care. Analytics (data plus information) plays a key role in predictive risk assessment, clinical decision support, and various patient throughput measures. This article reviews the application of a pediatric risk score, which is integrated into our hospital's electronic medical record and provides an early warning sign of clinical deterioration. Dashboards, which are part of disease management systems, are a vital tool in peer benchmarking and can help reduce unnecessary variations in care.
Computer-aided controllability assessment of generic manned Space Station concepts
NASA Technical Reports Server (NTRS)
Ferebee, M. J.; Deryder, L. J.; Heck, M. L.
1984-01-01
NASA's Concept Development Group assessment methodology for the on-orbit rigid body controllability characteristics of each generic configuration proposed for the manned space station is presented; the preliminary results obtained represent the first step in the analysis of these eight configurations. Analytical computer models of each configuration were developed by means of the Interactive Design Evaluation of Advanced Spacecraft CAD system, which created three-dimensional geometry models of each configuration to establish dimensional requirements for module connectivity, payload accommodation, and Space Shuttle berthing; mass, center-of-gravity, inertia, and aerodynamic drag areas were then derived. Attention was also given to the preferred flight attitude of each station concept.
NASA Astrophysics Data System (ADS)
Caneses, Juan Francisco; Blackwell, Boyd; Plasma Research Laboratory Team
2013-10-01
In this work we provide an analytical model that allows one to quantitatively assess the RF compensation performance and suitability of the double probe technique for use in RF generated plasma. The model is based on the theory of the self-bias effect as described in Braithwaite's work, which we extend to include the time resolved behavior of floating probes. We provide experimental verification for this model and show that the theory of transient RF self-bias probes and harmonic current detection probes are limiting cases of this extended model. Furthermore, the model shows that the RF compensation is solely dependent on the sheath impedance, the probe's stray capacitance to ground and the RF frequency. In addition, we use these results to implement a double probe system for use in high density helicon plasma where heat loads could potentially damage the intricate components in an RF compensating circuit. Finally we use this model to (1) recommend ways to extend the operational regime of double probes where the plasma conditions would otherwise render them unsuitable and to (2) comment on the use of this model to aid design of RF compensated Langmuir probes.
Analytical Solutions for Rumor Spreading Dynamical Model in a Social Network
NASA Astrophysics Data System (ADS)
Fallahpour, R.; Chakouvari, S.; Askari, H.
2015-03-01
In this paper, the Laplace Adomian decomposition method (LADM) is utilized to evaluate a rumor-spreading model. First, a succinct review is given of the use of analytical methods such as the Adomian decomposition method, the variational iteration method and the homotopy analysis method for epidemic models and biomathematics. A rumor-spreading model incorporating a forgetting mechanism is then formulated and solved using LADM. By means of this method, a general solution is obtained that can be readily employed to assess the rumor model without any computer program. The results obtained for this problem are discussed for different cases and parameters. Furthermore, it is shown that the method is straightforward and fruitful for analyzing equations with complicated terms, such as the rumor model. Comparison with numerical methods reveals that LADM is powerful and accurate for eliciting solutions of this model. It is concluded that this method is appropriate for this problem and can provide researchers a very powerful vehicle for scrutinizing rumor models in diverse kinds of social networks such as Facebook, YouTube, Flickr, LinkedIn and Twitter.
NASA Astrophysics Data System (ADS)
Pereira, A. S. N.; de Streel, G.; Planes, N.; Haond, M.; Giacomini, R.; Flandre, D.; Kilchytska, V.
2017-02-01
The Drain Induced Barrier Lowering (DIBL) behavior in Ultra-Thin Body and Buried oxide (UTBB) transistors is investigated in detail in the temperature range up to 150 °C, for the first time to the best of our knowledge. The analysis is based on experimental data, physical device simulation, compact model (SPICE) simulation and previously published models. Contrary to the MASTAR prediction, experiments reveal a DIBL increase with temperature. Physical device simulations of different thin-film fully-depleted (FD) devices outline the generality of such behavior. SPICE simulations, with the UTSOI DK2.4 model, only partially adhere to experimental trends. Several analytic models available in the literature are assessed for DIBL vs. temperature prediction. Although it is the closest to experiments, Fasarakis' model overestimates the DIBL(T) dependence for the shortest devices and underestimates it for the larger gate lengths frequently used in ultra-low-voltage (ULV) applications. This model is improved in our work by introducing a temperature-dependent inversion charge at threshold. The improved model shows very good agreement with experimental data, with a high gain in precision for the gate lengths under test.
NASA Technical Reports Server (NTRS)
Smith, S. D.; Tevepaugh, J. A.; Penny, M. M.
1975-01-01
The exhaust plumes of the space shuttle solid rocket motors can have a significant effect on the base pressure and base drag of the shuttle vehicle. A parametric analysis was conducted to assess the sensitivity of the initial plume expansion angle of analytical solid rocket motor flow fields to various analytical input parameters and operating conditions. The results of the analysis are presented and conclusions reached regarding the sensitivity of the initial plume expansion angle to each parameter investigated. Operating conditions parametrically varied were chamber pressure, nozzle inlet angle, nozzle throat radius of curvature ratio and propellant particle loading. Empirical particle parameters investigated were mean size, local drag coefficient and local heat transfer coefficient. Sensitivity of the initial plume expansion angle to gas thermochemistry model and local drag coefficient model assumptions were determined.
Hey, Jody; Nielsen, Rasmus
2007-01-01
In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided.
Reaction wheel low-speed compensation using a dither signal
NASA Astrophysics Data System (ADS)
Stetson, John B., Jr.
1993-08-01
A method for improving low-speed reaction wheel performance on a three-axis controlled spacecraft is presented. The method combines a constant amplitude offset with an unbiased, oscillating dither to harmonically linearize rolling solid friction dynamics. The complete nonlinear rolling solid friction dynamics, using an analytic modification to the experimentally verified Dahl solid friction model, were analyzed with the dual-input describing function method to assess the benefits of dither compensation. The modified analytic solid friction model was experimentally verified with a small dc servomotor actuated reaction wheel assembly. Using dither compensation, abrupt static friction disturbances are eliminated and near-linear behavior through zero rate can be achieved. Simulated vehicle response to a wheel rate reversal shows that when the dither and offset compensation is used, elastic modes are not significantly excited, and the uncompensated attitude error reduces by 34:1.
The generation of criteria for selecting analytical tools for landscape management
Marilyn Duffey-Armstrong
1979-01-01
This paper presents an approach to generating criteria for selecting the analytical tools used to assess visual resources for various landscape management tasks. The approach begins by first establishing the overall parameters for the visual assessment task, and follows by defining the primary requirements of the various sets of analytical tools to be used. Finally,...
Groundwater Vulnerability Assessment of the Pingtung Plain in Southern Taiwan.
Liang, Ching-Ping; Jang, Cheng-Shin; Liang, Cheng-Wei; Chen, Jui-Sheng
2016-11-23
In the Pingtung Plain of southern Taiwan, elevated levels of NO₃⁻-N in groundwater have been reported. Therefore, efforts to assess groundwater vulnerability are required as part of the critical steps to prevent and control groundwater pollution. This study performs a groundwater vulnerability assessment of the Pingtung Plain using an improved overlay and index-based DRASTIC model. The improvement of the DRASTIC model is achieved by reassigning the weighting coefficients of the factors in the model with the help of a discriminant analysis statistical method. The analytical results obtained from the improved DRASTIC model provide a reliable prediction of groundwater vulnerability to nitrate pollution and can correctly identify the groundwater protection zones in the Pingtung Plain. Moreover, the results of the sensitivity analysis conducted for the seven parameters in the improved DRASTIC model demonstrate that the aquifer media (A) is the most sensitive factor when the nitrate-N concentration is below 2.5 mg/L. For cases where the nitrate-N concentration is above 2.5 mg/L, the aquifer media (A) and net recharge (R) are the two most important factors.
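For reference, the overlay-and-index arithmetic behind DRASTIC is a weighted sum of the seven factor ratings per grid cell. The weights below are the classic DRASTIC weights; the study's improvement replaces them with coefficients re-derived by discriminant analysis:

WEIGHTS = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

def drastic_index(ratings):
    # ratings: factor letter -> rating (typically 1-10) for one grid cell
    return sum(WEIGHTS[f] * ratings[f] for f in WEIGHTS)

cell = {"D": 7, "R": 6, "A": 8, "S": 5, "T": 9, "I": 4, "C": 6}  # hypothetical
print(drastic_index(cell))  # higher index -> higher vulnerability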
NASA Astrophysics Data System (ADS)
de Saint Jean, C.; Habert, B.; Archier, P.; Noguere, G.; Bernard, D.; Tommasi, J.; Blaise, P.
2010-10-01
In the [eV; MeV] energy range, modelling of neutron-induced reactions is based on parameterized nuclear reaction models. Estimation of covariances on cross sections or on nuclear reaction model parameters is a recurrent puzzle in nuclear data evaluation. Nuclear reactor physicists have called for major advances in assigning proper uncertainties for use in applications. In this paper, mathematical methods developed in the CONRAD code [2] are presented to explain the treatment of all types of uncertainties, including experimental ones (statistical and systematic), and to propagate them to nuclear reaction model parameters or cross sections. The marginalization procedure is presented using analytical or Monte-Carlo solutions. Furthermore, one major drawback identified by reactor physicists is that integral or analytical experiments (reactor mock-ups or simple integral experiments, e.g. ICSBEP, …) were not taken into account early enough in the evaluation process to remove discrepancies. We therefore describe a mathematical framework to properly take this kind of information into account.
Emura, Takeshi; Nakatochi, Masahiro; Matsui, Shigeyuki; Michimae, Hirofumi; Rondeau, Virginie
2017-01-01
Developing a personalized risk prediction model of death is fundamental for improving patient care and touches on the realm of personalized medicine. The increasing availability of genomic information and large-scale meta-analytic data sets for clinicians has motivated the extension of traditional survival prediction based on the Cox proportional hazards model. The aim of our paper is to develop a personalized risk prediction formula for death according to genetic factors and dynamic tumour progression status based on meta-analytic data. To this end, we extend the existing joint frailty-copula model to a model allowing for high-dimensional genetic factors. In addition, we propose a dynamic prediction formula to predict death given tumour progression events possibly occurring after treatment or surgery. For clinical use, we implement the computation software of the prediction formula in the joint.Cox R package. We also develop a tool to validate the performance of the prediction formula by assessing the prediction error. We illustrate the method with the meta-analysis of individual patient data on ovarian cancer patients.
Wu, Xiaoyu; Li, Bin; Ma, Chuanming
2018-05-01
This study assesses the vulnerability of groundwater to pollution in Beihai City, China, in support of groundwater resource protection. The assessment result not only objectively reflects the potential for groundwater contamination but also provides a scientific basis for the planning and utilization of groundwater resources. Building on the DRASTIC model, this study optimizes the parameters, consisting of natural and human factors, and modifies their ratings based on the local environmental conditions of the study area. A weight for each parameter is assigned by the analytic hierarchy process (AHP) to reduce human subjectivity in the vulnerability assessment. The resulting ratings and weights of the modified DRASTIC model (AHP-DRASTLE model) help obtain a more realistic assessment of groundwater vulnerability to contamination. A comparison analysis validates the accuracy and rationality of the AHP-DRASTLE model and shows that it suits the particular conditions of the study area. The new assessment method (AHP-DRASTLE model) can guide other scholars in assessing groundwater vulnerability to contamination. The final vulnerability map for the AHP-DRASTLE model shows four classes: highest (2%), high (29%), low (55%), and lowest (14%). The vulnerability map serves as a guide for decision makers on groundwater resource protection and land use planning at the regional scale and is adapted to a specific area.
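A sketch of the AHP weighting step: the weights are the normalized principal eigenvector of a pairwise-comparison matrix. The 3x3 matrix below is hypothetical; the paper's matrix encodes expert judgments over its DRASTLE factors:

import numpy as np

def ahp_weights(M):
    # Normalized principal eigenvector gives the factor weights; the
    # consistency index (divide by Saaty's random index for the
    # consistency ratio) checks the judgments for coherence.
    vals, vecs = np.linalg.eig(M)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = M.shape[0]
    ci = (vals.real[k] - n) / (n - 1)
    return w, ci

M = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])  # reciprocal matrix of pairwise judgments
w, ci = ahp_weights(M)
print(w, ci)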
A genetic algorithm-based job scheduling model for big data analytics.
Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei
Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework; it implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. Existing work mainly focuses on executing jobs in sequence, which is often inefficient and consumes high energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.
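A toy sketch of the scheduling idea: candidate job orders are evolved against a fitness score that, in the proposed model, would come from the cluster-performance estimation module; the stand-in cost function below is hypothetical:

import random

jobs = [("j1", 4), ("j2", 9), ("j3", 2), ("j4", 7)]  # (name, memory in GB)

def cost(order):
    # Hypothetical stand-in for the estimation module: penalize placing
    # two memory-heavy jobs back to back on a 12 GB node.
    return sum(max(0, order[i][1] + order[i + 1][1] - 12)
               for i in range(len(order) - 1))

def mutate(order):
    a, b = random.sample(range(len(order)), 2)
    child = order[:]
    child[a], child[b] = child[b], child[a]
    return child

pop = [random.sample(jobs, len(jobs)) for _ in range(20)]
for _ in range(100):
    pop.sort(key=cost)                       # elitist selection
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]
print(min(pop, key=cost))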
The use of analytical models in human-computer interface design
NASA Technical Reports Server (NTRS)
Gugerty, Leo
1991-01-01
Some of the many analytical models in human-computer interface design that are currently being developed are described. The usefulness of analytical models for human-computer interface design is evaluated. Can the use of analytical models be recommended to interface designers? The answer, based on the empirical research summarized here, is: not at this time. There are too many unanswered questions concerning the validity of models and their ability to meet the practical needs of design organizations.
Multi-Dimensional Calibration of Impact Dynamic Models
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Annett, Martin S.; Jackson, Karen E.
2011-01-01
NASA Langley, under the Subsonic Rotary Wing Program, recently completed two helicopter tests in support of an in-house effort to study crashworthiness. As part of this effort, work is on-going to investigate model calibration approaches and calibration metrics for impact dynamics models. Model calibration of impact dynamics problems has traditionally assessed model adequacy by comparing time histories from analytical predictions to test data at only a few critical locations. Although this approach provides a direct measure of the model's predictive capability, overall system behavior is only qualitatively assessed using full vehicle animations. In order to understand the spatial and temporal relationships of impact loads as they migrate throughout the structure, a more quantitative approach is needed. In this work, impact shapes derived from simulated time history data are used to recommend sensor placement and to assess model adequacy using time-based metrics and multi-dimensional orthogonality metrics. An approach for model calibration is presented that includes metric definitions, uncertainty bounds, parameter sensitivity, and numerical optimization to estimate parameters to reconcile test with analysis. The process is illustrated using simulated experiment data.
A Comparison of Lifting-Line and CFD Methods with Flight Test Data from a Research Puma Helicopter
NASA Technical Reports Server (NTRS)
Bousman, William G.; Young, Colin; Toulmay, Francois; Gilbert, Neil E.; Strawn, Roger C.; Miller, Judith V.; Maier, Thomas H.; Costes, Michel; Beaumier, Philippe
1996-01-01
Four lifting-line methods were compared with flight test data from a research Puma helicopter and the accuracy assessed over a wide range of flight speeds. Hybrid Computational Fluid Dynamics (CFD) methods were also examined for two high-speed conditions. A parallel analytical effort was performed with the lifting-line methods to assess the effects of modeling assumptions and this provided insight into the adequacy of these methods for load predictions.
Holistic rubric vs. analytic rubric for measuring clinical performance levels in medical students.
Yune, So Jung; Lee, Sang Yeoup; Im, Sun Ju; Kam, Bee Sung; Baek, Sun Yong
2018-06-05
Task-specific checklists, holistic rubrics, and analytic rubrics are often used for performance assessments. We examined what factors evaluators consider important in holistic scoring of clinical performance assessment, and compared the usefulness of applying holistic and analytic rubrics respectively, and of applying analytic rubrics in addition to task-specific checklists based on traditional standards. We compared the usefulness of a holistic rubric versus an analytic rubric in effectively measuring the clinical skill performances of 126 third-year medical students who participated in a clinical performance assessment conducted by Pusan National University School of Medicine. We conducted a questionnaire survey of 37 evaluators who used all three evaluation methods (holistic rubric, analytic rubric, and task-specific checklist) for each student. The relationship between the scores on the three evaluation methods was analyzed using Pearson's correlation. Inter-rater agreement was analyzed by the Kappa index. The effect of holistic and analytic rubric scores on the task-specific checklist score was analyzed using multiple regression analysis. Evaluators perceived accuracy and proficiency to be major factors in objective structured clinical examination evaluation, and history taking and physical examination to be major factors in clinical performance examination evaluation. Holistic rubric scores were highly related to the scores of the task-specific checklist and analytic rubric. Relatively low agreement was found in clinical performance examinations compared to objective structured clinical examinations. Meanwhile, the holistic and analytic rubric scores explained 59.1% of the task-specific checklist score in objective structured clinical examinations and 51.6% in clinical performance examinations. The results show the usefulness of holistic and analytic rubrics in clinical performance assessment, which can be used in conjunction with task-specific checklists for more efficient evaluation.
Numerical Assessment of Rockbursting.
1987-05-27
static equilibrium, nonlinear elasticity, strain-softening material, unstable propagation of pre-existing cracks, and finally surface... structure of LINOS, which is common to most of the large finite element codes; the library of element and material subroutines can be easily expanded... material model subroutines are tested by comparing finite element results with analytical or numerical results derived for hypo-elastic and
ERIC Educational Resources Information Center
Bernard, Robert M.; Abrami, Philip C.; Wade, Anne; Borokhovski, Evgueni; Lou, Yiping
2004-01-01
Simonson, Schlosser and Hanson (1999) argue that a new theory called "equivalency theory" is needed to account for the unique features of the "teleconferencing" (synchronous) model of DE that is prevalent in many North American universities. Based on a comprehensive meta-analysis of the comparative literature of DE (Bernard,…
Numerical convergence improvements for porflow unsaturated flow simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, Greg
2017-08-14
Section 3.6 of SRNL (2016) discusses various PORFLOW code improvements to increase modeling efficiency, in preparation for the next E-Area Performance Assessment (WSRC 2008) revision. This memorandum documents interaction with Analytic & Computational Research, Inc. (http://www.acricfd.com/default.htm) to improve numerical convergence efficiency using PORFLOW version 6.42 for unsaturated flow simulations.
Nondestructive assessment of timber bridges using a vibration-based method
Xiping Wang; James P. Wacker; Robert J. Ross; Brian K. Brashaw
2005-01-01
This paper describes an effort to develop a global dynamic testing technique for evaluating the overall stiffness of timber bridge superstructures. A forced vibration method was used to measure the natural frequency of single-span timber bridges in the laboratory and field. An analytical model based on simple beam theory was proposed to represent the relationship...
ERIC Educational Resources Information Center
Fleming, Jennifer
2015-01-01
An analytic matrix comprised of multiple media literacy teaching and learning principles is conceptualized to examine a model of news literacy developed by journalism educators at Stony Brook University. The multidimensional analysis indicates that news literacy instructors focus on teaching students how to question and assess the veracity of news…
ERIC Educational Resources Information Center
Alkahtani, Saif F.
2012-01-01
The principal aim of the present study was to better guide the Quranic recitation appraisal practice by presenting an application of Generalizability theory and Many-facet Rasch Measurement Model for assessing the dependability and fit of two suggested rubrics. Recitations of 93 students were rated holistically and analytically by 3 independent…
Assessing habitat connectivity for ground-dwelling animals in an urban environment.
Braaker, S; Moretti, M; Boesch, R; Ghazoul, J; Obrist, M K; Bontadina, F
To ensure viable species populations in fragmented landscapes, individuals must be able to move between suitable habitat patches. Despite the increased interest in biodiversity assessment in urban environments, the ecological relevance of habitat connectivity in highly fragmented landscapes remains largely unknown. The first step to understanding the role of habitat connectivity in urban ecology is the challenging task of assessing connectivity in the complex patchwork of contrasting habitats that is found in cities. We developed a data-based framework, minimizing the use of subjective assumptions, to assess habitat connectivity that consists of the following sequential steps: (1) identification of habitat preference based on empirical habitat-use data; (2) derivation of habitat resistance surfaces evaluating various transformation functions; (3) modeling of different connectivity maps with electrical circuit theory (Circuitscape), a method considering all possible pathways across the landscape simultaneously; and (4) identification of the best connectivity map with information-theoretic model selection. We applied this analytical framework to assess habitat connectivity for the European hedgehog Erinaceus europaeus, a model species for ground-dwelling animals, in the city of Zurich, Switzerland, using GPS track points from 40 individuals. The best model revealed spatially explicit connectivity “pinch points,” as well as multiple habitat connections. Cross-validation indicated the general validity of the selected connectivity model. The results show that both habitat connectivity and habitat quality affect the movement of urban hedgehogs (relative importance of the two variables was 19.2% and 80.8%, respectively), and are thus both relevant for predicting urban animal movements. Our study demonstrates that even in the complex habitat patchwork of cities, habitat connectivity plays a major role for ground-dwelling animal movement. Data-based habitat connectivity maps can thus serve as an important tool for city planners to identify habitat corridors and plan appropriate management and conservation measures for urban animals. The analytical framework we describe to model such connectivity maps is generally applicable to different types of habitat-use data and can be adapted to the movement scale of the focal species. It also allows evaluation of the impact of future landscape changes or management scenarios on habitat connectivity in urban landscapes.
NASA Technical Reports Server (NTRS)
Oglebay, J. C.
1977-01-01
A thermal analytic model for a 30-cm engineering model mercury-ion thruster was developed and calibrated using the experimental results of tests of a pre-engineering model 30-cm thruster. A series of tests, performed later, simulated a wide range of thermal environments on an operating 30-cm engineering model thruster, which was instrumented to measure the temperature distribution within it. The modified analytic model is described and analytic and experimental results are compared for various operating conditions. Based on the comparisons, it is concluded that the analytic model can be used as a preliminary design tool to predict thruster steady-state temperature distributions for stage and mission studies and to define the thermal interface between the thruster and other elements of a spacecraft.
Analysis of Advanced Rotorcraft Configurations
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2000-01-01
Advanced rotorcraft configurations are being investigated with the objectives of identifying vehicles that are larger, quieter, and faster than current-generation rotorcraft. A large rotorcraft, carrying perhaps 150 passengers, could do much to alleviate airport capacity limitations, and a quiet rotorcraft is essential for community acceptance of the benefits of VTOL operations. A fast, long-range, long-endurance rotorcraft, notably the tilt-rotor configuration, will improve rotorcraft economics through productivity increases. A major part of the investigation of advanced rotorcraft configurations consists of conducting comprehensive analyses of vehicle behavior for the purpose of assessing vehicle potential and feasibility, as well as to establish the analytical models required to support the vehicle development. The analytical work of FY99 included applications to tilt-rotor aircraft. Tilt Rotor Aeroacoustic Model (TRAM) wind tunnel measurements are being compared with calculations performed by using the comprehensive analysis tool Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics (CAMRAD II). The objective is to establish the wing and wake aerodynamic models that are required for tilt-rotor analysis and design. The TRAM test in the German-Dutch Wind Tunnel (DNW) produced extensive measurements. This is the first test to encompass air loads, performance, and structural load measurements on tilt rotors, as well as acoustic and flow visualization data. The correlation of measurements and calculations includes helicopter-mode operation (performance, air loads, and blade structural loads), hover (performance and air loads), and airplane-mode operation (performance).
Integrated performance and reliability specification for digital avionics systems
NASA Technical Reports Server (NTRS)
Brehm, Eric W.; Goettge, Robert T.
1995-01-01
This paper describes an automated tool for performance and reliability assessment of digital avionics systems, called the Automated Design Tool Set (ADTS). ADTS is based on an integrated approach to design assessment that unifies traditional performance and reliability views of system designs, and that addresses interdependencies between performance and reliability behavior via the exchange of parameters and results between mathematical models of each type. A multi-layer tool set architecture has been developed for ADTS that separates the concerns of system specification, model generation, and model solution. Performance and reliability models are generated automatically as a function of candidate system designs, and model results are expressed within the system specification. The layered approach helps deal with the inherent complexity of the design assessment process, and preserves long-term flexibility to accommodate a wide range of models and solution techniques within the tool set structure. ADTS research and development to date has focused on development of a language for specification of system designs as a basis for performance and reliability evaluation. A model generation and solution framework has also been developed for ADTS, which will ultimately encompass an integrated set of analytic and simulation-based techniques for performance, reliability, and combined design assessment.
Assessing Omitted Confounder Bias in Multilevel Mediation Models.
Tofighi, Davood; Kelley, Ken
2016-01-01
To draw valid inference about an indirect effect in a mediation model, there must be no omitted confounders. No omitted confounders means that there are no common causes of hypothesized causal relationships. When the no-omitted-confounder assumption is violated, inference about indirect effects can be severely biased and the results potentially misleading. Despite the increasing attention to address confounder bias in single-level mediation, this topic has received little attention in the growing area of multilevel mediation analysis. A formidable challenge is that the no-omitted-confounder assumption is untestable. To address this challenge, we first analytically examined the biasing effects of potential violations of this critical assumption in a two-level mediation model with random intercepts and slopes, in which all the variables are measured at Level 1. Our analytic results show that omitting a Level 1 confounder can yield misleading results about key quantities of interest, such as Level 1 and Level 2 indirect effects. Second, we proposed a sensitivity analysis technique to assess the extent to which potential violation of the no-omitted-confounder assumption might invalidate or alter the conclusions about the indirect effects observed. We illustrated the methods using an empirical study and provided computer code so that researchers can implement the methods discussed.
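To make the direction of the bias concrete, here is a hedged single-level simulation; the paper's analytic results concern the two-level random-slopes case, so this sketch only illustrates how omitting a confounder of the M→Y path distorts the estimated indirect effect (all coefficients are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
x = rng.normal(size=n)
u = rng.normal(size=n)                                 # omitted confounder of the M -> Y path
m = 0.5 * x + 0.6 * u + rng.normal(size=n)             # true a-path = 0.5
y = 0.4 * m + 0.3 * x + 0.6 * u + rng.normal(size=n)   # true b-path = 0.4

a_hat = np.polyfit(x, m, 1)[0]                         # a-path: unbiased, u is independent of x
X = np.column_stack([np.ones(n), m, x])                # b-path regression with u omitted
b_hat = np.linalg.lstsq(X, y, rcond=None)[0][1]        # biased upward by the shared u
print(a_hat * b_hat)                                   # well above the true indirect effect 0.2
```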
Sociocultural Behavior Influence Modelling & Assessment: Current Work and Research Frontiers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernard, Michael Lewis
A common problem associated with the effort to better assess potential behaviors of various individuals within different countries is the sheer difficulty of comprehending the dynamic nature of populations, particularly over time and considering feedback effects. This paper discusses a theory-based analytical capability designed to enable analysts to better assess the influence of events on individuals interacting within a country or region. These events can include changes in policy, man-made or natural disasters, migration, war, or other changes in environmental/economic conditions. In addition, this paper describes potential extensions of this type of research to enable more timely and accurate assessments.
Analytic and heuristic processes in the detection and resolution of conflict.
Ferreira, Mário B; Mata, André; Donkin, Christopher; Sherman, Steven J; Ihmels, Max
2016-10-01
Previous research with the ratio-bias task found larger response latencies for conflict trials, where the heuristic- and analytic-based responses are assumed to be in opposition (e.g., choosing between 1/10 and 9/100 ratios of success), than for no-conflict trials, where both processes converge on the same response (e.g., choosing between 1/10 and 11/100). This pattern is consistent with parallel dual-process models, which assume that there is effective, rather than lax, monitoring of the output of heuristic processing. It is, however, unclear why conflict resolution sometimes fails. Ratio-biased choices may increase because of a decline in analytical reasoning (leaving heuristic-based responses unopposed) or because of a rise in heuristic processing (making it more difficult for analytic processes to override the heuristic preferences). Using the process-dissociation procedure, we found that instructions to respond logically and response speed affected analytic (controlled) processing (C), leaving heuristic processing (H) unchanged, whereas the intuitive preference for large numerators (as assessed by responses to equal-ratio trials) affected H but not C. These findings create new challenges for the debate between dual-process and single-process accounts, which are discussed.
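A minimal sketch of the Jacoby-style estimation equations commonly used with this procedure (the study's exact formulation may differ): on no-conflict trials the heuristic answer is correct, so P(correct) = C + (1−C)H; on conflict trials a heuristic answer is an error, so P(error) = (1−C)H. Solving the pair gives C and H directly:

```python
def process_dissociation(p_correct_noconflict, p_error_conflict):
    # no-conflict: P(correct) = C + (1 - C) * H   (heuristic answer is correct)
    # conflict:    P(error)   = (1 - C) * H       (heuristic answer is an error)
    C = p_correct_noconflict - p_error_conflict
    H = p_error_conflict / (1.0 - C)
    return C, H

# e.g. 90% correct on no-conflict trials, 30% heuristic errors on conflict trials
print(process_dissociation(0.90, 0.30))  # -> (0.6, 0.75)
```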
Plasma biomarkers of depressive symptoms in older adults.
Arnold, S E; Xie, S X; Leung, Y-Y; Wang, L-S; Kling, M A; Han, X; Kim, E J; Wolk, D A; Bennett, D A; Chen-Plotkin, A; Grossman, M; Hu, W; Lee, V M-Y; Mackin, R Scott; Trojanowski, J Q; Wilson, R S; Shaw, L M
2012-01-03
The pathophysiology of negative affect states in older adults is complex, and a host of central nervous system and peripheral systemic mechanisms may play primary or contributing roles. We conducted an unbiased analysis of 146 plasma analytes in a multiplex biochemical biomarker study in relation to the number of depressive symptoms endorsed by 566 participants in the Alzheimer's Disease Neuroimaging Initiative (ADNI) at their baseline and 1-year assessments. Analytes that were most highly associated with depressive symptoms included hepatocyte growth factor, insulin polypeptides, pregnancy-associated plasma protein-A and vascular endothelial growth factor. Separate regression models assessed contributions of past history of psychiatric illness, antidepressant or other psychotropic medicine, apolipoprotein E genotype, body mass index, serum glucose and cerebrospinal fluid (CSF) τ and amyloid levels, and none of these values significantly attenuated the main effects of the candidate analyte levels for depressive symptoms score. Ensemble machine learning with Random Forests found good accuracy (~80%) in classifying groups with and without depressive symptoms. These data begin to identify biochemical biomarkers of depressive symptoms in older adults that may be useful in investigations of pathophysiological mechanisms of depression in aging and neurodegenerative dementias and as targets of novel treatment approaches.
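As a rough illustration of the classification step, a Random Forest workflow of the kind described might look like the sketch below; the data are synthetic placeholders, and the actual ADNI features, labels, and hyperparameters are not reproduced here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(566, 146))           # placeholder for the 146 plasma analytes
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # placeholder depressive-symptom indicator

clf = RandomForestClassifier(n_estimators=500, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy
# After clf.fit(X, y), clf.feature_importances_ ranks analytes (e.g. HGF, VEGF).
```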
NASA Astrophysics Data System (ADS)
De Simone, Silvia; Carrera, Jesús; María Gómez Castro, Berta
2016-04-01
Fluid injection into geological formations is required for several engineering operations, e.g. geothermal energy production, hydrocarbon production and storage, CO2 storage, and wastewater disposal. Non-isothermal fluid injection causes alterations of the pressure and temperature fields, which affect the mechanical stability of the reservoir. This coupled thermo-hydro-mechanical behavior has become a matter of special interest because of public concern about induced seismicity. The response is complex, and its evaluation often requires numerical modeling. Nevertheless, analytical solutions are useful in improving our understanding of interactions, identifying the controlling parameters, testing codes, and providing a rapid assessment of the system response to an alteration. We present an easy-to-use solution to the transient advection-conduction heat transfer problem for parallel and radial flow. The solution is then applied to derive analytical expressions for hydraulically and thermally driven displacements and stresses. The solution is verified by comparison with numerical simulations and yields fairly accurate results. It is then used to illustrate some features of the poroelastic and thermoelastic response and, in particular, the sensitivity to the external mechanical constraints and to the reservoir dimension.
NASA Astrophysics Data System (ADS)
Qiu, Zeyang; Liang, Wei; Wang, Xue; Lin, Yang; Zhang, Meng
2017-05-01
As an important part of the national energy supply system, natural gas transmission pipelines can cause serious environmental pollution and loss of life and property in the event of an accident. Third-party damage is one of the most significant causes of natural gas pipeline accidents, so establishing an effective quantitative risk assessment model for third-party damage is very important for reducing the number of pipeline operation accidents. Because third-party damage accidents are diverse, complex, and uncertain, this paper establishes a quantitative risk assessment model for third-party damage based on the Analytic Hierarchy Process (AHP) and Fuzzy Comprehensive Evaluation (FCE). Firstly, the risk sources of third-party damage are identified; then the weight of each factor is determined via an improved AHP; finally, the importance of each factor is calculated with the fuzzy comprehensive evaluation model. The results show that the quantitative risk assessment model is suitable for third-party damage to natural gas pipelines, and improvement measures can be put forward to avoid accidents based on the importance of each factor.
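A minimal sketch of the two computational steps, Saaty-style AHP weights followed by fuzzy comprehensive evaluation; the pairwise-comparison and membership matrices below are illustrative, not the paper's:

```python
import numpy as np

def ahp_weights(A):
    """AHP weights from a pairwise-comparison matrix A: the principal
    right eigenvector, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum()

def fuzzy_evaluation(R, w):
    """Fuzzy comprehensive evaluation: R[i, j] is the membership of risk
    factor i in remark level j; returns the weighted membership vector."""
    return w @ R

# Illustrative numbers only: 3 third-party-damage factors, 3 risk levels.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
R = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
w = ahp_weights(A)
print(fuzzy_evaluation(R, w))
```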
Big data analytics for the Future Circular Collider reliability and availability studies
NASA Astrophysics Data System (ADS)
Begy, Volodimir; Apollonio, Andrea; Gutleber, Johannes; Martin-Marquez, Manuel; Niemi, Arto; Penttinen, Jussi-Pekka; Rogova, Elena; Romero-Marin, Antonio; Sollander, Peter
2017-10-01
Responding to the European Strategy for Particle Physics update 2013, the Future Circular Collider study explores scenarios of circular frontier colliders for the post-LHC era. One branch of the study assesses industrial approaches to model and simulate the reliability and availability of the entire particle collider complex based on the continuous monitoring of CERN’s accelerator complex operation. The modelling is based on an in-depth study of the CERN injector chain and LHC, and is carried out as a cooperative effort with the HL-LHC project. The work so far has revealed that a major challenge is obtaining accelerator monitoring and operational data with sufficient quality, to automate the data quality annotation and calculation of reliability distribution functions for systems, subsystems and components where needed. A flexible data management and analytics environment that permits integrating the heterogeneous data sources, the domain-specific data quality management algorithms and the reliability modelling and simulation suite is a key enabler to complete this accelerator operation study. This paper describes the Big Data infrastructure and analytics ecosystem that has been put in operation at CERN, serving as the foundation on which reliability and availability analysis and simulations can be built. This contribution focuses on data infrastructure and data management aspects and presents case studies chosen for its validation.
Skinner, John P.; Tuomi, Pam A.; Mellish, Jo-Ann E.
2015-01-01
The Steller sea lion, Eumetopias jubatus, has experienced regionally divergent population trends over recent decades. One potential mechanism for this disparity is that local factors cause reduced health and, therefore, reduced survival of individuals. The use of blood parameters to assess sea lion health may help to identify whether malnutrition, disease and stress are important drivers of current trends, but such assessments require species-specific knowledge of how parameters respond to various health challenges. We used principal components analysis to identify which key blood parameters (principal analytes) best described changes in health for temporarily captive juvenile Steller sea lions in known conditions. Generalized additive mixed models were used to estimate the changes in principal analytes with food intake, time in captivity and acute trauma associated with hot-iron branding and transmitter implant surgery. Of the 17 blood parameters examined, physiological changes for juvenile sea lions were best described using the following six principal analytes: red blood cell counts, white blood cell counts, globulin, platelets, glucose and total bilirubin. The white blood cell counts and total bilirubin declined over time in captivity, whereas globulin increased. Elevated red blood cell counts, white blood cell counts and total bilirubin and reduced globulin values were associated with lower food intake. After branding, white blood cell counts were elevated for the first 30 days, while globulin and platelets were elevated for the first 15 days only. After implant surgery, red blood cell counts and globulin remained elevated for 30 days, while white blood cell counts remained elevated during the first 15 days only. Glucose was unassociated with the factors we studied. These results were used to provide expected ranges for principal analytes at different levels of food intake and in response to the physical challenges of branding and implant surgery. These results provide a more detailed reference for future evaluations of health-related assessments. PMID:27293693
NASA Astrophysics Data System (ADS)
Epackachi, Siamak
The seismic performance of rectangular steel-plate concrete (SC) composite shear walls is assessed for application to buildings and mission-critical infrastructure. The SC walls considered in this study were composed of two steel faceplates and infill concrete. The steel faceplates were connected together and to the infill concrete using tie rods and headed studs, respectively. The research focused on the in-plane behavior of flexure- and flexure-shear-critical SC walls. An experimental program was executed in the NEES laboratory at the University at Buffalo and was followed by numerical and analytical studies. In the experimental program, four large-size specimens were tested under displacement-controlled cyclic loading. The design variables considered in the testing program included wall thickness, reinforcement ratio, and slenderness ratio. The aspect ratio (height-to-length) of the four walls was 1.0. Each SC wall was installed on top of a re-usable foundation block. A bolted baseplate to RC foundation connection was used for all four walls. The walls were identified to be flexure- and flexure-shear critical. The progression of damage in the four walls was identical, namely, cracking and crushing of the infill concrete at the toes of the walls, outward buckling and yielding of the steel faceplates near the base of the wall, and tearing of the faceplates at their junctions with the baseplate. A robust finite element model was developed in LS-DYNA for nonlinear cyclic analysis of the flexure- and flexure-shear-critical SC walls. The DYNA model was validated using the results of the cyclic tests of the four SC walls. The validated and benchmarked models were then used to conduct a parametric study, which investigated the effects of wall aspect ratio, reinforcement ratio, wall thickness, and uniaxial concrete compressive strength on the in-plane response of SC walls. Simplified analytical models, suitable for preliminary analysis and design of SC walls, were developed, validated, and implemented in MATLAB. Analytical models were proposed for monotonic and cyclic simulations of the in-plane response of flexure- and flexure-shear-critical SC wall piers. The model for cyclic analysis was developed by modifying the Ibarra-Krawinkler Pinching (IKP) model. The analytical models were verified using the results of the parametric study and validated using the test data.
Herrera-May, Agustín L.; Aguilera-Cortés, Luz A.; Plascencia-Mora, Hector; Rodríguez-Morales, Ángel L.; Lu, Jian
2011-01-01
Multilayered microresonators commonly use sensitive coating or piezoelectric layers for detection of mass and gas. Most of these microresonators have a variable cross-section that complicates the prediction of their fundamental resonant frequency (generally of the bending mode) through conventional analytical models. In this paper, we present an analytical model to estimate the first resonant frequency and deflection curve of single-clamped multilayered microresonators with variable cross-section. The analytical model is obtained using the Rayleigh and Macaulay methods, as well as the Euler-Bernoulli beam theory. Our model is applied to two multilayered microresonators with piezoelectric excitation reported in the literature. Both microresonators are composed of layers of seven different materials. The results of our analytical model agree very well with those obtained from finite element models (FEMs) and experimental data. Our analytical model can be used to determine the suitable dimensions of the microresonator's layers in order to obtain a microresonator that operates at a resonant frequency necessary for a particular application. PMID:22164071
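For orientation, the constant-cross-section baseline that such a model generalizes can be written compactly: with an effective flexural rigidity from the transformed section, the first bending frequency follows from Euler-Bernoulli theory. The sketch below covers the uniform-section case only, so it omits the variable-cross-section capability that is the paper's contribution; the layer values are illustrative:

```python
import numpy as np

def cantilever_f1(layers, width, length):
    """First bending frequency (Hz) of a uniform multilayer cantilever via
    Euler-Bernoulli theory; layers = [(E, rho, t), ...] from bottom to top."""
    # Neutral-axis position from modulus-weighted first moments.
    EA = sum(E * width * t for E, rho, t in layers)
    z, EAz = 0.0, 0.0
    for E, rho, t in layers:
        EAz += E * width * t * (z + t / 2)
        z += t
    zn = EAz / EA
    # Effective flexural rigidity and mass per unit length.
    EI, mu, z = 0.0, 0.0, 0.0
    for E, rho, t in layers:
        zc = z + t / 2
        EI += E * (width * t**3 / 12 + width * t * (zc - zn) ** 2)
        mu += rho * width * t
        z += t
    lam1 = 1.8751  # first eigenvalue of the clamped-free beam
    return (lam1**2 / (2 * np.pi)) * np.sqrt(EI / (mu * length**4))

# Illustrative three-layer stack (SiO2 / Si / piezoelectric), SI units.
layers = [(70e9, 2200, 1e-6), (170e9, 2330, 5e-6), (100e9, 7500, 0.5e-6)]
print(cantilever_f1(layers, width=50e-6, length=400e-6))  # ~tens of kHz
```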
NASA Astrophysics Data System (ADS)
D'Ulivo, Alessandro
2016-05-01
A reaction model describing the reactivity of metal and semimetal species with aqueous tetrahydridoborate (THB) has been drawn up taking into account the mechanism of chemical vapor generation (CVG) of hydrides, recent evidence on the mechanism of interference and formation of byproducts in arsane generation, and other evidence in the fields of the synthesis of nanoparticles and the catalytic hydrolysis of THB by metal nanoparticles. The new "non-analytical" reaction model is of more general validity than the previously described "analytical" reaction model for CVG. The non-analytical model is valid for the reaction of a single analyte with THB and for conditions approaching those typically encountered in the synthesis of nanoparticles and macroprecipitates. It reduces to the previously proposed analytical model under conditions typically employed in CVG for trace analysis (analyte below the μM level, borane/analyte ≫ 10³ mol/mol, no interference). The non-analytical reaction model is not able to explain all the interference effects observed in CVG, which can be achieved only by assuming interaction among the species of the reaction pathways of different analytical substrates. The reunification of CVG, the synthesis of nanoparticles by aqueous THB, and the catalytic hydrolysis of THB within a common frame contributes to the rationalization of the complex reactivity of aqueous THB with metal and semimetal species.
Modeling of the Global Water Cycle - Analytical Models
Yongqiang Liu; Roni Avissar
2005-01-01
Both numerical and analytical models of coupled atmosphere and its underlying ground components (land, ocean, ice) are useful tools for modeling the global and regional water cycle. Unlike complex three-dimensional climate models, which need very large computing resources and involve a large number of complicated interactions often difficult to interpret, analytical...
NASA Astrophysics Data System (ADS)
Barnsley, Lester C.; Carugo, Dario; Aron, Miles; Stride, Eleanor
2017-03-01
The aim of this study was to characterize the behaviour of superparamagnetic particles in magnetic drug targeting (MDT) schemes. A 3-dimensional mathematical model was developed, based on the analytical derivation of the trajectory of a magnetized particle suspended inside a fluid channel carrying laminar flow and in the vicinity of an external source of magnetic force. Semi-analytical expressions to quantify the proportion of captured particles, and their relative accumulation (concentration) as a function of distance along the wall of the channel were also derived. These were expressed in terms of a non-dimensional ratio of the relevant physical and physiological parameters corresponding to a given MDT protocol. The ability of the analytical model to assess magnetic targeting schemes was tested against numerical simulations of particle trajectories. The semi-analytical expressions were found to provide good first-order approximations for the performance of MDT systems in which the magnetic force is relatively constant over a large spatial range. The numerical model was then used to test the suitability of a range of different designs of permanent magnet assemblies for MDT. The results indicated that magnetic arrays that emit a strong magnetic force that varies rapidly over a confined spatial range are the most suitable for concentrating magnetic particles in a localized region. By comparison, commonly used magnet geometries such as button magnets and linear Halbach arrays result in distributions of accumulated particles that are less efficient for delivery. The trajectories predicted by the numerical model were verified experimentally by acoustically focusing magnetic microbeads flowing in a glass capillary channel, and optically tracking their path past a high field gradient Halbach array.
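A hedged sketch of the trajectory calculation underlying such models: in the quasi-static (inertia-free) limit, the particle velocity is the fluid velocity plus the magnetophoretic drift obtained by balancing the magnetic force against Stokes drag. All parameter values below are illustrative, and the magnetic force is taken as constant, which the study notes is representative of only some magnet designs:

```python
import numpy as np
from scipy.integrate import solve_ivp

mu, a = 1e-3, 1e-6                 # viscosity (Pa s), particle radius (m) - illustrative
h, u_max = 1e-3, 0.01              # channel half-height (m), centreline speed (m/s)
Fm = 5e-12                         # magnetic force toward the lower wall (N), taken constant

def rhs(t, p):
    x, y = p
    ux = u_max * (1.0 - (y / h) ** 2)       # parabolic laminar profile
    uy = -Fm / (6.0 * np.pi * mu * a)       # Stokes-drag balance, inertia neglected
    return [ux, uy]

def hit_wall(t, p):                         # zero when the particle reaches y = -h
    return p[1] + h
hit_wall.terminal = True

sol = solve_ivp(rhs, [0.0, 100.0], [0.0, 0.5 * h], events=hit_wall, max_step=0.05)
captured = sol.t_events[0].size > 0
print("captured" if captured else "escaped", "at x =", sol.y[0, -1], "m")
```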
Xie, Haijian; Yan, Huaxiang; Feng, Shijin; Wang, Qiao; Chen, Peixiong
2016-10-01
A one-dimensional mathematical model is developed to investigate contaminant transport in a landfill composite liner system, considering the coupled effects of consolidation, diffusion, and degradation. The first- and second-type bottom boundary conditions are used to derive the steady-state and quasi-steady-state analytical solutions. The concentration profiles obtained by the proposed analytical solution are in good agreement with those obtained by laboratory tests. The bottom concentration and flux of the soil liners can be greatly reduced when the degradation effect and changing porosity are considered. For the steady-state case, the bottom flux and concentration with t1/2 = 10 years can be 2.8 and 5.5 times lower, respectively, than those with t1/2 = 100 years. The bottom concentration and flux of the soil liners are also greatly reduced when the coefficient of volume compressibility decreases. For the quasi-steady state with t1/2 = 10 years, the bottom flux and concentration with mv = 0.02/MPa can be 17.4 and 21 times lower than with mv = 0.5/MPa. This may be due to the fact that the true fluid velocity induced by consolidation is greater for the case with a high coefficient of volume compressibility. The bottom flux for the case with a single compacted clay liner (CCL) can be 1.5 times larger than that for the case with a GMB/CCL liner when diffusion and consolidation are considered for DCM. The proposed analytical model can be used for verification of more complicated numerical models and for assessment of the coupled effect of diffusion, consolidation, and degradation on contaminant transport in landfill liner systems.
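The steady-state limit of such a model reduces, for a single homogeneous liner with a first-type bottom boundary, to a two-point boundary value problem with an exponential closed form. A simplified sketch (no consolidation coupling or composite layering; parameter values are illustrative):

```python
import numpy as np

def steady_profile(z, D, v, lam, c0, L):
    """Steady state of D*c'' - v*c' - lam*c = 0 with c(0) = c0 and c(L) = 0
    (first-type bottom boundary); D: dispersion coefficient, v: seepage
    velocity, lam: first-order degradation constant."""
    disc = np.sqrt(v**2 + 4.0 * D * lam)
    r1, r2 = (v + disc) / (2.0 * D), (v - disc) / (2.0 * D)
    M = np.array([[1.0, 1.0], [np.exp(r1 * L), np.exp(r2 * L)]])
    A, B = np.linalg.solve(M, np.array([c0, 0.0]))      # integration constants
    return A * np.exp(r1 * z) + B * np.exp(r2 * z)

z = np.linspace(0.0, 2.0, 51)                # 2 m liner, illustrative values
lam10 = np.log(2) / (10 * 3.15e7)            # t1/2 = 10 years, in 1/s
print(steady_profile(z, D=1e-9, v=1e-10, lam=lam10, c0=1.0, L=2.0)[25])
```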
Automated workflows for modelling chemical fate, kinetics and toxicity.
Sala Benito, J V; Paini, Alicia; Richarz, Andrea-Nicole; Meinl, Thorsten; Berthold, Michael R; Cronin, Mark T D; Worth, Andrew P
2017-12-01
Automation is universal in today's society, from operating equipment such as machinery, in factory processes, to self-parking automobile systems. While these examples show the efficiency and effectiveness of automated mechanical processes, automated procedures that support the chemical risk assessment process are still in their infancy. Future human safety assessments will rely increasingly on the use of automated models, such as physiologically based kinetic (PBK) and dynamic models and the virtual cell based assay (VCBA). These biologically-based models will be coupled with chemistry-based prediction models that also automate the generation of key input parameters such as physicochemical properties. The development of automated software tools is an important step in harmonising and expediting the chemical safety assessment process. In this study, we illustrate how the KNIME Analytics Platform can be used to provide a user-friendly graphical interface for these biokinetic models, such as PBK models and VCBA, which simulate the fate of chemicals in vivo within the body and in in vitro test systems, respectively. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Modelling shoreline evolution in the vicinity of a groyne and a river
NASA Astrophysics Data System (ADS)
Valsamidis, Antonios; Reeve, Dominic E.
2017-01-01
Analytical solutions to the equations governing shoreline evolution are well-known and have value both as pedagogical tools and for conceptual design. Nevertheless, solutions have been restricted to a fairly narrow class of conditions with limited applicability to real-life situations. We present a new analytical solution for a widely encountered situation where a groyne is constructed close to a river to control sediment movement. The solution, which employs Laplace transforms, has the advantage that a solution for time-varying conditions may be constructed from the solution for constant conditions by means of the Heaviside procedure. Solutions are presented for various combinations of wave conditions and sediment supply/removal by the river. An innovation introduced in this work is the capability to provide an analytical assessment of the accretion or erosion caused near the groyne due to its proximity to the river which may act either as a source or a sink of sediment material.
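The configuration can be mimicked numerically with the one-line (Pelnard-Considère) model, in which the shoreline position obeys a diffusion equation, the groyne imposes a shoreline gradient set by the wave angle, and the river acts as a point source or sink. A finite-difference sketch follows (parameter values and sign conventions are illustrative; the paper instead solves this analytically with Laplace transforms):

```python
import numpy as np

eps = 0.002                         # shoreline diffusivity (m^2/s), illustrative
alpha = np.radians(8.0)             # breaking-wave angle at the groyne
dx, dt, nx = 10.0, 200.0, 200       # 2 km of coast; explicit scheme is stable here
y = np.zeros(nx)                    # shoreline position relative to the initial line
q_river, ir = 1e-5, 100             # river source strength (m/s of beach) and location

for _ in range(20_000):             # ~46 days of morphological time
    d2y = np.zeros(nx)
    d2y[1:-1] = (y[2:] - 2.0 * y[1:-1] + y[:-2]) / dx**2
    y[1:-1] += dt * eps * d2y[1:-1]
    y[ir] += dt * q_river           # river as a point source (negative = sink)
    y[0] = y[1] + dx * np.tan(alpha)  # groyne blocks longshore transport
    y[-1] = y[-2]                   # undisturbed far field
print(y[:5])                        # accretion fillet building against the groyne
```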
On-line soft sensing in upstream bioprocessing.
Randek, Judit; Mandenius, Carl-Fredrik
2018-02-01
This review provides an overview and a critical discussion of novel possibilities of applying soft sensors for on-line monitoring and control of industrial bioprocesses. Focus is on bio-product formation in the upstream process but also the integration with other parts of the process is addressed. The term soft sensor is used for the combination of analytical hardware data (from sensors, analytical devices, instruments and actuators) with mathematical models that create new real-time information about the process. In particular, the review assesses these possibilities from an industrial perspective, including sensor performance, information value and production economy. The capabilities of existing analytical on-line techniques are scrutinized in view of their usefulness in soft sensor setups and in relation to typical needs in bioprocessing in general. The review concludes with specific recommendations for further development of soft sensors for the monitoring and control of upstream bioprocessing.
Space Station Freedom Data Assessment Study
NASA Technical Reports Server (NTRS)
Johnson, Anngienetta R.; Deskevich, Joseph
1990-01-01
The SSF Data Assessment Study was initiated to identify payload and operations data requirements to be supported in the Space Station era. To initiate the study, payload requirements from the projected SSF user community were obtained utilizing an electronic questionnaire. The results of the questionnaire were incorporated in a personal-computer-compatible database used for mission scheduling and end-to-end communications analyses. This paper discusses data flow paths and associated latencies, communications bottlenecks, resource needs versus availability, payload scheduling 'warning flags' and payload data loading requirements for each major milestone in the Space Station buildup sequence. This paper also presents the statistical and analytical assessments produced using the database, an experiment scheduling program, and a Space Station unique end-to-end simulation model. The modeling concepts and simulation methodologies presented in this paper provide a foundation for forecasting communication requirements and identifying modeling tools to be used in the SSF Tactical Operations Planning (TOP) process.
ERIC Educational Resources Information Center
OECD Publishing, 2017
2017-01-01
What is important for citizens to know and be able to do? The OECD Programme for International Student Assessment (PISA) seeks to answer that question through the most comprehensive and rigorous international assessment of student knowledge and skills. The PISA 2015 Assessment and Analytical Framework presents the conceptual foundations of the…
Functional Data Analysis for Dynamical System Identification of Behavioral Processes
Trail, Jessica B.; Collins, Linda M.; Rivera, Daniel E.; Li, Runze; Piper, Megan E.; Baker, Timothy B.
2014-01-01
Efficient new technology has made it straightforward for behavioral scientists to collect anywhere from several dozen to several thousand dense, repeated measurements on one or more time-varying variables. These intensive longitudinal data (ILD) are ideal for examining complex change over time, but present new challenges that illustrate the need for more advanced analytic methods. For example, in ILD the temporal spacing of observations may be irregular, and individuals may be sampled at different times. Also, it is important to assess both how the outcome changes over time and the variation between participants' time-varying processes to make inferences about a particular intervention's effectiveness within the population of interest. The methods presented in this article integrate two innovative ILD analytic techniques: functional data analysis and dynamical systems modeling. An empirical application is presented using data from a smoking cessation clinical trial. Study participants provided 42 daily assessments of pre-quit and post-quit withdrawal symptoms. Regression splines were used to approximate smooth functions of craving and negative affect and to estimate the variables' derivatives for each participant. We then modeled the dynamics of nicotine craving using standard input-output dynamical systems models. These models provide a more detailed characterization of the post-quit craving process than do traditional longitudinal models, including information regarding the type, magnitude, and speed of the response to an input. The results, in conjunction with standard engineering control theory techniques, could potentially be used by tobacco researchers to develop a more effective smoking intervention. PMID:24079929
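The two-stage recipe (spline smoothing with derivative estimation, then a first-order input-output fit) can be sketched as follows; the data, smoothing level, and model order below are illustrative rather than the study's:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

t = np.arange(42.0)                               # 42 daily assessments
rng = np.random.default_rng(3)
u = np.clip(np.sin(t / 6.0) + 1.5 + rng.normal(0, 0.2, 42), 0, None)       # negative affect (input)
y = 0.25 * np.convolve(u, np.exp(-np.arange(42) / 4.0), mode="full")[:42]  # craving (output)

# Functional data analysis step: smooth, then differentiate the smooth curve.
sy = UnivariateSpline(t, y, k=4, s=1.0)
dydt = sy.derivative()(t)

# Dynamical systems step: fit tau * dy/dt = K*u - y by linear regression.
X = np.column_stack([np.ones(42), u, sy(t)])
_, b_u, b_y = np.linalg.lstsq(X, dydt, rcond=None)[0]
tau, K = -1.0 / b_y, -b_u / b_y
print(f"time constant ~ {tau:.1f} days, steady-state gain ~ {K:.2f}")
```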
On the Use of the Beta Distribution in Probabilistic Resource Assessments
Olea, R.A.
2011-01-01
The triangular distribution is a popular choice when it comes to modeling bounded continuous random variables. Its wide acceptance derives mostly from its simple analytic properties and the ease with which modelers can specify its three parameters through the extremes and the mode. On the negative side, hardly any real process follows a triangular distribution, which from the outset puts at a disadvantage any model employing triangular distributions. At a time when numerical techniques such as the Monte Carlo method are displacing analytic approaches in stochastic resource assessments, easy specification remains the most attractive characteristic of the triangular distribution. The beta distribution is another continuous distribution defined within a finite interval offering wider flexibility in style of variation, thus allowing consideration of models in which the random variables closely follow the observed or expected styles of variation. Despite its more complex definition, generation of values following a beta distribution is as straightforward as generating values following a triangular distribution, leaving the selection of parameters as the main impediment to practically considering beta distributions. This contribution intends to promote the acceptance of the beta distribution by explaining its properties and offering several suggestions to facilitate the specification of its two shape parameters. In general, given the same distributional parameters, use of the beta distributions in stochastic modeling may yield significantly different results, yet better estimates, than the triangular distribution. © 2011 International Association for Mathematical Geology (outside the USA).
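One practical way to ease the specification burden is to set the shape parameters from the mode and a concentration κ = α + β, as in PERT-style practice; the helper below is a sketch of that idea (the function name and default κ are assumptions, not the paper's prescription):

```python
import numpy as np

def beta_from_mode(lo, hi, mode, kappa=6.0):
    """Shape parameters of a beta distribution on [lo, hi] with a given mode.
    kappa = alpha + beta controls spread (kappa = 6 mimics PERT practice);
    uses mode = (alpha - 1) / (kappa - 2) on the unit interval."""
    m = (mode - lo) / (hi - lo)
    alpha = 1.0 + m * (kappa - 2.0)
    return alpha, kappa - alpha

alpha, beta = beta_from_mode(10.0, 100.0, 30.0)
rng = np.random.default_rng(0)
samples = 10.0 + 90.0 * rng.beta(alpha, beta, size=100_000)  # Monte Carlo draws
print(alpha, beta, samples.mean())
```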
Basic research for the geodynamics program
NASA Technical Reports Server (NTRS)
1991-01-01
The mathematical models of space very long base interferometry (VLBI) observables suitable for least squares covariance analysis were derived and estimability problems inherent in the space VLBI system were explored, including a detailed rank defect analysis and sensitivity analysis. An important aim is to carry out a comparative analysis of the mathematical models of the ground-based VLBI and space VLBI observables in order to describe the background in detail. Computer programs were developed in order to check the relations, assess errors, and analyze sensitivity. In order to investigate the estimability of different geodetic and geodynamic parameters from the space VLBI observables, the mathematical models for time delay and time delay rate observables of space VLBI were analytically derived along with the partial derivatives with respect to the parameters. Rank defect analysis was carried out both by analytical and numerical testing of linear dependencies between the columns of the normal matrix thus formed. Definite conclusions were formed about the rank defects in the system.
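The numerical side of such a rank-defect analysis amounts to counting near-zero singular values of the design matrix of partial derivatives, along these lines (toy matrix, not VLBI partials):

```python
import numpy as np

def rank_defect(J, tol=1e-10):
    """Numerical counterpart of the analytical rank-defect analysis: the
    number of near-zero singular values of the design matrix J (columns =
    partials of the observables w.r.t. the estimated parameters) gives the
    rank defect of the normal matrix N = J.T @ J."""
    s = np.linalg.svd(J, compute_uv=False)
    return int(np.sum(s < tol * s.max()))

# Toy example: the third column equals the first minus the second, so one
# parameter combination is not estimable from these observables.
J = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, -1.0],
              [2.0, 1.0, 1.0],
              [1.0, 3.0, -2.0]])
print(rank_defect(J))   # -> 1
```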
WRAP-RIB antenna technology development
NASA Technical Reports Server (NTRS)
Freeland, R. E.; Garcia, N. F.; Iwamoto, H.
1985-01-01
The wrap-rib deployable antenna concept development is based on a combination of hardware development and testing along with extensive supporting analysis. The proof-of-concept hardware models are large so that they address the same basic problems of design, fabrication, assembly, and test as the full-scale systems, which were selected to be 100 meters at the beginning of the program. The hardware evaluation program consists of functional performance tests, design verification tests, and analytical model verification tests. Functional testing consists of kinematic deployment, mesh management, and verification of mechanical packaging efficiencies. Design verification consists of rib contour precision measurement, rib cross-section variation evaluation, rib materials characterization, and manufacturing imperfections assessment. Analytical model verification and refinement include mesh stiffness measurement, rib static and dynamic testing, mass measurement, and rib cross-section characterization. This concept was considered for a number of potential applications, including mobile communications, VLBI, and aircraft surveillance. In fact, baseline system configurations were developed by JPL, using the appropriate wrap-rib antenna, for all three classes of applications.
NASA Technical Reports Server (NTRS)
Massman, William J.
1987-01-01
The semianalytical model outlined in a previous study (Massman, 1987) to describe momentum exchange between the atmosphere and vegetated surfaces is extended to include the exchange of heat. The methods employed are based on one-dimensional turbulent diffusivities, and use analytical solutions to the steady-state diffusion equation. The model is used to assess the influence that the canopy foliage structure and density, the wind profile structure within the canopy, and the shelter factor can have upon the inverse surface Stanton number (kB^-1), as well as to explore the consequences of introducing a scalar displacement height which can be different from the momentum displacement height. In general, the triangular foliage area density function gives results which agree more closely with observations than that for constant foliage area density. The intended application of this work is for parameterizing the bulk aerodynamic resistances for heat and momentum exchange for use within large-scale models of plant-atmosphere exchanges.
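For reference, the standard neutral-stratification bulk-transfer forms assumed in this kind of analysis are shown below; the notation is generic rather than necessarily Massman's, and allowing a distinct scalar displacement height d_h (as the paper explores) changes the second logarithm:

```latex
% Standard bulk-transfer relations (neutral stratification; notation assumed):
\[
  kB^{-1} = \ln\frac{z_{0m}}{z_{0h}}, \qquad
  r_H = \frac{\ln\!\left(\frac{z-d}{z_{0m}}\right)\,
              \ln\!\left(\frac{z-d_h}{z_{0h}}\right)}{k^{2}\,u(z)}
\]
% z_{0m}, z_{0h}: momentum and heat roughness lengths; d, d_h: momentum and
% scalar displacement heights; k: von Karman constant; u(z): wind speed;
% r_H: bulk aerodynamic resistance to heat exchange.
```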
Theoretical and computational analyses of LNG evaporator
NASA Astrophysics Data System (ADS)
Chidambaram, Palani Kumar; Jo, Yang Myung; Kim, Heuy Dong
2017-04-01
Theoretical and numerical analyses of the fluid flow and heat transfer inside an LNG evaporator are conducted in this work. Methane is used instead of LNG as the operating fluid, because methane constitutes over 80% of natural gas. The analytical calculations are performed using simple mass and energy balance equations and are made to assess the pressure and temperature variations in the steam tube. Multiphase numerical simulations are performed by solving the governing equations (basic flow equations of continuity, momentum and energy) in a portion of the evaporator domain consisting of a single steam pipe. The flow equations are solved along with equations of species transport. Multiphase modeling is incorporated using the VOF method. Liquid methane is the primary phase; it vaporizes into the secondary phase, gaseous methane. Steam is another secondary phase, which flows through the heating coils. Turbulence is modeled by a two-equation turbulence model. Both the theoretical and numerical predictions are seen to match well with each other. Further parametric studies are planned based on the current research.
NASA Astrophysics Data System (ADS)
Wang, D.; Cui, Y.
2015-12-01
The objectives of this paper are to validate the applicability of a multi-band quasi-analytical algorithm (QAA) for retrieving the absorption coefficients of optically active constituents in turbid coastal waters, and to further improve the retrieval using a proposed semi-analytical model (SAA). Unlike the QAA model, in which ap(531) and ag(531) are derived from the empirical retrieval results of a(531) and a(551), the SAA model derives ap(531) and ag(531) semi-analytically. The two models are calibrated and evaluated against datasets taken from 19 independent cruises in the West Florida Shelf in 1999-2003, provided by SeaBASS. The results indicate that the SAA model produces a superior performance to the QAA model in absorption retrieval. Using the SAA model to retrieve the absorption coefficients of optically active constituents from the West Florida Shelf decreases the random uncertainty of estimation by >23.05% relative to the QAA model. This study demonstrates the potential of the SAA model for estimating the absorption coefficients of optically active constituents even in turbid coastal waters. Keywords: remote sensing; coastal water; absorption coefficient; semi-analytical model
Directivity analysis of meander-line-coil EMATs with a wholly analytical method.
Xie, Yuedong; Liu, Zenghua; Yin, Liyuan; Wu, Jiande; Deng, Peng; Yin, Wuliang
2017-01-01
This paper presents a simulation and experimental study of the radiation pattern of a meander-line-coil EMAT. A wholly analytical method, which involves the coupling of two models, an analytical EM model and an analytical UT model, has been developed to build EMAT models and analyse the beam directivity of the Rayleigh waves. For a specific sensor configuration, the Lorentz forces are calculated using the EM analytical method, which is adapted from the classic Deeds and Dodd solution. The calculated Lorentz force densities are imported into an analytical ultrasonic model as driving point sources, which produce the Rayleigh waves within a layered medium. The effect of the length of the meander-line coil on the beam directivity of the Rayleigh waves is analysed quantitatively and verified experimentally. Copyright © 2016 Elsevier B.V. All rights reserved.
Automated Predictive Big Data Analytics Using Ontology Based Semantics.
Nural, Mustafa V; Cotterell, Michael E; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A
2015-10-01
Predictive analytics in the big data era is taking on an ever increasingly important role. Issues related to choice on modeling technique, estimation procedure (or algorithm) and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models as well as the rationale for the techniques and models selected. To formally describe the modeling techniques, models and results, we developed the Analytics Ontology that supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics is used as a testbed for evaluating the use of semantic technology.
Modeling an internal gear pump
NASA Astrophysics Data System (ADS)
Chen, Zongbin; Xu, Rongwu; He, Lin; Liao, Jian
2018-05-01
Considering the nature and characteristics of construction waste piles, this paper analyzes the factors affecting the stability of construction waste pile slopes and establishes a system of assessment indexes for slope failure risk. Based on the basic principles and methods of fuzzy mathematics, the factor set and the remark set were established. The membership grades of continuous factor indexes were determined using the "ridge-shaped distribution" function, while those of discrete factor indexes were determined by the Delphi method. For the factor weights, the subjective weights were determined by the Analytic Hierarchy Process (AHP) and the objective weights by the entropy weight method, and a distance function was introduced to determine the combination coefficient. On this basis, the paper establishes a fuzzy comprehensive assessment model of slope failure risk for construction waste piles and assesses pile slopes in the two dimensions of hazard and vulnerability, taking the root mean square of the hazard and vulnerability assessment results as the final result. The paper then uses a construction waste pile slope as a worked example, assesses the risks at the four stages of a landfill, verifies the assessment model, and analyzes the slope's failure risks and preventive measures against a slide.
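A sketch of the objective-weighting and combination steps: the entropy weight method, plus a simple convex combination standing in for the distance-function-derived coefficient (the index matrix, AHP weights, and theta are illustrative assumptions):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: X[i, j] > 0 is the value of index j for sample
    slope i; indexes that vary more across slopes get larger objective weights."""
    P = X / X.sum(axis=0)                                   # column-normalize
    e = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(X.shape[0])
    d = 1.0 - e                                             # diversification degree
    return d / d.sum()

def combine(w_ahp, w_ent, theta=0.5):
    """Convex combination of subjective (AHP) and objective (entropy) weights;
    theta stands in for the distance-function-derived combination coefficient."""
    w = theta * w_ahp + (1.0 - theta) * w_ent
    return w / w.sum()

X = np.array([[0.7, 0.2, 0.9],      # illustrative index matrix, 3 slopes x 3 indexes
              [0.4, 0.6, 0.8],
              [0.9, 0.3, 0.4]])
print(combine(np.array([0.5, 0.3, 0.2]), entropy_weights(X)))
```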
Huhn, Carolin; Pyell, Ute
2008-07-11
It is investigated whether those relationships derived within an optimization scheme developed previously to optimize separations in micellar electrokinetic chromatography can be used to model effective electrophoretic mobilities of analytes strongly differing in their properties (polarity and type of interaction with the pseudostationary phase). The modeling is based on two parameter sets: (i) carbon number equivalents or octanol-water partition coefficients as analyte descriptors and (ii) four coefficients describing properties of the separation electrolyte (based on retention data for a homologous series of alkyl phenyl ketones used as reference analytes). The applicability of the proposed model is validated comparing experimental and calculated effective electrophoretic mobilities. The results demonstrate that the model can effectively be used to predict effective electrophoretic mobilities of neutral analytes from the determined carbon number equivalents or from octanol-water partition coefficients provided that the solvation parameters of the analytes of interest are similar to those of the reference analytes.
Schneider, Christopher; Newhauser, Wayne; Farah, Jad
2015-05-18
Exposure to stray neutrons increases the risk of second cancer development after proton therapy. Previously reported analytical models of this exposure were difficult to configure and had not been investigated below 100 MeV proton energy. The purposes of this study were to test an analytical model of neutron equivalent dose per therapeutic absorbed dose at 75 MeV and to improve the model by reducing the number of configuration parameters and making it continuous in proton energy from 100 to 250 MeV. To develop the analytical model, we used previously published H/D values in water from Monte Carlo simulations of a general-purpose beamline for proton energies from 100 to 250 MeV. We also configured and tested the model on in-air neutron equivalent doses measured for a 75 MeV ocular beamline. Predicted H/D values from the analytical model and Monte Carlo agreed well from 100 to 250 MeV (10% average difference). Predicted H/D values from the analytical model also agreed well with measurements at 75 MeV (15% average difference). The results indicate that analytical models can give fast, reliable calculations of neutron exposure after proton therapy. This ability is absent in treatment planning systems but vital to second cancer risk estimation.
Barbin, Douglas Fernandes; Valous, Nektarios A; Dias, Adriana Passos; Camisa, Jaqueline; Hirooka, Elisa Yoko; Yamashita, Fabio
2015-11-01
There is increasing interest in the use of polysaccharides and proteins for the production of biodegradable films. Visible and near-infrared (VIS-NIR) spectroscopy is a reliable analytical tool for objective analyses of biological sample attributes. The objective here is to investigate the potential of VIS-NIR spectroscopy as a process analytical technology for compositional characterization of biodegradable materials and correlation to their mechanical properties. Biofilms were produced by single-screw extrusion with different combinations of polybutylene adipate-co-terephthalate, whole oat flour, glycerol, magnesium stearate, and citric acid. Spectral data were recorded in the range of 400-2498 nm at 2 nm intervals. Partial least squares regression was used to investigate the correlation between spectral information and mechanical properties. Results show that the spectral information is influenced by the major constituent components, as the samples cluster according to polybutylene adipate-co-terephthalate content. Regression models using the spectral information as predictors of tensile properties achieved satisfactory results, with calibration coefficients of determination (R²c) of 0.83, 0.88, and 0.92 for elongation, tensile strength, and Young's modulus, respectively. The results corroborate the correlation of NIR spectra with tensile properties, showing that NIR spectroscopy has potential as a rapid analytical technology for non-destructive assessment of the mechanical properties of the films. Copyright © 2015 Elsevier B.V. All rights reserved.
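A minimal sketch of the chemometric step, assuming synthetic spectra on the stated 400-2498 nm grid; the real spectra, preprocessing, and component count are not reproduced here:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
wavelengths = np.arange(400, 2500, 2)               # 400-2498 nm at 2 nm steps
X = rng.normal(size=(60, wavelengths.size))         # placeholder VIS-NIR spectra
y = X[:, 500:520].mean(axis=1) + rng.normal(0, 0.05, 60)   # placeholder tensile property

pls = PLSRegression(n_components=8)                 # components chosen by CV in practice
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
r2 = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"cross-validated R2 = {r2:.2f}")
```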
Peraman, R.; Bhadraya, K.; Reddy, Y. Padmanabha; Reddy, C. Surayaprakash; Lokesh, T.
2015-01-01
In view of current regulatory requirements for analytical method development, a reversed-phase high performance liquid chromatographic method for routine analysis of etofenamate in dosage form has been optimized using an analytical quality-by-design approach. Unlike the routine approach, the present study was initiated with an understanding of the quality target product profile, the analytical target profile, and a risk assessment for the method variables that affect the method response. A liquid chromatography system equipped with a C18 column (250×4.6 mm, 5 μ), a binary pump, and a photodiode array detector was used in this work. The experiments were planned using a central composite design, which could save time, reagents, and other resources. Sigma Tech software was used to plan and analyse the experimental observations and to obtain the quadratic process model. The process model was used to predict retention time. The retention times predicted from the contour diagrams were verified experimentally and agreed with the observed data. The optimized method used a flow rate of 1.2 ml/min and a mobile phase of methanol and 0.2% triethylamine in water (85:15, v/v), with pH adjusted to 6.5. The method was validated and verified for targeted method performance, robustness, and system suitability during method transfer. PMID:26997704
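The design and model-fitting steps can be sketched with a coded two-factor central composite design and an ordinary least-squares fit of the quadratic process model; factor identities, levels, and responses below are illustrative:

```python
import numpy as np

# Coded central composite design for two method variables (e.g. flow rate and
# percent organic modifier): factorial, axial (alpha = sqrt(2)), and center points.
a = np.sqrt(2)
D = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-a, 0], [a, 0], [0, -a], [0, a],
              [0, 0], [0, 0], [0, 0]])
rt = np.array([6.2, 4.1, 7.0, 4.8, 6.8, 4.0, 5.2, 6.1, 5.4, 5.5, 5.3])  # hypothetical retention times

# Fit rt = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2.
x1, x2 = D[:, 0], D[:, 1]
M = np.column_stack([np.ones(len(D)), x1, x2, x1 * x2, x1**2, x2**2])
b, *_ = np.linalg.lstsq(M, rt, rcond=None)
print(np.round(b, 3))   # coefficients used to predict retention over the design space
```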
NASA Astrophysics Data System (ADS)
Setty, Srinivas J.; Cefola, Paul J.; Montenbruck, Oliver; Fiedler, Hauke
2016-05-01
Catalog maintenance for Space Situational Awareness (SSA) demands an accurate and computationally lean orbit propagation and orbit determination technique to cope with the ever increasing number of observed space objects. As an alternative to established numerical and analytical methods, we investigate the accuracy and computational load of the Draper Semi-analytical Satellite Theory (DSST). The standalone version of the DSST was enhanced with additional perturbation models to improve its recovery of short periodic motion. The accuracy of DSST is, for the first time, compared to a numerical propagator with fidelity force models for a comprehensive grid of low, medium, and high altitude orbits with varying eccentricity and different inclinations. Furthermore, the run-time of both propagators is compared as a function of propagation arc, output step size and gravity field order to assess its performance for a full range of relevant use cases. For use in orbit determination, a robust performance of DSST is demonstrated even in the case of sparse observations, which is most sensitive to mismodeled short periodic perturbations. Overall, DSST is shown to exhibit adequate accuracy at favorable computational speed for the full set of orbits that need to be considered in space surveillance. Along with the inherent benefits of a semi-analytical orbit representation, DSST provides an attractive alternative to the more common numerical orbit propagation techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Shujiang; Kline, Keith L; Nair, S. Surendran
A global energy crop productivity model that provides geospatially explicit quantitative details on biomass potential and factors affecting sustainability would be useful, but does not exist now. This study describes a modeling platform capable of meeting many challenges associated with global-scale agro-ecosystem modeling. We designed an analytical framework for bioenergy crops consisting of six major components: (i) standardized natural resources datasets, (ii) global field-trial data and crop management practices, (iii) simulation units and management scenarios, (iv) model calibration and validation, (v) high-performance computing (HPC) simulation, and (vi) simulation output processing and analysis. The HPC-Environmental Policy Integrated Climate (HPC-EPIC) model simulated a perennial bioenergy crop, switchgrass (Panicum virgatum L.), estimating feedstock production potentials and effects across the globe. This modeling platform can assess soil C sequestration, net greenhouse gas (GHG) emissions, nonpoint source pollution (e.g., nutrient and pesticide loss), and energy exchange with the atmosphere. It can be expanded to include additional bioenergy crops (e.g., miscanthus, energy cane, and agave) and food crops under different management scenarios. The platform and switchgrass field-trial dataset are available to support global analysis of biomass feedstock production potential and corresponding metrics of sustainability.
Assessment of Galileo modal test results for mathematical model verification
NASA Technical Reports Server (NTRS)
Trubert, M.
1984-01-01
The modal test program for the Galileo Spacecraft was completed at the Jet Propulsion Laboratory in the summer of 1983. The multiple sine dwell method was used for the baseline test. The Galileo Spacecraft is a rather complex 2433 kg structure made of a central core on which seven major appendages representing 30 percent of the total mass are attached, resulting in a high modal density structure. The test revealed a strong nonlinearity in several major modes. This nonlinearity discovered in the course of the test necessitated running additional tests at the unusually high response levels of up to about 21 g. The high levels of response were required to obtain a model verification valid at the level of loads for which the spacecraft was designed. Because of the high modal density and the nonlinearity, correlation between the dynamic mathematical model and the test results becomes a difficult task. Significant changes in the pre-test analytical model are necessary to establish confidence in the upgraded analytical model used for the final load verification. This verification, using a test verified model, is required by NASA to fly the Galileo Spacecraft on the Shuttle/Centaur launch vehicle in 1986.
Comparison of particle tracking algorithms in commercial CFD packages: sedimentation and diffusion.
Robinson, Risa J; Snyder, Pam; Oldham, Michael J
2007-05-01
Computational fluid dynamics (CFD) modeling software has enabled microdosimetry patterns of inhaled toxins and toxicants to be predicted and visualized, and is being used in inhalation toxicology and risk assessment. These predicted microdosimetry patterns in airway structures are derived from predicted airflow patterns within these airways and the particle tracking algorithms used in CFD software packages. Although these commercial CFD codes have been tested for accuracy under various conditions, they have not been well tested for respiratory flows in general, nor has the accuracy of their particle tracking algorithms been well studied. In this study, three software packages, Fluent Discrete Phase Model (DPM), Fluent Fine Particle Model (FPM), and ANSYS CFX, were evaluated. Sedimentation and diffusion were each isolated in a straight-tube geometry and tested for accuracy. A range of flow rates corresponding to adult low activity (minute ventilation = 10 L/min) through heavy exertion (minute ventilation = 60 L/min) was tested by varying the range of dimensionless diffusion and sedimentation parameters found using the Weibel symmetric 23-generation lung morphology. Numerical results for fully developed parabolic and uniform (slip) profiles were compared, respectively, to the analytical sedimentation solutions of Pich (1972) and Yu (1977). The Schum and Yeh (1980) equations for sedimentation were also compared. Numerical results for diffusional deposition were compared to the analytical solutions of Ingham (1975) for parabolic and uniform profiles. Significant differences were found among the various CFD software packages and between numerical and analytical solutions. Therefore, it is prudent to validate CFD predictions against analytical solutions in idealized geometry before tackling the complex geometries of the respiratory tract.
Flow over Canopies with Complex Morphologies
NASA Astrophysics Data System (ADS)
Rubol, S.; Ling, B.; Battiato, I.
2017-12-01
Quantifying and predicting how submerged vegetation affects the velocity profile of riverine systems is crucial in ecohydraulics to properly assess the water quality and ecological functions of rivers. The state of the art includes a plethora of models to study flow and transport over submerged canopies; however, most of them are validated against data collected in flume experiments with rigid cylinders. With the objective of investigating the capability of a simple analytical solution for vegetated flow to reproduce and predict the velocity profile of complex-shaped flexible canopies, we use the flow model proposed by Battiato and Rubol [WRR 2013] as the analytical approximation of the mean velocity profile above and within the canopy layer. This model has the advantages of (i) treating the canopy layer as a porous medium whose geometrical properties are associated with a macroscopic effective permeability, and (ii) using input parameters that can be estimated by remote sensing techniques, such as the heights of the water level and the canopy. The analytical expressions for the average velocity profile and the discharge are tested against data collected across a wide range of canopy morphologies commonly encountered in riverine systems, such as grasses, woody vegetation, and bushes. Results indicate good agreement between the analytical expressions and the data for both simple and complex plant geometries. The rescaled low-submergence velocities in the canopy layer followed the same scaling found in arrays of rigid cylinders. In addition, for the dataset analyzed, the Darcy friction factor scaled with the inverse of the bulk Reynolds number multiplied by the ratio of the fluid to turbulent viscosity.
An efficient approach for treating composition-dependent diffusion within organic particles
O'Meara, Simon; Topping, David O.; Zaveri, Rahul A.; ...
2017-09-07
Mounting evidence demonstrates that under certain conditions the rate of component partitioning between the gas and particle phase in atmospheric organic aerosol is limited by particle-phase diffusion. To date, however, particle-phase diffusion has not been incorporated into regional atmospheric models. An analytical rather than numerical solution to diffusion through organic particulate matter is desirable because of its comparatively small computational expense in regional models. Current analytical models assume diffusion to be independent of composition and therefore use a constant diffusion coefficient. To realistically model diffusion, however, it should be composition-dependent (e.g. due to the partitioning of components that plasticise, vitrify or solidify). This study assesses the modelling capability of an analytical solution to diffusion corrected to account for composition dependence against a numerical solution. Results show reasonable agreement when the gas-phase saturation ratio of a partitioning component is constant and particle-phase diffusion limits partitioning rate (<10% discrepancy in estimated radius change). However, when the saturation ratio of the partitioning component varies, a generally applicable correction cannot be found, indicating that existing methodologies are incapable of deriving a general solution. Until such time as a general solution is found, caution should be given to sensitivity studies that assume constant diffusivity. Furthermore, the correction was implemented in the polydisperse, multi-process Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) and is used to illustrate how the evolution of number size distribution may be accelerated by condensation of a plasticising component onto viscous organic particles.
Valente, Matthew J.; MacKinnon, David P.
2017-01-01
Models to assess mediation in the pretest-posttest control group design are understudied in the behavioral sciences even though it is the design of choice for evaluating experimental manipulations. The paper provides analytical comparisons of the four models most commonly used to estimate the mediated effect in this design: Analysis of Covariance (ANCOVA), difference score, residualized change score, and cross-sectional model. Each of these models is fitted using a Latent Change Score specification, and a simulation study assessed bias, Type I error, power, and confidence interval coverage of the four models. All but the ANCOVA model make stringent assumptions about the stability and cross-lagged relations of the mediator and outcome that may not be plausible in real-world applications. When these assumptions do not hold, Type I error and statistical power results suggest that only the ANCOVA model has good performance. The four models are applied to an empirical example. PMID:28845097
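As a toy illustration of the ANCOVA specification compared above, the sketch below recovers the mediated effect a*b from simulated pretest-posttest data with ordinary regressions; it deliberately uses plain OLS rather than the paper's Latent Change Score framework, and all variable names and effect sizes are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
x = rng.integers(0, 2, n)                        # treatment assignment
m1 = rng.normal(size=n)                          # mediator pretest
m2 = 0.5 * m1 + 0.4 * x + rng.normal(size=n)     # mediator posttest (a = 0.4)
y1 = rng.normal(size=n)                          # outcome pretest
y2 = 0.5 * y1 + 0.3 * m2 + rng.normal(size=n)    # outcome posttest (b = 0.3)
df = pd.DataFrame(dict(x=x, m1=m1, m2=m2, y1=y1, y2=y2))

# ANCOVA specification: posttests adjusted for pretests
a = smf.ols("m2 ~ x + m1", df).fit().params["x"]
b = smf.ols("y2 ~ m2 + x + y1 + m1", df).fit().params["m2"]
print("mediated effect a*b ~", a * b)            # should be near 0.4 * 0.3
```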
Handbook of Analytical Methods for Textile Composites
NASA Technical Reports Server (NTRS)
Cox, Brian N.; Flanagan, Gerry
1997-01-01
The purpose of this handbook is to introduce models and computer codes for predicting the properties of textile composites. The handbook includes several models for predicting the stress-strain response all the way to ultimate failure; methods for assessing work of fracture and notch sensitivity; and design rules for avoiding certain critical mechanisms of failure, such as delamination, by proper textile design. The following textiles are treated: 2D woven, braided, and knitted/stitched laminates, as well as 3D interlock weaves and braids.
System model of the processing of heterogeneous sensory information in a robotized complex
NASA Astrophysics Data System (ADS)
Nikolaev, V.; Titov, V.; Syryamkin, V.
2018-05-01
The scope and types of robotic systems that contain subsystems of the form "heterogeneous sensor data processing subsystem" are analyzed. On the basis of queuing theory, a model is developed that takes into account the uneven intensity of the information flow from the sensors to the information processing subsystem. An analytical solution is derived to assess the relationship between subsystem performance and flow unevenness, and the solution is studied over the range of parameter values of practical interest.
Econometric model for age- and population-dependent radiation exposures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandquist, G.M.; Slaughter, D.M.; Rogers, V.C.
1991-01-01
The economic impact associated with ionizing radiation exposures in a given human population depends on numerous factors, including the individual's mean economic status as a function of age, the age distribution of the population, the future life expectancy at each age, and the latency period for the occurrence of radiation-induced health effects. A simple mathematical model has been developed that provides an analytical methodology for estimating the societal econometrics associated with radiation exposures, so that radiation-induced health effects can be assessed and compared for economic evaluation.
Motivations for play in online games.
Yee, Nick
2006-12-01
An empirical model of player motivations in online games provides the foundation to understand and assess how players differ from one another and how motivations of play relate to age, gender, usage patterns, and in-game behaviors. In the current study, a factor analytic approach was used to create an empirical model of player motivations. The analysis revealed 10 motivation subcomponents that grouped into three overarching components (achievement, social, and immersion). Relationships between motivations and demographic variables (age, gender, and usage patterns) are also presented.
Clinical reasoning of junior doctors in emergency medicine: a grounded theory study.
Adams, E; Goyder, C; Heneghan, C; Brand, L; Ajjawi, R
2017-02-01
Emergency medicine (EM) has a high case turnover and acuity, making it a demanding clinical reasoning domain, especially for junior doctors who lack experience. We aimed to better understand their clinical reasoning using dual cognition as a guiding theory. EM junior doctors were recruited from six hospitals in the south of England to participate in semi-structured interviews (n=20) and focus groups (n=17) based on recall of two recent cases. Transcripts were analysed using a grounded theory approach to identify themes and to develop a model of junior doctors' clinical reasoning in EM. Within cases, clinical reasoning occurred in three phases. In phase 1 (case framing), initial case cues and first impressions were predominantly intuitive, but were checked by analytical thought and determined the urgency of clinical assessment. In phase 2 (evolving reasoning), non-analytical single-cue and pattern recognitions were common and were subsequently validated by specific analytical strategies such as the use of red flags. In phase 3 (ongoing uncertainty), analytical self-monitoring and reassurance strategies were used to precipitate a decision regarding discharge. We found a constant dialectic between intuitive and analytical cognition throughout the reasoning process. Our model of clinical reasoning by EM junior doctors illustrates the specific contextual manifestations of the dual cognition theory. Distinct diagnostic strategies are identified, and together these give EM learners and educators a framework and vocabulary for discussion and learning about clinical reasoning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanfilippo, Antonio P.
2010-05-23
The increasing asymmetric nature of threats to the security, health and sustainable growth of our society requires that anticipatory reasoning become an everyday activity. Currently, the use of anticipatory reasoning is hindered by the lack of systematic methods for combining knowledge- and evidence-based models, integrating modeling algorithms, and assessing model validity, accuracy and utility. The workshop addresses these gaps with the intent of fostering the creation of a community of interest on model integration and evaluation that may serve as an aggregation point for existing efforts and a launch pad for new approaches.
Fourier decomposition of segmented magnets with radial magnetization in surface-mounted PM machines
NASA Astrophysics Data System (ADS)
Tiang, Tow Leong; Ishak, Dahaman; Lim, Chee Peng
2017-11-01
This paper presents a generic field model of the radial magnetization (RM) pattern produced by multiple segmented magnets per rotor pole in surface-mounted permanent magnet (PM) machines. The magnetization vectors for either odd or even numbers of magnet blocks per pole are described. Fourier decomposition is first employed to derive the field model, which is then integrated with the exact 2D analytical subdomain method to predict the magnetic field distributions and other global motor quantities. For assessment purposes, a 12-slot/8-pole surface-mounted PM motor with two segmented magnets per pole is investigated using the proposed field model. The electromagnetic performances of the PM machine are predicted by the proposed field model, including the magnetic field distributions, airgap flux density, phase back-EMF, cogging torque, and output torque under both open-circuit and on-load operating conditions. The analytical results are evaluated and compared with those obtained from both 2D and 3D finite element analyses (FEA), with excellent agreement.
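The first step of such a model, Fourier decomposition of a segmented magnetization waveform, can be sketched numerically. The pole count below matches the paper's 12-slot/8-pole example; the arc ratio and segment layout are illustrative assumptions, not the paper's geometry.

```python
import numpy as np

p = 4            # pole pairs (8-pole machine, as in the paper's example)
segments = 2     # magnet blocks per pole
alpha = 0.8      # magnet arc to pole-pitch ratio (hypothetical)
Mr = 1.0         # remanent magnetization amplitude (normalized)

theta = np.linspace(0.0, 2.0 * np.pi, 200000, endpoint=False)
pole_pitch = np.pi / p
pos = np.mod(theta, 2.0 * pole_pitch)            # angle within one pole pair
sign = np.where(pos < pole_pitch, 1.0, -1.0)     # N pole then S pole
u = np.mod(pos, pole_pitch) / pole_pitch         # 0..1 across one pole
seg_pos = np.mod(u * segments, 1.0)              # 0..1 across one segment slot
gap = (1.0 - alpha) / 2.0                        # half-gap per slot (slot units)
M = np.where((seg_pos > gap) & (seg_pos < 1.0 - gap), sign * Mr, 0.0)

# Harmonic amplitudes of order n*p, the terms a subdomain field model uses;
# even multiples of p vanish by the waveform's half-wave antisymmetry.
for n in (1, 3, 5, 7):
    k = n * p
    ak = 2.0 / len(theta) * np.sum(M * np.cos(k * theta))
    bk = 2.0 / len(theta) * np.sum(M * np.sin(k * theta))
    print(f"harmonic {n}p: amplitude = {np.hypot(ak, bk):.4f}")
```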
Semianalytical solutions for transport in aquifer and fractured clay matrix system
NASA Astrophysics Data System (ADS)
Huang, Junqi; Goltz, Mark N.
2015-09-01
A three-dimensional mathematical model that describes transport of a contaminant in a horizontal aquifer with simultaneous diffusion into a fractured clay formation is proposed. A group of semianalytical solutions is derived based on specific initial and boundary conditions as well as various source functions. The analytical model solutions are evaluated by numerical inverse Laplace transformation and analytical inverse Fourier transformation. The model solutions can be used to study fate and transport in a three-dimensional spatial domain in which a nonaqueous phase liquid exists as a pool atop a fractured low-permeability clay layer. The nonaqueous phase liquid gradually dissolves into the groundwater flowing past the pool, while simultaneously diffusing into the fractured clay formation below the aquifer. Mass transfer of the contaminant into the clay formation is demonstrated to be significantly enhanced by the existence of the fractures, even though the volume of fractures is relatively small compared to the volume of the clay matrix. The model solution is a useful tool in assessing contaminant attenuation processes in a confined aquifer underlain by a fractured clay formation.
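The abstract does not name the numerical Laplace inversion scheme; the Gaver-Stehfest algorithm is a common choice for smooth solutions of this kind, so a minimal sketch of it follows (the test transform pair is ours, not from the paper).

```python
import math

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace-domain function F(s).
    N must be even; 10-16 is typical for smooth, non-oscillatory solutions."""
    ln2 = math.log(2.0)
    total = 0.0
    for k in range(1, N + 1):
        Vk = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            Vk += (j ** (N // 2) * math.factorial(2 * j)
                   / (math.factorial(N // 2 - j) * math.factorial(j)
                      * math.factorial(j - 1) * math.factorial(k - j)
                      * math.factorial(2 * j - k)))
        Vk *= (-1) ** (k + N // 2)
        total += Vk * F(k * ln2 / t)
    return ln2 / t * total

# Sanity check against a known pair: L{exp(-t)} = 1/(s+1)
print(stehfest_invert(lambda s: 1.0 / (s + 1.0), t=0.5), math.exp(-0.5))
```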
ERIC Educational Resources Information Center
Gan, Zhengdong
2012-01-01
This study, which is part of a large-scale study of using objective measures to validate assessment rating scales and assessment tasks in a high-profile school-based assessment initiative in Hong Kong, examined how grammatical complexity measures relate to task type and analytic evaluations of students' speaking proficiency in a classroom-based…
Probability theory versus simulation of petroleum potential in play analysis
Crovelli, R.A.
1987-01-01
An analytic probabilistic methodology for resource appraisal of undiscovered oil and gas resources in play analysis is presented. This play-analysis methodology is a geostochastic system for petroleum resource appraisal in explored as well as frontier areas. An objective was to replace an existing Monte Carlo simulation method in order to increase the efficiency of the appraisal process. Underlying the two methods is a single geologic model which considers both the uncertainty of the presence of the assessed hydrocarbon and its amount if present. The results of the model are resource estimates of crude oil, nonassociated gas, dissolved gas, and gas for a geologic play in terms of probability distributions. The analytic method is based upon conditional probability theory and a closed form solution of all means and standard deviations, along with the probabilities of occurrence. © 1987 J.C. Baltzer A.G., Scientific Publishing Company.
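The closed-form moments such a geostochastic system uses follow from a presence/amount decomposition. A minimal sketch, assuming the play resource is a Bernoulli presence indicator times an independent conditional amount (the formula is standard probability; the example numbers are hypothetical):

```python
import math

def play_moments(p_occurrence, cond_mean, cond_sd):
    """Unconditional mean/sd of a play's resource X = B * Y, where
    B ~ Bernoulli(p) is hydrocarbon presence and Y is the amount if present:
    E[X] = p*mu,  Var[X] = p*sigma^2 + p*(1-p)*mu^2."""
    mean = p_occurrence * cond_mean
    var = (p_occurrence * cond_sd**2
           + p_occurrence * (1.0 - p_occurrence) * cond_mean**2)
    return mean, math.sqrt(var)

# Hypothetical play: 30% chance of occurrence, 120 +/- 80 MMbbl if present
print(play_moments(0.30, 120.0, 80.0))   # -> (36.0, ~70.3)
```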
Riemannian geometry of Hamiltonian chaos: hints for a general theory.
Cerruti-Sola, Monica; Ciraolo, Guido; Franzosi, Roberto; Pettini, Marco
2008-10-01
We aim at assessing the validity limits of some simplifying hypotheses that, within a Riemannian geometric framework, have provided an explanation of the origin of Hamiltonian chaos and have made it possible to develop a method of analytically computing the largest Lyapunov exponent of Hamiltonian systems with many degrees of freedom. Therefore, numerical hypothesis testing has been performed for the Fermi-Pasta-Ulam beta model and for a chain of coupled rotators. These models, for which analytic computations of the largest Lyapunov exponents have been carried out in the mentioned Riemannian geometric framework, appear as paradigmatic examples to unveil the reason why the main hypothesis of quasi-isotropy of the mechanical manifolds sometimes breaks down. The breakdown is expected whenever the topology of the mechanical manifolds is nontrivial. This is an important step forward in view of developing a geometric theory of Hamiltonian chaos of general validity.
Evaluation of a Singular Value Decomposition Approach for Impact Dynamic Data Correlation
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Lyle, Karen H.; Lessard, Wendy B.
2003-01-01
Impact dynamic tests are used in the automobile and aircraft industries to assess survivability of occupants during crash, to demonstrate adequacy of the design, and to gain federal certification. Although there is no substitute for experimental tests, analytical models are often developed and used to study alternate test conditions, to conduct trade-off studies, and to improve designs. To validate results from analytical predictions, test and analysis results must be compared to determine the model adequacy. The mathematical approach evaluated in this paper decomposes observed time responses into dominant deformation shapes and their corresponding contributions to the measured response. To correlate results, orthogonality of test and analysis shapes is used as a criterion. Data from an impact test of a composite fuselage are used and compared to finite element predictions. In this example, the impact response was decomposed into multiple shapes, but only two dominant shapes explained over 85% of the measured response.
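A minimal sketch of this decompose-and-compare idea: SVD extracts dominant shapes from a sensors-by-time response matrix, and a MAC-like orthogonality matrix compares test and analysis shapes. The data below are synthetic stand-ins, not the fuselage measurements.

```python
import numpy as np

# Synthetic stand-in for test and FE impact responses: three decaying
# vibration histories mixed over 24 sensors.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 0.1, 2000)
hist = np.vstack([np.sin(2 * np.pi * f * t) * np.exp(-d * t)
                  for f, d in ((40, 20), (90, 30), (150, 40))])
shapes = rng.normal(size=(24, 3))
test = shapes @ np.diag([10.0, 4.0, 1.0]) @ hist
fea = test + 0.05 * test.std() * rng.normal(size=test.shape)

def dominant_shapes(X, energy=0.90):
    """SVD of a sensors-by-time matrix; keep the shapes that together
    explain `energy` of the measured response energy."""
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(frac, energy)) + 1
    return U[:, :k]

Ut, Ua = dominant_shapes(test), dominant_shapes(fea)
k = min(Ut.shape[1], Ua.shape[1])
# Cross-orthogonality of test vs. analysis shapes: near-identity = good match
print(np.round(np.abs(Ut[:, :k].T @ Ua[:, :k]), 2))
```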
Analytic network process model for sustainable lean and green manufacturing performance indicator
NASA Astrophysics Data System (ADS)
Aminuddin, Adam Shariff Adli; Nawawi, Mohd Kamal Mohd; Mohamed, Nik Mohd Zuki Nik
2014-09-01
Sustainable manufacturing is regarded as the most complex manufacturing paradigm to date, as it holds the widest scope of requirements. In addition, its three major pillars of economy, environment, and society, though distinct, have some overlap among their elements. Even though the concept of sustainability is not new, the development of the performance indicator still needs a lot of improvement due to its multifaceted nature, which requires an integrated approach. This paper proposes the best combination of criteria toward forming a robust sustainable manufacturing performance indicator via the Analytic Network Process (ANP). The integrated lean, green, and sustainable ANP model can be used to comprehend the complex decision system of sustainability assessment. The findings show that green manufacturing is more sustainable than lean manufacturing. They also illustrate that procurement practice is the most important criterion in the sustainable manufacturing performance indicator.
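The computational core of ANP is the limit supermatrix: a column-stochastic matrix of interdependence weights is raised to powers until its columns converge to the global priorities. A minimal sketch with a hypothetical four-criterion weighted supermatrix (the values are invented):

```python
import numpy as np

# Hypothetical weighted supermatrix over 4 criteria (columns sum to 1)
W = np.array([
    [0.00, 0.40, 0.30, 0.25],
    [0.50, 0.00, 0.40, 0.25],
    [0.30, 0.30, 0.00, 0.50],
    [0.20, 0.30, 0.30, 0.00],
])

# Limit supermatrix: for an irreducible, aperiodic column-stochastic matrix,
# high powers converge; each column then holds the global ANP priorities.
L = np.linalg.matrix_power(W, 200)
print(np.round(L[:, 0], 3))
```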
The case for visual analytics of arsenic concentrations in foods.
Johnson, Matilda O; Cohly, Hari H P; Isokpehi, Raphael D; Awofolu, Omotayo R
2010-05-01
Arsenic is a naturally occurring toxic metal and its presence in food could be a potential risk to the health of both humans and animals. Prolonged ingestion of arsenic contaminated water may result in manifestations of toxicity in all systems of the body. Visual Analytics is a multidisciplinary field that is defined as the science of analytical reasoning facilitated by interactive visual interfaces. The concentrations of arsenic vary in foods, making it impractical and impossible to provide a regulatory limit for each food. This review article presents a case for the use of visual analytics approaches to provide comparative assessment of arsenic in various foods. The topics covered include (i) metabolism of arsenic in the human body; (ii) arsenic concentrations in various foods; (iii) factors affecting arsenic uptake in plants; (iv) introduction to visual analytics; and (v) benefits of visual analytics for comparative assessment of arsenic concentration in foods. Visual analytics can provide an information superstructure of arsenic in various foods to permit insightful comparative risk assessment of the diverse and continually expanding data on arsenic in food groups in the context of country of study or origin, year of study, method of analysis and arsenic species.
Schottky-contact plasmonic rectenna for biosensing
NASA Astrophysics Data System (ADS)
Alavirad, Mohammad; Siadat Mousavi, Saba; Roy, Langis; Berini, Pierre
2013-10-01
We propose a plasmonic gold nanodipole array on silicon, forming a Schottky contact and covered by water. The behavior of this array under normal excitation has been extensively investigated; trends have been identified and confirmed by identifying the mode propagating in the nanodipoles and its properties. This device can be used to detect infrared radiation below the bandgap energy of the substrate via the internal photoelectric effect (IPE). We also estimate its responsivity and detection limit. Finally, we assess the potential of the structure for bulk and surface (bio)chemical sensing. Based on the modal results, an analytical model is proposed to estimate the sensitivity of the device. Results show good agreement between the numerical and analytical interpretations.
Analytical method for thermal stress analysis of plasma facing materials
NASA Astrophysics Data System (ADS)
You, J. H.; Bolt, H.
2001-10-01
The thermo-mechanical response of plasma facing materials (PFMs) to heat loads from the fusion plasma is one of the crucial issues in fusion technology. In this work, a fully analytical description is presented of the thermal stress distribution expected to occur in armour tiles of plasma facing components under typical high heat flux (HHF) loads. The method of stress superposition is applied considering the temperature gradient and thermal expansion mismatch. Several combinations of PFMs and heat sink metals are analysed and compared. In the framework of the present theoretical model, plastic flow and the effect of residual stress can be quantitatively assessed. Possible failure features are discussed.
An Assessment of the State-of-the-Art in Multidisciplinary Aeromechanical Analyses
2008-01-01
monolithic formulations. In summary, for aerospace structures, partitioned formulations provide fundamental advantages over fully coupled ones, in addition...important frequencies of local analysis directly to global analysis using detailed modeling. Performed judiciously, based on a fundamental understanding of...in 2000 has comprehensively described the problem, and reviewed the status of fundamental understanding, experimental data, and analytical
ERIC Educational Resources Information Center
Fujimoto, Kayo; Unger, Jennifer B.; Valente, Thomas W.
2012-01-01
Using a network analytic framework, this study introduces a new method to measure peer influence based on adolescents' affiliations or 2-mode social network data. Exposure based on affiliations is referred to as the "affiliation exposure model." This study demonstrates the methodology using data on young adolescent smoking being influenced by…
Nondestructive assessment of single-span timber bridges using a vibration- based method
Xiping Wang; James P. Wacker; Angus M. Morison; John W. Forsman; John R. Erickson; Robert J. Ross
2005-01-01
This paper describes an effort to develop a global dynamic testing technique for evaluating the overall stiffness of timber bridge superstructures. A forced vibration method was used to measure the natural frequency of single-span timber bridges in the laboratory and field. An analytical model based on simple beam theory was proposed to represent the relationship...
ERIC Educational Resources Information Center
Micceri, Theodore; Brigman, Leellen; Spatig, Robert
2009-01-01
An extensive, internally cross-validated analytical study using nested (within academic disciplines) Multilevel Modeling (MLM) on 4,560 students identified functional criteria for defining high school curriculum rigor and further determined which measures could best be used to help guide decision making for marginal applicants. The key outcome…
ERIC Educational Resources Information Center
Zilberberg, Anna; Finney, Sara J.; Marsh, Kimberly R.; Anderson, Robin D.
2014-01-01
Given worldwide prevalence of low-stakes testing for monitoring educational quality and students' progress through school (e.g., Trends in International Mathematics and Science Study, Program for International Student Assessment), interpretability of resulting test scores is of global concern. The nonconsequential nature of low-stakes tests…
ERIC Educational Resources Information Center
Caplan, Joel M.; Kennedy, Leslie W.; Piza, Eric L.
2013-01-01
Violent crime incidents occurring in Irvington, New Jersey, in 2007 and 2008 are used to assess the joint analytical capabilities of point pattern analysis, hotspot mapping, near-repeat analysis, and risk terrain modeling. One approach to crime analysis suggests that the best way to predict future crime occurrence is to use past behavior, such as…
John Hof; Curtis Flather; Tony Baltic; Rudy King
2006-01-01
The 2005 Forest and Rangeland Condition Indicator Model is a set of classification trees for forest and rangeland condition indicators at the national scale. This report documents the development of the database and the nonparametric statistical estimation for this analytical structure, with emphasis on three special characteristics of condition indicator production...
ERIC Educational Resources Information Center
Grissmer, David W., Ed.; Ross, J. Michael, Ed.
In November 1998 a group of researchers and scholars gathered to explore methodological issues related to the measurement of student achievement, with a more specific focus on the sharing of perspectives on the black-white test score gap. Papers from this conference are: (1) "Introduction: Toward Heuristic Models of Student Outcomes and More…
Hanford internal dosimetry program manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carbaugh, E.H.; Sula, M.J.; Bihl, D.E.
1989-10-01
This document describes the Hanford Internal Dosimetry Program. Program services include administering the bioassay monitoring program; evaluating and documenting assessments of internal exposure and dose; ensuring that analytical laboratories conform to requirements; selecting and applying appropriate models and procedures for evaluating internal radionuclide deposition and the resulting dose; and technically guiding and supporting Hanford contractors in matters regarding internal dosimetry. 13 refs., 16 figs., 42 tabs.
Does it make sense to modify tropical cyclones? A decision-analytic assessment.
Klima, Kelly; Morgan, M Granger; Grossmann, Iris; Emanuel, Kerry
2011-05-15
Recent dramatic increases in damages caused by tropical cyclones (TCs) and improved understanding of TC physics have led DHS to fund research on intentional hurricane modification. We present a decision analytic assessment of whether it is potentially cost-effective to attempt to lower the wind speed of TCs approaching South Florida by reducing sea surface temperatures with wind-wave pumps. Using historical data on hurricanes approaching South Florida, we develop prior probabilities of how storms might evolve. The effects of modification are estimated using a modern TC model. The FEMA HAZUS-MH MR3 damage model and census data on the value of property at risk are used to estimate expected economic losses. We compare wind damages after storm modification with damages after implementing hardening strategies protecting buildings. We find that if it were feasible and properly implemented, modification could reduce net losses from an intense storm more than hardening structures. However, hardening provides "fail safe" protection for average storms that might not be achieved if the only option were modification. The effect of natural variability is larger than that of either strategy. Damage from storm surge is modest in the scenario studied but might be abated by modification.
Real, Jordi; Forné, Carles; Roso-Llorach, Albert; Martínez-Sánchez, Jose M
2016-05-01
Controlling for confounders is a crucial step in analytical observational studies, and multivariable models are widely used as statistical adjustment techniques. However, the validation of the assumptions of the multivariable regression models (MRMs) should be made clear in scientific reporting. The objective of this study is to review the quality of statistical reporting of the most commonly used MRMs (logistic, linear, and Cox regression) applied in analytical observational studies published between 2003 and 2014 in journals indexed in MEDLINE. We reviewed a representative sample of articles indexed in MEDLINE (n = 428) with observational design and use of MRMs (logistic, linear, and Cox regression). We assessed the quality of reporting of model assumptions and goodness-of-fit, interactions, sensitivity analysis, crude and adjusted effect estimates, and specification of more than one adjusted model. The tests of underlying assumptions or goodness-of-fit of the MRMs used were described in 26.2% (95% CI: 22.0-30.3) of the articles, and 18.5% (95% CI: 14.8-22.1) reported the interaction analysis. Reporting of all items assessed was higher in articles published in journals with a higher impact factor. A low percentage of articles indexed in MEDLINE that used multivariable techniques provided information demonstrating rigorous application of the model selected as an adjustment method. Given the importance of these methods to the final results and conclusions of observational studies, greater rigor is required in reporting the use of MRMs in the scientific literature.
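The reported proportions are consistent with simple Wald intervals on n = 428; for example, 112/428 ≈ 26.2% reproduces the quoted 22.0-30.3 band. A quick check (the numerator 112 and the Wald form are our inference, not stated in the abstract):

```python
import math

def wald_ci(k, n, z=1.96):
    """Wald 95% confidence interval for a proportion k/n, in percent."""
    p = k / n
    half = z * math.sqrt(p * (1.0 - p) / n)
    return 100 * p, 100 * (p - half), 100 * (p + half)

# 112/428 = 26.2% -> CI about (22.0, 30.3), matching the reported values
print(wald_ci(112, 428))
```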
A simple analytical aerodynamic model of Langley Winged-Cone Aerospace Plane concept
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.
1994-01-01
A simple three-DOF analytical aerodynamic model of the Langley Winged-Cone Aerospace Plane concept is presented in a form suitable for simulation, trajectory optimization, and guidance and control studies. The analytical model is especially suitable for methods based on variational calculus. Analytical expressions are presented for lift, drag, and pitching moment coefficients from subsonic to hypersonic Mach numbers and angles of attack up to +/- 20 deg. This analytical model has break points at Mach numbers of 1.0, 1.4, 4.0, and 6.0. Across these Mach number break points, the lift, drag, and pitching moment coefficients are made continuous but their derivatives are not. There are no break points in angle of attack. The effect of control surface deflection is not considered. The present analytical model compares well with the APAS calculations and wind tunnel test data for most angles of attack and Mach numbers.
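The continuity-without-smoothness behavior at the Mach break points is exactly what piecewise-linear interpolation produces. A minimal sketch; the break points are the paper's, but the coefficient values are invented for illustration.

```python
import numpy as np

# Mach break points from the abstract; lift-curve-slope values hypothetical.
mach_bp = np.array([0.3, 1.0, 1.4, 4.0, 6.0, 10.0])
cl_alpha = np.array([3.8, 5.2, 4.6, 2.9, 2.4, 2.0])   # per rad, illustrative

def lift_curve_slope(mach):
    # np.interp is continuous everywhere but has slope jumps at the break
    # points: the same C0-but-not-C1 behavior the abstract describes.
    return np.interp(mach, mach_bp, cl_alpha)

print(lift_curve_slope([0.8, 1.0, 1.2, 5.0]))
```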
NASA Technical Reports Server (NTRS)
Simonson, M. R.; Smith, E. G.; Uhl, W. R.
1974-01-01
Analytical and experimental studies were performed to define the flowfield of annular jets, with and without swirling flow. The analytical model treated configurations with variations of flow angularities, radius ratio, and swirl distributions. Swirl distributions characteristic of stator vanes and rotor blade rows, where the total pressure and swirl distributions are related, were incorporated in the mathematical model. The experimental studies included tests of eleven nozzle models, both with and without swirling exhaust flow. Flowfield surveys were obtained and used for comparison with the analytical model. This comparison of experimental and analytical studies served as the basis for evaluating several empirical constants required for application of the analysis to the general flow configuration. The analytical model developed during these studies is applicable to the evaluation of the flowfield and overall performance of the exhaust of statorless lift fan systems that contain various levels of exhaust swirl.
Analytical Modeling of Groundwater Seepages to St. Lucie Estuary
NASA Astrophysics Data System (ADS)
Lee, J.; Yeh, G.; Hu, G.
2008-12-01
In this paper, six analytical models describing the hydraulic interaction of stream-aquifer systems were applied to the St. Lucie Estuary (SLE). These are analytical solutions for: (1) flow from a finite aquifer to a canal, (2) flow from an infinite aquifer to a canal, (3) the linearized Laplace system in a seepage surface, (4) wave propagation in the aquifer, (5) potential flow through stratified unconfined aquifers, and (6) flow through stratified confined aquifers. Input data for the analytical solutions were obtained from monitoring wells and river stages at seepage-meter sites. Four transects in the study area are available: Club Med, Harbour Ridge, Lutz/MacMillan, and Pendarvis Cove, located in the St. Lucie River. The analytical models were first calibrated with seepage meter measurements and then used to estimate groundwater discharges into the St. Lucie River. From this process, analytical relationships between the seepage rate and river stages and/or groundwater tables were established to predict the seasonal and monthly variation in groundwater seepage into the SLE. The seepage rates estimated by the analytical models agreed well with measured data in some cases but only fairly in others. This is not unexpected, because analytical solutions rest on simplifying assumptions that are more valid in some cases than in others. From analytical calculations, it is possible to predict approximate seepage rates in the study domain when the assumptions underlying these analytical models are valid. The finite and infinite aquifer models and the linearized Laplace method perform well for the Pendarvis Cove and Lutz/MacMillan sites, but only fairly for the other two. The wave propagation model gave very good agreement in phase but only fair agreement in magnitude for all four sites. The stratified unconfined and confined aquifer models gave similarly good agreement with measurements at three sites but poor agreement at the Club Med site; none of the analytical models presented here can fit the data at this site. To give better estimates at all sites, numerical modeling that couples river hydraulics and groundwater flow, involving fewer simplifications and assumptions, may have to be adopted.
New analytical solutions to the two-phase water faucet problem
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-06-17
Here, the one-dimensional water faucet problem is one of the classical benchmark problems originally proposed by Ransom to study the two-fluid two-phase flow model. With certain simplifications, such as a massless gas phase and no wall or interfacial friction, analytical solutions had previously been obtained for the transient liquid velocity and void fraction distribution. The water faucet problem and its analytical solutions have been widely used for code assessment, benchmarking, and numerical verification. In our previous study, Ransom's solutions were used for the mesh convergence study of a high-resolution spatial discretization scheme. It was found that, at the steady state, the anticipated second-order spatial accuracy could not be achieved when compared to the existing Ransom analytical solutions. A further investigation showed that the existing analytical solutions do not actually satisfy the commonly used two-fluid single-pressure two-phase flow equations. In this work, we present a new set of analytical solutions of the water faucet problem at the steady state, considering the gas phase density's effect on pressure distribution. This new set of analytical solutions is used for mesh convergence studies, from which the anticipated second-order accuracy is achieved for the 2nd-order spatial discretization scheme. In addition, extended Ransom transient solutions for the gas phase velocity and pressure are derived, with the assumption of decoupled liquid and gas pressures. Numerical verifications of the extended Ransom solutions are also presented.
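For reference, the original Ransom transient liquid-phase solution that this work extends has a compact closed form under the same simplifications (massless gas, no friction). A minimal sketch, with the commonly used initial conditions as defaults:

```python
import math

def ransom_faucet(x, t, v0=10.0, alpha_l0=0.8, g=9.81):
    """Classic transient solution of Ransom's water faucet problem.
    Returns (liquid velocity, liquid volume fraction) at position x (m,
    measured downward from the inlet) and time t (s)."""
    if x <= v0 * t + 0.5 * g * t * t:      # region reached by the front
        v = math.sqrt(v0**2 + 2.0 * g * x)
        alpha_l = alpha_l0 * v0 / v        # mass conservation thins the jet
    else:                                  # still the undisturbed column
        v = v0 + g * t
        alpha_l = alpha_l0
    return v, alpha_l

print(ransom_faucet(x=6.0, t=0.4))
```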
Li, Yuanyuan; Xie, Yanming; Fu, Yingkun
2011-10-01
Many studies have been launched on the safety, efficacy, and economics of post-marketing Chinese patent medicine (CPM), but a comprehensive interpretation is lacking. Establishing a risk evaluation index system and risk assessment model for CPM is key to solving drug safety problems and protecting people's health. The clinical risk factors of CPM share similarities with those of Western medicine, so lessons can be drawn from foreign experience; however, they also have their own multi-factor, multivariate, multi-level complexity. Given the uncertainty and complexity of drug safety risk assessment, using the analytic hierarchy process (AHP) to weight the indices and an AHP-based fuzzy neural network to build the post-marketing CPM risk evaluation index system and risk assessment model, while continually adapting the application to the characteristics of traditional Chinese medicine, is a feasible and beneficial line of exploration.
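The AHP step referred to here derives index weights from a pairwise comparison matrix via Saaty's principal-eigenvector method, with a consistency check. A minimal sketch; the 3x3 matrix and the criterion names are hypothetical.

```python
import numpy as np

def ahp_weights(A):
    """Priority weights from an AHP pairwise comparison matrix A (Saaty's
    principal-eigenvector method) plus the consistency ratio, for n = 3..9."""
    n = A.shape[0]
    vals, vecs = np.linalg.eig(A)
    i = np.argmax(vals.real)                 # Perron (dominant) eigenvalue
    w = np.abs(vecs[:, i].real)
    w /= w.sum()
    ci = (vals[i].real - n) / (n - 1)        # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}[n]
    return w, ci / ri                        # weights, CR (< 0.1 acceptable)

# Hypothetical comparison of risk indices: severity, frequency, detectability
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = ahp_weights(A)
print(np.round(w, 3), round(cr, 3))
```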
Guimarães, Geovani Pereira; Santos, Ravely Lucena; Júnior, Fernando José de Lima Ramos; da Silva, Karla Monik Alves; de Souza, Fabio Santos
2016-01-01
Momordica charantia is a species cultivated throughout the world and widely used in folk medicine, and its medicinal benefits are well documented, especially its pharmacological properties, including antimicrobial activities. Analytical methods have been used to aid in the characterization of compounds derived from plant drug extracts and their products. This paper developed a methodological model to evaluate the integrity of the vegetable drug M. charantia at different particle sizes, using different analytical methods. M. charantia was collected in the semiarid region of Paraíba, Brazil. The herbal medicine raw material derived from the leaves and fruits at different particle sizes was analyzed using thermoanalytical techniques such as thermogravimetry (TG) and differential thermal analysis (DTA), pyrolysis coupled to gas chromatography/mass spectrometry (PYR-GC/MS), and nuclear magnetic resonance (1H NMR), in addition to the determination of antimicrobial activity. The techniques differentiated the samples by their particle surface areas. DTA and TG were used for assessing thermal and kinetic parameters, and PYR-GC/MS was used for chromatographic identification of degradation products through the pyrograms. The infusions obtained from the fruit and leaves of Momordica charantia presented antimicrobial activity. PMID:27579215
NASA Technical Reports Server (NTRS)
Smith, John W.; Montgomery, Terry
1996-01-01
During rapid rolling maneuvers, the F-16 XL aircraft exhibits a 2.5 Hz lightly damped roll oscillation, perceived and described as 'roll ratcheting.' This phenomenon is common with fly-by-wire control systems, particularly when primary control is derived through a pedestal-mounted side-arm controller. Analytical studies have been conducted to model the nature of the integrated control characteristics. The analytical results complement the flight observations. A three-degree-of-freedom linearized set of aerodynamic matrices was assembled to simulate the aircraft plant. The lateral-directional control system was modeled as a linear system. A combination of two second-order transfer functions was derived to couple the lateral-acceleration feed-through effect of the operator's arm and controller to the roll stick force input. From the combined systems, open-loop frequency responses and a time history were derived, describing and predicting an analogous in-flight situation. This report describes the primary control, aircraft angular rate, and position time responses of the F-16 XL-2 aircraft during subsonic and high-dynamic-pressure rolling maneuvers. The analytical description of the pilot's arm and controller can be applied to other aircraft or simulations to assess roll ratcheting susceptibility.
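A cascade of two second-order transfer functions, like the arm/controller model described here, is straightforward to sketch with scipy; the natural frequencies, damping ratios, and gain below are illustrative assumptions, not the report's identified parameters.

```python
import numpy as np
from scipy import signal

def second_order(wn, zeta, gain=1.0):
    """Numerator/denominator of gain * wn^2 / (s^2 + 2*zeta*wn*s + wn^2)."""
    return [gain * wn**2], [1.0, 2.0 * zeta * wn, wn**2]

# Hypothetical arm/controller biodynamic feed-through: two cascaded
# second-order stages, the first lightly damped near 2.5 Hz.
n1, d1 = second_order(wn=2 * np.pi * 2.5, zeta=0.15)
n2, d2 = second_order(wn=2 * np.pi * 8.0, zeta=0.60)
num, den = np.convolve(n1, n2), np.convolve(d1, d2)

sys = signal.TransferFunction(num, den)
w, mag, phase = signal.bode(sys, w=np.logspace(0, 2.2, 300))
peak_hz = w[np.argmax(mag)] / (2 * np.pi)
print(f"resonant peak near {peak_hz:.1f} Hz")   # lightly damped mode dominates
```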
REDUCING AMBIGUITY IN THE FUNCTIONAL ASSESSMENT OF PROBLEM BEHAVIOR
Rooker, Griffin W.; DeLeon, Iser G.; Borrero, Carrie S. W.; Frank-Crawford, Michelle A.; Roscoe, Eileen M.
2015-01-01
Severe problem behavior (e.g., self-injury and aggression) remains among the most serious challenges for the habilitation of persons with intellectual disabilities and is a significant obstacle to community integration. The current standard of behavior analytic treatment for problem behavior in this population consists of a functional assessment and treatment model. Within that model, the first step is to assess the behavior–environment relations that give rise to and maintain problem behavior, a functional behavioral assessment. Conventional methods of assessing behavioral function include indirect, descriptive, and experimental assessments of problem behavior. Clinical investigators have produced a rich literature demonstrating the relative effectiveness for each method, but in clinical practice, each can produce ambiguous or difficult-to-interpret outcomes that may impede treatment development. This paper outlines potential sources of variability in assessment outcomes and then reviews the evidence on strategies for avoiding ambiguous outcomes and/or clarifying initially ambiguous results. The end result for each assessment method is a set of best practice guidelines, given the available evidence, for conducting the initial assessment. PMID:26236145
Critical Factors in Data Governance for Learning Analytics
ERIC Educational Resources Information Center
Elouazizi, Noureddine
2014-01-01
This paper identifies some of the main challenges of data governance modelling in the context of learning analytics for higher education institutions, and discusses the critical factors for designing data governance models for learning analytics. It identifies three fundamental common challenges that cut across any learning analytics data…
Laboratory, Field, and Analytical Procedures for Using ...
Regardless of the remedial technology invoked to address contaminated sediments in the environment, there is a critical need to have tools for assessing the effectiveness of the remedy. In the past, these tools have included chemical and biomonitoring of the water column and sediments, toxicity testing and bioaccumulation studies performed on site sediments, and application of partitioning, transport and fate modeling. All of these tools served as lines of evidence for making informed environmental management decisions at contaminated sediment sites. In the last ten years, a new tool for assessing remedial effectiveness has gained a great deal of attention. Passive sampling offers a tool capable of measuring the freely dissolved concentration (Cfree) of legacy contaminants in water and sediments. In addition to assessing the effectiveness of the remedy, passive sampling can be applied for a variety of other contaminated sediments site purposes involved with performing the preliminary assessment and site inspection, conducting the remedial investigation and feasibility study, preparing the remedial design, and assessing the potential for contaminant bioaccumulation. While there is a distinct need for using passive sampling at contaminated sediments sites and several previous documents and research articles have discussed various aspects of passive sampling, there has not been definitive guidance on the laboratory, field and analytical procedures for using passive sampling.
Let's Go Off the Grid: Subsurface Flow Modeling With Analytic Elements
NASA Astrophysics Data System (ADS)
Bakker, M.
2017-12-01
Subsurface flow modeling with analytic elements has the major advantage that no grid or time stepping is needed. Analytic element formulations exist for steady-state and transient flow in layered aquifers and for unsaturated flow in the vadose zone. Analytic element models are vector-based and consist of points, lines, and curves that represent specific features in the subsurface. Recent advances allow for the simulation of partially penetrating wells and multi-aquifer wells, including skin effect and wellbore storage; horizontal wells of poly-line shape, including skin effect; sharp changes in subsurface properties; and surface water features with leaky beds. Input files for analytic element models are simple, short, and readable, and can easily be generated from, for example, GIS databases. Future plans include the incorporation of analytic elements in parts of grid-based models where additional detail is needed. This presentation will give an overview of advanced flow features that can be modeled, many of which are implemented in free and open-source software.
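The grid-free character of analytic element models comes from superposing closed-form solutions, evaluated at any point without a mesh. A minimal sketch for a confined aquifer with uniform regional flow plus two discharge-specified well elements; all values, sign conventions, and the reference-distance handling are simplified assumptions.

```python
import numpy as np

T = 100.0                     # transmissivity (m^2/d)
Q0 = 0.5                      # regional uniform flow (m^2/d)
h_ref = 20.0                  # reference head (m)
wells = [(0.0, 0.0, 400.0),   # (x, y, discharge Q in m^3/d; Q > 0 pumps)
         (250.0, 100.0, -200.0)]

def head(x, y):
    """Head by superposition: uniform-flow potential plus Thiem well
    potentials, converted to head (linearized, confined aquifer)."""
    phi = -Q0 * x
    for xw, yw, Q in wells:
        r = np.hypot(x - xw, y - yw)
        phi += Q / (2.0 * np.pi) * np.log(r / 1000.0)  # 1000 m reference
    return h_ref + phi / T

print(head(100.0, 50.0))      # head anywhere, no grid involved
```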
Cook, Robert L; Kelso, Natalie E; Brumback, Babette A; Chen, Xinguang
2016-01-01
As persons with HIV are living longer, there is a growing need to investigate factors associated with chronic disease, rate of disease progression and survivorship. Many risk factors for this high-risk population change over time, such as participation in treatment, alcohol consumption and drug abuse. Longitudinal datasets are increasingly available, particularly clinical data that contain multiple observations of health exposures and outcomes over time. Several analytic options are available for assessment of longitudinal data; however, it can be challenging to choose the appropriate analytic method for specific combinations of research questions and types of data. The purpose of this review is to help researchers choose the appropriate methods to analyze longitudinal data, using alcohol consumption as an example of a time-varying exposure variable. When selecting the optimal analytic method, one must consider aspects of exposure (e.g. timing, pattern, and amount) and outcome (fixed or time-varying), while also addressing minimizing bias. In this article, we will describe several analytic approaches for longitudinal data, including developmental trajectory analysis, generalized estimating equations, and mixed effect models. For each analytic strategy, we describe appropriate situations to use the method and provide an example that demonstrates the use of the method. Clinical data related to alcohol consumption and HIV are used to illustrate these methods.
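As one concrete instance of the options reviewed here, the sketch below fits a random-intercept mixed-effects model to synthetic longitudinal data with a time-varying alcohol exposure using statsmodels; the variable names and effect sizes are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic repeated measures: a CD4-like outcome over 5 visits per person
rng = np.random.default_rng(2)
n, visits = 200, 5
pid = np.repeat(np.arange(n), visits)
time = np.tile(np.arange(visits), n)
alcohol = rng.integers(0, 2, n * visits)          # drinking at each visit
subj = np.repeat(rng.normal(0, 5, n), visits)     # person-level intercepts
outcome = 500 + 8 * time - 25 * alcohol + subj + rng.normal(0, 10, n * visits)
df = pd.DataFrame(dict(id=pid, time=time, alcohol=alcohol, outcome=outcome))

# A random intercept per person handles the within-subject correlation
# that ordinary regression ignores.
fit = smf.mixedlm("outcome ~ time + alcohol", df, groups=df["id"]).fit()
print(fit.params[["time", "alcohol"]])
```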
Durning, Steven J; Costanzo, Michelle E; Beckman, Thomas J; Artino, Anthony R; Roy, Michael J; van der Vleuten, Cees; Holmboe, Eric S; Lipner, Rebecca S; Schuwirth, Lambert
2016-06-01
Diagnostic reasoning involves the thinking steps up to and including arrival at a diagnosis. Dual process theory posits that a physician's thinking is based on both non-analytic or fast, subconscious thinking and analytic thinking that is slower, more conscious, effortful and characterized by comparing and contrasting alternatives. Expertise in clinical reasoning may relate to the two dimensions measured by the diagnostic thinking inventory (DTI): memory structure and flexibility in thinking. We explored the functional magnetic resonance imaging (fMRI) correlates of these two aspects of the DTI: memory structure and flexibility of thinking. Participants answered and reflected upon multiple-choice questions (MCQs) during fMRI. A DTI was completed shortly after the scan. The brain processes associated with the two dimensions of the DTI were correlated with fMRI phases - assessing flexibility in thinking during analytical clinical reasoning, memory structure during non-analytical clinical reasoning and the total DTI during both non-analytical and analytical reasoning in experienced physicians. Each DTI component was associated with distinct functional neuroanatomic activation patterns, particularly in the prefrontal cortex. Our findings support diagnostic thinking conceptual models and indicate mechanisms through which cognitive demands may induce functional adaptation within the prefrontal cortex. This provides additional objective validity evidence for the use of the DTI in medical education and practice settings.
Holistic irrigation water management approach based on stochastic soil water dynamics
NASA Astrophysics Data System (ADS)
Alizadeh, H.; Mousavi, S. J.
2012-04-01
Appreciating the essential gap between fundamental unsaturated zone transport processes and soil and water management, due to the low effectiveness of some monitoring and modeling approaches, this study presents a mathematical programming model for irrigation management optimization based on stochastic soil water dynamics. The model is a nonlinear non-convex program with an economic objective function to address water productivity and profitability aspects in irrigation management through optimizing the irrigation policy. Utilizing an optimization-simulation method, the model includes an eco-hydrological integrated simulation model consisting of an explicit stochastic module of soil moisture dynamics in the crop-root zone with shallow water table effects, a conceptual root-zone salt balance module, and the FAO crop yield module. The interdependent hydrology of the soil unsaturated and saturated zones is treated semi-analytically in two steps. In the first step, analytical expressions are derived for the expected values of crop yield, total water requirement, and soil water balance components assuming a fixed shallow water table level, while a numerical Newton-Raphson procedure is employed in the second step to update the shallow water table level. A Particle Swarm Optimization (PSO) algorithm, combined with the eco-hydrological simulation model, has been used to solve the non-convex program. Benefiting from the semi-analytical framework of the simulation model, the optimization-simulation method, with significantly better computational performance than a numerical Monte Carlo simulation-based technique, has led to an effective irrigation management tool that can contribute to bridging the gap between vadose zone theory and water management practice. In addition to precisely assessing the most influential processes at the growing-season time scale, the developed model can be used in large-scale systems such as irrigation districts and agricultural catchments. Accordingly, the model has been applied in the Dasht-e-Abbas and Ein-khosh Fakkeh Irrigation Districts (DAID and EFID) of the Karkheh Basin in southwest Iran. The area suffers from water scarcity, and therefore the trade-off between the level of deficit and economic profit should be assessed. Based on the results, while the maximum net benefit was obtained for the stress-avoidance (SA) irrigation policy, the highest water profitability, defined as the economic net benefit gained per unit volume of irrigation water applied, resulted when only about 60% of the water used in the SA policy was applied.
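A minimal particle swarm optimizer of the kind coupled to the simulator above; the two-variable "net benefit" objective is a stand-in for the eco-hydrological model, and all coefficients are hypothetical.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm minimizer for a bound-constrained objective,
    the kind of non-convex search such optimization-simulation models use."""
    rng = np.random.default_rng(3)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pval)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pval
        pbest[better], pval[better] = x[better], f[better]
        g = pbest[np.argmin(pval)]
    return g, pval.min()

# Stand-in objective: negative "net benefit" of two irrigation depths (mm)
obj = lambda d: -(10 * np.sqrt(d[0]) + 8 * np.sqrt(d[1]) - 0.5 * (d[0] + d[1]))
print(pso(obj, (np.array([0.0, 0.0]), np.array([100.0, 100.0]))))
```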
Retrieval-travel-time model for free-fall-flow-rack automated storage and retrieval system
NASA Astrophysics Data System (ADS)
Metahri, Dhiyaeddine; Hachemi, Khalid
2018-03-01
Automated storage and retrieval systems (AS/RSs) are material handling systems that are frequently used in manufacturing and distribution centers. Modelling the retrieval-travel time of an AS/RS (the expected product delivery time) is practically important because it allows us to evaluate and improve system throughput. The free-fall-flow-rack AS/RS has emerged as a new technology for drug distribution. This system is a new variation of the flow-rack AS/RS that uses an operator or a single machine for storage operations, and a combination of free-fall movement and a transport conveyor for retrieval operations. The main contribution of this paper is to develop an analytical model of the expected retrieval-travel time for the free-fall flow-rack under a dedicated storage assignment policy. The proposed model, which is based on a continuous approach, is compared for accuracy, via simulation, with a discrete model. The obtained results show that the maximum deviation between the continuous model and the simulation is less than 5%, confirming the model's accuracy in estimating the retrieval time. The analytical model is useful for optimising the dimensions of the rack, assessing the system throughput, and evaluating different storage policies.
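The continuous-versus-discrete comparison can be illustrated with a toy retrieval-time law (free fall from the bin height plus conveyor travel from the bin column); the rack dimensions, conveyor speed, and the time decomposition below are our assumptions, not the paper's model.

```python
import numpy as np

g, v_c = 9.81, 0.5            # gravity (m/s^2), conveyor speed (m/s)
H, W = 2.0, 3.0               # rack height and width (m)

# Continuous approximation: bins uniformly distributed over the rack face.
# E[T] = E[sqrt(2h/g)] + E[x]/v_c = (2/3)*sqrt(2H/g) + (W/2)/v_c in closed form.
t_cont = (2.0 / 3.0) * np.sqrt(2.0 * H / g) + (W / 2.0) / v_c

# Discrete model: average over an actual grid of bin centers.
rows, cols = 8, 12
h = (np.arange(rows) + 0.5) * H / rows
x = (np.arange(cols) + 0.5) * W / cols
t_disc = np.sqrt(2.0 * h / g).mean() + (x / v_c).mean()

print(t_cont, t_disc, abs(t_cont - t_disc) / t_disc)  # small relative gap
```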
Determining passive cooling limits in CPV using an analytical thermal model
NASA Astrophysics Data System (ADS)
Gualdi, Federico; Arenas, Osvaldo; Vossier, Alexis; Dollet, Alain; Aimez, Vincent; Arès, Richard
2013-09-01
We propose an original thermal analytical model aiming to predict the practical limits of passive cooling systems for high concentration photovoltaic modules. The analytical model is described and validated by comparison with a commercial 3D finite element model. The limiting performances of flat plate cooling systems in natural convection are then derived and discussed.
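At its simplest, such a model reduces to a concentrated heat load through a junction-to-ambient thermal resistance. A back-of-envelope sketch with hypothetical numbers (the paper's analytical model resolves spreading and convection in more detail):

```python
# Passive-cooling estimate for a concentrator PV cell (all values hypothetical)
C = 500.0          # geometric concentration (suns)
DNI = 900.0        # direct normal irradiance (W/m^2)
A_cell = 1e-4      # 1 cm^2 cell (m^2)
eta = 0.38         # cell efficiency; the remainder is dissipated as heat
R_th = 1.5         # junction-to-ambient thermal resistance (K/W)
T_amb = 25.0       # ambient temperature (C)

q = C * DNI * A_cell * (1.0 - eta)   # heat load on the cell (W)
T_cell = T_amb + R_th * q            # one-resistance temperature estimate
print(f"heat load {q:.1f} W -> cell at about {T_cell:.0f} C")
```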
Water flow in fractured rock masses: numerical modeling for tunnel inflow assessment
NASA Astrophysics Data System (ADS)
Gattinoni, P.; Scesi, L.; Terrana, S.
2009-04-01
Water circulation in rocks is a very important element in many problems in civil, environmental, and mining engineering. In particular, the interaction of tunnelling with groundwater has become a very relevant problem, not only due to the need to safeguard water resources from impoverishment and pollution risk, but also to guarantee the safety of workers and to assure the efficiency of tunnel drainage systems. The evaluation of the hydrogeological risk linked to underground excavation is very complex, both because of the large number of variables involved and because of the lack of data available during the planning stage. This study aims to quantify the influence of some geo-structural parameters (i.e., discontinuity dip and dip direction) on the tunnel drainage process, comparing the traditional analytical method to the modeling approach, with specific reference to anisotropic rock masses. To forecast tunnel inflows, several authors suggest analytic formulations (Goodman et al., 1965; Knutsson et al., 1996; Ribacchi et al., 2002; Park et al., 2008; Perrochet et al., 2007; Cesano et al., 2003; Hwang et al., 2007), valid for an infinite, homogeneous, and isotropic aquifer, in which the permeability is given as a modulus of equivalent hydraulic conductivity Keq. On the contrary, in discontinuous rock masses the water flow is strongly controlled by joint orientation, joint hydraulic characteristics, and rock fracturing conditions. The analytic equations found in the technical literature can be very useful, but often they do not reflect the real phenomena of tunnel inflow in rock masses: they are based on the hypothesis of a homogeneous aquifer and thus do not give good agreement for a heterogeneous fractured medium. In this latter case, numerical modelling can provide the best results, but only with a detailed conceptual model of the water circulation, high costs, and long simulation times. Therefore, the integration of analytic methods and numerical modeling is very important to adapt the analytic formulas to the specific hydrogeological structure. The study was carried out through parametrical modeling: groundwater flow was simulated with the DEM model UDEC 2D, considering different geometrical (tunnel depth and radius) and hydrogeological (piezometric) settings. The influence of the geo-structural setting (dip and dip direction of discontinuities, with reference to their permeability) on the tunnel drainage process was quantified. The simulations were aimed at creating a sufficient data set of tunnel inflows, in different geological-structural settings, enabling a quantitative comparison between the numerical results and the well-known analytic formulas (i.e., the Goodman and El Tani equations). The results of this comparison point out the following aspects: - the geological-structural setting most critical for hydrogeological risk in tunnels corresponds to joints with low dip (close to 0°), which favour drainage processes and increase the tunnel inflow; - the rock mass anisotropy strongly influences both the tunnel inflow and the water table drawdown; - the reliability of analytic formulas for tunnel inflow assessment in discontinuous rock masses depends on the geostructural setting; the analytic formulas overestimate the tunnel inflow, and this overestimation is larger for geostructural settings with higher discontinuity dips.
Finally, using the results of the parametric modelling, the previously cited analytical formulas were corrected to obtain an empirical equation that gives the tunnel inflow as a function of the geological-structural setting, with particular regard to: the horizontal component of the discontinuities, the hydraulic conductivity anisotropy ratio, and the orientation of the hydraulic conductivity tensor. The resulting empirical equation allows a first estimate of the tunnel inflow that takes joint characteristics into account, and it is very useful for identifying the areas where in-depth studies are required. References: Cesano D., Bagtzoglou A.C., Olofsson B. (2003). Quantifying fractured rock hydraulic heterogeneity and groundwater inflow prediction in underground excavations: the heterogeneity index. Tunnelling and Underground Space Technology, 18, pp. 19-34. El Tani M. (2003). Circular tunnel in a semi-infinite aquifer. Tunnelling and Underground Space Technology, 18, pp. 49-55. Goodman R.E., Moye D.G., Van Schalkwyk A., Javandel I. (1965). Ground water inflows during tunnel driving. Eng. Geol., 2, pp. 39-56. Hwang J-H., Lu C-C. (2007). A semi-analytical method for analyzing the tunnel water inflow. Tunnelling and Underground Space Technology, 22, pp. 39-46. Itasca (2001). UDEC, User's Guide. Itasca Consulting Group Inc., Minneapolis, Minnesota. Knutsson G., Olofsson B., Cesano D. (1996). Prognosis of groundwater inflows and drawdown due to the construction of rock tunnels in heterogeneous media. Res. Proj. Rep., Kungl. Tekniska Högskolan, Stockholm. Park K-H., Owatsiriwong A., Lee G-G. (2008). Analytical solution for steady-state groundwater inflow into a drained circular tunnel in a semi-infinite aquifer: a revisit. Tunnelling and Underground Space Technology, 23, pp. 206-209. Perrochet P., Dematteis A. (2007). Modelling transient discharge into a tunnel drilled in a heterogeneous formation. Ground Water, 45(6), pp. 786-790.
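To make the baseline in the preceding abstract concrete, here is a minimal sketch of the Goodman et al. (1965) steady-state inflow formula for a circular tunnel in an infinite, homogeneous, isotropic aquifer; the abstract's empirical corrections for dip and anisotropy are not reproduced, and the parameter values are illustrative only.

```python
import math

def goodman_inflow(K_eq, H, r):
    """Steady-state inflow per unit tunnel length (m^3/s per m), from the
    Goodman et al. (1965) formula for an infinite, homogeneous, isotropic
    aquifer: Q = 2*pi*K*H / ln(2H/r).
    K_eq : equivalent hydraulic conductivity (m/s)
    H    : head of water above the tunnel axis (m)
    r    : tunnel radius (m)
    """
    return 2.0 * math.pi * K_eq * H / math.log(2.0 * H / r)

# Illustrative values: K_eq = 1e-6 m/s, 100 m head, 5 m radius
print(f"Q ~ {goodman_inflow(1e-6, 100.0, 5.0):.2e} m^3/s per metre of tunnel")
```

As the abstract notes, in a jointed rock mass with steeply dipping discontinuities this isotropic estimate tends to overpredict the actual inflow.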
Dynamic response of gold nanoparticle chemiresistors to organic analytes in aqueous solution.
Müller, Karl-Heinz; Chow, Edith; Wieczorek, Lech; Raguse, Burkhard; Cooper, James S; Hubble, Lee J
2011-10-28
We investigate the response dynamics of 1-hexanethiol-functionalized gold nanoparticle chemiresistors exposed to the analyte octane in aqueous solution. The dynamic response is studied as a function of the analyte-water flow velocity, the thickness of the gold nanoparticle film and the analyte concentration. A theoretical model for analyte-limited mass transport is used to model the analyte diffusion into the film, the partitioning of the analyte into the 1-hexanethiol capping layers and the subsequent swelling of the film. The degree of swelling is then used to calculate the increase of the electron tunnelling resistance between adjacent nanoparticles, which determines the resistance change of the film. In particular, the effect of the nonlinear relationship between resistance and swelling on the dynamic response is investigated at high analyte concentration. Good agreement between experiment and the theoretical model is achieved.
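The nonlinear resistance-swelling relationship invoked above can be illustrated with the standard exponential dependence of inter-particle tunnelling resistance on gap width; the decay constant and swelling values below are placeholder assumptions, not parameters fitted by the authors.

```python
import numpy as np

# Illustrative sketch: R = R0 * exp(beta * delta) for inter-particle
# tunnelling, so a gap increase d_delta from film swelling gives a
# nonlinear relative response (R - R0) / R0 = exp(beta * d_delta) - 1.
beta = 10.0                                # assumed decay constant (1/nm)
d_delta = np.linspace(0.0, 0.3, 7)         # assumed gap increases (nm)

rel_response = np.exp(beta * d_delta) - 1.0
for d, r in zip(d_delta, rel_response):
    print(f"swelling {d:.2f} nm -> dR/R0 = {r:.3f}")
```

At small swellings the response is nearly linear in the gap increase, while at high analyte concentration the exponential term dominates, which is the nonlinearity the study examines.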
Sample, Bradley E; Fairbrother, Anne; Kaiser, Ashley; Law, Sheryl; Adams, Bill
2014-10-01
Ecological soil-screening levels (Eco-SSLs) were developed by the United States Environmental Protection Agency (USEPA) for the purposes of setting conservative soil screening values that can be used to eliminate the need for further ecological assessment for specific analytes at a given site. Ecological soil-screening levels for wildlife represent a simplified dietary exposure model solved in terms of soil concentrations to produce exposure equal to a no-observed-adverse-effect toxicity reference value (TRV). Sensitivity analyses were performed for 6 avian and mammalian model species, and 16 metals/metalloids for which Eco-SSLs have been developed. The relative influence of model parameters was expressed as the absolute value of the range of variation observed in the resulting soil concentration when exposure is equal to the TRV. Rank analysis of variance was used to identify parameters with greatest influence on model output. For both birds and mammals, soil ingestion displayed the broadest overall range (variability), although TRVs consistently had the greatest influence on calculated soil concentrations; bioavailability in food was consistently the least influential parameter, although an important site-specific variable. Relative importance of parameters differed by trophic group. Soil ingestion ranked 2nd for carnivores and herbivores, but was 4th for invertivores. Different patterns were exhibited, depending on which parameter, trophic group, and analyte combination was considered. The approach for TRV selection was also examined in detail, with Cu as the representative analyte. The underlying assumption that generic body-weight-normalized TRVs can be used to derive protective levels for any species is not supported by the data. Whereas the use of site-, species-, and analyte-specific exposure parameters is recommended to reduce variation in exposure estimates (soil protection level), improvement of TRVs is more problematic.
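The back-calculation at the heart of the Eco-SSL model can be sketched as follows; the parameter names and values are illustrative stand-ins for the species-specific USEPA inputs discussed in the abstract, not the agency's defaults.

```python
# Hedged sketch: solve a simplified wildlife dietary exposure model for
# the soil concentration at which dose equals the TRV.
def eco_ssl(trv, bw, food_ir, soil_ir, baf, bioavail=1.0):
    """
    trv      : no-observed-adverse-effect TRV (mg/kg body weight/day)
    bw       : body weight (kg)
    food_ir  : food ingestion rate (kg dry weight/day)
    soil_ir  : incidental soil ingestion rate (kg/day)
    baf      : soil-to-food bioaccumulation factor (unitless)
    bioavail : relative bioavailability in food (unitless)
    Dose = C_soil * (soil_ir + baf * bioavail * food_ir) / bw;
    setting dose = TRV and solving for C_soil gives the screening level.
    """
    return trv * bw / (soil_ir + baf * bioavail * food_ir)

# Illustrative small-mammal parameters
print(f"Eco-SSL ~ {eco_ssl(5.0, 0.02, 0.003, 0.0002, 0.5):.1f} mg/kg soil")
```

The sensitivity pattern reported above follows directly from this structure: the TRV scales the result linearly, while soil ingestion and bioavailability enter through the denominator.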
Semi-analytical model of cross-borehole flow experiments for fractured medium characterization
NASA Astrophysics Data System (ADS)
Roubinet, D.; Irving, J.; Day-Lewis, F. D.
2014-12-01
The study of fractured rocks is extremely important in a wide variety of research fields where the fractures and faults can represent either rapid access to some resource of interest or potential pathways for the migration of contaminants in the subsurface. Identification of their presence and determination of their properties are critical and challenging tasks that have led to numerous fracture characterization methods. Among these methods, cross-borehole flowmeter analysis aims to evaluate fracture connections and hydraulic properties from vertical-flow-velocity measurements conducted in one or more observation boreholes under forced hydraulic conditions. Previous studies have demonstrated that analysis of these data can provide important information on fracture connectivity, transmissivity, and storativity. Estimating these properties requires the development of analytical and/or numerical modeling tools that are well adapted to the complexity of the problem. Quantitative analysis of cross-borehole flowmeter experiments, in particular, requires modeling formulations that: (i) can be adapted to a variety of fracture and experimental configurations; (ii) can take into account interactions between the boreholes because their radii of influence may overlap; and (iii) can be readily cast into an inversion framework that allows for not only the estimation of fracture hydraulic properties, but also an assessment of estimation error. To this end, we present a new semi-analytical formulation for cross-borehole flow in fractured media that links transient vertical-flow velocities measured in one or a series of observation wells during hydraulic forcing to the transmissivity and storativity of the fractures intersected by these wells. Our model addresses the above needs and provides a flexible and computationally efficient semi-analytical framework having strong potential for future adaptation to more complex configurations. The proposed modeling approach is demonstrated in the context of sensitivity analysis for a relatively simple two-fracture synthetic problem, as well as in the context of field-data analysis for fracture connectivity and estimation of corresponding hydraulic properties.
An evaluation of nitrogen and phosphorus responses to rain events in a forested watershed
NASA Astrophysics Data System (ADS)
Steadman, C.; Argerich, A.; Bladon, K. D.; Johnson, S. L.
2017-12-01
Nitrogen (N) and phosphorus (P) exhibit differential responses to storm events, reflecting complex, hydrologically driven biogeochemical activity in a watershed. However, the magnitude of these responses changes throughout the year, indicating that they may be strongly influenced by antecedent precipitation conditions. To evaluate N and P responses to storms, we collected storm samples from four subwatersheds in a small forested watershed over a 12-month period, along with climate and hydrologic data. We quantified dissolved nitrate (NO3-), ammonium (NH4+), total dissolved nitrogen (TDN), soluble reactive phosphorus (SRP), and total dissolved phosphorus (TDP) concentrations and exports in 300 samples and examined responses across subwatersheds and storms. To assess the influence of potential drivers, we generated a series of models with discharge, instantaneous rain, and cumulative rain as explanatory variables for analyte concentrations. We also constructed models with cumulative rain as the explanatory variable for analyte exports. There was strong evidence (p < .05) that cumulative rain or the cumulative rain-subwatershed interaction was important for all analyte exports and concentrations. In contrast, evidence was weak for the significance of instantaneous rain for any analyte concentration, while discharge or the discharge-subwatershed interaction was significant for NO3- and NH4+, respectively. Of all factors, cumulative rain was the most relevant for explaining analyte concentrations (i.e., it showed the highest pseudo-R2), except for NH4+, for which discharge was more relevant. There was significant spatial and temporal variability in all analyte concentrations with the exception of NH4+, which showed little variability from storm to storm. The maximum NO3- concentration occurred at the onset of the wet season, while SRP had its lowest concentration during the same period. The differential responses of the analytes evidence distinct influences of hydrologically driven biogeochemical activity on individual analytes. However, strong correlations with cumulative rain suggest that insight may be gained by considering coarser factors such as antecedent precipitation conditions, which may serve to integrate the complexities of the hillslope and improve understanding of N and P variability.
A Review of Numerical Simulation and Analytical Modeling for Medical Devices Safety in MRI
Kabil, J.; Belguerras, L.; Trattnig, S.; Pasquier, C.; Missoffe, A.
2016-01-01
Objectives: To review past and present challenges and ongoing trends in numerical simulation for MRI (Magnetic Resonance Imaging) safety evaluation of medical devices. Methods: A wide literature review of numerical and analytical simulation of simple or complex medical devices in MRI electromagnetic fields shows the evolution through time and a growing concern for MRI safety over the years. Major issues and achievements are described, as well as current trends and perspectives in this research field. Results: Numerical simulation of medical devices is constantly evolving, supported by now well-established calculation methods. Implants with simple geometry can often be simulated in a computational human model, but one remaining issue is the experimental validation of these human models. A major concern is assessing RF heating of implants too complex to be simulated traditionally, such as pacemaker leads; ongoing research therefore focuses on alternative hybrid methods, both numerical and experimental, for example a transfer function method. For the static field and gradient fields, analytical models can be used for dimensioning simple implant shapes, but they are limited for complex geometries that cannot be studied with simplifying assumptions. Conclusions: Numerical simulation is an essential tool for MRI safety testing of medical devices. The main issues remain the accuracy of simulations compared with real life and the study of complex devices; but as the research field is constantly evolving, some promising ideas are now under investigation to take up these challenges. PMID:27830244
A dimensionless approach for the runoff peak assessment: effects of the rainfall event structure
NASA Astrophysics Data System (ADS)
Gnecco, Ilaria; Palla, Anna; La Barbera, Paolo
2018-02-01
The present paper proposes a dimensionless analytical framework to investigate the impact of the rainfall event structure on the hydrograph peak. To this end, a methodology to describe the rainfall event structure is proposed based on similarity with the depth-duration-frequency (DDF) curves. The rainfall input consists of a constant hyetograph in which all possible outcomes in the sample space of rainfall structures can be condensed. Soil abstractions are modelled using the Soil Conservation Service method, and instantaneous unit hydrograph theory is used to determine the dimensionless form of the hydrograph; the two-parameter gamma distribution is selected to test the proposed methodology. The dimensionless approach is introduced so that the analytical framework can be applied to any study case (i.e. natural catchment) for which the model assumptions are valid (i.e. a linear, causative, time-invariant system). A set of analytical expressions is derived, in the case of a constant-intensity hyetograph, to assess the maximum runoff peak with respect to a given rainfall event structure irrespective of the specific catchment (such as the return period associated with the reference rainfall event). The curve of the maximum values of the runoff peak reveals a local minimum point corresponding to the design hyetograph derived according to the statistical DDF curve. A specific catchment application is discussed in order to point out the implications of the dimensionless procedure and to provide some numerical examples of rainfall structures with respect to observed rainfall events; finally, their effects on the hydrograph peak are examined.
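A minimal sketch of the two model ingredients named in the abstract, SCS-CN rainfall abstraction and a two-parameter gamma instantaneous unit hydrograph, is given below; the curve number, gamma parameters, and hyetograph are illustrative assumptions, not the paper's calibration.

```python
import numpy as np
from math import gamma as gamma_fn

def scs_runoff(P, CN):
    """Cumulative effective rainfall (mm) from cumulative rainfall P (mm),
    Soil Conservation Service method with Ia = 0.2*S."""
    S = 25400.0 / CN - 254.0           # potential retention (mm)
    Ia = 0.2 * S                       # initial abstraction (mm)
    return 0.0 if P <= Ia else (P - Ia) ** 2 / (P - Ia + S)

def gamma_iuh(t, n, k):
    """Two-parameter gamma IUH (1/h): shape n, scale k (h)."""
    return (t / k) ** (n - 1.0) * np.exp(-t / k) / (k * gamma_fn(n))

dt = 0.25                                          # h
t = np.arange(0.0, 24.0, dt)
rain = np.where(t < 2.0, 10.0, 0.0)                # constant 10 mm/h for 2 h
P_cum = np.cumsum(rain) * dt
runoff_cum = np.array([scs_runoff(p, CN=75) for p in P_cum])
eff = np.diff(np.concatenate([[0.0], runoff_cum])) # mm per step

q = np.convolve(eff, gamma_iuh(t, n=3.0, k=1.5))[: t.size]  # mm/h
print(f"hydrograph peak ~ {q.max():.2f} mm/h")
```

Non-dimensionalizing the peak by the effective rainfall intensity and the gamma-IUH time scale reproduces the kind of dimensionless peak curves the paper studies.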
Load Diffusion in Composite and Smart Structures
NASA Technical Reports Server (NTRS)
Horgan, C. O.
2003-01-01
The research carried out here builds on our previous NASA-supported research on the general topic of edge effects and load diffusion in composite structures. Further fundamental solid mechanics studies were carried out to provide a basis for assessing the complicated modeling necessary for the multi-functional large-scale structures used by NASA. An understanding of the fundamental mechanisms of load diffusion in composite subcomponents is essential in developing primary composite structures. Specific problems recently considered include end effects in smart materials and structures, the stress response of pressurized linear piezoelectric cylinders in both static and steadily rotating configurations, the effect of pre-stressing and pre-polarization on the decay of end effects in piezoelectric solids, and constitutive models for hardening rubber-like materials. Our goal in the study of load diffusion is the development of readily applicable results for the decay lengths in terms of non-dimensional material and geometric parameters. The analysis is amenable to parameter study over a large parameter space and should be useful in structural tailoring studies. Special-purpose analytical models of load diffusion behavior are extremely valuable in building an intuitive base for developing refined modeling strategies and in assessing results from general-purpose finite element analyses; for example, a rational basis is needed for choosing where to use three-dimensional to two-dimensional transition finite elements in analyzing stiffened plates and shells. The decay behavior of stresses and other field quantities furnished by this research provides a significant aid toward this element-transition issue. A priori knowledge of the extent of the boundary layers induced by edge effects is also useful in determining instrumentation locations for structural verification tests or material characterization tests.
NASA Astrophysics Data System (ADS)
Samborski, Sylwester; Valvo, Paolo S.
2018-01-01
The paper deals with the numerical and analytical modelling of the end-loaded split test for multi-directional laminates affected by the typical elastic couplings. Numerical analysis of three-dimensional finite element models was performed with the Abaqus software exploiting the virtual crack closure technique (VCCT). The results show possible asymmetries in the widthwise deflections of the specimen, as well as in the strain energy release rate (SERR) distributions along the delamination front. Analytical modelling based on a beam-theory approach was also conducted in simpler cases, where only bending-extension coupling is present, but no out-of-plane effects. The analytical results matched the numerical ones, thus demonstrating that the analytical models are feasible for test design and experimental data reduction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jetter, R. I.; Messner, M. C.; Sham, T. -L.
The goal of the proposed integrated Elastic Perfectly-Plastic (EPP) and Simplified Model Test (SMT) methodology is to incorporate an SMT-data-based approach for creep-fatigue damage evaluation into the EPP methodology, to avoid the separate evaluation of creep and fatigue damage and to eliminate the requirement for stress classification in current methods, thus greatly simplifying the evaluation of elevated-temperature cyclic service. This methodology should minimize over-conservatism while properly accounting for localized defects and stress risers. To support the implementation of the proposed methodology and to verify the applicability of the code rules, analytical studies and evaluation of thermomechanical test results continued in FY17. This report presents the results of those studies. An EPP strain limits methodology assessment was based on recent two-bar thermal ratcheting test results on 316H stainless steel in the temperature range of 405 to 705°C. Strain range predictions from the EPP evaluation of the two-bar tests were also evaluated and compared with the experimental results. The role of sustained primary loading on cyclic life was assessed using the results of pressurized SMT data from tests on Alloy 617 at 950°C. A viscoplastic material model was used in an analytic simulation of the two-bar tests to compare with EPP strain limits assessments using isochronous stress-strain curves that are consistent with the viscoplastic material model. A finite element model of a prior 304H stainless steel Oak Ridge National Laboratory (ORNL) nozzle-to-sphere test was developed and used for EPP strain limits and creep-fatigue code case damage evaluations. A theoretical treatment of a recurring issue with convergence criteria for plastic shakedown illustrated the role of computer machine precision in EPP calculations.
Martinez, N. E.; Sharp, J. L.; Kuhne, W. W.; ...
2015-11-23
Here, reflectance spectroscopy is a rapid and non-destructive analytical technique that may be used for assessing plant stress, with potential applications in remediation. Changes in reflectance, such as those due to metal stress, may occur before damage is visible, and existing studies have shown that metal stress does cause changes in plant reflectance. To further investigate the potential use of reflectance spectroscopy as a method for assessing metal stress in plants, an exploratory study was conducted in which Arabidopsis thaliana plants were treated twice weekly in a laboratory setting with varying levels (0, 0.5, or 5 mM (millimolar)) of caesium chloride (CsCl) solution, and reflectance spectra were collected every week for three weeks using an Analytical Spectral Devices FieldSpec Pro spectroradiometer with both a contact probe (CP) and a field-of-view (FOV) probe at 36.8 and 66.7 cm, respectively, above the plant. Plants were harvested each week after spectra collection for determination of relative water content and chlorophyll content. A visual assessment of the plants was also conducted using point observations on a uniform grid of 81 points. A mixed-effects model analysis was conducted for each vegetation index (VI) considered to determine the effects of length of treatment, treatment level, view with which spectra were acquired, and the interactions of these terms. Two-way analyses of variance (ANOVAs) were performed on the aforementioned endpoints (e.g. chlorophyll content) to determine the significance of the effects of treatment level and length of treatment. Multiple linear regression (MLR) was used to develop a predictive model for each endpoint, considering VIs acquired at each view (CP, high FOV, and low FOV). Of the 14 VIs considered, 8 were included in the MLR models. Contact probe readings and FOV readings differed significantly, but FOV measurements were generally consistent at each height.
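As an illustration of the vegetation-index step, the sketch below computes one common index (NDVI) from a reflectance spectrum; the study's actual set of 14 VIs is not reproduced here, and the synthetic spectrum is a placeholder for FieldSpec measurements.

```python
import numpy as np

wavelengths = np.arange(350, 2501)                      # nm, FieldSpec-like range
rng = np.random.default_rng(0)
reflectance = rng.uniform(0.05, 0.5, wavelengths.size)  # synthetic spectrum

def band_mean(wl_lo, wl_hi):
    """Mean reflectance over a wavelength band (nm)."""
    mask = (wavelengths >= wl_lo) & (wavelengths <= wl_hi)
    return reflectance[mask].mean()

red = band_mean(620, 680)
nir = band_mean(780, 900)
ndvi = (nir - red) / (nir + red)
print(f"NDVI = {ndvi:.3f}")
```

Each such index would then enter the mixed-effects model as the response, with treatment level, treatment length, and probe view as fixed effects.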
Recent evaluations of crack-opening-area in circumferentially cracked pipes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rahman, S.; Brust, F.; Ghadiali, N.
1997-04-01
Leak-before-break (LBB) analyses for circumferentially cracked pipes are currently being conducted in the nuclear industry to justify elimination of pipe whip restraints and jet shields, which are present because of the expected dynamic effects of pipe rupture. The application of the LBB methodology frequently requires calculation of leak rates. The leak rates depend on the crack-opening area of the through-wall crack in the pipe. In addition to LBB analyses, which assume a hypothetical flaw size, there is also interest in the integrity of actual leaking cracks corresponding to current leakage detection requirements in NRC Regulatory Guide 1.45, or for assessing temporary repair of Class 2 and 3 pipes that have leaks, as are being evaluated in ASME Section XI. The objectives of this study were to review, evaluate, and refine current predictive models for performing crack-opening-area analyses of circumferentially cracked pipes. The results from twenty-five full-scale pipe fracture experiments, conducted in the Degraded Piping Program, the International Piping Integrity Research Group Program, and the Short Cracks in Piping and Piping Welds Program, were used to verify the analytical models. Standard statistical analyses were performed to assess quantitatively the accuracy of the predictive models. The evaluation also involved finite element analyses for determining the crack-opening profile often needed to perform leak-rate calculations.
Advances in NMR Spectroscopy for Lipid Oxidation Assessment
USDA-ARS?s Scientific Manuscript database
Although there are many analytical methods developed for the assessment of lipid oxidation, different analytical methods often give different, sometimes even contradictory, results. The reason for this inconsistency is that although there are many different kinds of oxidation products, most methods ...
Fent, Kenneth W.; Gaines, Linda G. Trelles; Thomasen, Jennifer M.; Flack, Sheila L.; Ding, Kai; Herring, Amy H.; Whittaker, Stephen G.; Nylander-French, Leena A.
2009-01-01
We conducted a repeated exposure-assessment survey for task-based breathing-zone concentrations (BZCs) of monomeric and polymeric 1,6-hexamethylene diisocyanate (HDI) during spray painting on 47 automotive spray painters from North Carolina and Washington State. We report here the use of linear mixed modeling to identify the primary determinants of the measured BZCs. Both one-stage (N = 98 paint tasks) and two-stage (N = 198 paint tasks) filter sampling was used to measure concentrations of HDI, uretidone, biuret, and isocyanurate. The geometric mean (GM) level of isocyanurate (1410 μg m−3) was higher than all other analytes (i.e. GM < 7.85 μg m−3). The mixed models were unique to each analyte and included factors such as analyte-specific paint concentration, airflow in the paint booth, and sampler type. The effect of sampler type was corroborated by side-by-side one- and two-stage personal air sampling (N = 16 paint tasks). According to paired t-tests, significantly higher concentrations of HDI (P = 0.0363) and isocyanurate (P = 0.0035) were measured using one-stage samplers. Marginal R2 statistics were calculated for each model; significant fixed effects were able to describe 25, 52, 54, and 20% of the variability in BZCs of HDI, uretidone, biuret, and isocyanurate, respectively. Mixed models developed in this study characterize the processes governing individual polyisocyanate BZCs. In addition, the mixed models identify ways to reduce polyisocyanate BZCs and, hence, protect painters from potential adverse health effects. PMID:19622637
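A hedged sketch of the kind of linear mixed model described above is shown here using statsmodels; the column names, toy data, and random-intercept structure are assumptions for illustration, not the authors' exact specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy task-level data: log-transformed breathing-zone concentration with
# paint concentration, booth airflow, and sampler type as fixed effects,
# and a random intercept per painter.
df = pd.DataFrame({
    "log_bzc":    [2.1, 2.9, 3.4, 1.8, 2.5, 3.1, 2.7, 3.6, 2.2, 3.0],
    "paint_conc": [0.5, 1.2, 2.0, 0.4, 1.0, 1.8, 1.1, 2.2, 0.6, 1.5],
    "airflow":    [0.3, 0.3, 0.1, 0.5, 0.4, 0.2, 0.3, 0.1, 0.4, 0.2],
    "sampler":    ["one", "two"] * 5,
    "painter":    [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
})

model = smf.mixedlm("log_bzc ~ paint_conc + airflow + C(sampler)",
                    df, groups=df["painter"])
print(model.fit().summary())
```

A marginal R2 for the fixed effects, as reported in the abstract, can then be obtained by comparing the variance explained by the fixed-effect predictions with the total outcome variance.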
Li, Maozhong; Du, Yunai; Wang, Qiyue; Sun, Chunmeng; Ling, Xiang; Yu, Boyang; Tu, Jiasheng; Xiong, Yerong
2016-01-01
As essential components of formulations, pharmaceutical excipients directly affect the safety, efficacy, and stability of drugs. Recent safety incidents in which pharmaceutical excipients posed serious threats to patients highlight the necessity of controlling the potential risks; it is therefore indispensable for the industry to establish an effective supply-chain risk assessment system. In this study, an AHP-fuzzy comprehensive evaluation model was developed based on the analytic hierarchy process and fuzzy mathematical theory to quantitatively assess supply-chain risks. Taking polysorbate 80 as the example for model analysis, it was concluded that polysorbate 80 for injection use is a high-risk ingredient in the supply chain compared with that for oral use; to achieve safe application in the clinic, measures should be taken to control and minimize those risks.
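The AHP step of the model can be sketched as follows: criterion weights are taken from the principal eigenvector of a pairwise-comparison matrix and checked for consistency. The 3x3 matrix below is illustrative only, not the study's supply-chain hierarchy.

```python
import numpy as np

A = np.array([[1.0, 3.0, 5.0],        # illustrative pairwise comparisons
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                           # normalised criterion weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)   # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]    # Saaty's random index
print("weights:", np.round(w, 3), "CR =", round(ci / ri, 3))
```

In the full model, these weights would then multiply fuzzy membership scores for each risk factor to produce the comprehensive evaluation.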
Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control †
Reinhart, René Felix; Shareef, Zeeshan; Steil, Jochen Jakob
2017-01-01
Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant’s intrinsic properties and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is an alternative suitable technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose to combine the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms. PMID:28208697
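The hybrid idea argued for above can be sketched in a few lines: an analytical model supplies the bulk of the feed-forward command, and a data-driven model learns only the residual error. The one-degree-of-freedom plant and friction term below are toy assumptions, not either robot platform from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

def analytical_torque(q_acc):
    return 2.0 * q_acc                                  # known inertia model

def true_torque(q_acc, q_vel):
    return 2.0 * q_acc + 0.7 * np.tanh(5.0 * q_vel)     # unmodeled friction

# Learn the residual between the plant and the analytical model from data
X = rng.uniform(-1.0, 1.0, size=(500, 2))               # columns: [q_acc, q_vel]
residual = true_torque(X[:, 0], X[:, 1]) - analytical_torque(X[:, 0])
error_model = RandomForestRegressor(n_estimators=50).fit(X, residual)

# Hybrid feed-forward command = analytical part + learned correction
x = np.array([[0.3, -0.8]])
tau = analytical_torque(x[0, 0]) + error_model.predict(x)[0]
print(f"hybrid feed-forward torque: {tau:.3f}")
```

Because the learner only has to capture what the analytical model misses, it needs far less data than a fully data-driven model of the whole plant.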
Assessment of catchments' flooding potential: a physically-based analytical tool
NASA Astrophysics Data System (ADS)
Botter, G.; Basso, S.; Schirmer, M.
2016-12-01
The assessment of the flooding potential of river catchments is critical in many research and applied fields, ranging from river science and geomorphology to urban planning and the insurance industry. Predicting the magnitude and frequency of floods is key to preventing and mitigating the negative effects of high flows, and has therefore long been a focus of hydrologic research. Here, the recurrence intervals of seasonal flow maxima are estimated through a novel physically based analytical approach, which links the extremal distribution of streamflows to the stochastic dynamics of daily discharge. An analytical expression of the seasonal flood-frequency curve is provided, whose parameters embody climate and landscape attributes of the contributing catchment and can be estimated from daily rainfall and streamflow data. Only one parameter, which expresses catchment saturation prior to rainfall events, needs to be calibrated on the observed maxima. The method has been tested in a set of catchments featuring heterogeneous daily flow regimes. The model is able to reproduce the characteristic shapes of flood-frequency curves emerging in erratic and persistent flow regimes and provides good estimates of seasonal flow maxima in different climatic regions. Performance remains steady when estimating the magnitude of events with return periods longer than the available record. This makes the approach especially valuable for regions affected by data scarcity.
Interlaminar shear stress effects on the postbuckling response of graphite-epoxy panels
NASA Technical Reports Server (NTRS)
Engelstad, S. P.; Knight, N. F., Jr.; Reddy, J. N.
1990-01-01
The influence of shear flexibility on overall postbuckling response was assessed, and transverse shear stress distributions in relation to panel failure were examined. Nonlinear postbuckling results are obtained for finite element models based on classical laminated plate theory and first-order shear deformation theory. Good correlation between test and analysis is obtained. The results presented analytically substantiate the experimentally observed failure mode.
Xiping Wang; James P. Wacker; Robert J. Ross; Brian K. Brashaw; Robert Vatalaro
2005-01-01
This paper describes an effort to develop a global dynamic testing technique for evaluating the overall stiffness of timber bridge superstructures. A forced vibration method was used to measure the natural frequency of single-span timber bridges in the laboratory and field. An analytical model based on simple beam theory was proposed to represent the relationship...
Prediction of Composite Pressure Vessel Failure Location using Fiber Bragg Grating Sensors
NASA Technical Reports Server (NTRS)
Kreger, Steven T.; Taylor, F. Tad; Ortyl, Nicholas E.; Grant, Joseph
2006-01-01
Ten composite pressure vessels were instrumented with fiber Bragg grating sensors in order to assess the strain levels of the vessels under various loading conditions. This paper and presentation discuss the testing methodology and the test results, compare the test results to the analytical model, and present a possible methodology for predicting the failure location and strain level of composite pressure vessels.
Functional data analysis for dynamical system identification of behavioral processes.
Trail, Jessica B; Collins, Linda M; Rivera, Daniel E; Li, Runze; Piper, Megan E; Baker, Timothy B
2014-06-01
Efficient new technology has made it straightforward for behavioral scientists to collect anywhere from several dozen to several thousand dense, repeated measurements on one or more time-varying variables. These intensive longitudinal data (ILD) are ideal for examining complex change over time but present new challenges that illustrate the need for more advanced analytic methods. For example, in ILD the temporal spacing of observations may be irregular, and individuals may be sampled at different times. Also, it is important to assess both how the outcome changes over time and the variation between participants' time-varying processes in order to make inferences about a particular intervention's effectiveness within the population of interest. The methods presented in this article integrate 2 innovative ILD analytic techniques: functional data analysis and dynamical systems modeling. An empirical application is presented using data from a smoking cessation clinical trial. Study participants provided 42 daily assessments of pre-quit and post-quit withdrawal symptoms. Regression splines were used to approximate smooth functions of craving and negative affect and to estimate the variables' derivatives for each participant. We then modeled the dynamics of nicotine craving using standard input-output dynamical systems models. These models provide a more detailed characterization of the post-quit craving process than do traditional longitudinal models, including information regarding the type, magnitude, and speed of the response to an input. The results, in conjunction with standard engineering control theory techniques, could potentially be used by tobacco researchers to develop a more effective smoking intervention.
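The first analytic step described above, smoothing each participant's series with a regression spline and estimating its derivative, can be sketched with scipy; the synthetic 42-day craving series below is illustrative, not trial data.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

days = np.arange(42, dtype=float)
rng = np.random.default_rng(7)
craving = 5.0 * np.exp(-days / 10.0) + rng.normal(0.0, 0.3, days.size)

# Cubic smoothing spline; the smoothing level s is an assumed setting
spline = UnivariateSpline(days, craving, k=3, s=days.size * 0.09)
d_craving = spline.derivative()        # callable first-derivative function

print("fitted craving, day 10:", round(float(spline(10.0)), 3))
print("rate of change, day 10:", round(float(d_craving(10.0)), 3))
```

The fitted functions and their derivatives then serve as inputs and outputs when estimating an input-output dynamical systems model of the post-quit process.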
Modeling landslide recurrence in Seattle, Washington, USA
Salciarini, Diana; Godt, Jonathan W.; Savage, William Z.; Baum, Rex L.; Conversini, Pietro
2008-01-01
To manage the hazard associated with shallow landslides, decision makers need an understanding of where and when landslides may occur. A variety of approaches have been used to estimate the hazard from shallow, rainfall-triggered landslides, such as empirical rainfall threshold methods or probabilistic methods based on historical records. The wide availability of Geographic Information Systems (GIS) and digital topographic data has led to the development of analytic methods for landslide hazard estimation that couple steady-state hydrological models with slope stability calculations. Because these methods typically neglect the transient effects of infiltration on slope stability, results cannot be linked with historical or forecasted rainfall sequences. Estimates of the frequency of conditions likely to cause landslides are critical for quantitative risk and hazard assessments. We present results to demonstrate how a transient infiltration model coupled with an infinite slope stability calculation may be used to assess shallow landslide frequency in the City of Seattle, Washington, USA. A module called CRF (Critical RainFall) for estimating deterministic rainfall thresholds has been integrated in the TRIGRS (Transient Rainfall Infiltration and Grid-based Slope-Stability) model that combines a transient, one-dimensional analytic solution for pore-pressure response to rainfall infiltration with an infinite slope stability calculation. Input data for the extended model include topographic slope, colluvial thickness, initial water-table depth, material properties, and rainfall durations. This approach is combined with a statistical treatment of rainfall using a GEV (General Extreme Value) probabilistic distribution to produce maps showing the shallow landslide recurrence induced, on a spatially distributed basis, as a function of rainfall duration and hillslope characteristics.
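The slope stability half of the TRIGRS-type calculation can be sketched with the infinite-slope factor of safety, in which infiltration raises the pressure head and lowers stability; the Iverson-style form and all parameter values below are illustrative assumptions, not the Seattle calibration.

```python
import math

def factor_of_safety(c, phi_deg, theta_deg, z, psi,
                     gamma_s=20.0e3, gamma_w=9.81e3):
    """Infinite-slope factor of safety with transient pore pressure.
    c   : cohesion (Pa);  phi, theta : friction and slope angles (deg)
    z   : depth of the potential failure surface (m)
    psi : pressure head at depth z (m), rising as rain infiltrates
    gamma_s, gamma_w : soil and water unit weights (N/m^3)
    """
    phi, theta = math.radians(phi_deg), math.radians(theta_deg)
    return (math.tan(phi) / math.tan(theta)
            + (c - psi * gamma_w * math.tan(phi))
            / (gamma_s * z * math.sin(theta) * math.cos(theta)))

for psi in (0.0, 0.5, 1.0):            # progressively wetter conditions
    fs = factor_of_safety(4000.0, 33.0, 35.0, 1.5, psi)
    print(f"psi = {psi:.1f} m -> FS = {fs:.2f}")
```

Coupling this calculation to rainfall statistics, as the abstract describes, turns the critical pressure head (or the rainfall needed to produce it) into a recurrence estimate for landsliding.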
The Meteoroid Fluence at Mars Due to Comet C/2013 A1 (Siding Spring)
NASA Technical Reports Server (NTRS)
Moorhead, A.; Wiegert, P.; Blaauw, R.; McCarty, C.; Kingery, A.; Cooke, W.
2014-01-01
Long-period comet C/2013 A1 (Siding Spring) will experience a close encounter with Mars on 2014 Oct 19. A collision between the comet and the planet has been ruled out, but the comet's coma may envelop Mars and its man-made satellites. By the time of the close encounter, five operational spacecraft will be present near Mars. Characterizing the coma is crucial for assessing the risk posed to these satellites by meteoroid impacts. We present an analytic model of cometary comae that describes the spatial and size distributions of cometary dust and meteoroids. This model correctly reproduces, to within an order of magnitude, the number of impacts recorded by Giotto near 1P/Halley [1] and by Stardust near comet 81P/Wild 2 [2]. Applied to Siding Spring, our model predicts a total particle fluence near Mars of 0.02 particles per square meter. In order to determine the degree to which Siding Spring's coma deviates from a sphere, we perform numerical simulations which take into account both gravitational effects and radiative forces. We take the entire dust component of the coma and tail continuum into account by simulating the ejection and evolution of dust particles from comet Siding Spring. The total number of particles simulated is essentially a free parameter and does not provide a check on the total fluence. Instead, these simulations illustrate the degree to which the coma of Siding Spring deviates from the perfect sphere described by our analytic model (see Figure). We conclude that our analytic model sacrifices less than an order of magnitude in accuracy by neglecting particle dynamics and radiation pressure and is thus adequate for order-of-magnitude fluence estimates. Comet properties may change unpredictably and therefore an analytic coma model that enables quick recalculation of the meteoroid fluence is highly desirable. NASA's Meteoroid Environment Office is monitoring comet Siding Spring and taking measurements of cometary brightness and dust production. We will discuss our coma model and nominal fluence taking the latest observations into account.
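A spherically symmetric coma of the kind the abstract describes has a compact closed form that illustrates why analytic fluence estimates are quick to recompute: steady production Q at ejection speed v gives number density Q/(4*pi*v*r^2), and integrating along a straight flyby at miss distance d yields a fluence of Q/(4*v*d). All three values below are placeholders, not the observed Siding Spring parameters.

```python
def coma_fluence(Q, v, d):
    """Particles per m^2 on a straight path past the nucleus, for a
    steady, spherically symmetric coma: integrating the number density
    n(r) = Q / (4*pi*v*r**2) along the chord gives Q / (4*v*d)."""
    return Q / (4.0 * v * d)

Q = 1.0e10     # assumed production rate of grains above the size cutoff (1/s)
v = 500.0      # assumed grain ejection speed (m/s)
d = 1.3e8      # assumed miss distance (m)
print(f"fluence ~ {coma_fluence(Q, v, d):.2e} particles per m^2")
```

Updating the fluence when new brightness or dust-production measurements arrive then amounts to rescaling Q, which is exactly the flexibility the abstract argues for.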
Temporal Learning Analytics for Adaptive Assessment
ERIC Educational Resources Information Center
Papamitsiou, Zacharoula; Economides, Anastasios A.
2014-01-01
Accurate and early predictions of student performance could significantly affect interventions during teaching and assessment, which gradually could lead to improved learning outcomes. In our research, we seek to identify and formalize temporal parameters as predictors of performance ("temporal learning analytics" or TLA) and examine…
SPECIATION OF ARSENIC IN EXPOSURE ASSESSMENT MATRICES
The speciation of arsenic in water, food and urine is an analytical capability that is an essential part of arsenic risk assessment. The cancer risk associated with arsenic has been the driving force in generating the analytical research in each of these matrices. This presentat...
NASA Astrophysics Data System (ADS)
Gong, YanJun; Wu, ZhenSen; Wang, MingJun; Cao, YunHua
2010-01-01
We propose an analytical model of Doppler power spectra in backscatter from arbitrary rough convex quadric bodies of revolution (whose lateral surface is a quadric) rotating around their axes. Formulated in a global Cartesian coordinate system, the analytical model is suitable for general convex quadric bodies of revolution. Based on this analytical model, the Doppler power spectra of cones, cylinders, paraboloids of revolution, and sphere-cone combinations are derived. We analyze numerically the influence of the geometric parameters, aspect angle, wavelength, and rough-surface reflectance of the objects on the Doppler-broadened spectra. This analytical solution may contribute to laser Doppler velocimetry and to remote sensing of spinning ballistic missiles.
The Purpose of Analytical Models from the Perspective of a Data Provider.
ERIC Educational Resources Information Center
Sheehan, Bernard S.
The purpose of analytical models is to reduce complex institutional management problems and situations to simpler proportions and compressed time frames so that human skills of decision makers can be brought to bear most effectively. Also, modeling cultivates the art of management by forcing explicit and analytical consideration of important…
NASA Technical Reports Server (NTRS)
Mason, P. W.; Harris, H. G.; Zalesak, J.; Bernstein, M.
1974-01-01
The methods and procedures used in the analysis and testing of the scale model are reported together with the correlation of the analytical and experimental results. The model, the NASTRAN finite element analysis, and results are discussed. Tests and analytical investigations are also reported.
Robinson, Mark R.; Ward, Kenneth J.; Eaton, Robert P.; Haaland, David M.
1990-01-01
The characteristics of a biological fluid sample having an analyte are determined from a model constructed from plural known biological fluid samples. The model is a function of the concentration of materials in the known fluid samples as a function of absorption of wideband infrared energy. The wideband infrared energy is coupled to the analyte containing sample so there is differential absorption of the infrared energy as a function of the wavelength of the wideband infrared energy incident on the analyte containing sample. The differential absorption causes intensity variations of the infrared energy incident on the analyte containing sample as a function of sample wavelength of the energy, and concentration of the unknown analyte is determined from the thus-derived intensity variations of the infrared energy as a function of wavelength from the model absorption versus wavelength function.
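The calibration scheme in this record, building a model from known samples that relates absorption spectra to concentration and then inverting it for an unknown sample, is commonly realized with multivariate methods such as partial least squares; the sketch below uses synthetic spectra and illustrates the general approach, not the patent's specific procedure.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
n_train, n_wl = 40, 200
conc = rng.uniform(0.5, 5.0, n_train)               # known concentrations

# Synthetic spectra: a Gaussian absorption band scaled by concentration
band = np.exp(-0.5 * ((np.arange(n_wl) - 80) / 10.0) ** 2)
spectra = conc[:, None] * band + rng.normal(0.0, 0.02, (n_train, n_wl))

pls = PLSRegression(n_components=3).fit(spectra, conc)

unknown = 2.7 * band + rng.normal(0.0, 0.02, n_wl)  # "unknown" sample
print(f"predicted concentration: {pls.predict(unknown[None, :])[0, 0]:.2f}")
```

The fit over known samples corresponds to the model built from the known biological fluid samples, and the prediction step to recovering the analyte concentration from the measured intensity variations.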
Bias Assessment of General Chemistry Analytes using Commutable Samples.
Koerbin, Gus; Tate, Jillian R; Ryan, Julie; Jones, Graham Rd; Sikaris, Ken A; Kanowski, David; Reed, Maxine; Gill, Janice; Koumantakis, George; Yen, Tina; St John, Andrew; Hickman, Peter E; Simpson, Aaron; Graham, Peter
2014-11-01
Harmonisation of reference intervals for routine general chemistry analytes has been a goal for many years, but analytical bias may prevent this harmonisation. To determine whether analytical bias is present when comparing methods, commutable samples (samples that have the same properties as the clinical samples routinely analysed) should be used as reference samples to eliminate the possibility of matrix effects. The use of commutable samples has improved the identification of unacceptable analytical performance in the Netherlands and Spain. The International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) has undertaken a pilot study using commutable samples in an attempt not only to determine country-specific reference intervals but to make them comparable between countries. Australia and New Zealand, through the Australasian Association of Clinical Biochemists (AACB), have also undertaken an assessment of analytical bias using commutable samples and determined that, of the 27 general chemistry analytes studied, 19 showed between-method biases sufficiently small as not to prevent harmonisation of reference intervals. Application of evidence-based approaches, including the determination of analytical bias using commutable material, is necessary when seeking to harmonise reference intervals.
NASA Astrophysics Data System (ADS)
Johnston, Marty; Jalkio, Jeffrey
2013-04-01
By the time students reach intermediate-level physics courses, they have been exposed to a broad set of analytical, experimental, and computational skills. However, their ability to independently integrate these skills into the study of a physical system is often weak. To address this weakness and assess their understanding of the underlying physical concepts, we have introduced laboratory homework into lecture-based, junior-level theoretical mechanics and electromagnetics courses. A laboratory homework set replaces a traditional one and emphasizes the analysis of a single system. In an exercise, students use analytical and computational tools to predict the behavior of a system and design a simple measurement to test their model. The laboratory portion of the exercises is straightforward, and the emphasis is on concept integration and application. The short student reports we collect have revealed misconceptions that were not apparent in reviewing traditional homework and test problems. Work continues on refining the current problems and expanding the problem sets.
Application of analytic hierarchy process in a waste treatment technology assessment in Mexico.
Taboada-González, Paul; Aguilar-Virgen, Quetzalli; Ojeda-Benítez, Sara; Cruz-Sotelo, Samantha
2014-09-01
The high per capita generation of solid waste and the environmental problems in major rural communities of Ensenada, Baja California, have prompted authorities to seek alternatives for waste treatment. In the absence of a selection methodology, three waste treatment technologies with energy recovery (an anaerobic digester, a downdraft gasifier, and a plasma gasifier) were evaluated, taking the broader social, political, economic, and environmental issues into consideration. Using the scientific literature as a baseline, interviews with experts, decision makers, and the community, together with waste stream studies, were used to construct a hierarchy that was evaluated by the analytic hierarchy process. In terms of the criteria, judgments, and assumptions made in the model, the anaerobic digester was found to have the highest rating and should consequently be selected as the waste treatment technology for this area. The study results showed low sensitivity, so alternative scenarios were not considered. The methodology developed in this study may be useful for other governments that wish to assess technologies for selecting waste treatment options.
NASA Technical Reports Server (NTRS)
Aljabri, Abdullah S.
1988-01-01
High speed subsonic transports powered by advanced propellers provide significant fuel savings compared to turbofan powered transports. Unfortunately, however, propfans must operate in aircraft-induced nonuniform flow fields which can lead to high blade cyclic stresses, vibration and noise. To optimize the design and installation of these advanced propellers, therefore, detailed knowledge of the complex flow field is required. As part of the NASA Propfan Test Assessment (PTA) program, a 1/9 scale semispan model of the Gulfstream II propfan test-bed aircraft was tested in the NASA-Lewis 8 x 6 supersonic wind tunnel to obtain propeller flow field data. Detailed radial and azimuthal surveys were made to obtain the total pressure in the flow and the three components of velocity. Data was acquired for Mach numbers ranging from 0.6 to 0.85. Analytical predictions were also made using a subsonic panel method, QUADPAN. Comparison of wind-tunnel measurements and analytical predictions show good agreement throughout the Mach range.
Evaluation of seismic spatial interaction effects through an impact testing program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, B.D.; Driesen, G.E.
Assessing the consequences of non-seismically qualified objects falling and striking essential, seismically qualified objects is an analytically difficult problem. Analytical solutions to impact problems are conservative and available only for simple situations. In a nuclear facility, the numerous "sources" and "targets" requiring evaluation often have complex geometric configurations, which makes calculations and computer modeling difficult. Few industry or regulatory rules are available for this specialized assessment. A drop test program was recently conducted to "calibrate" the judgment of seismic qualification engineers who perform interaction evaluations and to further develop seismic interaction criteria. Impact tests on varying combinations of sources and targets were performed by dropping the sources from various heights onto targets that were connected to instruments. This paper summarizes the scope, test configurations, and some results of the drop test program. Force and acceleration time history data and general observations are presented on the ruggedness of various targets when subjected to impacts from different types of sources.
Maly, Friedrich E; Fried, Roman; Spannagl, Michael
2014-01-01
INSTAND e.V. has provided Molecular Genetics Multi-Analyte EQA schemes since 2006. EQA participation and performance were assessed from 2006 - 2012. From 2006 to 2012, the number of analytes in the Multi-Analyte EQA schemes rose from 17 to 53. Total number of results returned rose from 168 in January 2006 to 824 in August 2012. The overall error rate was 1.40 +/- 0.84% (mean +/- SD, N = 24 EQA dates). From 2006 to 2012, no analyte was reported 100% correctly. Individual participant performance was analysed for one common analyte, Lactase (LCT) T-13910C. From 2006 to 2012, 114 laboratories participated in this EQA. Of these, 10 laboratories (8.8%) reported at least one wrong result during the whole observation period. All laboratories reported correct results after their failure incident. In spite of the low overall error rate, EQA will continue to be important for Molecular Genetics.
Automated dynamic analytical model improvement for damped structures
NASA Technical Reports Server (NTRS)
Fuh, J. S.; Berman, A.
1985-01-01
A method is described for improving a linear, nonproportionally damped analytical model of a structure. The procedure finds the smallest changes in the analytical model such that the improved model matches the measured modal parameters. Features of the method are: (1) the ability to properly treat the complex-valued modal parameters of a damped system; (2) applicability to realistically large structural models; and (3) computational efficiency, achieved without eigensolutions or inversion of a large matrix.
Application of multiplex arrays for cytokine and chemokine profiling of bile.
Kemp, Troy J; Castro, Felipe A; Gao, Yu-Tang; Hildesheim, Allan; Nogueira, Leticia; Wang, Bing-Sheng; Sun, Lu; Shelton, Gloriana; Pfeiffer, Ruth M; Hsing, Ann W; Pinto, Ligia A; Koshiol, Jill
2015-05-01
Gallbladder disease is highly related to inflammation, but the inflammatory processes are not well understood. Bile provides a direct substrate for assessing the local inflammatory response that develops in the gallbladder. To assess the reproducibility of measuring inflammatory markers in bile, we designed a methods study of 69 multiplexed immune-related markers measured in bile obtained from gallstone patients. To evaluate assay performance, a total of 18 bile samples were tested twice within the same plate for each analyte, and the 18 bile samples were tested on two different days for each analyte. We used the following performance parameters: detectability, coefficient of variation (CV), intraclass correlation coefficient (ICC), and percent agreement (concordance among replicate measures above and below the detection limit). Furthermore, we examined the association of analyte levels with gallstone characteristics such as type, number, and size. All but 3 analytes (Stem Cell Factor, SCF; Thrombopoietin, TPO; sIL-1RI) were detectable in bile. 52 of 69 (75.4%) analytes had detectable levels for at least 50% of the subjects tested. The within-plate CVs were ⩽25% for 53 of 66 (80.3%) detectable analytes, and across-plate CVs were ⩽25% for 32 of 66 (48.5%) detectable analytes. Moreover, 64 of 66 (97.0%) analytes had ICC values of at least 0.8. Lastly, the percent agreement between replicates was high for all of the analytes (median: within plate, 97.2%; across plate, 97.2%). In exploratory analyses, we assessed analyte levels by gallstone characteristics and found that levels of several analytes decreased with increasing size of the largest gallstone per patient. Our data suggest that multiplex assays can be used to reliably measure cytokines and chemokines in bile. In addition, gallstone size was inversely related to the levels of select analytes, which may aid in identifying critical pathways and mechanisms associated with the pathogenesis of gallbladder diseases.
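Two of the performance parameters used above, the replicate CV and the ICC, can be computed for one analyte as sketched below; the 18 synthetic value pairs stand in for real duplicate bile measurements, and a simple one-way ICC is used for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
truth = rng.lognormal(3.0, 0.8, 18)                 # 18 subjects
rep1 = truth * rng.normal(1.0, 0.08, 18)            # duplicate measurements
rep2 = truth * rng.normal(1.0, 0.08, 18)
pairs = np.stack([rep1, rep2], axis=1)

# Within-pair coefficient of variation (%)
cv = pairs.std(axis=1, ddof=1) / pairs.mean(axis=1) * 100.0
print(f"median within-pair CV: {np.median(cv):.1f}%")

# One-way random-effects ICC from subject and error mean squares
n, k = pairs.shape
ms_subj = k * pairs.mean(axis=1).var(ddof=1)
ms_err = ((pairs - pairs.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
icc = (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err)
print(f"ICC ~ {icc:.2f}")
```

With the multiplicative 8% replicate noise assumed here, the ICC comes out high, consistent with the strong reproducibility the study reports.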
Aerodynamic parameter studies and sensitivity analysis for rotor blades in axial flight
NASA Technical Reports Server (NTRS)
Chiu, Y. Danny; Peters, David A.
1991-01-01
An analytical capability is offered for aerodynamic parametric studies and sensitivity analyses of rotary wings in axial flight, using a 3-D undistorted wake model in curved lifting-line theory. The governing equations are solved by both the Multhopp interpolation technique and the vortex lattice method. The singularity from the bound vortices is eliminated through Hadamard's finite-part concept. Good numerical agreement is found between the two analytical methods and finite difference methods. Parametric studies were made to assess the effects of several shape variables on aerodynamic loads. It is found, for example, that a rotor blade with out-of-plane and in-plane curvature can theoretically increase lift in the inboard and outboard regions, respectively, without introducing additional induced drag.
HOST turbine heat transfer program summary
NASA Technical Reports Server (NTRS)
Gladden, Herbert J.; Simoneau, Robert J.
1988-01-01
The objectives of the HOST Turbine Heat Transfer subproject were to obtain a better understanding of the physics of the aerothermodynamic phenomena and to assess and improve the analytical methods used to predict the flow and heat transfer in high temperature gas turbines. At the time the HOST project was initiated, an across-the-board improvement in turbine design technology was needed. A building-block approach was utilized and the research ranged from the study of fundamental phenomena and modeling to experiments in simulated real engine environments. Experimental research accounted for approximately 75 percent of the funding with the remainder going to analytical efforts. A healthy government/industry/university partnership, with industry providing almost half of the research, was created to advance the turbine heat transfer design technology base.
Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Y.; Keller, J.; Wallen, R.
2015-02-01
Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.
NASA Technical Reports Server (NTRS)
Lyle, Karen H.; Fasanella, Edwin L.; Melis, Matthew; Carney, Kelly; Gabrys, Jonathan
2004-01-01
The Space Shuttle Columbia Accident Investigation Board (CAIB) made several recommendations for improving the NASA Space Shuttle Program. An extensive experimental and analytical program has been developed to address two recommendations related to structural impact analysis. The objective of the present work is to demonstrate the application of probabilistic analysis to assess the effect of uncertainties on debris impacts on Space Shuttle Reinforced Carbon-Carbon (RCC) panels. The probabilistic analysis is used to identify the material modeling parameters controlling the uncertainty. A comparison of the finite element results with limited experimental data provided confidence that the simulations were adequately representing the global response of the material. Five input parameters were identified as significantly controlling the response.
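As a rough illustration of how a probabilistic analysis can rank the input parameters controlling response uncertainty, the sketch below samples inputs, evaluates a stand-in response function, and computes standardized regression coefficients; the surrogate and parameter names are invented for illustration and are not the RCC material model or the study's five parameters.

```python
import numpy as np

# Stand-in for an expensive impact simulation: peak response as a function
# of five material-model inputs (names hypothetical, not the RCC parameters).
def surrogate(E, ft, fc, rho, xi):
    return 0.8 * E + 2.5 * ft - 0.3 * fc + 0.1 * rho + 0.05 * xi

rng = np.random.default_rng(1)
n = 5000
X = rng.uniform(0.9, 1.1, size=(n, 5))          # +/-10% around nominal values
y = surrogate(*X.T) + rng.normal(0.0, 0.01, n)  # response with a little noise

# Standardized regression coefficients rank inputs by their share of the
# output uncertainty; |SRC| near zero means the input hardly matters.
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
for name, s in zip(["E", "ft", "fc", "rho", "xi"], src):
    print(f"{name:4s} SRC = {s:+.2f}")
```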
Influence of Tension Stiffening on the Flexural Stiffness of Reinforced Concrete Circular Sections
Morelli, Francesco; Amico, Cosimo; Salvatore, Walter; Squeglia, Nunziante; Stacul, Stefano
2017-01-01
Within this paper, the assessment of tension stiffening effects on a reinforced concrete element with a circular section subjected to axial and bending loads is presented. To this purpose, an enhancement of an analytical model already available in the current technical literature is proposed. The accuracy of the enhanced method is assessed by comparing experimental results from past research with the numerical ones obtained by the model. Finally, a parametric study is carried out to examine the influence of axial compressive force on the flexural stiffness of reinforced concrete elements characterized by a circular section, comparing the secant stiffness evaluated at yielding and at maximum resistance, with and without the effects of tension stiffening. PMID:28773028
Variations on Debris Disks. IV. An Improved Analytical Model for Collisional Cascades
NASA Astrophysics Data System (ADS)
Kenyon, Scott J.; Bromley, Benjamin C.
2017-04-01
We derive a new analytical model for the evolution of a collisional cascade in a thin annulus around a single central star. In this model, the size of the largest object, r_max, changes with time as r_max ∝ t^(−γ), with γ ≈ 0.1-0.2. Compared to standard models where r_max is constant in time, this evolution results in a more rapid decline of M_d, the total mass of solids in the annulus, and L_d, the luminosity of small particles in the annulus: M_d ∝ t^(−(γ+1)) and L_d ∝ t^(−(γ/2+1)). We demonstrate that the analytical model provides an excellent match to a comprehensive suite of numerical coagulation simulations for annuli at 1 au and at 25 au. If the evolution of real debris disks follows the predictions of the analytical or numerical models, the observed luminosities for evolved stars require up to a factor of two more mass than predicted by previous analytical models.
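The quoted scalings are easy to evaluate directly. The sketch below encodes them exactly as stated in the abstract; the normalization constants and reference epoch t0 are placeholders, since the abstract fixes only the exponents.

```python
def M_d(t, t0=1e6, M0=1.0, gamma=0.15):
    """Total solid mass, M_d ∝ t^-(gamma+1), for an evolving r_max."""
    return M0 * (t / t0) ** -(gamma + 1.0)

def L_d(t, t0=1e6, L0=1.0, gamma=0.15):
    """Small-particle luminosity, L_d ∝ t^-(gamma/2+1)."""
    return L0 * (t / t0) ** -(gamma / 2.0 + 1.0)

# Extra decline after 1 Gyr relative to the fixed-r_max case (M_d, L_d ∝ t^-1):
t = 1e9
print(M_d(t) / (t / 1e6) ** -1.0)   # ~0.35 for gamma = 0.15
print(L_d(t) / (t / 1e6) ** -1.0)   # ~0.60
```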
NASA Astrophysics Data System (ADS)
Zhang, Zhen; Xia, Changliang; Yan, Yan; Geng, Qiang; Shi, Tingna
2017-08-01
Due to the complicated rotor structure and the nonlinear saturation of rotor bridges, it is difficult to build a fast and accurate analytical field calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on Kirchhoff's laws, while the field in the stator slots, slot openings, and air gap is calculated by the subdomain technique based on Maxwell's equations. To solve the whole field distribution of multilayer IPM machines, coupled boundary conditions on the rotor surface are deduced to couple the rotor MEC with the analytical field distribution of the stator slots, slot openings, and air gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF), and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it offers faster modeling, lower computational resource usage, and shorter computation time, while achieving comparable accuracy. The analytical model is applicable to the open-circuit field calculation of multilayer IPM machines of any size and pole/slot number combination.
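The MEC half of such a hybrid model reduces to a sparse linear system: Kirchhoff's flux-conservation law at each rotor node, with branch permeances and magnet flux sources. The toy three-node network below shows the assembly and solve pattern only; the topology and values are invented, not the paper's rotor mesh, and the subdomain coupling is omitted.

```python
import numpy as np

# Branch permeances (Wb per A-turn) and nodal magnet flux sources (Wb).
P_branch = {(0, 1): 2.0e-6, (1, 2): 1.5e-6, (0, 2): 0.5e-6}
phi_src = np.array([1.0e-3, 0.0, -1.0e-3])

# Assemble the node permeance matrix (magnetic analogue of a nodal
# admittance matrix) from Kirchhoff's flux-conservation law.
n = 3
P = np.zeros((n, n))
for (i, j), p in P_branch.items():
    P[i, i] += p; P[j, j] += p
    P[i, j] -= p; P[j, i] -= p

# Ground one node to fix the magnetic scalar potential reference.
P[n - 1, :] = 0.0; P[n - 1, n - 1] = 1.0
b = phi_src.copy(); b[n - 1] = 0.0

u = np.linalg.solve(P, b)                    # magnetic scalar potentials
flux_01 = P_branch[(0, 1)] * (u[0] - u[1])   # branch flux (Hopkinson's law)
print(u, flux_01)
```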
Modeling Biodegradation and Reactive Transport: Analytical and Numerical Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Y; Glascoe, L
The computational modeling of the biodegradation of contaminated groundwater systems, accounting for biochemical reactions coupled to contaminant transport, is a valuable tool both for the field engineer/planner with limited computational resources and for the expert computational researcher less constrained by time and computer power. Several analytical and numerical computer models have been, and are being, developed to cover the spectrum of computational demands put forth by users. Generally, analytical models provide rapid and convenient screening tools that run on very limited computational power, while numerical models can provide more detailed information, with consequent requirements of greater computational time and effort. While these analytical and numerical computer models can provide accurate and adequate information to produce defensible remediation strategies, decisions based on inadequate modeling output or on over-analysis can have costly and risky consequences. In this chapter we consider both analytical and numerical modeling approaches to biodegradation and reactive transport. Both approaches are discussed and analyzed in terms of achieving bioremediation goals, recognizing that there is always a tradeoff between computational cost and the resolution of simulated systems.
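As a concrete instance of the analytical screening-tool class discussed here, the sketch below implements the classic Ogata-Banks solution for one-dimensional advective-dispersive transport from a continuous source; it is a generic textbook model, not a routine from any of the codes named in the chapter, and the parameter values are placeholders.

```python
import numpy as np
from scipy.special import erfc

def ogata_banks(x, t, v, D, C0=1.0):
    """Ogata-Banks solution of dC/dt = D d2C/dx2 - v dC/dx with C(0, t) = C0:
    relative concentration downgradient of a continuous contaminant source."""
    a = (x - v * t) / (2.0 * np.sqrt(D * t))
    b = (x + v * t) / (2.0 * np.sqrt(D * t))
    return 0.5 * C0 * (erfc(a) + np.exp(v * x / D) * erfc(b))

# Relative concentration 50 m downgradient after 5 years
# (seepage velocity v in m/day, dispersion coefficient D in m^2/day).
print(ogata_banks(x=50.0, t=5 * 365.0, v=0.05, D=0.5))   # ~0.9
```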
Generalisability in economic evaluation studies in healthcare: a review and case studies.
Sculpher, M J; Pang, F S; Manca, A; Drummond, M F; Golder, S; Urdahl, H; Davies, L M; Eastwood, A
2004-12-01
To review, and to develop further, the methods used to assess and to increase the generalisability of economic evaluation studies. Methodological studies relating to economic evaluation in healthcare were searched in a range of electronic databases, including PREMEDLINE, MEDLINE, EMBASE and EconLit, supplemented by manual searches of key journals. The case study of a decision-analytic model involved highlighting specific features of previously published economic studies related to generalisability and location-related variability; the case study involving cost-effectiveness analyses was based on the secondary analysis of three economic studies using data from randomised trials. The factor most frequently cited as generating variability in economic results between locations was the unit costs associated with particular resources. In the context of studies based on the analysis of patient-level data, regression analysis has been advocated as a means of examining variability in economic results across locations. These methods have generally accepted that some components of resource use and outcomes are exchangeable across locations. Recent studies have also explored, in cost-effectiveness analysis, the use of tests of heterogeneity similar to those used in the clinical evaluation of trials. The decision-analytic model has been the main means by which cost-effectiveness has been adapted from trial to non-trial locations. Most models have focused on changes to the cost side of the analysis, but it is clear that the effectiveness side may also need to be adapted between locations. There have been weaknesses in some aspects of the reporting in applied cost-effectiveness studies. These may limit decision-makers' ability to judge the relevance of a study to their specific situations. The case study demonstrated the potential value of multilevel modelling (MLM). Where clustering exists by location (e.g. centre or country), MLM can facilitate correct estimates of the uncertainty in cost-effectiveness results and also provides a means of estimating location-specific cost-effectiveness. The review of applied economic studies based on decision-analytic models showed that few studies were explicit about their target decision-maker(s)/jurisdictions. The studies in the review generally made more effort to ensure that their cost inputs were specific to their target jurisdiction than their effectiveness parameters. Standard sensitivity analysis was the main way of dealing with uncertainty in the models, although few studies looked explicitly at variability between locations. The modelling case study illustrated how effectiveness and cost data can be made location-specific. In particular, on the effectiveness side, the example showed the separation of location-specific baseline events from pooled estimates of relative treatment effect, where the latter are assumed exchangeable across locations. A large number of factors are mentioned in the literature that might be expected to generate variation in the cost-effectiveness of healthcare interventions across locations. Several papers have demonstrated differences in the volume and cost of resource use between locations, but few studies have looked at variability in outcomes. In applied trial-based cost-effectiveness studies, few studies provide sufficient evidence for decision-makers to establish the relevance of the results or to adjust them to their location of interest.
Very few studies utilised statistical methods formally to assess the variability in results between locations. In applied economic studies based on decision models, most studies either stated their target decision-maker/jurisdiction or provided sufficient information from which this could be inferred. There was a greater tendency to ensure that cost inputs were specific to the target jurisdiction than clinical parameters. Methods to assess generalisability and variability in economic evaluation studies have been discussed extensively in the literature relating to both trial-based and modelling studies. Regression-based methods are likely to offer a systematic approach to quantifying variability in patient-level data. In particular, MLM has the potential to facilitate estimates of cost-effectiveness which both reflect the variation in costs and outcomes between locations and enable the consistency of cost-effectiveness estimates between locations to be assessed directly. Decision-analytic models will retain an important role in adapting the results of cost-effectiveness studies between locations. Recommendations for further research include: the development of methods of evidence synthesis which model the exchangeability of data across locations and allow for the additional uncertainty in this process; assessment of alternative approaches to specifying multilevel models for the analysis of cost-effectiveness data alongside multilocation randomised trials; identification of a range of appropriate covariates relating to locations (e.g. hospitals) in multilevel models; and further assessment of the role of econometric methods (e.g. selection models) for cost-effectiveness analysis alongside observational datasets and for increasing the generalisability of randomised trials.
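A minimal sketch of the multilevel approach advocated above, fitting a random-intercept model to simulated multicentre cost data; the variable names and effect sizes are invented, not the review's case-study data, and statsmodels' MixedLM stands in for whatever software an analyst prefers.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated multicentre trial: a common treatment effect on cost plus
# centre-level clustering (random intercepts). All values are illustrative.
rng = np.random.default_rng(42)
centre = np.repeat(np.arange(20), 30)            # 20 centres, 30 patients each
treat = rng.integers(0, 2, centre.size)
u = rng.normal(0.0, 400.0, 20)[centre]           # between-centre variation
cost = 2000.0 + 350.0 * treat + u + rng.normal(0.0, 600.0, centre.size)
df = pd.DataFrame({"cost": cost, "treat": treat, "centre": centre})

# Random-intercept multilevel model: standard errors that respect the
# clustering, plus a route to location-specific cost estimates via the
# estimated random effects.
fit = smf.mixedlm("cost ~ treat", df, groups=df["centre"]).fit()
print(fit.summary())
```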
Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R.; Stewart, Walter F.; Malin, Bradley; Sun, Jimeng
2014-01-01
Objective: Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: 1) cohort construction, 2) feature construction, 3) cross-validation, 4) feature selection, and 5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. Methods: To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which 1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, 2) schedules the tasks in a topological ordering of the graph, and 3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. Results: We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000-patient data set in 3 hours in parallel, compared to 9 days if run sequentially. Conclusion: This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines that are specialized for health data researchers. PMID:24370496
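The scheduling idea in steps 1)-3) maps directly onto a topological sort plus a worker pool. The sketch below is a toy reconstruction of that pattern using only the Python standard library; the task names and runner are placeholders, and PARAMO itself is built on Map-Reduce rather than threads.

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED
from graphlib import TopologicalSorter

# Pipeline dependency graph: each task maps to the set of its prerequisites.
deps = {
    "features": {"cohort"},
    "cv_split": {"features"},
    "select":   {"cv_split"},
    "classify": {"select"},
}

def run(task):          # stand-in for real pipeline work
    print("running", task)

ts = TopologicalSorter(deps)
ts.prepare()
with ThreadPoolExecutor(max_workers=4) as pool:
    pending = {}
    while ts.is_active():
        for task in ts.get_ready():       # every task whose prerequisites are done
            pending[pool.submit(run, task)] = task
        done, _ = wait(pending, return_when=FIRST_COMPLETED)
        for fut in done:
            ts.done(pending.pop(fut))     # unlock dependent tasks
```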
Scattering from phase-separated vesicles. I. An analytical form factor for multiple static domains
Heberle, Frederick A.; Anghel, Vinicius N. P.; Katsaras, John
2015-08-18
This is the first in a series of studies considering elastic scattering from laterally heterogeneous lipid vesicles containing multiple domains. Unique among biophysical tools, small-angle neutron scattering can in principle give detailed information about the size, shape and spatial arrangement of domains. A general theory for scattering from laterally heterogeneous vesicles is presented, and the analytical form factor for static domains with arbitrary spatial configuration is derived, including a simplification for uniformly sized round domains. The validity of the model, including series truncation effects, is assessed by comparison with simulated data obtained from a Monte Carlo method. Several aspects of the analytical solution for scattering intensity are discussed in the context of small-angle neutron scattering data, including the effect of varying domain size and number, as well as solvent contrast. Finally, the analysis indicates that effects of domain formation are most pronounced when the vesicle's average scattering length density matches that of the surrounding solvent.
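For orientation, the sketch below implements the familiar homogeneous-sphere form factor, the limiting case any laterally heterogeneous vesicle model must recover when domain and background contrasts coincide; the full multi-domain form factor derived in the paper is considerably more involved and is not reproduced here.

```python
import numpy as np

def sphere_form_factor(q, R):
    """Normalized form factor P(q) of a homogeneous sphere of radius R."""
    qR = np.asarray(q) * R
    amp = 3.0 * (np.sin(qR) - qR * np.cos(qR)) / qR**3   # scattering amplitude
    return amp**2

q = np.linspace(1e-3, 0.5, 500)        # momentum transfer, 1/Angstrom
P = sphere_form_factor(q, R=300.0)     # ~300 Angstrom vesicle-sized sphere
print(P[:3])
```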
High-Throughput Multi-Analyte Luminex Profiling Implicates Eotaxin-1 in Ulcerative Colitis
Coburn, Lori A.; Horst, Sara N.; Chaturvedi, Rupesh; Brown, Caroline T.; Allaman, Margaret M.; Scull, Brooks P.; Singh, Kshipra; Piazuelo, M. Blanca; Chitnavis, Maithili V.; Hodges, Mallary E.; Rosen, Michael J.; Williams, Christopher S.; Slaughter, James C.; Beaulieu, Dawn B.; Schwartz, David A.; Wilson, Keith T.
2013-01-01
Accurate and high-throughput technologies are needed for identification of new therapeutic targets and for optimizing therapy in inflammatory bowel disease. Our aim was to assess multi-analyte protein-based assays of cytokines/chemokines using Luminex technology. We have reported that Luminex-based profiling was useful in assessing response to L-arginine therapy in the mouse model of dextran sulfate sodium colitis. Therefore, we studied prospectively collected samples from ulcerative colitis (UC) patients and control subjects. Serum, colon biopsies, and clinical information were obtained from subjects undergoing colonoscopy for evaluation of UC or for non-UC indications. In total, 38 normal controls and 137 UC cases completed the study. Histologic disease severity and the Mayo Disease Activity Index (DAI) were assessed. Serum and colonic tissue cytokine/chemokine profiles were measured by Luminex-based multiplex testing of 42 analytes. Only eotaxin-1 and G-CSF were increased in serum of patients with histologically active UC vs. controls. While 13 cytokines/chemokines were increased in active UC vs. controls in tissues, only eotaxin-1 was increased in all levels of active disease in both serum and tissue. In tissues, eotaxin-1 correlated with the DAI and with eosinophil counts. Increased eotaxin-1 levels were confirmed by real-time PCR. Tissue eotaxin-1 levels were also increased in experimental murine colitis induced by dextran sulfate sodium, oxazolone, or Citrobacter rodentium, but not in murine Helicobacter pylori infection. Our data implicate eotaxin-1 as an etiologic factor and therapeutic target in UC, and indicate that Luminex-based assays may be useful to assess IBD pathogenesis and to select patients for anti-cytokine/chemokine therapies. PMID:24367513
Park, In-Sun; Park, Jae-Woo
2011-01-30
Total petroleum hydrocarbon (TPH) is an important environmental contaminant that is toxic to human and environmental receptors. However, human health risk assessment for petroleum, oil, and lubricant (POL)-contaminated sites is especially challenging because TPH is not a single compound, but rather a mixture of numerous substances. To address this concern, this study recommends a new human health risk assessment strategy for POL-contaminated sites. The strategy is based on a newly modified TPH fractionation method and includes an improved analytical protocol. The proposed TPH fractionation method is composed of ten fractions (aliphatic and aromatic EC8-10, EC10-12, EC12-16, EC16-22, and EC22-40). Physicochemical properties and toxicity values of each fraction were newly defined in this study. A stepwise ultrasonication-based analytical process was established to measure the TPH fractions. Analytical results were compared with those from the TPH Criteria Working Group (TPHCWG) Direct Method. Better analytical efficiencies for TPH and for the aliphatic and aromatic fractions were achieved when contaminated soil samples were analyzed with the new analytical protocol. Finally, a human health risk assessment was performed based on the developed tiered risk assessment framework. Results showed that a detailed quantitative risk assessment should be conducted to determine scientifically and economically appropriate cleanup target levels, although the phase II process is useful for determining the potency of human health risks posed by POL contamination. Copyright © 2010 Elsevier B.V. All rights reserved.
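Downstream of the fractionation, the risk calculation itself is a sum of fraction-specific hazard quotients. The sketch below shows that bookkeeping with invented concentrations, reference doses, and exposure factors; the study's own physicochemical properties and toxicity values are not reproduced here.

```python
# Screening-level soil-ingestion hazard index: HI = sum over fractions of
# HQ = chronic daily intake / reference dose. All values are placeholders.
fractions = {                     # (soil conc, mg/kg), (oral RfD, mg/kg-day)
    "aliphatic EC8-10":  (120.0, 0.1),
    "aliphatic EC10-12": (300.0, 0.1),
    "aromatic EC10-12":  (45.0, 0.04),
    "aromatic EC16-22":  (80.0, 0.03),
}
IR = 100e-6          # soil ingestion rate, kg/day
BW = 70.0            # body weight, kg
EF = 350.0 / 365.0   # exposure frequency, unitless

hi = 0.0
for name, (conc, rfd) in fractions.items():
    cdi = conc * IR * EF / BW          # chronic daily intake, mg/kg-day
    hq = cdi / rfd
    hi += hq
    print(f"{name:18s} HQ = {hq:.2e}")
print(f"hazard index = {hi:.2e} ({'above' if hi > 1 else 'below'} the screening level of 1)")
```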
Inspection of the Math Model Tools for On-Orbit Assessment of Impact Damage Report
NASA Technical Reports Server (NTRS)
Harris, Charles E.; Raju, Ivatury S.; Piascik, Robert S.
2007-01-01
In the spring of 2005, the NASA Engineering and Safety Center (NESC) was engaged by the Space Shuttle Program (SSP) to peer review the suite of analytical tools being developed to support the determination of impact and damage tolerance of the Orbiter Thermal Protection Systems (TPS). The NESC formed an independent review team with the core disciplines of materials, flight sciences, structures, mechanical analysis, and thermal analysis. The Math Model Tools reviewed included damage prediction and stress analysis, aeroheating analysis, and thermal analysis tools. Some tools are physics-based and others are empirically derived. Each tool was created for a specific use and timeframe, including certification and real-time pre-launch assessments. In addition, the tools are used together in an integrated strategy for assessing the ramifications of impact damage to tile and RCC. The NESC team conducted a peer review of the engineering data package for each Math Model Tool. This report contains a summary of the team's observations and recommendations from these reviews.
Drift-based scrape-off particle width in X-point geometry
NASA Astrophysics Data System (ADS)
Reiser, D.; Eich, T.
2017-04-01
The Goldston heuristic estimate of the scrape-off layer width (Goldston 2012 Nucl. Fusion 52 013009) is reconsidered using a fluid description of the plasma dynamics. The basic ingredient is the inclusion of a compressible diamagnetic drift for the particle cross-field transport. Instead of testing the heuristic model in a sophisticated numerical simulation including several physical mechanisms working together, the purpose of this work is to point out the basic consequences of a drift-dominated cross-field transport using a reduced fluid model. To evaluate the model equations and prepare them for subsequent numerical solution, a specific analytical model for 2D magnetic field configurations with X-points is employed. As a first step, parameter scans on high-resolution grids for isothermal plasmas are performed to assess the basic formulas of the heuristic model with respect to the functional dependence of the scrape-off width on the poloidal magnetic field and plasma temperature. Particular features of the 2D fluid calculations, especially the appearance of supersonic parallel flows and shock-wave-like bifurcational jumps, are discussed and can be understood in part within the framework of a reduced 1D model. The resulting semi-analytical findings may guide experimental verification and implementation in more elaborate fluid simulations.
Analytic barrage attack model. Final report, January 1986-January 1989
DOE Office of Scientific and Technical Information (OSTI.GOV)
St Ledger, J.W.; Naegeli, R.E.; Dowden, N.A.
An analytic model is developed for a nuclear barrage attack, assuming weapons with no aiming error and a cookie-cutter damage function. The model is then extended with approximations for the effects of aiming error and distance damage sigma. The final result is a fast-running model which calculates the probability of damage for a barrage attack. The probability of damage is accurate to within seven percent or better for weapon reliabilities of 50 to 100 percent, distance damage sigmas of 0.5 or less, and zero to very large circular error probable (CEP) values. FORTRAN 77 coding is included in the report for the analytic model and for a numerical model used to check the analytic results.
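The closed forms involved are simple enough to state compactly. For a circular-normal aiming error and a cookie-cutter lethal radius, the single-shot damage probability has a textbook expression, and an independent-shot barrage compounds it; the Python below is an illustrative reconstruction of that standard result, not the report's FORTRAN 77 model, which additionally treats distance damage sigma.

```python
import math

def p_damage_single(r_lethal, cep, reliability=1.0):
    """Cookie-cutter damage probability for one weapon with circular-normal
    aiming error: P = r * (1 - exp(-R^2 / (2 sigma^2))), sigma = CEP / 1.1774."""
    sigma = cep / 1.1774
    return reliability * (1.0 - math.exp(-r_lethal**2 / (2.0 * sigma**2)))

def p_damage_barrage(n, r_lethal, cep, reliability=1.0):
    """Independent-shot approximation for an n-weapon barrage on one target."""
    return 1.0 - (1.0 - p_damage_single(r_lethal, cep, reliability)) ** n

# Four weapons, 500 m lethal radius, 300 m CEP, 80% reliability.
print(p_damage_barrage(n=4, r_lethal=500.0, cep=300.0, reliability=0.8))
```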
Characterization of Compton-scatter imaging with an analytical simulation method
Jones, Kevin C; Redler, Gage; Templeton, Alistair; Bernard, Damian; Turian, Julius V; Chu, James C H
2018-01-01
By collimating the photons scattered when a megavoltage therapy beam interacts with the patient, a Compton-scatter image may be formed without the delivery of an extra dose. To characterize and assess the potential of the technique, an analytical model for simulating scatter images was developed and validated against Monte Carlo (MC). For three phantoms, the scatter images collected during irradiation with a 6 MV flattening-filter-free therapy beam were simulated. Images, profiles, and spectra were compared for different phantoms and different irradiation angles. The proposed analytical method simulates accurate scatter images up to 1000 times faster than MC. Minor differences between MC and analytical simulated images are attributed to limitations in the isotropic superposition/convolution algorithm used to analytically model multiple-order scattering. For a detector placed at 90° relative to the treatment beam, the simulated scattered photon energy spectrum peaks at 140–220 keV, and 40–50% of the photons are the result of multiple scattering. The high energy photons originate at the beam entrance. Increasing the angle between source and detector increases the average energy of the collected photons and decreases the relative contribution of multiple scattered photons. Multiple scattered photons cause blurring in the image. For an ideal 5 mm diameter pinhole collimator placed 18.5 cm from the isocenter, 10 cGy of deposited dose (2 Hz imaging rate for 1200 MU min⁻¹ treatment delivery) is expected to generate an average 1000 photons per mm² at the detector. For the considered lung tumor CT phantom, the contrast is high enough to clearly identify the lung tumor in the scatter image. Increasing the treatment beam size perpendicular to the detector plane decreases the contrast, although the scatter subject contrast is expected to be greater than the megavoltage transmission image contrast. With the analytical method, real-time tumor tracking may be possible through comparison of simulated and acquired patient images. PMID:29243663
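The single-scatter physics underlying such a model is the Klein-Nishina cross section, which also gives the scattered-photon energy as a function of detector angle. The sketch below implements that standard formula as background for the abstract; it is not the authors' simulation code.

```python
import numpy as np

def klein_nishina(theta, e_kev):
    """Klein-Nishina differential cross section (cm^2/sr per electron) for a
    photon of energy e_kev scattered through angle theta."""
    re = 2.8179403262e-13                 # classical electron radius, cm
    k = e_kev / 511.0                     # energy in electron rest-mass units
    ratio = 1.0 / (1.0 + k * (1.0 - np.cos(theta)))     # E'/E
    return 0.5 * re**2 * ratio**2 * (ratio + 1.0 / ratio - np.sin(theta)**2)

# Scattered energy and cross section at 90 degrees for a 2 MeV photon:
theta, e = np.pi / 2.0, 2000.0
print(e / (1.0 + (e / 511.0) * (1.0 - np.cos(theta))))  # E' ~ 407 keV
print(klein_nishina(theta, e))
```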